Extending Arshai¶
Comprehensive guides for extending the framework with custom components and contributing to the Arshai project.
Extension Guides
Overview¶
Arshai is designed to be extended and customized for your specific needs. The framework provides clear extension points through:
Protocol-Based Interfaces: Implement interfaces without inheritance constraints
Base Classes: Extend foundation classes for common patterns
Factory Pattern: Plugin your custom implementations seamlessly
Direct Instantiation: Full control over component creation
Extension Philosophy:
The framework follows the principle of developer authority - giving you complete control over:
Response formats and data structures
Tool integration patterns
Memory management strategies
LLM provider selection and configuration
System orchestration approaches
What You Can Extend¶
- Agents
Create custom agents with specialized behaviors, tool integrations, and response patterns.
Extend `BaseAgent` for standard patterns
Implement `IAgent` protocol for full flexibility
Integrate with tools, memory, and workflows
See Building Custom Agents for complete guide
- LLM Providers
Add support for new LLM providers or customize existing ones.
Extend `BaseLLMClient` for framework integration
Implement `ILLM` protocol for provider-specific logic
Support function calling, streaming, and structured output
See Adding LLM Providers for complete guide
- Memory Backends
Implement custom memory storage solutions.
Implement `IMemoryManager` protocol
Support different memory types (short-term, long-term, working)
Integrate with various storage systems (Redis, PostgreSQL, etc.)
See ../framework/memory/index for patterns
- Vector Databases
Add support for new vector database providers.
Implement `IVectorDBClient` protocol
Support dense and sparse vector operations
Enable hybrid search capabilities
See Vector Database - Milvus Client for examples
- Embeddings
Integrate new embedding models.
Implement `IEmbedding` protocol
Support batch processing
Handle dense and sparse embeddings
See Embedding Implementations for patterns
- Workflows
Build custom orchestration systems.
Create custom workflow nodes
Implement routing logic
Manage state across agents
See Building Your Own Orchestration for patterns
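The memory-backend card above can be sketched with a minimal dict-backed store. This is an illustration only: the real `IMemoryManager` protocol lives in `arshai.core.interfaces`, and its exact method names may differ from the `store`/`retrieve` names assumed here.

```python
from dataclasses import dataclass, field

@dataclass
class InMemoryManager:
    """Dict-backed store showing the shape a custom memory backend might take."""
    _store: dict = field(default_factory=dict)

    def store(self, conversation_id: str, data: dict) -> None:
        # A Redis- or PostgreSQL-backed version would serialize and write here
        self._store.setdefault(conversation_id, []).append(data)

    def retrieve(self, conversation_id: str) -> list:
        # Unknown conversations return an empty history rather than raising
        return self._store.get(conversation_id, [])

memory = InMemoryManager()
memory.store("conv-1", {"role": "user", "content": "Hello"})
history = memory.retrieve("conv-1")
```

Swapping in a persistent backend only changes the two method bodies; callers keep the same interface.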
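The "support batch processing" bullet in the Embeddings card usually amounts to chunking inputs before calling the provider. A hedged sketch (the `embed_in_batches` helper and the stand-in embedder are illustrative, not part of the arshai API):

```python
from typing import Callable

def embed_in_batches(texts: list[str],
                     embed_batch: Callable[[list[str]], list[list[float]]],
                     batch_size: int = 32) -> list[list[float]]:
    """Split inputs into batches to respect provider request-size limits."""
    vectors: list[list[float]] = []
    for start in range(0, len(texts), batch_size):
        vectors.extend(embed_batch(texts[start:start + batch_size]))
    return vectors

def fake_embed(batch: list[str]) -> list[list[float]]:
    """Stand-in embedder: one-dimensional 'vectors' for demonstration."""
    return [[float(len(t))] for t in batch]

result = embed_in_batches(["a", "bb", "ccc"], fake_embed, batch_size=2)
```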
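The "implement routing logic" bullet in the Workflows card can be as simple as a function from state to the next node name. The node names (`tool_node`, `agent_node`, `end`) and state keys here are illustrative assumptions, not the actual arshai workflow API:

```python
def route(state: dict) -> str:
    """Pick the next workflow node based on the current state."""
    if state.get("needs_tools"):
        return "tool_node"   # hand off to tool execution
    if state.get("finished"):
        return "end"         # terminate the workflow
    return "agent_node"      # default: keep the agent in the loop

next_node = route({"needs_tools": True})
```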
Extension Patterns¶
Protocol Implementation¶
All Arshai interfaces are protocols, enabling duck-typed implementations:
```python
from arshai.core.interfaces import IAgent, IAgentInput

class MyCustomAgent:
    """Custom agent without inheriting from BaseAgent."""

    async def process(self, input: IAgentInput) -> dict:
        """Implement the required process method."""
        return {"response": "Custom logic here"}

# MyCustomAgent automatically satisfies the IAgent protocol
```
Base Class Extension¶
Extend base classes for common infrastructure:
```python
from arshai.agents.base import BaseAgent
from arshai.core.interfaces import IAgentInput

class SpecializedAgent(BaseAgent):
    """Extend BaseAgent for common patterns."""

    def __init__(self, llm_client, system_prompt, **kwargs):
        super().__init__(llm_client, system_prompt)
        self.custom_config = kwargs.get('custom_config')

    async def process(self, input: IAgentInput) -> dict:
        # Use inherited self.llm_client and self.system_prompt
        # Implement custom logic here
        pass
```
Factory Integration¶
Custom implementations plug in alongside the built-in components; the simplest integration is direct instantiation:
```python
from arshai.core.interfaces import ILLMConfig

# Create a custom component
class MyLLMClient(BaseLLMClient):
    # Implementation
    pass

# Direct instantiation works anywhere the framework expects an LLM client
llm = MyLLMClient(ILLMConfig(model="custom-model"))
```
Quick Start Guides¶
Creating a Custom Agent¶
Simple Example:
```python
from arshai.agents.base import BaseAgent
from arshai.core.interfaces import IAgentInput, ILLMInput

class SentimentAgent(BaseAgent):
    """Agent that analyzes sentiment."""

    async def process(self, input: IAgentInput) -> dict:
        llm_input = ILLMInput(
            system_prompt=self.system_prompt,
            user_message=f"Analyze sentiment: {input.message}"
        )
        result = await self.llm_client.chat(llm_input)
        return {
            "sentiment": self._extract_sentiment(result['llm_response']),
            "original_message": input.message,
        }

    def _extract_sentiment(self, response: str) -> str:
        # Custom sentiment extraction logic
        return "positive"  # Simplified
```
See Building Custom Agents for comprehensive agent extension guide.
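For the simplified `_extract_sentiment` above, one possible fleshed-out version is a keyword scan. This is a hypothetical sketch, not the framework's implementation; a production agent would more likely request structured output from the LLM instead:

```python
def extract_sentiment(response: str) -> str:
    """Map an LLM's free-text answer to a coarse sentiment label."""
    text = response.lower()
    # Check positive cues first; order determines tie-breaking
    if any(word in text for word in ("positive", "good", "great")):
        return "positive"
    if any(word in text for word in ("negative", "bad", "poor")):
        return "negative"
    return "neutral"

label = extract_sentiment("The tone here is clearly positive.")
```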
Adding an LLM Provider¶
Minimal Example:
```python
from arshai.llms.base_llm_client import BaseLLMClient
from arshai.core.interfaces import ILLMConfig, ILLMInput

class MyProviderClient(BaseLLMClient):
    """Custom LLM provider implementation."""

    def _initialize_client(self):
        """Initialize the provider client."""
        import my_provider
        return my_provider.Client(api_key="...")

    def _convert_callables_to_provider_format(self, functions):
        """Convert functions to the provider's format."""
        return [{"name": name, "function": func} for name, func in functions.items()]

    async def _chat_simple(self, input: ILLMInput):
        """Handle simple chat."""
        response = await self._client.chat(
            system=input.system_prompt,
            user=input.user_message
        )
        return {"llm_response": response.text, "usage": {...}}

    async def _chat_with_functions(self, input: ILLMInput):
        """Handle function calling."""
        # Multi-turn function calling implementation
        pass

    async def _stream_simple(self, input: ILLMInput):
        """Handle streaming."""
        async for chunk in self._client.stream(...):
            yield {"llm_response": chunk.text}

    async def _stream_with_functions(self, input: ILLMInput):
        """Handle streaming with functions."""
        # Streaming function calling implementation
        pass
```
See Adding LLM Providers for comprehensive LLM provider integration guide.
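Many providers expect tool definitions closer to a JSON-schema shape than the name/function pairs shown above. A hedged sketch of what a richer `_convert_callables_to_provider_format` might do, using introspection; the target format here is an assumption and depends entirely on your provider:

```python
import inspect

def to_tool_schema(functions: dict) -> list[dict]:
    """Derive a provider-style tool description from plain Python callables."""
    tools = []
    for name, func in functions.items():
        # Parameter names come from the callable's signature
        params = list(inspect.signature(func).parameters)
        tools.append({
            "name": name,
            "description": (func.__doc__ or "").strip(),
            "parameters": params,
        })
    return tools

def get_weather(city: str, unit: str = "celsius"):
    """Look up the current weather for a city."""

schema = to_tool_schema({"get_weather": get_weather})
```

A real conversion would also map type hints to JSON-schema types and mark which parameters are required.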
Best Practices¶
Follow Framework Patterns
Use Protocols: Implement interfaces through duck typing
Extend Base Classes: Leverage common infrastructure when appropriate
Direct Instantiation: Create components explicitly
Type Safety: Use type hints throughout your code
Error Handling: Implement robust error handling
Code Quality
Testing: Write comprehensive tests for your extensions
Documentation: Document your custom components
Logging: Use appropriate logging for debugging
Performance: Optimize for your use case
Maintainability: Keep code clean and well-organized
Integration
Factory Support: Make components discoverable
Configuration: Support configuration via DTOs
Compatibility: Ensure compatibility with existing components
Versioning: Handle version compatibility
Testing Your Extensions¶
Unit Testing Agents:
```python
import pytest
from unittest.mock import AsyncMock
from arshai.core.interfaces import IAgentInput

@pytest.mark.asyncio
async def test_custom_agent():
    # Mock the LLM client
    mock_llm = AsyncMock()
    mock_llm.chat.return_value = {
        "llm_response": "Test response",
        "usage": {"total_tokens": 100},
    }

    # Exercise the agent
    agent = MyCustomAgent(mock_llm, "Test prompt")
    result = await agent.process(IAgentInput(message="Test"))

    assert result is not None
    assert "response" in result
    mock_llm.chat.assert_called_once()
```
Integration Testing:
```python
@pytest.mark.asyncio
async def test_agent_with_real_llm():
    from arshai.llms.openai_client import OpenAIClient
    from arshai.core.interfaces import ILLMConfig

    llm = OpenAIClient(ILLMConfig(model="gpt-3.5-turbo"))
    agent = MyCustomAgent(llm, "You are a helpful assistant")
    result = await agent.process(IAgentInput(message="Hello"))

    assert result is not None
    assert isinstance(result, dict)
```
Testing LLM Clients:
```python
@pytest.mark.asyncio
async def test_llm_client():
    from arshai.core.interfaces import ILLMInput

    client = MyProviderClient(ILLMConfig(model="test-model"))
    result = await client.chat(ILLMInput(
        system_prompt="You are helpful",
        user_message="Hello"
    ))

    assert "llm_response" in result
    assert "usage" in result
```
Contributing to Arshai¶
We welcome contributions to the Arshai framework! Here’s how you can help:
Code Contributions
Fork the Repository: https://github.com/felesh-ai/arshai
Create a Branch: `git checkout -b feature/your-feature`
Write Code: Follow the code standards below
Write Tests: Ensure comprehensive test coverage
Submit PR: Create a pull request with clear description
Code Standards
Style Guide:
Black: Use Black for code formatting (`poetry run black .`)
isort: Sort imports with isort (`poetry run isort .`)
Type Hints: Use type hints throughout
Docstrings: Document all public methods and classes
Testing Requirements:
Unit Tests: Test individual components
Integration Tests: Test component interactions
Coverage: Aim for >80% code coverage
Commands: Run `poetry run pytest --cov=arshai`
Quality Checks:
```bash
# Format code
poetry run black .
poetry run isort .

# Type checking
poetry run mypy arshai/

# Security analysis
poetry run bandit -r arshai/

# Run tests
poetry run pytest --cov=arshai
```
Documentation Contributions
API Documentation: Add/update docstrings
Guides: Improve or add documentation guides
Examples: Contribute working examples
Tutorials: Create tutorial content
Issue Reporting
Report bugs or request features:
GitHub Issues: https://github.com/felesh-ai/arshai/issues
Include Details: Provide code samples, error messages, environment info
Reproducible: Provide steps to reproduce issues
Community Guidelines
Be Respectful: Treat all community members with respect
Be Constructive: Provide constructive feedback
Be Patient: Maintainers are volunteers
Share Knowledge: Help others in the community
Development Setup¶
Prerequisites:
Python 3.9 or higher
Poetry for dependency management
Git
Setup Steps:
Clone Repository:
```bash
git clone https://github.com/felesh-ai/arshai.git
cd arshai
```
Install Dependencies:
```bash
# Install all dependencies including dev tools
poetry install -E all
```
Set Up Pre-commit Hooks:
poetry run pre-commit install
Run Tests:
poetry run pytest
Build Documentation:
cd docs_sphinx && make html
Project Structure:
```text
arshai/
├── arshai/              # Main package
│   ├── agents/          # Agent implementations
│   ├── core/            # Core interfaces and types
│   ├── llms/            # LLM client implementations
│   ├── memory/          # Memory implementations
│   ├── workflows/       # Workflow system
│   └── ...
├── tests/               # Test suite
│   ├── unit/            # Unit tests
│   └── integration/     # Integration tests
├── docs_sphinx/         # Sphinx documentation
├── examples/            # Example applications
└── pyproject.toml       # Poetry configuration
```
Resources¶
Documentation:
Framework Core - Framework guides
API Reference - API reference
Tutorials - Complete tutorials
Reference Implementations - Implementation guides
Example Code:
Agent Examples - Agent examples
`examples/` directory in repository
Tutorial code in documentation
GitHub:
Repository: https://github.com/felesh-ai/arshai
Pull Requests: https://github.com/felesh-ai/arshai/pulls
Architecture Documents:
`CLAUDE.md` - Development guidelines for Claude Code
LLM client architecture standards
Interface-driven design principles
Next Steps¶
Ready to extend Arshai? Choose your path:
| Guide | Description |
|---|---|
| Building Custom Agents | Build custom agents with specialized behaviors |
| Adding LLM Providers | Add support for new LLM providers |
| ../framework/memory/index | Implement custom memory backends |
| Building Your Own Orchestration | Create custom orchestration systems |
Happy extending! 🚀