# Agent Development Guide
This document provides best practices, lifecycle steps, and resources for developing agents for the Meta Agent Platform.
## Overview
Developing robust, reusable, and secure agents is key to building powerful workflows. This guide covers the agent development lifecycle, testing, documentation, and publishing.
## Agent Development Lifecycle

Note: This is a placeholder for an agent development lifecycle diagram. The actual diagram should be created and added to the project.
### 1. Design
Define the agent's purpose, inputs, outputs, and configuration:
```yaml
# agent-design.yaml
name: sentiment-analyzer
purpose: "Analyze text for sentiment and emotional content"
description: "This agent processes text input and returns sentiment scores and emotional analysis"
inputs:
  - name: text
    type: string
    description: "Text to analyze for sentiment"
    required: true
    max_length: 10000
  - name: language
    type: string
    description: "Language code (ISO 639-1)"
    required: false
    default: "en"
outputs:
  - name: sentiment
    type: object
    properties:
      score:
        type: number
        description: "Sentiment score from -1.0 (negative) to 1.0 (positive)"
      confidence:
        type: number
        description: "Confidence level from 0.0 to 1.0"
      emotions:
        type: object
        description: "Detected emotions with scores"
runtime: docker
resources:
  memory: "512Mi"
  cpu: "0.5"
  gpu: false
```
### 2. Implementation
Develop agent logic using the appropriate runtime:
```python
# sentiment_agent.py
import json

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

from meta_agent_platform import Agent


class SentimentAnalyzerAgent(Agent):
    def __init__(self, config):
        super().__init__(config)
        # Download NLTK resources if needed
        nltk.download('vader_lexicon', quiet=True)
        self.analyzer = SentimentIntensityAnalyzer()

    def process(self, inputs):
        # Extract inputs
        text = inputs.get('text')
        language = inputs.get('language', 'en')

        # Validate inputs
        if not text:
            return self.error_response("Text input is required")
        if len(text) > 10000:  # max_length declared in agent-design.yaml
            return self.error_response("Text exceeds the maximum length of 10000 characters")
        if language != 'en':
            return self.error_response("Only English is currently supported")

        # Analyze sentiment
        sentiment_scores = self.analyzer.polarity_scores(text)

        # Map emotions based on scores
        emotions = self.extract_emotions(sentiment_scores, text)

        # Return results
        return {
            "sentiment": {
                "score": sentiment_scores['compound'],
                "confidence": 0.7,  # Simplified confidence calculation
                "emotions": emotions
            }
        }

    def extract_emotions(self, scores, text):
        # Simplified emotion extraction based on the compound score
        emotions = {}
        if scores['compound'] >= 0.5:
            emotions['joy'] = 0.8
            emotions['trust'] = 0.6
        elif scores['compound'] <= -0.5:
            emotions['anger'] = 0.7
            emotions['sadness'] = 0.8
        elif scores['compound'] > 0:
            emotions['contentment'] = 0.5
        else:
            emotions['neutral'] = 0.9
        return emotions

    def error_response(self, message):
        return {"error": message}


# For local testing
if __name__ == "__main__":
    agent = SentimentAnalyzerAgent({"name": "sentiment-analyzer"})
    test_input = {"text": "I love this product! It's amazing and works perfectly."}
    result = agent.process(test_input)
    print(json.dumps(result, indent=2))
```
### 3. Testing
Write unit and integration tests:
```python
# test_sentiment_agent.py
import unittest

from sentiment_agent import SentimentAnalyzerAgent


class TestSentimentAnalyzerAgent(unittest.TestCase):
    def setUp(self):
        self.agent = SentimentAnalyzerAgent({"name": "sentiment-analyzer"})

    def test_positive_sentiment(self):
        result = self.agent.process({"text": "I love this product! It's amazing."})
        self.assertIn("sentiment", result)
        self.assertGreater(result["sentiment"]["score"], 0.5)
        self.assertIn("joy", result["sentiment"]["emotions"])

    def test_negative_sentiment(self):
        result = self.agent.process({"text": "This is terrible, I hate it."})
        self.assertIn("sentiment", result)
        self.assertLess(result["sentiment"]["score"], -0.5)
        self.assertIn("anger", result["sentiment"]["emotions"])

    def test_missing_text(self):
        result = self.agent.process({"language": "en"})
        self.assertIn("error", result)

    def test_unsupported_language(self):
        result = self.agent.process({"text": "Bonjour", "language": "fr"})
        self.assertIn("error", result)


if __name__ == "__main__":
    unittest.main()
```
### 4. Documentation
Provide clear usage instructions and examples:
# Sentiment Analyzer Agent
This agent analyzes text for sentiment and emotional content.
## Inputs
- `text` (string, required): Text to analyze for sentiment (max 10,000 characters)
- `language` (string, optional): Language code (ISO 639-1), default: "en"
## Outputs
- `sentiment` (object): Sentiment analysis results
- `score` (number): Sentiment score from -1.0 (negative) to 1.0 (positive)
- `confidence` (number): Confidence level from 0.0 to 1.0
- `emotions` (object): Detected emotions with scores
## Example
### Input
```json
{
"text": "I love this product! It's amazing and works perfectly.",
"language": "en"
}
```
### Output
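A representative response for the input above is shown below (the exact `score` depends on the VADER lexicon, so the values are illustrative):
```json
{
  "sentiment": {
    "score": 0.93,
    "confidence": 0.7,
    "emotions": {
      "joy": 0.8,
      "trust": 0.6
    }
  }
}
```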
### Error Handling
The agent returns an error object when:
- Text input is missing
- Text exceeds the maximum length of 10,000 characters
- An unsupported language is specified
### Dependencies
- NLTK 3.6+
- Python 3.8+
### 5. Packaging
Prepare agent for deployment:
```dockerfile
# Dockerfile for sentiment-analyzer agent
FROM python:3.9-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy agent code
COPY sentiment_agent.py .

# Download NLTK data during build
RUN python -c "import nltk; nltk.download('vader_lexicon')"

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Run as non-root user
RUN useradd -m agent
USER agent

# Command to run the agent
CMD ["python", "sentiment_agent.py"]
```
### 6. Registration
Register agent in the platform registry with complete metadata:
```json
{
  "name": "sentiment-analyzer",
  "version": "1.0.0",
  "description": "Analyzes text for sentiment and emotional content",
  "type": "docker",
  "image": "registry.example.com/agents/sentiment-analyzer:1.0.0",
  "author": "AI Agent Platform Team",
  "license": "MIT",
  "tags": ["nlp", "sentiment", "text-analysis", "emotions"],
  "categories": ["Natural Language Processing", "Text Analysis"],
  "input_schema": {
    "type": "object",
    "required": ["text"],
    "properties": {
      "text": {
        "type": "string",
        "description": "Text to analyze for sentiment",
        "maxLength": 10000
      },
      "language": {
        "type": "string",
        "description": "Language code (ISO 639-1)",
        "default": "en",
        "enum": ["en"]
      }
    }
  },
  "output_schema": {
    "type": "object",
    "properties": {
      "sentiment": {
        "type": "object",
        "properties": {
          "score": {
            "type": "number",
            "description": "Sentiment score from -1.0 (negative) to 1.0 (positive)"
          },
          "confidence": {
            "type": "number",
            "description": "Confidence level from 0.0 to 1.0"
          },
          "emotions": {
            "type": "object",
            "description": "Detected emotions with scores"
          }
        }
      }
    }
  },
  "resources": {
    "memory": "512Mi",
    "cpu": "0.5"
  },
  "documentation_url": "https://docs.example.com/agents/sentiment-analyzer",
  "repository_url": "https://github.com/example/sentiment-analyzer-agent",
  "visibility": "public"
}
```
### 7. Publishing
Optionally publish to the marketplace for broader use. See Marketplace Guide for details.
### 8. Maintenance
Update, version, and support your agent as needed:
# Changelog
## [1.0.0] - 2025-04-18
- Initial release with English language support
- Basic sentiment analysis and emotion detection
## [1.1.0] - 2025-05-15
- Added support for Spanish language
- Improved emotion detection accuracy
- Reduced memory usage by 20%
## [1.2.0] - 2025-06-10
- Added support for French and German languages
- Added confidence scoring for individual emotions
- Fixed bug with very short text inputs
## Best Practices
- Single Responsibility: Each agent should do one thing well.
- Clear Interface: Define precise input/output schemas (JSON Schema recommended); a schema-validation sketch follows this list.
- Error Handling: Handle edge cases and provide meaningful error messages.
- Security: Validate all inputs, avoid code injection, and follow least privilege.
- Resource Efficiency: Optimize for memory, CPU, and network usage, especially for edge agents.
- Observability: Include logging and metrics for monitoring and debugging.
- Versioning: Use semantic versioning and maintain a changelog.
- Compliance: Respect data privacy and licensing requirements.
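The Clear Interface and Security practices above can be combined by validating inputs against the agent's registered `input_schema` before processing. The sketch below uses the `jsonschema` package for this; the platform may provide its own validation hooks, so treat it as one possible approach rather than a required mechanism.
```python
# validate_inputs.py -- minimal sketch of schema-based input validation
# using the `jsonschema` package (pip install jsonschema). The schema below
# mirrors the input_schema registered in step 6.
from jsonschema import Draft7Validator

INPUT_SCHEMA = {
    "type": "object",
    "required": ["text"],
    "properties": {
        "text": {"type": "string", "maxLength": 10000},
        "language": {"type": "string", "enum": ["en"], "default": "en"},
    },
}

_validator = Draft7Validator(INPUT_SCHEMA)


def validate_inputs(inputs):
    """Return a list of human-readable validation errors (empty if the inputs are valid)."""
    return [error.message for error in _validator.iter_errors(inputs)]


if __name__ == "__main__":
    # A missing "text" field and an unsupported language both produce clear messages
    for message in validate_inputs({"language": "fr"}):
        print(message)
```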
## Testing Strategies
### Unit Testing
Test individual components of your agent:
```python
# Test specific functions within your agent
def test_extract_emotions():
    agent = SentimentAnalyzerAgent({})
    emotions = agent.extract_emotions({'compound': 0.8}, "I'm happy")
    assert 'joy' in emotions
    assert emotions['joy'] > 0.5
```
### Integration Testing
Validate agent behavior in a workflow context:
```python
# Test agent within a workflow
def test_sentiment_in_workflow():
    workflow = TestWorkflow()
    workflow.add_agent("sentiment", SentimentAnalyzerAgent({}))
    workflow.add_agent("categorizer", CategorizerAgent({}))

    # Connect agents
    workflow.connect("sentiment.output.sentiment", "categorizer.input.sentiment_data")

    # Run workflow
    result = workflow.run({"text": "I love this product!"})

    # Verify results
    assert "category" in result
    assert result["category"] == "positive_feedback"
```
### Performance Testing
Benchmark resource usage and latency:
```python
import time
import psutil
import statistics


def benchmark_agent(agent, test_inputs, iterations=100):
    latencies = []
    memory_usages = []
    cpu_usages = []
    process = psutil.Process()

    for _ in range(iterations):
        # Measure memory before
        mem_before = process.memory_info().rss / 1024 / 1024  # MB

        # Measure execution time
        start_time = time.time()
        agent.process(test_inputs)
        latency = time.time() - start_time
        latencies.append(latency)

        # Measure memory after
        mem_after = process.memory_info().rss / 1024 / 1024  # MB
        memory_usages.append(mem_after - mem_before)

        # Measure CPU usage
        cpu_usages.append(process.cpu_percent())

    return {
        "latency": {
            "mean": statistics.mean(latencies),
            "p95": sorted(latencies)[int(iterations * 0.95)],
            "max": max(latencies)
        },
        "memory": {
            "mean": statistics.mean(memory_usages),
            "max": max(memory_usages)
        },
        "cpu": {
            "mean": statistics.mean(cpu_usages),
            "max": max(cpu_usages)
        }
    }
```
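As a usage illustration only (the input text, iteration count, and reporting format below are arbitrary choices, and `benchmark_agent` is assumed to be in scope from the snippet above):
```python
# Example invocation of benchmark_agent from the snippet above
from sentiment_agent import SentimentAnalyzerAgent

agent = SentimentAnalyzerAgent({"name": "sentiment-analyzer"})
report = benchmark_agent(agent, {"text": "I love this product!"}, iterations=50)

print(f"mean latency: {report['latency']['mean'] * 1000:.1f} ms")
print(f"p95 latency:  {report['latency']['p95'] * 1000:.1f} ms")
print(f"mean memory delta: {report['memory']['mean']:.2f} MB")
```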
### Security Testing
Verify input validation and resilience against malicious input:
```python
def test_security():
    agent = SentimentAnalyzerAgent({})

    # Test SQL injection attempt
    result = agent.process({"text": "DROP TABLE users; --"})
    assert "error" not in result  # Should handle this as normal text

    # Test XSS attempt
    result = agent.process({"text": "<script>alert('XSS')</script>"})
    assert "error" not in result  # Should handle this as normal text

    # Test command injection attempt
    result = agent.process({"text": "`rm -rf /`"})
    assert "error" not in result  # Should handle this as normal text

    # Test oversized input
    huge_text = "a" * 1000000  # 1MB of text
    result = agent.process({"text": huge_text})
    assert "error" in result  # Should reject oversized input
```
## Documentation Templates
### README Template
# Agent Name
Brief description of what the agent does.
## Features
- Key feature 1
- Key feature 2
- Key feature 3
## Inputs
Describe all input parameters, their types, and purpose.
## Outputs
Describe all output fields, their types, and meaning.
## Examples
Provide example inputs and outputs.
## Configuration
Describe configuration options.
## Dependencies
List all dependencies and requirements.
## License
Specify the license.
## Publishing & Registration
- Register: Use the platform UI or API to register your agent (a hypothetical API call is sketched after this list).
- Visibility: Choose appropriate visibility (private, workspace, tenant, public).
- Marketplace: Submit for review to publish in the marketplace and enable monetization.
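As a sketch only: assuming the platform exposes an HTTP registration endpoint that accepts the metadata document from step 6 (the URL, auth header, and token variable below are illustrative assumptions, not a documented interface), registration from a script might look like this:
```python
# register_agent.py -- hypothetical registration call; the endpoint and
# authentication scheme are assumptions, not the platform's documented API.
import json
import os

import requests

REGISTRY_URL = os.environ.get("REGISTRY_URL", "https://platform.example.com/api/v1/agents")
API_TOKEN = os.environ["PLATFORM_API_TOKEN"]  # hypothetical credential variable

# Load the agent metadata document from step 6
with open("agent.json") as f:
    metadata = json.load(f)

response = requests.post(
    REGISTRY_URL,
    json=metadata,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Registered", metadata["name"], metadata["version"])
```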
## Troubleshooting
| Issue | Possible Cause | Solution |
|---|---|---|
| Agent fails to start | Missing dependencies | Check requirements.txt and ensure all dependencies are installed |
| High memory usage | Inefficient data processing | Optimize data handling, use streaming where possible |
| Slow performance | Large model loading | Implement lazy loading, optimize model size |
| Input validation errors | Unexpected input format | Check input schema and add better validation |
| Integration failures | Incompatible I/O formats | Ensure output format matches expected input of next agent |
## References
- Agent Templates & Examples
- Marketplace Guide
- Component Design: Agent Registry & Marketplace
- Data Model: Agents
- Execution & Runtimes
- Security & Compliance
Last updated: 2025-04-18