Documentation
Write Clear Docstrings
All functions should have comprehensive docstrings:
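A minimal sketch, assuming a reward function that compares a model response against an expected answer (the function name and arguments are illustrative, not part of the Osmosis API):

```python
def exact_match_reward(response: str, expected: str) -> float:
    """Score a model response by exact match against the expected answer.

    Args:
        response: Raw text produced by the model.
        expected: The ground-truth answer to compare against.

    Returns:
        1.0 if the normalized response matches the expected answer, 0.0 otherwise.
    """
    return 1.0 if response.strip().lower() == expected.strip().lower() else 0.0
```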
Include Type Hints
Type hints improve IDE support and validation:
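A hedged continuation of the same example with full annotations; the optional metadata shape is an assumption:

```python
from typing import Optional

def score_response(response: str, expected: str, weight: float = 1.0) -> float:
    """Return a weighted score in the range [0.0, weight]."""
    return weight if response.strip() == expected.strip() else 0.0

def parse_metadata(raw: Optional[dict[str, str]] = None) -> dict[str, str]:
    """Normalize an optional metadata mapping into a plain dict."""
    return dict(raw or {})
```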
Document Expected Formats
Clearly specify input/output formats:
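One way to spell out formats in the docstring itself; the JSON shape shown is an assumption, not a platform requirement:

```python
import json

def extract_answer(raw_output: str) -> str:
    """Extract the final answer from a model's JSON output.

    Expected input format:
        '{"reasoning": "<free text>", "answer": "<short string>"}'

    Output format:
        The "answer" field stripped of surrounding whitespace, or an empty
        string if the input is not valid JSON.
    """
    try:
        return str(json.loads(raw_output).get("answer", "")).strip()
    except json.JSONDecodeError:
        return ""
```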
Testing
Write Unit Tests
Create comprehensive tests for your functions:
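A hedged pytest sketch; it assumes the exact_match_reward function above lives in a rewards.exact_match module, so adjust the import to your layout:

```python
# tests/test_exact_match_reward.py
import pytest

from rewards.exact_match import exact_match_reward  # assumed module path

def test_correct_answer_scores_one():
    assert exact_match_reward("Paris", "Paris") == 1.0

def test_wrong_answer_scores_zero():
    assert exact_match_reward("London", "Paris") == 0.0

@pytest.mark.parametrize("response", ["  paris  ", "PARIS", "Paris\n"])
def test_normalization(response):
    assert exact_match_reward(response, "Paris") == 1.0
```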
Test MCP Tools Locally
Before pushing, test your MCP server:
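One low-friction check is to import the tools package directly and confirm it loads and exports what you expect; the script path is hypothetical:

```python
# scripts/smoke_test.py (hypothetical location)
import mcp.tools as tools

# Confirm the package imports cleanly and exports the expected tool symbols.
exported = [name for name in dir(tools) if not name.startswith("_")]
print("Exported tool symbols:", exported)
assert exported, "No tools exported from mcp/tools/__init__.py"
```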
Use Test Fixtures
Create reusable test data:
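A hedged pytest fixture sketch; the sample fields mirror the reward-function signature assumed earlier:

```python
# tests/conftest.py
import pytest

@pytest.fixture
def sample_cases() -> list[dict[str, str]]:
    """Reusable (response, expected) pairs shared across tests."""
    return [
        {"response": "Paris", "expected": "Paris"},
        {"response": "london", "expected": "Paris"},
        {"response": "  PARIS ", "expected": "Paris"},
    ]
```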
CI/CD Integration
GitHub Actions Workflow
Create .github/workflows/test.yml:
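A minimal workflow sketch; the Python version and test commands are assumptions to adapt to your project:

```yaml
name: test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install package
        run: pip install -e .
      - name: Install test dependencies
        run: pip install pytest
      - name: Run tests
        run: pytest
```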
Pre-commit Hooks
Create .pre-commit-config.yaml:
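A hedged starting point; the hooks shown are common choices rather than a platform requirement, and the pinned versions may be outdated:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      - id: ruff
```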
Security
Never Commit Secrets
Use environment variables for sensitive data:
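A minimal sketch; the variable name is illustrative of whichever provider key your rubrics use:

```python
import os

# Read the key from the environment; never hard-code it in the repository.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set")
```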
Use .gitignore
Ensure .gitignore includes:
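A common baseline for a Python project of this shape; extend it as needed:

```
# Secrets and local configuration
.env
*.key

# Python build and cache artifacts
__pycache__/
*.egg-info/
dist/
build/

# Virtual environments
.venv/
venv/
```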
Review Permissions Carefully
When connecting private repos:
- Grant minimal required permissions
- Review which repositories Osmosis can access
- Use deploy keys for specific repo access
- Regularly audit connected integrations
Validate Inputs
Always validate and sanitize inputs:
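A hedged sketch of defensive input handling in a reward function; the size limit and expected types are assumptions:

```python
def safe_reward(response: object, expected: object) -> float:
    """Validate inputs before scoring; return 0.0 rather than raising."""
    if not isinstance(response, str) or not isinstance(expected, str):
        return 0.0
    response = response.strip()
    if not response or len(response) > 10_000:  # reject empty or oversized input
        return 0.0
    return 1.0 if response.lower() == expected.strip().lower() else 0.0
```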
Code Organization
Keep Functions Focused
Each function should have a single, clear purpose:
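An illustrative contrast with hypothetical names: formatting and correctness are scored by separate functions instead of one function doing both:

```python
import json

def score_format(response: str) -> float:
    """Check only formatting: is the response valid JSON?"""
    try:
        json.loads(response)
        return 1.0
    except json.JSONDecodeError:
        return 0.0

def score_correctness(response: str, expected: str) -> float:
    """Check only correctness: does the answer match the expected value?"""
    return 1.0 if response.strip() == expected.strip() else 0.0
```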
Use Helper Functions
Break complex logic into smaller pieces:
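A hedged sketch of a combined reward split into small helpers (the weights and names are illustrative):

```python
def _normalize(text: str) -> str:
    """Lowercase and collapse whitespace for comparison."""
    return " ".join(text.lower().split())

def _length_penalty(text: str, max_words: int = 200) -> float:
    """Scale from 1.0 toward 0.0 as the response exceeds max_words."""
    words = len(text.split())
    return max(0.0, 1.0 - max(0, words - max_words) / max_words)

def combined_reward(response: str, expected: str) -> float:
    """Combine small, independently testable pieces into one score."""
    correct = 1.0 if _normalize(response) == _normalize(expected) else 0.0
    return 0.8 * correct + 0.2 * _length_penalty(response)
```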
Organize by Feature
Structure your code logically:
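One possible layout, extrapolated from the paths this guide already references (mcp/tools/, pyproject.toml); the remaining folder names are assumptions:

```
mcp/
  tools/
    __init__.py        # exports every @mcp.tool() function
    search.py
rewards/
  __init__.py
  exact_match.py
rubrics/
  __init__.py
tests/
  conftest.py
  test_exact_match_reward.py
pyproject.toml
```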
Performance
Cache Expensive Operations
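A minimal sketch using functools.lru_cache; what counts as expensive (file loads, embeddings, external lookups) depends on your tools:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def load_reference_data(path: str) -> tuple[str, ...]:
    """Read and cache a reference file so repeated calls skip disk I/O."""
    with open(path, encoding="utf-8") as f:
        return tuple(line.strip() for line in f)
```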
Choose Appropriate Models
For rubrics, weigh evaluation quality against cost and latency when selecting the judge model; simple criteria rarely need the most capable model.
Batch Operations When Possible
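A hedged sketch of chunked processing; process_batch stands in for whatever batched call your provider or tool actually supports:

```python
from typing import Callable, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def in_batches(
    items: list[T],
    batch_size: int,
    process_batch: Callable[[list[T]], list[R]],
) -> list[R]:
    """Process items in fixed-size batches instead of one call per item."""
    results: list[R] = []
    for start in range(0, len(items), batch_size):
        results.extend(process_batch(items[start:start + batch_size]))
    return results
```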
Monitoring and Debugging
Add Logging
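A minimal logging sketch; adjust the level and format to taste:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logger = logging.getLogger(__name__)

def scored(response: str, expected: str) -> float:
    score = 1.0 if response.strip() == expected.strip() else 0.0
    logger.info("scored response (len=%d) -> %.2f", len(response), score)
    return score
```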
Track Metrics
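A hedged sketch of lightweight in-process counters; in practice you would forward these to whatever monitoring the platform provides:

```python
import time
from collections import Counter

metrics: Counter = Counter()

def timed_reward(response: str, expected: str) -> float:
    start = time.perf_counter()
    score = 1.0 if response.strip() == expected.strip() else 0.0
    metrics["calls"] += 1
    metrics["total_ms"] += int((time.perf_counter() - start) * 1000)
    metrics["nonzero_scores"] += int(score > 0)
    return score
```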
Troubleshooting
Sync Issues
Problem: Repository not syncing to Osmosis
Solutions:
- Verify folder structure matches exactly (case-sensitive)
- Check webhook settings in GitHub repository settings
- Review Osmosis sync logs for specific errors
- Ensure pyproject.toml includes all dependencies
- Validate decorators are spelled correctly
Tool Discovery Issues
Problem: MCP tools not appearing in Osmosis
Solutions:
- Confirm the @mcp.tool() decorator is present
- Check tools are exported in mcp/tools/__init__.py
- Verify type hints exist for all parameters and return values
- Ensure no syntax errors in tool files
- Check Osmosis platform logs for import errors
Reward Function Issues
Problem: Reward functions returning unexpected scores
Solutions:
- Test locally with sample inputs
- Add print statements or logging
- Verify input format matches expectations
- Check error handling catches all edge cases
- Ensure the return type is float
Rubric Evaluation Issues
Problem: Rubric scores are inconsistent or evaluation errors occur
Solutions:
- Verify the API key is set correctly
- Check API key has sufficient credits/quota
- Test with simpler rubric first
- Add error handling around the evaluate_rubric call
- Use return_details=True to see the evaluation reasoning
- Verify the model name is correct for the provider
Import Errors
Problem: ModuleNotFoundError or import failures
Solutions:
- Ensure all directories have __init__.py files
- Verify imports use correct paths
- Check dependencies are installed: pip install -e .
- Use absolute imports from the package root
- Verify virtual environment is activated