# Covert Comrade
An AI agent that reads comment threads from Lemmy instances, formulates replies from a Marxist‑Leninist left‑wing perspective, and builds persistent user profiles based on observed political positions.
## Features
- **Lemmy thread fetching**: Retrieve entire comment trees from Lemmy posts
- **User profile building**: Create and update markdown profiles with inferred political positions
- **AI‑generated replies**: Generate context‑aware replies using LLMs
- **Research capability**: Conduct web searches and store findings
- **Modular design**: Each component is isolated and replaceable
- **Crush skill integration**: Use as a standalone Python package or as a Charm Crush skill
## Installation

### Prerequisites
- Python 3.11+
- Charm Crush (optional, for skill usage)
- API keys for at least one LLM provider (OpenAI, Anthropic, Groq, or DeepSeek)
### Setup
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/covert-comrade.git
   cd covert-comrade
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -e .
   ```

4. Install development dependencies (optional):

   ```bash
   pip install -e .[dev]
   ```

5. Configure environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your API keys and settings
   ```
## Quick Start
After installation, you can test the system with the mock LLM provider using the demo workflow:
```bash
python demo_workflow.py
```
This will:
- Load sample thread data
- Update user profiles
- Generate a reply using the mock LLM provider
- Output the generated reply
For a more realistic test, you can use the standalone CLI tools with a real Lemmy URL (dry-run mode):
```bash
python skills/covert-comrade/scripts/lemmy_reply.py https://lemmy.ml/post/12345 --dry-run --llm-provider mock
```
Note: The mock provider returns a placeholder reply without making actual API calls. To use real LLMs, set the corresponding API key in `.env` and specify the provider (e.g., `--llm-provider openai`).
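Conceptually, the mock provider is a stub that satisfies the same interface as the real LLM backends. A minimal sketch of the idea (the class and method names here are illustrative, not the project's actual API):

```python
class MockLLMProvider:
    """Stand-in for a real LLM backend: returns a canned reply, no API calls.

    Illustrative only -- the project's actual provider interface may differ.
    """

    def generate(self, system_prompt: str, thread_text: str) -> str:
        # A real provider would call the OpenAI/Anthropic/Groq/DeepSeek API here.
        return "[mock reply] This is a placeholder response for testing."


provider = MockLLMProvider()
reply = provider.generate("You are a helpful commenter.", "Thread text goes here.")
print(reply)
```

Because the stub is deterministic and offline, the rest of the pipeline (fetching, profiling, output handling) can be exercised end to end without credentials.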
## Configuration

The project uses environment variables for configuration. Create a `.env` file from `.env.example`:
```env
# Lemmy API credentials (optional for public instances)
LEMMY_INSTANCE_URL=https://lemmy.ml
LEMMY_USERNAME=
LEMMY_PASSWORD=

# LLM Provider API keys (choose at least one)
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
DEEPSEEK_API_KEY=

# Web search (optional)
DDG_SEARCH_ENABLED=true

# Paths
PROFILES_DIR=profiles
RESEARCH_DIR=research
PROMPTS_DIR=prompts
```
### Key Configuration Areas
- **Lemmy API**: Set `LEMMY_INSTANCE_URL` and credentials if you need authenticated access (e.g., for posting replies)
- **LLM Providers**: At least one API key is required for generating replies (use the `mock` provider for testing)
- **Web Search**: Set `DDG_SEARCH_ENABLED=false` to disable web research
- **Directories**: Customize storage paths for profiles, research, and prompts
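A rough sketch of how such variables might be read at runtime (the project may use python-dotenv or its own settings module; the defaults here simply mirror `.env.example` and are assumptions):

```python
import os

# Read configuration from the environment, falling back to the defaults
# shown in .env.example. Illustrative only -- not the project's settings code.
LEMMY_INSTANCE_URL = os.environ.get("LEMMY_INSTANCE_URL", "https://lemmy.ml")
DDG_SEARCH_ENABLED = os.environ.get("DDG_SEARCH_ENABLED", "true").lower() == "true"
PROFILES_DIR = os.environ.get("PROFILES_DIR", "profiles")

print(LEMMY_INSTANCE_URL, DDG_SEARCH_ENABLED, PROFILES_DIR)
```

Note that boolean flags in `.env` files arrive as strings, so they need an explicit comparison like the `== "true"` above.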
## Usage

### As a Python Package
The core functionality is available as Python modules:
```python
from src.lemmy_client import LemmyClient
from src.profile_store import ProfileStore
from src.research_engine import ResearchEngine
from src.agent_orchestrator import AgentOrchestrator

# Fetch a thread
client = LemmyClient()
thread = client.fetch_thread("https://lemmy.ml/post/12345")

# Update user profiles
store = ProfileStore()
store.update_from_thread(thread)

# Generate a reply
orchestrator = AgentOrchestrator()
reply = orchestrator.generate_reply(thread)
print(reply)
```
### Using CLI Tools

The project provides standalone CLI scripts in `skills/covert-comrade/scripts/`:
1. Fetch a Lemmy thread:

   ```bash
   python skills/covert-comrade/scripts/fetch_thread.py https://lemmy.ml/post/12345
   ```

2. Update user profiles from a thread:

   ```bash
   python skills/covert-comrade/scripts/update_profile.py --thread-file thread.json
   ```

3. Research a topic:

   ```bash
   python skills/covert-comrade/scripts/research_topic.py "Marxist theory of the state"
   ```

4. Generate a reply (with optional research):

   ```bash
   python skills/covert-comrade/scripts/generate_reply.py --thread-file thread.json --research
   ```
### Complete Workflow Example

The main `lemmy_reply.py` script combines all steps:
```bash
# Dry run: generate reply without posting
python skills/covert-comrade/scripts/lemmy_reply.py https://lemmy.ml/post/12345 --research --llm-provider mock

# With real LLM and optional posting
python skills/covert-comrade/scripts/lemmy_reply.py https://lemmy.ml/post/12345 \
  --llm-provider openai \
  --research \
  --post \
  --username your_lemmy_username \
  --password your_lemmy_password
```
Available options for `lemmy_reply.py`:

- `--research`: Enable web research on the thread topic
- `--llm-provider`: Choose provider (`mock`, `openai`, `anthropic`, `groq`, `deepseek`)
- `--post`: Post the reply to Lemmy (requires authentication)
- `--dry-run`: Generate a reply but don't post it
- `--target-comment`: Reply to a specific comment ID
- `--output`: Save the reply to a file
- `--prompt-file`: Use a custom system prompt file
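For orientation, a flag set like the one above might be wired up with `argparse` roughly as follows. This is an illustrative sketch, not the actual source of `lemmy_reply.py`; defaults and help text are assumptions:

```python
import argparse

# Illustrative parser matching the documented flags; the real script may differ.
parser = argparse.ArgumentParser(description="Fetch a Lemmy thread and generate a reply.")
parser.add_argument("url", help="URL of the Lemmy post")
parser.add_argument("--research", action="store_true", help="Enable web research")
parser.add_argument("--llm-provider", default="mock",
                    choices=["mock", "openai", "anthropic", "groq", "deepseek"])
parser.add_argument("--post", action="store_true", help="Post the reply to Lemmy")
parser.add_argument("--dry-run", action="store_true", help="Generate but don't post")
parser.add_argument("--target-comment", type=int, help="Comment ID to reply to")
parser.add_argument("--output", help="File to save the reply to")
parser.add_argument("--prompt-file", help="Custom system prompt file")

# Simulate the Quick Start dry-run invocation:
args = parser.parse_args(["https://lemmy.ml/post/12345", "--dry-run", "--llm-provider", "mock"])
print(args.llm_provider, args.dry_run)
```

The `choices` list keeps "provider not supported" errors at the parsing stage rather than deep inside the LLM integration.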
### As a Crush Skill
The project includes a fully packaged Crush skill for use with Charm Crush:
1. Add the skill to your Crush configuration:

   ```bash
   crush skill add /path/to/covert-comrade/skills/covert-comrade
   ```

2. Use the skill:

   ```bash
   crush lemmy-reply --url https://lemmy.ml/post/12345
   ```

Available tools through Crush:

- `fetch_thread`: Fetch Lemmy threads and comment trees
- `update_profile`: Build/update user profiles with political position inference
- `research_topic`: Conduct web research via DuckDuckGo
- `generate_reply`: Generate Marxist‑Leninist replies using an LLM
- `manage_prompt`: Manage versioned system prompts
## Project Structure

```
covert-comrade/
├── src/
│   ├── lemmy_client.py        # Lemmy API client
│   ├── profile_store.py       # User profile management
│   ├── research_engine.py     # Web search and storage
│   ├── prompt_manager.py      # Prompt versioning
│   └── agent_orchestrator.py  # LLM integration
├── tests/                     # Unit and integration tests
├── skills/
│   └── covert-comrade/        # Crush skill packaging
│       ├── SKILL.md           # Skill documentation
│       ├── crush.json         # Crush configuration
│       └── scripts/           # CLI tool wrappers
├── profiles/                  # User profile markdown files
├── research/                  # Research findings
├── prompts/                   # System prompts
├── .env.example               # Environment template
├── pyproject.toml             # Project metadata and dependencies
└── README.md                  # This file
```
## Development

### Running Tests
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=term-missing

# Run a specific test file
pytest tests/test_lemmy_client.py -v
```
### Code Quality
```bash
# Format code with Black
black src tests

# Type checking with mypy
mypy src

# Lint with flake8
flake8 src tests

# Run all checks
black --check src tests && mypy src && flake8 src tests
```
## Project Plan

See [project-plan.md](project-plan.md) for the detailed implementation roadmap and phases.
## Troubleshooting

### Import Errors

If you encounter import errors when running the CLI scripts, ensure you're in the project root directory or have added it to your Python path:

```bash
export PYTHONPATH=/path/to/covert-comrade:$PYTHONPATH
```
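An alternative to exporting `PYTHONPATH` is to prepend the project root to `sys.path` at runtime. A small sketch, assuming the current working directory is the cloned repo root (adjust the path for your setup):

```python
import sys
from pathlib import Path

# Assumption: the process is started from the repository root.
project_root = Path.cwd()
if str(project_root) not in sys.path:
    sys.path.insert(0, str(project_root))

# After this, `from src.lemmy_client import LemmyClient` style imports can resolve.
```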
### LLM Provider Issues

- **Mock provider**: Use `--llm-provider mock` for testing without API keys
- **Missing API keys**: Set at least one LLM provider key in `.env`
- **Provider not supported**: Check the available providers in `lemmy_reply.py`
### Lemmy API Issues

- **Public instances**: No credentials are needed for reading
- **Authentication**: Set `LEMMY_USERNAME` and `LEMMY_PASSWORD` for posting
- **Rate limiting**: The client includes basic rate limiting
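"Basic rate limiting" could be as simple as enforcing a minimum interval between requests. A sketch of that pattern (illustrative; the actual throttling in `lemmy_client.py` may differ):

```python
import time


class MinIntervalThrottle:
    """Ensure at least `interval` seconds elapse between successive requests."""

    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep the gap since the last call >= interval.
        elapsed = time.monotonic() - self._last
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last = time.monotonic()


throttle = MinIntervalThrottle(interval=0.1)
start = time.monotonic()
for _ in range(3):
    throttle.wait()  # first call passes immediately, later calls are spaced out
elapsed = time.monotonic() - start
print(f"3 calls took at least {elapsed:.2f}s")
```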
### Web Search

- **Disabled by default**: Set `DDG_SEARCH_ENABLED=true` in `.env`
- **Network issues**: DuckDuckGo search may be blocked in some regions
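Whatever the search backend returns, persisting findings as markdown under the research directory might look roughly like this. The function name, slug scheme, and file layout are assumptions for illustration, not the project's actual `ResearchEngine`:

```python
import re
import tempfile
from pathlib import Path


def save_findings(topic: str, findings: str, research_dir: str = "research") -> Path:
    """Store research findings as a markdown file named after the topic.

    Illustrative sketch; the real research engine may organize files differently.
    """
    # Turn "Marxist theory of the state" into "marxist-theory-of-the-state".
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    out_dir = Path(research_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{slug}.md"
    path.write_text(f"# {topic}\n\n{findings}\n", encoding="utf-8")
    return path


# Demo against a temporary directory so nothing touches the real research/ folder.
saved = save_findings("Marxist theory of the state", "Summary of search results...",
                      research_dir=tempfile.mkdtemp())
print(saved.name)
```

Keeping findings as plain markdown makes them easy to review, diff, and feed back into reply generation as context.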
## Future Enhancements
- **Multi‑platform support**: Reddit, Mastodon, Bluesky
- **Advanced NLP**: Fine‑tuned models for political‑position detection
- **Real‑time operation**: Deploy as a background service monitoring communities
- **User‑feedback loop**: Allow users to rate replies and adjust confidence
- **Graph‑based user network**: Visualize connections between users
- **Dashboard**: Web UI to view profiles, research, and reply history
## License
MIT
## Acknowledgments
- Built with Charm Crush
- Lemmy API via `pythorhead`
- Web search via `duckduckgo-search`
For detailed API documentation, workflow examples, and configuration guides, see the skill documentation in `skills/covert-comrade/SKILL.md`.