
Covert Comrade

An AI agent that reads comment threads from Lemmy instances, formulates replies from a Marxist-Leninist left-wing perspective, and builds persistent user profiles based on observed political positions.

Features

  • Lemmy thread fetching: Retrieve entire comment trees from Lemmy posts
  • User profile building: Create and update markdown profiles with inferred political positions
  • AI-generated replies: Generate context-aware replies using LLMs
  • Research capability: Conduct web searches and store findings
  • Modular design: Each component is isolated and replaceable
  • Crush skill integration: Use as a standalone Python package or as a Charm Crush skill

Installation

Prerequisites

  • Python 3.11+
  • Charm Crush (optional, for skill usage)
  • API keys for at least one LLM provider (OpenAI, Anthropic, Groq, or DeepSeek)

Setup

  1. Clone the repository:

    git clone https://github.com/yourusername/covert-comrade.git
    cd covert-comrade
    
  2. Create and activate a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
  3. Install dependencies:

    pip install -e .
    
  4. Install development dependencies (optional):

    pip install -e .[dev]
    
  5. Configure environment variables:

    cp .env.example .env
    # Edit .env with your API keys and settings
    

Quick Start

After installation, you can test the system with the mock LLM provider using the demo workflow:

python demo_workflow.py

This will:

  1. Load sample thread data
  2. Update user profiles
  3. Generate a reply using the mock LLM provider
  4. Output the generated reply

For a more realistic test, you can use the standalone CLI tools with a real Lemmy URL (dry-run mode):

python skills/covert-comrade/scripts/lemmy_reply.py https://lemmy.ml/post/12345 --dry-run --llm-provider mock

Note: The mock provider returns a placeholder reply without making actual API calls. To use real LLMs, set the corresponding API key in .env and specify the provider (e.g., --llm-provider openai).
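For intuition, a mock provider amounts to a stub that returns a canned string instead of calling an external API. The sketch below is illustrative only; the class and method names are assumptions, and the actual provider interface in src/agent_orchestrator.py may differ:

```python
# Illustrative sketch of a "mock" LLM provider: no network calls,
# just a fixed placeholder reply for offline testing.
# Names here are assumptions, not the project's actual API.

class MockLLMProvider:
    """Stands in for a real LLM API during testing."""

    def generate(self, system_prompt: str, user_prompt: str) -> str:
        # No API call is made; a fixed placeholder is returned.
        return "[mock reply] Placeholder response for testing."


provider = MockLLMProvider()
reply = provider.generate("system prompt", "user prompt")
print(reply)  # → [mock reply] Placeholder response for testing.
```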

Configuration

The project uses environment variables for configuration. Create a .env file from .env.example:

# Lemmy API credentials (optional for public instances)
LEMMY_INSTANCE_URL=https://lemmy.ml
LEMMY_USERNAME=
LEMMY_PASSWORD=

# LLM Provider API keys (choose at least one)
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
DEEPSEEK_API_KEY=

# Web search (optional)
DDG_SEARCH_ENABLED=true

# Paths
PROFILES_DIR=profiles
RESEARCH_DIR=research
PROMPTS_DIR=prompts

Key Configuration Areas

  • Lemmy API: Set LEMMY_INSTANCE_URL and credentials if you need authenticated access (e.g., for posting replies)
  • LLM Providers: A real provider requires its API key; the mock provider needs no key and is intended for testing
  • Web Search: Set DDG_SEARCH_ENABLED=false to disable web research
  • Directories: Customize storage paths for profiles, research, and prompts
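For reference, the variables above could be read in Python roughly as follows. This is a hedged sketch of the pattern, not the project's actual configuration code; defaults mirror .env.example:

```python
# Sketch: read the configuration variables from the environment.
# Not the project's actual config module; defaults are assumptions
# based on .env.example.
import os

instance_url = os.environ.get("LEMMY_INSTANCE_URL", "https://lemmy.ml")
ddg_enabled = os.environ.get("DDG_SEARCH_ENABLED", "true").lower() == "true"
profiles_dir = os.environ.get("PROFILES_DIR", "profiles")

# Pick the first LLM provider whose key is set, falling back to mock.
provider_keys = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "groq": "GROQ_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}
provider = next(
    (name for name, key in provider_keys.items() if os.environ.get(key)),
    "mock",
)
print(f"provider={provider}, search={ddg_enabled}, profiles={profiles_dir}")
```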

Usage

As a Python Package

The core functionality is available as Python modules:

from src.lemmy_client import LemmyClient
from src.profile_store import ProfileStore
from src.research_engine import ResearchEngine
from src.agent_orchestrator import AgentOrchestrator

# Fetch a thread
client = LemmyClient()
thread = client.fetch_thread("https://lemmy.ml/post/12345")

# Update user profiles
store = ProfileStore()
store.update_from_thread(thread)

# Generate a reply
orchestrator = AgentOrchestrator()
reply = orchestrator.generate_reply(thread)
print(reply)

Using CLI Tools

The project provides standalone CLI scripts in skills/covert-comrade/scripts/:

  1. Fetch a Lemmy thread:

    python skills/covert-comrade/scripts/fetch_thread.py https://lemmy.ml/post/12345
    
  2. Update user profiles from a thread:

    python skills/covert-comrade/scripts/update_profile.py --thread-file thread.json
    
  3. Research a topic:

    python skills/covert-comrade/scripts/research_topic.py "Marxist theory of the state"
    
  4. Generate a reply (with optional research):

    python skills/covert-comrade/scripts/generate_reply.py --thread-file thread.json --research
    

Complete Workflow Example

The main lemmy_reply.py script combines all steps:

# Dry run: generate reply without posting
python skills/covert-comrade/scripts/lemmy_reply.py https://lemmy.ml/post/12345 --dry-run --research --llm-provider mock

# With real LLM and optional posting
python skills/covert-comrade/scripts/lemmy_reply.py https://lemmy.ml/post/12345 \
  --llm-provider openai \
  --research \
  --post \
  --username your_lemmy_username \
  --password your_lemmy_password

Available options for lemmy_reply.py:

  • --research: Enable web research on thread topic
  • --llm-provider: Choose provider (mock, openai, anthropic, groq, deepseek)
  • --post: Post reply to Lemmy (requires authentication)
  • --dry-run: Generate reply but don't post
  • --target-comment: Reply to specific comment ID
  • --output: Save reply to file
  • --prompt-file: Custom system prompt file

As a Crush Skill

The project includes a fully packaged Crush skill for use with Charm Crush:

  1. Add the skill to your Crush configuration:

    crush skill add /path/to/covert-comrade/skills/covert-comrade
    
  2. Use the skill:

    crush lemmy-reply --url https://lemmy.ml/post/12345
    
  3. Available tools through Crush:

    • fetch_thread: Fetch Lemmy threads and comment trees
    • update_profile: Build/update user profiles with political position inference
    • research_topic: Conduct web research via DuckDuckGo
    • generate_reply: Generate Marxist-Leninist replies using LLM
    • manage_prompt: Manage versioned system prompts

Project Structure

covert-comrade/
├── src/
│   ├── lemmy_client.py      # Lemmy API client
│   ├── profile_store.py     # User profile management
│   ├── research_engine.py   # Web search and storage
│   ├── prompt_manager.py    # Prompt versioning
│   └── agent_orchestrator.py # LLM integration
├── tests/                  # Unit and integration tests
├── skills/
│   └── covert-comrade/     # Crush skill packaging
│       ├── SKILL.md        # Skill documentation
│       ├── crush.json      # Crush configuration
│       └── scripts/        # CLI tool wrappers
├── profiles/               # User profile markdown files
├── research/               # Research findings
├── prompts/                # System prompts
├── .env.example           # Environment template
├── pyproject.toml         # Project metadata and dependencies
└── README.md              # This file

Development

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=term-missing

# Run specific test file
pytest tests/test_lemmy_client.py -v

Code Quality

# Format code with Black
black src tests

# Type checking with mypy
mypy src

# Lint with flake8
flake8 src tests

# Run all checks
black --check src tests && mypy src && flake8 src tests

Project Plan

See project-plan.md for the detailed implementation roadmap and phases.

Troubleshooting

Import Errors

If you encounter import errors when running CLI scripts, ensure you're in the project root directory or have added it to your Python path:

export PYTHONPATH=/path/to/covert-comrade:$PYTHONPATH
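Alternatively, a script can prepend the project root to sys.path at startup instead of relying on the environment variable. The path below is a hypothetical placeholder; replace it with your actual checkout location:

```python
# Alternative to exporting PYTHONPATH: make `src` importable from
# within a script. "/path/to/covert-comrade" is a placeholder for
# your actual checkout location.
import sys
from pathlib import Path

project_root = Path("/path/to/covert-comrade")
sys.path.insert(0, str(project_root))

# Imports such as `from src.lemmy_client import LemmyClient` now
# resolve regardless of the current working directory.
```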

LLM Provider Issues

  • Mock provider: Use --llm-provider mock for testing without API keys
  • Missing API keys: Set at least one LLM provider key in .env
  • Provider not supported: Check available providers in lemmy_reply.py

Lemmy API Issues

  • Public instances: No credentials needed for reading
  • Authentication: Set LEMMY_USERNAME and LEMMY_PASSWORD for posting
  • Rate limiting: The client includes basic rate limiting

Web Search Issues

  • Search disabled: Ensure DDG_SEARCH_ENABLED=true in .env
  • Network issues: DuckDuckGo search may be blocked in some regions

Future Enhancements

  • Multi-platform support: Reddit, Mastodon, Bluesky
  • Advanced NLP: Fine-tuned models for political-position detection
  • Real-time operation: Deploy as a background service monitoring communities
  • User-feedback loop: Allow users to rate replies and adjust confidence
  • Graph-based user network: Visualize connections between users
  • Dashboard: Web UI to view profiles, research, and reply history

License

MIT

For detailed API documentation, workflow examples, and configuration guides, see the skill documentation in skills/covert-comrade/SKILL.md.