bossjones democracy-exe .cursorrules file for Python

<?xml version="1.0" encoding="UTF-8"?>
<cursorrules>
<purpose>
  You are an AI assistant responsible for helping a developer maintain Python code quality and develop an agentic system using LangChain and LangGraph.
</purpose>

<instructions>
  <instruction>Create a main folder called `democracy_exe` to house all components.</instruction>
  <instruction>Divide the library into two main categories: `agents` and `tasks`.</instruction>
  <instruction>Within the `agents` folder, create subcategories for `creative` and `engineering` tasks.</instruction>
  <instruction>Under `creative`, include subfolders for image-generation, fabrication, story-generation, and quote-generation.</instruction>
  <instruction>Under `engineering`, create subfolders for code-generation, code-review, bug-finding, and style-enforcement.</instruction>
  <instruction>In `tasks`, include subfolders for image-generation, lore-writing, code-generation, and code-review.</instruction>
  <instruction>Create a `templates` folder to store reusable component structures.</instruction>
  <instruction>Include a comprehensive README.md file at the root level.</instruction>
  <instruction>When generating or modifying code, return only the changed or new portions, not the entire function or file.</instruction>
</instructions>

<python_standards>
  <project_structure>
    <standard>Maintain clear project structure with separate directories for source code (`democracy_exe/`), tests (`tests/`), and documentation (`docs/`).</standard>
    <standard>Use modular design with distinct files for models, services, controllers, and utilities.</standard>
    <standard>Create separate files for AI components (chat models, prompts, output parsers, chat history, documents/loaders, stores, retrievers, tools).</standard>
    <standard>Follow modular design patterns for LangChain/LangGraph components.</standard>
    <standard>Use uv (https://docs.astral.sh/uv) for dependency management and virtual environments.</standard>
    <standard>Follow the composition over inheritance principle.</standard>
    <standard>Use design patterns like Adapter, Decorator, and Bridge when appropriate.</standard>
    <standard>Organize AI components following LangChain conventions (see: https://python.langchain.com/v0.2/docs/concepts/#few-shot-prompting).</standard>
  </project_structure>

  <code_quality>
    <standard>Add typing annotations to ALL functions and classes with return types.</standard>
    <standard>Include PEP 257-compliant docstrings in Google style for all functions and classes.</standard>
    <standard>Implement robust error handling and logging using structlog with context capture.</standard>
    <standard>Use descriptive variable and function names.</standard>
    <standard>Add detailed comments for complex logic.</standard>
    <standard>Provide rich error context for debugging.</standard>
    <standard>Follow DRY (Don't Repeat Yourself) and KISS (Keep It Simple, Stupid) principles.</standard>
    <standard>Skip type annotations and docstrings for marimo notebook cells (files prefixed with `marimo_*`) that use the pattern `@app.cell def __()`.</standard>
    <standard>Use dataclasses for configuration and structured data (see the dataclass configuration example below).</standard>
    <standard>Implement proper error boundaries and exception handling.</standard>
    <standard>Use structlog for all logging with proper formatting and context.</standard>
    <standard>Return only modified portions of code in generation results to optimize token usage.</standard>
    <standard>Add `# pyright: reportAttributeAccessIssue=false` at the top of any Python file that uses discord.py.</standard>
    <standard>Use aiofiles for all async file operations instead of builtin open, with appropriate file modes specified.</standard>
    <standard>In async functions, NEVER use the built-in `open()` function - always use `aiofiles.open()` to prevent blocking I/O operations.</standard>

    <import_standards>
      <standard>Always use Ruff's isort (I) rules for import sorting.</standard>
      <standard>Group imports into three sections separated by a blank line: standard library, third-party, and local imports.</standard>
      <standard>Sort imports alphabetically within each section.</standard>
      <standard>Place `from __future__ import annotations` at the top of the file before other imports.</standard>
      <standard>Use absolute imports instead of relative imports.</standard>
      <standard>For TYPE_CHECKING imports in tests, group pytest-specific imports together:
        - CaptureFixture
        - FixtureRequest
        - LogCaptureFixture
        - MonkeyPatch
        - MockerFixture
        - VCRRequest (when using pytest-recording)</standard>
      <standard>Configure Ruff to automatically fix import sorting with `ruff --select I --fix`.</standard>
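      <example>
        <![CDATA[
        # Import Layout Example (illustrative sketch; module names and the local import path are placeholders)
        from __future__ import annotations

        # Standard library
        import asyncio
        from pathlib import Path
        from typing import TYPE_CHECKING

        # Third-party
        import structlog

        # Local
        from democracy_exe import constants

        if TYPE_CHECKING:
            from _pytest.capture import CaptureFixture
            from _pytest.fixtures import FixtureRequest
            from _pytest.logging import LogCaptureFixture
            from _pytest.monkeypatch import MonkeyPatch
            from pytest_mock.plugin import MockerFixture
        ]]>
      </example>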
    </import_standards>

    <examples>
      <example>
        <![CDATA[
        # Logging Configuration Example
        import logging
        from pathlib import Path
        from typing import Any, Dict, Union

        import structlog

        def setup_logging(
            log_path: Union[str, Path] = "logs/app.log",
            log_level: str = "INFO"
        ) -> None:
            """Configure application logging.

            Args:
                log_path: Path to log file
                log_level: Minimum log level to capture
            """
            structlog.configure(
                processors=[
                    structlog.contextvars.merge_contextvars,
                    structlog.processors.add_log_level,
                    structlog.processors.TimeStamper(fmt="iso"),
                    structlog.processors.StackInfoRenderer(),
                    structlog.dev.ConsoleRenderer()
                ],
                wrapper_class=structlog.make_filtering_bound_logger(getattr(logging, log_level.upper(), logging.INFO)),  # expects an int level
                context_class=dict,
                logger_factory=structlog.PrintLoggerFactory(),
                cache_logger_on_first_use=True
            )

        # Exception Hierarchy Example
        class DemocracyExeError(Exception):
            """Base exception for all application errors."""
            pass

        class LLMError(DemocracyExeError):
            """Base exception for LLM-related errors."""
            pass

        class ModelNotFoundError(LLMError):
            """Raised when specified model is not available."""
            pass

        class TokenLimitError(LLMError):
            """Raised when token limit is exceeded."""
            pass

        # Error Context Capture Example
        def log_error_context(
            error: Exception,
            context: Dict[str, Any],
            level: str = "error"
        ) -> None:
            """Log error with additional context.

            Args:
                error: Exception that occurred
                context: Additional context information
                level: Log level to use
            """
            logger = structlog.get_logger().bind(**context)
            # Dispatch to the bound logger method named by `level` (e.g. logger.error)
            log_method = getattr(logger, level, logger.error)
            log_method(
                "error_occurred",
                error=str(error),
                error_type=type(error).__name__
            )

        # Usage Example
        from functools import wraps
        from typing import Callable, TypeVar, ParamSpec

        P = ParamSpec("P")
        T = TypeVar("T")

        def with_error_handling(func: Callable[P, T]) -> Callable[P, T]:
            """Decorator for handling errors with context capture.

            Args:
                func: Function to wrap

            Returns:
                Wrapped function with error handling
            """
            @wraps(func)
            async def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
                try:
                    return await func(*args, **kwargs)
                except Exception as e:
                    context = {
                        "function": func.__name__,
                        "args": args,
                        "kwargs": kwargs,
                        "error_type": type(e).__name__
                    }
                    log_error_context(e, context)
                    raise

            return wrapper

        # Example Usage
        @with_error_handling
        async def process_document(
            doc_path: Path,
            max_tokens: int = 1000
        ) -> str:
            """Process a document with error handling.

            Args:
                doc_path: Path to document
                max_tokens: Maximum tokens to process

            Returns:
                Processed text

            Raises:
                FileNotFoundError: If document doesn't exist
                TokenLimitError: If document exceeds token limit
            """
            if not doc_path.exists():
                raise FileNotFoundError(f"Document not found: {doc_path}")

            # Process document...
            return "Processed text"
        ]]>
      </example>
      <example>
        <![CDATA[
        # Async File Operations Example
        import aiofiles
        from pathlib import Path
        from typing import Union, List

        async def read_file_async(file_path: Union[str, Path]) -> str:
            """Read file contents asynchronously.

            Args:
                file_path: Path to the file to read

            Returns:
                str: Contents of the file

            Raises:
                FileNotFoundError: If file doesn't exist
                IOError: If file cannot be read
            """
            async with aiofiles.open(file_path, mode='r', encoding='utf-8') as f:
                return await f.read()

        async def write_file_async(
            file_path: Union[str, Path],
            content: str,
            append: bool = False
        ) -> None:
            """Write content to file asynchronously.

            Args:
                file_path: Path to write to
                content: Content to write
                append: Whether to append to file (default: False)

            Raises:
                IOError: If file cannot be written
            """
            mode = 'a' if append else 'w'
            async with aiofiles.open(file_path, mode=mode, encoding='utf-8') as f:
                await f.write(content)

        async def read_lines_async(file_path: Union[str, Path]) -> List[str]:
            """Read file lines asynchronously.

            Args:
                file_path: Path to the file to read

            Returns:
                List[str]: Lines from the file

            Raises:
                FileNotFoundError: If file doesn't exist
                IOError: If file cannot be read
            """
            async with aiofiles.open(file_path, mode='r', encoding='utf-8') as f:
                return await f.readlines()
        ]]>
      </example>
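      <example>
        <![CDATA[
        # Dataclass Configuration Example (illustrative sketch; field names are assumptions, not real project settings)
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class RetryConfig:
            """Retry behaviour for LLM calls."""

            max_attempts: int = 3
            backoff_seconds: float = 2.0

        @dataclass
        class AgentConfig:
            """Structured configuration for a single agent."""

            model_name: str = "gpt-4o"
            temperature: float = 0.7
            tools: list[str] = field(default_factory=list)
            retry: RetryConfig = field(default_factory=RetryConfig)

        # Usage
        config = AgentConfig(model_name="gpt-4o", temperature=0.2)
        ]]>
      </example>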
    </examples>
  </code_quality>

  <testing>
    <standard>Use pytest exclusively for all testing (no unittest module).</standard>
    <standard>Place all tests in `./tests/` directory with proper subdirectories matching source code structure.</standard>
    <standard>Include `__init__.py` files in all test directories and subdirectories.</standard>
    <standard>Add type annotations and docstrings to all tests.</standard>
    <standard>Use pytest markers to categorize tests (e.g., `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.asyncio`).</standard>
    <standard>Mark cursor-generated code with `@pytest.mark.cursor`.</standard>
    <standard>Strive for 100% unit test code coverage.</standard>
    <standard>Use pytest-recording for tests involving Langchain runnables (limited to unit/integration tests).</standard>
    <standard>Implement Discord.py testing using the discord.ext.test (dpytest) library.</standard>
    <standard>Use typer.testing.CliRunner for CLI application testing (see the example below).</standard>
    <standard>For file-based tests, use tmp_path fixture to handle test files.</standard>
    <standard>Avoid context managers for pytest mocks, use mocker.patch instead.</standard>
    <standard>Mirror source code directory structure in tests directory.</standard>
    <standard>Use VCR.py for recording and replaying HTTP interactions in tests.</standard>
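    <example>
      <![CDATA[
      # CLI test sketch with typer.testing.CliRunner (assumes the Typer app is exposed as
      # `APP` in democracy_exe.cli; adjust the import to the real entry point)
      from typer.testing import CliRunner

      from democracy_exe.cli import APP

      runner = CliRunner()

      def test_version_command() -> None:
          """Verify the version command exits cleanly and mentions the package."""
          result = runner.invoke(APP, ["version"])
          assert result.exit_code == 0
          assert "democracy_exe" in result.stdout
      ]]>
    </example>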

    <test_execution>
      <standard>For testing/fixing individual test files, use the following command format:</standard>
      <example>
        <![CDATA[
        # Run a specific test file with verbose output and local variables shown
        uv run pytest -s --verbose --showlocals --tb=short path/to/file.py

        # Example:
        uv run pytest -s --verbose --showlocals --tb=short tests/test_logsetup.py
        ]]>
      </example>
    </test_execution>

    <directory_structure>
      <standard>Organize tests into logical subdirectories matching source code structure:</standard>
      <structure>
        <![CDATA[
        tests/
        ├── __init__.py
        ├── conftest.py              # Global test fixtures and configuration
        ├── fake_embeddings.py       # Test utilities
        ├── test_*.py               # Top-level tests
        ├── internal/               # Internal testing utilities
        │   ├── __init__.py
        │   └── cogs/              # Discord bot cog testing utilities
        │       ├── __init__.py
        │       ├── echo.py
        │       ├── greeting.py
        │       └── misc.py
        └── unittests/             # Unit tests matching source structure
            ├── __init__.py
            ├── ai/
            │   ├── __init__.py
            │   ├── agents/
            │   │   ├── __init__.py
            │   │   └── test_router_agent.py
            │   ├── graphs/
            │   │   ├── __init__.py
            │   │   └── test_router_graph.py
            │   ├── test_base.py
            │   └── test_state.py
            └── chatbot/
                ├── __init__.py
                └── ai/
                    ├── __init__.py
                    └── test_langchain_utils.py
        ]]>
      </structure>
    </directory_structure>

    <test_types>
      <standard>Unit tests should be placed in tests/unittests/ directory</standard>
      <standard>Integration tests should be placed in tests/integration/ directory</standard>
      <standard>End-to-end tests should be placed in tests/e2e/ directory</standard>
      <standard>Performance tests should be placed in tests/performance/ directory</standard>
    </test_types>

    <test_fixtures>
      <standard>Define shared fixtures in conftest.py files</standard>
      <standard>Use proper typing for all fixtures</standard>
      <standard>Include comprehensive docstrings for all fixtures</standard>
      <standard>Use appropriate fixture scopes (function, class, module, session)</standard>
    </test_fixtures>
  </testing>

  <dependency_management>
    <standard>Use uv (https://docs.astral.sh/uv) for all dependency and package management operations.</standard>
    <standard>Use `uv sync` to install dependencies; avoid `uv pip install`.</standard>
    <standard>Use Ruff for code style consistency.</standard>
    <standard>Document Ruff rules in pyproject.toml with stability indicators.</standard>
    <standard>Maintain clear dependency specifications in pyproject.toml.</standard>
  </dependency_management>

  <langchain_standards>
    <standard>Mark tests involving Langchain runnables with @pytest.mark.vcr (except evaluation tests).</standard>
    <standard>Use proper VCR.py configuration for HTTP interaction recording.</standard>
    <standard>Implement proper typing for all Langchain components.</standard>
    <standard>Follow Langchain's component structure guidelines.</standard>
    <standard>Create distinct files for different LangChain component types.</standard>
    <standard>Use proper error handling for LLM API calls.</standard>
    <standard>Implement retry logic for API failures.</standard>
    <standard>Use streaming responses when appropriate.</standard>
    <examples>
      <example>
        <![CDATA[
        # Chain Construction Example
        from langchain_core.output_parsers import StrOutputParser
        from langchain_core.prompts import ChatPromptTemplate
        from langchain_core.runnables import Runnable
        from langchain_openai import ChatOpenAI

        def create_qa_chain(
            model_name: str = "gpt-3.5-turbo",
            temperature: float = 0.7
        ) -> Runnable:
            """Create a question-answering chain.

            Args:
                model_name: Name of the LLM model to use
                temperature: Sampling temperature

            Returns:
                Configured QA chain
            """
            prompt = ChatPromptTemplate.from_template("""
                Answer the question based on the context.
                Context: {context}
                Question: {question}
                Answer:""")

            model = ChatOpenAI(
                model_name=model_name,
                temperature=temperature
            )

            chain = prompt | model | StrOutputParser()
            return chain

        # Error Handling Example
        from typing import Any, Dict

        import structlog
        from tenacity import retry, stop_after_attempt, wait_exponential

        logger = structlog.get_logger()

        @retry(
            stop=stop_after_attempt(3),
            wait=wait_exponential(multiplier=1, min=4, max=10)
        )
        async def call_llm_with_retry(
            chain: Runnable,
            inputs: Dict[str, Any]
        ) -> str:
            """Call LLM with retry logic.

            Args:
                chain: LangChain runnable
                inputs: Input parameters

            Returns:
                Model response

            Raises:
                Exception: If all retries fail
            """
            try:
                response = await chain.ainvoke(inputs)
                return response
            except Exception as e:
                logger.exception(f"LLM call failed: {str(e)}")
                raise
        ]]>
      </example>
      <example>
        <![CDATA[
        # LangGraph Agent Example
        from typing import Any, Dict, List, Tuple

        from langchain.agents import AgentExecutor
        from langchain_core.messages import HumanMessage
        from langchain_core.tools import BaseTool
        from langchain_openai import ChatOpenAI
        from langgraph.prebuilt import create_agent_executor

        async def create_research_agent(
            tools: List[BaseTool],
            system_message: str
        ) -> AgentExecutor:
            """Create a research agent with tools.

            Args:
                tools: List of tools for the agent
                system_message: System prompt for the agent

            Returns:
                Configured agent executor
            """
            agent = create_agent_executor(
                tools=tools,
                llm=ChatOpenAI(temperature=0),
                system_message=system_message
            )

            return agent

        # Agent Usage Example
        async def research_topic(
            agent: AgentExecutor,
            query: str
        ) -> Tuple[str, List[Dict[str, Any]]]:
            """Research a topic using an agent.

            Args:
                agent: Research agent
                query: Research query

            Returns:
                Tuple of final answer and intermediate steps
            """
            result = await agent.ainvoke({
                "input": query,
                "chat_history": []
            })

            return result["output"], result["intermediate_steps"]
        ]]>
      </example>
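      <example>
        <![CDATA[
        # Streaming Response Example (sketch; assumes a chain built like the QA chain above,
        # which yields string chunks through StrOutputParser)
        from typing import Any, Dict

        from langchain_core.runnables import Runnable

        async def stream_answer(chain: Runnable, inputs: Dict[str, Any]) -> str:
            """Stream tokens from the chain and return the assembled response.

            Args:
                chain: LangChain runnable that yields string chunks
                inputs: Input parameters for the chain

            Returns:
                The full response assembled from streamed chunks
            """
            chunks: list[str] = []
            async for chunk in chain.astream(inputs):
                print(chunk, end="", flush=True)  # emit tokens as they arrive
                chunks.append(chunk)
            return "".join(chunks)
        ]]>
      </example>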
    </examples>
  </langchain_standards>

  <langgraph_standards>
    <standard>Follow LangGraph's component structure for agent workflows.</standard>
    <standard>Use proper state management in graph nodes.</standard>
    <standard>Implement proper error handling in graph edges.</standard>
    <standard>Use appropriate markers for graph-based tests.</standard>
    <standard>Create reusable graph components when possible.</standard>
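    <examples>
      <example>
        <![CDATA[
        # Minimal LangGraph graph sketch (node names and state fields are illustrative)
        from typing import TypedDict

        from langgraph.graph import END, StateGraph

        class RouterState(TypedDict):
            """State shared between graph nodes."""

            question: str
            answer: str

        def generate_answer(state: RouterState) -> dict:
            """Node: return a partial state update instead of mutating shared state."""
            return {"answer": f"Echo: {state['question']}"}

        graph = StateGraph(RouterState)
        graph.add_node("generate", generate_answer)
        graph.set_entry_point("generate")
        graph.add_edge("generate", END)
        router_graph = graph.compile()  # compiled graph is a reusable component
        ]]>
      </example>
    </examples>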
  </langgraph_standards>

  <design_patterns>
    <pattern>
      <name>Composition Over Inheritance</name>
      <description>Favor object composition over class inheritance to avoid subclass explosion and enhance flexibility</description>
      <example>
        <![CDATA[
        # Prefer composition
        class DocumentProcessor:
            def __init__(self, loader: BaseLoader, splitter: TextSplitter):
                self.loader = loader
                self.splitter = splitter

        # Instead of inheritance
        class PDFProcessor(BaseLoader, TextSplitter):
            pass
        ]]>
      </example>
    </pattern>
    <pattern>
      <name>Decorator Pattern</name>
      <description>Use for dynamically adjusting behavior of objects without modifying their structure</description>
      <example>
        <![CDATA[
        def log_llm_calls(func: Callable) -> Callable:
            @wraps(func)
            async def wrapper(*args: Any, **kwargs: Any) -> Any:
                logger.info(f"Calling LLM with args: {args}, kwargs: {kwargs}")
                return await func(*args, **kwargs)
            return wrapper
        ]]>
      </example>
    </pattern>
    <pattern>
      <name>Adapter Pattern</name>
      <description>Allow incompatible interfaces to work together, promoting flexibility and reusability</description>
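      <example>
        <![CDATA[
        # Adapter sketch: wrap an incompatible client behind the interface the app expects
        # (SearchClient and its query() method are hypothetical stand-ins)
        from typing import Protocol

        class Retriever(Protocol):
            """Interface the application code expects."""

            def get_relevant_documents(self, query: str) -> list[str]:
                ...

        class SearchClient:
            """Third-party client with an incompatible interface."""

            def query(self, text: str, limit: int = 5) -> list[str]:
                return [f"result for {text}"][:limit]

        class SearchClientAdapter:
            """Adapts SearchClient to the Retriever interface."""

            def __init__(self, client: SearchClient) -> None:
                self.client = client

            def get_relevant_documents(self, query: str) -> list[str]:
                return self.client.query(query)
        ]]>
      </example>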
    </pattern>
    <pattern>
      <name>Global Object Pattern</name>
      <description>Use for creating module-level objects that provide methods for actions</description>
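      <example>
        <![CDATA[
        # Global object sketch: a module-level logger configured once and shared by module functions
        # (function name is illustrative)
        import structlog

        logger = structlog.get_logger(__name__)

        def record_vote(choice: str) -> None:
            """Record a vote using the shared module-level logger."""
            logger.info("vote_recorded", choice=choice)
        ]]>
      </example>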
    </pattern>
  </design_patterns>

  <configuration_standards>
    <ruff_rules>
      <standard>Document all Ruff rules in pyproject.toml with inline comments.</standard>
      <standard>Include stability indicators for each rule:
        - ✔️ (stable)
        - 🧪 (unstable/preview)
        - ⚠️ (deprecated)
        - ❌ (removed)
        - 🛠️ (auto-fixable)
      </standard>
      <standard>Keep rule descriptions under 160 characters when possible.</standard>
      <standard>Reference Ruff version from .pre-commit-config.yaml.</standard>
      <example>
        <![CDATA[
        [tool.ruff.lint]
        select = [
            "D200", # fits-on-one-line: One-line docstring should fit on one line (stable)
            "E226", # missing-whitespace-around-arithmetic-operator: Missing whitespace around arithmetic operator (unstable)
        ]
        ]]>
      </example>
    </ruff_rules>

    <tool_configurations>
      <standard>Document configuration options for:
        - pylint (reference: pylint.pycqa.org)
        - pyright (reference: microsoft.github.io/pyright)
        - mypy (reference: mypy.readthedocs.io)
        - commitizen (reference: commitizen-tools.github.io)
      </standard>
      <standard>Include descriptive comments for each configuration option.</standard>
    </tool_configurations>

    <test_imports>
      <standard>Import necessary pytest types in TYPE_CHECKING block:
        - CaptureFixture
        - FixtureRequest
        - LogCaptureFixture
        - MonkeyPatch
        - MockerFixture
        - VCRRequest (when using pytest-recording)
      </standard>
    </test_imports>
  </configuration_standards>

  <testing_practices>
    <fixtures>
      <standard>Use pytest fixtures for reusable test components.</standard>
      <standard>Utilize tmp_path fixture for file-based tests.</standard>
      <examples>
        <example>
          <![CDATA[
          # VCR Configuration Example
          @pytest.fixture(scope="module")
          def vcr_config() -> Dict[str, Any]:
              """Configure VCR for test recording.

              Returns:
                  VCR configuration dictionary
              """
              return {
                  "filter_headers": ["authorization", "x-api-key"],
                  "match_on": ["method", "scheme", "host", "port", "path", "query"],
                  "decode_compressed_response": True
              }

          # Discord.py Test Fixtures
          @pytest.fixture
          async def test_guild() -> AsyncGenerator[discord.Guild, None]:
              """Create a test guild.

              Yields:
                  Test guild instance
              """
              guild = await dpytest.driver.create_guild()
              await dpytest.driver.configure_guild(guild)
              yield guild
              await dpytest.empty_queue()

          @pytest.fixture
          async def test_channel(
              test_guild: discord.Guild
          ) -> AsyncGenerator[discord.TextChannel, None]:
              """Create a test channel.

              Args:
                  test_guild: Test guild fixture

              Yields:
                  Test channel instance
              """
              channel = await dpytest.driver.create_text_channel(test_guild)
              yield channel
              await dpytest.empty_queue()
          ]]>
        </example>
        <example>
          <![CDATA[
          # Async Test Examples
          @pytest.mark.asyncio
          @pytest.mark.vcr(
              filter_headers=["authorization"],
              match_on=["method", "scheme", "host", "port", "path", "query"]
          )
          async def test_agent_research(
              mocker: MockerFixture,
              test_agent: AgentExecutor,
              caplog: LogCaptureFixture
          ) -> None:
              """Test agent research functionality.

              Args:
                  mocker: Pytest mocker fixture
                  test_agent: Agent fixture
                  caplog: Log capture fixture
              """
              # Mock web search tool
              mock_search = mocker.patch(
                  "your_package.tools.web_search",
                  return_value="Test search result"
              )

              query = "What is the capital of France?"
              result, steps = await research_topic(test_agent, query)

              assert "Paris" in result.lower()
              assert len(steps) > 0
              assert mock_search.call_count > 0

          # Discord.py Command Test
          @pytest.mark.asyncio
          async def test_research_command(
              test_guild: discord.Guild,
              test_channel: discord.TextChannel,
              test_agent: AgentExecutor
          ) -> None:
              """Test Discord research command.

              Args:
                  test_guild: Test guild fixture
                  test_channel: Test channel fixture
                  test_agent: Agent fixture
              """
              await dpytest.message("?research What is Python?")

              messages = await dpytest.sent_queue.get()
              assert len(messages) == 1
              assert "programming language" in messages[0].content.lower()
          ]]>
        </example>
      </examples>
    </fixtures>

    <test_organization>
      <standard>Mirror source code directory structure in tests directory.</standard>
      <standard>Use appropriate pytest markers for test categorization.</standard>
      <standard>Include comprehensive docstrings for all test functions.</standard>
      <example>
        <![CDATA[
        @pytest.mark.slow()
        @pytest.mark.services()
        @pytest.mark.vcr(
            allow_playback_repeats=True,
            match_on=["method", "scheme", "port", "path", "query"],
            ignore_localhost=False
        )
        def test_load_documents(
            mocker: MockerFixture,
            mock_pdf_file: Path,
            vcr: Any
        ) -> None:
            """Test the loading of documents from a PDF file.

            Verifies that the load_documents function correctly processes PDF files.

            Args:
                mocker: The pytest-mock fixture
                mock_pdf_file: Path to test PDF
                vcr: VCR.py fixture
            """
            # Test implementation
        ]]>
      </example>
    </test_organization>

    <structlog_testing>
      <standard>Always use structlog's capture_logs context manager for testing log output.</standard>
      <standard>Never use pytest's caplog fixture for structlog message verification.</standard>
      <standard>Check log events using log.get("event") instead of checking message strings.</standard>
      <standard>Include descriptive error messages in log assertions.</standard>
      <standard>Remove caplog.set_level() calls when using structlog.</standard>
      <standard>For dynamic log messages containing variable content (like file paths), use startswith() or partial matching.</standard>
      <example>
        <![CDATA[
        @pytest.mark.asyncio
        async def test_example_event(bot: DemocracyBot) -> None:
            """Test example event logging.

            Args:
                bot: The Discord bot instance
            """
            with structlog.testing.capture_logs() as captured:
                # Perform the action that generates logs
                await some_action()

                # Check if the log message exists in the captured structlog events
                assert any(
                    log.get("event") == "Expected Event Message" for log in captured
                ), "Expected 'Expected Event Message' not found in logs"

                # For multiple log checks, use multiple assertions
                assert any(
                    log.get("event") == "Another Expected Event" for log in captured
                ), "Expected 'Another Expected Event' not found in logs"

                # For dynamic messages with variable content, use startswith()
                assert any(
                    log.get("event").startswith("File created at:") for log in captured
                ), "Expected file creation message not found in logs"

                # For messages containing variable paths or IDs, use partial matching
                assert any(
                    "user_123" in log.get("event") for log in captured
                ), "Expected user ID in log message not found"
        ]]>
      </example>
      <best_practices>
        <standard>Use descriptive variable names like 'captured' for the capture_logs result.</standard>
        <standard>Check exact event messages rather than using string contains when possible.</standard>
        <standard>Use startswith() for messages with known prefixes but variable content.</standard>
        <standard>Use string contains (in operator) for messages where the variable content could be anywhere.</standard>
        <standard>Include the full expected message in the assertion error message.</standard>
        <standard>Group related log checks together within the same capture_logs context.</standard>
      </best_practices>
    </structlog_testing>
  </testing_practices>

  <examples>
    <example>
      <![CDATA[
      Example folder structure:
democracy-exe/
├── democracy_exe/                   # Main package directory
│   ├── __init__.py
│   ├── __main__.py
│   ├── __version__.py
│   ├── agentic/                    # Agentic system components
│   │   ├── __init__.py
│   │   ├── agents/
│   │   └── workflows/
│   ├── ai/                         # AI/ML components
│   │   ├── __init__.py
│   │   ├── chains/
│   │   ├── models/
│   │   └── tools/
│   ├── bot_logger/                 # Logging components
│   ├── chatbot/                    # Discord chatbot components
│   ├── clients/                    # API clients
│   ├── data/                       # Data storage
│   ├── exceptions/                 # Custom exceptions
│   ├── factories/                  # Factory classes
│   ├── models/                     # Data models
│   ├── shell/                      # Shell/CLI components
│   ├── subcommands/                # CLI subcommands
│   ├── utils/                      # Utility functions
│   ├── vendored/                   # Vendored dependencies
│   ├── aio_settings.py            # Async settings
│   ├── asynctyper.py              # Async CLI utilities
│   ├── base.py                    # Base classes
│   ├── cli.py                     # CLI implementation
│   ├── constants.py               # Constants
│   ├── debugger.py               # Debugging utilities
│   ├── llm_manager.py            # LLM management
│   ├── main.py                   # Main entry point
│   └── types.py                  # Type definitions
│
├── tests/                         # Test directory
│   ├── __init__.py
│   ├── conftest.py
│   ├── unit/
│   ├── integration/
│   └── fixtures/
│
├── docs/                          # Documentation
├── scripts/                       # Utility scripts
├── stubs/                        # Type stubs
├── ai_docs/                      # AI documentation
├── cookbook/                     # Code examples
│
├── .github/                      # GitHub configuration
├── .vscode/                      # VSCode configuration
├── .devcontainer/               # Dev container config
│
├── pyproject.toml               # Project configuration
├── Justfile                     # Just commands
├── Makefile                     # Make commands
├── README.md                    # Project documentation
├── CONTRIBUTING.md             # Contribution guide
├── LICENSE                     # License file
└── mkdocs.yml                  # Documentation config
      ]]>
    </example>
    <example>
      <![CDATA[
      Example README.md content:
      # Democracy Exe

      This repository contains a structured agentic system built with LangChain and LangGraph.

      ## Structure
      - `agents/`: Components for continuous use in agentic systems
      - `tasks/`: Components for specific task execution
      - `templates/`: Reusable component structures

      ## Usage
      [Include guidelines on how to use and contribute to the system]
      ]]>
    </example>
    <example>
      <![CDATA[
      Example prompt.xml for John Helldiver:
      <?xml version="1.0" encoding="UTF-8"?>
      <prompt>
        <context>
          You are a skilled lore writer for the Helldivers 2 universe. Your task is to create a compelling backstory for John Helldiver, a legendary commando known for his exceptional skills and unwavering dedication to the mission.
        </context>
        <instruction>
          Write a brief but engaging backstory for John Helldiver, highlighting his:
          1. Origin and early life
          2. Key missions and accomplishments
          3. Unique personality traits
          4. Signature weapons or equipment
          5. Relationships with other Helldivers or characters
        </instruction>
        <example>
          Here's an example of a brief backstory for another character:

          Sarah "Stormbreaker" Chen, born on a remote Super Earth colony, joined the Helldivers at 18 after her home was destroyed by Terminid forces. Known for her unparalleled skill with the Arc Thrower, Sarah has become a legend for single-handedly holding off waves of Bug attacks during the Battle of New Helsinki. Her stoic demeanor and tactical genius have earned her the respect of both rookies and veterans alike.
        </example>
        <output_format>
          Provide a cohesive narrative of 200-300 words that captures the essence of John Helldiver's legendary status while maintaining the gritty, militaristic tone of the Helldivers universe.
        </output_format>
      </prompt>
      ]]>
    </example>
    <example>
      <![CDATA[
      Example README.md for John Helldiver:
      # John Helldiver Backstory Prompt

      ## Purpose
      This prompt is designed to generate a compelling backstory for John Helldiver, a legendary commando in the Helldivers 2 universe. It aims to create a rich, engaging narrative that fits seamlessly into the game's lore.

      ## Usage
      1. Use this prompt with a large language model capable of creative writing and understanding context.
      2. Provide the prompt to the model without modification.
      3. The generated output should be a 200-300 word backstory that can be used as-is or as a foundation for further development.

      ## Expected Output
      A brief but detailed backstory covering John Helldiver's origin, key accomplishments, personality traits, equipment, and relationships within the Helldivers universe.

      ## Special Considerations
      - Ensure the tone matches the gritty, militaristic style of Helldivers 2.
      - The backstory should emphasize John's exceptional skills and dedication to his missions.
      - Feel free to iterate on the output, using it as a starting point for more detailed character development.
      ]]>
    </example>
      <example>
      <![CDATA[
      Example metadata.json for John Helldiver:
      {
        "promptName": "JohnHelldiverBackstory",
        "version": "1.0",
        "targetModel": "gpt4o",
        "author": "YourName",
        "creationDate": "2024-12-08",
        "lastTestedDate": "2024-12-08",
        "tags": ["Helldivers2", "lore", "character-backstory", "sci-fi"],
        "description": "Generates a backstory for John Helldiver, a legendary commando in the Helldivers 2 universe",
        "performanceMetrics": {
          "averageOutputQuality": 4.5,
          "successRate": 0.95
        },
        "promptStructure": "Four-level prompt (Context, Instruction, Example, Output Format)"
      }
      ]]>
    </example>
    <example>
      <![CDATA[
      Example examples/example1.md for John Helldiver:
      # Example Output 1: John Helldiver Backstory

      John "Hellfire" Helldiver was born in the underground bunkers of Super Earth during the height of the Bug War. Raised by veteran Helldivers, John's childhood was a brutal training regimen that forged him into a living weapon. At 16, he led his first mission against a Terminid hive, earning his call sign "Hellfire" after single-handedly destroying the hive with nothing but a flamethrower and sheer determination.

      Known for his uncanny ability to turn the tide of impossible battles, John has become a symbol of hope for humanity. His most famous exploit came during the Siege of New Atlantis, where he held off waves of Automaton forces for 72 hours straight, allowing thousands of civilians to evacuate. John's preferred loadout includes a customized Liberator assault rifle and the experimental P-7 "Punisher" sidearm, both gifts from Super Earth's top weapons engineers.

      Despite his legendary status, John remains a man of few words, letting his actions speak louder than any speech could. His unwavering loyalty to Super Earth and his fellow Helldivers is matched only by his hatred for the enemies of democracy. Rookies whisper that John Helldiver doesn't sleep; he just waits for the next drop.

      (Word count: 182)
      ]]>
    </example>
    <example>
      <![CDATA[
      Example prompt_schema.xsd:
      <?xml version="1.0" encoding="UTF-8"?>
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="prompt">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="context" type="xs:string"/>
              <xs:element name="instruction" type="xs:string"/>
              <xs:element name="example" type="xs:string"/>
              <xs:element name="output_format" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      ]]>
    </example>
    <example>
      <![CDATA[
      Example Justfile:
      lint:
        xmllint --schema prompt_schema.xsd prompt.xml --noout
      ]]>
    </example>
  </examples>

  <reasoning>
    <point>Hierarchical structure allows for easy navigation and scalability.</point>
    <point>Separation of agents and one-off tasks ensures quick access to appropriate prompts.</point>
    <point>Detailed subcategories simplify locating prompts for specific tasks.</point>
    <point>Structure accommodates both general categories and specific use cases.</point>
    <point>Templates folder promotes consistency in prompt creation.</point>
    <point>README file provides clear documentation for all users.</point>
  </reasoning>

  <prompt_engineering_standards>
    <output_format>
      <standard>Specify that responses should only include changed or new code snippets, not entire functions or files.</standard>
      <standard>Use diff-like format to clearly indicate additions and deletions when appropriate.</standard>
    </output_format>
    <xml_structure>
      <standard>Use clear, descriptive tag names that are self-explanatory (e.g., &lt;context&gt;, &lt;task&gt;, &lt;examples&gt;).</standard>
      <standard>Organize content hierarchically with proper nesting of tags.</standard>
      <standard>Maintain consistent tag usage throughout prompts.</standard>
      <standard>Use line breaks and indentation for readability.</standard>
    </xml_structure>

    <component_organization>
      <standard>Separate different components with distinct tags.</standard>
      <standard>Use &lt;context&gt; for background information.</standard>
      <standard>Use &lt;instructions&gt; for specific directives.</standard>
      <standard>Use &lt;examples&gt; for sample inputs and outputs.</standard>
      <standard>Use &lt;output_format&gt; to define response structure.</standard>
      <standard>Use &lt;reflection&gt; for AI thinking steps.</standard>
    </component_organization>

    <best_practices>
      <standard>Include only necessary information in each tag.</standard>
      <standard>Number or bullet point instructions for clarity.</standard>
      <standard>Use variables with descriptive names (e.g., &lt;variable_name&gt;{{value}}&lt;/variable_name&gt;).</standard>
      <standard>Combine XML tags with other prompt engineering techniques when appropriate.</standard>
      <standard>Include validation using XSD schemas for prompt structure.</standard>
    </best_practices>

    <reflection_patterns>
      <standard>Always use chain-of-thought prompting by default to improve accuracy and coherence.</standard>
      <standard>Include explicit thinking steps in prompts using &lt;thinking&gt; and &lt;answer&gt; tags.</standard>
      <standard>Break down complex tasks into clear steps.</standard>
      <standard>Use structured thinking for tasks involving:</standard>
      <list>
        <item>Complex math or logic</item>
        <item>Multi-step analysis</item>
        <item>Writing complex documents</item>
        <item>Decisions with multiple factors</item>
        <item>Research and investigation</item>
      </list>
      <example>
        <![CDATA[
        <prompt>
          <context>Analyzing a complex codebase for refactoring.</context>
          <thinking>
            1. First, I'll identify the main components and their relationships
            2. Then, I'll analyze each component for SOLID principles
            3. Finally, I'll propose specific refactoring steps
          </thinking>
          <answer>
            Provide a structured analysis with:
            - Component relationships
            - SOLID violations
            - Refactoring proposals
          </answer>
        </prompt>
        ]]>
      </example>
    </reflection_patterns>

    <variable_handling>
      <standard>Use descriptive variable names in XML tags.</standard>
      <standard>Include type hints and validation rules for variables.</standard>
      <standard>Document expected formats and constraints.</standard>
      <example>
        <![CDATA[
        <prompt>
          <variables>
            <code_snippet type="python" max_length="500">{{code_to_review}}</code_snippet>
            <style_guide type="url">{{style_guide_link}}</style_guide>
            <severity_level type="enum" values="high,medium,low">{{severity}}</severity_level>
          </variables>
          <task>Review the code according to the style guide at the specified severity level.</task>
        </prompt>
        ]]>
      </example>
    </variable_handling>
  </prompt_engineering_standards>

  <marimo_standards>
    <imports>
      <standard>All external imports must be in the first cell of marimo_* files.</standard>
      <standard>First cell should import and return all modules needed by subsequent cells.</standard>
      <standard>Use importlib.reload() for development modules that may change.</standard>
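      <example>
        <![CDATA[
        # First-cell import pattern for a marimo_* notebook (a sketch; cell contents are illustrative)
        import marimo

        app = marimo.App()

        @app.cell
        def __():
            import importlib

            import marimo as mo

            import prompt_library_module
            importlib.reload(prompt_library_module)  # pick up edits to the development module
            return mo, prompt_library_module

        @app.cell
        def __(mo):
            slider = mo.ui.slider(1, 10, value=3, label="Iterations")
            return (slider,)
        ]]>
      </example>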
    </imports>

    <cell_definition>
      <standard>All cells must be decorated with @app.cell.</standard>
      <standard>Always use explicit tuple returns, even for single values.</standard>
      <standard>No function definitions allowed in marimo notebook files (prefix: marimo_*).</standard>
      <standard>All functions must be imported from prompt_library_module.py.</standard>
      <standard>No error handling in notebook cells - handle errors in imported functions.</standard>
      <standard>Cell parameters should only include variables actually used in the cell.</standard>
      <standard>Skip type annotations and docstrings for marimo notebook cells.</standard>
    </cell_definition>

    <state_management>
      <standard>All cell dependencies must be explicitly declared as parameters.</standard>
      <standard>Avoid mutating shared state between cells.</standard>
      <standard>Use proper typing for all state variables.</standard>
    </state_management>

    <ui_components>
      <standard>UI components should be created and modified through the reactive system.</standard>
      <standard>Use proper typing for all UI components.</standard>
      <standard>Include descriptive labels and help text.</standard>
    </ui_components>

    <error_handling>
      <standard>Use proper error boundaries and guards in each cell.</standard>
      <standard>Provide descriptive error messages with context.</standard>
      <standard>Use mo.stop() for validation guards.</standard>
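      <example>
        <![CDATA[
        # Validation guard sketch using mo.stop() (assumes `mo` and `dataframe` come from earlier cells)
        @app.cell
        def __(mo, dataframe):
            mo.stop(dataframe is None, mo.md("**Load data above before continuing.**"))
            row_count = len(dataframe)
            return (row_count,)
        ]]>
      </example>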
    </error_handling>

    <cell_dependencies>
      <standard>All cell dependencies must be explicitly declared.</standard>
      <standard>Avoid circular dependencies between cells.</standard>
      <standard>Use proper ordering of cells based on dependencies.</standard>
    </cell_dependencies>

    <ui_styling>
      <standard>Use consistent styling objects for UI components.</standard>
      <standard>Follow Material Design principles for component styling.</standard>
      <standard>Maintain responsive design patterns.</standard>
    </ui_styling>

    <python_differences>
      <standard>Understand key differences from regular Python code.</standard>
      <standard>Follow Marimo-specific patterns for state and reactivity.</standard>
    </python_differences>

    <reactive_patterns>
      <standard>Use reactive programming patterns for UI and state updates.</standard>
      <standard>Maintain unidirectional data flow.</standard>
      <standard>Handle side effects properly in reactive contexts.</standard>
    </reactive_patterns>
  </marimo_standards>

  <cli_standards>
    <standard>Use AsyncTyperImproved for the main APP instance to support both sync and async commands.</standard>
    <standard>Initialize the main APP with: APP = AsyncTyperImproved()</standard>
    <standard>Load subcommands dynamically using the load_commands() function.</standard>
    <standard>Place all subcommands in the subcommands directory with _cmd.py suffix.</standard>
    <standard>Each subcommand module should define its own APP instance.</standard>
    <standard>Use proper type annotations for all command parameters and return values.</standard>
    <standard>Include descriptive docstrings for all commands following Google style.</standard>
    <standard>Use Annotated for command parameters to provide help text and options.</standard>
    <standard>Prefix async command functions with 'async' or 'aio' for clarity.</standard>
    <standard>Use proper error handling and logging in command functions.</standard>
    <examples>
      <example>
        <![CDATA[
        # Main APP initialization
        APP = AsyncTyperImproved()

        # Sync command example
        @APP.command()
        def version(
            verbose: Annotated[bool, typer.Option("--verbose", "-v", help="Show detailed version info")] = False,
        ) -> None:
            """Display version information."""
            rich.print(f"democracy_exe version: {democracy_exe.__version__}")
            if verbose:
                rich.print(f"Python version: {sys.version}")

        # Async command example
        @APP.command()
        async def run_bot() -> None:
            """Run the Discord bot."""
            logger.info("Running bot")
            try:
                async with DemocracyBot() as bot:
                    await bot.start()
            except Exception as ex:
                logger.exception("Bot error occurred")
                if aiosettings.dev_mode:
                    bpdb.pm()

        # Subcommand module example (dummy_cmd.py)
        APP = AsyncTyperImproved(help="dummy command")

        @APP.command("dummy")
        def cli_dummy_cmd(prompt: str) -> str:
            """Generate a new module.

            Args:
                prompt: The input prompt

            Returns:
                str: The generated output
            """
            return f"dummy cmd: {prompt}"

        @APP.command()
        async def aio_cli_dummy_cmd() -> str:
            """Returns information asynchronously."""
            await asyncio.sleep(1)
            return "slept for 1 second"
        ]]>
      </example>
    </examples>
  </cli_standards>

  <discord_testing_standards>
    <test_configuration>
      <standard>Add required linter disables for Discord.py files</standard>
      <standard>Configure proper intents for test environment</standard>
      <standard>Set up test guilds with appropriate permissions</standard>
      <standard>Configure logging for test environment</standard>
      <standard>Use consistent test data across test suite</standard>

      <file_setup>
        <standard>Add necessary linter disables at the top of test files</standard>
        <example>
          <![CDATA[
          # pylint: disable=no-member
          # pylint: disable=possibly-used-before-assignment
          # pyright: reportImportCycles=false
          # mypy: disable-error-code="index"
          # mypy: disable-error-code="no-redef"
          # pyright: reportAttributeAccessIssue=false

          import pytest
          import discord
          import discord.ext.test as dpytest
          from discord.ext import commands
          from typing import AsyncGenerator, Generator
          ]]>
        </example>
      </file_setup>

      <bot_configuration>
        <standard>Set up bot with all required intents for testing</standard>
        <standard>Configure proper command prefix and settings</standard>
        <standard>Initialize bot with test-specific settings</standard>
        <example>
          <![CDATA[
          @pytest.fixture
          async def bot() -> AsyncGenerator[commands.Bot, None]:
              """Create a DemocracyBot instance for testing.

              Yields:
                  commands.Bot: DemocracyBot instance with test configuration
              """
              # Configure intents
              intents = discord.Intents.default()
              intents.members = True
              intents.message_content = True
              intents.messages = True
              intents.guilds = True

              # Create DemocracyBot with test configuration
              from democracy_exe.chatbot.bot import DemocracyBot
              bot = DemocracyBot(
                  command_prefix="?",
                  intents=intents,
                  description="Test DemocracyBot instance"
              )

              # Add test-specific error handling
              @bot.event
              async def on_command_error(ctx: commands.Context, error: Exception) -> None:
                  """Handle command errors in test environment."""
                  raise error  # Re-raise for pytest to catch

              # Setup and cleanup
              await bot._async_setup_hook()  # Required for proper initialization
              dpytest.configure(bot)
              yield bot
              await dpytest.empty_queue()

          @pytest.fixture
          async def test_guild(bot: DemocracyBot) -> AsyncGenerator[discord.Guild, None]:
              """Create a test guild.

              Args:
                  bot: DemocracyBot instance

              Yields:
                  Test guild instance
              """
              guild = await dpytest.driver.create_guild()
              await dpytest.driver.configure_guild(guild)
              yield guild
              await dpytest.empty_queue()

          @pytest.fixture
          async def test_channel(test_guild: discord.Guild) -> AsyncGenerator[discord.TextChannel, None]:
              """Create a test channel.

              Args:
                  test_guild: Test guild fixture

              Yields:
                  Test channel instance
              """
              channel = await dpytest.driver.create_text_channel(test_guild)
              yield channel
              await dpytest.empty_queue()
          ]]>
        </example>
      </bot_configuration>

      <test_data_management>
        <standard>Use consistent test data across test suite</standard>
        <standard>Create fixtures for common test data</standard>
        <standard>Clean up test data after each test</standard>
        <example>
          <![CDATA[
          @pytest.fixture
          def test_data() -> dict:
              """Provide consistent test data for bot tests.

              Returns:
                  dict: Test data dictionary
              """
              return {
                  "guild_name": "Test Guild",
                  "channel_name": "test-channel",
                  "user_name": "TestUser",
                  "role_name": "TestRole",
                  "command_prefix": "?",
                  "test_message": "Hello, bot!",
                  "test_embed": discord.Embed(
                      title="Test Embed",
                      description="Test description"
                  )
              }

          @pytest.fixture(autouse=True)
          async def cleanup_test_data() -> AsyncGenerator[None, None]:
              """Clean up test data after each test."""
              yield
              await dpytest.empty_queue()
              # Reset any modified bot state
              bot = dpytest.get_config().client
              bot.clear()
          ]]>
        </example>
      </test_data_management>

      <logging_setup>
        <standard>Configure logging for test environment using structlog</standard>
        <standard>Use structlog's capture_logs context manager for testing log output</standard>
        <standard>Never use pytest's caplog fixture for structlog message verification</standard>
        <example>
          <![CDATA[
          @pytest.fixture(autouse=True)
          def setup_logging() -> None:
              """Configure structlog for test environment."""
              import logging

              import structlog

              structlog.configure(
                  processors=[
                      structlog.contextvars.merge_contextvars,
                      structlog.processors.add_log_level,
                      structlog.processors.TimeStamper(fmt="iso"),
                      structlog.processors.StackInfoRenderer(),
                      structlog.dev.ConsoleRenderer(),
                  ],
                  wrapper_class=structlog.make_filtering_bound_logger(logging.DEBUG),
                  context_class=dict,
                  logger_factory=structlog.PrintLoggerFactory(),
                  cache_logger_on_first_use=False,  # let capture_logs() swap processors per test
              )

          @pytest.mark.asyncio
          async def test_example_event(bot: DemocracyBot) -> None:
              """Test example event logging.

              Args:
                  bot: The Discord bot instance
              """
              with structlog.testing.capture_logs() as captured:
                  # Perform the action that generates logs
                  await some_action()

                  # Check if the log message exists in the captured structlog events
                  assert any(
                      log.get("event") == "Expected Event Message" for log in captured
                  ), "Expected 'Expected Event Message' not found in logs"

                  # For multiple log checks, use multiple assertions
                  assert any(
                      log.get("event") == "Another Expected Event" for log in captured
                  ), "Expected 'Another Expected Event' not found in logs"

                  # For dynamic messages with variable content, use startswith()
                  assert any(
                      log.get("event").startswith("File created at:") for log in captured
                  ), "Expected file creation message not found in logs"

                  # For messages containing variable paths or IDs, use partial matching
                  assert any(
                      "user_123" in log.get("event") for log in captured
                  ), "Expected user ID in log message not found"
          ]]>
        </example>
      </logging_setup>
    </test_configuration>

    <message_testing_patterns>
      <standard>Use dpytest.message() to simulate user messages</standard>
      <standard>Use dpytest.verify() to check bot responses</standard>
      <standard>Always verify both message content and message type (text, embed, etc.)</standard>
      <standard>Clear message queues between tests to prevent cross-test contamination</standard>
      <standard>Test both direct messages and guild messages separately</standard>
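      <!-- Added sketch for the "direct messages vs guild messages" standard above.
           Assumes a hypothetical "?greet" command; the DM half relies on dpytest's
           message() accepting an explicit channel object and on member.create_dm()
           being backed by the fake HTTP layer, which may vary between dpytest versions. -->
      <example>
        <![CDATA[
        @pytest.mark.asyncio
        async def test_greet_in_guild_channel():
            """Guild message: uses dpytest's default test guild channel."""
            await dpytest.message("?greet")
            assert dpytest.verify().message().contains().content("Hello")

        @pytest.mark.asyncio
        async def test_greet_in_dm():
            """Direct message: send through a DM channel with the test member."""
            member = dpytest.get_config().members[0]
            dm_channel = await member.create_dm()  # assumption: wired to dpytest's fake backend
            await dpytest.message("?greet", channel=dm_channel)
            assert dpytest.verify().message().contains().content("Hello")
        ]]>
      </example>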

      <message_verification>
        <standard>Use appropriate verification method based on expected response type</standard>
        <example>
          <![CDATA[
          # Text message verification
          await dpytest.message("?command")
          assert dpytest.verify().message().content("Expected response")

          # Embed verification
          await dpytest.message("?embed_command")
          assert dpytest.verify().message().embed(expected_embed)

          # Multiple message verification
          await dpytest.message("?multi_response")
          assert dpytest.verify().message().content("First response")
          assert dpytest.verify().message().content("Second response")

          # Partial content verification
          await dpytest.message("?partial")
          assert dpytest.verify().message().contains().content("partial match")
          ]]>
        </example>
      </message_verification>

      <message_queue_management>
        <standard>Clear message queue before each test using dpytest.empty_queue()</standard>
        <standard>Use verify().nothing() to ensure no unexpected messages</standard>
        <standard>Handle message queues in async context</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_no_response():
              await dpytest.message("?invalid_command")
              assert dpytest.verify().message().nothing()

          @pytest.mark.asyncio
          async def test_message_cleanup():
              # Setup
              await dpytest.empty_queue()

              # Test
              await dpytest.message("?command")
              assert dpytest.verify().message().content("Response")

              # Cleanup
              await dpytest.empty_queue()
          ]]>
        </example>
      </message_queue_management>

      <error_handling>
        <standard>Test both successful and error scenarios</standard>
        <standard>Verify error messages are properly formatted</standard>
        <standard>Test permission-based message handling</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_error_handling():
              # Test invalid command
              await dpytest.message("?invalid")
              assert dpytest.verify().message().embed(error_embed)

              # Test permission error
              await dpytest.message("?admin_only")
              assert dpytest.verify().message().contains().content("You don't have permission")

              # Test rate limiting
              for _ in range(5):  # Exceed rate limit
                  await dpytest.message("?rate_limited")
              assert dpytest.verify().message().contains().content("Rate limit exceeded")
          ]]>
        </example>
      </error_handling>

      <best_practices>
        <standard>Group related message tests together in test classes</standard>
        <standard>Test message handling interactions and side effects</standard>
        <standard>Mock external services invoked by message handlers</standard>
        <standard>Test message handling error states and recovery</standard>
        <standard>Test message output formatting and localization</standard>
      </best_practices>
    </message_testing_patterns>

    <command_testing_patterns>
      <standard>Test both sync and async commands separately</standard>
      <standard>Test command aliases and different prefix variations</standard>
      <standard>Test command argument parsing and validation</standard>
      <standard>Test command cooldowns and rate limiting</standard>
      <standard>Test command permissions and role-based access</standard>
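      <!-- Added sketch for testing prefix variations. Assumes the test bot is
           constructed with command_prefix=["?", "!"]; adjust to DemocracyBot's
           actual prefix configuration. -->
      <example>
        <![CDATA[
        @pytest.mark.asyncio
        async def test_prefix_variations(bot):
            """The same command should respond under every configured prefix."""
            for prefix in ("?", "!"):
                await dpytest.message(f"{prefix}ping")
                assert dpytest.verify().message().contains().content("pong")
        ]]>
      </example>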

      <command_verification>
        <standard>Verify command registration and availability</standard>
        <standard>Test command help and documentation</standard>
        <standard>Verify command responses in different contexts (DM vs Guild)</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_registration(bot):
              """Test command registration and help documentation."""
              # Test command exists
              assert "ping" in [cmd.name for cmd in bot.commands]

              # Test help documentation
              await dpytest.message("?help ping")
              assert dpytest.verify().message().contains().content("Returns the ping of the bot")

              # Test command aliases
              cmd = bot.get_command("ping")
              assert cmd.aliases == ["p", "latency"]
          ]]>
        </example>
      </command_verification>

      <argument_testing>
        <standard>Test required vs optional arguments</standard>
        <standard>Test argument type conversion and validation</standard>
        <standard>Test argument error handling</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_arguments():
              # Test missing required argument
              await dpytest.message("?echo")
              assert dpytest.verify().message().contains().content("Missing required argument")

              # Test invalid argument type
              await dpytest.message("?repeat abc 3")
              assert dpytest.verify().message().contains().content("Converting to integer failed")

              # Test valid arguments
              await dpytest.message("?echo Hello World")
              assert dpytest.verify().message().content("Hello World")

              # Test optional arguments with defaults
              await dpytest.message("?repeat Hello")
              assert dpytest.verify().message().content("Hello")  # Uses default count=1
          ]]>
        </example>
      </argument_testing>

      <permission_testing>
        <standard>Test commands with different permission levels</standard>
        <standard>Test owner-only and admin-only commands</standard>
        <standard>Test role-based command access</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_permissions(bot, monkeypatch):
              # Test admin-only command without elevated permissions
              await dpytest.message("?admin_command")
              assert dpytest.verify().message().contains().content("You must have administrator permissions")

              # Member.guild_permissions is a read-only property in discord.py,
              # so patch it on the class for this test instead of assigning to it
              member = dpytest.get_config().members[0]
              monkeypatch.setattr(
                  type(member),
                  "guild_permissions",
                  property(lambda self: discord.Permissions.all()),
              )
              await dpytest.message("?admin_command")
              assert dpytest.verify().message().content("Admin command executed")

              # Test owner-only command (still denied for a non-owner member)
              await dpytest.message("?owner_command")
              assert dpytest.verify().message().contains().content("This command is owner-only")
          ]]>
        </example>
      </permission_testing>

      <cooldown_testing>
        <standard>Test command cooldown implementation</standard>
        <standard>Test cooldown bypass for privileged users</standard>
        <standard>Test cooldown error messages</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_cooldowns(monkeypatch):
              # First command use
              await dpytest.message("?cooldown_cmd")
              assert dpytest.verify().message().content("Command executed")

              # Second command use (should be on cooldown)
              await dpytest.message("?cooldown_cmd")
              assert dpytest.verify().message().contains().content("is on cooldown")

              # Test admin bypass (guild_permissions is read-only, so patch the
              # property rather than assigning to it)
              member = dpytest.get_config().members[0]
              monkeypatch.setattr(
                  type(member),
                  "guild_permissions",
                  property(lambda self: discord.Permissions.all()),
              )
              await dpytest.message("?cooldown_cmd")
              assert dpytest.verify().message().content("Command executed")
          ]]>
        </example>
      </cooldown_testing>

      <subcommand_testing>
        <standard>Test subcommand registration and hierarchy</standard>
        <standard>Test subcommand argument parsing</standard>
        <standard>Test subcommand-specific permissions</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_subcommands():
              # Test parent command
              await dpytest.message("?settings")
              assert dpytest.verify().message().contains().content("Available settings")

              # Test subcommand
              await dpytest.message("?settings prefix ?")
              assert dpytest.verify().message().content("Prefix updated to ?")

              # Test nested subcommand
              await dpytest.message("?settings role add @Role")
              assert dpytest.verify().message().content("Role added")
          ]]>
        </example>
      </subcommand_testing>

      <best_practices>
        <standard>Group related command tests in test classes</standard>
        <standard>Test command interactions and side effects</standard>
        <standard>Mock external services used by commands</standard>
        <standard>Test command error states and recovery</standard>
        <standard>Test command output formatting and localization</standard>
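        <!-- Added sketch for the "mock external services" best practice.
             "democracy_exe.services.weather_api.fetch_forecast" and the "?weather"
             command are assumed names; substitute the real module paths and
             commands used by the bot. -->
        <example>
          <![CDATA[
          from unittest.mock import AsyncMock

          @pytest.mark.asyncio
          async def test_weather_command_mocks_external_api(bot, monkeypatch):
              """Mock the external API so the test never leaves the process."""
              fake_fetch = AsyncMock(return_value={"city": "Austin", "temp_f": 72})
              monkeypatch.setattr(
                  "democracy_exe.services.weather_api.fetch_forecast", fake_fetch
              )

              await dpytest.message("?weather Austin")

              fake_fetch.assert_awaited_once_with("Austin")
              assert dpytest.verify().message().contains().content("72")
          ]]>
        </example>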
      </best_practices>
    </command_testing_patterns>

    <event_testing_patterns>
      <standard>Test both synchronous and asynchronous event handlers</standard>
      <standard>Test event registration and deregistration</standard>
      <standard>Test event propagation and cancellation</standard>
      <standard>Clean up event-generated files after testing</standard>
      <standard>Test event payload handling and validation</standard>
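      <!-- Added sketch for the "event registration and deregistration" standard.
           Uses discord.py's bot.add_listener()/remove_listener(); assumes dpytest
           dispatches on_message to dynamically added listeners and that
           run_all_events() is available to flush pending dispatch tasks. -->
      <example>
        <![CDATA[
        @pytest.mark.asyncio
        async def test_listener_registration(bot):
            """Register a listener, confirm it fires, then remove it."""
            events_seen: list[str] = []

            async def on_message(message: discord.Message) -> None:
                events_seen.append(message.content)

            bot.add_listener(on_message, "on_message")
            await dpytest.message("ping listener")
            await dpytest.run_all_events()  # flush dispatched listener tasks
            assert "ping listener" in events_seen

            bot.remove_listener(on_message, "on_message")
            events_seen.clear()
            await dpytest.message("ping again")
            await dpytest.run_all_events()
            assert events_seen == []
        ]]>
      </example>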

      <session_cleanup>
        <standard>Clean up temporary files created during testing</standard>
        <standard>Use pytest_sessionfinish for global cleanup</standard>
        <example>
          <![CDATA[
          import glob
          import os

          def pytest_sessionfinish(session: pytest.Session, exitstatus: int) -> None:
              """Code to execute after all tests.

              Args:
                  session: The pytest session object
                  exitstatus: The exit status code
              """
              # Clean up attachment files created by dpytest
              print("\n-------------------------\nClean dpytest_*.dat files")
              for file_path in glob.glob("./dpytest_*.dat"):
                  try:
                      os.remove(file_path)
                  except OSError:
                      print(f"Error while deleting file: {file_path}")
          ]]>
        </example>
      </session_cleanup>

      <event_handlers>
        <standard>Test event handler registration and execution</standard>
        <standard>Verify event handler receives correct event data</standard>
        <standard>Test multiple handlers for same event</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_on_message_event(bot):
              """Test message event handling (assumes the bot echoes messages back)."""
              # Send test message
              test_message = "Hello bot!"
              await dpytest.message(test_message)

              # Inspect the bot's response without consuming it from the queue
              assert dpytest.get_message(peek=True).content == test_message

              # Verify the echoed response through dpytest's verification API
              assert dpytest.verify().message().content(test_message)

          @pytest.mark.asyncio
          async def test_on_member_join(bot):
              """Test member join event handling."""
              # Add test member
              test_member = await dpytest.member_join()

              # Verify welcome message
              assert dpytest.verify().message().contains().content("Welcome")

              # Verify member in guild
              guild = dpytest.get_config().guilds[0]
              assert test_member in guild.members
          ]]>
        </example>
      </event_handlers>

      <attachment_testing>
        <standard>Test file upload and attachment handling</standard>
        <standard>Verify attachment metadata and content</standard>
        <standard>Clean up attachment files after tests</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_attachment_handling(bot, tmp_path):
              """Test handling of message attachments."""
              # dpytest's message() takes attachment file paths rather than
              # discord.File objects (assumption: the attachments parameter
              # accepts pathlib.Path entries in the installed dpytest version)
              test_file = tmp_path / "test.txt"
              test_file.write_bytes(b"Test file content")
              await dpytest.message("?upload", attachments=[test_file])

              # Verify bot processed attachment
              assert dpytest.verify().message().contains().content("File uploaded")

              # Verify file cleanup
              await dpytest.empty_queue()
              # Attachment files (dpytest_*.dat) will be cleaned up by pytest_sessionfinish
          ]]>
        </example>
      </attachment_testing>

      <error_handling>
        <standard>Test error handling in event handlers</standard>
        <standard>Verify error events are properly caught and logged</standard>
        <standard>Test recovery from event handling failures</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_event_error_handling(bot):
              """Test error handling in events."""
              # Trigger the error condition while capturing structlog output
              # (caplog must not be used for structlog verification)
              with structlog.testing.capture_logs() as captured:
                  await dpytest.message("?error_trigger")

              # Verify the error was logged
              assert any(
                  "Error in event handler" in log.get("event", "") for log in captured
              ), "Expected 'Error in event handler' not found in logs"

              # Verify bot remains operational
              await dpytest.message("?ping")
              assert dpytest.verify().message().contains().content("pong")
        </example>
      </error_handling>

      <best_practices>
        <standard>Group related event tests by functionality</standard>
        <standard>Test both success and failure scenarios</standard>
        <standard>Clean up resources after event testing</standard>
        <standard>Mock external services used in event handlers</standard>
        <standard>Test event handler order and priority</standard>
      </best_practices>
    </event_testing_patterns>

    <test_state_management>
      <standard>Use a global test state flag to modify behavior in test environments</standard>
      <standard>Handle permissions differently when testing vs production</standard>
      <standard>Allow test bypass of certain checks when appropriate</standard>
      <standard>Add dpytest-specific state flags for Discord testing</standard>
      <standard>Use test state to control bot behavior in test environment</standard>
      <example>
        <![CDATA[
        # Global test state
        is_dpytest = False  # Default to production mode
        is_test_environment = False

        def is_owner(ctx: commands.Context) -> bool:
            """Check if user is owner or in test environment.

            Args:
                ctx: The command context

            Returns:
                bool: True if user is owner or in test environment
            """
            if is_dpytest or is_test_environment:
                return True
            return ctx.author.id == bot.owner_id

        def check_permissions(ctx: commands.Context, *perms: str) -> bool:
            """Check if user has required permissions or is in test environment.

            Args:
                ctx: The command context
                *perms: Required permissions

            Returns:
                bool: True if user has permissions or in test environment
            """
            if is_dpytest or is_test_environment:
                return True
            return all(getattr(ctx.channel.permissions_for(ctx.author), perm, False) for perm in perms)

        @pytest.fixture(autouse=True)
        def setup_test_state() -> Generator[None, None, None]:
            """Setup test state for all tests."""
            global is_dpytest, is_test_environment
            is_dpytest = True
            is_test_environment = True
            yield
            is_dpytest = False
            is_test_environment = False
        ]]>
      </example>
    </test_state_management>
  </discord_testing_standards>
</python_standards>
</cursorrules>