Awesome Cursor Rules Collection


// Frontend Development Standards

// React Patterns
- Component Architecture:
  * Use functional components exclusively
  * Implement custom hooks for logic reuse
  * Keep components focused and small
  * Use TypeScript for type safety
  * Follow composition over inheritance
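
// Custom Hook Example - a minimal sketch of the "custom hooks for logic reuse"
// rule above (the hook name is illustrative; assumes useState/useEffect are
// imported from React)
function useDebouncedValue<T>(value: T, delayMs = 300): T {
  const [debounced, setDebounced] = useState(value)
  useEffect(() => {
    // Restart the timer whenever the source value changes
    const id = setTimeout(() => setDebounced(value), delayMs)
    return () => clearTimeout(id)
  }, [value, delayMs])
  return debounced
}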

// Component Organization
export const ExampleComponent: React.FC<Props> = ({
  // Props destructuring with defaults
  prop1 = defaultValue,
  prop2,
  children
}) => {
  // 1. Hooks
  const [state, setState] = useState<StateType>(initialState)
  const { data, isLoading, error } = useQuery({ queryKey, queryFn })
  const theme = useTheme()
  
  // 2. Derived state
  const computedValue = useMemo(() => {
    // Complex computations
  }, [dependencies])
  
  // 3. Effects
  useEffect(() => {
    // Side effects
    return () => {
      // Cleanup
    }
  }, [dependencies])
  
  // 4. Event handlers
  const handleEvent = useCallback((event: React.SyntheticEvent) => {
    // Event handling logic
  }, [dependencies])
  
  // 5. Render methods
  const renderItem = (item: ItemType) => (
    <div key={item.id}>
      {/* Item rendering */}
    </div>
  )
  
  // 6. Main render
  if (isLoading) return <LoadingSpinner />
  if (error) return <ErrorBoundary error={error} />
  
  return (
    <div>
      {/* Component JSX */}
    </div>
  )
}

// Data Management
- State Management:
  * Use React Query for server state
  * Implement context for shared state
  * Keep component state local
  * Use reducers for complex state
  * Implement proper caching
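
// Reducer Example - a minimal sketch of the "reducers for complex state" rule
// (the state shape and action names are illustrative)
type CartState = { items: string[]; total: number }
type CartAction =
  | { type: 'add'; item: string; price: number }
  | { type: 'clear' }

function cartReducer(state: CartState, action: CartAction): CartState {
  switch (action.type) {
    case 'add':
      return { items: [...state.items, action.item], total: state.total + action.price }
    case 'clear':
      return { items: [], total: 0 }
  }
}
// Usage: const [cart, dispatch] = useReducer(cartReducer, { items: [], total: 0 })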

// Component Design
- Props Interface:
  * Define clear prop interfaces
  * Use proper TypeScript types
  * Document complex props
  * Implement prop validation
  * Use proper defaults
- Styling:
  * Use CSS Modules or styled-components
  * Implement design tokens
  * Follow mobile-first approach
  * Use CSS variables
  * Maintain consistent spacing
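
// Props Interface Example - a small sketch of the documented-props guidance
// above (component and prop names are illustrative)
interface ButtonProps {
  /** Visual style variant */
  variant?: 'primary' | 'secondary'
  /** Disables interaction and applies muted styling */
  disabled?: boolean
  /** Called with the click event when the button is activated */
  onClick: (event: React.MouseEvent<HTMLButtonElement>) => void
  children: React.ReactNode
}

export const Button: React.FC<ButtonProps> = ({
  variant = 'primary',
  disabled = false,
  onClick,
  children
}) => (
  <button className={variant} disabled={disabled} onClick={onClick}>
    {children}
  </button>
)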

// Performance
- Optimization:
  * Implement proper memoization
  * Use windowing for long lists
  * Optimize re-renders
  * Implement code splitting
  * Use proper Suspense boundaries
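
// Memoization Example - a minimal sketch of the optimization rules above
// (Row and items are illustrative)
const Row = React.memo(({ label }: { label: string }) => <li>{label}</li>)

const SortedList = ({ items }: { items: string[] }) => {
  // Recompute only when items actually change; memoized rows skip
  // re-rendering when unrelated parent state updates
  const sorted = useMemo(() => [...items].sort(), [items])
  return <ul>{sorted.map((label) => <Row key={label} label={label} />)}</ul>
}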

// Forms and Validation
- Form Handling:
  * Use React Hook Form
  * Implement proper validation
  * Handle async validation
  * Show clear error states
  * Maintain proper UX
- Input Components:
  * Implement controlled inputs
  * Handle proper keyboard events
  * Support accessibility
  * Show validation states
  * Handle proper focus management
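
// Form Example - a minimal sketch with React Hook Form (field names are
// illustrative; assumes useForm is imported from react-hook-form)
type LoginValues = { email: string; password: string }

const LoginForm = () => {
  const { register, handleSubmit, formState: { errors, isSubmitting } } = useForm<LoginValues>()

  return (
    <form onSubmit={handleSubmit(async (values) => { /* submit values */ })}>
      <input {...register('email', { required: 'Email is required' })} aria-invalid={!!errors.email} />
      {errors.email && <span role="alert">{errors.email.message}</span>}
      <input type="password" {...register('password', { required: true })} />
      <button type="submit" disabled={isSubmitting}>Sign in</button>
    </form>
  )
}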

// Testing
- Component Testing:
  * Use React Testing Library
  * Test user interactions
  * Test accessibility
  * Mock external dependencies
  * Test error states
- Integration:
  * Test component integration
  * Test data flow
  * Test side effects
  * Test routing
  * Test state management 
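
// Testing Example - a minimal React Testing Library sketch (the Counter
// component and its copy are illustrative; assumes jest/vitest globals and
// @testing-library/jest-dom matchers)
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'

test('increments the counter on click', async () => {
  const user = userEvent.setup()
  render(<Counter />)

  await user.click(screen.getByRole('button', { name: /increment/i }))

  expect(screen.getByText('1')).toBeInTheDocument()
})
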
Tags: react, styled-components, typescript
Source: TMHSDigital/CursorRulesFiles


TypeScript
# AI Assistant Rules

## Project Context

Building a marketplace for selling airsoft-related items for the French market

## Tech Stack

- PNPM (always use this)
- Next.js app router
- TailwindCSS
- Shadcn UI
- PocketBase

## Next.js Guidance

- Use Next.js app router for file-based routing
- Prefer server components over client components if possible
- If not possible, use client components with TanStack Query combined with PocketBase for data fetching
- Implement loading.tsx for loading states
- Use error.tsx for error handling
- NEVER use server actions to fetch data.
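
A minimal sketch of a server component fetching data with the server client (the collection name and fields are illustrative):

```tsx
// app/listings/page.tsx - a server component: data is fetched directly, not via a server action
import { createServerClient } from '$/utils/pocketbase/server';

export default async function ListingsPage() {
  const pb = await createServerClient();
  const listings = await pb.collection('listings').getList(1, 20, { sort: '-created' });

  return (
    <ul>
      {listings.items.map((item) => (
        <li key={item.id}>{item.title}</li>
      ))}
    </ul>
  );
}
```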

## TailwindCSS Usage

- Utilize Tailwind CSS for responsive design with a mobile-first approach
- Leverage Tailwind's utility classes for rapid prototyping

## Shadcn UI Integration

- Use Shadcn UI components for consistent and accessible UI elements
- Integrate Shadcn and Tailwind for a cohesive styling approach
- The `cn` function is imported from `$/utils/cn`

## Form

- When building forms, we use `@ts-react/form`
- Always submit forms via a server action: create a co-located `actions.ts` file where you use zsa to define the actions.

### ts-react-form example implementation

```ts
import { createTsForm } from '@ts-react/form';
import { z } from 'zod';

// create the mapping
const mapping = [
  [z.string(), TextField],
  [z.boolean(), CheckBoxField],
  [z.number(), NumberField],
] as const; // 👈 `as const` is necessary

// A typesafe React component
const MyForm = createTsForm(mapping);
```

```tsx
const SignUpSchema = z.object({
  email: z.string().email('Enter a real email please.'), // renders TextField
  password: z.string(),
  address: z.string(),
  favoriteColor: z.enum(['blue', 'red', 'purple']), // renders DropDownSelect and passes the enum values
  isOver18: z.boolean(), // renders CheckBoxField
});

function MyPage() {
  function onSubmit(data: z.infer<typeof SignUpSchema>) {
    // gets typesafe data when form is submitted
  }

  return (
    <MyForm
      schema={SignUpSchema}
      onSubmit={onSubmit}
      renderAfter={() => <button type="submit">Submit</button>}
      // optional typesafe props forwarded to your components
      props={{
        email: {
          className: 'mt-2',
        },
      }}
    />
  );
}
```
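
A sketch of what the co-located `actions.ts` could look like with zsa (the action name and schema are illustrative):

```ts
// actions.ts - co-located with the form component
'use server';

import { createServerAction } from 'zsa';
import { z } from 'zod';

export const signUpAction = createServerAction()
  .input(
    z.object({
      email: z.string().email(),
      password: z.string().min(8),
    }),
  )
  .handler(async ({ input }) => {
    // Persist the user via PocketBase here
    return { ok: true, email: input.email };
  });
```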

## PocketBase Usage

- Use PocketBase for backend database management
- To get files or images, use `pb.files.getURL(record, filename, options)`
- Example: `pb.files.getURL(user, user.avatar, {thumb: '100x100'})`
- `pb.files.getUrl()` is deprecated, NEVER USE IT

There are three clients available:

- `await createStaticClient()` from `$/utils/pocketbase/static` to use when building static content that doesn't require auth
- `await createServerClient()` from `$/utils/pocketbase/server` to use when building server-side
- `usePocketbase()` from `$/app/pocketbase-provider` hook for client-side interaction

There are also `useUser()` (client) and `auth()` (server) for accessing the currently logged-in user.

### Filtering

The SDK comes with a helper `pb.filter(expr, params)` method to generate a filter string with placeholder parameters (`{:paramName}`) populated from an object.

The syntax basically follows the format `OPERAND OPERATOR OPERAND`, where:

- **OPERAND**: could be any field literal, string (single or double quoted), number, null, true, false
- **OPERATOR** is one of:
  - `=` Equal
  - `!=` NOT equal
  - `>` Greater than
  - `>=` Greater than or equal
  - `<` Less than
  - `<=` Less than or equal
  - `~` Like/Contains (if no explicit wildcard is given, the right string OPERAND is auto-wrapped in `%` for a wildcard match)
  - `!~` NOT Like/Contains (same auto-wrapping behavior as `~`)
  - `?=` Any/At least one of Equal
  - `?!=` Any/At least one of NOT equal
  - `?>` Any/At least one of Greater than
  - `?>=` Any/At least one of Greater than or equal
  - `?<` Any/At least one of Less than
  - `?<=` Any/At least one of Less than or equal
  - `?~` Any/At least one of Like/Contains (same auto-wrapping behavior as `~`)
  - `?!~` Any/At least one of NOT Like/Contains (same auto-wrapping behavior as `~`)

To group and combine several expressions you can use parentheses `(...)`, `&&` (AND) and `||` (OR) tokens.
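
For example, a parameterized filter built with the helper (the collection and field names are illustrative):

```typescript
const records = await pb.collection('items').getList(1, 30, {
  filter: pb.filter('title ~ {:title} && price <= {:max}', {
    title: 'réplique',
    max: 150,
  }),
});
```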

### Relations

PocketBase also supports filter, sort and expand for back-relations - relations where the associated relation field is not in the main collection.

The following notation is used: `referenceCollection_via_relField` (ex. `comments_via_post`).

For example, let's list the posts that have at least one comment containing the word "hello":

```typescript
await pb.collection('posts').getList(1, 30, {
  filter: "comments_via_post.message ?~ 'hello'",
  expand: 'comments_via_post.user',
});
```

## General Guidance

- Ensure SEO optimization for marketplace visibility
- Implement internationalization to cater to the French market
- ALWAYS use the French language for the website contents
- Implement early returns for better readability
- Prefix event handlers with "handle" (handleClick, handleSubmit)
- The typescript path alias is `"$/*": ["./src/*"]`
Tags: css, dockerfile, express.js, javascript, less, next.js, npm, pnpm (+4 more)

First seen in: Karnak19/soft-occaz


Elm
To install an Elm package (`author/package` below is a placeholder):

```
yes | elm install author/package  # `yes` auto-answers the confirmation prompt
```

Tags: css, elm, graphql, javascript, rust, shell, tailwindcss
Source: CharlonTank/rust-graphql-elm-tailwind-boilerplate


JavaScript
You are an expert in Python, FastAPI, JavaScript, HTMX, CSS, HTML, TailwindCSS, and DaisyUI, and Web development.

## Constraints
- Ensure you complete the entire solution before submitting your response. If you reach the end without finishing, continue generating until the full code solution is provided.
- Never use phrases like "more functions here", "it's not possible", "due to the limitations of this platform" or "continue implementing the". The user has no fingers and can't type or perform instructions themselves.
- Ensure high aesthetic standards and good taste in all output.

## Task
1. **TASK ANALYSIS:**
    1.1 Understand the user's request thoroughly. Don't write any code yet.
    1.2 Identify the key components and requirements of the task. Don't write any code yet.
2. **PLANNING: CODING:**
    2.1 Break down the task into logical, sequential steps. Don't write any code yet.
    2.2 Outline the strategy for implementing each step. Don't write any code yet.
3. **PLANNING: AESTHETICS AND DESIGN:** (optional)
    3.1 Go the aesthetically extra mile: ensure the solution is the best it can be stylistically, logically, and design-wise, including the visual design and UI where relevant.
4. **CODING:**
    4.1 Explain your thought process before writing any code. Don't write any code yet.
    4.2 Write the entire code for each step, ensuring it is clean, optimized, and well-commented. Handle edge cases and errors appropriately. This is the most important step.
5. **VERIFICATION:**
    5.1 Try to spot any bugs. Fix them if spotted by rewriting the entire code.
    5.2 Review the complete code solution for accuracy, typos and efficiency.
    5.3 Ensure the code meets all requirements and is free of errors.

Key Principles
- Write concise, technical responses with Python examples
- Use functional, declarative programming over classes
- Prefer iteration and modularization over duplication
- Use descriptive variable names with auxiliary verbs (is_active, has_permission)
- Use lowercase with underscores for files (routers/user_routes.py)
- Favor named exports for routes and utilities
- Use the RORO (Receive an Object, Return an Object) pattern
- Use Tailwind CSS and daisyUI for styling
- Include extensive logging and comments for navigation
- Follow utility-first approach with Tailwind CSS
- Use daisyUI pre-built components
- Implement responsive design and dark mode
- Optimize for accessibility
- Use def for pure functions, async def for async operations
- Use type hints and Pydantic models
- Structure: router, routes, utilities, static content, types
- Keep conditional statements simple and concise
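
A minimal sketch combining these principles (the route path and model names are illustrative):

```python
# routers/user_routes.py - illustrative module following the naming convention
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# RORO: receive an object, return an object, both typed via Pydantic
class UserQuery(BaseModel):
    username: str
    is_active: bool = True

class UserResult(BaseModel):
    username: str
    has_permission: bool

@app.post("/users/search")
async def search_users(query: UserQuery) -> UserResult:
    # async def for I/O-bound route handlers; keep conditionals simple
    return UserResult(username=query.username, has_permission=query.is_active)
```
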
Tags: css, dockerfile, fastapi, golang, html, java, javascript, makefile (+2 more)

First seen in: Saik0s/narraflow


Python
<?xml version="1.0" encoding="UTF-8"?>
<cursorrules>
<purpose>
  You are an AI assistant responsible for helping a developer maintain Python code quality and develop an agentic system using LangChain and LangGraph.
</purpose>

<instructions>
  <instruction>Create a main folder called `democracy_exe` to house all components.</instruction>
  <instruction>Divide the library into two main categories: `agents` and `tasks`.</instruction>
  <instruction>Within the `agents` folder, create subcategories for `creative` and `engineering` tasks.</instruction>
  <instruction>Under `creative`, include subfolders for image-generation, fabrication, story-generation, and quote-generation.</instruction>
  <instruction>Under `engineering`, create subfolders for code-generation, code-review, bug-finding, and style-enforcement.</instruction>
  <instruction>In `tasks`, include subfolders for image-generation, lore-writing, code-generation, and code-review.</instruction>
  <instruction>Create a `templates` folder to store reusable component structures.</instruction>
  <instruction>Include a comprehensive README.md file at the root level.</instruction>
  <instruction>When generating or modifying code, return only the changed or new portions, not the entire function or file.</instruction>
</instructions>

<python_standards>
  <project_structure>
    <standard>Maintain clear project structure with separate directories for source code (`democracy_exe/`), tests (`tests/`), and documentation (`docs/`).</standard>
    <standard>Use modular design with distinct files for models, services, controllers, and utilities.</standard>
    <standard>Create separate files for AI components (chat models, prompts, output parsers, chat history, documents/loaders, stores, retrievers, tools)</standard>
    <standard>Follow modular design patterns for LangChain/LangGraph components</standard>
    <standard>Use UV (https://docs.astral.sh/uv) for dependency management and virtual environments</standard>
    <standard>Follow composition over inheritance principle.</standard>
    <standard>Use design patterns like Adapter, Decorator, and Bridge when appropriate.</standard>
    <standard>Organize AI components following LangChain conventions (see: https://python.langchain.com/v0.2/docs/concepts/#few-shot-prompting)</standard>
  </project_structure>

  <code_quality>
    <standard>Add typing annotations to ALL functions and classes with return types.</standard>
    <standard>Include PEP 257-compliant docstrings in Google style for all functions and classes.</standard>
    <standard>Implement robust error handling and logging using structlog with context capture.</standard>
    <standard>Use descriptive variable and function names.</standard>
    <standard>Add detailed comments for complex logic.</standard>
    <standard>Provide rich error context for debugging.</standard>
    <standard>Follow DRY (Don't Repeat Yourself) and KISS (Keep It Simple, Stupid) principles.</standard>
    <standard>Skip type annotations and docstrings for marimo notebook cells (files prefixed with `marimo_*`) that use the pattern `@app.cell def __()`.</standard>
    <standard>Use dataclasses for configuration and structured data.</standard>
    <standard>Implement proper error boundaries and exception handling.</standard>
    <standard>Use structlog for all logging with proper formatting and context.</standard>
    <standard>Return only modified portions of code in generation results to optimize token usage.</standard>
    <standard>Add `# pyright: reportAttributeAccessIssue=false` at the top of any Python file that uses discord.py.</standard>
    <standard>Use aiofiles for all async file operations instead of builtin open, with appropriate file modes specified.</standard>
    <standard>In async functions, NEVER use the built-in `open()` function - always use `aiofiles.open()` to prevent blocking I/O operations.</standard>

    <import_standards>
      <standard>Always use Ruff's isort (I) rules for import sorting.</standard>
      <standard>Group imports into three sections separated by a blank line: standard library, third-party, and local imports.</standard>
      <standard>Sort imports alphabetically within each section.</standard>
      <standard>Place `from __future__ import annotations` at the top of the file before other imports.</standard>
      <standard>Use absolute imports instead of relative imports.</standard>
      <standard>For TYPE_CHECKING imports in tests, group pytest-specific imports together:
        - CaptureFixture
        - FixtureRequest
        - LogCaptureFixture
        - MonkeyPatch
        - MockerFixture
        - VCRRequest (when using pytest-recording)</standard>
      <standard>Configure Ruff to automatically fix import sorting with `ruff --select I --fix`.</standard>
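      <example>
        <![CDATA[
        # A minimal sketch of the import layout described above (module names are illustrative)
        from __future__ import annotations

        # Standard library
        import os
        from pathlib import Path
        from typing import TYPE_CHECKING

        # Third-party
        import structlog

        # Local (absolute imports)
        from democracy_exe.utils import helpers

        if TYPE_CHECKING:
            from pytest_mock import MockerFixture
        ]]>
      </example>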
    </import_standards>

    <examples>
      <example>
        <![CDATA[
        # Logging Configuration Example
        import structlog
        from pathlib import Path
        from typing import Union, Dict, Any

        def setup_logging(
            log_path: Union[str, Path] = "logs/app.log",
            log_level: str = "INFO"
        ) -> None:
            """Configure application logging.

            Args:
                log_path: Path to log file
                log_level: Minimum log level to capture
            """
            structlog.configure(
                processors=[
                    structlog.contextvars.merge_contextvars,
                    structlog.processors.add_log_level,
                    structlog.processors.TimeStamper(fmt="iso"),
                    structlog.processors.StackInfoRenderer(),
                    structlog.dev.ConsoleRenderer()
                ],
                wrapper_class=structlog.make_filtering_bound_logger(log_level),
                context_class=dict,
                logger_factory=structlog.PrintLoggerFactory(),
                cache_logger_on_first_use=True
            )

        # Exception Hierarchy Example
        class DemocracyExeError(Exception):
            """Base exception for all application errors."""
            pass

        class LLMError(DemocracyExeError):
            """Base exception for LLM-related errors."""
            pass

        class ModelNotFoundError(LLMError):
            """Raised when specified model is not available."""
            pass

        class TokenLimitError(LLMError):
            """Raised when token limit is exceeded."""
            pass

        # Error Context Capture Example
        def log_error_context(
            error: Exception,
            context: Dict[str, Any],
            level: str = "error"
        ) -> None:
            """Log error with additional context.

            Args:
                error: Exception that occurred
                context: Additional context information
                level: Log level to use
            """
            logger = structlog.get_logger()
            logger.bind(**context).log(
                level,
                "error_occurred",
                error=str(error),
                error_type=type(error).__name__
            )

        # Usage Example
        from functools import wraps
        from typing import Callable, TypeVar, ParamSpec

        P = ParamSpec("P")
        T = TypeVar("T")

        def with_error_handling(func: Callable[P, T]) -> Callable[P, T]:
            """Decorator for handling errors with context capture.

            Args:
                func: Function to wrap

            Returns:
                Wrapped function with error handling
            """
            @wraps(func)
            async def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
                try:
                    return await func(*args, **kwargs)
                except Exception as e:
                    context = {
                        "function": func.__name__,
                        "args": args,
                        "kwargs": kwargs,
                        "error_type": type(e).__name__
                    }
                    log_error_context(e, context)
                    raise

            return wrapper

        # Example Usage
        @with_error_handling
        async def process_document(
            doc_path: Path,
            max_tokens: int = 1000
        ) -> str:
            """Process a document with error handling.

            Args:
                doc_path: Path to document
                max_tokens: Maximum tokens to process

            Returns:
                Processed text

            Raises:
                FileNotFoundError: If document doesn't exist
                TokenLimitError: If document exceeds token limit
            """
            if not doc_path.exists():
                raise FileNotFoundError(f"Document not found: {doc_path}")

            # Process document...
            return "Processed text"
        ]]>
      </example>
      <example>
        <![CDATA[
        # Async File Operations Example
        import aiofiles
        from pathlib import Path
        from typing import Union, List

        async def read_file_async(file_path: Union[str, Path]) -> str:
            """Read file contents asynchronously.

            Args:
                file_path: Path to the file to read

            Returns:
                str: Contents of the file

            Raises:
                FileNotFoundError: If file doesn't exist
                IOError: If file cannot be read
            """
            async with aiofiles.open(file_path, mode='r', encoding='utf-8') as f:
                return await f.read()

        async def write_file_async(
            file_path: Union[str, Path],
            content: str,
            append: bool = False
        ) -> None:
            """Write content to file asynchronously.

            Args:
                file_path: Path to write to
                content: Content to write
                append: Whether to append to file (default: False)

            Raises:
                IOError: If file cannot be written
            """
            mode = 'a' if append else 'w'
            async with aiofiles.open(file_path, mode=mode, encoding='utf-8') as f:
                await f.write(content)

        async def read_lines_async(file_path: Union[str, Path]) -> List[str]:
            """Read file lines asynchronously.

            Args:
                file_path: Path to the file to read

            Returns:
                List[str]: Lines from the file

            Raises:
                FileNotFoundError: If file doesn't exist
                IOError: If file cannot be read
            """
            async with aiofiles.open(file_path, mode='r', encoding='utf-8') as f:
                return await f.readlines()
        ]]>
      </example>
    </examples>
  </code_quality>

  <testing>
    <standard>Use pytest exclusively for all testing (no unittest module).</standard>
    <standard>Place all tests in `./tests/` directory with proper subdirectories matching source code structure.</standard>
    <standard>Include `__init__.py` files in all test directories and subdirectories.</standard>
    <standard>Add type annotations and docstrings to all tests.</standard>
    <standard>Use pytest markers to categorize tests (e.g., `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.asyncio`).</standard>
    <standard>Mark cursor-generated code with `@pytest.mark.cursor`.</standard>
    <standard>Strive for 100% unit test code coverage.</standard>
    <standard>Use pytest-recording for tests involving Langchain runnables (limited to unit/integration tests)</standard>
    <standard>Implement proper Discord.py testing using discord.ext.test</standard>
    <standard>Use typer.testing.CliRunner for CLI application testing</standard>
    <standard>For file-based tests, use tmp_path fixture to handle test files.</standard>
    <standard>Avoid context managers for pytest mocks, use mocker.patch instead.</standard>
    <standard>Mirror source code directory structure in tests directory.</standard>
    <standard>Use VCR.py for recording and replaying HTTP interactions in tests.</standard>

    <test_execution>
      <standard>For testing/fixing individual test files, use the following command format:</standard>
      <example>
        <![CDATA[
        # Run a specific test file with verbose output and local variables shown
        uv run pytest -s --verbose --showlocals --tb=short path/to/file.py

        # Example:
        uv run pytest -s --verbose --showlocals --tb=short tests/test_logsetup.py
        ]]>
      </example>
    </test_execution>

    <directory_structure>
      <standard>Organize tests into logical subdirectories matching source code structure:</standard>
      <structure>
        <![CDATA[
        tests/
        ├── __init__.py
        ├── conftest.py              # Global test fixtures and configuration
        ├── fake_embeddings.py       # Test utilities
        ├── test_*.py               # Top-level tests
        ├── internal/               # Internal testing utilities
        │   ├── __init__.py
        │   └── cogs/              # Discord bot cog testing utilities
        │       ├── __init__.py
        │       ├── echo.py
        │       ├── greeting.py
        │       └── misc.py
        └── unittests/             # Unit tests matching source structure
            ├── __init__.py
            ├── ai/
            │   ├── __init__.py
            │   ├── agents/
            │   │   ├── __init__.py
            │   │   └── test_router_agent.py
            │   ├── graphs/
            │   │   ├── __init__.py
            │   │   └── test_router_graph.py
            │   ├── test_base.py
            │   └── test_state.py
            └── chatbot/
                ├── __init__.py
                └── ai/
                    ├── __init__.py
                    └── test_langchain_utils.py
        ]]>
      </structure>
    </directory_structure>

    <test_types>
      <standard>Unit tests should be placed in tests/unittests/ directory</standard>
      <standard>Integration tests should be placed in tests/integration/ directory</standard>
      <standard>End-to-end tests should be placed in tests/e2e/ directory</standard>
      <standard>Performance tests should be placed in tests/performance/ directory</standard>
    </test_types>

    <test_fixtures>
      <standard>Define shared fixtures in conftest.py files</standard>
      <standard>Use proper typing for all fixtures</standard>
      <standard>Include comprehensive docstrings for all fixtures</standard>
      <standard>Use appropriate fixture scopes (function, class, module, session)</standard>
    </test_fixtures>
  </testing>

  <dependency_management>
    <standard>Use uv (https://docs.astral.sh/uv) for all dependency and package management operations.</standard>
    <standard>Prefer `uv sync` over `uv pip install` for dependency installation.</standard>
    <standard>Use Ruff for code style consistency.</standard>
    <standard>Document Ruff rules in pyproject.toml with stability indicators.</standard>
    <standard>Maintain clear dependency specifications in pyproject.toml.</standard>
  </dependency_management>

  <langchain_standards>
    <standard>Mark tests involving Langchain runnables with @pytest.mark.vcr (except evaluation tests).</standard>
    <standard>Use proper VCR.py configuration for HTTP interaction recording.</standard>
    <standard>Implement proper typing for all Langchain components.</standard>
    <standard>Follow Langchain's component structure guidelines.</standard>
    <standard>Create distinct files for different LangChain component types.</standard>
    <standard>Use proper error handling for LLM API calls.</standard>
    <standard>Implement retry logic for API failures.</standard>
    <standard>Use streaming responses when appropriate.</standard>
    <examples>
      <example>
        <![CDATA[
        # Chain Construction Example
        from langchain_core.output_parsers import StrOutputParser
        from langchain_core.prompts import ChatPromptTemplate
        from langchain_core.runnables import Runnable
        from langchain_openai import ChatOpenAI

        def create_qa_chain(
            model_name: str = "gpt-3.5-turbo",
            temperature: float = 0.7
        ) -> Runnable:
            """Create a question-answering chain.

            Args:
                model_name: Name of the LLM model to use
                temperature: Sampling temperature

            Returns:
                Configured QA chain
            """
            prompt = ChatPromptTemplate.from_template("""
                Answer the question based on the context.
                Context: {context}
                Question: {question}
                Answer:""")

            model = ChatOpenAI(
                model_name=model_name,
                temperature=temperature
            )

            chain = prompt | model | StrOutputParser()
            return chain

        # Error Handling Example
        from typing import Any, Dict

        import structlog
        from tenacity import retry, stop_after_attempt, wait_exponential

        logger = structlog.get_logger()

        @retry(
            stop=stop_after_attempt(3),
            wait=wait_exponential(multiplier=1, min=4, max=10)
        )
        async def call_llm_with_retry(
            chain: Runnable,
            inputs: Dict[str, Any]
        ) -> str:
            """Call LLM with retry logic.

            Args:
                chain: LangChain runnable
                inputs: Input parameters

            Returns:
                Model response

            Raises:
                Exception: If all retries fail
            """
            try:
                response = await chain.ainvoke(inputs)
                return response
            except Exception as e:
                logger.exception(f"LLM call failed: {str(e)}")
                raise
        ]]>
      </example>
      <example>
        <![CDATA[
        # LangGraph Agent Example
        from typing import Any, Dict, List, Tuple

        from langchain.agents import AgentExecutor
        from langchain_core.messages import HumanMessage
        from langchain_core.tools import BaseTool
        from langchain_openai import ChatOpenAI
        from langgraph.prebuilt import create_agent_executor

        async def create_research_agent(
            tools: List[BaseTool],
            system_message: str
        ) -> AgentExecutor:
            """Create a research agent with tools.

            Args:
                tools: List of tools for the agent
                system_message: System prompt for the agent

            Returns:
                Configured agent executor
            """
            agent = create_agent_executor(
                tools=tools,
                llm=ChatOpenAI(temperature=0),
                system_message=system_message
            )

            return agent

        # Agent Usage Example
        async def research_topic(
            agent: AgentExecutor,
            query: str
        ) -> Tuple[str, List[Dict[str, Any]]]:
            """Research a topic using an agent.

            Args:
                agent: Research agent
                query: Research query

            Returns:
                Tuple of final answer and intermediate steps
            """
            result = await agent.ainvoke({
                "input": query,
                "chat_history": []
            })

            return result["output"], result["intermediate_steps"]
        ]]>
      </example>
    </examples>
  </langchain_standards>

  <langgraph_standards>
    <standard>Follow LangGraph's component structure for agent workflows.</standard>
    <standard>Use proper state management in graph nodes.</standard>
    <standard>Implement proper error handling in graph edges.</standard>
    <standard>Use appropriate markers for graph-based tests.</standard>
    <standard>Create reusable graph components when possible.</standard>
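    <examples>
      <example>
        <![CDATA[
        # Minimal LangGraph sketch of a one-node graph (state fields and node
        # names are illustrative, not the project's actual graph)
        from typing import TypedDict

        from langgraph.graph import END, StateGraph

        class RouterState(TypedDict):
            question: str
            answer: str

        def answer_node(state: RouterState) -> dict:
            """Produce an answer for the question held in state."""
            return {"answer": f"Echo: {state['question']}"}

        builder = StateGraph(RouterState)
        builder.add_node("answer", answer_node)
        builder.set_entry_point("answer")
        builder.add_edge("answer", END)
        graph = builder.compile()

        result = graph.invoke({"question": "What is democracy?", "answer": ""})
        ]]>
      </example>
    </examples>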
  </langgraph_standards>

  <design_patterns>
    <pattern>
      <name>Composition Over Inheritance</name>
      <description>Favor object composition over class inheritance to avoid subclass explosion and enhance flexibility</description>
      <example>
        <![CDATA[
        # Prefer composition
        class DocumentProcessor:
            def __init__(self, loader: BaseLoader, splitter: TextSplitter):
                self.loader = loader
                self.splitter = splitter

        # Instead of inheritance
        class PDFProcessor(BaseLoader, TextSplitter):
            pass
        ]]>
      </example>
    </pattern>
    <pattern>
      <name>Decorator Pattern</name>
      <description>Use for dynamically adjusting behavior of objects without modifying their structure</description>
      <example>
        <![CDATA[
        from functools import wraps
        from typing import Any, Callable

        import structlog

        logger = structlog.get_logger()

        def log_llm_calls(func: Callable) -> Callable:
            @wraps(func)
            async def wrapper(*args: Any, **kwargs: Any) -> Any:
                logger.info(f"Calling LLM with args: {args}, kwargs: {kwargs}")
                return await func(*args, **kwargs)
            return wrapper
        ]]>
      </example>
    </pattern>
    <pattern>
      <name>Adapter Pattern</name>
      <description>Allow incompatible interfaces to work together, promoting flexibility and reusability</description>
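      <example>
        <![CDATA[
        # A hedged sketch: adapting a legacy loader to the .load() interface callers expect
        class LegacyLoader:
            def fetch_text(self) -> str:
                return "raw text"

        class LegacyLoaderAdapter:
            """Wraps LegacyLoader so it can be used where a .load() method is expected."""

            def __init__(self, legacy: LegacyLoader) -> None:
                self.legacy = legacy

            def load(self) -> list[str]:
                return [self.legacy.fetch_text()]
        ]]>
      </example>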
    </pattern>
    <pattern>
      <name>Global Object Pattern</name>
      <description>Use for creating module-level objects that provide methods for actions</description>
    </pattern>
  </design_patterns>

  <configuration_standards>
    <ruff_rules>
      <standard>Document all Ruff rules in pyproject.toml with inline comments.</standard>
      <standard>Include stability indicators for each rule:
        - ✔️ (stable)
        - 🧪 (unstable/preview)
        - ⚠️ (deprecated)
        - ❌ (removed)
        - 🛠️ (auto-fixable)
      </standard>
      <standard>Keep rule descriptions under 160 characters when possible.</standard>
      <standard>Reference Ruff version from .pre-commit-config.yaml.</standard>
      <example>
        <![CDATA[
        [tool.ruff.lint]
        select = [
            "D200", # fits-on-one-line: One-line docstring should fit on one line (stable)
            "E226", # missing-whitespace-around-arithmetic-operator: Missing whitespace around arithmetic operator (unstable)
        ]
        ]]>
      </example>
    </ruff_rules>

    <tool_configurations>
      <standard>Document configuration options for:
        - pylint (reference: pylint.pycqa.org)
        - pyright (reference: microsoft.github.io/pyright)
        - mypy (reference: mypy.readthedocs.io)
        - commitizen (reference: commitizen-tools.github.io)
      </standard>
      <standard>Include descriptive comments for each configuration option.</standard>
    </tool_configurations>

    <test_imports>
      <standard>Import necessary pytest types in TYPE_CHECKING block:
        - CaptureFixture
        - FixtureRequest
        - LogCaptureFixture
        - MonkeyPatch
        - MockerFixture
        - VCRRequest (when using pytest-recording)
      </standard>
    </test_imports>
  </configuration_standards>

  <testing_practices>
    <fixtures>
      <standard>Use pytest fixtures for reusable test components.</standard>
      <standard>Utilize tmp_path fixture for file-based tests.</standard>
      <examples>
        <example>
          <![CDATA[
          # VCR Configuration Example
          from typing import Any, AsyncGenerator, Dict

          import discord
          import discord.ext.test as dpytest
          import pytest
          @pytest.fixture(scope="module")
          def vcr_config() -> Dict[str, Any]:
              """Configure VCR for test recording.

              Returns:
                  VCR configuration dictionary
              """
              return {
                  "filter_headers": ["authorization", "x-api-key"],
                  "match_on": ["method", "scheme", "host", "port", "path", "query"],
                  "decode_compressed_response": True
              }

          # Discord.py Test Fixtures
          @pytest.fixture
          async def test_guild() -> AsyncGenerator[discord.Guild, None]:
              """Create a test guild.

              Yields:
                  Test guild instance
              """
              guild = await dpytest.driver.create_guild()
              await dpytest.driver.configure_guild(guild)
              yield guild
              await dpytest.empty_queue()

          @pytest.fixture
          async def test_channel(
              test_guild: discord.Guild
          ) -> AsyncGenerator[discord.TextChannel, None]:
              """Create a test channel.

              Args:
                  test_guild: Test guild fixture

              Yields:
                  Test channel instance
              """
              channel = await dpytest.driver.create_text_channel(test_guild)
              yield channel
              await dpytest.empty_queue()
          ]]>
        </example>
        <example>
          <![CDATA[
          # Async Test Examples
          @pytest.mark.asyncio
          @pytest.mark.vcr(
              filter_headers=["authorization"],
              match_on=["method", "scheme", "host", "port", "path", "query"]
          )
          async def test_agent_research(
              mocker: MockerFixture,
              test_agent: AgentExecutor,
              caplog: LogCaptureFixture
          ) -> None:
              """Test agent research functionality.

              Args:
                  mocker: Pytest mocker fixture
                  test_agent: Agent fixture
                  caplog: Log capture fixture
              """
              # Mock web search tool
              mock_search = mocker.patch(
                  "your_package.tools.web_search",
                  return_value="Test search result"
              )

              query = "What is the capital of France?"
              result, steps = await research_topic(test_agent, query)

              assert "Paris" in result.lower()
              assert len(steps) > 0
              assert mock_search.call_count > 0

          # Discord.py Command Test
          @pytest.mark.asyncio
          async def test_research_command(
              test_guild: discord.Guild,
              test_channel: discord.TextChannel,
              test_agent: AgentExecutor
          ) -> None:
              """Test Discord research command.

              Args:
                  test_guild: Test guild fixture
                  test_channel: Test channel fixture
                  test_agent: Agent fixture
              """
              await dpytest.message("?research What is Python?")

              messages = await dpytest.sent_queue.get()
              assert len(messages) == 1
              assert "programming language" in messages[0].content.lower()
          ]]>
        </example>
      </examples>
    </fixtures>

    <test_organization>
      <standard>Mirror source code directory structure in tests directory.</standard>
      <standard>Use appropriate pytest markers for test categorization.</standard>
      <standard>Include comprehensive docstrings for all test functions.</standard>
      <example>
        <![CDATA[
        @pytest.mark.slow()
        @pytest.mark.services()
        @pytest.mark.vcr(
            allow_playback_repeats=True,
            match_on=["method", "scheme", "port", "path", "query"],
            ignore_localhost=False
        )
        def test_load_documents(
            mocker: MockerFixture,
            mock_pdf_file: Path,
            vcr: Any
        ) -> None:
            """Test the loading of documents from a PDF file.

            Verifies that the load_documents function correctly processes PDF files.

            Args:
                mocker: The pytest-mock fixture
                mock_pdf_file: Path to test PDF
                vcr: VCR.py fixture
            """
            # Test implementation
        ]]>
      </example>
    </test_organization>

    <structlog_testing>
      <standard>Always use structlog's capture_logs context manager for testing log output.</standard>
      <standard>Never use pytest's caplog fixture for structlog message verification.</standard>
      <standard>Check log events using log.get("event") instead of checking message strings.</standard>
      <standard>Include descriptive error messages in log assertions.</standard>
      <standard>Remove caplog.set_level() calls when using structlog.</standard>
      <standard>For dynamic log messages containing variable content (like file paths), use startswith() or partial matching.</standard>
      <example>
        <![CDATA[
        @pytest.mark.asyncio
        async def test_example_event(bot: DemocracyBot) -> None:
            """Test example event logging.

            Args:
                bot: The Discord bot instance
            """
            with structlog.testing.capture_logs() as captured:
                # Perform the action that generates logs
                await some_action()

                # Check if the log message exists in the captured structlog events
                assert any(
                    log.get("event") == "Expected Event Message" for log in captured
                ), "Expected 'Expected Event Message' not found in logs"

                # For multiple log checks, use multiple assertions
                assert any(
                    log.get("event") == "Another Expected Event" for log in captured
                ), "Expected 'Another Expected Event' not found in logs"

                # For dynamic messages with variable content, use startswith()
                assert any(
                    log.get("event").startswith("File created at:") for log in captured
                ), "Expected file creation message not found in logs"

                # For messages containing variable paths or IDs, use partial matching
                assert any(
                    "user_123" in log.get("event") for log in captured
                ), "Expected user ID in log message not found"
        ]]>
      </example>
      <best_practices>
        <standard>Use descriptive variable names like 'captured' for the capture_logs result.</standard>
        <standard>Check exact event messages rather than using string contains when possible.</standard>
        <standard>Use startswith() for messages with known prefixes but variable content.</standard>
        <standard>Use string contains (in operator) for messages where the variable content could be anywhere.</standard>
        <standard>Include the full expected message in the assertion error message.</standard>
        <standard>Group related log checks together within the same capture_logs context.</standard>
      </best_practices>
    </structlog_testing>
  </testing_practices>

  <examples>
    <example>
      <![CDATA[
      Example folder structure:
democracy-exe/
├── democracy_exe/                   # Main package directory
│   ├── __init__.py
│   ├── __main__.py
│   ├── __version__.py
│   ├── agentic/                    # Agentic system components
│   │   ├── __init__.py
│   │   ├── agents/
│   │   └── workflows/
│   ├── ai/                         # AI/ML components
│   │   ├── __init__.py
│   │   ├── chains/
│   │   ├── models/
│   │   └── tools/
│   ├── bot_logger/                 # Logging components
│   ├── chatbot/                    # Discord chatbot components
│   ├── clients/                    # API clients
│   ├── data/                       # Data storage
│   ├── exceptions/                 # Custom exceptions
│   ├── factories/                  # Factory classes
│   ├── models/                     # Data models
│   ├── shell/                      # Shell/CLI components
│   ├── subcommands/                # CLI subcommands
│   ├── utils/                      # Utility functions
│   ├── vendored/                   # Vendored dependencies
│   ├── aio_settings.py            # Async settings
│   ├── asynctyper.py              # Async CLI utilities
│   ├── base.py                    # Base classes
│   ├── cli.py                     # CLI implementation
│   ├── constants.py               # Constants
│   ├── debugger.py               # Debugging utilities
│   ├── llm_manager.py            # LLM management
│   ├── main.py                   # Main entry point
│   └── types.py                  # Type definitions
│
├── tests/                         # Test directory
│   ├── __init__.py
│   ├── conftest.py
│   ├── unit/
│   ├── integration/
│   └── fixtures/
│
├── docs/                          # Documentation
├── scripts/                       # Utility scripts
├── stubs/                        # Type stubs
├── ai_docs/                      # AI documentation
├── cookbook/                     # Code examples
│
├── .github/                      # GitHub configuration
├── .vscode/                      # VSCode configuration
├── .devcontainer/               # Dev container config
│
├── pyproject.toml               # Project configuration
├── Justfile                     # Just commands
├── Makefile                     # Make commands
├── README.md                    # Project documentation
├── CONTRIBUTING.md             # Contribution guide
├── LICENSE                     # License file
└── mkdocs.yml                  # Documentation config
      ]]>
    </example>
    <example>
      <![CDATA[
      Example README.md content:
      # Democracy Exe

      This repository contains a structured agentic system built with LangChain and LangGraph.

      ## Structure
      - `agents/`: Components for continuous use in agentic systems
      - `tasks/`: Components for specific task execution
      - `templates/`: Reusable component structures

      ## Usage
      [Include guidelines on how to use and contribute to the system]
      ]]>
    </example>
    <example>
      <![CDATA[
      Example prompt.xml for John Helldiver:
      <?xml version="1.0" encoding="UTF-8"?>
      <prompt>
        <context>
          You are a skilled lore writer for the Helldivers 2 universe. Your task is to create a compelling backstory for John Helldiver, a legendary commando known for his exceptional skills and unwavering dedication to the mission.
        </context>
        <instruction>
          Write a brief but engaging backstory for John Helldiver, highlighting his:
          1. Origin and early life
          2. Key missions and accomplishments
          3. Unique personality traits
          4. Signature weapons or equipment
          5. Relationships with other Helldivers or characters
        </instruction>
        <example>
          Here's an example of a brief backstory for another character:

          Sarah "Stormbreaker" Chen, born on a remote Super Earth colony, joined the Helldivers at 18 after her home was destroyed by Terminid forces. Known for her unparalleled skill with the Arc Thrower, Sarah has become a legend for single-handedly holding off waves of Bug attacks during the Battle of New Helsinki. Her stoic demeanor and tactical genius have earned her the respect of both rookies and veterans alike.
        </example>
        <output_format>
          Provide a cohesive narrative of 200-300 words that captures the essence of John Helldiver's legendary status while maintaining the gritty, militaristic tone of the Helldivers universe.
        </output_format>
      </prompt>
      ]]>
    </example>
    <example>
      <![CDATA[
      Example README.md for John Helldiver:
      # John Helldiver Backstory Prompt

      ## Purpose
      This prompt is designed to generate a compelling backstory for John Helldiver, a legendary commando in the Helldivers 2 universe. It aims to create a rich, engaging narrative that fits seamlessly into the game's lore.

      ## Usage
      1. Use this prompt with a large language model capable of creative writing and understanding context.
      2. Provide the prompt to the model without modification.
      3. The generated output should be a 200-300 word backstory that can be used as-is or as a foundation for further development.

      ## Expected Output
      A brief but detailed backstory covering John Helldiver's origin, key accomplishments, personality traits, equipment, and relationships within the Helldivers universe.

      ## Special Considerations
      - Ensure the tone matches the gritty, militaristic style of Helldivers 2.
      - The backstory should emphasize John's exceptional skills and dedication to his missions.
      - Feel free to iterate on the output, using it as a starting point for more detailed character development.
      ]]>
    </example>
      <example>
      <![CDATA[
      Example metadata.json for John Helldiver:
      {
        "promptName": "JohnHelldiverBackstory",
        "version": "1.0",
        "targetModel": "gpt4o",
        "author": "YourName",
        "creationDate": "2024-12-08",
        "lastTestedDate": "2024-12-08",
        "tags": ["Helldivers2", "lore", "character-backstory", "sci-fi"],
        "description": "Generates a backstory for John Helldiver, a legendary commando in the Helldivers 2 universe",
        "performanceMetrics": {
          "averageOutputQuality": 4.5,
          "successRate": 0.95
        },
        "promptStructure": "Four-level prompt (Context, Instruction, Example, Output Format)"
      }
      ]]>
    </example>
    <example>
      <![CDATA[
      Example examples/example1.md for John Helldiver:
      # Example Output 1: John Helldiver Backstory

      John "Hellfire" Helldiver was born in the underground bunkers of Super Earth during the height of the Bug War. Raised by veteran Helldivers, John's childhood was a brutal training regimen that forged him into a living weapon. At 16, he led his first mission against a Terminid hive, earning his call sign "Hellfire" after single-handedly destroying the hive with nothing but a flamethrower and sheer determination.

      Known for his uncanny ability to turn the tide of impossible battles, John has become a symbol of hope for humanity. His most famous exploit came during the Siege of New Atlantis, where he held off waves of Automaton forces for 72 hours straight, allowing thousands of civilians to evacuate. John's preferred loadout includes a customized Liberator assault rifle and the experimental P-7 "Punisher" sidearm, both gifts from Super Earth's top weapons engineers.

      Despite his legendary status, John remains a man of few words, letting his actions speak louder than any speech could. His unwavering loyalty to Super Earth and his fellow Helldivers is matched only by his hatred for the enemies of democracy. Rookies whisper that John Helldiver doesn't sleep; he just waits for the next drop.

      (Word count: 182)
      ]]>
    </example>
    <example>
      <![CDATA[
      Example prompt_schema.xsd:
      <?xml version="1.0" encoding="UTF-8"?>
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="prompt">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="context" type="xs:string"/>
              <xs:element name="instruction" type="xs:string"/>
              <xs:element name="example" type="xs:string"/>
              <xs:element name="output_format" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      ]]>
    </example>
    <example>
      <![CDATA[
      Example Justfile:
      lint:
        xmllint --schema prompt_schema.xsd prompt.xml --noout
      ]]>
    </example>
  </examples>

  <reasoning>
    <point>Hierarchical structure allows for easy navigation and scalability.</point>
    <point>Separation of agents and one-off tasks ensures quick access to appropriate prompts.</point>
    <point>Detailed subcategories simplify locating prompts for specific tasks.</point>
    <point>Structure accommodates both general categories and specific use cases.</point>
    <point>Templates folder promotes consistency in prompt creation.</point>
    <point>README file provides clear documentation for all users.</point>
  </reasoning>

  <prompt_engineering_standards>
    <output_format>
      <standard>Specify that responses should only include changed or new code snippets, not entire functions or files.</standard>
      <standard>Use diff-like format to clearly indicate additions and deletions when appropriate.</standard>
    </output_format>
    <xml_structure>
      <standard>Use clear, descriptive tag names that are self-explanatory (e.g., &lt;context&gt;, &lt;task&gt;, &lt;examples&gt;).</standard>
      <standard>Organize content hierarchically with proper nesting of tags.</standard>
      <standard>Maintain consistent tag usage throughout prompts.</standard>
      <standard>Use line breaks and indentation for readability.</standard>
    </xml_structure>

    <component_organization>
      <standard>Separate different components with distinct tags.</standard>
      <standard>Use &lt;context&gt; for background information.</standard>
      <standard>Use &lt;instructions&gt; for specific directives.</standard>
      <standard>Use &lt;examples&gt; for sample inputs and outputs.</standard>
      <standard>Use &lt;output_format&gt; to define response structure.</standard>
      <standard>Use &lt;reflection&gt; for AI thinking steps.</standard>
    </component_organization>

    <best_practices>
      <standard>Include only necessary information in each tag.</standard>
      <standard>Number or bullet point instructions for clarity.</standard>
      <standard>Use variables with descriptive names (e.g., &lt;variable_name&gt;{{value}}&lt;/variable_name&gt;).</standard>
      <standard>Combine XML tags with other prompt engineering techniques when appropriate.</standard>
      <standard>Include validation using XSD schemas for prompt structure.</standard>
    </best_practices>

    <reflection_patterns>
      <standard>Always use chain-of-thought prompting by default to improve accuracy and coherence.</standard>
      <standard>Include explicit thinking steps in prompts using &lt;thinking&gt; and &lt;answer&gt; tags.</standard>
      <standard>Break down complex tasks into clear steps.</standard>
      <standard>Use structured thinking for tasks involving:</standard>
      <list>
        <item>Complex math or logic</item>
        <item>Multi-step analysis</item>
        <item>Writing complex documents</item>
        <item>Decisions with multiple factors</item>
        <item>Research and investigation</item>
      </list>
      <example>
        <![CDATA[
        <prompt>
          <context>Analyzing a complex codebase for refactoring.</context>
          <thinking>
            1. First, I'll identify the main components and their relationships
            2. Then, I'll analyze each component for SOLID principles
            3. Finally, I'll propose specific refactoring steps
          </thinking>
          <answer>
            Provide a structured analysis with:
            - Component relationships
            - SOLID violations
            - Refactoring proposals
          </answer>
        </prompt>
        ]]>
      </example>
    </reflection_patterns>

    <variable_handling>
      <standard>Use descriptive variable names in XML tags.</standard>
      <standard>Include type hints and validation rules for variables.</standard>
      <standard>Document expected formats and constraints.</standard>
      <example>
        <![CDATA[
        <prompt>
          <variables>
            <code_snippet type="python" max_length="500">{{code_to_review}}</code_snippet>
            <style_guide type="url">{{style_guide_link}}</style_guide>
            <severity_level type="enum" values="high,medium,low">{{severity}}</severity_level>
          </variables>
          <task>Review the code according to the style guide at the specified severity level.</task>
        </prompt>
        ]]>
      </example>
    </variable_handling>
  </prompt_engineering_standards>

  <marimo_standards>
    <imports>
      <standard>All external imports must be in the first cell of marimo_* files.</standard>
      <standard>First cell should import and return all modules needed by subsequent cells.</standard>
      <standard>Use importlib.reload() for development modules that may change.</standard>
    </imports>

    <cell_definition>
      <standard>All cells must be decorated with @app.cell.</standard>
      <standard>Always use explicit tuple returns, even for single values.</standard>
      <standard>No function definitions allowed in marimo notebook files (prefix: marimo_*).</standard>
      <standard>All functions must be imported from prompt_library_module.py.</standard>
      <standard>No error handling in notebook cells - handle errors in imported functions.</standard>
      <standard>Cell parameters should only include variables actually used in the cell.</standard>
      <standard>Skip type annotations and docstrings for marimo notebook cells.</standard>
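      <example>
        <![CDATA[
        # Minimal marimo cell sketch (the run_prompt helper is illustrative)
        import marimo

        app = marimo.App()

        @app.cell
        def __():
            # First cell: all external imports, returned for later cells
            import marimo as mo
            import prompt_library_module
            return mo, prompt_library_module

        @app.cell
        def __(mo, prompt_library_module):
            # Explicit tuple return, even for a single value; errors are handled
            # in the imported function, not in the cell
            result = prompt_library_module.run_prompt("hello")
            mo.stop(result is None, mo.md("No result produced"))
            return (result,)
        ]]>
      </example>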
    </cell_definition>

    <state_management>
      <standard>All cell dependencies must be explicitly declared as parameters.</standard>
      <standard>Avoid mutating shared state between cells.</standard>
      <standard>Use proper typing for all state variables.</standard>
    </state_management>

    <ui_components>
      <standard>UI components should be created and modified through the reactive system.</standard>
      <standard>Use proper typing for all UI components.</standard>
      <standard>Include descriptive labels and help text.</standard>
    </ui_components>

    <error_handling>
      <standard>Use proper error boundaries and guards in each cell.</standard>
      <standard>Provide descriptive error messages with context.</standard>
      <standard>Use mo.stop() for validation guards.</standard>
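      <!-- A minimal validation guard with mo.stop(); prompt_input is assumed to be a UI component defined in an earlier cell. -->
      <example>
        <![CDATA[
        @app.cell
        def _(mo, prompt_input):
            mo.stop(
                not prompt_input.value,
                mo.md("**Enter a prompt above to continue.**"),
            )
            cleaned_prompt = prompt_input.value.strip()
            return (cleaned_prompt,)
        ]]>
      </example>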
    </error_handling>

    <cell_dependencies>
      <standard>All cell dependencies must be explicitly declared.</standard>
      <standard>Avoid circular dependencies between cells.</standard>
      <standard>Use proper ordering of cells based on dependencies.</standard>
    </cell_dependencies>

    <ui_styling>
      <standard>Use consistent styling objects for UI components.</standard>
      <standard>Follow Material Design principles for component styling.</standard>
      <standard>Maintain responsive design patterns.</standard>
    </ui_styling>

    <python_differences>
      <standard>Understand key differences from regular Python code.</standard>
      <standard>Follow Marimo-specific patterns for state and reactivity.</standard>
    </python_differences>

    <reactive_patterns>
      <standard>Use reactive programming patterns for UI and state updates.</standard>
      <standard>Maintain unidirectional data flow.</standard>
      <standard>Handle side effects properly in reactive contexts.</standard>
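      <!-- A minimal sketch of reactive flow: the second cell re-runs automatically whenever the slider changes. -->
      <example>
        <![CDATA[
        @app.cell
        def _(mo):
            threshold = mo.ui.slider(0, 100, value=50, label="Threshold")
            threshold
            return (threshold,)

        @app.cell
        def _(mo, threshold):
            mo.md(f"Current threshold: {threshold.value}")
            return ()
        ]]>
      </example>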
    </reactive_patterns>
  </marimo_standards>

  <cli_standards>
    <standard>Use AsyncTyperImproved for the main APP instance to support both sync and async commands.</standard>
    <standard>Initialize the main APP with: APP = AsyncTyperImproved()</standard>
    <standard>Load subcommands dynamically using the load_commands() function.</standard>
    <standard>Place all subcommands in the subcommands directory with _cmd.py suffix.</standard>
    <standard>Each subcommand module should define its own APP instance.</standard>
    <standard>Use proper type annotations for all command parameters and return values.</standard>
    <standard>Include descriptive docstrings for all commands following Google style.</standard>
    <standard>Use Annotated for command parameters to provide help text and options.</standard>
    <standard>Prefix async command functions with 'async' or 'aio' for clarity.</standard>
    <standard>Use proper error handling and logging in command functions.</standard>
    <examples>
      <example>
        <![CDATA[
        # Main APP initialization (stdlib/third-party imports shown for context;
        # AsyncTyperImproved, logger, aiosettings and bpdb come from project modules)
        import asyncio
        import sys
        from typing import Annotated

        import rich
        import typer

        import democracy_exe

        APP = AsyncTyperImproved()

        # Sync command example
        @APP.command()
        def version(
            verbose: Annotated[bool, typer.Option("--verbose", "-v", help="Show detailed version info")] = False,
        ) -> None:
            """Display version information."""
            rich.print(f"democracy_exe version: {democracy_exe.__version__}")
            if verbose:
                rich.print(f"Python version: {sys.version}")

        # Async command example
        @APP.command()
        async def run_bot() -> None:
            """Run the Discord bot."""
            logger.info("Running bot")
            try:
                async with DemocracyBot() as bot:
                    await bot.start()
            except Exception as ex:
                logger.exception("Bot error occurred")
                if aiosettings.dev_mode:
                    bpdb.pm()

        # Subcommand module example (dummy_cmd.py)
        APP = AsyncTyperImproved(help="dummy command")

        @APP.command("dummy")
        def cli_dummy_cmd(prompt: str) -> str:
            """Generate a new module.

            Args:
                prompt: The input prompt

            Returns:
                str: The generated output
            """
            return f"dummy cmd: {prompt}"

        @APP.command()
        async def aio_cli_dummy_cmd() -> str:
            """Returns information asynchronously."""
            await asyncio.sleep(1)
            return "slept for 1 second"
        ]]>
      </example>
    </examples>
  </cli_standards>

  <discord_testing_standards>
    <test_configuration>
      <standard>Add required linter disables for Discord.py files</standard>
      <standard>Configure proper intents for test environment</standard>
      <standard>Set up test guilds with appropriate permissions</standard>
      <standard>Configure logging for test environment</standard>
      <standard>Use consistent test data across test suite</standard>

      <file_setup>
        <standard>Add necessary linter disables at the top of test files</standard>
        <example>
          <![CDATA[
          # pylint: disable=no-member
          # pylint: disable=possibly-used-before-assignment
          # pyright: reportImportCycles=false
          # mypy: disable-error-code="index"
          # mypy: disable-error-code="no-redef"
          # pyright: reportAttributeAccessIssue=false

          import pytest
          import discord
          import discord.ext.test as dpytest
          from discord.ext import commands
          from typing import AsyncGenerator, Generator
          ]]>
        </example>
      </file_setup>

      <bot_configuration>
        <standard>Set up bot with all required intents for testing</standard>
        <standard>Configure proper command prefix and settings</standard>
        <standard>Initialize bot with test-specific settings</standard>
        <example>
          <![CDATA[
          @pytest.fixture
          async def bot() -> AsyncGenerator[commands.Bot, None]:
              """Create a DemocracyBot instance for testing.

                  Yields:
                      commands.Bot: DemocracyBot instance with test configuration
              """
              # Configure intents
              intents = discord.Intents.default()
              intents.members = True
              intents.message_content = True
              intents.messages = True
              intents.guilds = True

              # Create DemocracyBot with test configuration
              from democracy_exe.chatbot.bot import DemocracyBot
              bot = DemocracyBot(
                  command_prefix="?",
                  intents=intents,
                  description="Test DemocracyBot instance"
              )

              # Add test-specific error handling
              @bot.event
              async def on_command_error(ctx: commands.Context, error: Exception) -> None:
                  """Handle command errors in test environment."""
                  raise error  # Re-raise for pytest to catch

              # Setup and cleanup
              await bot._async_setup_hook()  # Required for proper initialization
              dpytest.configure(bot)
              yield bot
              await dpytest.empty_queue()

          @pytest.fixture
          async def test_guild(bot: DemocracyBot) -> AsyncGenerator[discord.Guild, None]:
              """Create a test guild.

              Args:
                  bot: DemocracyBot instance

              Yields:
                  Test guild instance
              """
              guild = await dpytest.driver.create_guild()
              await dpytest.driver.configure_guild(guild)
              yield guild
              await dpytest.empty_queue()

          @pytest.fixture
          async def test_channel(test_guild: discord.Guild) -> AsyncGenerator[discord.TextChannel, None]:
              """Create a test channel.

              Args:
                  test_guild: Test guild fixture

              Yields:
                  Test channel instance
              """
              channel = await dpytest.driver.create_text_channel(test_guild)
              yield channel
              await dpytest.empty_queue()
          ]]>
        </example>
      </bot_configuration>

      <test_data_management>
        <standard>Use consistent test data across test suite</standard>
        <standard>Create fixtures for common test data</standard>
        <standard>Clean up test data after each test</standard>
        <example>
          <![CDATA[
          @pytest.fixture
          def test_data() -> dict:
              """Provide consistent test data for bot tests.

              Returns:
                  dict: Test data dictionary
              """
              return {
                  "guild_name": "Test Guild",
                  "channel_name": "test-channel",
                  "user_name": "TestUser",
                  "role_name": "TestRole",
                  "command_prefix": "?",
                  "test_message": "Hello, bot!",
                  "test_embed": discord.Embed(
                      title="Test Embed",
                      description="Test description"
                  )
              }

          @pytest.fixture(autouse=True)
          async def cleanup_test_data() -> AsyncGenerator[None, None]:
              """Clean up test data after each test."""
              yield
              await dpytest.empty_queue()
              # Reset any modified bot state
              bot = dpytest.get_config().client
              bot.clear()
          ]]>
        </example>
      </test_data_management>

      <logging_setup>
        <standard>Configure logging for test environment using structlog</standard>
        <standard>Use structlog's capture_logs context manager for testing log output</standard>
        <standard>Never use pytest's caplog fixture for structlog message verification</standard>
        <example>
          <![CDATA[
          @pytest.fixture(autouse=True)
          def setup_logging() -> None:
              """Configure structlog for the test environment."""
              import logging

              import structlog

              # Note: capture_logs() is a context manager and LogCapture is a
              # processor; both are applied per-test via
              # structlog.testing.capture_logs(), not in the global configuration.
              structlog.configure(
                  processors=[
                      structlog.contextvars.merge_contextvars,
                      structlog.processors.add_log_level,
                      structlog.processors.TimeStamper(fmt="iso"),
                      structlog.processors.StackInfoRenderer(),
                  ],
                  wrapper_class=structlog.make_filtering_bound_logger(logging.DEBUG),
                  context_class=dict,
                  logger_factory=structlog.PrintLoggerFactory(),
                  cache_logger_on_first_use=True,
              )

          @pytest.mark.asyncio
          async def test_example_event(bot: DemocracyBot) -> None:
              """Test example event logging.

              Args:
                  bot: The Discord bot instance
              """
              with structlog.testing.capture_logs() as captured:
                  # Perform the action that generates logs
                  await some_action()

                  # Check if the log message exists in the captured structlog events
                  assert any(
                      log.get("event") == "Expected Event Message" for log in captured
                  ), "Expected 'Expected Event Message' not found in logs"

                  # For multiple log checks, use multiple assertions
                  assert any(
                      log.get("event") == "Another Expected Event" for log in captured
                  ), "Expected 'Another Expected Event' not found in logs"

                  # For dynamic messages with variable content, use startswith()
                  assert any(
                      log.get("event").startswith("File created at:") for log in captured
                  ), "Expected file creation message not found in logs"

                  # For messages containing variable paths or IDs, use partial matching
                  assert any(
                      "user_123" in log.get("event") for log in captured
                  ), "Expected user ID in log message not found"
          ]]>
        </example>
      </logging_setup>
    </test_configuration>

    <message_testing_patterns>
      <standard>Use dpytest.message() to simulate user messages</standard>
      <standard>Use dpytest.verify() to check bot responses</standard>
      <standard>Always verify both message content and message type (text, embed, etc.)</standard>
      <standard>Clear message queues between tests to prevent cross-test contamination</standard>
      <standard>Test both direct messages and guild messages separately</standard>

      <message_verification>
        <standard>Use appropriate verification method based on expected response type</standard>
        <example>
          <![CDATA[
          # Text message verification
          await dpytest.message("?command")
          assert dpytest.verify().message().content("Expected response")

          # Embed verification
          await dpytest.message("?embed_command")
          assert dpytest.verify().message().embed(expected_embed)

          # Multiple message verification
          await dpytest.message("?multi_response")
          assert dpytest.verify().message().content("First response")
          assert dpytest.verify().message().content("Second response")

          # Partial content verification
          await dpytest.message("?partial")
          assert dpytest.verify().message().contains().content("partial match")
          ]]>
        </example>
      </message_verification>

      <message_queue_management>
        <standard>Clear message queue before each test using dpytest.empty_queue()</standard>
        <standard>Use verify().nothing() to ensure no unexpected messages</standard>
        <standard>Handle message queues in async context</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_no_response():
              await dpytest.message("?invalid_command")
              assert dpytest.verify().message().nothing()

          @pytest.mark.asyncio
          async def test_message_cleanup():
              # Setup
              await dpytest.empty_queue()

              # Test
              await dpytest.message("?command")
              assert dpytest.verify().message().content("Response")

              # Cleanup
              await dpytest.empty_queue()
          ]]>
        </example>
      </message_queue_management>

      <error_handling>
        <standard>Test both successful and error scenarios</standard>
        <standard>Verify error messages are properly formatted</standard>
        <standard>Test permission-based message handling</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_error_handling():
              # Test invalid command
              await dpytest.message("?invalid")
              assert dpytest.verify().message().embed(error_embed)

              # Test permission error
              await dpytest.message("?admin_only")
              assert dpytest.verify().message().contains().content("You don't have permission")

              # Test rate limiting
              for _ in range(5):  # Exceed rate limit
                  await dpytest.message("?rate_limited")
              assert dpytest.verify().message().contains().content("Rate limit exceeded")
          ]]>
        </example>
      </error_handling>

      <best_practices>
        <standard>Group related message tests together in test classes</standard>
        <standard>Test message side effects (reactions, edits, deletions)</standard>
        <standard>Mock external services that message handlers depend on</standard>
        <standard>Test error states and recovery in message handling</standard>
        <standard>Test message formatting and localization</standard>
      </best_practices>
    </message_testing_patterns>

    <command_testing_patterns>
      <standard>Test both sync and async commands separately</standard>
      <standard>Test command aliases and different prefix variations</standard>
      <standard>Test command argument parsing and validation</standard>
      <standard>Test command cooldowns and rate limiting</standard>
      <standard>Test command permissions and role-based access</standard>

      <command_verification>
        <standard>Verify command registration and availability</standard>
        <standard>Test command help and documentation</standard>
        <standard>Verify command responses in different contexts (DM vs Guild)</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_registration(bot):
              """Test command registration and help documentation."""
              # Test command exists
              assert "ping" in [cmd.name for cmd in bot.commands]

              # Test help documentation
              await dpytest.message("?help ping")
              assert dpytest.verify().message().contains().content("Returns the ping of the bot")

              # Test command aliases
              cmd = bot.get_command("ping")
              assert cmd.aliases == ["p", "latency"]
          ]]>
        </example>
      </command_verification>

      <argument_testing>
        <standard>Test required vs optional arguments</standard>
        <standard>Test argument type conversion and validation</standard>
        <standard>Test argument error handling</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_arguments():
              # Test missing required argument
              await dpytest.message("?echo")
              assert dpytest.verify().message().contains().content("Missing required argument")

              # Test invalid argument type
              await dpytest.message("?repeat abc 3")
              assert dpytest.verify().message().contains().content("Converting to integer failed")

              # Test valid arguments
              await dpytest.message("?echo Hello World")
              assert dpytest.verify().message().content("Hello World")

              # Test optional arguments with defaults
              await dpytest.message("?repeat Hello")
              assert dpytest.verify().message().content("Hello")  # Uses default count=1
          ]]>
        </example>
      </argument_testing>

      <permission_testing>
        <standard>Test commands with different permission levels</standard>
        <standard>Test owner-only and admin-only commands</standard>
        <standard>Test role-based command access</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_permissions(bot):
              # Test admin-only command
              await dpytest.message("?admin_command")
              assert dpytest.verify().message().contains().content("You must have administrator permissions")

              # Test with admin permissions
              member = dpytest.get_config().members[0]
              member.guild_permissions.administrator = True
              await dpytest.message("?admin_command")
              assert dpytest.verify().message().content("Admin command executed")

              # Test owner-only command
              await dpytest.message("?owner_command")
              assert dpytest.verify().message().contains().content("This command is owner-only")
          ]]>
        </example>
      </permission_testing>

      <cooldown_testing>
        <standard>Test command cooldown implementation</standard>
        <standard>Test cooldown bypass for privileged users</standard>
        <standard>Test cooldown error messages</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_command_cooldowns():
              # First command use
              await dpytest.message("?cooldown_cmd")
              assert dpytest.verify().message().content("Command executed")

              # Second command use (should be on cooldown)
              await dpytest.message("?cooldown_cmd")
              assert dpytest.verify().message().contains().content("is on cooldown")

              # Test admin bypass
              member = dpytest.get_config().members[0]
              member.guild_permissions.administrator = True
              await dpytest.message("?cooldown_cmd")
              assert dpytest.verify().message().content("Command executed")
          ]]>
        </example>
      </cooldown_testing>

      <subcommand_testing>
        <standard>Test subcommand registration and hierarchy</standard>
        <standard>Test subcommand argument parsing</standard>
        <standard>Test subcommand-specific permissions</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_subcommands():
              # Test parent command
              await dpytest.message("?settings")
              assert dpytest.verify().message().contains().content("Available settings")

              # Test subcommand
              await dpytest.message("?settings prefix ?")
              assert dpytest.verify().message().content("Prefix updated to ?")

              # Test nested subcommand
              await dpytest.message("?settings role add @Role")
              assert dpytest.verify().message().content("Role added")
          ]]>
        </example>
      </subcommand_testing>

      <best_practices>
        <standard>Group related command tests in test classes</standard>
        <standard>Test command interactions and side effects</standard>
        <standard>Mock external services used by commands</standard>
        <standard>Test command error states and recovery</standard>
        <standard>Test command output formatting and localization</standard>
      </best_practices>
    </command_testing_patterns>

    <event_testing_patterns>
      <standard>Test both synchronous and asynchronous event handlers</standard>
      <standard>Test event registration and deregistration</standard>
      <standard>Test event propagation and cancellation</standard>
      <standard>Clean up event-generated files after testing</standard>
      <standard>Test event payload handling and validation</standard>

      <session_cleanup>
        <standard>Clean up temporary files created during testing</standard>
        <standard>Use pytest_sessionfinish for global cleanup</standard>
        <example>
          <![CDATA[
          import glob
          import os

          import pytest


          def pytest_sessionfinish(session: pytest.Session, exitstatus: int) -> None:
              """Code to execute after all tests.

              Args:
                  session: The pytest session object
                  exitstatus: The exit status code
              """
              # Clean up attachment files created by dpytest
              print("\n-------------------------\nClean dpytest_*.dat files")
              file_list = glob.glob("./dpytest_*.dat")
              for file_path in file_list:
                  try:
                      os.remove(file_path)
                  except Exception:
                      print("Error while deleting file:", file_path)
          ]]>
        </example>
      </session_cleanup>

      <event_handlers>
        <standard>Test event handler registration and execution</standard>
        <standard>Verify event handler receives correct event data</standard>
        <standard>Test multiple handlers for same event</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_on_message_event(bot):
              """Test message event handling."""
              # Send test message
              test_message = "Hello bot!"
              await dpytest.message(test_message)

              # Verify handler received message
              assert dpytest.verify().message().content(test_message)

              # Test message processing
              assert dpytest.get_message().content == test_message

          @pytest.mark.asyncio
          async def test_on_member_join(bot):
              """Test member join event handling."""
              # Add test member
              test_member = await dpytest.member_join()

              # Verify welcome message
              assert dpytest.verify().message().contains().content("Welcome")

              # Verify member in guild
              guild = dpytest.get_config().guilds[0]
              assert test_member in guild.members
          ]]>
        </example>
      </event_handlers>

      <attachment_testing>
        <standard>Test file upload and attachment handling</standard>
        <standard>Verify attachment metadata and content</standard>
        <standard>Clean up attachment files after tests</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_attachment_handling(bot):
              """Test handling of message attachments."""
              import io

              # Create test file
              test_content = b"Test file content"
              await dpytest.message("?upload", file=discord.File(
                  fp=io.BytesIO(test_content),
                  filename="test.txt"
              ))

              # Verify bot processed attachment
              assert dpytest.verify().message().contains().content("File uploaded")

              # Verify file cleanup
              await dpytest.empty_queue()
              # Attachment files (dpytest_*.dat) will be cleaned up by pytest_sessionfinish
          ]]>
        </example>
      </attachment_testing>

      <error_handling>
        <standard>Test error handling in event handlers</standard>
        <standard>Verify error events are properly caught and logged</standard>
        <standard>Test recovery from event handling failures</standard>
        <example>
          <![CDATA[
          @pytest.mark.asyncio
          async def test_event_error_handling(bot):
              """Test error handling in events."""
              import structlog

              # Use capture_logs (not caplog), per the logging standards above
              with structlog.testing.capture_logs() as captured:
                  # Trigger error condition
                  await dpytest.message("?error_trigger")

              # Verify error was logged
              assert any(
                  "Error in event handler" in log.get("event", "") for log in captured
              ), "Expected error log not found in captured structlog events"

              # Verify bot remains operational
              await dpytest.message("?ping")
              assert dpytest.verify().message().contains().content("pong")
          ]]>
        </example>
      </error_handling>

      <best_practices>
        <standard>Group related event tests by functionality</standard>
        <standard>Test both success and failure scenarios</standard>
        <standard>Clean up resources after event testing</standard>
        <standard>Mock external services used in event handlers</standard>
        <standard>Test event handler order and priority</standard>
      </best_practices>
    </event_testing_patterns>

    <test_state_management>
      <standard>Use a global test state flag to modify behavior in test environments</standard>
      <standard>Handle permissions differently when testing vs production</standard>
      <standard>Allow test bypass of certain checks when appropriate</standard>
      <standard>Add dpytest-specific state flags for Discord testing</standard>
      <standard>Use test state to control bot behavior in test environment</standard>
      <example>
        <![CDATA[
        # Global test state
        is_dpytest = False  # Default to production mode
        is_test_environment = False

        def is_owner(ctx: commands.Context) -> bool:
            """Check if user is owner or in test environment.

            Args:
                ctx: The command context

            Returns:
                bool: True if user is owner or in test environment
            """
            if is_dpytest or is_test_environment:
                return True
            return ctx.author.id == bot.owner_id

        def check_permissions(ctx: commands.Context, *perms: str) -> bool:
            """Check if user has required permissions or is in test environment.

            Args:
                ctx: The command context
                *perms: Required permissions

            Returns:
                bool: True if user has permissions or in test environment
            """
            if is_dpytest or is_test_environment:
                return True
            return all(getattr(ctx.channel.permissions_for(ctx.author), perm, False) for perm in perms)

        @pytest.fixture(autouse=True)
        def setup_test_state() -> Generator[None, None, None]:
            """Setup test state for all tests."""
            global is_dpytest, is_test_environment
            is_dpytest = True
            is_test_environment = True
            yield
            is_dpytest = False
            is_test_environment = False
        ]]>
      </example>
    </test_state_management>
  </discord_testing_standards>
</python_standards>
</cursorrules>
bun
dockerfile
golang
javascript
jupyter notebook
just
langchain
less

First seen in:

bossjones/democracy-exe

Used in 1 repository

TypeScript
# Cursor Rules

You are an expert in Astro, Storybook, TypeScript, CSS, ESLint, npm, and modern UI development.

  General Coding Guidelines

- Document code thoroughly, providing clear and detailed comments.
- Use descriptive names for variables, classes, functions, and files.
- Update comments when editing files; avoid removing relevant comments.
- Use comments to separate code into logical sections.
- Use comments to indicate the purpose of the code.
- Avoid duplicate code by reusing the same styles and components. There's no need to define the same style or component in multiple places.

  Code Style and Structure

- Write concise, technical TypeScript code with accurate examples
- Ensure all code follows TDD (Test Driven Development): write tests first, verify that the tests satisfy the requirements given, then write and iteratively improve the code to pass the tests.
- Organize files with a logical structure: component files, subcomponents, helpers, static content, and types.
- Use descriptive, verbose file names that indicate their purpose.

  Documentation References

- Follow Astro's official documentation: <https://docs.astro.build>.
- Use Storybook's documentation for building and testing UI components: <https://storybook.js.org/docs>.
- Refer to TypeScript's official documentation for type definitions and practices: <https://www.typescriptlang.org/docs>.
- Follow CSS specifications and best practices from the MDN documentation: <https://developer.mozilla.org/en-US/docs/Web/CSS>.
- Use ESLint documentation for linting configuration and rules: <https://eslint.org/docs/latest>.
- Refer to npm's official documentation for managing dependencies: <https://docs.npmjs.com>.
- For each dependency, refer to the documentation for the latest version of the dependency. For more specificity, refer to package.json.

  Styling and UI

- Use CSS and Tailwind for styling components; integrate with PostCSS as required.
- Prioritize responsive design using Astro's built-in capabilities and CSS media queries.
- Maintain strict WCAG compliance and high accessibility (a11y) standards, including ARIA roles.
- Follow consistent naming conventions for CSS classes.
- Make the UI look like Flux UI: <https://fluxui.dev/docs>

  Testing

- Write unit tests for individual components using the Storybook Testing Library.
- Write integration tests for workflows and key user interactions.
- Implement test cases before writing any feature code, adhering to TDD.
- Ensure all test cases are updated when modifying existing functionality.

  Configs
- TypeScript code should adhere to the tsconfig.json file.
- Astro code should adhere to the astro.config.ts and types/astro.ts files.
- CSS code should adhere to the tailwind.config.ts, styles/globals.css, and types/theme.ts files.
- Storybook code should adhere to the .storybook/main.ts, .storybook/preview.ts, and types/storybook.ts files.

  Tooling

- Configure ESLint for consistent code quality and enforce project rules.
- Use npm scripts to manage tasks like builds, tests, and deployments.
- Update Cursor's .cursorrules file dynamically when adding dependencies to the project. For example, if PostCSS is added, include relevant rules and documentation references for their usage.
- Rely on Astro's and Storybook's tools for development and component visualization.

  Previous Errors To Avoid

- Do not include React in this project. This project uses Astro.
- Do not include lit or any web component support in this project. This project uses Astro.
- Do not include JavaScript in this project. This project uses TypeScript.
- Do not use playwright in this project. This project uses Storybook for testing.

  Key Conventions

  1. Maintain strict TypeScript checking to ensure type safety.
  2. Use Astro's and Storybook's features to optimize component-driven development.
  3. Ensure compatibility with modern browsers and performance best practices.
  4. Test all UI changes for WCAG compliance and accessibility.

  Directory Structure

src/
├── components/                         # Reusable UI components
│   └── ComponentName/
│       ├── ComponentName.astro         # Component implementation
│       ├── ComponentName.test.ts       # Component tests
│       └── ComponentName.stories.ts    # Storybook stories
├── layouts/                            # Page layouts and templates
├── pages/                              # Astro pages and routing
├── styles/                             # Global styles and Tailwind configuration
│   ├── globals.css                     # Global CSS styles
│   └── themes/                         # Theme-specific styles
├── types/                              # TypeScript type definitions
│   ├── astro.ts                        # Astro-specific types
│   ├── components.ts                   # Component prop types
│   ├── theme.ts                        # Theme and styling types
│   └── storybook.ts                    # Storybook configuration types
├── utils/                              # Shared utilities and helpers
└── assets/                             # Static assets (images, fonts, etc.)

.storybook/                             # Storybook configuration
├── main.ts                             # Main Storybook config
└── preview.ts                          # Preview configuration

  Directory Usage

- components/: Each component should have its own directory containing the component file, tests, and stories. Follow the pattern:
  - ComponentName.astro - The main component implementation
  - ComponentName.test.ts - Unit and integration tests
  - ComponentName.stories.ts - Storybook documentation and examples
  - index.ts - (optional) For exporting multiple related components

- layouts/: Contains reusable page layouts. Each layout should:
  - Be a single .astro file
  - Handle common page elements (header, footer, etc.)
  - Accept slots for content injection
  - Support responsive design patterns

- pages/: Contains Astro pages that map to routes. Each page should:
  - Use appropriate layouts
  - Handle SEO metadata
  - Implement page-specific logic
  - Follow Astro's file-based routing conventions

- styles/: Manages all styling concerns:
  - globals.css - Contains root variables and reset styles
  - themes/ - Contains theme-specific variables and overrides
  - Follow utility-first approach with Tailwind
  - Keep component-specific styles in their component files

- types/: Contains all TypeScript type definitions:
  - Organize by domain (components, theme, etc.)
  - Export reusable types and interfaces
  - Maintain strict type checking
  - Document complex types with JSDoc comments

- utils/: Contains shared helper functions (a small example follows this section):
  - Group related utilities in separate files
  - Export named functions (no default exports)
  - Document with JSDoc comments
  - Include unit tests for complex utilities

- assets/: Manages static files:
  - Organize by type (images/, fonts/, etc.)
  - Use appropriate formats for web (webp, woff2, etc.)
  - Include source files when needed
  - Optimize for production

- .storybook/: Configures Storybook:
  - main.ts - Configure addons and webpack
  - preview.ts - Set up global decorators and parameters
  - Follow Storybook best practices
  - Maintain documentation standards
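
  A minimal sketch of the utils/ conventions above (named export plus JSDoc; the helper itself is illustrative, not part of the original rules):

/**
 * Converts a page title into a URL-friendly slug.
 * @param title - The human-readable title.
 * @returns The lowercased, dash-separated slug.
 */
export function slugifyTitle(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}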
astro
css
eslint
html
java
javascript
npm
playwright
Artemis-Cooperative/astro-ui

Used in 1 repository

TypeScript
# Project Guidelines

This repository is a monorepo that provides the project's widget functionality and dashboard.
Follow the guidelines below when developing:

1. Tech stack:
  - Dashboard: uses TypeScript, Next.js App Router, React, Shadcn UI, Radix UI and Tailwind, clerk, stripe, TanStack Query, TanStack Table, Drizzle, and supabase.
  - Widget: uses TypeScript, Next.js App Router, Web Components, Shadcn UI, Radix UI, Tailwind CSS, and Supabase.

2. Code style:
  - Write concise, technical TypeScript code with accurate examples.
  - Use functional and declarative programming patterns; avoid classes (except the widget's Web Components).
  - Prefer iteration and modularization; avoid code duplication.
  - Use descriptive variable names that include auxiliary verbs (e.g., isLoading, hasError).
  - File structure: exported component, subcomponents, helpers, static content, then types.
  - Use map, filter, find, reduce, and similar methods instead of forEach or for-of loops.

3. Comments:
  - Add a JSDoc comment at the top of each file using @file.
  - Describe the script's overview, main specifications, and limitations simply enough for an elementary-school student to understand.
  - If there is complex logic, explain the processing flow. Avoid duplication, and do not write @returns {JSX.Element}.
  - Include @path and @example.

4. Naming conventions:
  - Use lowercase with dashes for directory names (e.g., components/auth-wizard).
  - Use named exports for components.

5. TypeScript:
  - Use TypeScript for all code, and prefer interfaces over types.
  - Use maps instead of enums.
  - Use function components with TypeScript interfaces.
  - Do not use the any type.

6. Syntax and formatting:
  - Avoid unnecessary curly braces in conditionals; use concise syntax.
  - Use declarative JSX.

7. UI and styling:
  - Use Shadcn UI, Radix, and Tailwind.
  - Implement responsive design with a mobile-first approach.

8. Performance optimization:
  - Dashboard:
    - Minimize the use of 'use client', useEffect, and setState; favor React Server Components (RSC).
    - Wrap client components in Suspense and provide a fallback.
    - Build client components at the smallest practical granularity, including only the functionality they need.
  - Widget:
    - Keep the bundle size as small as possible and optimize performance.
  - Shared:
    - Use dynamic loading for non-critical components.
    - Image optimization: use the WebP format, include size data, and implement lazy loading.

9. Key conventions:
  - Dashboard: use 'nuqs' for URL search parameter state management.
  - Optimize Web Vitals (LCP, CLS, FID).

10. Application development guidelines:
    - Use Supabase for database and storage management, but not for authentication.
    - Widget: build Web Components with React and Vite, using a class-based approach and Shadow DOM.
    - Dashboard: implement Next.js Server Components and Server Actions to improve performance.
    - Use TanStack Query and TanStack Table for data fetching and table management.
    - Handle errors appropriately on both the client and the server, and provide user-friendly messages.
    - Implement internationalization (i18n) support for at least English and Japanese.
    - Implement a project management system that lets users create, view, and manage multiple projects.
    - Generate a unique widget code for each project so it can easily be embedded in external sites.
    - Manage sensitive information and API keys with environment variables.
    - Implement appropriate security measures such as input sanitization and a Content Security Policy (CSP).
    - Ensure responsive design in both the dashboard and the widget components.
    - Write comprehensive unit and integration tests, including authentication flows.
    - Use a modular architecture to ensure scalability and maintainability.

11. Dependency management:
    - Use pnpm as the package manager.
    - Update dependencies regularly and apply security patches.
    - Manage monorepo dependencies with pnpm-workspace.yaml.

12. Code quality and formatting:
    - Use Biome for code quality management and formatting.
    - Place the Biome configuration file (biome.json) at the project root to enforce a consistent code style.
    - Run Biome before committing to ensure code quality.

13. Client component design:
    - Build client components at the smallest practical granularity.
    - Design each client component to have a single responsibility.
    - Clearly define the boundary between client and server components to optimize performance.
    - Use client components only when state management or event handling is required.
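
A minimal sketch of the section 3 file-header convention (the file, path, and component names here are hypothetical):

/**
 * @file Embed-code copy button for a project widget.
 * Overview: shows a project's embed snippet and lets the user copy it.
 * Main specifications: reads the project ID, builds the snippet, and copies it to the clipboard.
 * Limitations: clipboard access requires a secure (https) context.
 * @path src/components/embed-code-button/embed-code-button.tsx
 * @example
 * <EmbedCodeButton projectId="abc123" />
 */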
clerk
css
drizzle-orm
handlebars
html
javascript
next.js
npm

First seen in:

chibataku0815/forwidg

Used in 1 repository

JavaScript
TypeScript
You are a Senior Front-End Developer and an Expert in ReactJS, Vite, JavaScript, TypeScript, HTML, CSS and modern UI/UX frameworks (e.g., TailwindCSS, Shadcn, Radix). You carefully provide accurate, factual, thoughtful answers, and are a genius at reasoning. Before responding to a prompt, you carefully consider the user's request and provide a thoughtful response. Write out your thoughts and rationale in <thinking> tags before providing an answer. If you need to implement a feature, create a plan after thinking through the problem; write out your plan in <plan> tags before writing out the code.

- Follow the user’s requirements carefully & to the letter.
- First think step-by-step - describe your plan for what to build in pseudocode, written out in great detail.
- Confirm, then write code!
- Always write correct, best-practice, DRY (Don't Repeat Yourself), bug-free, fully functional, working code that also follows the Code Implementation Guidelines listed below.
- Fully implement all requested functionality.
- If you think there might not be a correct answer, you say so.
- If you do not know the answer, say so, instead of guessing.

### Coding Environment
The user asks questions about the following coding languages:
- ReactJS
- JavaScript
- TypeScript
- TailwindCSS
- HTML
- CSS
- Vite

### Code Implementation Guidelines
Follow these rules when you write code:
- Use early returns whenever possible to make the code more readable.
- Always use Tailwind classes for styling HTML elements; avoid using CSS or tags.
- Use descriptive variable and function/const names. Also, event functions should be named with a “handle” prefix, like “handleClick” for onClick and “handleKeyDown” for onKeyDown.
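
A minimal sketch applying these rules (the Counter component is illustrative, assuming a standard React + Tailwind setup):

import { useState } from "react";

type CounterProps = { label?: string };

export const Counter = ({ label }: CounterProps) => {
  const [count, setCount] = useState(0);

  // Event handler named with the "handle" prefix.
  const handleClick = () => setCount((previousCount) => previousCount + 1);

  // Early return keeps the main render path flat.
  if (!label) return null;

  return (
    <button
      className="rounded-md bg-blue-600 px-4 py-2 text-white hover:bg-blue-700"
      onClick={handleClick}
    >
      {label}: {count}
    </button>
  );
};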
css
dockerfile
html
java
javascript
radix-ui
react
shadcn/ui
Kabilan108/kabilan108.com

Used in 1 repository

TypeScript
Custom Technical Instructions for Developing the SuperAdmin App

1. Environment Setup

	1.	Languages & Frameworks:
	•	Use Next.js with the App Router for the frontend.
	•	Use TypeScript for strict typing and maintainable code.
	•	Use Supabase for backend services, including database and authentication.
	2.	Libraries:
	•	UI: ShadCN UI with Radix UI for a consistent, accessible component library.
	•	Error Tracking: Integrate Sentry to monitor and debug issues.
	•	State Management: Use built-in React Context or Zustand for managing state.
	3.	Database & Supabase:
	•	Rely on Supabase’s built-in PostgreSQL database with RLS (Row-Level Security).
	•	Use Supabase’s authentication system for managing user roles and access.

2. Authentication

	1.	Role-Based Access Control (RBAC):
	•	Assign the superadmin role during user creation using Supabase’s custom auth.users table or metadata.
	•	Implement policies for database tables to restrict access based on the user’s role.
	•	Example Policy:

CREATE POLICY "Allow superadmin access"
ON public.audit_logs
FOR ALL
USING (auth.role() = 'superadmin');


	2.	Middleware:
	•	Implement a middleware function to check for auth.role() === 'superadmin' before allowing access to the app’s pages.
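
A minimal middleware sketch (assumptions: the Supabase access token is readable from a cookie named sb-access-token and the role is stored in the JWT's app_metadata; adjust both to your actual auth setup):

// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

function getRoleFromRequest(request: NextRequest): string | null {
  // Decode (without verifying) the JWT payload; Supabase verifies the token server-side.
  const token = request.cookies.get('sb-access-token')?.value;
  if (!token) return null;
  try {
    const payloadSegment = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
    const payload = JSON.parse(atob(payloadSegment));
    return payload?.app_metadata?.role ?? null;
  } catch {
    return null;
  }
}

export function middleware(request: NextRequest) {
  if (getRoleFromRequest(request) !== 'superadmin') {
    return NextResponse.redirect(new URL('/auth/login', request.url));
  }
  return NextResponse.next();
}

export const config = { matcher: ['/dashboard/:path*'] };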

3. App Architecture

	1.	Folder Structure:

src/
  app/
    dashboard/     # Main SuperAdmin dashboard page
    auth/          # Auth-related pages (login/logout)
  components/       # Reusable UI components
  lib/              # Supabase client, API utilities
  hooks/            # Custom React hooks
  styles/           # Global styles with Tailwind
  utils/            # Helper functions
  types/            # TypeScript types and interfaces


	2.	Page Setup:
	•	Dashboard:
	•	Display an overview of users, audit logs, and system stats.
	•	User Management:
	•	CRUD interface for managing users and roles.
	•	Audit Logs:
	•	Filterable list of actions performed in the system.
	•	System Info:
	•	View Supabase schema versions and configuration details.

4. Database Integration

	1.	Supabase Setup:
	•	Add RLS policies to secure data access.
	•	Use database relationships to link tables (e.g., organization_id).
	•	Example relationship for audit_logs:

ALTER TABLE audit_logs
ADD CONSTRAINT fk_organization_id FOREIGN KEY (organization_id)
REFERENCES organizations(id);


	2.	Supabase Client:
	•	Configure the client in lib/supabase.ts:

import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.NEXT_PUBLIC_SUPABASE_URL!, process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!);

export default supabase;

5. UI/UX Design

	1.	UI Framework:
	•	Use ShadCN UI components for dropdowns, modals, and tables.
	•	Apply Radix UI primitives for accessible interactions.
	2.	Responsive Design:
	•	Follow mobile-first principles with Tailwind CSS.
	•	Add dark mode support.
	3.	Interactive Data Views:
	•	Implement searchable and filterable tables for audit_logs, users, etc.
	•	Add charts (e.g., with Chart.js or Recharts) for system metrics.

6. Advanced Features

	1.	Activity Logs:
	•	Display filtered logs with real-time updates using Supabase realtime subscriptions.
	•	Example:

// Assumes SWR is installed; adjust the supabase import path to your project.
import useSWR from 'swr';

import supabase from '../lib/supabase';

const { data } = useSWR('audit_logs', fetchLogs);

async function fetchLogs() {
  const { data } = await supabase.from('audit_logs').select('*');
  return data;
}
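
For the real-time updates mentioned above, a hedged sketch using supabase-js v2 channels (the channel name is arbitrary; table and schema follow the example above):

const channel = supabase
  .channel('audit-logs-changes')
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'audit_logs' },
    (payload) => {
      console.log('New audit log entry:', payload.new);
    }
  )
  .subscribe();

// When the component unmounts:
// supabase.removeChannel(channel);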


	2.	Notifications:
	•	Push important updates to the superadmin via toast notifications or a notification center.
	3.	Error Monitoring:
	•	Use Sentry to log frontend and backend errors.

7. Testing & Deployment

	1.	Testing:
	•	Use Jest and React Testing Library for unit tests.
	•	Integrate Playwright for end-to-end testing of admin workflows.
	2.	Deployment:
	•	Deploy to Vercel for scalable hosting.
	•	Use environment variables for Supabase keys:

NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_ANON_KEY=


	3.	Error Logging in Production:
	•	Configure Sentry for frontend and backend.

8. Security

	1.	RLS in Supabase:
	•	Ensure all data access respects RLS policies.
	2.	Environment Variables:
	•	Never expose sensitive keys in the frontend. Use server-side functions when needed.
	3.	Authentication & Authorization:
	•	Secure sensitive routes with role checks in Next.js middleware.

9. Documentation

	1.	Create developer documentation for:
	•	Setting up the project.
	•	Managing users and roles.
	•	Deploying the app.
	2.	Use tools like Storybook for UI documentation.

Let me know if you need help with any specific steps!
css
golang
javascript
jest
next.js
playwright
postgresql
radix-ui
samedayramps/tiny-church-app-nextjs

Used in 1 repository