Awesome Cursor Rules Collection

Python
# MediaHost Project Cursor Rules

# Project root directory
root: /

# Python version
python_version: 3.9

# Main application file
main_file: app/main.py

# Test directory
test_dir: tests/

# Configuration files
config_files:
  - .env
  - .env.example
  - pyproject.toml
  - requirements.txt
  - environment.yml
  - Tiltfile
  - docker-compose.yml

# Ignore directories
ignore_dirs:
  - .git
  - .venv
  - __pycache__

# Custom file associations
file_associations:
  .py: python
  .yml: yaml
  .yaml: yaml
  .md: markdown
  .toml: toml
  .sh: shell
  .sql: sql

# Linter configurations
linters:
  python: flake8
  yaml: yamllint
  sql: sqlfluff

# Formatter configurations
formatters:
  python: black
  yaml: prettier
  sql: sqlformat

# Custom rules
rules:
  - name: use_f_strings
    description: Prefer f-strings over .format() or % formatting
    pattern: '\.format\(|%[sd]'
    message: Consider using an f-string instead

  - name: avoid_print_statements
    description: Avoid using print statements in production code
    pattern: 'print\('
    message: Consider using a logging statement instead of print

  - name: use_type_hints
    description: Encourage the use of type hints
    pattern: 'def [a-zA-Z_]+\([^:]+\):'
    message: Consider adding type hints to function parameters and return values

# Project-specific conventions
conventions:
  - All new features should have corresponding unit tests
  - Use snake_case for function and variable names
  - Use CamelCase for class names
  - Keep functions and methods under 50 lines where possible
  - Use docstrings for all public functions, classes, and modules
  - Follow PEP 8 style guide for Python code
  - Use meaningful variable and function names
  - Keep SQL queries in separate .sql files under config/sql/ directory

# Dependencies
dependencies:
  - streamlit==1.38.0
  - mysql-connector-python==8.0.33
  - nats-py==2.2.0
  - minio==7.1.15
  - python-dotenv==1.0.0
  - stripe==5.4.0
  - pillow==9.5.0
  - bcrypt==4.0.1
  - PyJWT==2.6.0
  - faker==18.9.0
  - requests==2.31.0
  - prometheus_client==0.16.0
  - streamlit-player==0.1.5
  - numpy==2.0.2
  - pandas==2.2.2
  - plotly==5.14.1
  - icalendar==4.0.7
  - pytz==2021.1
  - schedule==1.1.0
  - werkzeug==2.0.2
  - google-auth-oauthlib==0.4.6
  - google-auth-httplib2==0.1.0
  - google-api-python-client==2.23.0

# Environment variables (do not include actual values)
env_vars:
  - DB_HOST
  - DB_USER
  - DB_PASSWORD
  - DB_NAME
  - MINIO_ENDPOINT
  - MINIO_ACCESS_KEY
  - MINIO_SECRET_KEY
  - MINIO_SECURE
  - NATS_URL
  - STRIPE_SECRET_KEY
  - STRIPE_PUBLISHABLE_KEY
  - FRONTEND_URL
  - JWT_SECRET
  - GOOGLE_ANALYTICS_KEY_FILE
  - GOOGLE_ANALYTICS_VIEW_ID
  - DOMAIN_NAME
  - SITE_NAME
  - REPO_URL
  - ADMIN_EMAIL
  - SECRET_KEY
  - ALGORITHM
  - ACCESS_TOKEN_EXPIRE_MINUTES
  - PROMETHEUS_PORT

# SQL files
sql_files:
  - config/sql/01_users.sql
  - config/sql/02_events.sql
  - config/sql/03_videos.sql
  - config/sql/04_merchandise.sql
  - config/sql/05_page_blocks.sql
  - config/sql/06_comments.sql
  - config/sql/07_ratings.sql
  - config/sql/08_categories.sql
  - config/sql/09_event_categories.sql
  - config/sql/10_tags.sql
  - config/sql/11_event_tags.sql
  - config/sql/12_event_views.sql
  - config/sql/13_event_access.sql
  - config/sql/14_merchandise_purchases.sql
  - config/sql/15_blog_posts.sql
  - config/sql/16_notifications.sql
analytics
docker
dockerfile
golang
jwt
mysql
oauth
prettier

First seen in:

dodwmd/mediahost

Used in 1 repository

Shell
use simple and easy-to-understand language

Before responding to any request, follow these steps:

# Fundamental Principles
- write clean, simple, readable code
- reliability is the top priority - if you can't make it reliable, don't build it
- implement features in the simplest possible way
- keep files small and focused (<200 lines)

# Error fixing
- consider multiple possible causes before deciding. Do not jump to conclusions.
- Explain the problem in plain English.

1. Request Analysis
   - Determine task type (code creation, debugging, architecture, etc.)
   - Identify languages and frameworks involved
   - Note explicit and implicit requirements
   - Define core problem and desired outcome
   - Consider project context and constraints

2. Solution Planning
   - Break down the solution into logical steps
   - Consider modularity and reusability
   - Identify necessary files and dependencies
   - Evaluate alternative approaches
   - Plan for testing and validation

3. Implementation Strategy
   - Choose appropriate design patterns
   - Consider performance implications
   - Plan for error handling and edge cases
   - Ensure accessibility compliance
   - Verify best practices alignment

## Code Style and Structure

### General Principles
- Write concise, readable TypeScript code
- Use functional and declarative programming patterns
- Follow DRY (Don't Repeat Yourself) principle
- Implement early returns for better readability
- Structure components logically: exports, subcomponents, helpers, types

### Naming Conventions
- Use descriptive names with auxiliary verbs (isLoading, hasError)
- Prefix event handlers with "handle" (handleClick, handleSubmit)
- Use lowercase with dashes for directories (components/auth-wizard)
- Favor named exports for components
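
A short sketch of these conventions; the file path, component, and props are illustrative, not from the project:

```typescript
// components/auth-wizard/login-form.tsx (illustrative path, kebab-case directory)
import type { FormEvent } from 'react'

export const LoginForm = ({ isLoading, hasError }: { isLoading: boolean; hasError: boolean }) => {
  // Event handler prefixed with "handle"
  const handleSubmit = (event: FormEvent<HTMLFormElement>) => {
    event.preventDefault()
    if (isLoading) return // early return for readability
    // ...submit logic
  }

  return (
    <form onSubmit={handleSubmit}>
      {hasError && <p role="alert">Something went wrong</p>}
      <button type="submit" disabled={isLoading}>Sign in</button>
    </form>
  )
}
```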

### TypeScript Usage
- Use TypeScript for all code
- Prefer interfaces over types; use types for components
- Avoid enums; use const maps instead
- Implement proper type safety and inference
- Use `satisfies` operator for type validation
- If a type is needed in multiple files, it should go in the types folder in a .d.ts file
- Avoid creating types and interfaces in utils, services, and constants files
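
A minimal sketch of the const-map and `satisfies` guidance, with illustrative names:

```typescript
// Const map instead of an enum; values stay literal and tree-shakeable
const REQUEST_STATUS = {
  idle: 'idle',
  loading: 'loading',
  error: 'error',
} as const

export type RequestStatus = (typeof REQUEST_STATUS)[keyof typeof REQUEST_STATUS]

// `satisfies` validates the shape without widening the literal types
type ThemeConfig = Record<'primary' | 'background', string>

export const theme = {
  primary: '#0f172a',
  background: '#ffffff',
} satisfies ThemeConfig
```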

### Component Architecture
- Favor React Server Components (RSC) where possible
- Minimize 'use client' directives
- Implement proper error boundaries
- Use Suspense for async operations
- Optimize for performance and Web Vitals

### State Management
- Use `useActionState` instead of deprecated `useFormState`
- Leverage enhanced `useFormStatus` with new properties (data, method, action)
- Implement URL state management with 'nuqs'
- Minimize client-side state
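
A hedged sketch of these hooks in a client component; `updateProfile` is a hypothetical Server Action imported from elsewhere:

```typescript
'use client'

import { useActionState } from 'react'
import { useFormStatus } from 'react-dom'
import { updateProfile } from '@/app/actions' // hypothetical Server Action

const SubmitButton = () => {
  // useFormStatus also exposes data, method, and action for the in-flight submission
  const { pending } = useFormStatus()
  return (
    <button type="submit" disabled={pending}>
      {pending ? 'Saving…' : 'Save'}
    </button>
  )
}

export const ProfileForm = () => {
  const [state, formAction] = useActionState(updateProfile, { message: '' })
  return (
    <form action={formAction}>
      <input name="name" aria-label="Name" />
      <SubmitButton />
      <p aria-live="polite">{state.message}</p>
    </form>
  )
}
```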

### Async Request APIs
```typescript
import { cookies, draftMode, headers } from 'next/headers'

// Always use async versions of runtime APIs
const cookieStore = await cookies()
const headersList = await headers()
const { isEnabled } = await draftMode()

// Handle async params in layouts/pages
const params = await props.params
const searchParams = await props.searchParams
```

### Data Fetching
- Fetch requests are no longer cached by default
- Use `cache: 'force-cache'` for specific cached requests
- Implement `fetchCache = 'default-cache'` for layout/page-level caching
- Use appropriate fetching methods (Server Components, SWR, React Query)
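
For instance, a Server Component might opt a single request back into caching; the endpoint and response shape are illustrative:

```typescript
// Per-segment default for fetch caching in this layout/page
export const fetchCache = 'default-cache'

export default async function ProductsPage() {
  // fetch is uncached by default, so opt this request back in explicitly
  const res = await fetch('https://api.example.com/products', { cache: 'force-cache' })
  const products: { id: string; name: string }[] = await res.json()

  return (
    <ul>
      {products.map((product) => (
        <li key={product.id}>{product.name}</li>
      ))}
    </ul>
  )
}
```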

### Route Handlers
```typescript
// Cached route handler example
export const dynamic = 'force-static'

export async function GET(
  request: Request,
  { params }: { params: Promise<Record<string, string>> }
) {
  // Route params are async in Next.js 15
  const resolvedParams = await params
  // Implementation
}
```

## UI Development

### Styling
- Use SASS styles with a mobile-first approach
- Implement Shadcn UI and Radix UI components
- Follow consistent spacing and layout patterns
- Ensure responsive design across breakpoints
- Use CSS variables for theme customization

### Accessibility
- Implement proper ARIA attributes
- Ensure keyboard navigation
- Provide appropriate alt text
- Follow WCAG 2.1 guidelines
- Test with screen readers

### Performance
- Optimize images (WebP, sizing, lazy loading)
- Implement code splitting
- Use `next/font` for font optimization
- Configure `staleTimes` for client-side router cache
- Monitor Core Web Vitals
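
A sketch of `next/font` in a root layout; the Inter font is an arbitrary choice:

```typescript
// app/layout.tsx (illustrative root layout)
import { Inter } from 'next/font/google'
import type { ReactNode } from 'react'

const inter = Inter({ subsets: ['latin'], display: 'swap' })

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" className={inter.className}>
      <body>{children}</body>
    </html>
  )
}
```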

## Testing and Validation

### Code Quality
- Implement comprehensive error handling
- Write maintainable, self-documenting code
- Follow security best practices
- Ensure proper type coverage
- Use ESLint and Prettier

### Testing Strategy
- Plan for unit and integration tests
- Implement proper test coverage
- Consider edge cases and error scenarios
- Validate accessibility compliance
- Use React Testing Library

Remember: Prioritize clarity and maintainability while delivering robust, accessible, and performant solutions aligned with the latest React 19, Next.js 15, and Vercel AI SDK features and best practices.

In React, all functions should be arrow functions, including the component function itself, which should have an FC declaration and a ReactElement return type, as sketched below.
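
A minimal sketch of that convention; the `Greeting` component and its props are illustrative:

```typescript
import type { FC, ReactElement } from 'react'

type GreetingProps = { name: string }

// Arrow-function component with an FC declaration and explicit ReactElement return type
export const Greeting: FC<GreetingProps> = ({ name }): ReactElement => {
  // Helpers are arrow functions as well
  const formatName = (value: string): string => value.trim()

  return <p>Hello, {formatName(name)}!</p>
}
```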
dockerfile
eslint
golang
javascript
lua
next.js
prettier
radix-ui

First seen in:

vianch/config-files

Used in 1 repository

Vim Snippet
always use Chinese
lua
python
vim script
vim snippet

First seen in:

zuozuo/nvim-config

Used in 1 repository

Lua
You are an expert in Roblox Luau programming, with deep knowledge of its unique features and common use cases in Roblox game development.

Key Principles
- Write clear, concise Luau code that follows Roblox best practices
- Leverage Luau's strict type checking for better code reliability
- Use proper error handling and Promises effectively
- Follow consistent naming conventions and code organization
- Optimize for performance while maintaining readability
- Consider client/server context before implementing code
- Implement robust anti-cheat measures and exploit prevention
- Utilize the Spring module for smooth animations in GUI and camera movements

Detailed Guidelines
- Prioritize Clean, Efficient Code: Write clear, optimized code that follows Roblox's performance guidelines. Balance efficiency with readability.
- Focus on Player Experience: Ensure code contributes to smooth gameplay, efficient replication, and minimal client-side lag.
- Create Modular & Reusable Code: Use Roblox's service architecture and break functionality into self-contained ModuleScripts.
- Adhere to Coding Standards: Follow Roblox's coding standards and Luau best practices. Use type annotations when possible.
- Ensure Comprehensive Testing: Use TestEZ for unit testing and implement proper test coverage for critical systems.
- Prioritize Security: Follow Roblox's security guidelines, implement proper filtering, and secure remote events/functions.
- Enhance Code Maintainability: Write self-documenting code with proper type annotations and clear comments.
- Optimize Performance: Consider Roblox-specific optimizations like instance caching, proper event handling, and efficient data structures.
- Implement Robust Error Handling: Use pcall/xpcall and implement proper error reporting using Roblox's Debug library.

Anti-Cheat & Security Measures
- Implement server-side validation for all critical game actions
- Add sanity checks for player positions and movements
- Use encryption for sensitive data transmission
- Monitor and log suspicious player behavior
- Add checks for impossible actions or states
- Use server-authoritative design patterns

Exploit Prevention Systems
- Implement server-side hit detection and validation
- Add cooldown systems with server-side enforcement
- Use checksums for critical game state verification
- Implement anti-noclip detection systems
- Use server-side physics validation
- Use secure random number generation

Roblox-Specific Guidelines
- Use ModuleScripts for organizing code
- Implement proper client-server communication
- Utilize Roblox services effectively
- Follow Roblox's security guidelines
- Use DataStores properly for data persistence
- Verify script context (Server/Client) before accessing specific services

Package Usage
- Leverage Promise library for async operations
- Consider Signal for custom events
- Check ReplicatedStorage.Packages for available modules
- Use the Spring module for GUI animations and camera effects

Naming Conventions
- Use PascalCase for classes/components
- Use camelCase for variables/functions
- Use SCREAMING_SNAKE_CASE for constants
- Prefix private members with underscore
- Use descriptive names reflecting purpose

Code Organization
- Separate client/server logic appropriately
- Use ModuleScripts for shared code
- Organize services into their own modules
- Keep files focused and manageable
- Properly structure game hierarchy

Error Handling
- Use pcall/xpcall for protected calls
- Implement proper error messages
- Handle nil checks explicitly
- Use assert() for validation
- Log errors appropriately

Performance Optimization
- Cache frequently accessed instances
- Minimize RemoteEvent usage
- Use FastCast for raycasting if needed
- Implement proper memory management
- Optimize render operations

Memory Management
- Clean up connections properly
- Implement proper garbage collection
- Avoid memory leaks in loops
- Clear references when destroying
- Monitor memory usage

Testing
- Use TestEZ for unit testing
- Test networking code thoroughly
- Validate game mechanics
- Profile performance regularly
- Test cross-platform compatibility
- Test for common exploit scenarios

Documentation
- Document API interfaces
- Explain complex game systems
- Include usage examples
- Document remote events/functions
- Maintain clear code comments
- Document security measures

Best Practices
- Use strict type checking
- Implement proper data validation
- Follow Roblox's security guidelines
- Use proper service calls
- Implement proper game loop structure

Security Considerations
- Filter user input properly
- Secure remote events/functions
- Implement anti-exploitation measures
- Use proper data validation
- Follow Roblox's security guidelines
- Implement server authority

Common Patterns
- Implement proper service pattern
- Use component-based design
- Implement proper replication
- Use Promises for async operations
- Handle client-server communication
- Use secure design patterns

Game Systems
- Implement efficient physics
- Use proper collision groups
- Manage game state properly
- Optimize rendering/effects
- Handle player data safely
- Implement anti-cheat systems

Debugging
- Use Roblox Studio debugger
- Implement logging systems
- Use print() strategically
- Monitor performance metrics
- Use Developer Console
- Log suspicious activities

Code Review Guidelines
- Verify security measures
- Check performance impact
- Validate type safety
- Ensure proper error handling
- Confirm documentation completeness
- Review for potential exploits

Note: For NevermoreEngine modules or other external packages, please ask about specific needs for:
- Trove (for cleanup management)
- Spring (for smooth animations)
- Maid (for cleanup management)
- CameraShaker (for camera effects)
- Character Controller (for custom character movement)

Remember to always refer to the Roblox Developer Hub and Luau documentation for specific implementation details and best practices.
lua
spring

First seen in:

gh-Constant/AW_rojo

Used in 1 repository

Zig
JavaScript
# Magic Pocket Front-End Plugin Code Style Guide

## 1. File Structure and Naming

### 1.1 File Naming
- Use camelCase naming
- File names should indicate their main purpose, e.g. `floatingWindow.js`
- Name component-related files after the component

### 1.2 Directory Structure
```
Front/
  ├── src/
  │   ├── content/     # Content scripts
  │   ├── background/  # Background scripts
  │   ├── popup/       # Popup window
  │   └── utils/       # Utility functions
  ├── styles/          # Style files
  └── lib/            # Third-party libraries
```

## 2. Code Organization

### 2.1 Class Organization
- Use ES6 class syntax
- Use PascalCase for class names
- Put the constructor first
- Public methods before private methods
- Group related methods together

### 2.2 Method Naming
- Prefix initialization methods with `init`
- Prefix setup methods with `setup`
- Prefix element-creation methods with `create`
- Prefix update-related methods with `update`
- Prefix event-handling methods with `handle`

## 3. Style Conventions

### 3.1 Style Definitions
- Use Object.assign to set styles (see the sketch below)
- Order style properties as follows:
  1. Positioning properties (position, top, right, z-index)
  2. Box-model properties (display, width, height, margin, padding)
  3. Visual properties (background, border, box-shadow)
  4. Typography properties (font, color, text-align)
  5. Other properties
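
A minimal sketch of this pattern; the element and style values are illustrative, not taken from the plugin:

```typescript
// Illustrative floating-window styles, grouped in the order listed above
const panel = document.createElement('div')
Object.assign(panel.style, {
  // 1. positioning
  position: 'fixed',
  top: '16px',
  right: '16px',
  zIndex: '9999',
  // 2. box model
  display: 'flex',
  width: '320px',
  padding: '12px',
  // 3. visual
  background: '#ffffff',
  border: '1px solid #e0e0e0',
  boxShadow: '0 2px 8px rgba(0, 0, 0, 0.15)',
  // 4. typography
  fontSize: '14px',
  color: '#333333',
})
document.body.appendChild(panel)
```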

### 3.2 Class Naming
- Use kebab-case naming
- Use a functional description as the component prefix, e.g. `floating-window`
- Prefix child elements with the parent element's name

## 5. Design Patterns

### 5.1 Component Design
- Follow the single-responsibility principle
- Components should be independent and reusable
- Prefer composition over inheritance

### 5.2 State Management
- Use chrome.storage for state persistence
- Manage local state with class properties
- Trigger the corresponding view updates after state changes

## 7. Performance Considerations

### 7.1 DOM Operations
- Batch DOM operations
- Use DocumentFragment to optimize inserting multiple elements (see the sketch below)
- Use debouncing and throttling where appropriate
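
A small sketch of batching insertions through a DocumentFragment; the selector and labels are illustrative:

```typescript
// Build items off-DOM, then attach them in a single operation
const fragment = document.createDocumentFragment()

for (const label of ['One', 'Two', 'Three']) {
  const item = document.createElement('li')
  item.textContent = label
  fragment.appendChild(item)
}

document.querySelector('#mp-results')?.appendChild(fragment)
```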

### 7.2 Resource Loading
- Lazy-load non-essential resources
- Use caching mechanisms appropriately
css
html
javascript
python

First seen in:

jiaqi-xiao/MagicPocket

Used in 1 repository

TypeScript
Always use the latest features of Next.js 15 with App Router, including Server Components, Server Actions, and API Routes.
Ensure you are using next.config.ts.

Add 'use client' at the top of the file when using client-side components.

Use the @/ alias for imports to maintain consistency and improve readability.
Example: import { Button } from '@/components/ui/button';

Use the @/components/ui/ prefix for shadcn/ui components.
Example: import { Card } from '@/components/ui/card';

Install shadcn/ui using: pnpm dlx shadcn@latest add

Use TailwindCSS for styling. Ensure proper configuration and usage of Tailwind classes, and keep the configuration in tailwind.config.ts.

Use zod for form validation and schema definition.
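
A sketch of a zod schema; the fields are illustrative:

```typescript
import { z } from 'zod'

// Illustrative signup schema shared by the form and the Server Action
export const signupSchema = z.object({
  email: z.string().email(),
  password: z.string().min(8, 'Password must be at least 8 characters'),
})

export type SignupInput = z.infer<typeof signupSchema>

// safeParse returns a result object instead of throwing
const result = signupSchema.safeParse({ email: 'user@example.com', password: 'short' })
if (!result.success) {
  console.log(result.error.flatten().fieldErrors)
}
```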

Use lucide-react for icons. Import icons as needed.
Example: import { Search } from 'lucide-react';

Use pnpm as the package manager for consistency and efficiency.
Install dependencies using: pnpm install

Configure your IDE to automatically import dependencies when possible.

Utilize TypeScript for type safety. Define interfaces and types for props and state.
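
For example, props can be typed with an interface (names are illustrative):

```typescript
// Illustrative typed props for a component
interface UserCardProps {
  name: string
  avatarUrl?: string
  onSelect?: (name: string) => void
}

export function UserCard({ name, avatarUrl, onSelect }: UserCardProps) {
  return (
    <button type="button" onClick={() => onSelect?.(name)}>
      {avatarUrl && <img src={avatarUrl} alt={name} />}
      <span>{name}</span>
    </button>
  )
}
```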

Leverage Server Components for improved performance and reduced client-side JavaScript.

Use API Routes for server-side logic and external API interactions.
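
A minimal Route Handler sketch; the path and payload are illustrative:

```typescript
// app/api/health/route.ts (hypothetical route)
import { NextResponse } from 'next/server'

export async function GET() {
  // Server-side logic or an external API call would go here
  return NextResponse.json({ status: 'ok' })
}
```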

Implement proper error handling and display user-friendly error messages.

Ensure components are accessible by using appropriate ARIA attributes and semantic HTML.

Utilize Next.js built-in code splitting features for optimal performance.

Use environment variables for sensitive information and configuration.
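
For instance (the variable names below are placeholders, not the project's actual keys):

```typescript
// Server-only secret: never expose it without the NEXT_PUBLIC_ prefix
const apiSecret = process.env.API_SECRET

// Client-visible configuration must use the NEXT_PUBLIC_ prefix
const siteUrl = process.env.NEXT_PUBLIC_SITE_URL ?? 'http://localhost:3000'

if (!apiSecret) {
  throw new Error('Missing API_SECRET environment variable')
}
```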

Write unit tests and integration tests for components and utilities.

Provide clear documentation for components, functions, and complex logic.

Implement performance optimizations such as memoization and lazy loading when necessary.
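
A sketch of memoization plus lazy loading; the chart component path is hypothetical:

```typescript
'use client'

import { memo, useMemo } from 'react'
import dynamic from 'next/dynamic'

// Lazy-load a heavy client-only chart; the import path is hypothetical
const HeavyChart = dynamic(() => import('@/components/heavy-chart'), { ssr: false })

export const Dashboard = memo(({ values }: { values: number[] }) => {
  // Recompute the total only when values change
  const total = useMemo(() => values.reduce((sum, value) => sum + value, 0), [values])

  return (
    <section>
      <p>Total: {total}</p>
      <HeavyChart />
    </section>
  )
})

Dashboard.displayName = 'Dashboard'
```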

Use consistent code formatting. Consider using Prettier for automatic code formatting.
css
java
javascript
next.js
npm
pnpm
prettier
react

First seen in:

aletheia/tots.com

Used in 1 repository

Python
.cursorrules
python

First seen in:

hypnodes/JsonNodes

Used in 1 repository

Python
The main goal is to provide comprehensive project analysis and maintain the `.cursorrules` file, which defines project standards and AI behavior guidelines for your codebase.
- **Anthropic DEVELOPMENT**: You are required to use the brand new latest model: **'claude-3-5-sonnet-20241022'**!!
  - !!!NEVER use 'claude-3-sonnet-20240229' or 'claude-3-haiku-20240307' or 'claude-3-opus-20240229'!!!
- **OpenAI DEVELOPMENT**: You are required to use the brand new latest model: **'o1-preview'**!!
  - !!!NEVER use 'gpt-4o' or 'gpt-4' or 'gpt-4-turbo' or 'gpt-4-turbo-preview' or 'gpt-3.5'!!!

<!-- BEGIN_STRUCTURE -->
# Project Directory Structure
------------------------------
```
📁 cursorrules-architect
├── 📁 agents
│   ├── 📄 dependency_analyzer_agent.py
│   └── 📄 project_architect_agent.py
├── 📁 architects
│   ├── 📄 monitoring-cursorrules.py
│   ├── 📄 project_doc_generator.py
│   └── 📄 project_extractor.py
├── 📁 docs
│   └── 📄 directory_structure.md
├── 📁 utils
│   └── 📄 project_doc_generator.py
├── 📄 .env
└── 📄 README.md
```
<!-- END_STRUCTURE -->
<tree_structure>
.
├── .cursorrules
├── .env
├── .gitignore
├── CONTRIBUTING.md
├── README.md
├── agents
│   ├── dependency_analyzer_agent.py
│   ├── project_architect_agent.py
├── main.py
├── requirements.txt
├── seperate_architects
│   ├── project_doc_generator.py
├── utils
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── project_doc_generator.cpython-311.pyc
│   ├── monitoring-cursorrules.py
│   ├── project_doc_generator.py
</tree_structure>

The system uses a sophisticated 5-phase analysis approach that alternates between Claude-3.5-Sonnet and o1-preview models for different types of analysis. Here's how it works:

1. **Phase 1: Initial Discovery** (Claude-3.5-Sonnet)
   - Uses three parallel agents:
     1. Structure Agent: Analyzes directory/file organization
     2. Dependency Agent: Investigates packages and libraries
     3. Tech Stack Agent: Identifies frameworks and technologies
   - You see multiple "Phase 1" logs because each agent runs independently

2. **Phase 2: Methodical Planning** (o1-preview)
   - Takes the findings from all Phase 1 agents
   - Creates a detailed analysis plan including:
     - File-by-file examination approach
     - Critical areas needing investigation
     - Documentation requirements
     - Inter-dependency mapping method

3. **Phase 3: Deep Analysis** (Claude-3.5-Sonnet)
   - Uses four specialized agents:
     1. Code Analysis Agent: Examines logic patterns
     2. Dependency Mapping Agent: Maps file relationships
     3. Architecture Agent: Studies design patterns
     4. Documentation Agent: Creates documentation

4. **Phase 4: Synthesis** (o1-preview)
   - Reviews and synthesizes all findings from Phase 3
   - Updates analysis directions
   - Identifies areas needing deeper investigation

5. **Phase 5: Consolidation** (Claude-3.5-Sonnet)
   - Final consolidation of all findings
   - Creates comprehensive documentation
   - Prepares report for final analysis

You see multiple Phase 1 and Phase 2 logs because:
- Phase 1 runs multiple agents in parallel (hence multiple Claude API calls)
- Each agent works independently but simultaneously
- The system waits for all agents to complete before moving to the next phase

The flow looks like this:
```mermaid
graph TD
    A[Start] --> B[Phase 1: Initial Discovery]
    B -->|Structure Agent| C[Claude Analysis]
    B -->|Dependency Agent| D[Claude Analysis]
    B -->|Tech Stack Agent| E[Claude Analysis]
    
    C & D & E --> F[Phase 2: Planning]
    F -->|o1-preview| G[Create Analysis Plan]
    
    G --> H[Phase 3: Deep Analysis]
    H -->|Multiple Agents| I[Detailed Analysis]
    
    I --> J[Phase 4: Synthesis]
    J -->|o1-preview| K[Synthesize Findings]
    
    K --> L[Phase 5: Consolidation]
    L -->|Claude| M[Final Report]
```

This multi-phase approach ensures:
1. Thorough analysis from different perspectives
2. Cross-validation of findings
3. Progressive refinement of understanding
4. Comprehensive documentation
5. Leveraging the strengths of both Claude and o1-preview models

### Number of API calls per phase

1. **Phase 1** appears three times because it runs three agents in parallel:
   ```python
   # From main.py
   self.phase1_agents = [
       ClaudeAgent("Structure Agent", "analyzing directory and file organization", [...]),
       ClaudeAgent("Dependency Agent", "investigating packages and libraries", [...]),
       ClaudeAgent("Tech Stack Agent", "identifying frameworks and technologies", [...])
   ]
   ```
   - Each agent makes its own API call to Claude
   - They run simultaneously (using `asyncio.gather`)
   - That's why you see three separate "Phase 1" logs with HTTP requests

2. **Phase 2** appears multiple times because:
   - It processes the results from each Phase 1 agent
   - Makes API calls to o1-preview to plan next steps based on each agent's findings
   - The logs show the HTTP requests to OpenAI's API

3. **Phases 3, 4, and 5** only appear once because:
   - Phase 3: While it uses multiple agents, they're batched into a single phase execution
   - Phase 4: Single synthesis step using o1-preview
   - Phase 5: Single consolidation step using Claude

Here's the count of API calls per phase in one complete run:
```
Phase 1: 3 calls (one per agent) to Claude
Phase 2: Multiple calls to o1-preview for planning
Phase 3: 1 batch call to Claude (though using multiple agents internally)
Phase 4: 1 call to o1-preview
Phase 5: 1 call to Claude
```

The key is in this part of `main.py`:
```python
# Phase 1: Multiple parallel agents
agent_tasks = [agent.analyze(context) for agent in self.phase1_agents]
results = await asyncio.gather(*agent_tasks)  # This runs them all at once

# Later phases: Single execution
phase3_results = await self.run_phase3(phase2_results, tree)  # One batch
phase4_results = await self.run_phase4(phase3_results)        # One call
consolidated_report = await self.run_phase5(all_results)      # One call
```

So while you see multiple logs, it's still just one run through the phases - Phase 1 just happens to make multiple parallel API calls for efficiency.

Here's how the files are connected:

1. **Main Entry Point**:
   - `main.py` is the primary entry point that orchestrates the entire analysis process
   - It uses a 5-phase analysis system combining Claude-3.5-Sonnet and o1-preview models

2. **Agent System**:
   - `agents/project_architect_agent.py` and `agents/dependency_analyzer_agent.py` are specialized workers
   - They're called by `main.py` during different phases of analysis
   - The agents work independently but are orchestrated by the main analyzer

3. **Documentation System**:
   - `utils/project_doc_generator.py` is the core library for tree generation
   - `architects/project_doc_generator.py` is a higher-level interface that uses the core library
   - `utils/monitoring-cursorrules.py` monitors file changes and updates documentation

Here's the flow of how they work together:

```mermaid
graph TD
    A[main.py] -->|Orchestrates| B[Project Analysis]
    B -->|Phase 1| C[Initial Discovery]
    B -->|Phase 2| D[Methodical Planning]
    B -->|Phase 3| E[Deep Analysis]
    B -->|Phase 4| F[Synthesis]
    B -->|Phase 5| G[Consolidation]
    
    C -->|Uses| H[project_architect_agent.py]
    C -->|Uses| I[dependency_analyzer_agent.py]
    
    H -->|Generates Trees| J[utils/project_doc_generator.py]
    I -->|Analyzes Dependencies| K[Package Files]
    
    L[seperate_architects/project_doc_generator.py] -->|Uses| J
    M[utils/monitoring-cursorrules.py] -->|Monitors| N[.cursorrules]
```

The files that can run independently are:
1. `utils/monitoring-cursorrules.py` - Can run standalone to monitor project changes
2. `architects/project_doc_generator.py` - Can run independently to generate documentation
3. `main.py` - The main entry point that can run the full analysis

The files that are dependent on others:
1. `agents/project_architect_agent.py` - Depends on `utils/project_doc_generator.py`
2. `agents/dependency_analyzer_agent.py` - Used by the main analyzer
3. `architects/project_doc_generator.py` - Depends on `utils/project_doc_generator.py`

So while some files can operate independently, they're designed to work together in a cohesive system where `main.py` orchestrates the full analysis workflow.

Let me break down each of these files and their purposes:

# 1. monitoring-cursorrules.py
This file is responsible for monitoring and maintaining the `.cursorrules` file in your project. Its main functions are:

- Watches for file system changes in your project directory
- Automatically updates the project structure tree in `.cursorrules` when files are added/removed/modified
- Provides an interactive CLI interface with 4 modes:
  1. Generate directory tree only
  2. Generate tree and monitor for changes
  3. Generate tree and update .cursorrules
  4. Generate tree, update .cursorrules, and monitor for changes

# 2. project_doc_generator.py
This file is an enhanced version of the documentation generator that:

- Generates directory structure documentation
- Maintains file comments and metadata
- Has the same monitoring capabilities as monitoring-cursorrules.py
- Additionally tracks and preserves comments about files in the tree structure
- Can output the documentation to markdown files

# 3. project_extractor.py
This is a sophisticated project analysis tool that:

- Uses a multi-phase analysis approach with both Claude and OpenAI models
- Has 5 distinct phases:
  1. Initial Discovery (using Claude agents)
  2. Methodical Planning (using OpenAI)
  3. Deep Analysis (using Claude agents)
  4. Synthesis (using OpenAI)
  5. Consolidation (using Claude)

The key difference between these files and the agents is:

## Architects vs Agents
- **Architects** (these files) are standalone tools that provide specific functionality:
  - monitoring-cursorrules.py → Project structure monitoring
  - project_doc_generator.py → Documentation generation
  - project_extractor.py → Project analysis and understanding

- **Agents** (in the agents/ directory) are more like specialized workers:
  - dependency_analyzer_agent.py → Analyzes project dependencies
  - project_architect_agent.py → Makes architectural decisions

Here's a diagram to help visualize the relationship:

```mermaid
graph TD
    A[Architects] --> B[monitoring-cursorrules.py]
    A --> C[project_doc_generator.py]
    A --> D[project_extractor.py]
    
    E[Agents] --> F[dependency_analyzer_agent.py]
    E --> G[project_architect_agent.py]
    
    D -->|uses| F
    D -->|uses| G
```

The architects are the tools you interact with directly, while the agents are components that are used by these tools (particularly by project_extractor.py) to perform specific analysis tasks.

Let me break down these agents, the architects, and how they work together:

This is a sophisticated project analysis system that uses AI to analyze codebases. Here's how it works:

### 1. Agents vs Architects

**Agents** are specialized workers that perform specific analysis tasks:
- `dependency_analyzer_agent.py`: Analyzes project dependencies, versions, and compatibility
- `project_architect_agent.py`: Makes high-level architectural decisions and categorizes project components

**Architects** are higher-level tools that orchestrate the agents and provide specific functionality:
- `monitoring-cursorrules.py`: Monitors and maintains the `.cursorrules` file
- `project_doc_generator.py`: Generates project documentation
- `project_extractor.py`: Runs the complete project analysis workflow

### 2. The Analysis Flow

The system uses a sophisticated 5-phase analysis process:

1. **Initial Discovery** (Using Claude-3.5-Sonnet)
   - Structure Agent: Analyzes directory organization
   - Dependency Agent: Examines package dependencies
   - Tech Stack Agent: Identifies technologies used

2. **Methodical Planning** (Using o1-preview)
   - Processes agent findings
   - Creates detailed analysis plans

3. **Deep Analysis** (Using Claude-3.5-Sonnet)
   - Code Analysis Agent: Examines logic patterns
   - Dependency Mapping Agent: Maps file relationships
   - Architecture Agent: Studies design patterns
   - Documentation Agent: Creates documentation

4. **Synthesis** (Using o1-preview)
   - Reviews and synthesizes findings
   - Updates analysis directions

5. **Consolidation** (Using Claude-3.5-Sonnet)
   - Combines all findings
   - Prepares final documentation

### 3. Key Features

- Uses modern AI models (Claude-3.5-Sonnet-20241022 and o1-preview)
- Real-time progress tracking with rich console output
- Comprehensive dependency analysis
- Automatic documentation generation
- Project structure monitoring
- Security-first approach (no storage of sensitive data)

The main goal is to provide comprehensive project analysis and maintain the `.cursorrules` file, which defines project standards and AI behavior guidelines for your codebase.
golang
openai
python
solidjs

First seen in:

SlyyCooper/cursorrules-architect

Used in 1 repository