.cursorrules file for PacktPublishing/Engineering-Cognitive-AI-Agents (Jupyter Notebook)

# Python Coding Conventions

- Tab size: 2
- Python version: 3.12+
- Use snake_case for variable names
- Use type hints for all functions
- Use built-in generics dict[], list[], set[] for type annotations (not typing.Dict, typing.List, typing.Set)
- Always fully parameterize annotations, e.g. `dict[<key_type>, <value_type>]` and `list[<element_type>]` rather than bare `dict` or `list`, for clarity and type safety
- Use | None for optional types (not Optional[T])
- Use f-strings for string formatting
- Add trailing commas to function and method argument lists
- Add trailing commas to lists, dictionaries, sets, tuples, etc.
- Use numpy-style docstrings for all functions and methods
- Prefer from collections.abc import AsyncIterator over typing.AsyncIterator
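
For example, a function following these conventions might look like this (a minimal illustrative sketch; the function and its parameters are not from the repo):

```python
from collections.abc import AsyncIterator


async def score_documents(
  docs: list[str],
  weights: dict[str, float] | None = None,
) -> AsyncIterator[str]:
  """Yield a formatted score line for each document.

  Parameters
  ----------
  docs : list[str]
      Documents to score.
  weights : dict[str, float] | None
      Optional per-document weighting factors.

  Yields
  ------
  str
      A human-readable score line.
  """
  for doc in docs:
    weight = (weights or {}).get(doc, 1.0)
    yield f"{doc[:20]}: {len(doc.split()) * weight:.1f}"
```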

# OpenAI
- Prefer gpt-4o-mini for all LLM calls
- Use OpenAI's pydantic_function_tool for tools
- Pydantic models used in response_format must not contain dict[...] fields, and their string fields must not have defaults
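
For instance, a response_format-safe model might look like this (a hedged sketch; the model, fields, and prompt are illustrative, and it assumes the structured-outputs `parse` API from the openai Python SDK, run in a notebook where top-level `await` works):

```python
from openai import AsyncOpenAI
from pydantic import BaseModel


class Answer(BaseModel):
  # No dict[...] fields; string fields carry no defaults,
  # per the response_format constraints above
  summary: str
  key_points: list[str]


client = AsyncOpenAI()
completion = await client.beta.chat.completions.parse(
  model="gpt-4o-mini",
  messages=[{"role": "user", "content": "Summarize cognitive agents."}],
  response_format=Answer,
)
answer = completion.choices[0].message.parsed  # an Answer instance
```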


# Quick Guide: OpenAI Tools with Pydantic

## Core Pattern
1. Define request/response models with Pydantic
2. Use OpenAI's pydantic_function_tool
3. Let OpenAI handle parsing/validation

## Basic Template
```python
from openai import AsyncOpenAI, pydantic_function_tool
from pydantic import BaseModel, Field

# 1. Define your models
class ToolRequest(BaseModel):
  param1: str = Field(..., description="Clear description for LLM")
  param2: int = Field(default=0, description="Include defaults when sensible")

class ToolResponse(BaseModel):
  result: str
  details: list[str]

# 2. Define your handler
async def tool_handler(request: ToolRequest) -> ToolResponse:
  # Implement tool logic; placeholder values shown
  return ToolResponse(result="done", details=[])

# 3. Register and use (AsyncOpenAI is required for await;
#    `parse` auto-validates tool arguments against the model)
client = AsyncOpenAI()
query = "Example user input"
completion = await client.beta.chat.completions.parse(
  model="gpt-4o-mini",
  messages=[{"role": "user", "content": query}],
  tools=[
    pydantic_function_tool(
      ToolRequest,
      name="tool_name",
      description="Tool description",
    ),
  ],
)

# 4. Execute
tool_call = completion.choices[0].message.tool_calls[0]
request = tool_call.function.parsed_arguments  # Auto-parsed to ToolRequest
response = await tool_handler(request)
```
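
If the conversation continues after the tool runs, the result typically goes back to the model as a `tool` message. A sketch continuing the template above (the standard tool-calling flow, not specific to this repo):

```python
# 5. Return the tool result to the model (optional follow-up turn)
follow_up = await client.beta.chat.completions.parse(
  model="gpt-4o-mini",
  messages=[
    {"role": "user", "content": query},
    completion.choices[0].message,  # the assistant turn with the tool call
    {
      "role": "tool",
      "tool_call_id": tool_call.id,
      "content": response.model_dump_json(),
    },
  ],
)
print(follow_up.choices[0].message.content)
```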

## Key Points to Remember
- Use Field(...) for required params
- Use Field(default=x) for optional params
- Let Pydantic handle validation
- Trust OpenAI's parsing
- Keep models focused and well-documented

## Common Patterns
```python
from enum import Enum

from pydantic import BaseModel, Field

# Enums for constrained choices
class Unit(str, Enum):
  METRIC = "metric"
  IMPERIAL = "imperial"

# Nested models
class SubModel(BaseModel):
  field: str

class MainModel(BaseModel):
  sub: SubModel
  # Optional fields: | None, not Optional[T]
  optional_field: str | None = Field(default=None)
  # Lists: list[], not typing.List
  items: list[str] = Field(default_factory=list)
```

That's it! Keep it simple, let the tools do the work.

# NetworkX
Note: When using NetworkX's DiGraph class, do not attempt to add type parameters or make it generic (e.g. DiGraph[str, int]). DiGraph is not designed to be subscriptable and should be used directly as DiGraph(). Node and edge attribute types are handled internally by NetworkX.
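
A minimal sketch (node names and attributes are illustrative):

```python
import networkx as nx

# Use DiGraph directly; it is not generic or subscriptable
graph = nx.DiGraph()

# Node and edge attributes are plain keyword arguments
graph.add_node("plan", status="pending")
graph.add_edge("plan", "act", weight=1.0)
```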
