Some notes on AI Agent Rule / Instruction / Context files / etc.
- https://llmstxt.org/
-
The `/llms.txt` file -
A proposal to standardise on using an `/llms.txt` file to provide information to help LLMs use a website at inference time. - https://llmstxt.org/#proposal
-
Proposal
We propose adding a `/llms.txt` markdown file to websites to provide LLM-friendly content. This file offers brief background information, guidance, and links to detailed markdown files. llms.txt markdown is human- and LLM-readable, but is also in a precise format allowing fixed processing methods (i.e. classical programming techniques such as parsers and regex).
We furthermore propose that pages on websites that have information that might be useful for LLMs to read provide a clean markdown version of those pages at the same URL as the original page, but with `.md` appended. (URLs without file names should append `index.html.md` instead.) -
This proposal does not include any particular recommendation for how to process the llms.txt file, since it will depend on the application. For example, the FastHTML project opted to automatically expand the llms.txt to two markdown files with the contents of the linked URLs, using an XML-based structure suitable for use in LLMs such as Claude. The two files are: llms-ctx.txt, which does not include the optional URLs, and llms-ctx-full.txt, which does include them. They are created using the `llms_txt2ctx` command-line application, and the FastHTML documentation includes information for users about how to use them.
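The spec's format boils down to an H1 title, a blockquote summary, and H2 sections containing markdown link lists (with "Optional" marking skippable links). A minimal sketch, with placeholder project name and URLs:

```markdown
# Example Project

> A one-paragraph summary of what the project is and what an LLM needs to know to use it.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): How to install and run the project

## Optional

- [Changelog](https://example.com/changelog.md): Version history, safe to skip for shorter context
```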
-
- https://llmstxt.org/#format
-
Format
-
- https://llmstxt.org/#existing-standards
-
Existing Standards
-
- https://github.com/AnswerDotAI/llms-txt
-
The `/llms.txt` file, helping language models use your website
-
-
- https://llmstxt.site/
-
llms.txt directory
-
A list of all `llms.txt` file locations across the web, with stats. The `llms.txt` format is derived from the llmstxt.org standard. - https://github.com/krish-adi/llmstxt-site
-
llmstxt-site
-
This is a centralized directory of all `/llms.txt` files available online. The `/llms.txt` file is a proposed standard for websites to provide concise and structured information to help large language models (LLMs) efficiently use website content during inference time. Contributions are the backbone of this repository’s success. Let’s work together to build a comprehensive resource for `/llms.txt` files and advance the adoption of this standard for LLM-friendly content!
-
-
- https://directory.llmstxt.cloud/
-
`/llms.txt` directory -
A curated directory of products and companies leading the adoption of the llms.txt standard.
-
Better Touch Tool (BTT)
- https://docs.folivora.ai/docs/3000_hallo_ai.html
- https://docs.folivora.ai/docs/3000_hallo_ai.html#projects
-
Projects
-
When calling the AI Assistant while in some sort of project, BTT checks whether there is an `AGENT.md` or `BTT.md` file. If so, it will read the content of that file and use it to adapt the assistant to the project.
-
- https://docs.folivora.ai/docs/3000_hallo_ai.html#projects
- https://docs.folivora.ai/docs/3012_halloai_mcp.html
-
-
BetterTouchTool's MCP server configuration by default is stored in this file:
`~/Library/Application Support/BetterTouchTool/AI/btt-mcp-config.json` -
Alternative Configuration Locations
It can also be stored in these files - however, you should pick one and stick with that.
- `~/.config/btt/mcp/.mcp.json`
- `~/.btt/mcp/.mcp.json`
- `~/Library/Application Support/BetterTouchTool/AI/`
- `/Library/Application Support/BetterTouchTool/AI/`
-
- See Also (?):
- https://www.anthropic.com/engineering/claude-code-best-practices
-
Claude Code: Best practices for agentic coding
-
Claude Code is a command line tool for agentic coding. This post covers tips and tricks that have proven effective for using Claude Code across various codebases, languages, and environments.
- https://www.anthropic.com/engineering/claude-code-best-practices#1-customize-your-setup
-
Customize your setup
Claude Code is an agentic coding assistant that automatically pulls context into prompts. This context gathering consumes time and tokens, but you can optimize it through environment tuning.
-
Create `CLAUDE.md` files
`CLAUDE.md` is a special file that Claude automatically pulls into context when starting a conversation. This makes it an ideal place for documenting:
- Common bash commands
- Core files and utility functions
- Code style guidelines
- Testing instructions
- Repository etiquette (e.g., branch naming, merge vs. rebase, etc.)
- Developer environment setup (e.g., pyenv use, which compilers work)
- Any unexpected behaviors or warnings particular to the project
- Other information you want Claude to remember
There’s no required format for `CLAUDE.md` files. We recommend keeping them concise and human-readable. -
You can place `CLAUDE.md` files in several locations:
- The root of your repo, or wherever you run `claude` from (the most common usage). Name it `CLAUDE.md` and check it into git so that you can share it across sessions and with your team (recommended), or name it `CLAUDE.local.md` and `.gitignore` it
- Any parent of the directory where you run `claude`. This is most useful for monorepos, where you might run `claude` from `root/foo`, and have `CLAUDE.md` files in both `root/CLAUDE.md` and `root/foo/CLAUDE.md`. Both of these will be pulled into context automatically
- Any child of the directory where you run `claude`. This is the inverse of the above, and in this case, Claude will pull in `CLAUDE.md` files on demand when you work with files in child directories
- Your home folder (`~/.claude/CLAUDE.md`), which applies it to all your claude sessions
When you run the `/init` command, Claude will automatically generate a `CLAUDE.md` for you.
-
Tune your `CLAUDE.md` files
Your `CLAUDE.md` files become part of Claude’s prompts, so they should be refined like any frequently used prompt. A common mistake is adding extensive content without iterating on its effectiveness. Take time to experiment and determine what produces the best instruction following from the model.
You can add content to your `CLAUDE.md` manually or press the `#` key to give Claude an instruction that it will automatically incorporate into the relevant `CLAUDE.md`. Many engineers use `#` frequently to document commands, files, and style guidelines while coding, then include `CLAUDE.md` changes in commits so team members benefit as well.
At Anthropic, we occasionally run `CLAUDE.md` files through the prompt improver and often tune instructions (e.g. adding emphasis with "IMPORTANT" or "YOU MUST") to improve adherence. -
Curate Claude's list of allowed tools
-
There are four ways to manage allowed tools
- ..snip..
- Manually edit your `.claude/settings.json` or `~/.claude.json` (we recommend checking the former into source control to share with your team). - ..snip..
-
-
- https://www.anthropic.com/engineering/claude-code-best-practices#2-give-claude-more-tools
-
Give Claude more tools
-
Use Claude with MCP
Claude Code functions as both an MCP server and client. As a client, it can connect to any number of MCP servers to access their tools in three ways:
- In project config (available when running Claude Code in that directory)
- **In global config** (available in all projects)
- In a checked-in `.mcp.json` file (available to anyone working in your codebase). For example, you can add Puppeteer and Sentry servers to your `.mcp.json`, so that every engineer working on your repo can use these out of the box.
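A sketch of a checked-in `.mcp.json`, assuming the common `mcpServers` JSON shape used across MCP clients (the `@modelcontextprotocol/server-puppeteer` package name is an assumption here, not taken from the notes):

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```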
-
Use custom slash commands
For repeated workflows—debugging loops, log analysis, etc.—store prompt templates in Markdown files within the `.claude/commands` folder. These become available through the slash commands menu when you type `/`. You can check these commands into git to make them available for the rest of your team.
Custom slash commands can include the special keyword `$ARGUMENTS` to pass parameters from command invocation. -
Putting the above content into `.claude/commands/fix-github-issue.md` makes it available as the `/project:fix-github-issue` command in Claude Code. You could then for example use `/project:fix-github-issue 1234` to have Claude fix issue #1234. Similarly, you can add your own personal commands to the `~/.claude/commands` folder for commands you want available in all of your sessions.
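The note above refers to command-file content that was not captured here; a hypothetical `.claude/commands/fix-github-issue.md` using `$ARGUMENTS` might look like:

```markdown
Please analyze and fix the GitHub issue: $ARGUMENTS.

Follow these steps:
1. Use `gh issue view` to get the issue details
2. Understand the problem described in the issue
3. Search the codebase for relevant files
4. Implement the necessary changes and run the tests
5. Create a descriptive commit message
```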
-
-
- See Also (?):
- https://modelcontextprotocol.io/quickstart/user
-
For Claude Desktop Users
Get started using pre-built servers in Claude for Desktop.
- https://modelcontextprotocol.io/quickstart/user#2-add-the-filesystem-mcp-server
-
This will create a configuration file at:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
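The quickstart's Filesystem server entry uses the standard `mcpServers` JSON shape; a sketch (the allowed directory path is a placeholder you would change to your own):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop"
      ]
    }
  }
}
```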
-
-
- https://hub.continue.dev/
- https://docs.continue.dev/
-
Continue enables developers to create, share, and use custom AI code assistants with our open-source VS Code and JetBrains extensions and hub of models, rules, prompts, docs, and other building blocks
- https://docs.continue.dev/reference
-
`config.yaml` Reference -
Continue hub assistants are defined using the `config.yaml` specification. Assistants can be loaded from the Hub or locally
- Continue Hub - YAML is stored on the hub and automatically synced to the extension
- Locally
  - in your global `.continue` folder (`~/.continue` on Mac, `%USERPROFILE%\.continue`) within `.continue/assistants`. The name of the file will be used as the display name of the assistant, e.g. `My Assistant.yaml`
  - in your workspace in a `/.continue/assistants` folder, with the same naming convention
Config YAML replaces `config.json`, which is deprecated. View the Migration Guide.
An assistant is made up of:
1. Top-level properties, which specify the `name`, `version`, and config.yaml `schema` for the assistant
2. Block lists, which are composable arrays of coding assistant building blocks available to the assistant, such as models, docs, and context providers.
A block is a single standalone building block of a coding assistant, e.g., one model or one documentation source. In `config.yaml` syntax, a block consists of the same top-level properties as assistants (`name`, `version`, and `schema`), but only has ONE item under whichever block type it is.
Examples of blocks and assistants can be found on the Continue hub.
Assistants can either explicitly define blocks - see Properties below - or import and configure existing hub blocks.
- https://docs.continue.dev/reference#local-blocks
-
Local Blocks
It is also possible to define blocks locally in a `.continue` folder. This folder can be located at either the root of your workspace (these will automatically be applied to all assistants when you are in that workspace) or in your home directory at `~/.continue` (these will automatically be applied globally).
Place your YAML files in the following folders:
Assistants:
- `.continue/assistants` - for assistants
Blocks:
- `.continue/rules` - for rules
- `.continue/models` - for models
- `.continue/prompts` - for prompts
- `.continue/context` - for context providers
- `.continue/docs` - for docs
- `.continue/data` - for data
- `.continue/mcpServers` - for MCP Servers
You can find many examples of each of these block types on the Continue Explore Page
-
- https://docs.continue.dev/reference#complete-yaml-config-example
-
Complete YAML Config Example
Putting it all together, here's a complete example of a `config.yaml` configuration file
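The complete example itself is not captured in these notes; a minimal sketch following the top-level properties and block lists described above (model names, rules, and providers are placeholders):

```yaml
name: My Assistant
version: 0.0.1
schema: v1

models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    roles:
      - chat
      - edit

rules:
  - Prefer small, focused functions
  - Always include unit tests for new code

context:
  - provider: file
  - provider: diff
```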
-
-
- https://docs.continue.dev/blocks/models
-
Model Blocks
These blocks form the foundation of the entire assistant experience, offering different specialized capabilities:
- Chat: Power conversational interactions about code and provide detailed guidance
- Edit: Handle complex code transformations and refactoring tasks
- Apply: Execute targeted code modifications with high accuracy
- Autocomplete: Provide real-time suggestions as developers type
- Embedding: Transform code into vector representations for semantic search
- Reranker: Improve search relevance by ordering results based on semantic meaning
-
- https://docs.continue.dev/blocks/context-providers
-
Context Blocks
These blocks determine what internal information your AI assistant can access
-
- https://docs.continue.dev/blocks/rules
-
Rules Blocks
Think of these as the guardrails for your AI coding assistants:
- Enforce company-specific coding standards and security practices
- Implement quality checks that match your engineering culture
- Create paved paths for developers to follow organizational best practices
- https://docs.continue.dev/customize/deep-dives/rules#continuerules
-
`.continuerules`
You can create project-specific rules by adding a `.continuerules` file to the root of your project. This file is raw text and its full contents will be used as rules.
-
-
- https://docs.continue.dev/blocks/prompts
-
Prompt Blocks
These are the specialized instructions that shape how models respond:
- Define interaction patterns for specific tasks or frameworks
- Encode domain expertise for particular technologies
- Ensure consistent guidance aligned with organizational practices
- Can be shared and reused across multiple assistants
- Act as automated code reviewers that ensure consistency across teams
- https://docs.continue.dev/customize/deep-dives/prompts#local-prompt-files
-
Local `.prompt` files
In addition to Prompt blocks on the Hub, you can also define prompts in local `.prompt` files, located in the `.continue/prompts` folder at the top level of your workspace. This is useful for quick iteration on prompts to test them out before pushing up to the Hub. -
Below is a quick example of setting up a prompt file:
- Create a folder called `.continue/prompts` at the top level of your workspace
- Add a file called `test.prompt` to this folder.
- Write the following contents to `test.prompt` and save.
-
- https://docs.continue.dev/customize/deep-dives/prompts#format
-
Format
The format is inspired by HumanLoop's `.prompt` file, with additional templating to reference files, URLs, and context providers.
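A hypothetical `test.prompt`, assuming a YAML-header-plus-body shape in the spirit of the HumanLoop format it is based on (the `name`/`description` keys are assumptions, not confirmed by these notes):

```
name: Current file tests
description: Write unit tests for the open file
---
Write a suite of unit tests for the code in the current file. Cover edge cases
and use the project's existing test framework.
```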
-
-
- https://docs.continue.dev/blocks/mcp
-
MCP Blocks
Model Context Protocol servers provide specialized functionality:
- Enable integration with external tools and systems
- Create extensible interfaces for custom capabilities
- Support more complex interactions with your development environment
- Allow partners to contribute specialized functionality
- Database Connectors: Understand schema and data models during development
-
-
- https://docs.cursor.com/context/rules
-
Rules
-
Control how the Agent model behaves with reusable, scoped instructions.
Rules allow you to provide system-level guidance to the Agent and Cmd-K AI. Think of them as a persistent way to encode context, preferences, or workflows for your projects or for yourself.
-
We support three types of rules:
- Project Rules: Stored in `.cursor/rules`, version-controlled and scoped to your codebase.
- User Rules: Global to your Cursor environment. Defined in settings and always applied.
- `.cursorrules` (Legacy): Still supported, but deprecated. Use Project Rules instead.
- https://docs.cursor.com/context/rules#project-rules
-
Project rules
-
Project rules live in `.cursor/rules`. Each rule is stored as a file and version-controlled. They can be scoped using path patterns, invoked manually, or included based on relevance.
Use project rules to:
- Encode domain-specific knowledge about your codebase
- Automate project-specific workflows or templates
- Standardize style or architecture decisions
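A hypothetical rule file under `.cursor/rules`, using the MDC-style frontmatter Cursor's docs describe (`description`, `globs`, and `alwaysApply` are from those docs; the glob and rule body are placeholders):

```markdown
---
description: Conventions for service-layer TypeScript code
globs: src/services/**/*.ts
alwaysApply: false
---

- Define services as classes with constructor-injected dependencies
- Validate all external input before use
```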
-
- https://docs.cursor.com/context/rules#cursorrules-legacy
-
`.cursorrules` (Legacy) -
The `.cursorrules` file in the root of your project is still supported, but will be deprecated. We recommend migrating to the Project Rules format for more control, flexibility, and visibility.
-
-
- https://docs.cursor.com/context/ignore-files
-
Ignore Files
-
Control which files Cursor’s AI features and indexing can access using `.cursorignore` and `.cursorindexingignore` -
Cursor reads and indexes your project’s codebase to power its features. You can control which directories and files Cursor can access by adding a `.cursorignore` file to your root directory.
-
- https://docs.cursor.com/context/model-context-protocol
-
Model Context Protocol
Connect external tools and data sources to Cursor using the Model Context Protocol (MCP) plugin system
- https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers
-
The MCP configuration file uses a JSON format
-
- https://docs.cursor.com/context/model-context-protocol#configuration-locations
-
Configuration Locations
You can place this configuration in two locations, depending on your use case:
- Project Configuration
  - For tools specific to a project, create a `.cursor/mcp.json` file in your project directory. This allows you to define MCP servers that are only available within that specific project.
- Global Configuration
  - For tools that you want to use across all projects, create a `~/.cursor/mcp.json` file in your home directory. This makes MCP servers available in all your Cursor workspaces.
-
-
- https://github.com/PatrickJS/awesome-cursorrules
-
Awesome CursorRules
-
A curated list of awesome `.cursorrules` files for enhancing your Cursor AI experience.
-
- https://github.com/google-gemini/gemini-cli
-
Gemini CLI
-
An open-source AI agent that brings the power of Gemini directly into your terminal.
-
This repository contains the Gemini CLI, a command-line AI workflow tool that connects to your tools, understands your code and accelerates your workflows.
- https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#available-settings-in-settingsjson
-
Available settings in `settings.json`:
`contextFileName` (string or array of strings):
- Description: Specifies the filename for context files (e.g., `GEMINI.md`, `AGENTS.md`). Can be a single filename or a list of accepted filenames.
- Default: `GEMINI.md`
- Example: `"contextFileName": "AGENTS.md"`
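Since the setting accepts a string or an array, a `settings.json` fragment that accepts both the default and an agent-neutral name might look like (file names are illustrative):

```json
{
  "contextFileName": ["GEMINI.md", "AGENTS.md"]
}
```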
-
- https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#context-files-hierarchical-instructional-context
-
Context Files (Hierarchical Instructional Context)
While not strictly configuration for the CLI's behavior, context files (defaulting to `GEMINI.md` but configurable via the `contextFileName` setting) are crucial for configuring the instructional context (also referred to as "memory") provided to the Gemini model. This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.
- Purpose: These Markdown files contain instructions, guidelines, or context that you want the Gemini model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.
-
- https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#example-context-file-content-eg-geminimd
-
Example Context File Content (e.g., `GEMINI.md`)
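The docs' sample content is not reproduced in these notes; a hypothetical `GEMINI.md` in that spirit (project name and rules are placeholders):

```markdown
# Project: My TypeScript Library

## General Instructions
- When generating new code, follow the existing coding style.
- All new functions and classes must have JSDoc comments.

## Regarding Dependencies
- Avoid introducing new external dependencies unless absolutely necessary.
```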
-
- https://docs.github.com/en/copilot/tutorials/coding-agent/get-the-best-results#adding-custom-instructions-to-your-repository
-
By adding custom instructions to your repository, you can guide Copilot on how to understand your project and how to build, test and validate its changes.
-
You can add instructions in a single `.github/copilot-instructions.md` file in the repository, or create one or more `.github/instructions/**/*.instructions.md` files applying to different files or directories in your repository. -
For more information, see Adding repository custom instructions for GitHub Copilot.
- https://docs.github.com/en/copilot/tutorials/coding-agent/get-the-best-results#pre-installing-dependencies-in-github-copilots-environment
-
Pre-installing dependencies in GitHub Copilot's environment
While working on a task, Copilot has access to its own ephemeral development environment, powered by GitHub Actions, where it can explore your code, make changes, execute automated tests and linters and more.
If Copilot is able to build, test and validate its changes in its own development environment, it is more likely to produce good pull requests which can be merged quickly.
To do that, it will need your project's dependencies. Copilot can discover and install these dependencies itself via a process of trial and error - but this can be slow and unreliable, given the non-deterministic nature of large language models (LLMs).
You can configure a `copilot-setup-steps.yml` file to pre-install these dependencies before the agent starts working so it can hit the ground running. For more information, see Customizing the development environment for GitHub Copilot coding agent.
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/customize-the-agent-environment#preinstalling-tools-or-dependencies-in-copilots-environment
-
Preinstalling tools or dependencies in Copilot's environment
-
You can preconfigure Copilot's environment before the agent starts by creating a special GitHub Actions workflow file, located at `.github/workflows/copilot-setup-steps.yml` within your repository.
A `copilot-setup-steps.yml` file looks like a normal GitHub Actions workflow file, but must contain a single `copilot-setup-steps` job. This job will be executed in GitHub Actions before Copilot starts working.
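A sketch of such a workflow for a Node.js project (the checkout and setup-node steps are typical GitHub Actions usage; the exact dependency steps are placeholders for your project):

```yaml
name: "Copilot Setup Steps"

on: workflow_dispatch

jobs:
  # The job must be named exactly `copilot-setup-steps`
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        run: npm ci
```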
-
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/customize-the-agent-environment#preinstalling-tools-or-dependencies-in-copilots-environment
-
-
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions
-
Adding repository custom instructions for GitHub Copilot
-
Create a file in a repository that automatically adds information to questions you ask Copilot Chat.
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions#creating-a-repository-custom-instructions-file
-
Creating a repository custom instructions file
-
Copilot Chat on the GitHub website, Copilot coding agent and Copilot code review support a single `.github/copilot-instructions.md` custom instructions file stored in the repository.
In addition, Copilot coding agent supports one or more `.instructions.md` files stored within `.github/instructions` in the repository. Each file can specify `applyTo` frontmatter to define what files or directories its instructions apply to.
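A hypothetical `.github/instructions/frontend.instructions.md` using the `applyTo` frontmatter (the glob and the instructions themselves are placeholders):

```markdown
---
applyTo: "src/frontend/**/*.tsx"
---

- Use functional React components with hooks; avoid class components.
- Style components with the project's existing CSS modules.
```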
-
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions#writing-effective-repository-custom-instructions
-
Writing effective repository custom instructions
-
The instructions you add to your custom instruction file(s) should be short, self-contained statements that provide Copilot with relevant information to help it work in this repository. Because the instructions are sent with every chat message, they should be broadly applicable to most requests you will make in the context of the repository.
-
-
- https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp
-
Extending Copilot Chat with the Model Context Protocol (MCP)
-
Learn how to use the Model Context Protocol (MCP) to extend Copilot Chat.
- https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp#configuring-mcp-servers-in-visual-studio-code
-
To configure MCP servers in Visual Studio Code, you need to set up a configuration script that specifies the details of the MCP servers you want to use. You can configure MCP servers for either:
- A specific repository. This will share MCP servers with anyone who opens the project in Visual Studio Code. To do this, create a
.vscode/mcp.jsonfile in the root of your repository. - Your personal instance of Visual Studio Code. You will be the only person who has access to configured MCP servers. To do this, add the configuration to your
settings.jsonfile in Visual Studio Code.
- A specific repository. This will share MCP servers with anyone who opens the project in Visual Studio Code. To do this, create a
-
-
- https://github.blog/changelog/2025-06-13-copilot-code-review-customization-for-all/
-
Copilot code review now supports the same custom instructions used by Copilot Chat and coding agent—unlocking personalized, consistent AI reviews across your workflow.
-
You can now tailor Copilot code review using `.github/copilot-instructions.md` — the same customization file already used by Copilot Chat and Copilot coding agent. This brings a consistent way to shape how Copilot responds across your entire workflow.
-
- https://github.blog/changelog/2025-07-23-github-copilot-coding-agent-now-supports-instructions-md-custom-instructions/
-
GitHub Copilot coding agent now supports `.instructions.md` custom instructions -
You can add custom instructions to your repository to teach Copilot how the repository works as well as how to run any build steps, automated tests, or linters. With these instructions, Copilot can produce higher quality pull requests.
Now, along with `.github/copilot-instructions.md`, Copilot coding agent supports `.instructions.md` files stored under `.github/instructions`.
You can create many `.instructions.md` files, and each one can use YAML frontmatter to specify which files or directories it applies to. This means that you can give Copilot different instructions for different parts of your codebase.
-
- https://plugins.jetbrains.com/plugin/17718-github-copilot/versions/stable/722432
-
GitHub Copilot 1.5.42-241
-
Added: Custom instructions for generating Chat and Git commit messages. Specify these in the `.github/copilot-instructions.md` or `.github/git-commit-instructions.md` files. - microsoft/copilot-intellij-feedback#38 (comment)
-
Support for repository custom instructions
-
You can create `.github/copilot-instructions.md` for custom instructions for inline chat and panel chat.
Additionally, you can create custom instructions for Git commit message generation in: `.github/git-commit-instructions.md` -
This is available in the latest release, 1.5.41.
-
-
- https://copilot-instructions.md/
-
Adding custom instructions for GitHub Copilot
- https://copilot-instructions.md/prompts.html
-
Godlike Prompts
-
-
- https://humanloop.com/
-
Your AI product needs evals
The LLM evals platform for enterprises. Humanloop gives you the tools that top teams use to ship and scale AI with confidence.
- https://humanloop.com/docs/reference/prompt-file-format
-
Prompt file format
-
Our file format for serializing Prompts to store alongside your source code.
-
Our `.prompt` file format is a serialized representation of a Prompt, designed to be human-readable and suitable for checking into your version control systems alongside your code. This allows technical teams to maintain the source of truth for their prompts within their existing version control workflow.
- https://humanloop.com/docs/reference/prompt-file-format#format
-
Format
The format is heavily inspired by MDX, with model and parameters specified in a YAML header alongside a JSX-inspired syntax for chat templates.
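A hypothetical example, assuming the YAML header plus JSX-inspired chat-template syntax described above (the exact keys and tag names are assumptions, not taken from the HumanLoop reference):

```
---
model: gpt-4o
temperature: 0.7
---
<system>
You are a concise assistant for our support team.
</system>
<user>
{{question}}
</user>
```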
-
-
-
- See Also:
- https://www.jetbrains.com/ai-assistant/
-
Exclude files from AI context
You can prevent AI Assistant from accessing specific files or folders by configuring an `.aiignore` file. This ensures restricted files are not processed when AI gathers project context.
-
- https://www.jetbrains.com/help/ai-assistant/getting-started-with-ai-assistant.html
- https://www.jetbrains.com/help/ai-assistant/configure-project-rules.html
-
Project-specific rules help AI Assistant better understand your code, preferred tools, and coding conventions. By defining these rules, you can improve the relevance of AI responses and ensure that suggestions align with your project setup.
By default, project rules are automatically added to each chat session, so AI Assistant adheres to the provided guidelines. You can customize this behavior – for example, apply rules only to specific files, invoke them manually, or let the model decide when to use them.
-
To configure project rules:
- Go to Settings | Tools | AI Assistant | Rules.
- Click New Project Rules File and provide a name for the file.
Alternatively, you can manually create the `.aiassistant/rules` folder in the project root and add a `.md` file there.
-
- https://www.jetbrains.com/help/ai-assistant/disable-ai-assistant.html
-
Restrict or disable AI Assistant features
- https://www.jetbrains.com/help/ai-assistant/disable-ai-assistant.html#restrict-ai-assistant-usage-for-project
-
Restrict usage of AI Assistant for a project
-
As an alternative to disabling AI Assistant, you can create a file that will restrict the usage of AI features in the project.
Create an empty file named `.noai` in the root directory of the project.
When this file is present, all AI Assistant features are fully disabled for the project. Even if this project is opened in another IDE, the AI Assistant features will not be available.
-
- https://www.jetbrains.com/help/ai-assistant/disable-ai-assistant.html#restrict-ai-assistant-usage-in-specific-files-or-folders
-
Restrict usage of AI Assistant in specific files or folders
-
You can restrict AI Assistant from processing specific files or folders by creating and configuring an `.aiignore` file. -
Files added to the `.aiignore` are not processed by AI Assistant, and AI features are disabled for them. However, in some cases, ignored files may still be processed due to unforeseen issues. If you notice any unexpected behavior, please report it so we can investigate. -
If your project already contains a `.cursorignore`, `.codeiumignore`, or `.aiexclude` file, there is no need to create a separate `.aiignore` file, as these files are also supported. As long as they are located in the root folder of the project, they will be used to restrict AI Assistant's access to the specified files and folders.
-
-
- https://www.jetbrains.com/help/ai-assistant/mcp.html
-
Model Context Protocol (MCP)
-
AI Assistant can interact with external tools and data sources via the Model Context Protocol (MCP). By connecting to MCP servers, AI Assistant gains access to a range of tools that significantly extend its capabilities.
- https://www.jetbrains.com/help/ai-assistant/mcp.html#use_ide_as_an_mcp_server
-
Use your IDE as an MCP server
-
Starting with version 2025.2, JetBrains IDEs come with an integrated MCP server, allowing external clients such as Claude Desktop, Cursor, VS Code, and others to access tools provided by the IDE. This provides users with the ability to control and interact with JetBrains IDEs without leaving their application of choice.
-
-
- https://www.jetbrains.com/help/ai-assistant/configure-project-rules.html
- See Also:
- https://www.jetbrains.com/junie/
-
Junie
-
Your smart coding agent
-
- https://www.jetbrains.com/help/junie/get-started-with-junie.html
- https://www.jetbrains.com/help/junie/customize-guidelines.html
-
Guidelines
-
Guidelines allow you to provide persistent, reusable context to the agent. Junie adds this context to every task it works on.
-
Guidelines are stored in the `.junie/guidelines.md` file in the root project directory, so you can version-control them and reuse them at the project level. -
You can add and edit the `.junie/guidelines.md` file manually, or prompt Junie to explore the project and generate the `guidelines.md` file for you.
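A hypothetical `.junie/guidelines.md` (the conventions listed are placeholders, not from the JetBrains docs):

```markdown
# Project Guidelines

## Code style
- Target Java 21; prefer records for immutable data carriers.
- Use constructor injection; avoid field injection.

## Testing
- Write tests with JUnit 5 and AssertJ.
- Every bug fix must include a regression test.
```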
-
- https://www.jetbrains.com/help/junie/aiignore.html
-
`.aiignore` -
You can restrict Junie from processing the contents of specific files or folders by creating and configuring an `.aiignore` file in the project root directory. If a file or folder is on the `.aiignore` list, Junie will ask for explicit approval before viewing or editing it. -
Only the contents of files listed in `.aiignore` are protected. Junie will still have access to the file and folder names. -
The `.aiignore` file follows the same syntax and pattern format as the `.gitignore` file.
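Since `.aiignore` uses `.gitignore` syntax, a sketch might look like the following (the paths are placeholders):

```
# Keep secrets and generated artifacts out of AI context
.env
secrets/
*.pem
build/
```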
-
- https://www.jetbrains.com/help/junie/model-context-protocol-mcp.html
-
Model Context Protocol (MCP)
-
You can connect Junie to Model Context Protocol (MCP) servers. This will extend Junie's capabilities with executable functionality for working with data sources and tools, such as file systems, productivity tools, or databases.
-
To connect Junie to an MCP server, use the `mcp.json` file in Junie's settings.
-
The MCP servers added via the MCP Settings page are saved to the `~/.junie/mcp.json` file in the home directory. Such servers are available globally for all projects that are opened in the IDE.
-
To configure an MCP server at the project level, add an `mcp.json` file manually to the `.junie/mcp` folder in the project root.
-
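A project-level `.junie/mcp/mcp.json` might look like the following sketch. The `mcpServers` shape mirrors the config format used by common MCP clients; check the linked docs for Junie's exact schema, and note that the server name and filesystem path here are illustrative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```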
- https://www.jetbrains.com/help/junie/customize-guidelines.html
- https://blog.jetbrains.com/idea/2025/05/coding-guidelines-for-your-ai-agents/#how-to-use-guidelines-with-junie
-
Junie is an autonomous AI coding agent from JetBrains. You can capture your coding style, best practices, and general preferences in the `.junie/guidelines.md` file so that Junie will follow them while generating code.
-
-
Check out the junie-guidelines repository, which has a catalog of guidelines for various technologies.
- https://github.com/JetBrains/junie-guidelines
-
Junie Guidelines
-
A catalog of technology-specific guidelines for optimizing Junie code generation.
-
- https://github.com/JetBrains/junie-guidelines
-
You can add all the guidelines for various technologies that you are using into the `.junie/guidelines.md` file and delegate tasks to Junie. Junie will take these guidelines into consideration while generating the code so that you don’t have to add these guidelines in each prompt.
-
You can ask Junie to create a `guidelines.md` file that includes the coding conventions being followed in the current codebase so far.
-
Once the `guidelines.md` file is generated, you can tweak it to add more guidelines or update the existing ones as needed.
- See Also (?):
- https://github.com/openai/codex
-
Codex can be guided by `AGENTS.md` files placed within your repository. These are text files, akin to `README.md`, where you can inform Codex how to navigate your codebase, which commands to run for testing, and how best to adhere to your project's standard practices. Like human developers, Codex agents perform best when provided with configured dev environments, reliable testing setups, and clear documentation.
- https://openai.com/index/introducing-codex/#appendix
-
System message
-
We are sharing the `codex-1` system message to help developers understand the model’s default behavior and tailor Codex to work effectively in custom workflows. For example, the `codex-1` system message encourages Codex to run all tests mentioned in the `AGENTS.md` file, but if you’re short on time, you can ask Codex to skip these tests.
-
..snip..
# AGENTS.md spec
- Containers often contain AGENTS.md files. These files can appear anywhere in the container's filesystem. Typical locations include `/`, `~`, and in various places inside of Git repos.
- These files are a way for humans to give you (the agent) instructions or tips for working within the container.
- Some examples might be: coding conventions, info about how code is organized, or instructions for how to run or test code.
- AGENTS.md files may provide instructions about PR messages (messages attached to a GitHub Pull Request produced by the agent, describing the PR). These instructions should be respected.
- Instructions in AGENTS.md files:
  - The scope of an AGENTS.md file is the entire directory tree rooted at the folder that contains it.
  - For every file you touch in the final patch, you must obey instructions in any AGENTS.md file whose scope includes that file.
  - Instructions about code style, structure, naming, etc. apply only to code within the AGENTS.md file's scope, unless the file states otherwise.
  - More-deeply-nested AGENTS.md files take precedence in the case of conflicting instructions.
  - Direct system/developer/user instructions (as part of a prompt) take precedence over AGENTS.md instructions.
- AGENTS.md files need not live only in Git repos. For example, you may find one in your home directory.
- If the AGENTS.md includes programmatic checks to verify your work, you MUST run all of them and make a best effort to validate that the checks pass AFTER all code changes have been made.
  - This applies even for changes that appear simple, i.e. documentation. You still must run all of the programmatic checks.
..snip..
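To make the spec concrete, a repo-root `AGENTS.md` following the rules quoted above might look like this (all of the contents are invented for illustration):

```markdown
# AGENTS.md (repo root - applies to the whole tree)

## Code organization
- Application code lives in `src/`, tests in `tests/`.

## Conventions
- Use 4-space indentation; no tabs.

## Programmatic checks
Run these after any change, including documentation-only changes:
- `make lint`
- `make test`

## PR messages
- Start the PR title with the affected component, e.g. `parser: ...`.
```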
-
-
- See Also (?):
- https://github.com/openai/codex
-
Lightweight coding agent that runs in your terminal
- https://github.com/openai/codex#memory--project-docs
-
Memory & project docs
You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down:
- `~/.codex/AGENTS.md` - personal global guidance
- `AGENTS.md` at repo root - shared project notes
- `AGENTS.md` in the current working directory - sub-folder/feature specifics
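The top-down merge described above can be sketched in Python. This is a rough illustration of the documented lookup order, not Codex's actual implementation; the function name and the blank-line join are assumptions:

```python
from pathlib import Path

def merged_agents_md(cwd: Path, repo_root: Path, home: Path) -> str:
    """Concatenate AGENTS.md files in the documented top-down order:
    personal global guidance first, then the repo root, then the
    current working directory (most specific file comes last)."""
    candidates = [
        home / ".codex" / "AGENTS.md",  # personal global guidance
        repo_root / "AGENTS.md",        # shared project notes
        cwd / "AGENTS.md",              # sub-folder/feature specifics
    ]
    # Missing files are simply skipped; existing ones are joined top-down.
    return "\n\n".join(p.read_text() for p in candidates if p.is_file())
```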
-
- https://github.com/openai/codex#example-prompts
-
Example prompts
Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the prompting guide for more tips and usage patterns.
-
- https://github.com/openai/codex/tree/main/codex-cli/examples
-
Quick start examples
This directory bundles some self‑contained examples using the Codex CLI.
-
If you want to get started using the Codex CLI directly, skip this and refer to the prompting guide.
- https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md
- Note: This seems to have been removed
- openai/codex#2374
-
Issue #2374: Fix/restore `prompting_guide.md`, or remove link to it in `README.md`
-
- openai/codex#2374
-
Prompting guide
- https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md#custom-instructions
-
Custom instructions
Codex supports two types of Markdown-based instruction files that influence model behavior and prompting:
- `~/.codex/instructions.md` - Global, user-level custom guidance injected into every session. You should keep this relatively short and concise. These instructions are applied to all Codex runs across all projects and are great for personal defaults, shell setup tips, safety constraints, or preferred tools.
  - Example: "Before executing shell commands, create and activate a `.codex-venv` Python environment." or "Avoid running pytest until you've completed all your changes."
- `CODEX.md` - Project-specific instructions loaded from the current directory or Git root. Use this for repo-specific context, file structure, command policies, or project conventions. These are automatically detected unless `--no-project-doc` or `CODEX_DISABLE_PROJECT_DOC=1` is set.
  - Example: "All React components live in `src/components/`."
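An illustrative `CODEX.md` might look like the following (the file is free-form Markdown, and every rule here is invented for the sake of example):

```markdown
# CODEX.md

- All React components live in `src/components/`.
- Do not edit generated files under `src/__generated__/`.
- Run `npm test` before finishing; use `npm run lint -- --fix` for style issues.
```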
-
- Note: This seems to have been removed
-
-
- https://prompts.chat/
-
prompts.chat - World's First & Most Famous Prompts Directory
- https://prompts.chat/vibe/
-
awesome vibe coding prompts to help you build simple apps
-
- https://github.com/f/awesome-chatgpt-prompts
-
This repo includes ChatGPT prompt curation to use ChatGPT and other LLM tools better.
-
-
- https://copilot-instructions.md/prompts.html
-
Godlike Prompts
-
- TODO: Find and add other examples (e.g. aider (`.aider.conf.yml`), llm, etc.?)
  - This (private) ChatGPT convo gave some other suggestions that I need to look into deeper still: https://chatgpt.com/c/680b3bdc-80e8-8008-b05e-86d3e0b627a6
-
`CLAUDE.md`: Used by Claude (Anthropic) as a signal to scan the repo and use this file for context. It’s suggested in their documentation and blog posts.
-
`.aider.conf.json` (or `aider.conf.json`): Used by Aider, a GPT-based coding assistant. Can include config such as files to include/exclude, model settings, etc.
-
`.aider.chat.md`: Aider can also use this (or similarly named `.aider.md`) to persist chat history or provide persistent context for the assistant. While not always required, it’s sometimes created in the repo as a place to put system instructions or notes for context between runs.
-
`.prompt.md`, `PROMPT.md`, or `INSTRUCTIONS.md`: Some AI agents or prompts (especially for open-source wrappers around GPT like smol-ai, Continue, or custom LangChain agents) look for files like these in root for either default instructions or human-readable context.
-
`.continue/context.json`: Used by Continue (an open-source AI code agent IDE extension) to provide user preferences or context inclusion rules.
-
`prompt.config.json` / `agent.config.json`: Custom LLM wrappers, especially those built with LangChain, Autogen, or AgentScript, sometimes define `.config` or `.prompt.*` files in root for behavior tuning.
-
`.smol-dev.yaml`: The smol-ai developer tools may use YAML-based configs for defining how the assistant should scaffold or interact with the repo.
-
There’s a growing informal convention for files that help tune LLM behavior:
- `AI.md` or `AI_INSTRUCTIONS.md`: General-purpose file to guide any AI tooling in a repo
- `CONTRIBUTING.md`: While not AI-specific, many LLMs (like Copilot or Claude) are trained to respect these as guidance for changes
- `README.ai.md`: Separate AI-focused readme, e.g. summarizing intent, goals, style guides, etc.
- `STYLEGUIDE.md`: Useful for AI tooling that supports code style customization or alignment
-