# AI Agent Rule / Instruction / Context files / etc

Some notes on AI Agent Rule / Instruction / Context files / etc.

## Table of Contents

- [llms.txt](#llmstxt)
- [Agents / Tools](#agents--tools)
- [Better Touch Tool (BTT) / h@llo.ai](#better-touch-tool-btt--hlloai)
- [Claude Code](#claude-code)
- [Claude Desktop](#claude-desktop)
- [Continue](#continue)
- [Cursor](#cursor)
- [Gemini CLI](#gemini-cli)
- [GitHub Copilot](#github-copilot)
- [Humanloop](#humanloop)
- [JetBrains AI Assistant](#jetbrains-ai-assistant)
- [JetBrains Junie](#jetbrains-junie)
- [OpenAI Codex](#openai-codex)
- [OpenAI Codex CLI](#openai-codex-cli)
- [Prompts](#prompts)
- [Unsorted](#unsorted)
- [See Also](#see-also)
- [My Other Related Deepdive Gists and Projects](#my-other-related-deepdive-gists-and-projects)

## llms.txt

- https://llmstxt.org/
  - > The `/llms.txt` file
  - > A proposal to standardise on using an `/llms.txt` file to provide information to help LLMs use a website at inference time.
- https://llmstxt.org/#proposal
  - > Proposal
    >
    > We propose adding a `/llms.txt` markdown file to websites to provide LLM-friendly content. This file offers brief background information, guidance, and links to detailed markdown files.
    >
    > llms.txt markdown is human and LLM readable, but is also in a precise format allowing fixed processing methods (i.e. classical programming techniques such as parsers and regex).
    >
    > We furthermore propose that pages on websites that have information that might be useful for LLMs to read provide a clean markdown version of those pages at the same URL as the original page, but with `.md` appended. (URLs without file names should append `index.html.md` instead.)
  - > This proposal does not include any particular recommendation for how to process the llms.txt file, since it will depend on the application. For example, the FastHTML project opted to automatically expand the llms.txt into two markdown files with the contents of the linked URLs, using an XML-based structure suitable for use in LLMs such as Claude. The two files are: [llms-ctx.txt](https://answerdotai.github.io/fasthtml/llms-ctx.txt), which does not include the optional URLs, and [llms-ctx-full.txt](https://answerdotai.github.io/fasthtml/llms-ctx-full.txt), which does include them. They are created using the [`llms_txt2ctx`](https://llmstxt.org/intro.html#cli) command line application, and the FastHTML documentation includes information for users about how to use them.
- https://llmstxt.org/#format
  - > Format
- https://llmstxt.org/#existing-standards
  - > Existing Standards
- https://github.com/AnswerDotAI/llms-txt
  - > The `/llms.txt` file, helping language models use your website
- https://llmstxt.site/
  - > llms.txt directory
  - > A list of all `llms.txt` file locations across the web with stats.
    >
    > The `llms.txt` is derived from the llmstxt.org standard.
- https://github.com/krish-adi/llmstxt-site
  - > llmstxt-site
  - > This is a centralized directory of all `/llms.txt` files available online. The `/llms.txt` file is a proposed standard for websites to provide concise and structured information to help large language models (LLMs) efficiently use website content during inference time.
    >
    > Contributions are the backbone of this repository’s success. Let’s work together to build a comprehensive resource for `/llms.txt` files and advance the adoption of this standard for LLM-friendly content!
- https://directory.llmstxt.cloud/
  - > `/llms.txt` directory
  - > A curated directory of products and companies leading the adoption of the llms.txt standard.
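The proposal's format section prescribes a small fixed structure: a required H1 project name, a blockquote summary, optional prose, then H2 sections containing markdown link lists (with an `Optional` section for URLs that can be skipped when a shorter context is needed). A minimal sketch, with the project name, URLs, and notes invented for illustration:

```markdown
# Example Project

> Example Project is a small library for parsing widget files.

Optional background detail an LLM may find useful when reading the linked docs.

## Docs

- [Quick start](https://example.com/quickstart.md): short getting-started guide
- [API reference](https://example.com/api.md): function-level documentation

## Optional

- [Changelog](https://example.com/changelog.md): release history, safe to skip
```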
## Agents / Tools

### Better Touch Tool (BTT) / h@llo.ai

- https://docs.folivora.ai/docs/3000_hallo_ai.html
- https://docs.folivora.ai/docs/3000_hallo_ai.html#projects
  - > Projects
  - > When calling the AI Assistant while in some sort of project, BTT checks whether there is an `AGENT.md` or `BTT.md` file. If so it will read the content of that file and use it to adapt the assistant to the project.
- https://docs.folivora.ai/docs/3012_halloai_mcp.html
  - > h@llo.ai MCP
  - > BetterTouchTool's MCP server configuration by default is stored in this file: `~/Library/Application Support/BetterTouchTool/AI/btt-mcp-config.json`
  - > Alternative Configuration Locations
    >
    > It can also be stored in these files - however you should pick one and stick with that.
    >
    > - `~/.config/btt/mcp/.mcp.json`
    > - `~/.btt/mcp/.mcp.json`
    > - `~/Library/Application Support/BetterTouchTool/AI/`
    > - `/Library/Application Support/BetterTouchTool/AI/`

### Claude Code

- See Also (?):
  - [Claude Desktop](#claude-desktop)
- https://www.anthropic.com/engineering/claude-code-best-practices
  - > Claude Code: Best practices for agentic coding
  - > Claude Code is a command line tool for agentic coding. This post covers tips and tricks that have proven effective for using Claude Code across various codebases, languages, and environments.
- https://www.anthropic.com/engineering/claude-code-best-practices#1-customize-your-setup
  - > Customize your setup
    >
    > Claude Code is an agentic coding assistant that automatically pulls context into prompts. This context gathering consumes time and tokens, but you can optimize it through environment tuning.
  - > Create `CLAUDE.md` files
    >
    > `CLAUDE.md` is a special file that Claude automatically pulls into context when starting a conversation. This makes it an ideal place for documenting:
    >
    > - Common bash commands
    > - Core files and utility functions
    > - Code style guidelines
    > - Testing instructions
    > - Repository etiquette (e.g., branch naming, merge vs. rebase, etc.)
    > - Developer environment setup (e.g., pyenv use, which compilers work)
    > - Any unexpected behaviors or warnings particular to the project
    > - Other information you want Claude to remember
    >
    > There’s no required format for `CLAUDE.md` files. We recommend keeping them concise and human-readable.
  - > You can place `CLAUDE.md` files in several locations:
    >
    > - **The root of your repo**, or wherever you run `claude` from (the most common usage). Name it `CLAUDE.md` and check it into git so that you can share it across sessions and with your team (recommended), or name it `CLAUDE.local.md` and `.gitignore` it
    > - **Any parent of the directory** where you run `claude`. This is most useful for monorepos, where you might run `claude` from `root/foo`, and have `CLAUDE.md` files in both `root/CLAUDE.md` and `root/foo/CLAUDE.md`. Both of these will be pulled into context automatically
    > - **Any child of the directory** where you run `claude`. This is the inverse of the above, and in this case, Claude will pull in `CLAUDE.md` files on demand when you work with files in child directories
    > - **Your home folder** (`~/.claude/CLAUDE.md`), which applies it to all your _claude_ sessions
    >
    > When you run the `/init` command, Claude will automatically generate a `CLAUDE.md` for you.
  - > Tune your `CLAUDE.md` files
    >
    > Your `CLAUDE.md` files become part of Claude’s prompts, so they should be refined like any frequently used prompt. A common mistake is adding extensive content without iterating on its effectiveness. Take time to experiment and determine what produces the best instruction following from the model.
    >
    > You can add content to your `CLAUDE.md` manually or press the `#` key to give Claude an instruction that it will automatically incorporate into the relevant `CLAUDE.md`. Many engineers use `#` frequently to document commands, files, and style guidelines while coding, then include `CLAUDE.md` changes in commits so team members benefit as well.
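Pulling the checklist above together, a minimal `CLAUDE.md` might look like the following. Every command and convention here is an invented placeholder, not a recommendation:

```markdown
# Project notes for Claude

## Common commands

- `npm run build`: build the project
- `npm test`: run the full test suite

## Style

- Use ES modules (`import`/`export`), not CommonJS
- Prefer small, pure functions over classes

## Repository etiquette

- Branch names: `feature/<ticket-id>-short-description`
- Rebase onto `main`; do not merge-commit

IMPORTANT: never edit files under `vendor/`; they are generated.
```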
  - > At Anthropic, we occasionally run `CLAUDE.md` files through the [prompt improver](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prompt-improver) and often tune instructions (e.g. adding emphasis with "IMPORTANT" or "YOU MUST") to improve adherence.
  - > Curate Claude's list of allowed tools
  - > There are four ways to manage allowed tools
    >
    > - ..snip..
    > - **Manually edit** your `.claude/settings.json` or `~/.claude.json` (we recommend checking the former into source control to share with your team).
    > - ..snip..
- https://www.anthropic.com/engineering/claude-code-best-practices#2-give-claude-more-tools
  - > Give Claude more tools
  - > Use Claude with MCP
    >
    > Claude Code functions as both an MCP server and client. As a client, it can connect to any number of MCP servers to access their tools in three ways:
    >
    > - **In project config** (available when running Claude Code in that directory)
    > - **In global config** (available in all projects)
    > - **In a checked-in `.mcp.json` file** (available to anyone working in your codebase). For example, you can add Puppeteer and Sentry servers to your `.mcp.json`, so that every engineer working on your repo can use these out of the box.
  - > Use custom slash commands
    >
    > For repeated workflows—debugging loops, log analysis, etc.—store prompt templates in Markdown files within the `.claude/commands` folder. These become available through the slash commands menu when you type `/`. You can check these commands into git to make them available for the rest of your team.
    >
    > Custom slash commands can include the special keyword `$ARGUMENTS` to pass parameters from command invocation.
  - > Putting the above content into `.claude/commands/fix-github-issue.md` makes it available as the `/project:fix-github-issue` command in Claude Code. You could then for example use `/project:fix-github-issue 1234` to have Claude fix issue #1234.
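As a sketch of the mechanism, a hypothetical `.claude/commands/fix-github-issue.md` using `$ARGUMENTS` might contain something like this (the wording is illustrative, not the file from the post):

```markdown
Please analyze and fix GitHub issue #$ARGUMENTS.

1. Use `gh issue view $ARGUMENTS` to read the issue details
2. Locate the relevant code and implement a fix
3. Run the test suite and make sure it passes
4. Commit with a message referencing the issue number
```

Invoking `/project:fix-github-issue 1234` substitutes `1234` for `$ARGUMENTS` before the prompt is sent.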
  - > Similarly, you can add your own personal commands to the `~/.claude/commands` folder for commands you want available in all of your sessions.

### Claude Desktop

- See Also (?):
  - [Claude Code](#claude-code)
- https://modelcontextprotocol.io/quickstart/user
  - > For Claude Desktop Users
    >
    > Get started using pre-built servers in Claude for Desktop.
- https://modelcontextprotocol.io/quickstart/user#2-add-the-filesystem-mcp-server
  - > This will create a configuration file at:
    >
    > - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
    > - Windows: `%APPDATA%\Claude\claude_desktop_config.json`

### Continue

- https://hub.continue.dev/
- https://docs.continue.dev/
  - > Continue enables developers to create, share, and use custom AI code assistants with our open-source VS Code and JetBrains extensions and hub of models, rules, prompts, docs, and other building blocks
- https://docs.continue.dev/reference
  - > `config.yaml` Reference
  - > Continue hub assistants are defined using the `config.yaml` specification. Assistants can be loaded from [the Hub](https://hub.continue.dev/explore/assistants) or locally
    >
    > - [Continue Hub](https://hub.continue.dev/explore/assistants) - YAML is stored on the hub and automatically synced to the extension
    > - Locally
    >   - in your global `.continue` folder (`~/.continue` on Mac, `%USERPROFILE%\.continue`) within `.continue/assistants`. The name of the file will be used as the display name of the assistant, e.g. `My Assistant.yaml`
    >   - in your workspace in a `/.continue/assistants` folder, with the same naming convention
    >
    > Config YAML replaces [`config.json`](https://docs.continue.dev/json-reference), which is deprecated. View the **[Migration Guide](https://docs.continue.dev/yaml-migration)**.
    >
    > An assistant is made up of:
    >
    > 1. **Top level properties**, which specify the `name`, `version`, and `config.yaml` `schema` for the assistant
    > 2. **Block lists**, which are composable arrays of coding assistant building blocks available to the assistant, such as models, docs, and context providers.
    >
    > A block is a single standalone building block of a coding assistant, e.g., one model or one documentation source. In `config.yaml` syntax, a block consists of the same top-level properties as assistants (`name`, `version`, and `schema`), but only has **ONE** item under whichever block type it is.
    >
    > Examples of blocks and assistants can be found on the [Continue hub](https://hub.continue.dev/explore/assistants).
    >
    > Assistants can either explicitly define blocks - see [Properties](https://docs.continue.dev/reference#properties) below - or import and configure existing hub blocks.
- https://docs.continue.dev/reference#local-blocks
  - > Local Blocks
    >
    > It is also possible to define blocks locally in a `.continue` folder. This folder can be located at either the root of your workspace (these will automatically be applied to all assistants when you are in that workspace) or in your home directory at `~/.continue` (these will automatically be applied globally).
    >
    > Place your YAML files in the following folders:
    >
    > Assistants:
    >
    > - `.continue/assistants` - for assistants
    >
    > Blocks:
    >
    > - `.continue/rules` - for rules
    > - `.continue/models` - for models
    > - `.continue/prompts` - for prompts
    > - `.continue/context` - for context providers
    > - `.continue/docs` - for docs
    > - `.continue/data` - for data
    > - `.continue/mcpServers` - for MCP Servers
    >
    > You can find many examples of each of these block types on the [Continue Explore Page](https://hub.continue.dev/explore/models)
- https://docs.continue.dev/reference#complete-yaml-config-example
  - > Complete YAML Config Example
    >
    > Putting it all together, here's a complete example of a `config.yaml` configuration file
- https://docs.continue.dev/blocks/models
  - > Model Blocks
    >
    > These blocks form the foundation of the entire assistant experience, offering different specialized capabilities:
    >
    > - **[Chat](https://docs.continue.dev/customize/model-roles/chat)**: Power conversational interactions about code and provide detailed guidance
    > - **[Edit](https://docs.continue.dev/customize/model-roles/edit)**: Handle complex code transformations and refactoring tasks
    > - **[Apply](https://docs.continue.dev/customize/model-roles/apply)**: Execute targeted code modifications with high accuracy
    > - **[Autocomplete](https://docs.continue.dev/customize/model-roles/autocomplete)**: Provide real-time suggestions as developers type
    > - **[Embedding](https://docs.continue.dev/customize/model-roles/embeddings)**: Transform code into vector representations for semantic search
    > - **[Reranker](https://docs.continue.dev/customize/model-roles/reranking)**: Improve search relevance by ordering results based on semantic meaning
- https://docs.continue.dev/blocks/context-providers
  - > Context Blocks
    >
    > These blocks determine what internal information your AI assistant can access
- https://docs.continue.dev/blocks/rules
  - > Rules Blocks
    >
    > Think of these as the guardrails for your AI coding assistants:
    >
    > - **Enforce company-specific coding standards** and security practices
    > - **Implement quality checks** that match your engineering culture
    > - **Create paved paths** for developers to follow organizational best practices
- https://docs.continue.dev/customize/deep-dives/rules#continuerules
  - > `.continuerules`
  - > You can create project-specific rules by adding a `.continuerules` file to the root of your project. This file is raw text and its full contents will be used as rules.
- https://docs.continue.dev/blocks/prompts
  - > Prompt Blocks
    >
    > These are the specialized instructions that shape how models respond:
    >
    > - **Define interaction patterns** for specific tasks or frameworks
    > - **Encode domain expertise** for particular technologies
    > - **Ensure consistent guidance** aligned with organizational practices
    > - **Can be shared and reused** across multiple assistants
    > - **Act as automated code reviewers** that ensure consistency across teams
- https://docs.continue.dev/customize/deep-dives/prompts#local-prompt-files
  - > Local `.prompt` files
  - > In addition to Prompt blocks on the Hub, you can also define prompts in local `.prompt` files, located in the `.continue/prompts` folder at the top level of your workspace. This is useful for quick iteration on prompts to test them out before pushing up to the Hub.
  - > Below is a quick example of setting up a prompt file:
    >
    > - Create a folder called `.continue/prompts` at the top level of your workspace
    > - Add a file called `test.prompt` to this folder.
    > - Write the following contents to `test.prompt` and save.
- https://docs.continue.dev/customize/deep-dives/prompts#format
  - > Format
    >
    > The format is inspired by [HumanLoop's `.prompt` file](https://docs.humanloop.com/docs/prompt-file-format), with additional templating to reference files, URLs, and context providers.
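Following the steps above, a `test.prompt` might look roughly like this: a small YAML header, a `---` separator, then the prompt body. Treat the field names and the triple-brace templating below as an illustrative sketch rather than a syntax reference; the Continue docs linked above have the authoritative format:

```
name: test
description: Write unit tests for the selected code
---
Write a suite of unit tests for the following code, covering edge cases:

{{{ input }}}
```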
- https://docs.continue.dev/blocks/mcp
  - > MCP Blocks
    >
    > Model Context Protocol servers provide specialized functionality:
    >
    > - Enable integration with external tools and systems
    > - Create extensible interfaces for custom capabilities
    > - Support more complex interactions with your development environment
    > - Allow partners to contribute specialized functionality
    > - Database Connectors: Understand schema and data models during development

### Cursor

- https://docs.cursor.com/context/rules
  - > Rules
  - > Control how the Agent model behaves with reusable, scoped instructions.
    >
    > Rules allow you to provide system-level guidance to the Agent and Cmd-K AI. Think of them as a persistent way to encode context, preferences, or workflows for your projects or for yourself.
  - > We support three types of rules:
    >
    > - Project Rules: Stored in `.cursor/rules`, version-controlled and scoped to your codebase.
    > - User Rules: Global to your Cursor environment. Defined in settings and always applied.
    > - `.cursorrules` (Legacy): Still supported, but deprecated. Use Project Rules instead.
- https://docs.cursor.com/context/rules#project-rules
  - > Project rules
  - > Project rules live in `.cursor/rules`. Each rule is stored as a file and version-controlled. They can be scoped using path patterns, invoked manually, or included based on relevance.
    >
    > Use project rules to:
    >
    > - Encode domain-specific knowledge about your codebase
    > - Automate project-specific workflows or templates
    > - Standardize style or architecture decisions
- https://docs.cursor.com/context/rules#cursorrules-legacy
  - > `.cursorrules` (Legacy)
  - > The `.cursorrules` file in the root of your project is still supported, but will be deprecated. We recommend migrating to the Project Rules format for more control, flexibility, and visibility.
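Project rules are written as `.mdc` files in `.cursor/rules` with metadata frontmatter that controls when they are attached. A sketch, where the frontmatter keys reflect Cursor's rule format but the description, glob, and rule text are invented:

```markdown
---
description: Conventions for API route handlers
globs: src/api/**/*.ts
alwaysApply: false
---

- Validate all request bodies before use
- Return errors as JSON objects with `code` and `message` fields
```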
- https://docs.cursor.com/context/ignore-files
  - > Ignore Files
  - > Control which files Cursor’s AI features and indexing can access using `.cursorignore` and `.cursorindexingignore`
  - > Cursor reads and indexes your project’s codebase to power its features. You can control which directories and files Cursor can access by adding a `.cursorignore` file to your root directory.
- https://docs.cursor.com/context/model-context-protocol
  - > Model Context Protocol
  - > Connect external tools and data sources to Cursor using the Model Context Protocol (MCP) plugin system
- https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers
  - > The MCP configuration file uses a JSON format
- https://docs.cursor.com/context/model-context-protocol#configuration-locations
  - > Configuration Locations
    >
    > You can place this configuration in two locations, depending on your use case:
    >
    > - Project Configuration
    >   - For tools specific to a project, create a `.cursor/mcp.json` file in your project directory. This allows you to define MCP servers that are only available within that specific project.
    > - Global Configuration
    >   - For tools that you want to use across all projects, create a `~/.cursor/mcp.json` file in your home directory. This makes MCP servers available in all your Cursor workspaces.
- https://github.com/PatrickJS/awesome-cursorrules
  - > Awesome CursorRules
  - > A curated list of awesome `.cursorrules` files for enhancing your Cursor AI experience.

### Gemini CLI

- https://github.com/google-gemini/gemini-cli
  - > Gemini CLI
  - > An open-source AI agent that brings the power of Gemini directly into your terminal.
  - > This repository contains the Gemini CLI, a command-line AI workflow tool that connects to your tools, understands your code and accelerates your workflows.
- https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#available-settings-in-settingsjson
  - > Available settings in `settings.json`:
    >
    > - **`contextFileName`** (string or array of strings):
    >   - **Description:** Specifies the filename for context files (e.g., `GEMINI.md`, `AGENTS.md`). Can be a single filename or a list of accepted filenames.
    >   - **Default:** `GEMINI.md`
    >   - **Example:** `"contextFileName": "AGENTS.md"`
- https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#context-files-hierarchical-instructional-context
  - > **Context Files (Hierarchical Instructional Context)**
    >
    > While not strictly configuration for the CLI's _behavior_, context files (defaulting to `GEMINI.md` but configurable via the `contextFileName` setting) are crucial for configuring the _instructional context_ (also referred to as "memory") provided to the Gemini model. This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.
    >
    > - **Purpose:** These Markdown files contain instructions, guidelines, or context that you want the Gemini model to be aware of during your interactions. The system is designed to manage this instructional context hierarchically.
- https://github.com/google-gemini/gemini-cli/blob/main/docs/cli/configuration.md#example-context-file-content-eg-geminimd
  - > Example Context File Content (e.g., `GEMINI.md`)

### GitHub Copilot

- https://docs.github.com/en/copilot/tutorials/coding-agent/get-the-best-results#adding-custom-instructions-to-your-repository
  - > By adding custom instructions to your repository, you can guide Copilot on how to understand your project and how to build, test and validate its changes.
  - > You can add instructions in a single `.github/copilot-instructions.md` file in the repository, or create one or more `.github/instructions/**/*.instructions.md` files applying to different files or directories in your repository.
  - > For more information, see [Adding repository custom instructions for GitHub Copilot](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot?tool=webui).
- https://docs.github.com/en/copilot/tutorials/coding-agent/get-the-best-results#pre-installing-dependencies-in-github-copilots-environment
  - > Pre-installing dependencies in GitHub Copilot's environment
    >
    > While working on a task, Copilot has access to its own ephemeral development environment, powered by GitHub Actions, where it can explore your code, make changes, execute automated tests and linters and more.
    >
    > If Copilot is able to build, test and validate its changes in its own development environment, it is more likely to produce good pull requests which can be merged quickly.
    >
    > To do that, it will need your project's dependencies. Copilot can discover and install these dependencies itself via a process of trial and error - but this can be slow and unreliable, given the non-deterministic nature of large language models (LLMs).
    >
    > You can configure a `copilot-setup-steps.yml` file to pre-install these dependencies before the agent starts working so it can hit the ground running. For more information, see [Customizing the development environment for GitHub Copilot coding agent](https://docs.github.com/en/copilot/customizing-copilot/customizing-the-development-environment-for-copilot-coding-agent#preinstalling-tools-or-dependencies-in-copilots-environment).
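A minimal `copilot-setup-steps.yml` sketch. The single `copilot-setup-steps` job is the documented requirement; the Node version and install command are placeholders for whatever your project needs:

```yaml
# .github/workflows/copilot-setup-steps.yml
name: "Copilot Setup Steps"
on: workflow_dispatch
jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        run: npm ci
```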
- https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/customize-the-agent-environment#preinstalling-tools-or-dependencies-in-copilots-environment
  - > Preinstalling tools or dependencies in Copilot's environment
  - > you can preconfigure Copilot's environment before the agent starts by creating a special GitHub Actions workflow file, located at `.github/workflows/copilot-setup-steps.yml` within your repository.
    >
    > A `copilot-setup-steps.yml` file looks like a normal GitHub Actions workflow file, but must contain a single `copilot-setup-steps` job. This job will be executed in GitHub Actions before Copilot starts working.
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions
  - > Adding repository custom instructions for GitHub Copilot
  - > Create a file in a repository that automatically adds information to questions you ask Copilot Chat.
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions#creating-a-repository-custom-instructions-file
  - > Creating a repository custom instructions file
  - > Copilot Chat on the GitHub website, Copilot coding agent and Copilot code review support a single `.github/copilot-instructions.md` custom instructions file stored in the repository.
    >
    > In addition, Copilot coding agent supports one or more `.instructions.md` files stored within `.github/instructions` in the repository. Each file can specify `applyTo` frontmatter to define what files or directories its instructions apply to.
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions#writing-effective-repository-custom-instructions
  - > Writing effective repository custom instructions
  - > The instructions you add to your custom instruction file(s) should be short, self-contained statements that provide Copilot with relevant information to help it work in this repository. Because the instructions are sent with every chat message, they should be broadly applicable to most requests you will make in the context of the repository.
- https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp
  - > Extending Copilot Chat with the Model Context Protocol (MCP)
  - > Learn how to use the Model Context Protocol (MCP) to extend Copilot Chat.
- https://docs.github.com/en/copilot/customizing-copilot/extending-copilot-chat-with-mcp#configuring-mcp-servers-in-visual-studio-code
  - > To configure MCP servers in Visual Studio Code, you need to set up a configuration script that specifies the details of the MCP servers you want to use. You can configure MCP servers for either:
    >
    > - A specific repository. This will share MCP servers with anyone who opens the project in Visual Studio Code. To do this, create a `.vscode/mcp.json` file in the root of your repository.
    > - Your personal instance of Visual Studio Code. You will be the only person who has access to configured MCP servers. To do this, add the configuration to your `settings.json` file in Visual Studio Code.
- https://github.blog/changelog/2025-06-13-copilot-code-review-customization-for-all/
  - > Copilot code review now supports the same custom instructions used by Copilot Chat and coding agent—unlocking personalized, consistent AI reviews across your workflow.
  - > You can now tailor Copilot code review using `.github/copilot-instructions.md` — the same customization file already used by Copilot Chat and Copilot coding agent. This brings a consistent way to shape how Copilot responds across your entire workflow.
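An `.instructions.md` file under `.github/instructions` with the `applyTo` frontmatter described above might look like this (the path glob and instruction text are invented for illustration):

```markdown
---
applyTo: "src/frontend/**"
---

Use React function components with hooks; do not add new class components.
Route all user-facing strings through the project's i18n helper.
```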
- https://github.blog/changelog/2025-07-23-github-copilot-coding-agent-now-supports-instructions-md-custom-instructions/ - > GitHub Copilot coding agent now supports `.instructions.md` custom instructions - > You can add [custom instructions](https://docs.github.com/enterprise-cloud@latest/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot) to your repository to teach Copilot how the repository works as well as how to run any build steps, automated tests, or linters. With these instructions, Copilot can produce higher quality pull requests. > > Now, along with `.github/copilot-instructions.md`, Copilot coding agent supports `.instructions.md` files stored under `.github/instructions`. > > You can create many `.instructions.md` files, and each one can use YAML frontmatter to specify which files or directories it applies to. This means that you can give Copilot different instructions for different parts of your codebase. - https://plugins.jetbrains.com/plugin/17718-github-copilot/versions/stable/722432 - > GitHub Copilot 1.5.42-241 - > Added: Custom instructions for generating Chat and Git commit messages. Specify these in the `.github/copilot-instructions.md` or `.github/git-commit-instructions.md` files. - https://github.com/microsoft/copilot-intellij-feedback/issues/38#issuecomment-2763249941 - > Support for repository custom instructions - > You can create `.github/copilot-instructions.md` for custom instructions for inline chat and panel chat. > > Additionally, you can create custom instructions for Git commit message generation in: `.github/git-commit-instructions.md` - > This is available in the latest release, 1.5.41. - https://copilot-instructions.md/ - > Adding custom instructions for GitHub Copilot - https://copilot-instructions.md/prompts.html - > Godlike Prompts ### Humanloop - https://humanloop.com/ - > Your AI product needs evals > The LLM evals platform for enterprises. 
Humanloop gives you the tools that top teams use to ship and scale AI with confidence. - https://humanloop.com/docs/reference/prompt-file-format - > Prompt file format - > Our file format for serializing Prompts to store alongside your source code. - > Our `.prompt` file format is a serialized representation of a Prompt, designed to be human-readable and suitable for checking into your version control systems alongside your code. This allows technical teams to maintain the source of truth for their prompts within their existing version control workflow. - https://humanloop.com/docs/reference/prompt-file-format#format - > Format > > The format is heavily inspired by [MDX](https://mdxjs.com/), with model and parameters specified in a YAML header alongside a JSX-inspired syntax for chat templates. ### JetBrains AI Assistant - See Also: - [JetBrains Junie](#jetbrains-junie) - https://www.jetbrains.com/ai-assistant/ - > Exclude files from AI context > > You can prevent AI Assistant from accessing specific files or folders by configuring an `.aiignore` file. This ensures restricted files are not processed when AI gathers project context. - https://www.jetbrains.com/help/ai-assistant/getting-started-with-ai-assistant.html - https://www.jetbrains.com/help/ai-assistant/configure-project-rules.html - > Project-specific rules help AI Assistant better understand your code, preferred tools, and coding conventions. By defining these rules, you can improve the relevance of AI responses and ensure that suggestions align with your project setup. > > By default, project rules are automatically added to each chat session, so AI Assistant adheres to the provided guidelines. You can customize this behavior – for example, apply rules only to specific files, invoke them manually, or let the model decide when to use them. - > To configure project rules: > > - Go to Settings | Tools | AI Assistant | Rules. > - Click New Project Rules File and provide a name for the file. 
> > Alternatively, you can manually create the `.aiassistant/rules` folder in the project root and add a `.md` file there. - https://www.jetbrains.com/help/ai-assistant/disable-ai-assistant.html - > Restrict or disable AI Assistant features - https://www.jetbrains.com/help/ai-assistant/disable-ai-assistant.html#restrict-ai-assistant-usage-for-project - > Restrict usage of AI Assistant for a project - > As an alternative to disabling AI Assistant, you can create a file that will restrict the usage of AI features in the project. > > Create an empty file named `.noai` in the root directory of the project. > > When this file is present, all AI Assistant features are fully disabled for the project. Even if this project is opened in another IDE, the AI Assistant features will not be available. - https://www.jetbrains.com/help/ai-assistant/disable-ai-assistant.html#restrict-ai-assistant-usage-in-specific-files-or-folders - > Restrict usage of AI Assistant in specific files or folders - > You can restrict AI Assistant from processing specific files or folders by creating and configuring an `.aiignore` file. - > Files added to the `.aiignore` are not processed by AI Assistant, and AI features are disabled for them. However, in some cases, ignored files may still be processed due to unforeseen issues. If you notice any unexpected behavior, please report it so we can investigate. - > If your project already contains a `.cursorignore`, `.codeiumignore`, or `.aiexclude` file, there is no need to create a separate `.aiignore` file, as these files are also supported. As long as they are located in the root folder of the project, they will be used to restrict AI Assistant's access to the specified files and folders. - https://www.jetbrains.com/help/ai-assistant/mcp.html - > Model Context Protocol (MCP) - > AI Assistant can interact with external tools and data sources via the Model Context Protocol (MCP). 
By connecting to MCP servers, AI Assistant gains access to a range of tools that significantly extend its capabilities. - https://www.jetbrains.com/help/ai-assistant/mcp.html#use_ide_as_an_mcp_server - > Use your IDE as an MCP server - > Starting with version 2025.2, JetBrains IDEs come with an integrated MCP server, allowing external clients such as Claude Desktop, Cursor, VS Code, and others to access tools provided by the IDE. This provides users with the ability to control and interact with JetBrains IDEs without leaving their application of choice. ### JetBrains Junie - See Also: - [JetBrains AI Assistant](#jetbrains-ai-assistant) - https://www.jetbrains.com/junie/ - > Junie - > Your smart coding agent - https://www.jetbrains.com/help/junie/get-started-with-junie.html - https://www.jetbrains.com/help/junie/customize-guidelines.html - > Guidelines - > Guidelines allow you to provide persistent, reusable context to the agent. Junie adds this context to every task it works on. - > Guidelines are stored in the `.junie/guidelines.md` file in the root project directory, so you can version-control them and reuse at the project level. - > You can add and edit the `.junie/guidelines.md` file manually, or prompt Junie to explore the project and generate the `guidelines.md` file for you. - https://www.jetbrains.com/help/junie/aiignore.html - > `.aiignore` - > You can restrict Junie from processing the contents of specific files or folders by creating and configuring an `.aiignore` file in the project root directory. If a file or folder is on the `.aiignore` list, Junie will ask for explicit approval before viewing or editing it. - > Only the contents of files listed in `.aiignore` are protected. Junie will still have access to the file and folder names. - > The `.aiignore` file follows the same [syntax and pattern format](https://git-scm.com/docs/gitignore) as the `.gitignore` file. 
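Since Junie's `.aiignore` uses the same syntax and pattern format as `.gitignore`, a minimal sketch might look like this (the paths listed are purely illustrative):

```gitignore
# Illustrative .aiignore — keep the agent away from secrets and credentials.
# Uses .gitignore pattern syntax; note that only file *contents* are protected,
# file and folder names remain visible to Junie.
.env
*.pem
secrets/
**/credentials*.json
```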
- https://www.jetbrains.com/help/junie/model-context-protocol-mcp.html - > Model Context Protocol (MCP) - > You can connect Junie to Model Context Protocol (MCP) servers. This will extend Junie's capabilities with executable functionality for working with data sources and tools, such as file systems, productivity tools, or databases. - > To connect Junie to an MCP server, use the `mcp.json` file in Junie's settings. - > The MCP servers added via the MCP Settings page are saved to the `~/.junie/mcp.json` file in the home directory. Such servers are available globally for all projects that are opened in the IDE. - > To configure an MCP server at the project level, add an `mcp.json` file manually to the `.junie/mcp` folder in the project root. - https://blog.jetbrains.com/idea/2025/05/coding-guidelines-for-your-ai-agents/#how-to-use-guidelines-with-junie - > Junie is an autonomous AI coding agent from JetBrains. You can specify your coding style, best practices, and general preferences by specifying your desired guidelines in the `.junie/guidelines.md` file so that Junie will follow them while generating code. - > Check out the [junie-guidelines](https://github.com/JetBrains/junie-guidelines) repository, which has a catalog of guidelines for various technologies. - https://github.com/JetBrains/junie-guidelines - > Junie Guidelines - > A catalog of technology-specific guidelines for optimizing Junie code generation. - > You can add all the guidelines for various technologies that you are using into the `.junie/guidelines.md` file and delegate tasks to Junie. Junie will take these guidelines into consideration while generating the code so that you don’t have to add these guidelines in each prompt. - > You can ask Junie to create a `guidelines.md` file that includes the coding conventions being followed in the current codebase so far. - > Once the `guidelines.md` file is generated, you can tweak it to add more guidelines or update the existing ones as needed.
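As a sketch of what a project-level `.junie/mcp/mcp.json` might contain, assuming it follows the common `mcpServers` schema used by MCP clients such as Claude Desktop (the server name and path below are illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```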
### OpenAI Codex - See Also (?): - [OpenAI Codex CLI](#openai-codex-cli) - https://github.com/openai/codex - > Codex can be guided by `AGENTS.md` files placed within your repository. These are text files, akin to `README.md`, where you can inform Codex how to navigate your codebase, which commands to run for testing, and how best to adhere to your project's standard practices. Like human developers, Codex agents perform best when provided with configured dev environments, reliable testing setups, and clear documentation. - https://openai.com/index/introducing-codex/#appendix - > System message - > We are sharing the `codex-1` system message to help developers understand the model’s default behavior and tailor Codex to work effectively in custom workflows. For example, the `codex-1` system message encourages Codex to run all tests mentioned in the `AGENTS.md` file, but if you’re short on time, you can ask Codex to skip these tests. - ```markdown ..snip.. # AGENTS.md spec - Containers often contain AGENTS.md files. These files can appear anywhere in the container's filesystem. Typical locations include `/`, `~`, and in various places inside of Git repos. - These files are a way for humans to give you (the agent) instructions or tips for working within the container. - Some examples might be: coding conventions, info about how code is organized, or instructions for how to run or test code. - AGENTS.md files may provide instructions about PR messages (messages attached to a GitHub Pull Request produced by the agent, describing the PR). These instructions should be respected. - Instructions in AGENTS.md files: - The scope of an AGENTS.md file is the entire directory tree rooted at the folder that contains it. - For every file you touch in the final patch, you must obey instructions in any AGENTS.md file whose scope includes that file. - Instructions about code style, structure, naming, etc. 
apply only to code within the AGENTS.md file's scope, unless the file states otherwise. - More-deeply-nested AGENTS.md files take precedence in the case of conflicting instructions. - Direct system/developer/user instructions (as part of a prompt) take precedence over AGENTS.md instructions. - AGENTS.md files need not live only in Git repos. For example, you may find one in your home directory. - If the AGENTS.md includes programmatic checks to verify your work, you MUST run all of them and make a best effort to validate that the checks pass AFTER all code changes have been made. - This applies even for changes that appear simple, i.e. documentation. You still must run all of the programmatic checks. ..snip.. ``` ### OpenAI Codex CLI - See Also (?): - [OpenAI Codex](#openai-codex) - https://github.com/openai/codex - > Lightweight coding agent that runs in your terminal - https://github.com/openai/codex#memory--project-docs - > Memory & project docs > > You can give Codex extra instructions and guidance using `AGENTS.md` files. Codex looks for `AGENTS.md` files in the following places, and merges them top-down: > > - `~/.codex/AGENTS.md` - personal global guidance > - `AGENTS.md` at repo root - shared project notes > - `AGENTS.md` in the current working directory - sub-folder/feature specifics - https://github.com/openai/codex#example-prompts - > Example prompts > > Below are a few bite-size examples you can copy-paste. Replace the text in quotes with your own task. See the prompting guide for more tips and usage patterns. - https://github.com/openai/codex/tree/main/codex-cli/examples - > Quick start examples > > This directory bundles some self‑contained examples using the Codex CLI. - > If you want to get started using the Codex CLI directly, skip this and refer to the prompting guide. 
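To make the merge order above concrete: an `AGENTS.md` is just plain Markdown, so a repo-root sketch might look like the following (all section names and commands here are hypothetical, not part of any spec):

```markdown
# AGENTS.md — repo root (shared project notes)

## Testing
- Run the full test suite (e.g. a hypothetical `make test`) before finalizing a patch.

## Conventions
- Match the existing code style; do not reformat unrelated files.
- Keep PR messages short: a one-line summary, then a bulleted list of changes.
```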
- https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md - Note: This seems to have been removed - https://github.com/openai/codex/issues/2374 - > Issue #2374: Fix/restore `prompting_guide.md`, or remove link to it in `README.md` - > Prompting guide - https://github.com/openai/codex/blob/main/codex-cli/examples/prompting_guide.md#custom-instructions - > Custom instructions > > Codex supports two types of Markdown-based instruction files that influence model behavior and prompting: > > - `~/.codex/instructions.md` > - Global, user-level custom guidance injected into every session. You should keep this relatively short and concise. These instructions are applied to all Codex runs across all projects and are great for personal defaults, shell setup tips, safety constraints, or preferred tools. > - **Example:** "Before executing shell commands, create and activate a `.codex-venv` Python environment." or "Avoid running pytest until you've completed all your changes." > - `CODEX.md` > - Project-specific instructions loaded from the current directory or Git root. Use this for repo-specific context, file structure, command policies, or project conventions. These are automatically detected unless `--no-project-doc` or `CODEX_DISABLE_PROJECT_DOC=1` is set. > - **Example:** “All React components live in `src/components/`”. ## Prompts - https://prompts.chat/ - > prompts.chat > World's First & Most Famous Prompts Directory - https://prompts.chat/vibe/ - > awesome vibe coding prompts to help you build simple apps - https://github.com/f/awesome-chatgpt-prompts - > This repo includes ChatGPT prompt curation to use ChatGPT and other LLM tools better. - https://copilot-instructions.md/prompts.html - > Godlike Prompts ## Unsorted - TODO: Find and add other examples (e.g. [aider](https://aider.chat/docs/config.html) (`.aider.conf.yml`), [llm](https://github.com/simonw/llm), etc?)
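As one concrete example for the TODO above: aider's config is YAML-based, so a minimal `.aider.conf.yml` sketch might look like the following (option names should be verified against the linked aider docs before use):

```yaml
# Illustrative .aider.conf.yml — verify option names against aider's docs.
model: gpt-4o        # default model to use
auto-commits: false  # don't auto-commit aider's edits
read:                # files always added to the chat as read-only context
  - CONVENTIONS.md
```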
- This (private) ChatGPT convo gave some other suggestions that I need to look into deeper still: https://chatgpt.com/c/680b3bdc-80e8-8008-b05e-86d3e0b627a6 - > `CLAUDE.md`: Used by Claude (Anthropic) as a signal to scan the repo and use this file for context. It’s suggested in their documentation and blog posts. - > `.aider.conf.json` (or `aider.conf.json`): Used by Aider, a GPT-based coding assistant. Can include config such as files to include/exclude, model settings, etc. - > `.aider.chat.md`: Aider can also use this (or similarly named `.aider.md`) to persist chat history or provide persistent context for the assistant. While not always required, it’s sometimes created in the repo as a place to put system instructions or notes for context between runs. - > `.prompt.md`, `PROMPT.md`, or `INSTRUCTIONS.md`: Some AI agents or prompts (especially for open-source wrappers around GPT like smol-ai, Continue, or custom LangChain agents) look for files like these in root for either default instructions or human-readable context. - > `.continue/context.json`: Used by Continue (an open-source AI code agent IDE extension) to provide user preferences or context inclusion rules. - > `prompt.config.json` / `agent.config.json`: Custom LLM wrappers, especially those built with LangChain, Autogen, or AgentScript, sometimes define .config or .prompt.* files in root for behavior tuning. - > `.smol-dev.yaml`: The smol-ai developer tools may use YAML-based configs for defining how the assistant should scaffold or interact with the repo. - > There’s a growing informal convention for files that help tune LLM behavior: > > - `AI.md` or `AI_INSTRUCTIONS.md`: General-purpose file to guide any AI tooling in a repo > - `CONTRIBUTING.md`: While not AI-specific, many LLMs (like Copilot or Claude) are trained to respect these as guidance for changes > - `README.ai.md`: Separate AI-focused readme, e.g. summarizing intent, goals, style guides, etc. 
> - `STYLEGUIDE.md`: Useful for AI tooling that supports code style customization or alignment ## See Also ### My Other Related Deepdive Gists and Projects - https://github.com/0xdevalias - https://gist.github.com/0xdevalias - [AI/ML Toolkit (0xdevalias' gist)](https://gist.github.com/0xdevalias/09a5c27702cb94f81c9fb4b7434df966#aiml-toolkit) - [Model Context Protocol (MCP) Tools (0xdevalias' gist)](https://gist.github.com/0xdevalias/86404c0a472e93109507a483a6cc6065#model-context-protocol-mcp-tools) - [AI Agent Swarm Musings (0xdevalias' gist)](https://gist.github.com/0xdevalias/4ce1ecd18b3a20ea6a9e58b1a2881875#ai-agent-swarm-musings) - [ChatGPT / AI Rental Property Plugins/Agents (0xdevalias' gist)](https://gist.github.com/0xdevalias/18e666bc319b2e08f90e52bb5cb53538#chatgpt--ai-rental-property-pluginsagents)