
Revisions

  1. @pyros-projects revised this gist Jan 15, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion how_to_use_metaprompts.md
    @@ -61,7 +61,7 @@ Once the assistant or you finish, refine the output. Ask questions, request impr

    Don’t expect superhuman capabilities from an LLM without engaging with it. This won’t work. If an LLM could do this already, society as we know it wouldn’t exist anymore. LLMs are still a bit stupid; their boon is their crazy speed, able to generate 100 times the code a day. But it needs you to make sure the code it produces is actually good. You are the architect and orchestrator.

    That’s why one-shot snake game prompts are stupid. No dev I know can one-shot a snake game, and it’s irrelevant to daily project work. I’d rather have an LLM that can’t one-shot snake but has solid reasoning and planning skills—like o1 models.
    That’s why one-shot snake game prompts are stupid. No dev I know can one-shot a snake game, and it’s irrelevant for daily project work. I’d rather have an LLM that can’t one-shot snake but has solid reasoning and planning skills—like o1 models.

    ---

  2. @pyros-projects revised this gist Jan 15, 2025. 1 changed file with 5 additions and 6 deletions.
    11 changes: 5 additions & 6 deletions how_to_use_metaprompts.md
    @@ -90,15 +90,14 @@ You probably don’t know this library:

    It’s the best WebApp/HTMX library for Python and makes Streamlit and Gradio look like drawing by numbers for five-year-olds.

    They’ve even proposed a standard for “LLM-fying” documentation:
    And they offer a dedicated text file you should load into your context when working with the library, which makes the LLM crazy good at using it:
    [https://docs.fastht.ml/llms-ctx.txt](https://docs.fastht.ml/llms-ctx.txt)

    How does that help you? Well, they’ve even proposed a standard for “LLM-fying” documentation:
    [https://llmstxt.org/](https://llmstxt.org/)

    Not every library follows this yet, which is a pity, but it’s easy to create your own for any library.

    Here’s an example for FastHT:
    [https://docs.fastht.ml/llms-ctx.txt](https://docs.fastht.ml/llms-ctx.txt)

    And here’s a directory of libraries that provide LLM context:
    Here’s a directory of libraries that provide LLM context:
    [https://directory.llmstxt.cloud/](https://directory.llmstxt.cloud/)

    If your library isn’t listed, create a meta prompt to generate such a file from the repository. Or, better yet, build an app with the meta prompts this guide is about.
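
    If you’d rather script the file directly instead of meta-prompting it, here is a minimal sketch of such a generator. The docs layout, output file name, and per-file headers are assumptions for illustration, not something the llms.txt standard prescribes.

    ```python
    # Minimal sketch: concatenate a repo's Markdown docs into a single
    # llms-ctx.txt-style context file. Paths and the per-file headers are
    # assumptions, not part of the llms.txt spec itself.
    from pathlib import Path

    def build_llm_context(repo: str, out: str = "llms-ctx.txt") -> None:
        parts = []
        for doc in sorted(Path(repo).rglob("*.md")):
            # Label each file so the LLM can tell the sources apart.
            rel = doc.relative_to(repo)
            parts.append(f"## {rel}\n\n{doc.read_text(encoding='utf-8')}")
        Path(out).write_text("\n\n".join(parts), encoding="utf-8")

    build_llm_context("path/to/your/library")
    ```
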
  3. @pyros-projects revised this gist Jan 15, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion how_to_use_metaprompts.md
    @@ -48,7 +48,7 @@ Your META INSTANCE now includes:

    Paste the coding prompt into this new instance. This instance is called **CODING INSTANCE**.

    Coding prompts include all the context needed to solve them. Coding itself devours context for irrelevant information, so the prompts are designed to be self-contained.
    Coding prompts include all the context needed to solve them. We do it in a separate instance because coding itself devours context for irrelevant information, so the prompts are designed to be self-contained. If you are using Gemini you are probably fine using a single instance, but even Gemini's 2-million-token context degrades pretty quickly.

    In theory, you could create a new CODING INSTANCE for every coding prompt. But let’s be real—having 2,318,476 chats open is a recipe for insanity.
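
    As an aside, the two-instance split is easy to picture as code. Below is a minimal sketch, assuming the OpenAI Python client; the model name is a placeholder, not a recommendation from this guide. The point is just that the META INSTANCE keeps one growing history, while every coding prompt starts a fresh, self-contained chat.

    ```python
    # Minimal sketch of the META/CODING instance split. Assumes the OpenAI
    # Python client; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "o1"  # use the best model you have access to

    # META INSTANCE: one persistent history that accumulates prompt #01,
    # the technical plan, prompt #02, and every generated coding/review prompt.
    meta_history = []

    def ask_meta(message: str) -> str:
        meta_history.append({"role": "user", "content": message})
        reply = client.chat.completions.create(model=MODEL, messages=meta_history)
        answer = reply.choices[0].message.content
        meta_history.append({"role": "assistant", "content": answer})
        return answer

    def run_coding_prompt(coding_prompt: str) -> str:
        # CODING INSTANCE: a fresh history per prompt. The prompt is
        # self-contained, so no earlier chatter needs to come along.
        messages = [{"role": "user", "content": coding_prompt}]
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        return reply.choices[0].message.content
    ```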

  4. @pyros-projects revised this gist Jan 15, 2025. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions how_to_use_metaprompts.md
    @@ -25,6 +25,7 @@ We call this instance of your bot **META INSTANCE**.
    [https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md)

    **Read it and fix errors by talking to your META INSTANCE.**
    (For example: missing components, missing tasks, or you don't like the emoji for your app.)

    ---

  5. @pyros-projects revised this gist Jan 15, 2025. 1 changed file with 57 additions and 52 deletions.
    109 changes: 57 additions & 52 deletions how_to_use_metaprompts.md
    @@ -1,99 +1,104 @@

    # How to

    Default meta prompt collection: https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9
    Default meta prompt collection: [https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9)

    Meta prompt collection with creating summaries and context sync (use them when using Cline or other coding assistants): [https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf](https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf)

    meta prompt collection with creating summaries and context sync (use them when using Cline or other coding Assistants): https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf
    ---

    ## Create a plan

    #### 1 Copy 01_planning and replace **user input** with your app idea, project spec or whatever
    ### 1. Copy `01_planning` and replace **user input** with your app idea, project spec, or whatever.

    https://imgur.com/a/4zSpwkT
    Example: [https://imgur.com/a/4zSpwkT](https://imgur.com/a/4zSpwkT)

    #### 2 Put the whole prompt into your LLM
    ### 2. Put the whole prompt into your LLM.

    Use the best LLM you have access too. For serious work o1 pro the best one, and it isn't even close. Followed by Claude and Gemini 2.0 Reasoning
    Use the best LLM you have access to. For serious work, **o1 Pro** is the best option, and it isn’t even close. Followed by Claude and Gemini 2.0 Reasoning.

    If you don't mind to work for it every other LLM like a locally run Qwen-Coder2.5 also works
    If you don’t mind putting in the effort, every other LLM like a locally run **Qwen-Coder2.5** also works.

    We call this instance of your bot **META INSTANCE**
    We call this instance of your bot **META INSTANCE**.

    Potential result
    **Potential result:**
    [https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md)

    https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md
    **Read it and fix errors by talking to your META INSTANCE.**

    READ IT AND FIX ERRORS BY TALKING TO YOUR META INSTANCE
    ---

    #### 3 Put 02_prompt_chain.md into the same chat (META INSTANCE)
    ### 3. Add `02_prompt_chain.md` to the same chat (META INSTANCE).

    This prompt will generate on basis of the technical plan the very first coding prompt and a review prompt to evaluate the results of such a coding prompt
    This prompt generates, based on the technical plan, the first coding prompt and a review prompt to evaluate the results.

    https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md
    [https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md)

    So your Meta instance looks like
    Your META INSTANCE now includes:

    - prompt #01
    - technical plan
    - prompt #02
    - coding prompt/review prompt
    - Prompt #01
    - Technical plan
    - Prompt #02
    - Coding prompt/review prompt

    NOW YOU OPEN A COMMPLETELY FRESH INSTANCE OF THE LLM OF YOUR CHOICE (or open up your coding ass. like cline)
    ---

    #### 4 Put the coding prompt into the new instance
    ### 4. Open a completely fresh instance of the LLM of your choice (or open up your coding assistant like Cline).

    This instance is called CODING INSTANCE
    Paste the coding prompt into this new instance. This instance is called **CODING INSTANCE**.

    Coding prompts are written in a way they include all the context they need to be able to be solved. That's because coding itself is eating context like nobody's business for information that in the end has zero relevance.
    Coding prompts include all the context needed to solve them. Coding itself devours context for irrelevant information, so the prompts are designed to be self-contained.

    In theory you can even create a completely new coding instance for every future coding prompt, but having 2318476 chats open will drive you insane.
    In theory, you could create a new CODING INSTANCE for every coding prompt. But let’s be real—having 2,318,476 chats open is a recipe for insanity.

    If you pasted it into a normal LLM you will get something like this - a step by step plan of what you should do (which you will do as written)
    https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-04_coding_prompt_potential_result-md
    If you use a normal LLM, you’ll get back a step-by-step plan to follow, something like this:
    [https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-04_coding_prompt_potential_result-md](https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-04_coding_prompt_potential_result-md)

    If you pasted it into cline or similar your assistant will start working
    If you’re using Cline or similar, your assistant will just start working.

    after you or the assistant is done you do some refinement with your bot. ask it question, ask it for improvements and ask it for clarification. Don't be all "the model sucks because it got a npm command wrong", because shit like this happens daily in any human based project. Never in the history of IT there was once a project done without any issues or bugs, or budget shenanigans, but most projects get done anyway because people communicate. Projects in which nobody talks with each other are those that fail. So don't go out claiming LLM can't do projects like humans, while expecting superhuman capabilities from it by never talking to it. This won't work.
    Once the assistant or you finish, refine the output. Ask questions, request improvements, and seek clarification. Don’t complain, “the model sucks because it got an npm command wrong.” Mistakes happen in every human project, too. Projects succeed because people communicate; they fail when they don’t.

    That's why I think one-shot snake game prompts are stupid. No dev I know can one shot a snake game, and it's also totally irrelevant for daily project business. I rather have a LLM that can't one shot snake, but instead has a better grasp of reasoning and planning... like the o1 models
    Don’t expect superhuman capabilities from an LLM without engaging with it. This won’t work. If an LLM could do this already, society as we know it wouldn’t exist anymore. LLMs are still a bit stupid; their boon is their crazy speed, able to generate 100 times the code a day. But it needs you to make sure the code it produces is actually good. You are the architect and orchestrator.

    After the acceptance criteria defined in the prompt are fulfilled you can do the review prompt, or if it's just a small project or proof of concept skip it.
    That’s why one-shot snake game prompts are stupid. No dev I know can one-shot a snake game, and it’s irrelevant to daily project work. I’d rather have an LLM that can’t one-shot snake but has solid reasoning and planning skills—like o1 models.

    If you are using cline you generate a summary of what cline did with the summary generation prompt.
    ---

    ### GO BACK TO THE META INSTANCE
    ### 5. Review, refine, and repeat.

    and write "I'm done with this. please generate the next coding and review prompt" . if you have a summary, add it. if you have a review add it aswell. done.
    When the acceptance criteria in the prompt are met:

    The answer you put into the Coding Instance again.
    1. Generate a summary (if using Cline or similar).
    2. Go back to the META INSTANCE and say:
    "I’m done with this. Please generate the next coding and review prompt."
    Include summaries or reviews, if available.

    This you repeat until all tasks of the technical plan are worked through.
    Paste the next prompt into your CODING INSTANCE. Repeat this process until all tasks in the technical plan are done.

    Congratulations you have done a real project with
    **Congratulations, you’ve completed a real project!**

    ---

    ## FAQ

    #### My llm is outdated when working with library XYZ. What to do?

    No worries. It sucks, but is solvable.
    You probably won't know this library

    https://docs.fastht.ml/

    Which you should, because it's the best WebApp/HTMX library for Python, and makes streamlit and gradio look like drwaing by numbers for five year olds.
    ### My LLM is outdated when working with library XYZ. What should I do?

    They went ahead and proposed a standard of "llm-fying" your documentation for every other library maker out there to follow suit.
    No worries. It sucks, but it’s solvable.
    You probably don’t know this library:
    [https://docs.fastht.ml/](https://docs.fastht.ml/)

    https://llmstxt.org/
    It’s the best WebApp/HTMX library for Python and makes Streamlit and Gradio look like drawing by numbers for five-year-olds.

    Not everyone does this yet, which is a pitty, but it isn't hard to create it yourself for any library there is.
    They’ve even proposed a standard for “LLM-fying” documentation:
    [https://llmstxt.org/](https://llmstxt.org/)

    That's how it looks like for fastHtml: https://docs.fastht.ml/llms-ctx.txt
    Not every library follows this yet, which is a pity, but it’s easy to create your own for any library.

    and here is a list of other libraries providing a llm context: https://directory.llmstxt.cloud/
    Here’s an example for FastHT:
    [https://docs.fastht.ml/llms-ctx.txt](https://docs.fastht.ml/llms-ctx.txt)

    so perhaps you are lucky at it already exists for your library, but if not:
    And here’s a directory of libraries that provide LLM context:
    [https://directory.llmstxt.cloud/](https://directory.llmstxt.cloud/)

    Create a meta prompt which creates such a file out of a repository!
    (Or create an app with the meta prompts this guide is about)
    If your library isn’t listed, create a meta prompt to generate such a file from the repository. Or, better yet, build an app with the meta prompts this guide is about.
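
    Stepping back from the FAQ: the loop this revision describes (steps 3 through 5) can be sketched the same way, reusing the hypothetical `ask_meta` and `run_coding_prompt` helpers from the earlier sketch. The completion sentinel is an assumption; in practice you judge doneness against the technical plan yourself.

    ```python
    # Hypothetical driver for the meta-prompt loop, reusing ask_meta() and
    # run_coding_prompt() from the sketch above. In practice you review every
    # result by hand; the sentinel below only stands in for that judgment.
    coding_prompt = ask_meta(open("02_prompt_chain.md").read())

    while True:
        result = run_coding_prompt(coding_prompt)  # or hand it to Cline instead
        print(result)
        summary = input("Paste your summary/review (leave empty if none): ")
        reply = ask_meta(
            "I'm done with this. Please generate the next coding and review prompt.\n"
            + summary
        )
        if "ALL TASKS COMPLETE" in reply:  # assumed sentinel, not a real convention
            break
        coding_prompt = reply
    ```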

  6. @pyros-projects revised this gist Jan 15, 2025. 1 changed file with 74 additions and 3 deletions.
    77 changes: 74 additions & 3 deletions how_to_use_metaprompts.md
    @@ -6,11 +6,11 @@ meta prompt collection with creating summaries and context sync (use them when u

    ## Create a plan

    #### Copy 01_planning and replace **user input** with your app idea, project spec or whatever
    #### 1 Copy 01_planning and replace **user input** with your app idea, project spec or whatever

    https://imgur.com/a/4zSpwkT

    #### Put the whole prompt into your LLM
    #### 2 Put the whole prompt into your LLM

    Use the best LLM you have access too. For serious work o1 pro the best one, and it isn't even close. Followed by Claude and Gemini 2.0 Reasoning

    @@ -24,5 +24,76 @@ https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_

    READ IT AND FIX ERRORS BY TALKING TO YOUR META INSTANCE

    #### 3 Put 02_prompt_chain.md into the same chat (META INSTANCE)

    This prompt will generate on basis of the technical plan the very first coding prompt and a review prompt to evaluate the results of such a coding prompt

    https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-02_prompt_chain_potential_output-md

    So your Meta instance looks like

    - prompt #01
    - technical plan
    - prompt #02
    - coding prompt/review prompt

    NOW YOU OPEN A COMMPLETELY FRESH INSTANCE OF THE LLM OF YOUR CHOICE (or open up your coding ass. like cline)

    #### 4 Put the coding prompt into the new instance

    This instance is called CODING INSTANCE

    Coding prompts are written in a way they include all the context they need to be able to be solved. That's because coding itself is eating context like nobody's business for information that in the end has zero relevance.

    In theory you can even create a completely new coding instance for every future coding prompt, but having 2318476 chats open will drive you insane.

    If you pasted it into a normal LLM you will get something like this - a step by step plan of what you should do (which you will do as written)
    https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-04_coding_prompt_potential_result-md

    If you pasted it into cline or similar your assistant will start working

    after you or the assistant is done you do some refinement with your bot. ask it question, ask it for improvements and ask it for clarification. Don't be all "the model sucks because it got a npm command wrong", because shit like this happens daily in any human based project. Never in the history of IT there was once a project done without any issues or bugs, or budget shenanigans, but most projects get done anyway because people communicate. Projects in which nobody talks with each other are those that fail. So don't go out claiming LLM can't do projects like humans, while expecting superhuman capabilities from it by never talking to it. This won't work.

    That's why I think one-shot snake game prompts are stupid. No dev I know can one shot a snake game, and it's also totally irrelevant for daily project business. I rather have a LLM that can't one shot snake, but instead has a better grasp of reasoning and planning... like the o1 models

    After the acceptance criteria defined in the prompt are fulfilled you can do the review prompt, or if it's just a small project or proof of concept skip it.

    If you are using cline you generate a summary of what cline did with the summary generation prompt.

    ### GO BACK TO THE META INSTANCE

    and write "I'm done with this. please generate the next coding and review prompt" . if you have a summary, add it. if you have a review add it aswell. done.

    The answer you put into the Coding Instance again.

    This you repeat until all tasks of the technical plan are worked through.

    Congratulations you have done a real project with


    ## FAQ

    #### My llm is outdated when working with library XYZ. What to do?

    No worries. It sucks, but is solvable.
    You probably won't know this library

    https://docs.fastht.ml/

    Which you should, because it's the best WebApp/HTMX library for Python, and makes streamlit and gradio look like drwaing by numbers for five year olds.

    They went ahead and proposed a standard of "llm-fying" your documentation for every other library maker out there to follow suit.

    https://llmstxt.org/

    Not everyone does this yet, which is a pitty, but it isn't hard to create it yourself for any library there is.

    That's how it looks like for fastHtml: https://docs.fastht.ml/llms-ctx.txt

    and here is a list of other libraries providing a llm context: https://directory.llmstxt.cloud/

    so perhaps you are lucky at it already exists for your library, but if not:

    Create a meta prompt which creates such a file out of a repository!
    (Or create an app with the meta prompts this guide is about)

    ## FAQ
  7. @pyros-projects created this gist Jan 15, 2025.
    28 changes: 28 additions & 0 deletions how_to_use_metaprompts.md
    @@ -0,0 +1,28 @@
    # How to

    Default meta prompt collection: https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9

    meta prompt collection with creating summaries and context sync (use them when using Cline or other coding Assistants): https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf

    ## Create a plan

    #### Copy 01_planning and replace **user input** with your app idea, project spec or whatever

    https://imgur.com/a/4zSpwkT

    #### Put the whole prompt into your LLM

    Use the best LLM you have access too. For serious work o1 pro the best one, and it isn't even close. Followed by Claude and Gemini 2.0 Reasoning

    If you don't mind to work for it every other LLM like a locally run Qwen-Coder2.5 also works

    We call this instance of your bot **META INSTANCE**

    Potential result

    https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9#file-01_planning_output-md

    READ IT AND FIX ERRORS BY TALKING TO YOUR META INSTANCE


    ## FAQ