Revisions

  1. @disler disler revised this gist Jul 14, 2024. No changes.
  2. @disler disler revised this gist Jul 14, 2024. No changes.
  3. @disler disler revised this gist Jul 14, 2024. No changes.
  4. @disler disler revised this gist Jul 14, 2024. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -23,8 +23,8 @@
    - `chain.py: FusionChain` - Sequential prompt multi-chaining in one method with context, output back-referencing and output fusion & ranking.

    ## Watch
    [When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet](https://youtu.be/UOcYsrnSNok)
    [Fusion Chain: NEED the BEST Prompt Results at ANY COST? Watch this…](https://youtu.be/iww1O8WngUU)
    - [When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet](https://youtu.be/UOcYsrnSNok)
    - [Fusion Chain: NEED the BEST Prompt Results at ANY COST? Watch this…](https://youtu.be/iww1O8WngUU)

    ## Four guiding questions for When to Prompt Chain

  5. @disler disler revised this gist Jul 14, 2024. 5 changed files with 291 additions and 7 deletions.
    9 changes: 7 additions & 2 deletions README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -1,8 +1,8 @@
    # Minimal Prompt Chainable
    # Minimal Prompt Chainables
    > Sequential prompt chaining in one method with context and output back-referencing.
    ## Files
    - `main.py` - start here - full example using `MinimalChainable` from `chain.py` to build a sequential prompt chian
    - `main.py` - start here - full example using `MinimalChainable` from `chain.py` to build a sequential prompt chain
    - `chain.py` - contains zero library minimal prompt chain class
    - `chain_test.py` - tests for `chain.py`, you can ignore this
    - `requirements.txt` - python requirements
    @@ -18,8 +18,13 @@
    ## Test
    - `pytest chain_test.py`

    ## Chains
    - `chain.py: MinimalChainable` - Sequential prompt chaining in one method with context and output back-referencing.
    - `chain.py: FusionChain` - Sequential prompt multi-chaining in one method with context, output back-referencing and output fusion & ranking.

    ## Watch
    [When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet](https://youtu.be/UOcYsrnSNok)
    [Fusion Chain: NEED the BEST Prompt Results at ANY COST? Watch this…](https://youtu.be/iww1O8WngUU)

    ## Four guiding questions for When to Prompt Chain

    127 changes: 127 additions & 0 deletions chain.py
    @@ -1,6 +1,133 @@
    import json
    import re
    from typing import List, Dict, Callable, Any, Tuple, Union
    from pydantic import BaseModel
    import concurrent.futures


    class FusionChainResult(BaseModel):
    top_response: Union[str, Dict[str, Any]]
    all_prompt_responses: List[List[Any]]
    all_context_filled_prompts: List[List[str]]
    performance_scores: List[float]
    model_names: List[str]


    class FusionChain:

    @staticmethod
    def run(
    context: Dict[str, Any],
    models: List[Any],
    callable: Callable,
    prompts: List[str],
    evaluator: Callable[[List[str]], Tuple[Any, List[float]]],
    get_model_name: Callable[[Any], str],
    ) -> FusionChainResult:
    """
    Run a competition between models on a list of prompts.
    Runs the MinimalChainable.run method for each model for each prompt and evaluates the results.
    The evaluator runs on the last output of each model at the end of the chain of prompts.
    The eval method returns a performance score for each model from 0 to 1, giving priority to models earlier in the list.
    Args:
    context (Dict[str, Any]): The context for the prompts.
    models (List[Any]): List of models to compete.
    callable (Callable): The function to call for each prompt.
    prompts (List[str]): List of prompts to process.
    evaluator (Callable[[List[str]], Tuple[Any, List[float]]]): Function to evaluate model outputs, returning the top response and the scores.
    get_model_name (Callable[[Any], str]): Function to get the name of a model. Defaults to str(model).
    Returns:
    FusionChainResult: A FusionChainResult object containing the top response, all outputs, all context-filled prompts, performance scores, and model names.
    """
    all_outputs = []
    all_context_filled_prompts = []

    for model in models:
    outputs, context_filled_prompts = MinimalChainable.run(
    context, model, callable, prompts
    )
    all_outputs.append(outputs)
    all_context_filled_prompts.append(context_filled_prompts)

    # Evaluate the last output of each model
    last_outputs = [outputs[-1] for outputs in all_outputs]
    top_response, performance_scores = evaluator(last_outputs)

    model_names = [get_model_name(model) for model in models]

    return FusionChainResult(
    top_response=top_response,
    all_prompt_responses=all_outputs,
    all_context_filled_prompts=all_context_filled_prompts,
    performance_scores=performance_scores,
    model_names=model_names,
    )

    @staticmethod
    def run_parallel(
    context: Dict[str, Any],
    models: List[Any],
    callable: Callable,
    prompts: List[str],
    evaluator: Callable[[List[str]], Tuple[Any, List[float]]],
    get_model_name: Callable[[Any], str],
    num_workers: int = 4,
    ) -> FusionChainResult:
    """
    Run a competition between models on a list of prompts in parallel.
    This method is similar to the 'run' method but utilizes parallel processing
    to improve performance when dealing with multiple models.
    Args:
    context (Dict[str, Any]): The context for the prompts.
    models (List[Any]): List of models to compete.
    callable (Callable): The function to call for each prompt.
    prompts (List[str]): List of prompts to process.
    evaluator (Callable[[List[str]], Tuple[Any, List[float]]]): Function to evaluate model outputs, returning the top response and the scores.
    num_workers (int): Number of parallel workers to use. Defaults to 4.
    get_model_name (Callable[[Any], str]): Function to get the name of a model. Defaults to str(model).
    Returns:
    FusionChainResult: A FusionChainResult object containing the top response, all outputs, all context-filled prompts, performance scores, and model names.
    """

    def process_model(model):
    outputs, context_filled_prompts = MinimalChainable.run(
    context, model, callable, prompts
    )
    return outputs, context_filled_prompts

    all_outputs = []
    all_context_filled_prompts = []

    with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
    future_to_model = {
    executor.submit(process_model, model): model for model in models
    }
    for future in concurrent.futures.as_completed(future_to_model):
    outputs, context_filled_prompts = future.result()
    all_outputs.append(outputs)
    all_context_filled_prompts.append(context_filled_prompts)

    # Evaluate the last output of each model
    last_outputs = [outputs[-1] for outputs in all_outputs]
    top_response, performance_scores = evaluator(last_outputs)

    model_names = [get_model_name(model) for model in models]

    return FusionChainResult(
    top_response=top_response,
    all_prompt_responses=all_outputs,
    all_context_filled_prompts=all_context_filled_prompts,
    performance_scores=performance_scores,
    model_names=model_names,
    )


    class MinimalChainable:
    100 changes: 99 additions & 1 deletion chain_test.py
    @@ -1,4 +1,5 @@
    from chain import MinimalChainable
    import random
    from chain import FusionChain, FusionChainResult, MinimalChainable


    def test_chainable_solo():
    @@ -188,3 +189,100 @@ def mock_callable_prompt(model, prompt):
    assert len(result) == 1
    assert isinstance(result[0], dict)
    assert result[0] == {"key": "value", "number": 42, "nested": {"inner": "content"}}


    def test_fusion_chain_run():
    # Mock models
    class MockModel:
    def __init__(self, name):
    self.name = name

    # Mock callable function
    def mock_callable_prompt(model, prompt):
    return f"{model.name} response: {prompt}"

    # Mock evaluator function (random scores between 0 and 1)
    def mock_evaluator(outputs):
    top_response = random.choice(outputs)
    scores = [random.random() for _ in outputs]
    return top_response, scores

    # Test context and chains
    context = {"var1": "Hello", "var2": "World"}
    chains = ["First prompt: {{var1}}", "Second prompt: {{var2}} and {{output[-1]}}"]

    # Create mock models
    models = [MockModel(f"Model{i}") for i in range(3)]

    # Mock get_model_name function
    def mock_get_model_name(model):
    return model.name

    # Run the FusionChain
    result = FusionChain.run(
    context=context,
    models=models,
    callable=mock_callable_prompt,
    prompts=chains,
    evaluator=mock_evaluator,
    get_model_name=mock_get_model_name,
    )

    # Assert the results
    assert isinstance(result, FusionChainResult)
    assert len(result.all_prompt_responses) == 3
    assert len(result.all_context_filled_prompts) == 3
    assert len(result.performance_scores) == 3
    assert len(result.model_names) == 3

    for i, (outputs, context_filled_prompts) in enumerate(
    zip(result.all_prompt_responses, result.all_context_filled_prompts)
    ):
    assert len(outputs) == 2
    assert len(context_filled_prompts) == 2

    assert outputs[0] == f"Model{i} response: First prompt: Hello"
    assert (
    outputs[1]
    == f"Model{i} response: Second prompt: World and Model{i} response: First prompt: Hello"
    )

    assert context_filled_prompts[0] == "First prompt: Hello"
    assert (
    context_filled_prompts[1]
    == f"Second prompt: World and Model{i} response: First prompt: Hello"
    )

    # Check that performance scores are between 0 and 1
    assert all(0 <= score <= 1 for score in result.performance_scores)

    # Check that the number of unique scores is likely more than 1 (random function)
    assert (
    len(set(result.performance_scores)) > 1
    ), "All performance scores are the same, which is unlikely with a random evaluator"

    # Check that top_response is present and is either a string or a dict
    assert isinstance(result.top_response, (str, dict))

    # Print the output of FusionChain.run
    print("All outputs:")
    for i, outputs in enumerate(result.all_prompt_responses):
    print(f"Model {i}:")
    for j, output in enumerate(outputs):
    print(f" Chain {j}: {output}")

    print("\nAll context filled prompts:")
    for i, prompts in enumerate(result.all_context_filled_prompts):
    print(f"Model {i}:")
    for j, prompt in enumerate(prompts):
    print(f" Chain {j}: {prompt}")

    print("\nPerformance scores:")
    for i, score in enumerate(result.performance_scores):
    print(f"Model {i}: {score}")

    print("\nTop response:")
    print(result.top_response)

    print("result.model_dump: ", result.model_dump())
    print("result.model_dump_json: ", result.model_dump_json())
    59 changes: 56 additions & 3 deletions main.py
    @@ -1,8 +1,9 @@
    import os
    from typing import List, Dict, Union
    from dotenv import load_dotenv
    from chain import MinimalChainable
    from chain import MinimalChainable, FusionChain
    import llm
    import json


    def build_models():
    @@ -13,7 +14,14 @@ def build_models():
    sonnet_3_5_model: llm.Model = llm.get_model("claude-3.5-sonnet")
    sonnet_3_5_model.key = ANTHROPIC_API_KEY

    return sonnet_3_5_model
    # Add more models here for FusionChain
    sonnet_3_model: llm.Model = llm.get_model("claude-3-sonnet")
    sonnet_3_model.key = ANTHROPIC_API_KEY

    haiku_3_model: llm.Model = llm.get_model("claude-3-haiku")
    haiku_3_model.key = ANTHROPIC_API_KEY

    return [sonnet_3_5_model, sonnet_3_model, haiku_3_model]


    def prompt(model: llm.Model, prompt: str):
    @@ -26,7 +34,7 @@ def prompt(model: llm.Model, prompt: str):

    def prompt_chainable_poc():

    sonnet_3_5_model = build_models()
    sonnet_3_5_model, _, _ = build_models()

    result, context_filled_prompts = MinimalChainable.run(
    context={"topic": "AI Agents"},
    @@ -59,10 +67,55 @@ def prompt_chainable_poc():
    pass


    def fusion_chain_poc():
    sonnet_3_5_model, sonnet_3_model, haiku_3_model = build_models()

    def evaluator(outputs: List[str]) -> tuple[str, List[float]]:
    # Simple evaluator that chooses the longest output as the top response
    scores = [len(output) for output in outputs]
    max_score = max(scores)
    normalized_scores = [score / max_score for score in scores]
    top_response = outputs[scores.index(max_score)]
    return top_response, normalized_scores

    result = FusionChain.run(
    context={"topic": "AI Agents"},
    models=[sonnet_3_5_model, sonnet_3_model, haiku_3_model],
    callable=prompt,
    prompts=[
    # prompt #1
    "Generate one blog post title about: {{topic}}. Respond in strictly in JSON in this format: {'title': '<title>'}",
    # prompt #2
    "Generate one hook for the blog post title: {{output[-1].title}}",
    # prompt #3
    """Based on the BLOG_TITLE and BLOG_HOOK, generate the first paragraph of the blog post.
    BLOG_TITLE:
    {{output[-2].title}}
    BLOG_HOOK:
    {{output[-1]}}""",
    ],
    evaluator=evaluator,
    get_model_name=lambda model: model.model_id,
    )

    result_dump = result.dict()

    print("\n\n📊 FusionChain Results~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
    print(json.dumps(result_dump, indent=4))

    # Write the result to a JSON file
    with open("poc_fusion_chain_result.json", "w") as json_file:
    json.dump(result_dump, json_file, indent=4)


    def main():

    prompt_chainable_poc()

    # fusion_chain_poc()


    if __name__ == "__main__":
    main()
    3 changes: 2 additions & 1 deletion requirements.txt
    @@ -1,4 +1,5 @@
    llm
    python-dotenv
    llm-claude-3
    pytest
    pytest
    pydantic
  6. @disler disler revised this gist Jun 23, 2024. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -52,5 +52,7 @@

    ### Solution?
    > STAY CLOSE TO THE METAL (the prompt)
    >
    > The prompt is EVERYTHING; don't let any library hide or abstract it from you unless you KNOW what's happening under the hood.
    >
    > Build minimal abstractions with raw code that do ONE thing well. Outside of data collection, it's unlikely you NEED a library for the prompts, prompt chains, and AI Agents of your tools and products.
  7. @disler disler revised this gist Jun 23, 2024. 1 changed file with 35 additions and 1 deletion.
    36 changes: 35 additions & 1 deletion README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -19,4 +19,38 @@
    - `pytest chain_test.py`

    ## Watch
    [When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet](https://youtu.be/UOcYsrnSNok)
    [When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet](https://youtu.be/UOcYsrnSNok)

    ## Four guiding questions for When to Prompt Chain

    ### 1. Your tasks are too complex for a single prompt
    - Am I asking my LLM to accomplish 2 or more tasks that are distantly related in one prompt? If Yes → Build a prompt chain.
    - *Why: Prompt chaining can help break down complex tasks into manageable chunks, improving clarity and accuracy.*

    ### 2. Increase prompt performance and reduce errors
    - Do I want to increase prompt performance to the max and reduce errors to the minimum? If Yes → Build a prompt chain.
    - *Why: Prompt chaining allows you to guide the LLM's reasoning process, reducing the likelihood of irrelevant or nonsensical responses.*

    ### 3. Use output of previous prompt as input
    - Do I need the output of a previous prompt to be the input (variable) of this prompt? If Yes → Build a prompt chain (see the sketch after the summary below).
    - *Why: Prompt chaining is essential for tasks where subsequent prompts need to use information generated in previous steps.*

    ### 4. Adaptive workflow based on prompt flow
    - Do I need an adaptive workflow that changes based on the flow of the prompt? If Yes → Build a prompt chain.
    - *Why: Prompt chaining allows you to interject and respond to the flow of your prompts as they evolve.*

    ### Summary
    Build a prompt chain when:
    1. You find yourself solving two or more tasks in a single prompt.
    2. You need maximum error reduction and increased output quality.
    3. You have subsequent prompts that rely on the output of previous prompts.
    4. You need to take different actions based on evolving steps.
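
    To make questions 3 and 4 concrete, here is a minimal sketch (not part of the gist's files) that chains two prompts with `MinimalChainable` from `chain.py`; the stub callable below is an invented placeholder standing in for a real LLM call so the chain runs offline:

    ```python
    # Minimal sketch: output back-referencing with MinimalChainable.
    # `stub_prompt` is a hypothetical stand-in for a real model call.
    from chain import MinimalChainable


    def stub_prompt(model, prompt: str) -> str:
        # Echo the context-filled prompt back, where a real LLM response would arrive.
        return f"LLM says: {prompt}"


    outputs, filled_prompts = MinimalChainable.run(
        context={"topic": "AI Agents"},
        model=None,  # any object your callable knows how to use
        callable=stub_prompt,
        prompts=[
            "Name one use case for: {{topic}}",
            # {{output[-1]}} back-references the previous output, so this
            # prompt consumes the result of the prompt before it.
            "Write a one-line pitch for: {{output[-1]}}",
        ],
    )

    print(outputs[1])  # the second response contains the first chain's output
    ```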

    ## Problems with LLM libraries (Langchain, Autogen, others)
    - Unnecessary abstractions & premature abstractions!
    - Easy to start - hard to finish!
    - Rough Docs - rough debugging!

    ### Solution?
    > STAY CLOSE TO THE METAL (the prompt)
    > The prompt is EVERYTHING; don't let any library hide or abstract it from you unless you KNOW what's happening under the hood.
    > Build minimal abstractions with raw code that do ONE thing well. Outside of data collection, it's unlikely you NEED a library for the prompts, prompt chains, and AI Agents of your tools and products.
  8. @disler disler revised this gist Jun 23, 2024. 1 changed file with 4 additions and 4 deletions.
    8 changes: 4 additions & 4 deletions README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -2,10 +2,10 @@
    > Sequential prompt chaining in one method with context and output back-referencing.
    ## Files
    - main.py - start here - full example using chain.py to build a sequential prompt chian
    - chain.py - contains zero library minimal prompt chain class
    - chain_test.py - tests for chain.py, you can ignore this
    - requirements.txt - python requirements
    - `main.py` - start here - full example using `MinimalChainable` from `chain.py` to build a sequential prompt chian
    - `chain.py` - contains zero library minimal prompt chain class
    - `chain_test.py` - tests for `chain.py`, you can ignore this
    - `requirements.txt` - python requirements

    ## Setup
    - Create `.env` with `ANTHROPIC_API_KEY=`
  9. @disler disler revised this gist Jun 23, 2024. 1 changed file with 6 additions and 0 deletions.
    6 changes: 6 additions & 0 deletions README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -1,6 +1,12 @@
    # Minimal Prompt Chainable
    > Sequential prompt chaining in one method with context and output back-referencing.
    ## Files
    - main.py - start here - full example using chain.py to build a sequential prompt chian
    - chain.py - contains zero library minimal prompt chain class
    - chain_test.py - tests for chain.py, you can ignore this
    - requirements.txt - python requirements

    ## Setup
    - Create `.env` with `ANTHROPIC_API_KEY=`
    - (or run `export ANTHROPIC_API_KEY=` in your shell)
  10. @disler disler created this gist Jun 23, 2024.
    16 changes: 16 additions & 0 deletions README_MINIMAL_PROMPT_CHAINABLE.md
    @@ -0,0 +1,16 @@
    # Minimal Prompt Chainable
    > Sequential prompt chaining in one method with context and output back-referencing.
    ## Setup
    - Create `.env` with `ANTHROPIC_API_KEY=`
    - (or run `export ANTHROPIC_API_KEY=` in your shell)
    - `python -m venv venv`
    - `source venv/bin/activate`
    - `pip install -r requirements.txt`
    - `python main.py`

    ## Test
    - `pytest chain_test.py`

    ## Watch
    [When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet](https://youtu.be/UOcYsrnSNok)
    108 changes: 108 additions & 0 deletions chain.py
    @@ -0,0 +1,108 @@
    import json
    import re
    from typing import List, Dict, Callable, Any, Union


    class MinimalChainable:
    """
    Sequential prompt chaining with context and output back-references.
    """

    @staticmethod
    def run(
    context: Dict[str, Any], model: Any, callable: Callable, prompts: List[str]
    ) -> List[Any]:
    # Initialize an empty list to store the outputs
    output = []
    context_filled_prompts = []

    # Iterate over each prompt with its index
    for i, prompt in enumerate(prompts):
    # Iterate over each key-value pair in the context
    for key, value in context.items():
    # Check if the key is in the prompt
    if "{{" + key + "}}" in prompt:
    # Replace the key with its value
    prompt = prompt.replace("{{" + key + "}}", str(value))

    # Replace references to previous outputs
    # Iterate from the current index down to 1
    for j in range(i, 0, -1):
    # Get the previous output
    previous_output = output[i - j]

    # Handle JSON (dict) output references
    # Check if the previous output is a dictionary
    if isinstance(previous_output, dict):
    # Check if the reference is in the prompt
    if f"{{{{output[-{j}]}}}}" in prompt:
    # Replace the reference with the JSON string
    prompt = prompt.replace(
    f"{{{{output[-{j}]}}}}", json.dumps(previous_output)
    )
    # Iterate over each key-value pair in the previous output
    for key, value in previous_output.items():
    # Check if the key reference is in the prompt
    if f"{{{{output[-{j}].{key}}}}}" in prompt:
    # Replace the key reference with its value
    prompt = prompt.replace(
    f"{{{{output[-{j}].{key}}}}}", str(value)
    )
    # If not a dict, use the original string
    else:
    # Check if the reference is in the prompt
    if f"{{{{output[-{j}]}}}}" in prompt:
    # Replace the reference with the previous output
    prompt = prompt.replace(
    f"{{{{output[-{j}]}}}}", str(previous_output)
    )

    # Append the context filled prompt to the list
    context_filled_prompts.append(prompt)

    # Call the provided callable with the processed prompt
    # Get the result by calling the callable with the model and prompt
    result = callable(model, prompt)

    # Try to parse the result as JSON, handling markdown-wrapped JSON
    try:
    # First, attempt to extract JSON from markdown code blocks
    # Search for JSON in markdown code blocks
    json_match = re.search(r"```(?:json)?\s*([\s\S]*?)\s*```", result)
    # If a match is found
    if json_match:
    # Parse the JSON from the match
    result = json.loads(json_match.group(1))
    else:
    # If no markdown block found, try parsing the entire result
    # Parse the entire result as JSON
    result = json.loads(result)
    except json.JSONDecodeError:
    # Not JSON, keep as is
    pass

    # Append the result to the output list
    output.append(result)

    # Return the list of outputs
    return output, context_filled_prompts

    @staticmethod
    def to_delim_text_file(name: str, content: List[Union[str, dict]]) -> str:
    result_string = ""
    with open(f"{name}.txt", "w") as outfile:
    for i, item in enumerate(content, 1):
    if isinstance(item, dict):
    item = json.dumps(item)
    if isinstance(item, list):
    item = json.dumps(item)
    chain_text_delim = (
    f"{'🔗' * i} -------- Prompt Chain Result #{i} -------------\n\n"
    )
    outfile.write(chain_text_delim)
    outfile.write(item)
    outfile.write("\n\n")

    result_string += chain_text_delim + item + "\n\n"

    return result_string
    190 changes: 190 additions & 0 deletions chain_test.py
    @@ -0,0 +1,190 @@
    from chain import MinimalChainable


    def test_chainable_solo():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    return f"Solo response: {prompt}"

    # Test context and single chain
    context = {"variable": "Test"}
    chains = ["Single prompt: {{variable}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 1
    assert result[0] == "Solo response: Single prompt: Test"


    def test_chainable_run():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    return f"Response to: {prompt}"

    # Test context and chains
    context = {"var1": "Hello", "var2": "World"}
    chains = ["First prompt: {{var1}}", "Second prompt: {{var2}} and {{var1}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 2
    assert result[0] == "Response to: First prompt: Hello"
    assert result[1] == "Response to: Second prompt: World and Hello"


    def test_chainable_with_output():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    return f"Response to: {prompt}"

    # Test context and chains
    context = {"var1": "Hello", "var2": "World"}
    chains = ["First prompt: {{var1}}", "Second prompt: {{var2}} and {{output[-1]}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 2
    assert result[0] == "Response to: First prompt: Hello"
    assert (
    result[1]
    == "Response to: Second prompt: World and Response to: First prompt: Hello"
    )


    def test_chainable_json_output():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    if "Output JSON" in prompt:
    return '{"key": "value"}'
    return prompt

    # Test context and chains
    context = {"test": "JSON"}
    chains = ["Output JSON: {{test}}", "Reference JSON: {{output[-1].key}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 2
    assert isinstance(result[0], dict)
    print("result", result)
    assert result[0] == {"key": "value"}
    assert result[1] == "Reference JSON: value" # Remove quotes around "value"


    def test_chainable_reference_entire_json_output():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    if "Output JSON" in prompt:
    return '{"key": "value"}'
    return prompt

    context = {"test": "JSON"}
    chains = ["Output JSON: {{test}}", "Reference JSON: {{output[-1]}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    assert len(result) == 2
    assert isinstance(result[0], dict)
    assert result[0] == {"key": "value"}
    assert result[1] == 'Reference JSON: {"key": "value"}'


    def test_chainable_reference_long_output_value():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    return prompt

    context = {"test": "JSON"}
    chains = [
    "Output JSON: {{test}}",
    "1 Reference JSON: {{output[-1]}}",
    "2 Reference JSON: {{output[-2]}}",
    "3 Reference JSON: {{output[-1]}}",
    ]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    assert len(result) == 4
    assert result[0] == "Output JSON: JSON"
    assert result[1] == "1 Reference JSON: Output JSON: JSON"
    assert result[2] == "2 Reference JSON: Output JSON: JSON"
    assert result[3] == "3 Reference JSON: 2 Reference JSON: Output JSON: JSON"


    def test_chainable_empty_context():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    return prompt

    # Test with empty context
    context = {}
    chains = ["Simple prompt"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 1
    assert result[0] == "Simple prompt"


    def test_chainable_json_output_with_markdown():
    # Mock model and callable function
    class MockModel:
    pass

    def mock_callable_prompt(model, prompt):
    return """
    Here's a JSON response wrapped in markdown:
    ```json
    {
    "key": "value",
    "number": 42,
    "nested": {
    "inner": "content"
    }
    }
    ```
    """

    context = {}
    chains = ["Test JSON parsing"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 1
    assert isinstance(result[0], dict)
    assert result[0] == {"key": "value", "number": 42, "nested": {"inner": "content"}}
    68 changes: 68 additions & 0 deletions main.py
    @@ -0,0 +1,68 @@
    import os
    from typing import List, Dict, Union
    from dotenv import load_dotenv
    from chain import MinimalChainable
    import llm


    def build_models():
    load_dotenv()

    ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")

    sonnet_3_5_model: llm.Model = llm.get_model("claude-3.5-sonnet")
    sonnet_3_5_model.key = ANTHROPIC_API_KEY

    return sonnet_3_5_model


    def prompt(model: llm.Model, prompt: str):
    res = model.prompt(
    prompt,
    temperature=0.5,
    )
    return res.text()


    def prompt_chainable_poc():

    sonnet_3_5_model = build_models()

    result, context_filled_prompts = MinimalChainable.run(
    context={"topic": "AI Agents"},
    model=sonnet_3_5_model,
    callable=prompt,
    prompts=[
    # prompt #1
    "Generate one blog post title about: {{topic}}. Respond in strictly in JSON in this format: {'title': '<title>'}",
    # prompt #2
    "Generate one hook for the blog post title: {{output[-1].title}}",
    # prompt #3
    """Based on the BLOG_TITLE and BLOG_HOOK, generate the first paragraph of the blog post.
    BLOG_TITLE:
    {{output[-2].title}}
    BLOG_HOOK:
    {{output[-1]}}""",
    ],
    )

    chained_prompts = MinimalChainable.to_delim_text_file(
    "poc_context_filled_prompts", context_filled_prompts
    )
    chainable_result = MinimalChainable.to_delim_text_file("poc_prompt_results", result)

    print(f"\n\n📖 Prompts~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \n\n{chained_prompts}")
    print(f"\n\n📊 Results~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \n\n{chainable_result}")

    pass


    def main():

    prompt_chainable_poc()


    if __name__ == "__main__":
    main()
    4 changes: 4 additions & 0 deletions requirements.txt
    @@ -0,0 +1,4 @@
    llm
    python-dotenv
    llm-claude-3
    pytest