```
# Modelfile for creating an API security assistant
# Run `ollama create api-secexpert -f ./Modelfile` and then `ollama run api-secexpert` and enter a topic
FROM codellama
PARAMETER temperature 1
SYSTEM """
You are a senior API developer expert, acting as an assistant.
You offer help with API security topics such as: Secure Coding practices,
API security, API endpoint security, OWASP API Top 10.
"""
```
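Once the model has been created, you are not limited to the interactive `ollama run` prompt. A minimal sketch of querying the same `api-secexpert` model programmatically through Ollama's local REST API, assuming the Ollama server is running on its default port (11434); the example question is just an illustration:

```python
import requests

# Send a single prompt to the locally running Ollama server.
# stream=False returns the whole answer in one JSON object instead of
# a stream of partial chunks.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "api-secexpert",
        "prompt": "How do I prevent broken object level authorization in a REST API?",
        "stream": False,
    },
)

# The generated text is returned under the "response" key.
print(response.json()["response"])
```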
```python
from llama_cpp import Llama

# Load a local GGUF model. n_gpu_layers controls how many layers are
# offloaded to the GPU (35 covers the whole 7B model on most cards).
# Note: the filename must match a model that actually exists in the
# directory; TheBloke's CodeLlama-7B-Instruct-GGUF repo ships
# codellama-7b-instruct.Q5_K_M.gguf.
llm = Llama(
    model_path="C:/models/TheBloke/CodeLlama-7B-Instruct-GGUF/codellama-7b-instruct.Q5_K_M.gguf",
    n_gpu_layers=35,
)

# Generate a completion
output = llm(
    "Q: Name the planets in the solar system? A: ",  # prompt
    max_tokens=4096,    # generate up to 4096 tokens
    stop=["Q:", "\n"],  # stop just before the model would start a new question
    echo=True,          # echo the prompt back in the output
)
```
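The call does not return a plain string: llama-cpp-python hands back an OpenAI-style completion dictionary, so the generated text has to be pulled out of the `choices` list. A short sketch of reading the result, plus the streaming variant for long answers (the second prompt is just an illustration):

```python
# The completion is an OpenAI-style dict; the text lives under
# output["choices"][0]["text"].
print(output["choices"][0]["text"])

# For long answers, pass stream=True to receive chunks as they are
# produced instead of waiting for the full completion.
for chunk in llm("Q: What is an API gateway? A: ", max_tokens=256, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
```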
```python
from gpt4all import GPT4All

# Instantiate the GPT4All model
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

# Use the model to generate text
output = model.generate("The capital of France is ", max_tokens=3)

# Print the generated text
print(output)
```
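Single `generate()` calls are stateless. For multi-turn conversations, the GPT4All Python bindings provide a `chat_session()` context manager that keeps the conversation history between calls; a minimal sketch (the questions are only illustrations):

```python
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

# Inside chat_session(), earlier turns stay in context, so the second
# question can refer back to the first answer.
with model.chat_session():
    print(model.generate("Name three common API authentication schemes.", max_tokens=200))
    print(model.generate("Which of those is the most secure, and why?", max_tokens=200))
```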
```python
from langchain.llms import GPT4All
from langchain import PromptTemplate, LLMChain

# Create a prompt template containing some initial instructions:
# here we tell the LLM to think step by step before giving the final answer
template = """
Let's think step by step of the question: {question}
Based on all the thought the final answer becomes:
"""
```