Goals: add links that give clear, reasonable explanations of how things work. No hype, and no vendor content if possible. Practical first-hand accounts of running models in production are eagerly sought.
```python
# train_grpo.py
#
# See https://github.com/willccbb/verifiers for ongoing developments
#
"""
citation:

@misc{brown2025grpodemo,
    title={Granular Format Rewards for Eliciting Mathematical Reasoning Capabilities in Small Language Models},
    author={Brown, William},
```
```python
#!/usr/bin/env python3
"""
Expose Ollama models to LM Studio by symlinking its model files.

NOTE: On Windows, you need to run this script with administrator privileges.
"""
import json
import os
from pathlib import Path
```
```julia
function mandelbrot_kernel(c, max_iter)
    z = c
    for i in 1:max_iter
        z = z * z + c
        if abs2(z) > 4
            return i - 1
        end
    end
    return max_iter
end
```
Not only is Mojo great for writing high-performance code, but it also lets us leverage the huge Python ecosystem of libraries and tools. With seamless Python interoperability, Mojo can use Python for what it's good at, especially GUIs, without sacrificing performance in critical code. Let's take the classic Mandelbrot set algorithm and implement it in Mojo.
We'll introduce a Complex type and use it in our implementation.
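For readers without a Mojo or Julia toolchain handy, the same escape-time kernel can be sketched in plain Python using the built-in `complex` type (this mirrors the kernel above for illustration; it is not the Mojo implementation, and `max_iter=200` is an arbitrary default):

```python
def mandelbrot_kernel(c: complex, max_iter: int = 200) -> int:
    """Return the escape-time iteration count for a point c in the complex plane."""
    z = c
    for i in range(1, max_iter + 1):
        z = z * z + c
        # abs2(z) > 4, i.e. |z|^2 > 4, means the orbit has escaped
        if z.real * z.real + z.imag * z.imag > 4:
            return i - 1
    return max_iter  # never escaped: c is (probably) in the set
```

Points inside the set (like `0+0j`) exhaust `max_iter`; points far outside (like `2+0j`) escape immediately.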
```python
import requests
import time
import os
import sys

import openai
import tiktoken
from termcolor import colored

openai.api_key = open(os.path.expanduser('~/.openai')).read().strip()
```
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
```shell
# This demo uses an Alpine sandbox in a Docker container in
# interactive mode, run with:
#     docker run --rm -it alpine
#
# If you run it on your own system:
#   1. use your own package manager instead of `apk`
#   2. expect the following leftovers:
#      - installed binaries (age, age-keygen, sops)
#      - $HOME/.config/sops/age/keys.txt
#      - demo files: source.env, encrypted.env, decrypted.env
```
```shell
BUCKET_NAME=terraform-your_company-remote-store # this should be unique, and by that I mean really UNIQUE
BUCKET_REGION=eu-central-1
USER_NAME=terraform-deployer
POLICY_FILE_NAME=$PWD/policy.json
AWS_PROFILE=your_company

aws s3api create-bucket \
    --profile "$AWS_PROFILE" \
    --bucket "$BUCKET_NAME" \
    --region "$BUCKET_REGION" \
    --create-bucket-configuration LocationConstraint="$BUCKET_REGION" # required for any region other than us-east-1
```
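For context, the bucket created above is what a Terraform `s3` backend block points at. A sketch of that configuration, reusing the names from the variables above (the `key` path is an assumption; adjust it to your own state layout):

```hcl
terraform {
  backend "s3" {
    bucket  = "terraform-your_company-remote-store"
    key     = "state/terraform.tfstate" # assumed layout
    region  = "eu-central-1"
    profile = "your_company"
  }
}
```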