Jimmy Ruska jimmyrcom

@jexp
jexp / graphrag-load-neo4j-1.ipynb
Last active September 8, 2024 19:28
Quick Neo4j Loaders for GraphRAG Parquet
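A minimal Python sketch (not the notebook's actual code) of one way to load GraphRAG parquet output into Neo4j, assuming pandas and the official neo4j driver; the parquet file name, node label, property names, and connection details are illustrative.

import pandas as pd
from neo4j import GraphDatabase

# Illustrative paths and credentials; adjust to your GraphRAG output directory
# and Neo4j server.
ENTITY_PARQUET = "output/create_final_entities.parquet"
URI, AUTH = "bolt://localhost:7687", ("neo4j", "password")

def load_entities(driver, df):
    # Send all rows in one UNWIND ... MERGE statement so the load is a single
    # round trip; the label and property names here are illustrative.
    query = """
    UNWIND $rows AS row
    MERGE (e:__Entity__ {id: row.id})
    SET e.name = row.name, e.description = row.description
    """
    with driver.session() as session:
        session.run(query, rows=df.to_dict("records"))

entities = pd.read_parquet(ENTITY_PARQUET)[["id", "name", "description"]]
with GraphDatabase.driver(URI, auth=AUTH) as driver:
    load_entities(driver, entities)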
@thomwolf
thomwolf / fast_speech_text_speech.py
Last active January 14, 2025 12:13
speech to text to speech
""" To use: install LLM studio (or Ollama), clone OpenVoice, run this script in the OpenVoice directory
git clone https://github.com/myshell-ai/OpenVoice
cd OpenVoice
git clone https://huggingface.co/myshell-ai/OpenVoice
cp -r OpenVoice/* .
pip install whisper pynput pyaudio
"""
from openai import OpenAI
import time
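A rough sketch (not the gist's actual code) of the record, transcribe, and reply loop the docstring describes, assuming Whisper for transcription and a local LM Studio / Ollama server exposing an OpenAI-compatible API; the port, model name, and recording length are assumptions, and the final OpenVoice text-to-speech step is only marked as a placeholder.

import wave
import pyaudio
import whisper
from openai import OpenAI

# Assumed local endpoint: LM Studio defaults to port 1234 (Ollama uses 11434).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

def record(path="question.wav", seconds=5, rate=16000, chunk=1024):
    # Capture a few seconds of microphone audio into a 16-bit mono WAV file.
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=rate,
                     input=True, frames_per_buffer=chunk)
    frames = [stream.read(chunk) for _ in range(int(rate / chunk * seconds))]
    sample_width = pa.get_sample_size(pyaudio.paInt16)
    stream.stop_stream(); stream.close(); pa.terminate()
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(sample_width)
        wf.setframerate(rate)
        wf.writeframes(b"".join(frames))
    return path

stt = whisper.load_model("base")             # speech to text (needs ffmpeg)
text = stt.transcribe(record())["text"]      # what the user said
reply = client.chat.completions.create(      # local LLM produces the answer
    model="local-model",                     # placeholder; LM Studio serves whatever model is loaded
    messages=[{"role": "user", "content": text}],
).choices[0].message.content
print(reply)
# Final step in the gist: synthesize `reply` back to audio with OpenVoice
# (omitted here; see the OpenVoice repository cloned above).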
@SMUsamaShah
SMUsamaShah / Hacker_News_Story_Rank_Change_Indicator.user.js
Last active November 6, 2024 13:27
UserScript: Indicate new/unread HN stories on the front page since your last visit.
// ==UserScript==
// @name Hacker News Story Rank Change Indicator
// @namespace http://tampermonkey.net/
// @version 2024-09-07_15-42
// @description Indicate the new stories and stories moving up/down on the front page
// @author SMUsamaShah
// @match https://news.ycombinator.com/
// @match https://news.ycombinator.com/news
// @match https://news.ycombinator.com/news?p=*
// @match https://news.ycombinator.com/?p=*
@adtac
adtac / Dockerfile
Last active July 13, 2025 20:06
#!/usr/bin/env docker run
#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"
# syntax = docker/dockerfile:1.4.0
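# The shebang above makes this Dockerfile directly executable: running
# ./Dockerfile builds the image from this file ($0), greps the resulting
# sha256 image id out of the build log, and docker-runs it with port 8080 published.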
FROM node:20
WORKDIR /root
RUN npm install sqlite3
@younesbelkada
younesbelkada / finetune_llama_v2.py
Last active July 1, 2025 23:14
Fine tune Llama v2 models on Guanaco Dataset
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
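A rough sketch (not the gist's actual code) of the approach its title describes: QLoRA-style fine-tuning of Llama 2 on the Guanaco dataset with peft and trl. It assumes a 2023-era trl release in which SFTTrainer still accepts dataset_text_field and tokenizer directly (newer releases moved these into SFTConfig); the hyperparameters, output paths, and gated-model access are assumptions.

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-hf"   # gated; requires Hugging Face access approval

# 4-bit quantization so the 7B model fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             quantization_config=bnb_config,
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Guanaco instruction-following data (the dataset the gist's title refers to).
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # Guanaco rows carry the full prompt in "text"
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="./llama2-guanaco",
                           per_device_train_batch_size=4,
                           gradient_accumulation_steps=2,
                           learning_rate=2e-4,
                           max_steps=500,
                           logging_steps=10),
)
trainer.train()
trainer.save_model("./llama2-guanaco")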
@tech234a
tech234a / README.md
Last active June 10, 2023 14:03
Using unmodified third-party Reddit apps with a custom server
@rain-1
rain-1 / llama-home.md
Last active June 24, 2025 11:12
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text prediction model, similar to GPT-2 or to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna) with this, I think; those versions are more focused on answering questions.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
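As a complementary illustration of the layer-offloading idea described above, here is a minimal sketch using the llama-cpp-python bindings rather than the llama.cpp CLI; it assumes the bindings were built with cuBLAS support and that you already have a quantized 13B model file, and the model path and layer count are illustrative.

from llama_cpp import Llama

# Offload part of the network to the GPU; with ~6 GB of VRAM, somewhere around
# 15-20 layers of a quantized 13B model is a reasonable starting point.
llm = Llama(
    model_path="./models/llama-13b-q4_0.gguf",  # illustrative path/quantization
    n_gpu_layers=18,                            # layers to run on the GPU
    n_ctx=2048,                                 # context window
)

out = llm("Building a website can be done in 10 simple steps:", max_tokens=64)
print(out["choices"][0]["text"])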
@evadne
evadne / lecture.md
Last active September 15, 2025 10:56
How to Sell Elixir (2023)

How to Sell Elixir AGAIN (2023)

Presented by Evadne Wu at Code BEAM Lite in Stockholm, Sweden on 12 May 2023

Synopsis

We have celebrated 10 years of Elixir and also nearly 25 years of Erlang since the open source release in December 1998.

Most of the libraries needed to make the ecosystem viable have been built, talks given, books written, conferences held and training sessions provided. A new generation of companies has been built on top of the Elixir / Erlang ecosystem. By all measures, we have achieved further reach and maturity than 5 years ago.

@kconner
kconner / macOS Internals.md
Last active October 21, 2025 15:03
macOS Internals

macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options:

Reinforcement Learning for Language Models

Yoav Goldberg, April 2023.

Why RL?

With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much