ankitsharma07 / Samplers.md
Last active July 13, 2025 15:00
Dummy's Guide to Modern LLM Sampling -- @AlpinDale

Dummy's Guide to Modern LLM Sampling

Intro Knowledge

Large Language Models (LLMs) work by taking a piece of text (e.g. a user prompt) and predicting the next word, or, in more technical terms, the next token. LLMs have a vocabulary, or dictionary, of valid tokens, and reference it during both training and inference (the process of generating text). More on that below, but you first need to understand why we use tokens (sub-words) instead of whole words or individual letters. Before diving in, a short glossary of technical terms that aren't explained in depth in the sections below:

Short Glossary

Logits: The raw, unnormalized scores the model outputs for each token in its vocabulary. Higher logits indicate tokens the model considers more likely to come next.

Softmax: A mathematical function that converts logits into a proper probability distribution: values between 0 and 1 that sum to 1.

Entropy: A measure of uncertainty or randomness in a probability distribution. Higher entropy means the model is less certain about which token comes next.
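To make the glossary concrete, here is a minimal, self-contained sketch. It assumes a made-up six-token vocabulary and hand-picked logits rather than any real model's output; it only illustrates how logits become probabilities via softmax, how entropy measures the spread of that distribution, and how a next token is sampled from it:

```python
import numpy as np

# Hypothetical toy vocabulary and raw next-token logits (not from a real model).
vocab = ["the", " cat", " dog", " sat", " ran", "."]
logits = np.array([2.1, 1.3, 1.1, 0.4, 0.2, -1.0])  # raw, unnormalized scores

# Softmax: turn logits into a probability distribution that sums to 1.
# Subtracting the max first is a standard numerical-stability trick.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Entropy: uncertainty of the distribution (in bits, via log base 2).
entropy = -np.sum(probs * np.log2(probs))

# Sampling: draw the next token according to those probabilities.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)

print("probabilities:", dict(zip(vocab, probs.round(3))))
print("entropy (bits):", round(entropy, 3))
print("sampled next token:", repr(next_token))
```

A distribution concentrated on one token gives low entropy (the model is confident); a nearly uniform distribution gives high entropy (the model is uncertain), which is the intuition the samplers discussed later build on.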

ankitsharma07 / grokking_to_leetcode.md
Created August 19, 2023 22:28 — forked from tykurtz/grokking_to_leetcode.md
Grokking the coding interview equivalent leetcode problems

GROKKING NOTES

I liked the way Grokking the Coding Interview organized problems into learnable patterns. However, the course is expensive, and the majority of the problems are copy-pasted from leetcode. Since the explanations on leetcode are usually just as good, the course really boils down to a glorified curated list of leetcode problems.

So below I made a list of leetcode problems that are as close to grokking problems as possible.

Pattern: Sliding Window