An all-in-one blog for learning LLM ins and outs: tokenization, attention, positional encoding, and more
Project
I've been diving deep into the internals of Large Language Models (LLMs) and have started documenting my findings. My blog covers topics like:
Tokenization techniques (e.g., byte-level BPE / BBPE)
Attention mechanisms (e.g., MHA, MQA, MLA)
Positional encoding and context-length extrapolation (e.g., RoPE, NTK-aware interpolation, YaRN; see the sketch after this list)
Architecture details of models like Qwen and LLaMA
Training methods, including SFT and reinforcement learning
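To give a taste of the content, here's a minimal NumPy sketch of the core RoPE rotation (illustrative only, not code lifted from the blog; the function name and shapes are just my conventions, and `base=10000` is the standard default from the original RoPE paper):

```python
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply Rotary Position Embedding to x of shape (seq_len, d), d even.

    Each channel pair (2i, 2i+1) at position m is rotated by the angle
    m * theta_i, where theta_i = base**(-2i/d).
    """
    seq_len, d = x.shape
    assert d % 2 == 0, "RoPE expects an even head dimension"
    theta = base ** (-np.arange(0, d, 2) / d)               # (d/2,) frequencies
    angles = np.arange(seq_len)[:, None] * theta[None, :]   # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                         # even/odd channels
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                      # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

if __name__ == "__main__":
    q = np.random.randn(16, 64)   # (seq_len, head_dim)
    print(rope(q).shape)          # (16, 64): position is now encoded in the phase
```

The blog goes into why rotating query/key pairs makes attention scores depend only on relative position, and how rescaling the frequencies (NTK-aware interpolation, YaRN) lets a model extrapolate past its training context length.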
If you're interested in the nuts and bolts of LLMs, feel free to check it out: http://comfyai.app/