Posts tagged with "MLSys"
Attention Mechanisms — Full, Sparse, Linear, NSA & GLA
Breaking down Full, Sparse, and Linear Attention, all the way to DeepSeek NSA and Gated Linear Attention
TritonForge: Server-based Multi-turn RL for Triton Kernel Generation
An end-to-end, server-based RL training and evaluation system for Triton kernel generation on NVIDIA and AMD GPUs, built on slime + Megatron
Transformer Deep Dive (Math + Code)
Deconstructing the Transformer's Self-Attention, LayerNorm, and MLP from math, code, and architecture perspectives