Writing

Notes on research, methods, and ideas.

May 4, 2026

Scaling Laws for LLMs: From Chinchilla to 2026

The most expensive equations in AI determine how labs spend billions. Here's what they actually say — and where they're being rewritten. From Kaplan to Chinchilla to inference-time scaling.

Machine Learning · LLM · Scaling Laws · Research
Apr 28, 2026

LLM Quantization Demystified: GGUF vs GPTQ vs AWQ

Your 7B model's 7 billion parameters weigh in at 14 GB in fp16. Here's exactly how to shrink them — and what you lose in the process. A practitioner's guide to choosing GGUF, GPTQ, or AWQ.

Machine Learning · LLM · Quantization · Edge AI
Apr 20, 2026

Mixture of Experts Explained: The Architecture Behind Every Frontier Model in 2026

How DeepSeek-R1, GPT-5, Gemini, and Mistral Large 3 all use the same trick — and what it means for your work. A complete conceptual and technical guide to MoE architecture.

Machine Learning · LLM · Architecture · Deep Learning
Mar 15, 2026

Why Causal Inference Matters More Than Prediction in Development Research

Most ML models in global development optimize for prediction accuracy. I argue this is the wrong objective — and propose what we should be doing instead.

Causal Inference · Machine Learning · Development · Statistics
Jan 20, 2026

Neural Architecture Search for the Real World: Lessons from Edge Deployment

Building ML models that work on low-power edge hardware in climate-constrained settings taught me more about model design than any benchmark ever did.

Neural Architecture Search · Edge Computing · Green AI · Transformers