<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Muhtasim Munif Fahim — Writing</title>
    <link>https://fahim.bd</link>
    <description>Notes on machine learning, statistics, neural architecture search, and AI for development.</description>
    <language>en</language>
    <atom:link href="https://fahim.bd/rss.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Scaling Laws for LLMs: From Chinchilla to 2026</title>
      <link>https://fahim.bd/blog/scaling-laws-for-llms</link>
      <guid>https://fahim.bd/blog/scaling-laws-for-llms</guid>
      <pubDate>Mon, 04 May 2026 00:00:00 GMT</pubDate>
      <description>The most expensive equations in AI determine how labs spend billions. Here&apos;s what they actually say — and where they&apos;re being rewritten. From Kaplan to Chinchilla to inference-time scaling.</description>
    </item>
    <item>
      <title>LLM Quantization Demystified: GGUF vs GPTQ vs AWQ</title>
      <link>https://fahim.bd/blog/llm-quantization-demystified</link>
      <guid>https://fahim.bd/blog/llm-quantization-demystified</guid>
      <pubDate>Tue, 28 Apr 2026 00:00:00 GMT</pubDate>
      <description>Your 7B model has 7 billion parameters, roughly 14 GB at FP16. Here&apos;s exactly how to shrink them, and what you lose in the process. A practitioner&apos;s guide to choosing GGUF, GPTQ, or AWQ.</description>
    </item>
    <item>
      <title>Mixture of Experts Explained: The Architecture Behind Every Frontier Model in 2026</title>
      <link>https://fahim.bd/blog/mixture-of-experts-explained</link>
      <guid>https://fahim.bd/blog/mixture-of-experts-explained</guid>
      <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
      <description>How DeepSeek-R1, GPT-5, Gemini, and Mistral Large 3 all use the same trick — and what it means for your work. A complete conceptual and technical guide to MoE architecture.</description>
    </item>
    <item>
      <title>Why Causal Inference Matters More Than Prediction in Development Research</title>
      <link>https://fahim.bd/blog/causal-inference-ml</link>
      <guid>https://fahim.bd/blog/causal-inference-ml</guid>
      <pubDate>Sun, 15 Mar 2026 00:00:00 GMT</pubDate>
      <description>Most ML models in global development optimize for prediction accuracy. I argue this is the wrong objective, and outline what we should be optimizing for instead.</description>
    </item>
    <item>
      <title>Neural Architecture Search for the Real World: Lessons from Edge Deployment</title>
      <link>https://fahim.bd/blog/nas-edge-deployment</link>
      <guid>https://fahim.bd/blog/nas-edge-deployment</guid>
      <pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate>
      <description>Building ML models that work on low-power edge hardware in climate-constrained settings taught me more about model design than any benchmark ever did.</description>
    </item>
  </channel>
</rss>