
Model Distillation

Definition

Model distillation (also called knowledge distillation) is a machine learning technique where a smaller "student" model is trained to reproduce the behavior of a larger "teacher" model. The student learns not just from raw training data but from the teacher's outputs, including the probability distributions and reasoning patterns that the teacher has learned. This transfer of knowledge allows the student to achieve performance approaching the teacher's level despite having far fewer parameters.
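
In loss terms, this usually means training the student against the teacher's full output distribution (soft targets) alongside the ordinary ground-truth labels. Below is a minimal sketch of that classic soft-label objective in PyTorch; the temperature and alpha values are illustrative hyperparameters, not prescriptions from this article.

  import torch
  import torch.nn.functional as F

  def distillation_loss(student_logits, teacher_logits, labels,
                        temperature=2.0, alpha=0.5):
      # Soften both distributions so the student learns the teacher's
      # relative preferences among outputs, not just its top answer.
      soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
      log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

      # KL divergence between the softened teacher and student distributions,
      # scaled by temperature^2 as in the original formulation.
      kd = F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

      # Ordinary cross-entropy against the hard ground-truth labels.
      ce = F.cross_entropy(student_logits, labels)

      # Blend the soft-target and hard-label signals.
      return alpha * kd + (1 - alpha) * ce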

The technique has become especially important in the era of large language models, where the most capable models are too expensive or too slow for many production use cases. Distillation provides a practical path from "this works great in the demo with GPT-4" to "this works well enough in production at a cost we can sustain."

Why It Matters for Product Managers

Cost economics are one of the biggest constraints on AI product viability. A feature that costs $0.10 per API call during prototyping might work for demos, but at 10 million daily users that becomes $1 million per day. Model distillation is often the solution that makes the economics work, reducing per-request costs by an order of magnitude or more while preserving the quality that users expect.
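
The arithmetic behind that constraint is simple enough to sketch directly; the figures below are purely illustrative and assume roughly one call per user per day.

  # Back-of-envelope cost comparison; all prices are illustrative, not quotes.
  calls_per_day = 10_000_000

  teacher_cost_per_call = 0.10     # large frontier model
  student_cost_per_call = 0.005    # distilled model at roughly 20x lower cost

  teacher_daily = calls_per_day * teacher_cost_per_call   # $1,000,000 per day
  student_daily = calls_per_day * student_cost_per_call   # $50,000 per day

  print(f"Teacher: ${teacher_daily:,.0f}/day, Student: ${student_daily:,.0f}/day")
  print(f"Cost reduction: {teacher_cost_per_call / student_cost_per_call:.0f}x")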

Beyond cost, distillation also improves latency. Smaller models respond faster, which directly impacts user experience for interactive AI features. PMs building real-time AI experiences like chat interfaces, autocomplete, or inline suggestions need the speed that distilled models provide. Understanding the distillation tradeoff -- how much quality to sacrifice for how much cost and latency improvement -- is a critical product decision.

How It Works in Practice

  • Define the task scope -- Identify the specific task or narrow domain where you need the distilled model to perform. Distillation works best when focused on a well-defined use case rather than general-purpose capability.
  • Generate training data -- Use the teacher model to produce high-quality outputs for a large set of representative inputs. This synthetic data captures the teacher's behavior and reasoning patterns (see the sketch after this list).
  • Select the student architecture -- Choose a smaller model architecture that balances your target cost, latency, and quality requirements. Common choices include smaller open-weight models.
  • Train the student -- Fine-tune the student model on the teacher-generated data, often using techniques that transfer not just the final outputs but the probability distributions and intermediate representations.
  • Evaluate and iterate -- Run comprehensive evaluations comparing student and teacher performance on your task-specific benchmarks. Identify quality gaps and iterate on training data or architecture until the student meets your minimum quality bar.
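
The data-generation step in particular is easy to sketch. The example below assumes an OpenAI-style Python client as the teacher; the model name, prompts, and file path are placeholders rather than recommendations.

  # Sketch: collecting teacher outputs as fine-tuning data for a student model.
  import json
  from openai import OpenAI

  client = OpenAI()

  def teacher_answer(prompt: str) -> str:
      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder name for the large "teacher" model
          messages=[{"role": "user", "content": prompt}],
          temperature=0.2,
      )
      return response.choices[0].message.content

  representative_inputs = [
      "Summarize this support ticket: ...",
      "Classify the sentiment of this review: ...",
      # ...thousands more inputs, ideally sampled from real production traffic
  ]

  with open("distillation_data.jsonl", "w") as f:
      for prompt in representative_inputs:
          record = {"prompt": prompt, "completion": teacher_answer(prompt)}
          f.write(json.dumps(record) + "\n")

When only the teacher's text outputs are available, as with most hosted APIs, the student is then fine-tuned on this file with standard supervised fine-tuning; when teacher logits are accessible, the soft-label loss sketched earlier can be used instead.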

Common Pitfalls

  • Attempting to distill too broad a capability, resulting in a student model that performs mediocrely across many tasks rather than excellently on the specific task you need.
  • Not generating enough diverse training data from the teacher, causing the student to fail on input patterns it has not seen during distillation.
  • Ignoring the teacher model's terms of service, as some model providers explicitly prohibit using their outputs to train competing models.
  • Skipping rigorous evaluation and assuming the distilled model preserves quality without systematically measuring performance across all relevant scenarios.

Related Terms

Model distillation starts with a Foundation Model as the teacher and uses Fine-Tuning techniques to train the student. Synthetic Data generated by the teacher model serves as the primary training signal. Distilled models are often deployed for Edge Inference where Large Language Models would be too resource-intensive.

Frequently Asked Questions

What is model distillation in product management?

Model distillation is the process of training a smaller, cheaper AI model to mimic the outputs of a larger, more expensive one. For product managers, distillation is a key cost optimization strategy -- it allows teams to deploy AI features at a fraction of the inference cost while maintaining quality levels that users find acceptable.

Why is model distillation important for product teams?

Model distillation is important because it directly addresses the cost-quality tradeoff in AI products. Large foundation models deliver excellent quality but are expensive to run at scale. Distillation lets product teams capture most of that quality in a model that costs 10-100x less per request, making AI features economically viable at scale.
