Developer Tooling

From Prompt Engineering to Context Engineering

· book · finished

Simon Willison traces the evolution from clever prompt tricks to the discipline of assembling the right information for LLMs. These three posts chart that progression.

In Defense of Prompt Engineering

TLDR: Prompt engineering is a legitimate discipline requiring communication skills, domain expertise, systematic experimentation, and security awareness. It is not a hack or a fad.

Key Insight: The skills of clarity, specificity, and domain knowledge that make good prompts are permanently valuable, regardless of how models evolve.

Read the full article ->

Notes on Using LLMs for Code

TLDR: Practical strategies for getting better code from LLMs: voice interfaces for rapid iteration, AI-driven data extraction, local models for privacy, and providing examples in prompts.

Key Insight: Few-shot prompting — giving the model concrete examples — produces better code than even the most detailed written instructions.

Read the full article ->
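The few-shot idea above can be sketched in a few lines: prepend concrete input/output pairs so the model imitates the pattern rather than interpreting instructions. The examples and helper below are illustrative assumptions, not code from the article.

```python
# Hypothetical sketch of few-shot prompt construction for a coding task.
# The example pairs are made up for illustration.

EXAMPLES = [
    ("Parse '2024-01-15' into a date",
     "from datetime import date\ndate.fromisoformat('2024-01-15')"),
    ("Parse '15/01/2024' (day first) into a date",
     "from datetime import datetime\n"
     "datetime.strptime('15/01/2024', '%d/%m/%Y').date()"),
]

def build_few_shot_prompt(task: str) -> str:
    """Show the model worked examples, then pose the real task in the same shape."""
    parts = []
    for request, code in EXAMPLES:
        parts.append(f"Task: {request}\nCode:\n{code}\n")
    parts.append(f"Task: {task}\nCode:")
    return "\n".join(parts)

prompt = build_few_shot_prompt("Parse 'Jan 15, 2024' into a date")
```

Because the examples establish both format and conventions, the model's completion tends to match them — which is the point of the Key Insight above.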

Context Engineering

TLDR: “Context engineering” is proposed as a more accurate term than “prompt engineering.” The real skill is assembling the right information — documentation, examples, constraints, prior outputs — for the LLM to work with.

Key Insight: The craft is not finding clever wording but deciding what the model needs to see — and getting that material into the context window.

Read the full article ->

What does this mean for developers?

The shift from prompt engineering to context engineering reflects a maturing understanding of how LLMs work. Developers should focus less on finding magic phrases and more on building systems that assemble relevant context automatically — documentation, code examples, constraints, and prior outputs fed to the model at inference time.
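A system like that can be sketched as a small container that collects each kind of context and renders it into one model input. Everything here — the class, section names, and character budget — is an illustrative assumption, not an API from the posts.

```python
# Hypothetical sketch of automatic context assembly: gather documentation,
# examples, constraints, and prior outputs, then render them for the model.
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    docs: list = field(default_factory=list)
    examples: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    prior_outputs: list = field(default_factory=list)

    def render(self, question: str, budget_chars: int = 8000) -> str:
        """Concatenate non-empty sections in priority order, under a crude size budget."""
        sections = [
            ("Constraints", self.constraints),
            ("Documentation", self.docs),
            ("Examples", self.examples),
            ("Prior outputs", self.prior_outputs),
        ]
        parts = []
        for title, items in sections:
            if items:  # skip empty sections entirely
                parts.append(f"## {title}\n" + "\n".join(items))
        parts.append(f"## Question\n{question}")
        return "\n\n".join(parts)[:budget_chars]

bundle = ContextBundle(
    docs=["date.fromisoformat parses ISO 8601 strings."],
    constraints=["Use only the standard library."],
)
prompt = bundle.render("Write a function that parses ISO dates.")
```

In a real system the lists would be filled by retrieval (documentation search, prior-output logs) and the budget enforced in tokens rather than characters, but the shape — assemble, prioritize, truncate — is the same.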