In the spirit of 12 Factor Apps. The source for this project is public at https://github.com/humanlayer/12-factor-agents, and I welcome your feedback and contributions. Let's figure this out together!
Hi, I'm Dex. I've been hacking on AI agents for a while.
I've tried every agent framework out there, from the plug-and-play crew/langchains to the "minimalist" smolagents of the world to the "production grade" LangGraph, Griptape, etc.
I've talked to a lot of really strong founders, in and out of YC, who are all building really impressive things with AI. Most of them are rolling the stack themselves. I don't see a lot of frameworks in production customer-facing agents.
I've been surprised to find that most of the products out there billing themselves as "AI Agents" are not all that agentic. A lot of them are mostly deterministic code, with LLM steps sprinkled in at just the right points to make the experience truly magical.
Agents, at least the good ones, don't follow the "here's your prompt, here's a bag of tools, loop until you hit the goal" pattern. Rather, they are mostly just software.
So, I set out to answer: what are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
Welcome to 12-factor agents. As every Chicago mayor since Daley has consistently plastered all over the city's major airports, we're glad you're here.
Special thanks to @iantbutler01, @tnm, @hellovai, @stantonk, @balanceiskey, @AdjectiveAllison, @pfbyjy, @a-churchill, and the SF MLOps community for early feedback on this guide.
Even if LLMs continue to get exponentially more powerful, there will be core engineering techniques that make LLM-powered software more reliable, more scalable, and easier to maintain.
- How We Got Here: A Brief History of Software
- Factor 1: Natural Language to Tool Calls
- Factor 2: Own your prompts
- Factor 3: Own your context window
- Factor 4: Tools are just structured outputs
- Factor 5: Unify execution state and business state
- Factor 6: Launch/Pause/Resume with simple APIs
- Factor 7: Contact humans with tool calls
- Factor 8: Own your control flow
- Factor 9: Compact Errors into Context Window
- Factor 10: Small, Focused Agents
- Factor 11: Trigger from anywhere, meet users where they are
- Factor 12: Make your agent a stateless reducer
For a deeper dive on my agent journey and what led us here, check out A Brief History of Software; here's a quick summary:
We're gonna talk a lot about Directed Graphs (DGs) and their Acyclic friends, DAGs. I'll start by pointing out that...well...software is a directed graph. There's a reason we used to represent programs as flow charts.
Around 20 years ago, we started to see DAG orchestrators become popular. We're talking classics like Airflow and Prefect, some predecessors, and some newer ones like Dagster, Inngest, and Windmill. These followed the same graph pattern, with the added benefits of observability, modularity, retries, administration, etc.
I'm not the first person to say this, but my biggest takeaway when I started learning about agents was that you get to throw the DAG away. Instead of software engineers coding each step and edge case, you can give the agent a goal and a set of transitions:
And let the LLM make decisions in real time to figure out the path.
The promise here is that you write less software: you just give the LLM the "edges" of the graph and let it figure out the nodes. You can recover from errors, and you may find that LLMs discover novel solutions to problems.
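To make that concrete, here's a rough sketch of the shift. The tool functions and the `run_agent` entry point below are hypothetical stand-ins, not any specific framework's API:

```python
# Illustrative only: the tools and `run_agent` below are hypothetical stand-ins.

def run_tests(tag: str) -> str: ...
def build_image(tag: str) -> str: ...
def deploy_to_staging(tag: str) -> str: ...
def rollback(tag: str) -> str: ...

# The DAG way: engineers hard-code every node and edge up front.
def deploy_pipeline(tag: str) -> None:
    run_tests(tag)
    build_image(tag)
    deploy_to_staging(tag)

# The agentic promise: hand the model a goal plus the allowed "transitions" (tools),
# and let it pick the path - including the error handling - at runtime.
tools = [run_tests, build_image, deploy_to_staging, rollback]
goal = "Deploy the backend tagged v1.2.3 to staging; roll back if anything fails."
# result = run_agent(goal=goal, tools=tools)  # hypothetical entry point
```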
As we'll see later, it turns out this doesn't quite work.
Let's dive one step deeper - with agents you've got this loop consisting of 3 steps:
- LLM determines the next step in the workflow, outputting structured JSON ("tool calling")
- Deterministic code executes the tool call
- The result is appended to the context window
- Repeat until the next step is determined to be "done"
```python
initial_event = {"message": "..."}
context = [initial_event]
while True:
    # 1. the LLM determines the next step, as structured output ("tool calling")
    next_step = await llm.determine_next_step(context)
    context.append(next_step)

    if next_step.intent == "done":
        return next_step.final_answer

    # 2. deterministic code executes the tool call
    result = await execute_step(next_step)
    # 3. the result is appended to the context window, and we loop
    context.append(result)
```
Our initial context is just the starting event (maybe a user message, maybe a cron fired, maybe a webhook, etc.), and we ask the LLM to choose the next step (tool) or to determine that we're done.
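For a sense of what that "next step" might look like as structured output, here's a minimal sketch. The field names mirror the pseudocode above, but the exact shape is illustrative rather than any specific provider's tool-calling format:

```python
# Illustrative shapes only - real LLM providers return tool calls in their own formats.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NextStep:
    intent: str                          # e.g. "list_git_tags", "deploy_backend", or "done"
    arguments: dict = field(default_factory=dict)
    final_answer: Optional[str] = None   # only populated when intent == "done"

# A mid-loop step: the LLM asks for a tool call, and deterministic code executes it.
step = NextStep(intent="list_git_tags", arguments={"repo": "backend"})

# A terminal step: the loop returns the final answer to the caller.
done = NextStep(intent="done", final_answer="Deployed v1.2.3 to staging.")
```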
Here's a multi-step example:
*(video: 027-agent-loop-animation.mp4)*
At the end of the day, this approach just doesn't work as well as we want it to.
In building HumanLayer, I've talked to at least 100 SaaS builders (mostly technical founders) looking to make their existing product more agentic. The journey usually goes something like:
- Decide you want to build an agent
- Product design, UX mapping, what problems to solve
- Want to move fast, so grab $FRAMEWORK and get to building
- Get to 70-80% quality bar
- Realize that 80% isn't good enough for most customer-facing features
- Realize that getting past 80% requires reverse-engineering the framework, prompts, flow, etc.
- Start over from scratch
Random Disclaimers
DISCLAIMER: I'm not sure the exact right place to say this, but here seems as good as any: this is BY NO MEANS meant to be a dig at either the many frameworks out there or the pretty dang smart people who work on them. They enable incredible things and have accelerated the AI ecosystem.
I hope that one outcome of this post is that agent framework builders can learn from the journeys of myself and others, and make frameworks even better.
Especially for builders who want to move fast but need deep control.
DISCLAIMER 2: I'm not going to talk about MCP. I'm sure you can see where it fits in.
DISCLAIMER 3: I'm using mostly TypeScript, for reasons, but all this stuff works in Python or any other language you prefer.
Anyways back to the thing...
After digging through hundreds of AI libraries and working with dozens of founders, my instinct is this:
- There are some core things that make agents great
- Going all in on a framework and building what is essentially a greenfield rewrite may be counter-productive
- There are some core principles that make agents great, and you will get most/all of them if you pull in a framework
- BUT, the fastest way I've seen for builders to get high-quality AI software in the hands of customers is to take small, modular concepts from agent building, and incorporate them into their existing product
- These modular concepts from agents can be defined and applied by most skilled software engineers, even if they don't have an AI background
- How We Got Here: A Brief History of Software
- Factor 1: Natural Language to Tool Calls
- Factor 2: Own your prompts
- Factor 3: Own your context window
- Factor 4: Tools are just structured outputs
- Factor 5: Unify execution state and business state
- Factor 6: Launch/Pause/Resume with simple APIs
- Factor 7: Contact humans with tool calls
- Factor 8: Own your control flow
- Factor 9: Compact Errors into Context Window
- Factor 10: Small, Focused Agents
- Factor 11: Trigger from anywhere, meet users where they are
- Factor 12: Make your agent a stateless reducer
- Contribute to this guide here
- I talked about a lot of this on an episode of the Tool Use podcast in March 2025
- I write about some of this stuff at The Outer Loop
- I do webinars about Maximizing LLM Performance with @hellovai
- We build OSS agents with this methodology under got-agents/agents
- We ignored all our own advice and built a framework for running distributed agents in Kubernetes
- Other links from this guide:
- 12 Factor Apps
- Building Effective Agents (Anthropic)
- Prompts are Functions
- Library patterns: Why frameworks are evil
- The Wrong Abstraction
- Mailcrew Agent
- Mailcrew Demo Video
- Chainlit Demo
- TypeScript for LLMs
- Schema Aligned Parsing
- Function Calling vs Structured Outputs vs JSON Mode
- BAML on GitHub
- OpenAI JSON vs Function Calling
- Outer Loop Agents
- Airflow
- Prefect
- Dagster
- Inngest
- Windmill
- The AI Agent Index (MIT)
- NotebookLM on Finding Model Capability Boundaries