Who is Mitchell Hashimoto?
Before diving into his AI journey, it's worth understanding why Mitchell Hashimoto's opinions on developer tools carry so much weight. Mitchell is the co-founder of HashiCorp, the company behind some of the most influential infrastructure tools in modern software development:
- Vagrant - revolutionized local development environments
- Terraform - became the de facto standard for Infrastructure as Code (downloaded over 100 million times)
- Vault - enterprise secrets management
- Consul - service mesh and discovery
- Nomad - workload orchestration
- Packer - machine image automation
Mitchell met his co-founder Armon Dadgar at the University of Washington in 2008. Together, they built HashiCorp into a publicly traded company. Mitchell stepped down from his executive role to focus on what he loves most: building software. He's now working on Ghostty, a new terminal emulator.
When someone with this track record shares their thoughts on AI-assisted development, developers pay attention. And what Mitchell shares isn't hype—it's hard-won practical wisdom from someone who's been skeptical, struggled, and eventually found genuine value.
The Three Phases of Tool Adoption
Mitchell frames his AI journey through a universal lens that any developer can relate to:
| Phase | Description | Mitchell's Experience |
|---|---|---|
| 1. Inefficiency | Tool feels slower than existing workflow | Frustrated with chatbots, copy-pasting code |
| 2. Adequacy | Tool matches existing speed | Found agents useful but still babysitting |
| 3. Life-Altering Discovery | Tool fundamentally changes what's possible | Background agents, parallel workflows |
Most developers give up in Phase 1. Mitchell forced himself through it—and his method is instructive.
Step 1: Drop the Chatbot
Mitchell's first breakthrough came from abandoning the chatbot interface entirely for serious coding work.
"Chatbots have real value and are a daily part of my AI workflow, but their utility in coding is highly limited because you're mostly hoping they come up with the right results based on their prior training, and correcting them involves a human (you) to tell them they're wrong repeatedly. It is inefficient."
His "oh wow" moment? Pasting a screenshot of Zed's command palette into Gemini and asking it to reproduce it in SwiftUI. It worked remarkably well. But when he tried to generalize this success, especially in existing codebases ("brownfield projects"), the chatbot approach fell apart.
The Solution: Use agents—LLMs that can chat AND invoke external behavior in a loop. At minimum, agents need to: read files, execute programs, and make HTTP requests.
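To make that definition concrete, here is a minimal sketch of the loop in Python. Nothing in it is any particular product's API: `llm_complete` is a stand-in for whatever model call you use, and the JSON tool-call protocol is an assumption for illustration. The shape is what matters: the model talks, requests a tool, and sees the result fed back in.

```python
import json
import subprocess
import urllib.request

# The minimum toolset Mitchell names: read files, execute programs,
# and make HTTP requests.
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def execute(command: list[str]) -> str:
    result = subprocess.run(command, capture_output=True, text=True)
    return result.stdout + result.stderr

def http_get(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

TOOLS = {"read_file": read_file, "execute": execute, "http_get": http_get}

def run_agent(task: str, llm_complete) -> str:
    """Chat AND invoke external behavior, in a loop.

    `llm_complete` is a stand-in for your model API. It is assumed to
    return JSON: either {"answer": "..."} when done, or
    {"tool": "<name>", "args": {...}} to request a tool call.
    """
    messages = [{"role": "user", "content": task}]
    while True:
        reply = json.loads(llm_complete(messages))
        if "answer" in reply:
            return reply["answer"]        # model decided it's done
        tool = TOOLS[reply["tool"]]
        output = tool(**reply["args"])    # invoke external behavior
        # Feed the result back so the model can correct itself,
        # rather than a human doing the correcting.
        messages.append({"role": "tool", "content": output})
```

Real agents add sandboxing, permissions, and richer protocols on top, but this loop is the core pattern that separates them from chatbots.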
Step 2: Reproduce Your Own Work
This is where Mitchell's approach gets interesting—and where most developers would never go:
"I literally did the work twice. I'd do the work manually, and then I'd fight an agent to produce identical results in terms of quality and function (without it being able to see my manual solution, of course)."
He describes this as "excruciating" because it slowed down actual progress. But the deliberate practice paid off. He discovered the principles that experienced AI users now consider essential:
- Break down sessions into clear, actionable tasks. Don't try to "draw the owl" in one mega session
- Split planning from execution for vague requests
- Give agents verification tools; they'll fix their own mistakes if they can detect them (see the sketch after this list)
- Know when NOT to use agents—avoiding bad use cases saves as much time as finding good ones
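The verification principle is the easiest one to turn into practice. Below is a sketch of such a tool for a Python project using pytest; the script name, flags, and output filtering are my own illustration rather than anything from Mitchell's setup. The idea is to give the agent a cheap command whose output it can act on.

```python
#!/usr/bin/env python3
"""verify.py - a verification tool an agent can invoke on its own.

Runs the test suite and prints a compact failure summary instead of
full pytest output, so the agent gets a signal it can act on without
drowning in logs. Illustrative; adapt to your project's test runner.
"""
import subprocess
import sys

result = subprocess.run(
    # -q quiets pytest; --tb=line collapses tracebacks to one line each.
    ["pytest", "-q", "--tb=line", *sys.argv[1:]],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("OK: all tests passed")
else:
    # Crude filter: surface only lines that look like failures.
    for line in result.stdout.splitlines():
        if "FAILED" in line or "Error" in line or ".py:" in line:
            print(line)
    sys.exit(1)
```

Pointing the agent at a tool like this in your instructions turns "the tests fail" into a loop the agent can close on its own.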
By the end of this phase, Mitchell was using agents at "no slower than doing it myself" speed—though still not faster.
Step 3: End-of-Day Agents
The efficiency gains started when Mitchell shifted his thinking:
"Instead of trying to do more in the time I have, try to do more in the time I don't have."
His pattern: Block out the last 30 minutes of every workday to kick off one or more agents. The tasks that worked well:
- Deep research sessions - surveying libraries, comparing options, producing detailed summaries
- Parallel experimentation - multiple agents exploring vague ideas to illuminate unknown unknowns
- Issue and PR triage - using the GitHub CLI to generate reports (not to respond), guiding next-day priorities
The result? A "warm start" each morning that got him working faster than cold-starting on tasks.
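One way to wire this up, sketched under two assumptions: your agent CLI has a non-interactive mode (Claude Code's `claude -p "<prompt>"` is one real example), and plain files are an acceptable place for reports. The prompts here are illustrative, not Mitchell's.

```python
#!/usr/bin/env python3
"""eod.py - launch end-of-day agents and capture their reports.

Assumes an agent CLI with a non-interactive mode; `claude -p` (Claude
Code's print mode) is used as a concrete example. Swap in your tool.
"""
import subprocess
from datetime import date
from pathlib import Path

TASKS = {
    "triage": "Use the `gh` CLI to survey open issues and PRs, then write "
              "a prioritized triage report. Generate the report only; do "
              "not reply to anything.",
    "research": "Survey the available options for <library choice>, compare "
                "them, and produce a detailed summary with a recommendation.",
}

outdir = Path("reports") / str(date.today())
outdir.mkdir(parents=True, exist_ok=True)

for name, prompt in TASKS.items():
    report = (outdir / f"{name}.md").open("w")
    # Fire and forget: each agent runs in the background while you log off.
    subprocess.Popen(["claude", "-p", prompt],
                     stdout=report, stderr=subprocess.STDOUT)

print(f"Agents launched; reports will be in {outdir}/ by morning.")
```

Run it as the last act of the day (or from a scheduler), and the triage and research reports are the warm start waiting for you the next morning.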
Step 4: Outsource the Slam Dunks
With enough experience, Mitchell developed high confidence in predicting which tasks agents would nail. His next evolution:
"Let agents do all of that work while I worked on other tasks."
Critical insight: Turn off agent desktop notifications.
"Context switching is very expensive. It was my job as a human to be in control of when I interrupt the agent, not the other way around."
This approach also addresses the "skill atrophy" concern from Anthropic's research on AI-assisted coding. Mitchell notes: "You're trading off: not forming skills for the tasks you're delegating to the agent while continuing to form skills naturally in the tasks you continue to work on manually."
Step 5: Engineer the Harness
Mitchell introduces a concept he calls "harness engineering":
"Anytime you find an agent makes a mistake, you take the time to engineer a solution such that the agent never makes that mistake again."
This comes in two forms:
| Type | Description | Example |
|---|---|---|
| Implicit Prompting | AGENTS.md files with project-specific instructions | Ghostty's AGENTS.md addresses every repeated agent mistake |
| Programmed Tools | Scripts agents can invoke | Screenshot tools, filtered test runners |
Mitchell maintains an AGENTS.md file in Ghostty that exemplifies this approach—each line represents a past agent failure that's now prevented.
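For readers who have not seen one, here is the flavor of such a file. This excerpt is a hypothetical illustration of the pattern, not a quote from Ghostty's actual AGENTS.md; the point is that every line encodes a mistake an agent once made.

```markdown
# AGENTS.md (illustrative excerpt)

- Run `./verify.py` after every change; a task is not done while it fails.
- Never edit generated files under `build/`; change the generator instead.
- The toolchain version is pinned; do not "upgrade" it to fix a build error.
- Use the project's logging helper instead of printing to stdout.
```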
Step 6: Always Have an Agent Running
Mitchell's current operating principle:
"If an agent isn't running, I ask myself 'is there something an agent could be doing for me right now?'"
He combines this with slower, more thoughtful models like Amp's "deep mode" (GPT-5.2-Codex), which can take 30+ minutes but produce high-quality results.
He currently estimates that agents run in the background for 10-20% of his working day. Importantly:
"I don't want to run agents for the sake of running agents. I only want to run them when there is a task I think would be truly helpful to me."
Lessons from Zed's Agentic Engineering Session
Mitchell recently demonstrated his workflow live with Zed's Richard Feldman, walking through an actual Ghostty commit. Key insights from that session:
On Architectural Control
"My approach is that I'm more or less the architect of the software project. I still like to come up with the code structure, the expected data flow through the app, where state lives. I give tooling that guidance."
On Prompting Like Coaching Juniors
"The best way I found to coach juniors is to give them a well-scoped problem with a lot of guardrails...it's sort of like bowling with bumpers."
On Working in Parallel
"A benefit of these AI-assisted tools is you don't need to respond right away. When Claude posts this notification that it wants your attention, I don't need to go to it. If I'm heavily into something, I'll keep doing it."
On Model Selection
"Sometimes I actually have multiple checkouts—ghostty, ghostty2, ghostty3, ghostty4. I will run different models and different agents on the different code bases on the same task with the same prompt. It's a competition."
Mitchell's Current Limitations with AI
Despite his success, Mitchell remains grounded about what AI can't do well:
- Zig language - "Anything more than trivial changes to Zig code bases is still hopeless." His workaround: have agents rewrite solutions in another language, then manually convert
- Architectural problems - AI struggles with high-level system design
- High-performance data structures - "It doesn't understand the data structure in the context of what you're trying to achieve"
- Senior-quality thinking - "Most of the work I do right now with LLMs is just getting it to more of a senior quality point of view"
How This Compares to OpenClaw
Mitchell's workflow heavily relies on Claude Code and similar terminal-based agents. For developers looking to adopt similar patterns, OpenClaw offers complementary capabilities:
| Mitchell's Tool | OpenClaw Equivalent | Advantage |
|---|---|---|
| Claude Code | Built-in Claude integration | Same power, more integrations (browser, Discord, etc.) |
| AGENTS.md files | AGENTS.md + SOUL.md + memory system | Persistent context across sessions |
| Manual agent monitoring | Discord notifications | Check progress from anywhere |
| Multiple checkouts | Subagent spawning | Parallel agents in single environment |
| End-of-day agents | Scheduled tasks + heartbeat system | Automated scheduling, persistent execution |
The key alignment: Mitchell's "harness engineering" philosophy matches OpenClaw's approach of encoding context into AGENTS.md and memory files that persist across sessions.
Key Takeaways for Your AI Journey
- Ditch chatbots for agents when doing real coding work
- Force yourself through the learning curve - reproduce your manual work with agents to build expertise
- Leverage dead time - end-of-day and overnight agent tasks create "warm starts"
- Disable notifications - you control when to context switch, not the agent
- Invest in harness engineering - every agent mistake should become a prevention mechanism
- Know the limits - use agents for what they're good at, preserve your skills for what they're not
- Stay grounded - Mitchell's not trying to convince anyone. He's sharing what works for him
The Bottom Line
Mitchell Hashimoto's AI adoption journey is notable not for its enthusiasm but for its pragmatism. He started skeptical, forced himself through painful learning phases, and emerged with a workflow that makes him genuinely more productive.
"I really don't care one way or the other if AI is here to stay. I'm a software craftsman that just wants to build stuff for the love of the game."
That attitude—tool-agnostic pragmatism focused on craft—might be the most valuable lesson of all. Whether you're using Claude Code, OpenClaw, or any other AI tool, the principles remain the same: invest in learning, engineer your harness, and stay focused on building great software.
For more on AI-assisted development workflows, check out our guides on Claude Code and writing effective AGENTS.md files.