Master this workflow from the creator of Claude Code

As AI-generated code becomes the norm, the real advantage shifts to developers who know how to direct it. I tested the exact workflow used by the creator of Claude Code, and the results debunk the "replace the developer" narrative while revealing the specific orchestration skills you need to survive the industry shift coming in 2026.
12 mins read
Jan 12, 2026

In 387 BC, Plato founded the Academy in Athens, establishing the Western world’s first institution of higher learning. Every few centuries, we have encountered similar inflection points: the printing press democratized knowledge, the Industrial Revolution reorganized labor on a large scale, and the internet connected minds across continents. Each shift felt disruptive in its era, and each required humans to evolve not just their tools, but their fundamental relationship with those tools.

I believe coding is at one of those inflection points right now.

When I started Educative with my brother Naeem in 2015, following nearly a decade at Microsoft and Facebook, we had one mission: to help developers master new skills as quickly as the industry changes. However, the current transition exceeds previous changes in both speed and scope.

AI coding tools have crossed a critical threshold. Today, 85 percent of developers use AI tools regularly and 41 percent of all code is AI-generated. I believe 2026 is shaping up to be the year that separates developers who master these tools from those who do not. I do not say this to alarm you; I say it because I have seen what is possible, and I recognize the confusion that exists regarding what the future holds.

What is the “no humans in the code” vision?#

A few weeks ago, I noticed a particular vision gaining traction within developer communities:

  • Codebase purity is no longer realistic: That artifact, once guarded by senior engineers, is now irrelevant.

  • AI patches until it can’t: Companies deploy agentic AI to fix messy code until the cost in compute time and tokens outweighs the benefit.

  • Then regeneration becomes the more economical option: Instead of debugging, teams rewrite entire modules from scratch using the original specification and passing unit tests.

  • Humans exit the code: We become spec writers and test designers. Implementation is the job of the AI, which produces code that is disposable, regenerable, and never precious.

I’ve seen this in LinkedIn DMs, founder Slack groups, and conversations with engineering leaders. It is a compelling argument. If AI writes code faster than humans can debug it, why bother with maintenance?

But I have watched paradigm shifts before. The cloud was supposed to eliminate operational roles, yet DevOps has become one of the most in-demand skills in the industry. Web3 was supposed to make backend engineers obsolete, but the engineers who thrived were those who understood both the hype and the fundamentals.

I don’t dismiss takes like this. Instead, I try to understand what is true, what is overstated, and what it means for developers. And I have some concerns.

Why am I skeptical of “just rewrite everything?”#

The idea that you can cleanly identify a “non-fixable piece” of software and simply regenerate it from a specification assumes that bugs exist in isolation. They do not.

During my time at Facebook, I saw bugs that were deeply entangled with authentication logic, payment flows, compliance requirements, and edge cases that took years to discover. The institutional knowledge encoded in those code paths cannot be recreated from a specification document because it was never documented. That knowledge was accumulated through production challenges, user complaints at 2 a.m., and security audits that caught vulnerabilities no one had anticipated.

And here is another question: will companies really delete repositories that have survived security audits, regulatory scrutiny, and years of user feedback in favor of something vibe-coded in two hours?

I’ve talked to CTOs who are genuinely excited about AI. However, none of them plan to discard codebases that work. They are looking for ways to enhance their existing systems, not replace them on a whim. The cost of being wrong, whether in downtime, security breaches, or lost customer trust, is simply too high.

So I asked myself, “What does the future look like?” How do the people building these AI tools use them?

And then I saw Boris Cherny’s thread.

So how do AI tool creators actually use their tools?#

Boris Cherny is the creator of Claude Code, Anthropic’s agentic AI coding assistant that has rapidly gained traction in the developer community. A few days ago, he posted a thread on X explaining exactly how he uses his own tool, and it went viral almost immediately.

Developers responded with high praise, suggesting that ignoring the Claude Code best practices shared by its creator would put any programmer at a disadvantage.

What caught my attention, however, was that Boris describes his setup as “surprisingly vanilla.” He does not use extensive customization, claiming that Claude Code performs exceptionally well right out of the box.

Yet his output over the last 30 days tells a more complex story:

  • 259 pull requests landed

  • 497 commits pushed

  • 40,000 lines added and 38,000 lines removed

  • Every single line written by Claude Code and Opus 4.5

This paradox is precisely why the thread gained widespread attention. Boris is someone who could theoretically automate himself out of the loop entirely. He built the tool, which gives him full access and operational capabilities. If anyone could achieve a “no human in the code” workflow, it is him.

But that is not what his process reveals. Instead, his workflow is a masterclass in structured human oversight. He is not operating this way despite using AI at scale, but specifically because of it.

I do not usually follow viral threads, but this one felt different. It was the creator revealing his setup, including the imperfections. So I did what I always do when I come across something compelling: I tried it myself.

What works from Boris’s workflow?#

Let me break down what Boris does and what I learned from testing it.

The parallel agent strategy#

Boris runs five Claude instances in numbered terminal tabs. He relies on system notifications to alert him whenever an agent requires input. Beyond those terminal sessions, he operates another five to ten Claude instances through the browser at claude.ai/code, supplemented by additional sessions launched from his phone throughout the day.

That adds up to more than 15 parallel AI agents working simultaneously.

While the sheer scale of his operation impressed me, I was even more struck by his infrastructure for oversight because:

  • He is not simply spinning up agents and walking away; he is actively orchestrating them.

  • He hands off sessions between devices by moving work across his environment.

  • He checks mobile sessions between meetings, ensuring that his coding work continues even when he leaves his desk.

I found that running three parallel sessions was my sweet spot. It provided a significant productivity boost without becoming so complex that I lost track of the threads. Boris has clearly developed the operational fluency required for this level of multitasking. Most of us are not there yet.

Always start with a smaller setup than you think you need. Scale up only once you have built the habits to manage the increased load.
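If you want to experiment with numbered parallel sessions yourself, a common way to get numbered, detachable terminals is tmux. The sketch below is my own illustration, not Boris's actual setup: it assumes tmux is installed and that `claude` is your agent's CLI command, and it defaults to a dry run that only prints the commands it would execute.

```shell
#!/usr/bin/env sh
# Sketch: launch N numbered tmux sessions, each running an agent CLI.
# Assumptions (not from the article): tmux is available and "claude"
# is the agent command. DRY_RUN=1 (the default) prints instead of runs.
SESSIONS=${SESSIONS:-3}   # start smaller than you think you need
DRY_RUN=${DRY_RUN:-1}

i=1
while [ "$i" -le "$SESSIONS" ]; do
  cmd="tmux new-session -d -s agent-$i claude"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"          # show what would run
  else
    $cmd                 # actually start the detached session
  fi
  i=$((i + 1))
done
```

Run it once with the default dry run to sanity-check the commands, then set `DRY_RUN=0` to start the sessions and attach to any of them with `tmux attach -t agent-1`.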

Plan mode changed everything#

Boris notes that most of his sessions begin in plan mode, which is accessed by hitting “Shift+Tab” twice. He iterates with Claude until he is satisfied with the strategy, then switches into auto-accept mode for execution. “A good plan is really important,” he writes.

This was an important insight. I had been jumping straight into execution and then course-correcting constantly. When I began iterating on the plan before any code was written, the quality of the initial output improved significantly. This shift alone likely saved me hours of tedious back-and-forth.

The insight here is subtle but critical: plan mode serves as a human checkpoint. It is a moment to validate the approach before committing resources to it. Boris could skip this step, but he chooses not to.

The CLAUDE.md system#

This may be the most important part of Boris’s workflow. The entire Anthropic team shares a single CLAUDE.md file, which is checked into Git. The process is simple:

  1. Claude makes a mistake.

  2. Someone notices and adds a rule to CLAUDE.md.

  3. The file gets committed with the PR.

  4. Claude reads it in the next session and avoids the mistake.

Every mistake is recorded as a permanent rule. The codebase grows and learns.
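To make the loop concrete, a team's CLAUDE.md might accumulate entries like the following. This is a hypothetical sketch, not Anthropic's actual file; the rules and paths are purely illustrative.

```markdown
# CLAUDE.md — team conventions (illustrative example)

## Rules added after observed mistakes
- Do not edit generated files under `build/`; change the source templates instead.
- Always run the linter before proposing a commit.
- Reuse the existing date-formatting helper rather than writing a new one.
```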

We began implementing this on a new project we have been working on internally at Educative, and within days, some of the recurring errors we faced simply stopped. It functions like a collective memory for your AI collaborator.

However, the key is that this requires human judgment. Someone must recognize the mistake, understand the root cause, and clearly articulate the new rule. The AI does not improve itself; humans improve the AI.

The “slower is faster” model choice#

Boris uses Opus 4.5, the largest and slowest model, for everything. He reasons that while it is slower per request and more expensive to use, it requires less steering and excels at tool use. Consequently, it is “almost always faster than using a smaller model in the end.”

This is counterintuitive, but it resonates with a principle I have always believed: the goal is not speed per task, but speed per outcome. A fast model that requires constant correction wastes your scarcest resource, which is your attention.

That said, I found that model choice was context-dependent for my needs. Opus 4.5 is incredible for complex, multi-file reasoning, but for quick tasks, such as simple refactors or one-off scripts, Sonnet 4.5 was perfectly capable and much faster. Boris’s insight is correct for his specific workflow involving complex pull requests, but you should choose your model based on the complexity of your task.

Verification is non-negotiable#

Boris describes this as “probably the most important thing to get great results”: always give Claude a way to verify its work. He has Claude validate every change using a Chrome extension that opens a browser, tests the UI, and iterates until the code functions correctly and the user experience meets expectations.

The AI is never trusted to self-assess; there is always an external feedback loop in place. This approach requires an upfront investment in browser testing, automated test suites, and CI hooks. If you are just getting started, prioritize progress over perfection: begin with simple verification, such as running the code and checking the output, and build toward more robust automation over time.
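The simplest external feedback loop is to run a check command and trust only its exit code, never the agent's self-report. Here is a minimal sketch under that assumption; `true` and `false` stand in for your real test command (pytest, npm test, and so on).

```shell
#!/usr/bin/env sh
# Minimal external verification sketch: the check command's exit code
# is the only signal we trust, not the agent's own claim of success.
verify() {
  if "$@"; then
    echo "PASS"
  else
    echo "FAIL: re-prompt the agent with the failing output"
  fi
}

# Placeholders for a real test suite: "true" always passes, "false" fails.
result_ok=$(verify true)
result_bad=$(verify false)
echo "$result_ok"
echo "$result_bad"
```

In practice you would wire `verify` to your project's actual test runner and paste the failing log back into the agent session, closing the loop the article describes.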


What’s the real lesson here?#

The real takeaway? Boris himself says, “there is no one correct way to use Claude Code.” His team proves it; each member works with the tool in a completely different manner. These aren’t rigid rules. They are patterns you can adapt to your own context.

What I discovered was not a rigid process to copy, but rather a framework to adapt to your own needs. Plan mode is universally effective, while CLAUDE.md serves as a powerful foundation for teams. Parallel sessions scale as you gain experience, but rigorous verification remains non-negotiable. The specific details, such as how many agents to deploy, which models to select, or which commands to run, will always depend on your unique context.

This approach is the complete opposite of a “set it and forget it” mentality. It represents a deliberate, structured collaboration between humans and AI.

What does this mean for 2026?#

Let me bring this back to the core vision I started with: the “no humans in the code” future.

Boris’s workflow serves as evidence against that thesis. Here is someone with maximum AI capability who has built extensive systems specifically for human oversight:

  • Plan mode checkpoints: Validating the approach before execution

  • CLAUDE.md files: Compounding team knowledge with every mistake

  • Verification loops: Seeking external feedback before anything ships

  • Numbered terminal tabs with notifications: Orchestrating 15 or more agents without losing track

He is not stepping back from the code. Instead, he is operating at a higher level of abstraction while staying deeply in the loop.

The data backs this up. A study by METR found that experienced open-source developers using AI tools on their own codebases were 19 percent slower with AI assistance. This does not happen because AI is ineffective; it happens because naive AI adoption without proper workflows creates friction.

Boris’s setup demonstrates what proper workflows look like. That is a learnable skill.

Why is 2026 the make-or-break year?#

I’ve been asked a lot lately: “What’s your plan for staying relevant in 2026?” Here is my honest answer: master these tools now, or spend the next decade catching up.

The difference between developers who understand AI workflows and those who do not is widening every month. The question is no longer whether AI will write code; it already does. Instead, the question is whether you will be the one directing that process or the one replaced by someone who can.

When I founded Educative, I believed the future belonged to developers who could learn fast. That remains true, but the definition of “learning fast” has evolved. It now means:

  • Learning how to orchestrate AI

  • Learning how to build compounding knowledge systems

  • Learning how to design verification loops

  • Learning how to maintain the human judgment that no AI can replace

The real vision I started with gets one thing right: the idea of the “purity of the codebase” as a fixed artifact is likely over. However, the replacement is not disposable code and infinite rewrites. The future is code that learns, adapts, and accumulates intelligence, with humans remaining the irreplaceable layer that makes it all work.

How can you learn Claude Code the right way?#

Boris’s thread is a masterclass, but it’s also just 15 tweets. If you want to go deeper, we’ve built Claude Code: Workflows and Tools on Educative.

Claude Code: Workflows and Tools

Claude Code is Anthropic’s AI coding assistant, streamlining development with natural conversations, automation, and integrations. This course begins with the essentials: installation, setup, and the foundations of conversation-driven development. Learners learn to manage context, guide interactions, and work with Claude as a coding partner. They then explore advanced features like custom commands, sub-agents, and hooks, and see how to automate tasks, secure workflows, and extend Claude Code with SDK integrations. By structuring conversations and using Claude’s orchestration, they can achieve clarity and efficiency across complex projects. Finally, they focus on integrations, connecting Claude Code with MCP servers and GitHub for seamless collaboration and version control. The course concludes with best practices, preparing learners to apply Claude Code in real environments and unlock AI-powered workflows that boost productivity, security, and team efficiency.

4hrs
Beginner
13 Playgrounds
32 Illustrations

It covers everything: context management, custom slash commands, subagents, hooks, MCP integrations, GitHub workflows, and the best practices that separate amateurs from professionals. The course is hands-on and interactive, built by developers for developers.

This isn’t about learning to copy-paste from an AI. It’s about building the orchestration skills and oversight systems that will define professional development for the next decade.

What’s next?#

2026 is already here. The central question for the industry is not whether artificial intelligence will redefine software development, but whether you will be prepared as this shift accelerates. The developers who thrive in this new period of change will not be those who distance themselves from the technology. Instead, the most successful engineers will be those who learn to remain deeply integrated in the development loop while meaningfully increasing their individual impact. The time to start is now.


Written By:
Fahim ul Haq