
You Should README.md

I realized today that I am now too lazy to $cat a README.md file.
I enjoy certain tactile and manual experiences, like taking portraits or wildlife photos (pictured above). But, when it comes to the jungle that is my code repository, I would rather let Claude Code sort it out. Even if it's just extracting information from a 50-line readme file. This got me thinking… Why even bother writing things for human consumption anymore, when so much of what we produce is fed directly into LLMs?
Claude Code is also guilty of writing ‘useful tips’ for humans. It assumes that I want to know how to solve an obscure bug by auto-documenting it in a readme file. The thing is, I am offloading 99% of programming and implementation logic to Claude. The last thing I want to do is open a document to read how to address an obscure bug that I faced a few context sessions ago. Next time I open this project will be with a coding agent. Not some old IDE, Vim, or Notepad++.
What I really want is for Claude to remember project nuances every time, even if I close all the terminals and browser windows and come back 3 months later with a mean attitude. And since I can’t be bothered to use Vim anymore, I want Claude to just be smart enough to remember how to solve problems, without charging me for an enormous context window. Doesn’t it just feel jarring when Claude forgets which AWS profile to use after spending 10 minutes on a compaction process? How could it throw away something I’ve been asking it to run over and over again?
That brings me to CLAUDE.md. Why Claude doesn’t automatically maintain (r+w) this file with its superb reasoning capabilities is beyond me. In theory, it should do a perfect job of maintaining important footnotes about my project. Sure, it maintains a MEMORY.md, but that’s user-specific, with potentially sensitive information that doesn’t get committed to repositories. Anecdotally, I’ve also heard that the extent to which Claude Code consults CLAUDE.md or MEMORY.md can be inconsistent; it sometimes ignores explicit instructions in those files.
The real problem isn't that Claude Code has a bad memory; it's that every coding agent today is designed around a human-in-the-loop. It writes READMEs so you can read them. It adds inline comments so you can understand the logic. It documents bugs so you can look them up later. Every artifact it produces assumes a human is on the receiving end.
But that assumption is already feeling outdated. My next interaction with a project isn't going to be me skimming a README to better understand the architecture. It's going to be me opening a new Claude Code session and saying "pick up where we left off." The agent is my primary producer, consumer, and executor. I’m just a person with lots of wonky ideas and irrational expectations.
Until agents are authoring context for other agents rather than for human readers, the workarounds are going to feel like duct tape. You can manually write your CLAUDE.md, front-load context at the start of every session, and guide the model to refer to specific documentation. You can add SKILL.md to define when and how to run specific tasks. These methods incrementally improve the experience.
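Manually maintaining CLAUDE.md is the most common of these workarounds. A minimal sketch of what one might look like; the section names, paths, and profile name here are illustrative, not a prescribed schema:

```markdown
# CLAUDE.md

## Project context
- Monorepo: backend services in `services/`, infrastructure in `terraform/`.

## Things to remember
- Deploys use the AWS profile `staging-admin` (hypothetical name), not the default.
- Run tests with `make test`; never run the e2e suite locally.

## Known gotchas
- The flaky auth test fails on its first run; re-run it before debugging.
```

Anything written here is read at the start of every session and survives compaction, which is exactly the persistence the workarounds are reaching for.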
But the shift that actually matters is agents treating persistent, machine-readable context as a first-class output, not a side effect of being helpful to humans. We're not there yet. But the fact that I'm too lazy to $cat a README file is, I'd argue, a small indicator of where this is all going.
My takeaway is that we all use coding agents in our own unique ways. We are all learning their quirks together. And we all have our fascinations and frustrations with them.
But the trajectory is clear: our expectations for what these agents can do are growing fast. A year ago I couldn't have imagined the workflows I'm running today. A year from now, CLAUDE.md will probably read me instead.
We hear similar stories from the SREs and DevOps engineers we work with. They are pushing coding agents in so many creative ways. They are breaking existing workflows and testing the boundaries of what is possible with the current state of AI. It has inspired us to innovate on what the next generation of AI SRE tooling looks like.
Stay tuned for some exciting news in the coming weeks.
And thanks for taking the time out of your day to README.md
Written by Andrew Lee, Technical Marketing Engineer