481 points by kachapopopow 8 hours ago | 60 comments
logicprog 6 hours ago
I really enjoyed this article. I think the author is precisely right and I've been saying this for a long time. There's a ton of extremely interesting low hanging fruit that can vastly improve the effectiveness of even currently existing models hiding in how we design our agent harnesses; enough to — at least until we hit diminishing returns — make as much or more of a difference than training new models!

I think one of the things that this confirms, for me at least, is that it's better to think of "the AI" as not just the LLM itself, but the whole cybernetic system of feedback loops joining the LLM and its harness. Because, if the harness can make as much if not more of a difference, when improved, as improvements to the model itself, then they have to be really considered equally important. Not to mention the fact that models are specifically reinforcement learned to use harnesses and harnesses are adapted to the needs of models in general or specific models. So they necessarily sort of develop together in a feedback loop. And then in practice, as they operate, it is a deeply intertwined feedback loop where the entity that actually performs the useful work, and which you interact with, is really the complete system of the two together.

I think thinking like this could not only unlock quantitative performance improvements like the ones discussed in this blog post, but also help us conceive of the generative AI project as actually a project of neurosymbolic AI, even if the most capital-intensive and novel aspect is a neural network; and once we begin to think like that, it unlocks a lot of new options and more holistic thinking, and might increase research in the harness area.

andai 2 hours ago
My Weird Hill is that we should be building things with GPT-4.

I can say unironically that we haven't even tapped the full potential of GPT-4. The original one, from 2023. With no reasoning, no RL, no tool calling, no structured outputs, etc. (No MCP, ye gods!) Yes, it's possible to build coding agents with it!

I say this because I did!

Forcing yourself to make things work with older models forces you to keep things simple. You don't need 50KB of prompts. You can make a coding agent with GPT-4 and half a page of prompt.

Now, why would we do this? Well, these constraints force you to think differently about the problem. Context management becomes non-optional. Semantic compression (for Python it's as simple as `grep -r def .`) becomes non-optional. Bloating the prompt with infinite detail and noise... you couldn't if you wanted to!

Well, surely none of this is relevant today? Well, it turns out all of it still is! e.g. small fix, the "grep def" (or your language's equivalent) can be trivially added as a startup hook to Claude Code, and suddenly it doesn't have to spend half your token budget poking around the codebase, because -- get this -- it can just see where everything is... (What a concept, right?)
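
To make this concrete, here's roughly the shape of the idea as a standalone sketch (Python standing in for the shell one-liner; treat it as illustrative, not the exact hook I use):

```python
import os
import re

# Sketch of the "grep def" idea: emit a one-shot map of where every Python
# function and class lives, so the agent can see the codebase layout up
# front instead of burning tokens grepping for it turn by turn.
DEF_RE = re.compile(r"^\s*(def|class)\s+(\w+)")

def code_map(root: str) -> str:
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    m = DEF_RE.match(line)
                    if m:
                        # e.g. "src/app.py:12: def handle_request"
                        entries.append(f"{path}:{lineno}: {m.group(1)} {m.group(2)}")
    return "\n".join(entries)

if __name__ == "__main__":
    # Print the map so a harness startup hook can prepend it to the context.
    print(code_map("."))
```

A startup hook that injects this output is a few hundred tokens for a medium codebase, versus the thousands the agent would spend rediscovering the same structure.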

-- We can also get into "If you let the LLM design the API then you don't need a prompt because it already knows how it should work", but... we can talk about that later ;)

jstummbillig 38 minutes ago
The problem with these exercises is always: I have limited time and capacity to do things, and a fairly unlimited number of problems that I can think of to solve. Coding is not a problem I want to solve. Prompt engineering is not a problem I want to solve.

If I do things for the love of it, the rules are different, of course. But otherwise I simply accept that there are many things improving around me that I have no intimate knowledge of and probably never will; I let other people work them out and happily lean on their work to do the next thing I care about that isn't already solved.

logicprog 1 hour ago
> Well, surely none of this is relevant today? Well, it turns out all of it still is! e.g. small fix, the "grep def" (or your language's equivalent) can be trivially added as a startup hook to Claude Code, and suddenly it doesn't have to spend half your token budget poking around the codebase, because -- get this -- it can just see where everything is... (What a concept, right?)

Hahaha yeah. This is very true. I find myself making ad hoc versions of this in static markdown files to get around it. Just another example of the kind of low hanging fruit harnesses are leaving on the table. A version of this that uses tree sitter grammars to map a codebase, and does it on every startup of an agent, would be awesome.

> My Weird Hill is that we should be building things with GPT-4.

I disagree, IMO using the best models we have is a good way to avoid wasting time, but that doesn't mean we shouldn't also be frugal and clever with our harnesses!

andai 1 hour ago
To clarify, I didn't mean we should be using ancient models in production, I meant in R&D.

Anthropic says "do the simplest thing that works." If it works with the LLMs we had 3 years ago, doesn't that make it simpler?

The newer LLMs mostly seem to work around the poor system design. (Like spawning 50 subagents on a grep-spree because you forgot to tell it where anything is...) But then you get poor design in prod!

mycall 6 hours ago
If I remember correctly, both the Claude Code and OpenAI Codex "harnesses" have improved themselves by now.

OpenAI used early versions of GPT-5.3-Codex to debug its own training process, manage its deployment and scaling, and diagnose test results and evaluation data.

The Claude Code team has shipped 22 PRs in a single day and 27 the day before, with 100% of the code in each PR generated entirely by Claude Code.

logicprog 6 hours ago
Also, yes, I'm aware that I use a lot of "it's not just X, it's Y." I promise you this comment is entirely human-written. I'm just really tired and tend to rely on more rote rhetorical tropes when I am. Believe me, I wrote like this long before LLMs were a thing.
rubenflamshep 6 hours ago
It didn’t read as AI to me :)
co_king_3 3 hours ago
No one here will accuse you of being an AI unless they're trying to dehumanize you for expressing anti-AI sentiment.
logicprog 1 hour ago
I'm sorry, but that's empirically false. E.g., a substantial proportion of the highly upvoted comments on https://news.ycombinator.com/item?id=46953491, which was one of the best articles on software engineering I've read in a long time, are accusing it of being AI for no reason.
drob518 3 hours ago
That's what all the AIs have been trained to say.
kachapopopow 6 hours ago
why the long -'s
logicprog 6 hours ago
Because I like them?
kachapopopow 5 hours ago
reminds me of that one guy complaining that everyone is calling them an AI when AI was trained on their grammar style.
ahofmann 5 hours ago
This happened to the female speaker with her voice, which I find terrifying: https://www.youtube.com/watch?v=qO0WvudbO04
soperj 5 hours ago
how do you make them?
RussianCow 4 hours ago
On macOS, Option+Shift+- and Option+- insert an em dash (—) and en dash (–), respectively. On Linux, you can hit the Compose Key and type --- (three hyphens) to get an em dash, or --. (hyphen hyphen period) for an en dash. Windows has some dumb incantation that you'll never remember.
oblio 47 minutes ago
For Windows it's just easier to make a custom keyboard layout and go to town with that: https://www.microsoft.com/en-us/download/details.aspx?id=102...
BizarroLand 2 hours ago
Alt+0151 or WIN+SHIFT+-, but I can't seem to make the WIN+SHIFT+- combo work in browser, only in a text editor.
noupdates 3 hours ago
I was just looking at the SWE-bench docs, and it seems like they use an almost arbitrary form of context engineering (loading in some arbitrary set of files to saturate context). So in a way, the bench suites test how good a model is with little to no context engineering (I know ... it doesn't need to be said). We may not actually know which models are sensitive to good context engineering; we're simply assuming all models are. I absolutely agree with you on one thing: there is definitely a ton of low hanging fruit.
barrenko 6 hours ago
2026 is the year of the harness.
cyanydeez 16 minutes ago
2027 is the year of "maybe indeterminism isn't as valuable as we thought"
visarga 5 hours ago
I already made a harness for Claude to make R/W plans, not write-once like they are usually implemented. They can modify themselves as Claude works through the task at hand. It also relies on a collection of patterns for writing coding task plans, which evolves by reflection. Everything is designed so I could run Claude in yolo mode in a sandbox for long stretches of time.
porker 45 minutes ago
Link?
ex-aws-dude 2 hours ago
As a VC in 2026 I'm going to be asking every company "but what's your harness strategy?"
miohtama 5 hours ago
But will harness build desktop Linux for us?
vidarh 1 hour ago
My harness is improving my Linux desktop...
riskable 4 hours ago
Only if you put bells on it and sing Jingle Bells while it em dashes through the snow.
aeon_ai 6 hours ago
Once you begin to see the “model” as only part of the stack, you begin to realize that you can draw the line of the system to include the user as well.

That’s when the future really starts hitting you.

renato_shira 2 hours ago
yeah this clicked for me when i stopped obsessing over which model to use and focused on how i structure the context and feedback loops around it. for my project the same model went from "barely usable" to "legitimately helpful" just by changing how i fed it context and how i validated its output.

the user inclusion part is real too. the best results i get aren't from fully autonomous agents, they're from tight human-in-the-loop cycles where i'm steering in real time. the model does the heavy lifting, i do the architectural decisions and error correction. feels more like pair programming than automation.

logicprog 2 hours ago
> the user inclusion part is real too. the best results i get aren't from fully autonomous agents, they're from tight human-in-the-loop cycles where i'm steering in real time. the model does the heavy lifting, i do the architectural decisions and error correction. feels more like pair programming than automation.

Precisely. This is why I use Zed and the Zed Agent. It's near-unparalleled for live, mind-meld pair programming with an agent, thanks to CRDTs, DeltaDB, etc. I can elaborate if anyone is interested.

ambicapter 1 hour ago
I am interested.
rahabash 1 hour ago
plz do
logicprog 1 hour ago
The special (or at least new to me) things about Zed (when you use it with the built-in agent, instead of one of the ones available through ACP) basically boil down to the fact that it's a hyper advanced CRDT-based collaborative editor, that's meant for live pair programming in the same file, so it can just treat agents like another collaborator.

1. the diffs from the agent just show up in the regular file you were editing, you're not forced to use a special completion model, or view the changes in a special temporary staging mode or different window.

2. you can continue to edit the exact same source code without accepting or rejecting the changes, even in the same places, and nothing breaks — the diffs still look right, and doing an accept or reject Just Works afterwards.

3. you can accept or reject changes piecemeal, and the model doesn't get confused by this at all and have to go "oh wait, the file was/wasn't changed, let me re-read..." or whatever.

4. Even though you haven't accepted the changes, the model can continue to make new ones, since they're stored as branches in the CRDT, so you can have it iterate on its suggestions before you accept them, without forcing it to start completely over either (it sees the file as if its changes were accepted)

5. Moreover, the actual files on disk are in the state it suggests, meaning you can compile, fuzz, test, run, etc. to see what its proposed changes do before accepting them

6. you can click a follow button and see which files it has open, where it's looking in them, and watch as it edits the text, like you're following a dude in Dwarf Fortress. This means you can very quickly know what it's working on and when, correct it, or hop in to work on the same file it is.

7. It can actually go back and edit the same place multiple times as part of a thinking chain, or even as part of the same edit, which has some pretty cool implications for final code-quality, because of the fact that it can iterate on its suggestion before you accept it, as well as point (9) below

8. It streams its code diffs, instead of hanging and then producing them as a single gigantic tool call. Seeing it edit the text live, instead of having to wait for a final complete diff that you either accept or reject, is a huge boon for iteration time compared to e.g. Claude Code, because you can stop and correct it midway, and also read as it goes, so you're more in lockstep with what's happening.

9. Crucially, because the text it's suggesting is actually in the buffer at all times, you can see LSP, tree-sitter, and linter feedback, all inline and live as it writes code; and as soon as it's done an edit, it can see those diagnostics too — so it can actually iterate on what it's doing with feedback before you accept anything, while it is in the process of doing a series of changes, instead of you having to accept the whole diff to see what the LSP says

logicprog 6 hours ago
Aha! A true cybernetics enthusiast. I didn't say that because I didn't want to scare people off ;)
drob518 3 hours ago
That's next-year's problem.
fazgha 6 hours ago
So deep, your comment. Asking for a friend: how did you manage to get the em dash — on your keyboard?
throwup238 6 hours ago
Does your friend have an iPhone? The default iOS keyboard has automatically converted double hyphens into an em dash for at least seven years now.
QuercusMax 2 hours ago
I think Google Docs does this too, which drives me up the wall when I'm trying to write `command --foo=bar` and it turns it into an em dash, which obviously doesn't work.
ahofmann 6 hours ago
Em dashes are used often by LLMs because humans use them often. On Mac keyboards they're easily typed. I know this is oversimplifying the situation, but I don't see the usefulness of the constant witch-hunting for allegedly LLM-generated text. For text, we are long past the point where we can differentiate between human-generated and machine-generated. We're even at the point where it's getting somewhat hard to identify machine-generated audio and visuals.
StilesCrisis 4 hours ago
I might not be able to spot ALL AI generated text, but I can definitely spot some. It's still kind of quirky.
vardalab 3 hours ago
Yeah, I agree with you. I'm so tired of people complaining about AI-generated text without focusing on the content. Just don't read it if you don't like it. It's another level of when people complain how a website is not readable for them or some CSS rendering is wrong or whatever. How does it add to the discussion?
ink 6 hours ago
On a Mac, it's alt-dash in case you weren't being facetious
snazz 6 hours ago
Extra pedantic: that’s the en dash, the em dash is option-shift-hyphen
macintux 6 hours ago
Technically option-shift-dash. option-dash is an en-dash.
vient 1 hour ago
On Windows it is Alt+0151. Harder to use than on Mac but definitely possible, I frequently use it.

On recent versions Shift+Win+- also work, and Win+- produces en dash.

wiredfool 1 hour ago
I just type -- and jira fixes it.
dolebirchwood 3 hours ago
I really despise that people like you ruined em dashes for the rest of us who have enjoyed using them.
bitwize 6 hours ago
I use Compose - - - on Linux and my cellphone (Unexpected Keyboard). Mac is Alt-_.
woah 3 hours ago
Seems like a very cool technique, but also very oversold. He's seeing a 5% improvement on a find and replace benchmark of his own devising and saying stuff like this in the blog post:

> Here is why that is backwards. I just showed that a different edit format improves their own models by 5 to 14 points while cutting output tokens by ~20%. That’s not a threat. It’s free R&D.

He makes it sound like he got a 5-14% boost on a top-level benchmark, not a 5% improvement on a narrow find-and-replace metric. Anecdotally, I don't usually have a lot of issues with editing in Claude Code or Cursor, and if there is an issue, the model corrects it.

Assuming that it costs double the tokens when it has to correct itself, and find and replace errors are as prominent in actual day to day use as his benchmark, we're talking a 5% efficiency gain in editing token use (not reasoning or tool use). Given that editing must be less than 1/3 of the token use (I assume much less?), we're talking an overall efficiency gain of less than 1%.

This seems like a promising technique, but maybe not a high priority among efficiency gains for these tools. The messianic tone (like assuming that Google cut off his access to suppress his genius editing technique, rather than just because he was hammering their API) also leaves a bad taste, along with the rampant and blatant ChatGPTisms in the blog post.

andai 2 hours ago
The benchmarks seem to indicate 25-50% reduction in tokens. I'm not sure how that works in real world usage though.
athrowaway3z 2 hours ago
> “replace line 2:f1, replace range 1:a3 through 3:0e, insert after 3:0e.”

Not sure what they're calculating, but this seems to me like it could be many times more efficient than 20%.
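
For concreteness, here's my guess at how that addressing scheme could work (a Python sketch of the general idea, not the author's actual implementation; `line_tag` and `replace_line` are names I made up):

```python
import hashlib

# Each line is addressed as "<lineno>:<2-hex-char hash of its content>",
# matching the "2:f1" / "3:0e" style quoted above. The short hash makes an
# edit self-verifying: if the file changed since the model read it, the
# hash no longer matches and the edit is rejected instead of landing on
# the wrong line.

def line_tag(lineno: int, text: str) -> str:
    digest = hashlib.sha1(text.encode("utf-8")).hexdigest()[:2]
    return f"{lineno}:{digest}"

def annotate(lines: list[str]) -> list[str]:
    # What the model would see when reading the file.
    return [f"{line_tag(i, t)} {t}" for i, t in enumerate(lines, start=1)]

def replace_line(lines: list[str], addr: str, new_text: str) -> list[str]:
    lineno_s, _digest = addr.split(":")
    lineno = int(lineno_s)
    if line_tag(lineno, lines[lineno - 1]) != addr:
        raise ValueError(f"stale address {addr}: file changed since it was read")
    out = list(lines)
    out[lineno - 1] = new_text
    return out
```

Versus a search/replace block, the model only emits the address plus the new text, never the old text, which is where the large token savings would come from.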

keeda 1 hour ago
This makes sense to me because I've been having very accurate results with models from even 2+ years ago... but I had to "hold them right." Even when reasoning models and coding agents were just a gleam in Altman's and Amodei's eyes, I could tell a lot of the unrealized gains lay in building the right tools, harnesses and guardrails to manage the context and guide the model. (Relevant subthread as example: https://news.ycombinator.com/item?id=44171519)

But this article hints at deeper wins to be had. Consider that these models are operating on source code, which is a verbose, noisy, textual serialization of the intended syntax / semantic trees. TFA improves accuracy by retrofitting some structure onto the text. But what if models could operate directly on these underlying structures themselves?

As a data point, there are projects like OpenRewrite, which encode a ton of information, from formatting to types with globally resolved dependencies for each symbol in what they call a "Lossless Semantic Tree", so that there is ~0 ambiguity about the code. When I worked with OpenRewrite (in the era before LLMs, how quaint!) compared to other tools, it produced the best results for code transformations with the highest fidelity to the surrounding code.

Now imagine if the agent has access to such detailed information. It would not have to waste tokens figuring incidental things out like formatting. Although I haven't tested it out myself, I believe Moderne (the maintainers of OpenRewrite) when they say that agents armed with LST-based tools make extremely accurate changes.

This is essentially the same reason why the answer to "Which is better, Vim or Emacs?" is "IntelliJ."

Now consider that these models are STILL operating on text as an input and output mode! What if they were multi-modally trained on source code and docs and their syntax / semantic trees? I don't even know what this would look like, but I'd bet this would produce the most accurate coding models ever -- probably neurosymbolic in the truest sense.

chrisweekly 7 hours ago
Great post. A few choice quotes:

> Often the model isn’t flaky at understanding the task. It’s flaky at expressing itself. You’re blaming the pilot for the landing gear.

> The model is the moat. The harness is the bridge. Burning bridges just means fewer people bother to cross. Treating harnesses as solved, or even inconsequential, is very short-sighted.

> The gap between “cool demo” and “reliable tool” isn’t model magic. It’s careful, rather boring, empirical engineering at the tool boundary.

brendanmc6 7 hours ago
You’re absolutely right! This isn’t your average engineering advice— it’s like painting the reader a vivid tapestry of the author’s mind.
esafak 7 hours ago
Please stop; I just can't any more! Yes, I'm absolutely right.
cevn 4 hours ago
You're absolutely right about being absolutely right!
dimgl 5 hours ago
My personal favorite: That’s not a threat. It’s free R&D.
matheist 4 hours ago
> Codex uses apply_patch: It takes a string as input, which is essentially an OpenAI-flavored diff, and instead of relying on a structured schema, the harness just expects this blob to follow a strict set of rules. Since OpenAI folks are without a doubt smart, I’m sure the token selection process is biased to fit this structure at the LLM gateway for the Codex variants of GPT, similar to how other constraints like JSON schemas or required tool calls work.

Codex does in fact use a schema for constrained sampling, it's here: https://github.com/openai/codex/blob/main/codex-rs/core/src/...

It still has to produce an exact match, or at least I didn't read the code to see whether any fuzzy matching is used.

Note that the two Codex models were the only ones doing worse with the author's proposed format. The author found them doing better with replace than with apply_patch, but since the author appears to be unaware that they use a schema for constrained sampling, I think a more realistic benchmark would enable constrained sampling for the apply_patch test.

socketcluster 22 minutes ago
Seeing all these 'coding' benchmarks reminds me that people still don't understand what coding means in practice. People still think one-phase puzzle-solving is coding. Real coding almost always has multiple phases which build on top of one another. There is an architectural component which is missed here - and the sheer number of phases/layers is actually where most of the complexity comes from.
cyanydeez 13 minutes ago
Usually what I need an LLM to do is find me an elegant algorithm for a problem I've encountered, where I know there's an elegant algorithm but I've got no idea what it's called or how to Google for it.
clx75 6 hours ago
During my first LLM experiments in Emacs using gptel, I also found that the LLM has considerable difficulties changing source code files with the Unix patch tool.

As Emacs has a built-in tree-sitter package, I implemented this same idea. I created gptel tools like tree_sitter_list_nodes, tree_sitter_get_nodes, tree_sitter_update_nodes, tree_sitter_insert_before_node and tree_sitter_insert_after_node. The "list" tool returns a list of AST nodes with first line number, first line content and node hash. The LLM can then use "get" to collect interesting nodes in their entirety and "update" to update a list of nodes identified by hash with new content (var/function bodies).

Worked like a charm.
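
For anyone outside Emacs, here's the same idea sketched in Python, with the stdlib `ast` module standing in for tree-sitter (the function names are mine for illustration, not my actual gptel tools):

```python
import ast
import hashlib

# "list" returns each top-level def/class with its first line number, first
# line content, and a content hash; "update" swaps out a node's source by
# hash, so the model never has to reproduce line numbers or old text.

def _node_source(src_lines, node):
    return "\n".join(src_lines[node.lineno - 1 : node.end_lineno])

def list_nodes(source: str):
    src_lines = source.splitlines()
    entries = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            digest = hashlib.sha1(_node_source(src_lines, node).encode()).hexdigest()[:8]
            entries.append((node.lineno, src_lines[node.lineno - 1], digest))
    return entries

def update_node(source: str, digest: str, new_body: str) -> str:
    src_lines = source.splitlines()
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if hashlib.sha1(_node_source(src_lines, node).encode()).hexdigest()[:8] == digest:
                out = (src_lines[: node.lineno - 1]
                       + new_body.splitlines()
                       + src_lines[node.end_lineno :])
                return "\n".join(out)
    raise KeyError(f"no node with hash {digest}")
```

The hash doubles as staleness detection: if the node was edited since the model listed it, the update fails loudly instead of clobbering the wrong code.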

badhorseman 5 hours ago
Sounds interesting. Do you have the code to share?
jahala 5 hours ago
I implemented this hash (read and edit) approach in tilth if you want to test it out.

https://github.com/jahala/tilth

It's on npm and cargo:

- cargo install tilth

- npx tilth

then tilth install claude-code/windsurf/cursor --edit

(--edit flag is needed)

I made "tilth" a few days ago, since I'm consistently trying to get the LLMs to use tools more efficiently and spend fewer tokens doing it -- original tilth post from Monday: https://news.ycombinator.com/item?id=46952321

hedgehog 4 hours ago
You might find it useful for markdown as well, especially if you add support for section-based addressing (e.g. cat or replace a section at a time). Section-based addresses are nice because they tend to be stable across versions.
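
Something like this, roughly (a hypothetical Python sketch of section addressing, not tilth's actual API):

```python
import re

# Address markdown by heading title instead of by line number: a section is
# its heading plus everything up to the next heading of equal or higher
# level. Titles tend to survive edits that shuffle line numbers around.
HEADING_RE = re.compile(r"^(#{1,6})\s+(.*)")

def get_section(md: str, title: str) -> str:
    out, level = [], None
    for line in md.splitlines():
        m = HEADING_RE.match(line)
        if m:
            if level is not None and len(m.group(1)) <= level:
                break  # reached a sibling/parent heading: section is over
            if m.group(2).strip() == title:
                level = len(m.group(1))  # start capturing at this depth
        if level is not None:
            out.append(line)
    if level is None:
        raise KeyError(title)
    return "\n".join(out)
```

A matching replace-by-section would splice new lines into the same span, giving the agent a cat/replace pair that stays stable across versions.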
jahala 3 hours ago
Great idea - Just implemented this.

(Already published on cargo, on npm in a few mins).

kachapopopow 4 hours ago
benchmarks vs grep?
jahala 4 hours ago
tilth isn’t trying to replace grep for raw text search — for that, it wraps ripgrep internally so perf is comparable. It’s about reducing round-trips and giving the agent a verified edit workflow, not faster search.

Instead of cat + grep + manual line counting, one tool call returns a structural outline of a large file, lets you drill into sections, and since this last update also returns hashline-anchored output that an edit tool can target.

kachapopopow 4 hours ago
Well yeah, that's what I mean: how much better is it versus cat + grep + manual line counting? Agents tend to perform worse with niche tools.
jahala 1 hour ago
Thank you for this question - I'm building out a benchmark now. Initial results are very promising, will update you once it's done!
joshuamoyers 1 hour ago
I think this is the right take. I'm usually aligned with most of what Anthropic is doing, but cutting off OAuth login from open harnesses was a bad move. My guess is there is some serious worry/overlap with the Cursors of the world - e.g. folks who will be competitors in the future, who are taking advantage of cheaper Opus rates / a loss leader from them while simultaneously building a competitive model (Composer).

Also, nice clever optimization here. Lots of low hanging fruit in harness land.

woeirua 8 hours ago
The harness matters far more than most people think. See this post about the CORE benchmark, where Opus's score almost doubled when they switched from their own harness to Claude Code: https://x.com/sayashk/status/1996334941832089732
theturtletalks 8 hours ago
Mario, the creator of the Pi terminal agent, has a great blog post on this[0]. He talks about how TerminalBench's highest scores come from using the Terminus 2 harness, which uses tmux under the hood.

When I was reading the Opus 4.6 launch post, they mentioned the same thing and their TerminalBench score was based on using Terminus 2 and not CC.

0. https://mariozechner.at/posts/2025-11-30-pi-coding-agent/

withinboredom 8 hours ago
Which, IMHO, should be why we should be able to change them freely or make our own. Being locked into a specific harness because you pay 20 bucks per month vs. pay-per-use ... is kinda dumb.
CuriouslyC 7 hours ago
The reason Anthropic is pushing the closed harness is that they're not confident in their ability to win on model quality long term, so they're trying to build lock-in. They can also capture some additional telemetry by owning the harness, but given the amount of data the agent loop already transmits, that borders on unethical spyware (which might be part of the reason they're afraid to open source it).

Ultimately the market is going to force them to open up and let people flex their subs.

Aurornis 7 hours ago
> Being locked into a specific harness because you pay 20 bucks per month vs. pay-per-use ... is kinda dumb.

I’ll probably get downvoted for this, but am I the only one who thinks it’s kind of wild how much anger is generated by these companies offering discounted plans for use with their tools?

At this point, there would be less anger and outrage on HN if they all just charged us the same high per-token rate and offered no discounts or flat rate plans.

senordevnyc 5 hours ago
No, you're not the only one. The outraged entitlement is pretty funny tbh. How dare they dictate that they'll only subsidize your usage if you use their software!!
chickensong 1 hour ago
I'm not outraged, but the dynamic creates a tension that prevents me from building brand loyalty.
horsawlarway 8 hours ago
This is also another place where having it change out from underneath you can drastically alter the quality of your work in unexpected ways.

Like most things - assume the "20/100/200" dollar deals that are great now are going to go down the enshittification route very rapidly.

Even if the "limits" on them stay generous, the product will start shifting to prioritize things the user doesn't want.

Tool recommendations are my immediate and near term fear - paid placement for dev tools both at the model level and the harness level seem inevitable.

---

The right route is open models and open harnesses, ideally on local hardware.

Aurornis 7 hours ago
> Like most things - assume the "20/100/200" dollar deals that are great now are going to go down the enshittification route very rapidly.

I don’t assume this at all. In fact, the opposite has been happening in my experience: I try multiple providers at the same time and the $20/month plans have only been getting better with the model improvements and changes. The current ChatGPT $20/month plan goes a very long way even when I set it to “Extra High” whereas just 6 months ago I felt like the $20/month plans from major providers were an exercise in bouncing off rate limits for anything non-trivial.

Inference costs are only going to go down from here and models will only improve. I’ve been reading these warnings about the coming demise of AI plans for 1-2 years now, but the opposite keeps happening.

disgruntledphd2 7 hours ago
> Inference costs are only going to go down from here and models will only improve. I’ve been reading these warnings about the coming demise of AI plans for 1-2 years now, but the opposite keeps happening.

This also coincides with the frontier labs raising ever larger rounds. If Anthropic IPOs (which I honestly doubt), then we may get a better sense of actual prices in the market, as it's unlikely the markets will keep letting them spend more and more money each year without a return.

TuxSH 4 hours ago
> The current ChatGPT $20/month plan goes a very long way

It sure does and Codex is great, but do you think they'll maintain the current prices after/if it eventually dominates Claude Code in terms of marketshare and mindshare?

deaux 7 hours ago
At this point subsidizing Chinese open-weights vendors by paying for them is just the right thing to do. Maybe they too might go closed-weights when they become SotA, but they're now pretty close and haven't done it.
DeathArrow 7 hours ago
I am wondering what kinds of harnesses are best for GLM, DeepSeek, Qwen, Kimi.
deaux 7 hours ago
OpenCode is great in general. At least one of them is specifically trained on CC - I think it was Qwen - so for those that should give best results.
azuanrb 6 hours ago
Claude Code works better than OpenCode for GLM models, for me.
eshaham78 7 hours ago
The harness is effectively the agent's 'body'. Swapping the brain (model) is good, but if the body (tools/environment) is locked down or inefficient, the brain can't compensate. Local execution environments that standardize the tool interface are going to be critical for avoiding that lock-in.
mehdibl 30 minutes ago
You can improve the success rate a lot by providing HELM and clear instructions in the tool description.

Over a year ago I had a lot of issues, and the description and examples were the difference between a 30-50% failure rate and 1%!

So I'm a bit surprised by the point. Maybe I'm missing it.

tosh 7 hours ago
Shows how much room for improvement there is on the harness level.

Agents waste a lot of tokens on editing, sandboxes, passing info back and forth from tool calls and subagents.

Love the pragmatic mix of content based addressing + line numbers. Beautiful.

robbomacrae 6 hours ago
Indeed. The biggest waste might be the overuse of MCP for everything. Sure, it makes the initial development easier, but then for every connection you're using a hundred-billion-parameter model to decide how to make the call, when it's usually completely unnecessary and prone to random errors. MCP is the hammer that can make literally everything look like a nail...
senordevnyc 5 hours ago
I see this ranting against MCP all the time, and I don't get it, maybe I'm missing something. I'm currently using an MCP in Cursor to give agents read-only access to my staging and prod databases, as well as BugSnag's MCP so it can look up errors that happen in those environments. It works great. What should I be using for this if not MCP?
visarga 4 hours ago
Make a CLI tool for it, of course
canadiantim 4 hours ago
agent skills, or use claude code to iteratively condense an MCP you want to use into only its most essential tools for your workflow
chasd00 7 hours ago
I haven't dug into the article, but your comment reminded me of the Claude Code Superpowers plugin. I find the plugin great but it's quite "expensive". I use the pay-as-you-go account with CC because I've just been trying it out personally, and the Superpowers plugin spends a lot of money, relative to regular CC, with all the back and forth.

With CC you can do a /cost to see how much your session cost in dollar terms, that's a good benchmark IMO for plugins, .md files for agents, and so on. Minimize the LLM cost in the way you'd minimize typical resource usage on a computer like cpu, ram, storage etc.

kachapopopow 7 hours ago
you can actually go the other way and spend more tokens to solve more complex problems (multi-agent) by letting agents work with smaller problems
XCSme 19 minutes ago
Google banning you for benchmarking is crazy, are you sure that's the cause? How would they even know you are benchmarking?
kachapopopow 7 hours ago
My personal notes (not the author): it has been way faster performance-wise, which is honestly the biggest improvement, even over correctness. I've posted https://github.com/can1357/oh-my-pi before, but it didn't seem to gain traction. It's a great little agent.
mijoharas 6 hours ago
I've just started messing around with pi, but haven't fully dug in yet. How would you compare oh-my-pi? I see it has a lot of other bells and whistles built in.

Are they portable bit by bit back to pi, or are there enough differences that they can't be? How about normal pi extensions, can they be used in omp?

Some of the stuff definitely looks interesting.

kachapopopow 6 hours ago
The differences are documented but it is mostly 1:1. I've never used normal pi, but it's a night-and-day difference compared to opencode. Don't forget omp setup python.
scotth 5 hours ago
I'm into it! This looks like an experimentation platform. OpenCode is beginning to feel like handcuffs. Let me hack!
rao-v 4 hours ago
I’d really like to see this optimized for the 50-120B parameter open source models that are local viable (gpt-oss-120b, qwen3-80b-3a etc.).

For them I think it would be optimal to provide a tag per function and trust the LLM to rewrite the function. As the article notes, full reproduction is generally more reliable than edits for short code.

I suspect the token and attention overhead from a per-line hash limits this approach for smaller models.

ianbutler 5 hours ago
It’s funny to see where we are on model improvements.

Back when I was maintaining a coding harness around the time of Claude 3.5, we tried hash prefixes, we tried line-number prefixes, we tried a lot of different approaches to making the model better at selecting edit blocks, and ultimately, at least back then, fuzzy string matching won out.

jbellis 5 hours ago
Yes, very similar results here (http://brokk.ai)

We got lines-with-anchors working fine as a replacement strategy, the problem was that when you don't make the model echo what it's replacing, it's literally dumber at writing the replacement; we lost more in test failures + retries than we gained in faster outputs.

Makes sense when you think about how powerful the "think before answering" principle is for LLMs, but it's still frustrating

indubioprorubik 1 hour ago
My guess was always that if you took the sources of the training data, meaning the authors of the "best" answers and solutions on Stack Overflow or GitHub, and reformatted the question to sound like it was written by those experts, the generated code would try to hug those sources of truth while being created.

So the challenge is actually to find a map from "problem" to "author", then from "author" to "related code", and from there to a solution.

Bolwin 5 hours ago
You forgot to mention your tool does worse for 8/16 LLMs compared to replace?

Problem is, replace has been around for so long, most LLMs are tuned for it now

animan 8 hours ago
What was the point of Claude Code or Gemini banning the OP? Why would they care about how IDEs use the underlying API?
bri3d 7 hours ago
When you buy a subscription plan, you’re buying use of the harness, not the underlying compute / tokens. Buying those on their own is way more expensive. This is probably because:

* Subscriptions are oversubscribed. They know how much an “average” Claude Code user actually consumes to perform common tasks and price accordingly. This is how almost all subscription products work.

* There is some speculation that there is cooperative optimization between the harness and backend (cache related etc).

* Subscriptions are subsidized to build market share; to some extent the harnesses are “loss leader” halo products which drive the sales of tokens, which are much more profitable.

sigmar 7 hours ago
He wasn't using the regular paid api (ie per token pricing). He was using the endpoints for their subscribed customers (ie paid per month and heavily subsidized).
infecto 7 hours ago
I assume he was using Gemini the same way as he was Claude when I make the following statement.

I don't believe it's exceptionally unique or new that companies will revoke access if you are using an unpublished API that their apps use. I don't see anything wrong with it myself. If you want, pay for normal token use on the published APIs. There is no expectation that you can use an application's APIs, even as a paid user, when they are not explicitly published for that usage.

deaux 7 hours ago
Indeed, that's why Anthropic, OpenAI and other LLM providers are known to adhere to published APIs to gather the world's data, obeying licensing and ROBOTS.txt.

It's truly disgusting.

skybrian 7 hours ago
I was under the impression that they do obey robots.txt now? There are clearly a lot of dumb agents that don’t, but didn’t think it was the major AI labs.
deaux 7 hours ago
After 3 years of pirating and scraping the entire world by doing the above, I guess they have everything that they now need or want.

So then it's better to start obeying robots.txt as a ladder pull, via a "nicely behaved" image advantage.

skybrian 7 hours ago
Obeying robots.txt (now) is still better than not obeying it, regardless of what they did before.

The alternative is to say that bugs shouldn’t be fixed because it’s a ladder pull or something. But that’s crazy. What’s the point of complaining if not to get people to fix things?

DANmode 7 hours ago
Why does Google/Facebook et al arbitrarily enforce one human per account?

It’s because they want to study you.

They want the data!

logicallee 7 hours ago
>What was the point of Claude code or Gemini banning the OP? Why would they care about how IDEs use the underlying API?

Underscores the importance of sovereign models you can run on the edge, finetune yourself, and run offline. At State of Utopia, we're working on it!

znnajdla 7 hours ago
My experience as well. People worry our profession is being reduced to "prompt engineer", but actually I get the feeling that programming will soon be mainly about designing and building harnesses for specific tasks.
ambicapter 7 hours ago
Personal opinion is that LLMs are definitely not as magical as people think they are, they fill a specific niche of problem-solving, and harnesses are necessary to corral your problem into the niche that they are extremely good at solving.
cruffle_duffle 6 hours ago
The more I dive into this space the more I think that developers will still be in heavy demand—just operating at a different level of abstraction most of the time. We will need to know our CS fundamentals, experience will still matter, juniors will still be needed. It's just that a lot of the time the actual code being generated will come from our little helper buddies. But those things still need a human in the seat to drive them.

I keep asking myself “could my friends and family be handed this and be expected to build what I’m building on them” and the answer is an immediate “absolutely not”. Could a non technical manager use these tools do build what I’m building? Absolutely not. And when I think about it, it’s for the exact same reason it’s always been… they just aren’t a developer. They just don’t “think” in the way required to effectively control a computer.

LLMs are just another way to talk to a machine. They aren't magic. All the same fundamental principles that apply to properly telling a machine what to do still apply. It's just a wildly different mechanism.

That all being said, I think these things will dramatically speed up the pace at which software eats the world. Put LLMs into a good harness and holy shit, it's like a superpower... but to get those superpowers unlocked you still have to know the basics, same as before. I think this applies to all other trades too. If you are a designer you still have to know what good design is and how to articulate it. Data scientists still need to understand the basics of their trade... these tools just give them superpowers.

Whether or not this assertion remains true in two or three years remains to be seen, but look at the most popular tool: Claude Code is a command-line tool! Their GUI version is pretty terrible in comparison. Cursor is an IDE fork of VSCode.

These are highly technical tools requiring somebody who knows file systems, command lines, and basic development concepts like compilers; they require you to know a lot of stuff most people simply don't. The direction I think these tools will head is far closer to highly sophisticated dev tooling than general-purpose "magic box" stuff that your parents can use to... I dunno... vibe code the next hit todo app.

neversupervised 6 hours ago
I believe you’re arriving at the wrong conclusion because you’re comparing to an opposite instead of to someone slightly worse than you. Will this enable people at the edge to perform like you? That’s the question. Will there be more developers? Will they compete with you?
keybored 3 hours ago
> The more I dive into this space the more I think that developers will still be in heavy demand—just operating at a different level of abstraction most of the time. We will need to know our CS fundamentals, experience will still matter, juniors will still be needed. It's just that a lot of the time the actual code being generated will come from our little helper buddies. But those things still need a human in the seat to drive them.

It’s disheartening that programmers are using this advanced, cutting-edge technology with such a backwards, old-fashioned approach.[1]

Code generation isn’t a higher level abstraction. It’s the same level but with automation.

See [1]. I’m open to LLMs or humans+LLMs creating new abstractions. Real abstractions that hide implementation details and don’t “leak”. Why isn’t this happening?

Truly “vibe coding” might also get the same job done. In the sense of: you only have to look at the generated code for reasons like how a C++ programmer looks at the assembly. Not to check if it is even correct. But because there are concerns beyond just the correctness like code gen size. (Do you care about compiler output size? Sometimes. So sometimes you have to look.)

[1]: https://news.ycombinator.com/item?id=44163821

skydhash 5 hours ago
> LLMs are just another way to talk to a machine. They aren’t magic.

I will still opt for a scriptable shell. A few scripts, and I have a custom interface that can be easily composed. And could be run on a $100 used laptop from ebay.

tgtweak 1 hour ago
When you're in the business of selling tokens, you look at technology that reduces token usage as a threat. If they were selling services that USE tokens, then reducing them would be welcome... so they'll likely steal this and incorporate it into their proprietary CLIs like Claude Code...
MarsIronPI 1 hour ago
Huh? Anthropic doesn't sell Claude Code, they sell tokens. Why would they make Claude Code more token-efficient?
fcanesin 6 hours ago
The harness is the model's "body"; its weights are the cognition. Like in nature, they develop together, and the iteration of natural selection works on both.

If smaller labs (Zai, Moonshot, DeepSeek, Mistral...) get together as a consortium and embrace a harness, opencode for example, then just by the power of "evolution across different environments" they might hit the jackpot earlier than bigger labs.

TZubiri 6 hours ago
But they rely on distilling the output of American leader models, which will probably train against their own harnesses.

Someone has to do the baseline training, development, and innovation. It can't be clones all the way down.

robotresearcher 5 hours ago
Why not? Humans are (very nearly) clones all the way down.
lillecarl 5 hours ago
Citation needed. SOTA labs surely have technical protections and legalese against using them for training. It's been done in the past, but what indicates this is still the case?
cyanydeez 5 minutes ago
This didn't stop the millions of copyrighted works used to train the models.
christophilus 2 hours ago
Has any harness matched the effectiveness of Claude Code yet? I haven't experimented much recently, but every time I have in the past, I wasn't able to get any other tool to approach how effective I am in CC.

I'd love to use a different harness-- ideally an OSS one-- and hook it up to whichever LLM provides the best bang for the buck rather than being tied to Claude.

parhamn 6 hours ago
On first principles it would seem that the "harness" is a myth. Surely a model like Opus 4.6/Codex 5.3, which can reason about complex functions and data flows across many files, wouldn't trip up over the top-level function signatures it needs to call?

I see a lot of evidence to the contrary though. Anyone know what the underlying issue here is?

znnajdla 4 hours ago
How hard is it to for you to assemble a piece of IKEA furniture without an allen wrench, screwdriver, and clear instructions, vs with those 3?
0x457 3 hours ago
Well, I assembled an Alex once last year without instructions, with an impact driver and a hammer. The hardest part was making the tools fit.
parhamn 4 hours ago
You didn't read the article it seems (or the analogy is a bad one). The differences are much more subtle than having a screwdriver or not.
znnajdla 4 hours ago
I did read the article quite enthusiastically and my practical experience confirms the same. Sure the difference is more subtle. But my point was, an "agent" whether human or AI can be a lot more productive with better tools. This guy found a better screwdriver than the most commonly used one. That's amazing and nothing from "first principles" denies that a better tool harness would mean better/faster/more correct AI agents.
3371 5 hours ago
If you agree that current LLMs (Transformers) are naturally very susceptible to the context/prompt, then you can go on to ask agents for a "raw harness dump" "because I need to understand how to better present my skills and tools in the harness", and you may see how the harness impacts model behavior.
robotresearcher 5 hours ago
Humans have a demonstrated ability to program computers by flipping switches on the front panel.

Like a good programming language, a good harness offers a better affordance for getting stuff done.

Even if we put correctness aside, tooling that saves time and tokens is going to be very valuable.

madeofpalk 5 hours ago
Isn't 'the harness' essentially just prompting?

It's completely understandable that prompting in better/more efficient means would produce different results.

furyofantares 5 hours ago
No, it's also a suite of tools beyond what's available in bash, tailored to context management.
manbash 6 hours ago
The model's generalized "understanding" and "reasoning" is the real myth; it's what makes us take a step back and offload the process to deterministic computing and harnesses.
jbetala7 2 hours ago
I switched from a basic prompt wrapper to structured tool use with Claude Code and the quality of output jumped overnight. Same model, completely different results.
uriegas 3 hours ago
I do agree with his identification of the problem: sometimes agents fail because of the tools around them and not because of the model's reasoning. However, for the failing tests I think he is not making the distinction between a test that failed due to a harness failure and one that failed due to a reasoning failure. It would be nice if someone analyzed that in the data set.
aszen 6 hours ago
So the new implementation always operates at the line level, replacing one or more lines. That's not ideal for some refactorings like rename where search and replace is faster.

Edit

Checking oh-my-pi: the model has access to str_replace too, so this is just an extra edit tool.

benreesman 5 hours ago
The logical end state of this line of reasoning is a collective action problem that dooms the frontier lab establishment. You can't devote model capacity to having an attention transformer match nested delimiters or cope with bash and be maximally capable; you can't mix authentication, authorization, control plane, and data plane into an ill-specified soup and be secure enough for anything that isn't a pilot or a toy.

If you run this out, you realize that the Worse is Better paradox has inverted, it's an arbitrage, and the race is on.

pcwelder 8 hours ago
Great work, but concurrency is lost.

With search-replace you could work on separate parts of a file independently with the LLM. Not to mention that with each edit all the lines below are shifted, so you now need to provide the LLM with the whole content again.

Have you tested followup edits on the same files?

kachapopopow 7 hours ago
(not the author) It works fine most of the time; I've been using it alongside an active agent and haven't run into too many noticeable problems. The token savings alone are worth it.
wrsh07 7 hours ago
Serializing writes is probably fine and the hashes should only change if you're updating the same line, right?

You probably don't want to use the line number though unless you need to disambiguate

But your write tool implementation can take care of that

jcims 7 hours ago
I ran into this from the other direction. I built a small SRE agent for my cloud infra and just kind of walked into hand-rolling some of the tools rather than using what exists today. I provided an edit_file tool that felt like it was of reasonable capability, but in practice the agent was regularly 'trying' to do a one-line change and submitting PRs that hallucinated 3/4 of the file.

Seeing how bad the results are when you're casually approaching something makes it very evident that it's a topic that can be optimized.

giancarlostoro 6 hours ago
One of the first things I add to my Claude instructions file is to stop using grep, it's awfully slow; just use ripgrep instead, you can type the word you're looking for from the project root and find it all in one shot. Claude likes to go folder by folder with grep and it drives me crazy.

"You're absolutely right!"

At this point I'd take a contract with Anthropic to have Claude code pick better tooling.

rafaelmn 8 hours ago
I wonder if we'll get to "VI for LLMs" - if the model was trained on using that kind of text navigation and you show context around cursor when it navigates.

Would also be worth having special tokens for this kind of navigation.

1313ed01 8 hours ago
I always thought ed would be a perfect match. Line-based instead of having to manage cursor movements.
cousinbryce 8 hours ago
I bet it’s good enough at VI already
the_harpia_io 5 hours ago
the harness bottleneck is real - I've been working on ai code security stuff and the biggest issue isn't model capability, it's that most tools treat the output as gospel. they'll take a suggested fix and apply it without checking if it even compiles, let alone if it introduces new vulns. I've seen fixes that patch one CVE but break auth logic entirely.

the edit tool point hits though. when you give the model a better interface to express changes (structured diffs vs free-form patches), error rates drop. but nobody talks about this because benchmarks measure "did it solve the problem" not "how many attempts" or "what's the blast radius when it fails". idk maybe I'm just jaded from debugging too many of these.

softwaredoug 6 hours ago
Underrated is how much improving harnesses, not just models, has contributed to productive uses of LLMs at tasks like coding in the last year.
nekitamo 5 hours ago
Getting banned from Gemini while attempting to improve Gemini is the most Googley thing ever :D Imagine letting your automated "trust and safety" systems run amok so that they ban the top 0.01% of your users with no recourse. Google really knows how to score an own-goal.
sgc 5 hours ago
I really don't understand what in his usage pattern would have triggered that obviously automated ban. Can somebody let me know what they think might be adversarial enough to be considered 'hacking' or similar by a bot?
visarga 5 hours ago
Yeah, I invented a similar method for information extraction attribution around 2022: I would place custom markers in a document, unique within the document, so the extraction model could reference them together with the answer and locate it.
0xbadcafebee 5 hours ago
Putting it out there: if any frontier model provider starts allowing any agent to use their $20/month plan, we will all switch to you. We don't want to be forced into 1 harness, we want OAuth, and we want respectable limits without excessive budgets.
aniviacat 4 hours ago
How would that differ from buying $20 worth of API credits each month?
0xbadcafebee 2 hours ago
1) security (oauth is much more secure than a static api key; if your key gets stolen, a hacker can run up your bill)

2) AFAIK the $20/month plan allows use of more tokens per month than if you bought $20 of tokens. my understanding is it assumes most users will only use a fraction of that each month, and they rake in profit (like a gym membership)

notsylver 7 hours ago
I feel like Cursor's solution is still the best answer. Let the model suggest edits in whatever format it prefers, using as few "extra" tokens as possible, and have a small model figure it out. I don't use Cursor anymore, but when I did it was impressive how consistently it worked; I think there was a single time it failed. 70b might be overkill though...
mromanuk 7 hours ago
Someone should try prompting the same LLM in use, to suggest an edit as a subagent.
znnajdla 7 hours ago
Yep, this has been my experience with browser agents as well. One little change in the harness/agentic loop and the model suddenly becomes a whole lot smarter at navigating the web. I was also able to build a better browser agent than 'claude --chrome' in just a few afternoons, just by tweaking the harness.
babkayaga 4 hours ago
Still weird to me that most people are not just giving an LLM access to an editor, instead forcing it to write shell scripts to edit files. Shrug.
HarHarVeryFunny 3 hours ago
That's not quite how it works, and anyway, if the model can't generate an accurate find/replace string, why would you expect it to do any better generating accurate commands to drive your editor (assuming it knew how to do that in the first place)?!

The way edits happen is that the agent (local) first tells the model (typically remote) that it has an edit tool (e.g. taking parameters file name, find string and replace string). If the model decides it wants to edit a file then it'll invoke this edit tool, which just results in a blob of JSON being put in the model's response specifying the edit (filename, etc). The agent then receives the response, intercepts this JSON blob, sees that it is an edit request and does what is asked.

The problem the article is describing is that the edit request (tool invocation) generated by the model isn't always 100% accurate. Even if the agent told the model it had a tool to invoke an actual editor, say sed, assuming the model knew how to use sed, this is still going to fail if the edit request cannot be interpreted literally by the editor (due to being inaccurate).
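
A toy version of that loop (the tool name and schema here are my own illustration, not Claude Code's actual interface): the agent declares an edit tool to the model, the model emits a JSON tool call, and the agent intercepts it and applies the edit locally, failing exactly when the find string is inaccurate.

```python
import json

# Hypothetical tool declaration the agent advertises to the model
EDIT_TOOL = {
    "name": "edit_file",
    "parameters": {"path": "string", "find": "string", "replace": "string"},
}

def apply_edit(files: dict, tool_call_json: str) -> None:
    """Agent side: intercept the model's JSON blob and perform the edit."""
    call = json.loads(tool_call_json)
    path, find = call["path"], call["find"]
    if find not in files[path]:
        # The failure mode described above: the model's "find" string
        # doesn't literally match the file, so the edit can't be applied.
        raise ValueError("find string not present in file")
    files[path] = files[path].replace(find, call["replace"], 1)

files = {"main.py": "x = 1\ny = 2\n"}
apply_edit(files, json.dumps({"path": "main.py", "find": "y = 2", "replace": "y = 3"}))
```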

znnajdla 4 hours ago
How do you give it access to an editor? It doesn't have a keyboard and mouse.
HarHarVeryFunny 2 hours ago
Well, it could be a batch editor, such as linux's sed, invoked from the command line, or with "computer use" the model could indeed potentially drive a real interactive editor.

Part of the problem though is that tools like Claude Code don't want to assume too much of the environment - that a specific editor is available, or even that it is running on a particular OS. The way it remains platform agnostic and not reliant on specific tools is by only having a dependency on Node.js, which provides file read/write support, so to implement an edit request the agent uses Node.js to read the file, itself implements the edit, then again uses Node.js to create the new updated file.

visarga 4 hours ago
I built a structural zoom tool; it fits flat or tree-like content into a 10K-char budget. It can compress HTML, JSON, folders, zip files, logs, chat sessions, basically large files or collections of files. Moving around is done by range selection. The idea is to have the agent find its way iteratively to the target, while having the structure exposed. RAG would totally cut everything to pieces and put them in a hat. My approach is to follow the structure of large content by a series of glimpses. Unfortunately I myself am not sure it is better to use this tool vs bash and python one-off scripts.
energy123 8 hours ago
I feel the baseline comparison should be relative to the intuitive and simple "line-numbers only" schema.

It's less token-heavy than the proposed hash approach, and I don't think frontier LLMs hallucinate line numbers if each line in the context is prefixed with them.

withinboredom 8 hours ago
The issue is when the file changed between when the LLM read the file and when it wrote to the file. Just using line numbers will clobber a file if that happens. The hashes prevent that from being an issue.
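
A sketch of that guard (my illustration, not the article's code): the edit carries the hash the model saw when it read the file, and the tool refuses to apply the edit if the target line has since changed.

```python
import hashlib

def line_hash(line: str) -> str:
    # Short content hash, as in the hashline scheme under discussion
    return hashlib.sha1(line.encode()).hexdigest()[:2]

def edit_line(lines: list, lineno: int, expected_hash: str, new_text: str) -> None:
    """Apply an edit only if the target line still matches the hash the model saw."""
    if line_hash(lines[lineno - 1]) != expected_hash:
        # The file changed between read and write: refuse rather than clobber.
        raise RuntimeError("stale edit: line changed since it was read")
    lines[lineno - 1] = new_text

lines = ["a = 1", "b = 2"]
edit_line(lines, 2, line_hash("b = 2"), "b = 3")
```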
energy123 8 hours ago
Point taken.
kachapopopow 7 hours ago
it starts writing to the wrong part of the file after multiple edits.
jwpapi 6 hours ago
Great article, and tbh I thought it would've been implemented that way. It makes sense to hash and save mainly context; I don't expect them to care about token usage.

How about Kimi though? How can I play with it?

MetaWhirledPeas 6 hours ago
> Treating harnesses as solved, or even inconsequential, is very short-sighted

Is it possible that burning extra tokens is the point, since they get paid more?

vlovich123 6 hours ago
Given the fierce competition, I would imagine a better performing model generates more revenue than burning extra tokens
dack 6 hours ago
they have pretty fierce competition though, so i doubt this is intentional. my guess is they just have a million things to do and that isn't at the top of the list
naasking 5 hours ago
That doesn't make sense with subscriptions.
jwpapi 6 hours ago
Arguably, I would think that the last year was mainly harness improvement instead of model improvement, but I could be wrong; it just feels like that to me.
SatvikBeri 4 hours ago
We can measure this by looking at the same harness applied to different models, e.g. the very plain Terminus: https://www.tbench.ai/leaderboard/terminal-bench/2.0?agents=...

Models have improved dramatically even with the same harness

a11r 7 hours ago
This is very nicely done. We have seen the same issue at a higher level: getting separators right when generating multiple files in a single inference call.
aghilmort 3 hours ago
curious: wdym by "getting separators right when generating multiple files in a single inference call"

context: created hypertokens, an even more robust hashing mechanism to create context-addressable memory (CAM); one cheat code is to make them prefix-free, plus lots of others that get deep into why models work the way they do, etc.

the_harpia_io 2 hours ago
honestly the harness thing is way more important than people realize - I've been working on code security tools and the gap between what a model generates raw vs with better structure is massive, way bigger than model versions mattering. like the security bugs I see in AI code, half of them are just because the prompt didn't include enough context or the edit format was wonky

the benchmark overselling isn't the point though - it's that we're barely using these things right. most people still chat with them like it's 2023. what happens when you combine this with actual review flows not just 'beat swe-bench'

idk I think everyone's too focused on the model when tooling matters more, since that's something you can actually control

andai 2 hours ago
> Why bother, you ask? Opus may be a great model, but Claude Code to this day leaks raw JSONL from sub-agent outputs, wasting hundreds of thousands of tokens. I get to say, “fuck it, subagents output structured data now”.

The VC economics are creating a reality distortion field where Anthropic is incentivized to burn more tokens so they can rent more GPUs so they can get more investment, and where I am incentivized to pipe the LLM inputs into `claude -p` and blast 50KB of useless proompt onto it so they don't ban me from their 95% discounted API endpoint.

evolly 6 hours ago
My experience exactly! I’ve recently become so tired of the Claude harness that I switched to OpenCode (which is extremely good compared to Claude). However, OpenCode is also tedious to change, and it inherits all the “good stuff,” like treating agents as Markdown files and all the dancing around with hooks/plugins/skills scattered all over the place. Getting stuck again and again, I’ve ultimately come to the conclusion that this must be solved by writing my own damn coding agent, with extensibility that’s acceptable for real-world engineering.
HumanOstrich 6 hours ago
Give Pi[1] a try. Comes pretty barebones out of the box, yet still provides a decent default experience. Extension points are all TypeScript if you want. There are a lot of examples[2] and some 3rd party extensions[3].

I'll point out that if you want permission prompts for certain behavior, you have to add that yourself. There's at least one example.

Edit: Just noticed the article's author is using a fork of Pi.

[1]: https://shittycodingagent.ai/

[2]: https://github.com/badlogic/pi-mono/tree/main/packages/codin...

[3]: https://github.com/nicobailon

wyre 6 hours ago
Before you build you own, try pi. It is what you are looking for.

[0] https://shittycodingagent.ai/

scotty79 6 hours ago
Harness is where the open source should shine. It doesn't require millions of dollars of compute but the search space is vast and explorable with limited budgets.
avereveard 8 hours ago
I use small models and I like to give them a TOC more than lines; wonder how it'd stack up against the hashline approach.

read_toc tool:

...

  {
    "name": "mcp",
    "qualified_name": "mcp",
    "type": "constant",
    "docstring": null,
    "content_point": "src\\mcps\\code_help\\server.py::17::18::python::mcp",
    "is_nested": false
  },
  {
    "name": "handler",
    "qualified_name": "handler",
    "type": "constant",
    "docstring": null,
    "content_point": "src\\mcps\\code_help\\server.py::18::19::python::handler",
    "is_nested": false
  },

....

update_content tool:

  {
    "content": "...",
    "content_point": "src\\mcps\\code_help\\server.py::18::19::python::handler",
    "project_root": ....
  }
falkenstein 6 hours ago
really enjoyed reading this, although I'm a dumb farmer and it took me a while to understand lol
azinman2 5 hours ago
Why not just use line numbers?
renewiltord 5 hours ago
Forces you to read after every write. E.g. you edit line 15 to be two lines. Now you need arithmetic for later vs earlier lines, or you need to re-read the full file to reindex by line number.
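
The shift in miniature (my own toy example): one edit that splits a line invalidates every later line number the model has in context.

```python
lines = ["a", "b", "c"]        # model read this; "c" is line 3
lines[1:2] = ["b1", "b2"]      # an edit turns line 2 into two lines
# Every line number below the edit point is now stale:
# "c" is line 4, so a pending edit aimed at "line 3" would hit "b2" instead.
assert lines.index("c") + 1 == 4
```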
azinman2 5 hours ago
Good point!

I just wonder how unique these hashes will be if only 2 characters. It seems like the collision rate would be really high.

aghilmort 3 hours ago
we dug into those sorts of questions with hypertokens, a robust hashing scheme for lines, code, tables/rows, or any in-context token tagging, to give models photographic memory

one mechanism we establish is that each model has a fidelity window, i.e., r tokens of content per s tag tokens; each tag token adds extra GUID-like marker capacity via its embedding vector; since 1-, 2-, and 3-digit numbers are only one token in top models, a single hash token lacks enough capacity and separation in latent space

we also show the hash should be properly prefix-free, i.e., unique symbols per digit position, e.g., if using A-K and L-Z to hash, then A,R is a legal hash whereas M,C is not a permitted hash

we can do all this and more rather precisely, as we show in our arXiv paper on same; the next update goes deeper into group theory, info theory, etc. on boosting model recall, reasoning, tool calls, etc. by way of robust hashing
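If I'm reading the per-position-alphabet idea right (each hash digit draws from its own disjoint symbol set, so a symbol's position is self-evident), the legality check and capacity are simple to sketch. This is my interpretation of the A-K / L-Z example above, not the paper's actual construction:

```python
from itertools import product

# Position 0 draws from A-K, position 1 from L-Z (disjoint alphabets).
ALPHABETS = ["ABCDEFGHIJK", "LMNOPQRSTUVWXYZ"]

def is_legal(h: str) -> bool:
    """A tag is legal iff each character comes from its position's alphabet."""
    return len(h) == len(ALPHABETS) and all(
        c in alpha for c, alpha in zip(h, ALPHABETS)
    )

assert is_legal("AR")       # A is a position-0 symbol, R a position-1 symbol
assert not is_legal("MC")   # M belongs to position 1, so "MC" is illegal

# Capacity: 11 * 15 = 165 distinct two-symbol tags.
assert len(list(product(*ALPHABETS))) == 165
```

A side effect of disjoint per-position alphabets is that any misplaced or dropped symbol is immediately detectable, which presumably helps the model's recall.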

pbowyer 47 minutes ago
For others, here's the paper: https://arxiv.org/abs/2507.00002
MrGreenTea 3 hours ago
The author writes that these hashes are 2 or 3 characters long, I assume depending on the line count. That's good for almost 48k lines; if your file is bigger than that, you have other issues.
azinman2 2 hours ago
But if it's a hash rather than a line number, collisions are much easier to hit.

There may be many lines that are duplicates, e.g. "{".
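Worth noting the article's tags may be assigned positionally rather than hashed purely from content, in which case duplicate lines like "{" are a non-issue. But if they were uniform content hashes, a birthday-bound sketch (function and parameter names are mine) shows how quickly a 2-character space saturates even on fully distinct lines:

```python
import math

def p_collision(n_lines: int, buckets: int) -> float:
    """Probability that at least two of n_lines distinct lines share a tag,
    assuming tags are uniform over `buckets` values (e.g. 36**2 = 1296
    for 2 alphanumeric characters). Computed exactly via log-space product."""
    if n_lines > buckets:
        return 1.0  # pigeonhole: collision is certain
    log_p_unique = sum(math.log1p(-k / buckets) for k in range(n_lines))
    return 1.0 - math.exp(log_p_unique)
```

Under those assumptions, even ~50 distinct lines in a 1296-bucket space collide more often than not, which is presumably why the scheme has to handle collisions (or assign tags deterministically) rather than hope they don't happen.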

giancarlostoro 5 hours ago
I was wondering the same thing.
deaux 7 hours ago
Great article, recommend reading all of it.

> Why bother, you ask? Opus may be a great model, but Claude Code to this day leaks raw JSONL from sub-agent outputs, wasting hundreds of thousands of tokens. I get to say, “fuck it, subagents output structured data now”.

This is why I find the ban on using Claude subscriptions in other harnesses so heinous. The harness they're forcing onto everyone has big issues, including wasting massive numbers of tokens. Very much in line with intentionally refusing to adhere to standards, in the most IE6 way possible.

techpression 7 hours ago
I mean, they want to make money, right? CC is a cool tool, but obviously they want you to use the API eventually if you're even remotely a power user; $200/month for all-you-can-eat tokens (well, until some arbitrary daily limit kicks in) just doesn't make sense compared to API prices. In other words, CC should be seen as a software subscription.
deaux 7 hours ago
The token limit is the same whether used in CC or in other harnesses.
techpression 5 hours ago
Sure, but then Anthropic loses the ability to upsell, show ads, collect telemetry, brag about user counts and engagement, etc. Not necessarily what's in there today, but what could be in there tomorrow. They also get the ability to fine-tune backoffs and the like much more precisely, from a purely technical standpoint.
badhorseman 5 hours ago
I feel a lot of confusion about which coding harness is best and what options to use. tbh I have mostly used standard aider, and I don't know what the consensus on that tool is.

I feel I want to write my own, and that maybe in the future a lot of developers will have highly customized harnesses, since each user of these models wants to use them in a way that's unique to their brain. Much like how emacs is so great for customization, but one person's emacs config is often not what another wants; they may only want a subset, and then write their own features.

As an aside, what's the feeling on all the various AI coding tools? Does aider suck? Are aider-ce/cecli better, or are the bespoke per-model tools like Claude Code better?

__mharrison__ 7 hours ago
Is there a skill file I can use for these edits?
kittbuilds 1 hour ago
[dead]
reeddev42 7 hours ago
[dead]
genie3io 7 hours ago
[dead]
logicallee 7 hours ago
I agree with this article completely, nice to see it presented quantitatively.

>re "only" the harness changed

In our experience, AIs are like amnesiacs who can barely remember what they did three minutes ago (their last autonomous actions might still be in their context if you're lucky), with no chance of remembering what they did three days ago. As such, the "harness" determines their entire memory and is the single most important determinant of their outcome.

The best harness is a single self-contained, well-commented, obvious, and tiny code file, followed by a plain explanation of what it does and what it's supposed to do, the change request, and how you want it done (you have to say it with so much force and confidence that the AI is afraid of getting yelled at if it does anything else), plus a large amount of text devoted to asking the AI not to break what is already working. Followed by a request to write a test that passes. Followed by asking for its judgment on whether it broke what was already working. All in one tiny, crisp prompt.
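That recipe is concrete enough to sketch as a literal prompt template. Every field name below is made up; it just mirrors the structure described above:

```python
# Hypothetical single-shot prompt template for the "tiny file + forceful
# instructions + don't-break-anything" recipe described above.
PROMPT = """\
FILE ({filename}, self-contained, well-commented):
{code}

WHAT IT DOES / IS SUPPOSED TO DO:
{explanation}

CHANGE REQUEST:
{change_request}

HOW TO DO IT (do exactly this and nothing else):
{approach}

DO NOT break anything that already works. Preserve all existing behavior.

Then: write a test that passes, and state plainly whether you believe
you broke anything that was previously working.
"""

prompt = PROMPT.format(
    filename="todo.py",
    code="def add(item, items):\n    items.append(item)\n",
    explanation="Maintains an in-memory todo list.",
    change_request="Reject duplicate items.",
    approach="Check membership before appending; raise ValueError on dupes.",
)
```

The whole thing stays small enough to fit comfortably in one context window, which seems to be the real trick.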

With such a harness, it's able to not break the code one time in twenty. If you use reverse psychology and ask it to do the opposite of what you want, it rises to fifty-fifty odds you'll get what you're trying to do.

Don't believe me? You can watch the livestream (see my previous comments).

Baby steps toward Utopia.