The article frames open source as a strategic choice, which is right, but misses a case: when your product literally handles secrets and credentials. If your agent framework touches API keys, tokens, and personal data, closed source is a non-starter for the security-conscious. You cannot audit what you cannot read.
We are building an agent platform (SEKSBot, a fork of OpenClaw) and open source is not a growth hack for us — it is a prerequisite. Nobody should trust an opaque binary with their API keys.
Didn't scroll past the vomit-inducing AI-generated "illustration" at the start of the article. If the author thinks that adds anything of value to the post, what else will they get wrong?
Finally, an AI article I enjoy. Give me nice bulleted summaries (and actually accurate content, unlike most blog posts) over 6-page paragraphs any day.
I know some people want to ban AI posts, but I want the opposite: ban any post until AI has looked over it and added its own two cents based on the consensus of the entire internet and the books it's trained on.
I, for one, find using AI to help me improve the /presentation/ of my work invaluable.
It helps me set the tone and improve the readability and layout, but I do have to watch that it doesn't insert bad information (which is easy for me to either recognise or verify).
It's easy to detect AI slop. It's like how ads were splattered everywhere on some old-school websites (and still are), and you would train your brain to ignore those ads. The same is coming for AI slop. As more and more people realize they are reading AI-generated vomit, they will instantly close whatever they are reading.
That's sort of true, although Airbyte was only truly "open source" for a very short period[0].
Since about a year into the project, it has operated with a mix of open and "less open" licenses for different parts of the codebase, in a way that would make it difficult to use just the MIT-licensed bits.
I think that kinda proves the point you were going for.
With consensus.tools we split things intentionally. The OSS CLI solves the single-user case. You can run local "consensus boards" and experiment with policies and agent coordination without asking anyone for permission.
Anything involving teams, staking, hosted infra, or governance sits outside that core.
Open source for us is the entry point and trust layer, not the whole business. Still early, but the federation vs stadium framing is useful.
Startups fail because of a lack of adoption far more often than for any other reason, including competitive and monetisation factors.
If your developer company gets popular, you'll be rich enough anyway. You might need to choose between screwing over your VCs by not monetising and screwing over your customers by messing around with licences.
But you yourself, as a founder, will likely be okay as long as the tool is popular.
This is not necessarily true. The wrong monetization can be the killing blow. The market can change, and a business model which used to work can suddenly fall apart. A recent example of a business-model change is Tailwind, where traffic to their open-source docs plummeted and suddenly not enough people are upgrading to their commercial licenses.
Startups die for a variety of reasons, even if products are popular and loved.
Tailwind was (is?) also selling "lifetime" licenses, which means their sales would eventually collapse anyway once they had sold a license to most interested customers. They were always going to need to pivot at some point, regardless of traffic to their docs.
After being an open source dev for over a decade, I've built up a kind of moral objection to open source.
If it were truly "for everyone", we'd be seeing many more small tech startups succeed, and a more vibrant ecosystem where open source devs are supported and have access to opportunities. Currently, open source is almost exclusively monetized by entities whose values oppose my own. I'd rather sell, or even give away, cheap unlimited permissive licenses to regular folks, one by one, and give them an actual competitive edge, than this faux "share with everyone" nonsense.
The value extraction pipelines in the economy are too strong; all the value goes into a tiny number of hands. It's so direct and systematic that I may as well just hand over my project and all IP rights exclusively to big tech shareholders. Given the current structure of the system, this is an immoral, or at best amoral, position.
Having first hand experience with building multiple open source and open core dev infra companies, the advice in this article is spot on. If it is AI slop, it's still good advice.
I'd prefer comments focused on content vs. trying to Turing-test AI-generated text.
The content is useful only if it's fact-checked. The author evidently didn't even bother editing the article, so how is anyone supposed to know whether it's factual or just conjured up by the model?
Each article like this one is an opportunity to assess whether it's mainly written by an AI or not. After reading part of this one, I mostly think not (except for the obvious AI-generated image), but it would be amusing if it were.

"I’ve been asked a few times about my approach to open-source in the past few weeks, so decided to write this article to structure my thoughts." Is this being told from the perspective of Claude or OpenAI? I assume that, across the millions of users, this has indeed been asked a few times in the past few weeks. If it's from the human perspective, perhaps the AI assistant asked him about his approach a few times while he was drafting it, in which case each conversation counts as a separate character asking for his thoughts. Either way, it's easy to inflate the number of people asking for the author's opinion.

That said, I dug into the author's bio, and with almost 10k followers on X, it seems likely he really did get asked this a bunch of times.