Having worked in corporate roles vaguely related to software buying, I am confused why so many small companies think an enterprise would be excited to go with them.
Even if I love your product, how do I pitch to the powers that be that we replace something we are already paying for with this new thing? The company might make billions but I've always had to fight for my budgets.
And tell me again why we should bet our core operations on a two-man outfit with six months of runway? What happens when you pivot? What happens when our competitor acquires you? What happens when you go on a transatlantic flight and a key expires?
Selling to enterprise early on is a poisoned chalice as well. They have much larger teams, so you'll be dealing with a horde of product owners, compliance specialists, and data privacy experts who might never touch your product but come armed with Excel sheets of 300 gnarly questions. Not to mention that just getting the bills paid can be a huge fight.
It will drag you into their orbit, especially if 80% of your revenue is from a single customer. Soon your other customers will start going to someone who actually has time to care about them. And by then there's been a political shift in-house and the new VP of X gets a quote for an outsourcing bundle from his squash buddy at one of the big system integrators. Your line item gets bundled in to justify the cost, even though it's not even relevant. And that's the end of your company.
If you do want to sell, treat the enterprise like an ecosystem of SMEs: find a department or team that is more innovative and sell to them behind the backs of enterprise IT. Once you've entrenched yourself and the users love you, you can expand to other teams, and eventually enterprise IT will be forced to negotiate a license with you and do the compliance dance. But even so, this will take years of effort and luck.
This friction, and the lead dividing solutions from consulting, gave me an idea: they're describing conditions where the LLM revolution might track the desktop revolution. Companies, groups within companies, and small businesses will DIY it and say good enough.
Isn't familiarity with the language even more relevant with an LLM? The language they do best with is the one with the largest corpus in the training set.
Stability, consistency, and simplicity matter much more than this notion of familiarity, as long as the corpus is sufficiently large. Another important factor is how clear and accessible the libraries, especially the standard libraries, are.
Take Zig for example. Very explicit and clear language, easy access to the std lib. For a young language it is consistent in its style. An agent can write reasonable Zig code and debug issues from tests. However, it is still unstable and APIs change, so LLMs get regularly confused.
Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.
The thing with Clojure is also that it's a very expressive and very dynamic language. You can hook an agent up to the REPL and it can very quickly validate or explore things. With most other languages it needs to change a file (multiple, more complex operations), then write an explicit test, then run that test to get the same result as "defn this function and run some invocations".
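To make the contrast concrete, here's a minimal sketch of that feedback loop using Python's own interactive machinery as a stand-in for a Clojure nREPL session (the `slugify` function and everything else here is invented for illustration):

```python
# Sketch of the REPL-feedback idea: an agent evaluates one form in a
# live session and reads the result immediately, instead of editing
# files and running a whole test suite. Python's InteractiveInterpreter
# stands in for a Clojure nREPL here; all names are hypothetical.

import io
import contextlib
import code

repl = code.InteractiveInterpreter()

def eval_in_repl(source: str) -> str:
    """Run one snippet in the live session and capture its printed output."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        repl.runsource(source, symbol="exec")
    return buf.getvalue().strip()

# Define a function, then immediately probe it with invocations --
# the moral equivalent of "defn this function and run some calls".
eval_in_repl("def slugify(s): return s.strip().lower().replace(' ', '-')")
print(eval_in_repl("print(slugify('  Hello World '))"))  # hello-world
```

The point is the round-trip cost: one `eval` and you have ground truth about the definition you just created, with no intermediate file or test-runner step.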
Counterexample: the Wolfram programming language (known to many people from the Mathematica computer algebra system).
It is incredibly mature and takes stability very seriously, but in my experience LLMs tend to hallucinate a lot when you ask them to write Wolfram or Mathematica code.
I see two reasons:
1. There is less Wolfram/Mathematica code online than for many other popular programming languages.
2. Code in Wolfram is often very concise; thus it is less forgiving of "somewhat correct" code (which is in my opinion mostly a good thing), so LLMs often struggle writing Wolfram/Mathematica code.
A stable, mature framework is then the best-case scenario. New or rapidly changing frameworks will be difficult, wasting lots of tokens on discovery and corrections.
> Clojure was not a hiring barrier - it was a hiring filter.
It makes me think of this HN comment: https://news.ycombinator.com/item?id=11933250

> Jane Street Capital's Yaron Minsky once said that contrary to popular belief hiring for OCaml developers was easier because the signal to noise ratio in the OCaml community is so much better than other, more approachable languages.
I saw a YouTube video years ago that featured Yaron Minsky. He made similar points. In short, some programming languages are like catnip for excellent programmers.

I think that misses the point.
Things that are hard have a higher percentage of people who don't need it to be easy.
If you're a "good" programmer you don't need the "community support" (i.e. a bunch of stuff to tell you why you should do things one way or the other in your particular language) so you're free to choose niche languages based on other factors and in turn there will be more good programmers programming in those languages.
You see this in all sorts of subjects not just programming.
This is still true today. Gartner makes a living out of it: always prefer buying the "familiar" product rather than succeeding with the right solution.
Fortunately, history shows that those who do their math right actually end up extremely successful: Google using commodity Linux hardware for their DB servers, AWS developing their own network equipment and protocols, etc. It takes guts, but when it works it leaves the competition years behind.
The cost of the migration was supposed to be $500 million, and it's now estimated at $1.1 billion.
But, they weren't fired because of SAP, they were fired because they lied to the government about the true cost.
But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.
The exec who gets the value of the working product can potentially come out shining while their peers are furiously backpedalling next year. And this consummate exec can do it by name-associating with their "main bet", which is optically great in the immediate term but totally out of their control (because the big-corp vendor will drag its feet like every SAP integration failure they've seen), while feeling a sense of agency by running an off-books skunkworks project that actually works and saves the day.
A fine needle to thread for the upstart, but better than standing outside the game.
So while it's fair to say enterprise users buy safety, if he's referring to his own product I would offer the following.
He's in the AI tool space, i.e. a better RAG. So you're selling to AI developers, and developers nearly always go open source first.
If they can't find an open source solution or if they don't even look, they prefer to build it themselves.
For this kind of product most enterprise buyers won't understand its benefits, you have to get the developers interested first.
And finally, in this market, you are 1 prompt away from someone cloning your whole business and calling it openaxon or something like that.
It's a tough time to be a software startup.
This dynamic is not new: unsophisticated enterprise buyers making bad decisions in a bad way. Market discipline hasn't come down on it in an overwhelming way, though.
Do these enterprises actually need "good?"
HN discussions seem to miss this. Before you use them for anything agentic, LLMs are a lossy compression of a large text corpus.
The original wikis have to survive so you still have access to the non-lossy version, though.
- Enterprise buyers are risk averse and buy the wrong thing
- Language X is better because the people that use it are smarter
- New tech is difficult for established players
Not really a fresh take but at least it's well written.
In the same article the author was mentioning a few expert systems from the past that were quite obviously successful.
> on the promise printed on its marketing
Ah, _that_ promise. That promise is never fulfilled anywhere, nor is it expected to be.
Enterprises buy from large companies because those large companies come with support teams, liability, and expertise that you don't need to manage internally.
It's rare that I read an article that actively annoys me, but there's something about how this is written that seems a little arrogant.
A little. But it's a nice article nevertheless.
The insight here is that this also still applies to huge enterprise contracts where supposedly more rational decision making should apply.
Also, sunk costs "should in theory" never be considered, but I've only ever seen sunk costs considered.
Huh? All current and previous-gen models are most effective when coding in languages with the most training data.
While I agree the newest frontier model may be smart enough to reason at a lower level and be language-agnostic, its "relatively dumber / less capable" forebears need lots of examples to pattern-match from.
Familiarity once again!
That's why VCs look favorably on startups that go through the motions of setting up a partner-led sales channel. An established partner taking on maintenance contracts bridges the lifecycle gap between the two realities.
But no, corporate is bad, I guess.
In a sense, they have to make themselves obsolete. Either by making sure they are a part of a larger network, or by making sure that the org itself can own the product or service.
One should not underestimate a "compression primitive with a chat interface". For certain tasks it is a superpower.
As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.
This is correct and very agreeable to everyone, but then after some waffle they write this:
> Structure, for the first time, can be produced from content instead of demanded from people
These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.
"It is graph-native - not a vector database with graph features bolted on, not a document store with a graph view, but a graph at its core - because the multi-hop question intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top."
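For what it's worth, the multi-hop claim is easy to illustrate with a toy graph (all data and names below are made up; this just shows what "multi-hop" means here):

```python
# Toy illustration of a multi-hop query: "which team owns the service
# that this incident report mentions?" requires following two edges,
# something a single similarity lookup over text chunks cannot do.
# The graph, nodes, and relations are invented for illustration.

edges = {
    ("incident-1042", "mentions"): "billing-api",
    ("billing-api", "owned_by"): "payments-team",
}

def hop(node: str, relation: str) -> str:
    """Follow one edge from a node along a named relation."""
    return edges[(node, relation)]

service = hop("incident-1042", "mentions")   # first hop
owner = hop(service, "owned_by")             # second hop
print(owner)  # payments-team
```

No single chunk of text needs to contain both facts; the answer only exists after the traversal, which is the property cosine similarity over chunks lacks.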
And
"It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required."
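Stripped of product specifics, the "propose but verify" pattern that quote describes can be sketched roughly like this (a hedged illustration of the pattern only; every name, the whitelist, and the ledger shape are my own assumptions, not the product's actual code):

```python
# Sketch of a propose/verify harness with an append-only, hash-chained
# ledger. A stand-in for the model proposes an action; deterministic
# scaffolding validates it before it is applied, and every step is
# recorded as data. All names here are illustrative.

import json
import hashlib
from dataclasses import dataclass, field

@dataclass
class Ledger:
    entries: list = field(default_factory=list)

    def append(self, kind: str, payload: dict) -> None:
        # Chain each entry to the previous one so history is tamper-evident.
        prev = self.entries[-1]["hash"] if self.entries else ""
        body = json.dumps({"kind": kind, "payload": payload, "prev": prev},
                          sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"kind": kind, "payload": payload,
                             "prev": prev, "hash": digest})

def verify_proposal(proposal: dict) -> bool:
    # Deterministic check: only whitelisted tools may run.
    return proposal.get("tool") in {"lookup", "summarize"}

ledger = Ledger()
proposal = {"tool": "lookup", "args": {"id": 42}}   # pretend LLM output
ledger.append("inference", proposal)
if verify_proposal(proposal):
    ledger.append("tool_call", {"accepted": proposal})
else:
    ledger.append("rejected", {"proposal": proposal})

print(len(ledger.entries))  # 2
```

The design choice the quote is gesturing at: the stochastic part only ever produces proposals, while acceptance, execution, and the record of both are plain deterministic code.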
Imagine a model with a reliable 100M context window. Then all of a sudden you can.
> The information the intelligent answer needs was never in the wiki in the first place.
Oh well.