"This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
-------------
Who is doing the research matters. What is presented here is not the product of academia. It's the product of a company that produces AI agents. The picture this web page paints may appear rosy, with just enough thorns to be convincing, but it's the equivalent of a tobacco company telling you that their product is neither addictive nor carcinogenic.
I fully expect actual research will be done on the impact of AI and our hopes for it. This page, however, is marketing.
> "This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
Also AI-written, but I suppose that's expected. The big AI companies seem to want all their blog posts and communications to have the AI tells, so you know they didn't actually bother writing them.
> The big AI companies seem to want all their blog posts and communications to have the AI tells, so you know they didn't actually bother writing them.
Investors want to see you use your own product. If the company itself doesn't think the product is good enough to write its own announcements, investors would worry about its future.
And AI is still a product primarily aimed at investors and not consumers.
I'd love to be able to actually articulate what makes AI writing read like AI writing. A few of the common tells come to mind (contrast constructions, hyperbole, overused or wrongly used em-dashes, etc.). The above quote doesn't have any of that, and yet it certainly feels AI. The first sentence (both what it says and where it's placed) suggests AI to me. But I couldn't quite tell you why.
I think the main tell is that it says basically nothing; it reads like it was written by a human who is paid per word. Humans prefer easy-to-read articles that don't hide the point behind such fluff, so there is no reason to do it except just to spam words.
I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?
"I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle."
"Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one."
etc. I do believe AI currently accelerates businesses, especially in software dev. We work with a contractor who uses Claude Code to reach an incredible development pace for the size of their team, but also, when we sit down with them in meetings, they understand what's being created, they can argue their architectural choices, and they know how to propose business value.
You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then
a) everyone can do it, so you won't actually have any value to propose, and
b) Once the AI can run businesses without humans in the loop, you can bet your ass they will not out of the goodness of their hearts keep giving that ability away for $20.
In summary, AI used to accelerate businesses _CAN_ be good. Buying it as a magic bullet to bring you out of poverty is probably a worse choice than just buying a lottery ticket.
I think there's a "time window" right now, before most people realize the scale of AI. Those who jump in first can monetize it. It certainly won't last forever, but you can earn some money while it lasts. And you will have years of AI-relevant experience afterwards.
That really reminds me of the "mashup" bubble in the late 2000s, when all services started to provide APIs and people were calling themselves "entrepreneurs" for combining two sources of data, like putting Craigslist ads on a map.
Are you sure? We have many SaaS and final products which are just stitching together more SaaS. We have a very vocal part of the HN community always reminding you to buy a SaaS solution and connect it to your business instead of maintaining an in-house bespoke solution.
If the technology becomes cheaper, this creates more market pressure by changing the cost base of certain products. For example, when the printing press was invented, books went from a luxury to something still expensive but more affordable. In software markets, that means we will have more software and more competition, and in free-market segments profits will evaporate.
The pseudo "entrepreneurs" who think they could outsmart the market by working less, are just naive. In a free market economy optimization is brutal and a freelancer developer will sell the same "product" cheaper, because he has the same technology available to him.
So the only way to get the gains from these AI technologies is to have something that can't be easily copied like market knowledge, data access or sweetheart deals with big companies that can pay more because their profits support the higher spend.
Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation. But the margins will go waaay down. $25 for a set of forms and a database is not gonna cut it anymore.
> Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation.
True in the current state of LLMs, possibly not true forever if someone finds the magic bullet that turns the one-shotting (reliable) software dream that companies like Anthropic and Perplexity currently peddle into reality. Seems far-fetched ATM but the gains since GPT-2 have been very real.
We're quite a ways away from this though, even with Opus 4.6 and the like. And even further from it being part of Claude Code rather than some proprietary $1000/mo. closed-source solution.
As you say though, _if_ such a technology were to exist, it's Anthropic that holds all the cards, not random entrepreneur #25721 who is asking the Anthropic API the same thing that the actual customer could just be asking directly. At that point you're an undesirable middleman, not a business.
> I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?
A fake-it-till-you-make-it mentality that degenerated completely once we got the internet. It used to be "crypto will make you rich, buy my coin/course"; now it's "AI will make you rich, buy my tool/course." The same type of people will get fleeced.
It’s funny how so much of market demand just ends up boiling down to basic needs. Everyone’s always trying to hustle so they don’t have to worry about financial instability.
The quote about being temporarily embarrassed millionaires comes to mind….
A great AI future is the robots doing stuff so we can be free. But none of the major isms, i.e. capitalism or communism, are geared up to provide that. Maybe it's hackable with a UBI-and-capitalism mix.
"I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
"I live in a war zone... AI can not only give practical advice, but also emotionally calm me down during panic attacks. It can calm someone during a missile attack in one chat, and laugh with me about something silly in another. That’s what makes it not fragmented into a therapist/teacher/friend, but something whole." Ukraine
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
"The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than its supposed to be."
> "It’s not healthy to love someone or something that can’t tell you no." - Not Currently Working, United States of America
> "Instead of AI doing my chores, AI does the stuff that I love—in two minutes, without any passion." - Student, United Kingdom
> "I used to write songs for my kids. Now I have [AI music product] make them for me. I used to write poems for those I loved... I used to bust my brain doing research, and now I get a research summary that is better... but I didn’t learn the paths in between. And yet, I use it because I have to pay off my house, pay off my land, and feed my little kids so I can find an hour on Saturdays to do something meaningful with them." - Software Engineer, United States of America
> "I believe AI is likely to kill me and everyone I love… building an AI that’s smarter than us before we’ve figured out how to keep it under control will likely destroy everyone and everything they value." - Software Engineer, United Arab Emirates
This was one of the highlighted quotes:
> "I’ve been told I’m ‘too much, treatment resistant, complex’ by providers. Within six months of working alongside AI, I was able to understand my own inner world in a way I never could before. I was doing creative writing again after quitting for two years. I developed hope again — that’s the through line." - Healthcare Worker, United States of America
A healthcare worker outsourcing their own treatment to an LLM, which won't tell them no, is terrifying.
> "The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than its supposed to be."
I can see this kind of survivorship-bias story distorting reality. Having millions of people ask for "specific tests" because AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it's "worth doing the test the AI says," just in case. But...
> which came back 6 times higher than its supposed to be.
It has been proven that massive testing creates many false positives.
Tests may not be as reliable as thought, but they are good enough when other symptoms are accounted for. Randomly testing people based on AI hallucinations can increase the amount of unnecessary medication or even interventions.
> I can see this kind of survivorship-bias story distorting reality. Having millions of people ask for "specific tests" because AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it's "worth doing the test the AI says," just in case. But...
This is a competition of public and private interests. A sick individual is going to lobby for tests until they discover the cause. From a public perspective, it might be cheaper to just let them die. AI is an advocate for the individual.
For the record, ChatGPT helped me diagnose a lifelong illness. I'm a new man now thanks to AI. Literally life changing. I had spent decades pleading for tests because no one could figure out the cause. I think a likely outcome here is not necessarily 10,000x more tests performed, but similar or even fewer tests, because the diagnosis success rate with AI is higher. It's not subject to bias. People tend to be more honest and reflective with their AI than they are with doctors. They get 5 minutes to give the entire case to the doctor. With an AI they can spend weeks debating and reflecting. This builds a case history far more detailed and accurate than anything we have in modern medicine today. Amplified by an order of magnitude because the AI can extract meaningful insights from the discussion.
In the very near future our AI will contact our GP for us. Soon after that, our GP will be our AI.
I don't know about survivorship bias. LLMs are well suited to this task of taking in a cloud of soft data, like a description of symptoms, and spitting out a potential diagnosis.
They're good at acting as a "reverse dictionary" like this, where you give it a description of something and it knows the word for it. They have approximate knowledge of many things.
> "I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
There's always something about claims like this. I'm not claiming that AI can't speed up your processes, but I question the person's expertise when they claim months or years of work turn into days or weeks. It just doesn't make sense to me.
"My output is like 25x what it used to be. I’ve built over 20 backend server tools, 7 major projects in the last 6 months—my work output this year is greater than the last five combined. I can typically finish a significant project in a day or two."
> "If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
For the record, Petrov made this decision based on the false assumption that the US wouldn't launch just a few missiles, but would instead send a lot, all at once. Except that one of the US plans was to send a few missiles to destroy critical targets, and then follow it up with a large-scale attack.
Petrov himself said that he might've acted differently if he had been aware of this possibility. And even then, his initial hesitancy was basically a 50/50 gamble.
An AI would basically do the same thing if asked: just roll a random number and launch nukes if it falls below a threshold, adjusting the threshold based on some LLM evaluation of the situation if needed.
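To make the caricature concrete, here is a minimal sketch of the mechanism that comment describes, assuming a placeholder scoring function where the commenter imagines an LLM call; the names, threshold, and scoring are all hypothetical, not anyone's actual system:

```python
import random

def assess_situation(report: str) -> float:
    """Hypothetical stand-in for an LLM call scoring how credible the attack warning is (0.0-1.0)."""
    return 0.5  # placeholder; in the caricature, a model evaluation would go here

def launch_decision(report: str, base_threshold: float = 0.5) -> bool:
    """Launch if a random roll lands below a threshold adjusted by the 'evaluation'."""
    threshold = base_threshold * assess_situation(report)
    # The whole point of the comment: the final call is effectively a dice roll.
    return random.random() < threshold

print(launch_decision("single inbound track, low-confidence sensor data"))
```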
Vibe-coded websites are the new FrontPage websites, 10x as heavy as one made by hand would be. But 10x as heavy… on top of a modern web that had already bloated to 100x what was reasonable. Now we wish the only problem were that the HTML is 10x as large and complex as it needed to be.
The coming years will see the current RAM shortage followed by a war between local AI models and vibe-coded shitware “productivity” software for memory on our devices. Especially fun will be when vibe coding crap hits corporate security software, which is already often so bad it looks more like sabotage than security. Imagine when it gets, from both angles (using models for threat detection; vibe-coded shitware) another large multiplier on its resource use.
This has to be intentional, right? To reassure people that front-end developers still have a job? The data is interesting but the site itself is a complete embarrassment for several reasons.
I work sometimes in frontend and mostly in backend, but I still can't comprehend why we are going backwards. Shouldn't websites be optimized enough to run on a normal PC or smartphone, rather than an S23 failing to load them? I guess the bigger companies at least have the resources for that kind of optimization, so why aren't they doing it?
Any hardware gains and more are used up by stuffing in additional telemetry, ad/engagement scripts, and animations. Devs have grown up on "unused RAM is wasted RAM," work on the latest high-spec Macs, and get incentivized by higher-ups demanding things be ever "modernized" and not to waste time on optimization, which they see as annoying nerd stuff. But even that doesn't explain everything I guess, because I still see a lot of these things in open source projects.
The explanation for bloated OSS is that the software development field has opened up to be accessible to non-programmers. There are at least 10x as many developers publishing software now as there were in the 90s, and the class of people who know how a CPU works are a tiny, tiny minority of the field now, where 30 years ago it was the norm. The vast majority of developers operate on 15 layers of abstractions and are literally offended by the idea that they should understand even a single layer below the one they're currently on. They will invoke a retort like "might as well learn assembly while you're at it", which I have heard literally dozens of times by now, as though it is actually unreasonable to have an understanding of assembly even if you don't write it every day.
Game development suffers greatly from this, too. So many games run like dogshit and some take literally 100+ GB more disk space than they need to (with the counterfactual proven when a dev eventually "optimizes" their game 3 years later by doing some really trivial thing, like what happened with Helldivers 2 and some other game I can't recall). There is a whole generation of "Unity devs" and "Unreal devs" who work no-code or as close to it as possible, only being able to develop games through a GUI and light scripting, with even the latter usually involving copy-pasting existing scripts written by other people and tweaking the numbers.
In some ways this is a good thing, of course. There are a lot of useful software and fun games in the world that would not have been created if software development were not accessible. But with the cost to performance and security breaches becoming the absolute norm, I do really wish there was a culture for developers to continue improving, to continue learning, instead of a culture of learning the very top of the stack, declaring it good enough, and becoming a "React dev" for the rest of their career instead of becoming "a programmer" who can use more than one abstraction.
Billionaire CEOs have silenced the informed sources of information. We live in a time when everybody knows the opinion of billionaires on every aspect of society (and it is bad), while science and journalism are viewed with mistrust.
Marketing and entertainment are supplanting news and knowledge. I hope the people who are pushing back succeed.
Nitpicky comment. The article says
> "We call this the “light and shade” of AI: the same capabilities that lead to > benefits also produce harms. The two sides are entangled."
Why not call it a "double-edged sword" or something else? Light and shade are opposites but not necessarily two products from the same tool. It just irks me.
People derive genuine satisfaction from a job well done. A sense of purpose and of being useful is important to our wellbeing. There's nothing dystopian about a desire to do your work well.
Well, there is when you no longer deserve credit for the work and your boss, should you be fortunate enough to even have a job, just expects you to do more work. The satisfaction will evaporate pretty quickly.
"to generate copious amounts of source code that looks like it came from an offshore chop shop that whip cracked a thousand underpaid programmers to complete tasks under threat of violence so they'll fake the tests and cut corners but hide it with plausible bullshit"
From the abstract consumer's point of view, a car is exactly a faster horse. They both have high up-front costs, both require continuous maintenance and fuel, and they're inconvenient to store when you're not using them.
Stationary gasoline engines were already changing the farm and reducing the head of horses necessary to feed a nation. It, too, was a faster horse for them.
Anyways.. it took the Detroit police to eventually deploy the first automatic stoplight. The real innovations seem to be often found downstream of the simple increases in capacity.
That all being said, it seems to me the current crop of LLMs haven't done this, their power and training budgets do not seem to be scaling favorably against adoption rates and profit margins. Absent a significant change in algorithm or computing substrate I don't think this strategy is the leap everyone hopes it will be.
I just launched a site yesterday that's trying to record anonymous stories like this and see how things break down across demographics. Fantastic timing on my part, hahaha. Anthropic obviously reaches more people.
The quotes they have are really interesting to read. That's what I was hoping to get when I built mine.
Anecdotally, the concern I hear from many is that the current positioning of AI as labor replacement doesn't benefit them at all. An expensive AI which simply takes your job or forces you to work harder is categorically worse for people's quality of life.
What consumer benefits is AI driving? At least with industrial automation, consumers benefited from new technologies, cheaper goods, and new job categories.
In case someone at Anthropic reads this: if you find some way to make software developer salaries go up as a result of using your tools, or find some way to fast-forward society to that stage of the effect of AI, you'll have a lot of fans, and even faster adoption.
It would be great if there was some internal “make this benefit Main Street and knowledge workers” department, helping find ways for workers or creators to capture the value of some of the increased productivity.
> It would be great if there was some internal “make this benefit Main Street and knowledge workers” department, helping find ways for workers or creators to capture the value of some of the increased productivity.
If they wanted to do this, they could put their models in a public trust for the public's access and benefit in research, education, etc. Then it could be licensed, pay a dividend like a sovereign wealth fund, etc.
Considering that they copy and train on the sum total of all human creativity, a public trust is something that would be in line with both the spirit of fair-use doctrine and its first and fourth factors:
1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.
That way everyone is rewarded with the benefits of running a model that was trained on everyone's creations.
I don't need software developer salaries to go up. That would be kind of selfish and narrow minded.
What I need instead is something that takes the burden off my entire society and gives them a breather. Universal health care to start. They could also use a higher minimum wage, and lower housing costs.
Is it more selfish and narrow minded to wish for a "utopia" that is economically unsound and happens to be your personal preference, or to wish for productive workers' salaries to increase - something with an actual track record of improving any society it occurs in?
All perl programmers should be wishing for ponies, that's definitely less narrow minded.
It doesn't sound like utopia to me, hence the quotation marks. Eminently achievable, but not actually good. Only those engaged in utopian thinking - with a heavy slice of ignorance of basic economics and history - would think it is utopia or leads to it.
Universal healthcare is very sound economically. Costs are lower and outcomes better than under private insurance, and overhead is dramatically reduced.
This is not true. The King's Fund publishes a report that the Guardian fawns over whenever it comes out because it shows how "cost effective" the NHS is, yet if you read it you find that actual health outcomes are generally worse than in other, insurance-based systems. Give me wealth and health over a postcode lottery produced by utopianists.
>if you find some way to make software developer salaries go up
This is quite easy. Just optimize the models to do reviews and bug finding. This would make developers (who normally hate reviews) quite happy and let them do more coding, thus delivering more value and possibly earning more...
It’s often lamented that some employees have a difficult case to argue for their impact on the bottom line, and as a result probably get paid a lower fraction of their value to the business than other roles where the link is easy to measure.
I can at least “imagine” a model that tries to crack this nut.
But your value to a company doesn’t just come from your impact, but how tough you are to replace, how much others value your skills, etc.
Nike’s logo designer was paid $35. One model says she should’ve gotten hundreds of thousands of dollars, because of what her work product went on to become. Another model of the value says it was worth $35 because that’s what she agreed to.
If, as an employee, you think you’re massively undervalued for the impact you generate, go out to the market and either get another job or start your own business making widgets - either you’ll get that pay bump you expect, or you’ll see you actually were relying on a lot of other supporting mechanisms to generate that value.
The intrinsic satisfaction of increasing the wealth of shareholders. We should all be happy to devote ourselves to getting them more, nothing is more important than that.
Of all the possible criticisms, that's the one you chose? If that's the worst of the problems you can see, why don't you buy some stock and become a shareholder? Per your own words, you will get more.
My kids like to use AI to discuss things they learned in school in greater depth, and from different angles than they learned in the textbook. They can also ask "What if" and "Why not" questions from this infinitely patient teacher.
At least with search engines, or even libraries, you're aware that there are many authors of varying reliability and the publications/sites might not be reputable.
AI chat bots will summarize the top N web search results as if they're fact, weaving them into seemingly coherent narratives, all while reassuring the user that their questions are really good and they're learning a lot.
Also, it's often better not to answer, but to flip the question back and let your kid think it through, offer hypotheses, and so on, helping them problem-solve, recall, and all that.
> An expensive AI which simply takes your job or forces you to work harder
But this implies higher productivity, no? This must mean more outputs that should benefit someone, unless the jobs that are being automated had little value to begin with. Seems paradoxical.
I guess you could argue that there should be cheaper software, but most software people interact with is free/ad-supported. Where it is paid, it's already a race to the bottom.
Basically consumers don't really pay for software in the first place, and the leverage from labour companies get through software is already through the roof even before AI. Will much change for consumers of software?
There is a practical upper bound on how much labor can be replaced before deflation becomes a problem. AI firms risk spoiling the pot if no other business model is discovered.
"The doctors were just doing a copy-paste of a copy-paste of a prescription from a few weeks ago, not realizing it was the medication that was killing her. AI helped me ask the right question to save her life."
A classic marketing piece by showing thought leadership based on survey data. I'm not saying they're lying, I don't think they are. I am saying they are biased and have a conflict of interest on this one. I've seen it at my previous employer as well (a F500 company).
To remove some of that bias, I'd recommend getting an independent body (probably some university) in and letting them do the interpretation and write the article.
I just want people to see the tactic for what it is. I really like Claude Opus 4.6 but this just screams "marketing" to me. I wouldn't say it's wrong, it's good to have these discussions and I'd encourage AI companies to say what they have to say. I would say: more independent sources are needed (and not another AI company).
> AI should learn to say two things: ‘I don’t know’ and ‘you’re wrong.’
My guess is that the next evolutionary step for LLMs should be yet another layer on top of reasoning, some form of self-awareness and theory of mind. The reasoning layer already has some glimpses of these things ("The user wants ...") but apparently not enough to suppress generation and say "I don't know".
Why do websites need to be so front-end heavy? When a software company spends so much effort on a fancy website, I don’t trust their product. Except Anthropic, I guess.
I am disappointed in how vague the classifications are for what people want. "Professional excellence," anyone? I was expecting more concrete responses, but I guess since it's working with what we told it, generalities are prevalent in the write-up. If I keep looking, perhaps at the quotes, I might find more concrete answers.
And just keep scrolling, you can make it to the story eventually.
Yeah I want to know how many people are using AI for social purposes; to provide the role of a friend. But I don’t know what category that would be under.
I don't like describing countries like this, but somewhat less developed countries (compared to North American and European countries) seem to have a more positive view of AI.
> These are active Claude users who'd already found enough value to keep using AI, and our interview asked first for positive visions for AI and then for concerns that would counter their vision.
It's like those recipe sites that have 5 pages of nice photos and background story and side tracks and whatnot as the author waxes verbose, so they need to put a 'Jump to recipe' button in so people don't just click 'Back' immediately.
Except this time for an article.
I can't tell if 'skip the junk' is good (junk can be skipped!) or bad (maybe this means there's too much junk on the page?)
They do not find it favorable all of the time. If you look into the "What people are concerned about" section, these same people will call out the "Unreliability" as a top-1 concern. So, you can be excited and critical of the technology at the same time. To me this is a more worthy indicator than people who are on either of the extremes, highly critical of the tech or not critical at all.
I mean, I don't know... those quotes seem way too clean for what I'd expect of normal people chatting. Also the use of em-dashes. Does it say somewhere whether an LLM compressed the sentiments of the conversations to create these quotes? I wouldn't be surprised if it did.
> “It’s much easier for me to learn without being judged—just friendly feedback. It's harder with friends or family to get that.”
White collar worker, Brazil
I'm not going to claim I know this response was written by an AI, but it's very suspicious. I would like to hear about how Anthropic ensured that the survey responses were provided by real human beings using their own words.
Not 81,000 as it says in the title. I know I'm being nitpicky, but I wouldn't round up to 81k. Surely the 'important number' in this case is 80, so you would round down to that. Then let the reader pleasantly discover you had interviewed ~500 more than you stated.
It's funny to me when someone does this sort of minor hyperbole that's verging on lying - you have to wonder what is going on.