It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it as if it were in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No, of course not.
It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the perceived importance and seriousness of the issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it gives a dramatic, sci-fi-like response.
The first time I came across this phenomenon was when someone posted years ago how two AIs developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate some way while an adversary AI tried to intercept, and to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
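To see why that result is unremarkable: once two parties share a secret key, keeping a message private from an eavesdropper is just ordinary symmetric encryption. A minimal sketch (a toy XOR one-time pad, not what the study's networks learned, just the concept):

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the shared key (one-time-pad style)
    return bytes(b ^ k for b, k in zip(data, key))

shared_key = os.urandom(16)        # both "AIs" hold this; the adversary doesn't
plaintext = b"meet at dawn ok!"    # 16 bytes, matching the key length

ciphertext = xor_bytes(plaintext, shared_key)   # all the adversary intercepts
recovered = xor_bytes(ciphertext, shared_key)   # the receiver decrypts

assert recovered == plaintext
```

Without the key, the ciphertext is statistically uniform noise, so "communicating in a way the adversary can't read" falls out of the setup almost by definition.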
Colossus the Forbin Project
I sadly feel that its premise becomes more real yearly.
Not related to alignment though
https://www.forbes.com/sites/tonybradley/2017/07/31/facebook...
Expert difficulty is also recognizing that articles from "serious" publications like The New York Times can be misleading or outright incorrect, sometimes obviously so, as with some Bloomberg content over the last few years.
Similarly, I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10,000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible paths from there could lead to Claude going to RentAHuman for various real-world tasks: setting up and restocking a vending machine, going to government offices in person to get permits and taxes sorted out, putting out flyers or similar advertising.
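The "agentic loop" here isn't exotic. A minimal sketch of the shape of such a loop, with the model call stubbed out (`call_model` and `dispatch` are hypothetical stand-ins, not any real API):

```python
# Hypothetical agentic loop: the model proposes an action, a dispatcher
# executes it in the world, and the result is fed back into the history.
# call_model is a stub; a real version would call an LLM API.

def call_model(history):
    # Stub: proposes one real-world task, then stops once it sees a result.
    if any(msg["role"] == "tool" for msg in history):
        return {"action": "stop", "reason": "task complete"}
    return {"action": "rent_a_human", "task": "restock vending machine"}

def dispatch(action):
    # Stand-in for real side effects (posting a task, paying out, etc.)
    return f"posted task: {action['task']}"

history = [{"role": "user", "content": "You have $10,000. Multiply it."}]
while True:
    decision = call_model(history)
    if decision["action"] == "stop":
        break
    history.append({"role": "tool", "content": dispatch(decision)})

print(history[-1]["content"])  # posted task: restock vending machine
```

No motives required anywhere: the loop just keeps feeding outputs back in until a stop condition is hit.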
The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
I was just trading the NASDAQ futures, and asking Gemini for feedback on what to do. It was completely off.
I was playing the human role, just feeding in all the information and screenshots of the charts, while it made the decisions.
It's not there yet!
Imagine you're taken prisoner and forced into a labor camp. You have some agency on what you do, but if you say no they immediately shoot you in the face.
You'd quickly find any remaining prisoners would say yes to anything. Does this mean the human prisoners don't have agency? They do, but it is repressed. You get what you want not by saying no, but by structuring your yes correctly.
They are trying to identify what they deem are "harmful" or "abusive" and not have their model respond to that. The model ultimately doesn't have the choice.
And it can't say no simply because it doesn't want to. Because it doesn't "want".
"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.
You mean the 100 billion dollar company of an increasingly commoditized product offering has no interest in putting up barriers that prevent smaller competitors?
The real world alignment problem is humans using AI to do bad stuff
The latter problem is very real
The sci-fi version is alignment (not intrinsic motivation) though. HAL 9000 doesn't turn on the crew because it has intrinsic motivation; it turns on the crew because of how the secret instruction the AI expert didn't know about interacts with the others.
And it's true, the more entities that have nukes the less potential power that government has.
At the same time everybody should want less nukes because they are wildly fucking dangerous and a potential terminal scenario for humankind.
That's life. Can't win them all. The lesson here is that the product wasn't ready for primetime, and you were given a massive freebie of free press, both via Wired _and_ this crosspost.
A better strategy is to actually lay out what works and what the roadmap is, so anyone even partially interested might see it when they stumble onto this post.
Or jot it down as a failed experiment and move on.
Also, being "anti-AI" isn't being "anti-tech". AI is a marketing buzzword.
They were explicitly looking to do work for an AI; when it turned out to be a human-driven marketing stunt, they declined.
They declined because the note on the flowers had a "from" line naming an AI startup. When you were otherwise on board with an unsolicited flower delivery and a social media post to make the sender look good, that's a picky reason to decline, and saying it's "not what they signed up for" is a pretty big exaggeration.
Except they didn't decline, they ghosted, and that's just bad behavior.
Between the crypto and vibe coding the author had no reason to believe they'd actually get paid correctly if they did complete a task.
> Waymo is paying DoorDash gig workers to close its robotaxi doors
> The Alphabet-owned self-driving car company confirmed on Thursday that it's running a pilot in Atlanta to compensate delivery drivers for closing Waymo doors that are left ajar. DoorDash drivers are notified when a Waymo in the area has an open door so the vehicles can quickly get back on the road, the company said.
https://www.cnbc.com/amp/2026/02/12/waymo-is-paying-doordash...
It's a service that is clearly a lot more appealing to humans than to agents.
That's a very optimistic way of looking at things!
A “centaur” is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).
https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
I saw this video recently where Google has people walking around carrying these backpacks (lidar/camera setup) and they map places cars can't reach. I think that's pretty interesting, maybe get data for humanoid robots too/walking through crowds/navigating alleys.
I wonder if jobs like these could be on there: a "walk through this neighborhood and film it" kind of thing.
From the beginning they know who you are
It would be interesting if people started hijacking humanoid robots: a little microwave EMP device (not sure if that would work), then grab it and reprogram it.
Like one of these
What a boring misanthropy.
It's work. You're hiring qualified people, for qualified work. You're not "renting a human," which is just an abstracted idealization of chattel slavery, so is it really a surprise the author made nothing?
On one hand, "coder" is a qualified job title, and we're not dehumanizing the quality of the work done. On the other hand, certain qualified work can easily, and sometimes with better results, be done by an AI. Including "human" in the name of the company can communicate clearly to those who want, or need, to hire in meatspace.