Also, I remember reading this guy has close ties to Anthropic. And I find it suspicious how he came to prominence out of nowhere. Like Big Tech and the establishment are propping up podcasts as controlled narrative/opposition. I don't buy any of it.
Rents in the San Francisco Bay area are too high to live a practical distance from job centers as a junior without roommates.
It’s true that in some circumstances we require avoiding even the appearance of impropriety or a conflict of interest, but that’s simply too large a burden to impose on everyone all of the time, especially for allegedly dire sins like “having a roommate who works for Google”.
Really Anthropic doesn't seem to be fighting for anyone but a narrow subset of people.
So who cares; none of the big AI providers are particularly ethical. Pick your poison as your conscience and needs allow.
After that, he became well-known to the general public through his Sarah Paine podcasts (which are excellent).
He was first funded by FTX
SBF was on Patel's podcast in July 2022, and FTX unraveled in November 2022. Hmm.
https://www.dwarkesh.com/p/sbf
> I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.
And that was right in the middle of FTX being accused by many prominent people.
April 29, 2022 https://x.com/AlderLaneEggs/status/1520023221294145536
June 20, 2022 https://x.com/MartyBent/status/1538645746655936519
Sometimes people succeed without earning it, and what matters is what they do with the success afterwards. I'd say Dwarkesh earned it, but got lucky and caught the right waves, and has surfed the hell out of his success. He's had consistently well-informed, level-headed takes, and has engaged the field with insight and honest curiosity.
When I see people surf like that, I applaud it. There's nothing grifty or shady; he's just had a great series of excellent opportunities and has played them for everything they're worth. Once he had a few billionaires on, that was all the social cachet he needed to continue attracting guests, high-level researchers, and other figures in AI.
SemiAnalysis
https://jon4hotaisle.substack.com/p/influence-as-a-service-s...
It is not unusual for a young person to succeed in performance occupations. A huge portion of Top 40 pop songs are performed by people under 27.
Also, somewhat spitefully, I find it funny that he has multiple roommates.
This creates a dangerous dynamic. AI can generate targets that a human operator might not be able to justify manually, and when something goes wrong the blame can always be shifted to the system, such as the recent incident where roughly 180 children were killed due to faulty targeting.
Israel’s way of fighting this war looks more like pure destruction than a conventional military campaign, and AI systems like this are very easy to abuse in that context. At this point it’s clear that even the U.S. is willing to eliminate targets even when the collateral damage includes the person’s family or neighbors. I don’t think that would have been acceptable in previous administrations. Israel has lowered the bar.
That may be why Anthropic moved early to denounce this kind of usage, even though they had previously partnered with the Department of War.
Now let’s look at the statements made by Anthropic and Hegseth:
https://www.anthropic.com/news/where-stand-department-war
https://x.com/SecWar/status/2027507717469049070
From Anthropic’s own statement, we hear that they have actually been quite closely partnered. In Hegseth’s tweet we see:
“Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
This shows that Anthropic is still currently being actively used by the Department of War.
My view is that Anthropic and its investors eventually realized that the American war machine will use their technology in reckless ways, and that this will certainly create a massive PR disaster or, in an ideal world, even legal consequences. That realization likely pushed them to adopt what they now frame as a more “humanitarian” position.
The current amount of horsepower on the hoof is a rounding error, but before mechanized farming and war-fighting, these distinctions were the difference.
If we consider the capacity of technology to act as a force multiplier, it is reasonable to assume that current and future AI-assisted fighting forces can achieve more with less traditional materiel and with fewer personnel.
Drones are an especially likely way that these many AIs will become embodied and diversify, in which case I don’t think the percentages are so far-fetched.
https://www.bbc.com/news/articles/c62662gzlp8o
> Further ahead in the future, it wants its machines to be programmed to travel autonomously to a location, carry out its task - such as watching out for advancing enemy soldiers and engaging them if necessary - and then return to base after a certain time.
> “Preface to the highest stakes negotiations in history.”
Like, come on. The Cuban Missile Crisis, for starters? Bro needs to calm tf down.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free open societies can defend themselves. We don’t want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen.
In the US currently, there are private citizens, and there are 'not-the-1%' citizens, where a Kavanaugh stop is legal, your voter information may be (or may have already been) seized by the DoJ or FBI, you may be tracked by out of state or federal agents on ALPRs with no warrant, for any reason, and where attending a legal protest may have your biometrics added to a database of potential domestic terrorists.
Or maybe your tax money will just be used to blow up unidentified boaters or bomb girls' schools and homes, and you'll get no say in whether that's the case because the elected body that is there to issue a declaration of war (or not) as representatives of you, has abdicated that power to a cabinet of unelected white nationalists.
But go off about how we're such a better country that believes in freedom and goodness.
It’s easy to point to China as a place where freedom of speech isn’t present, but try asking members of the current administration or even Supreme Court judges who won the 2020 election and see what kind of responses you get. That alone says a lot about the current state of things.
Freedom of speech and regard for the facts are independent concerns. People absolutely have the right to call out lies about the 2020 election and have repeatedly done so.
Some at the cost of their careers and a few now face the threat of prosecution.
China is a low bar. We shouldn't accept any of this as normal.
More like the past 200 years. America has never been the "good guys", and it is only Americans who seem to think they ever were.
The American says "But we don't have propaganda", the Soviet says "Exactly".
It's similarly normal for the population of any country that bears net negative externalities from America to view them as the "bad guys".
The current and growing anti-US sentiment is an expected result of an increasing gap between the US and the rest of the first world in economy and defense. The existence of a superpower is predicated on being viewed negatively by the rest of the world.
No, it would be a sign of critical thinking and self reflection.
But, second, often precisely because they think we’re the bad guys.
If you see the world as dominated by an evil, overwhelmingly powerful empire that uses violence in a way that shows no concern for the continuation or quality of human life outside of the metropole, then, even if it is bigoted, repressive, and unjust within the metropole, you still want to be in the metropole rather than the peripheries.
And, to be equally as fair, the only genuinely good guys are the ones that are too small to enforce their will upon others directly - small countries without arms who are forced to find other ways to engage with others in order to achieve whatever goals they have (resource acquisition)
The Americans have been extremely adept at dominating the discourse via non-government pathways (Hollywood)
Better than China as a global model? Still, yes, probably. Potentially. Depends on how the next few years go.
Even if America fails, I’d argue a global republic is a brighter potential future than a global dictatorship.
They might. I’m not. There is an analogy here to perfect being the enemy of good. Or, at the very least, the pragmatic better.
But to your credit you brought up the Pretti shooting. I have to analyze how that demonstrates why the “AI values” should reflect American ones.
Judge my enemies by their actions. Judge me by my words. About myself...
America debates and exhibits its faults, at least internally. The Tulsa Massacre is a movie and cultural discussion point in a way Tiananmen Square is not in China. Neither should have happened. And neither is universally acknowledged or atoned for. But if we’re debating which system AI should emulate, I know it’s not just the one that explicitly buries its faults.
> Judge my enemies by their actions. Judge me by my words
Judge both by both. The ability to have words about shameful actions is not meaningless.
Just like being a billionaire (or super-wealthy, if you will), you don't get to be a superpower by doing good things.
China and the US can both be bad, and they're both going to use AI for mass internal and external surveillance and weapon targeting.
Is it possible to live in a world where powerful entities have gotten there through ethical means? Sure. We don't live in that world, though.
And yes, if I said "name me one powerful person/entity that got there through ethical means", I'm sure you could give me a name. But that name would surely be an outlier.
It’s a lie in the way “cats are round” is a lie—actually a lie, but one nobody brought up.
I don’t think Dwarkesh is arguing for global American hegemony. Just that if AI becomes dominant, having AIs embedded with American cultural values, broadly, is probably better than having ones seeded with Xi Jinping thought.
> China and the US can both be bad, and they're both going to use AI for mass internal and external surveillance and weapon targeting
Agree. But I don’t think any Chinese AI companies get to sue the CCP over it.
I'd really rather have a choice of both rather than be forced to accept "AI that downplays a 2 year old genocide" over "AI that covers up a 40 year massacre".
You do. So do I. If American AI goes by the wayside, we cease to have that choice anymore.
An observation one can make when comparing a republic with the rule of law to one that ain’t, whether across time or geography. There is a real benefit to having the American experiment prominent and continuing.
These aren’t mutually exclusive. The world is better off for the Athenian and Roman and Harappan and Haudenosaunee republics. (Book request: a history of the republic. I’ve struggled to find one.)
The CCP with internal elections was interesting and a genuine riposte to broadly-enfranchised republics. Xi as a dictator is not.
Author literally is.
This subthread is part of the broader discussion. There are lots of Reddit corners for debating whether America is a republic. I haven’t seen any novel arguments in a while. The germane argument here is whether an American AI is useful coming out of an American republic, its dying republic, or even its embers, and I think it speaks decisively against the one that’s proudly autocratic, without organized dissent.
The American 'experiment' is one long history of the US doing really horrible things, but giving ourselves a pass because we dress it up in the name of freedom and self-determination.
If you ignore our slavery and the genocide of Native Americans, it's easy to paint China's slavery and genocide as evils that are unique somehow.
The real experiment of America is in seeing how self-deluded we can become if we continuously reinforce the false premise that our institutions are intrinsically good (or at least, nebulously "better").
Is that true of the US? Is there state-sanctioned/supported slavery in the US? Is the US committing genocide within its own borders? Arguably not?
This doesn't make the US perfect or wonderful. We've been politically and militarily supporting a genocide in Gaza, as a stark example.
But "the US did slavery and genocide in the past" and "China is doing slavery and genocide now" doesn't make the US and China equivalent today.
And on top of that, I can go out and protest my country supporting Israel's garbage in Gaza. If I were a Chinese citizen and tried to do something like that in China, I'd be jailed.
How would you contrast the responses to the Tiananmen Square Massacre [1] and that of Pretti’s shooting?
[1] https://en.wikipedia.org/wiki/1989_Tiananmen_Square_protests...
The idea that anyone would be better off with China supplanting the US is asinine. This is the same government that committed the Tiananmen square massacre and still doesn't acknowledge that anything happened.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race?
Right now there are two contenders for first in the AI race. The US, and China.
You spent the rest of your comment making the case that it is not good for the US to win. Implying, though not directly saying, we would be better off with China.
You can say "oh wouldn't it be nice if Europe won instead" but they don't have anything in the race right now. We're stuck with the US or China.
This is you putting words in my mouth. It's bad if either wins.
You seem to be operating under an unspoken personal belief that an AI race "win" inevitably spills out into global dominance.
I don't know that it won't, but you likewise don't know that it will, and I'm not beholden to debate things from your chosen premise.
I think AI will be bad for whoever is being targeted by its controllers, but I don't think it will intrinsically disrupt the military spheres that exist now as a result of nuclear weaponry.
China will use its AI to hurt the people it's hurting now.
The US will use its AI to hurt the people it's hurting now.
Imho, the idea of an AI arms race "winner" is just the new face of the securitization rhetoric that we used to justify our military excursionism during the Cold War.
Read up on what it means to "imply" something.
Speaking of putting words in people's mouths:
> You seem to be operating under an unspoken personal belief that an AI race "win" inevitably spills out into global dominance.
This is the belief of the article we're all commenting on. Intelligent people are able to discuss concepts without endorsing them.
My read is they’re saying we need an alternative to Chinese AI. Because with its industrial might, the default future is Chinese technological dominance.
China invaded and annexed Tibet in 1950. To the degree we have a classical definition of intent-based genocide, Beijing continues to commit it in Tibet and Xinjiang.
America’s conscience is stained. But it’s downright nonsense to go off about surveillance when the comparison is China.
Yes, the Uyghur genocide and paramilitary suppression and settler-colonialism of Tibet and Xinjiang is horrific, and will (hopefully) be recognized in the future as a genocide on par with others that 'enjoy' historical notoriety, but let's not pretend we're not well on our way to doing that here.
The rhetoric of ethnic superiority and nationalism and birthright that exists in our government is the exact same rhetoric that exists in Xi Jinping's "Imperial Han" nationalism.
The same government that helped murder 2M folks in Iraq. The same gov that paid death squads to kill nuns in El Salvador.
At least China isn't in a position to have to reckon with how deep white supremacy runs in its culture.
In fact, when I hear folks from the US talk about china without understanding their own history of racism and genocide and how that shit is still going on, all I can conclude is that they are operating under the same racist delusions that have historically brought the US to do such horrific things to the world.
I want the US to win because I live in the US and it will probably benefit me, but we’ve largely stopped pretending to value the republic so I don’t think we can claim a moral standing on these topics anymore.
To reference your other comment, the common American man has about as much de facto ability to sue our government and/or leaders as the common Chinese man.
This comes to the core of the issue, and is where I think the disagreement comes from. Many Westerners in fact do not want "Western" values to prevail.
Why? For me those values have led to outcomes so horrendously antithetical to _my_ values, that I would not wish them for the rest of the world. Even worse, this Western centrism has led to jingoist conclusions for at least 400 years.
No. There is no court in Beijing that can tell Xi to knock it off.
> China hasn't bombed girls' schools
Read up on the treatment of Uyghur girls in the Chinese schools. It’s Indian Removal Act stuff, except right now.
Again, nobody is arguing America is a beacon of anything right now. But between America and China, one is an explicit and proud autocracy.
SCOTUS isn’t being ignored.
> Sounds like a question for the philosophers
And lawyers. It’s an interesting series of hypotheticals.
SCOTUS rules 90%+ for Trump (lower courts are 90%+ against). They've given him freedom from investigation and criminal prosecution. They aren't much of a bulwark.
You _could_ argue that this is a flaw in the constitution, and that none of the above should be legal, and that people who support those things should be restricted in their speech or ability to hold office. This was the status quo in politics for a while! These things have all existed for a long time but this seems particularly targeted at Trump, who was famously banned from most social media platforms for years.
There are a lot of democracies (most of the EU, for example) that take this stance on freedoms and will even overturn elections to keep out those who support such policies. The question is really "does doing that protect freedom and democracy, or infringe it?"
As for the second paragraph, this is just a lie: Congress has not abdicated any war powers to the Cabinet. There has not been any declaration of war, and if Congress wanted to stop the DoD, it very much could, and in fact came very close to doing so. If your representative in Congress did not represent your interests (in this case, voted nay), you can call or email them and their office, or vote them out.
> better country that believes in freedom and goodness
I think you're letting your strong feelings here cloud your judgement; you can hold all of the opinions above without needing to fellate China, which is objectively worse on freedoms than the US. It's also important not to conflate "believes in freedom" with "perfectly meets my line of freedom."
You have some low standards of praise.
The one plausible claim they could make is, ironically, one similar to Altman's claim a while back that visiting China was "easier" (I don't recall his precise phrasing) because there is a very clear and public list of things you are not allowed to talk about and actions you are not allowed to take. This list is, of course, subject to change.
1. Democracy and freedom worldwide
2. Economic access+prosperity with Asia
3. Pro-American sentiment
(Not in order of importance, which shifts constantly)
I think assuming China would beat the US in a conventional war if they reach 'AGI' first is a stretch; even if that actually grants them a force advantage, it's not like the US can no longer reach AGI. The risk is really more that if they reach 'AGI' and subsequently a force advantage, they would no longer be deterred and would more decisively move on Taiwan next year. Taiwan is key to [1] and [2] above.
[1] https://www.washingtonpost.com/national-security/2026/02/13/...
[2] https://www.nytimes.com/2026/02/13/technology/dhs-anti-ice-s...
People have also been detained with intention to be deported for their views about Palestine, with online comments being part of how they're chosen for targeting:
[3] https://www.columbiaspectator.com/news/2026/01/28/federal-go...
There was also someone jailed for a month for quoting Trump's own words about a school shooting, "we have to get over it", in the context of Charlie Kirk's death, along with many other noted instances of retaliation against online comments around that incident:
[4] https://www.cnn.com/2025/12/17/politics/retired-cop-jailed-o...
ICE asking for a list of social media profiles of its detractors doesn't sound like "without fear of jail or shunning or anything like that" to me. Through data mining and third parties, the local PD has a dossier on me based on what I write here that would come up if I did something to get their attention. That has a chilling effect on what I say on here in public.
> But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.
Autonomous weaponry is one of the few ways that a fascist state could reasonably maintain violent control over a large and hostile populace.
I guarantee Trump would rather have perfectly obedient killbots than critically thinking soldiers, or even just the 5 murderous assholes required to oversee tasking for 1000 semi-autonomous police drones.
The least plausible part is the private sector, which just doesn't work that way.
The part of the Pentagon that did this is, to put it politely, not the part that's good at planning.
I mean... isn't that pretty much the way the current administration behaves in general? If the answer to this question is "yes", and the US executive does not in fact share the values of the author about free and open society, then the rest of the article is kinda moot (except the point that we should be talking about these things now, and encouraging congress to act).
This administration believes that they don't need to treat all businesses equally under the law, and can use strong-arm intimidation tactics to get what they want. That is the problem.
I remember thinking about this - basically AGI - decades ago, and it was always obvious to me that if you created such a thing there'd come a day when the MIB would be ringing the doorbell.
Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
A teenager, probably. Not everyone is 100 years old.
As for whether code written with Claude Code should be so considered - if it’s just code that is subject to human review, I would argue that this use shouldn’t be a supply chain risk. But with Claude Code PR Review and similar products, the chance that an AI product (not limiting to Anthropic here) could own a load-bearing part of the lifecycle of a critical piece of code becomes much larger, and deserves scrutiny.
What Hegseth/Trump want to do is not just stop Anthropic models from being used by any military supplier pursuant to goods/services they are providing to the military, but rather say that if you do business with the military then you must not use Anthropic at all, even if that usage is entirely unrelated to your military contracts.
It is also common corporate doctrine to use a subsidiary for government contracting to avoid having to evidence that a commercial vendor is utilized for government, so this won't even be 'annoying' for contractors.
ITAR and compliance frameworks (e.g. FedRAMP and CMMC) already mandate this for any non-US company, yet AWS commercial still has offerings in other countries and from non-US vendors, Palantir still has an IG business, etc.
Because you can't designate a company a SCR because you don't like the contract you signed with them.
I speculate we'll discover there's very few unambiguously ethical uses of AI, much less for military applications. Them's the breaks.
I haven't seen this much hype and hopium since the dot-com boom. The whole OpenAI -> Anthropic saga just reeks of the same evolution as Viant/Scient.
Look, we have an amazing tool, but it has some fundamental shortcomings that the industry seems to want to bury its head in the sand about. The moment the hype dies and we get to engineering and practical implementations, a lot is going to change. Does it have the potential to displace a lot of our current industry? Why yes, it does. Agents can force the web open (have you ever tried to get all your Amazon purchase history?), can kill dark patterns (go cancel this service for me), and can crush wedge services (how many things are shimmed into Salesforce that should really be stand-alone apps?). And the valuable engagement is going to be by PEOPLE; good UI and good user experiences are gonna be what sells (this will hit internet advertising hard for middlemen like Google and Facebook).
The notion that 99% of the workforce and military will be AIs isn't "copium", it's grounds for absolute terror. One of two things will be true:
1. The AIs will be controlled by the Epstein class, who will then have no use for most of humanity, either as workers or soldiers.
2. Or the AIs will be controlled by the AIs themselves, which also seems worrisome.
Really, any situation where 99% of the workforce and military are AIs should be deeply concerning, for reasons that should be obvious to any student of history or evolution.
And, sure, maybe we won't get there in our lifetimes. But if we did, I wouldn't expect an automatic utopia.
The GP is saying that it’s a major over-extrapolation of the current progress.
You seem to be assuming we will get there instead of expecting the cracks will become more and more obvious.
The problem with democracy is that it can easily become a revolving door wherein capital holders can choose which candidates are allowed to approach the door.
I think democracy works well when the monetary system is constrained, for example pegged to gold or another scarce asset, because that creates a better separation between money and state: there is less incentive for big companies to corrupt the revolving door to gain a financial advantage.
In a monetary system where the government can create an unlimited amount of money, the incentive to corrupt the government and political process keeps increasing.
I think democracy with a soft fiat money system is probably the most dangerous system, because any moral objection can be filtered out of the running, as we saw happen with Anthropic and the Department of War. Clearly it's the weapons manufacturers running that department behind the scenes; they have a huge financial interest to do so. The Department of War is the bread and butter of weapons manufacturers and defense contractors.
I've reached similar conclusions about the problems with democracy but always struggle with any potential solutions. Looking at the world, I don't see any viable alternative forms of government, just slight variations that still suffer from the same core problems, only to a lesser degree.
The lawfare part of it is that to coerce an individual or a company, governments are willing to abuse their power. The Biden administration did it when pressuring social media companies to censor content. The Trump administration is doing it to a much greater extent with things like ordering every government agency to stop using Anthropic and by labeling them a supply chain risk.
The ideological part of it is when Defense Sec Hegseth and Trump and AI Czar / PayPal Mafia member David Sacks repeatedly attack Anthropic as “woke”, and it is clear they’re undermining this company from their government positions based on Anthropic’s speech (first amendment violation). This obviously is part of why they attacked Anthropic in such a public way.
And the corruption part of it is OpenAI’s leaders being big supporters of the MAGA movement and the Trump administration. Greg Brockman, president of OpenAI, is the biggest donor ever to the MAGA PAC. Why did Hegseth grant a contract to OpenAI after banning Anthropic, even though OpenAI has the same red lines in their agreement (what Sam Altman claimed)? It’s because of the corruption - give Trump and his family/friends money, and you’ll get something back.
The fight against these types of government abuse has ALWAYS been happening. But the abuse is much more in the open today, and much larger in scale than ever before. Scandals like Watergate would not even make the news today. And that is what the public should be waking up to and focusing on. We need to rethink our political system significantly and add a lot more protections against the kind of things the Trump 2.0 administration has done.
> Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened - because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.
I stopped reading there because this is a pointless exercise.[1]
This isn’t a roundtable. You are not even at the table. There isn’t some “thankfully time to discuss this...”—you are just out.
The Machine doesn’t need your labor? You are out. No norms. No discussions.
You either try to forcefully take control of the situation or you see yourself get discarded.
(I am here just assuming all the AI Maximalist (doom maximalist in this context, Trump and all) premises for the sake of the argument.)
[1] I did read the last paragraphs and the tenor is the same. “We must make laws and norms through our political system”… just like with nuclear bombs, of all things.
> The uncomfortable truth is that...
> ...that the real question isn't...
> Corporate resistance...introduces friction at the infrastructure layer
And check comment history (https://news.ycombinator.com/threads?id=julius_eth_dev)
Sometime yesterday, or further back, someone has decided to run a bot experiment (https://news.ycombinator.com/threads?id=patchnull, https://news.ycombinator.com/item?id=47340079)
The real tell is that you've not been in the group of people that use these terms frequently enough for you to think they're normal.
It's like the emdash alarmism, AI never invented emdash, nor did it invent using it frequently. Its training was full of examples, so many that AI picked up using it frequently.
Look at my comment history. I emdash. But I adapted by removing the spaces around them—AI hasn’t similarly adapted.
Most comments on HN with emdashes aren’t slop. But if it starts getting into Wernicke word-salad territory and there are emdashes? With spaces? At that point, it’s fair to flag.
I'm laughing not at you but the ludicrousness of the times - I use endash heavily, have done for a minute, but now I see endash used by LLMs with no surrounding spaces.
I think that the "identify AI by some artefact" is just another game of whack-a-mole, and the better approach is to look at the quality of what's being presented.
I have argued before, and still feel strongly, that LLM/AI-generated images, audio, and text are causing stronger scrutiny of what's presented as fact, which is a healthy thing. (How far that goes is yet to be determined, much as when Photoshop-generated content exploded in availability.)
Goddammit. (Flippity floppity floop.)
> LLM/AI generated images/audio/text is causing a stronger inspection of what's being presented as fact, which is a healthy thing
If it is, I agree. What I think is actually happening is folks are skimming and then concluding on vibes. Unfortunately, that means “I don’t agree” gets lumped in with “this is slop.”
Having one red flag is something I wouldn't nuke on, but a new account + em dashes + other AI-style talking points is just too much.
I feel like we're eventually going to end up with shibboleths, or something like a thieves' cant that updates every time a new model launches, just to distinguish the humans.
AI is just computers doing things we typically associate with human intelligence, and having a conversation with a computer that effectively passes the Turing test is definitely AI. If LLMs aren't AI, then AI isn't a useful term. (Though agreed that LLMs aren't AGI, which I assume is what you're thinking of.)
Wikipedia's list of AI applications: https://en.wikipedia.org/wiki/Artificial_intelligence#Applic...
There’s a similar thing with transhumanist “enhancement” or “life extension” stuff. When it actually works we call it medicine. Statistically one of the most powerful life extension techs ever developed was the cardiac bypass, which would have been sci-fi in 1900.
I’ve been using stuff like Claude Code and personally feel comfortable calling this stuff AI. Is it AGI? I don’t think so, but then again I’m not totally sure what that is. Am I AGI? I’m not universally able to handle all forms of cognition well and I can’t self modify much, so I’m not sure either. I’m not even sure if AGI is a well formed concept.
Intelligence is a pretty broad concept too. My pet rabbit is intelligent. Plants are intelligent. Bacteria are intelligent. Anything that can run an OODA loop, learn, adapt, and move toward a goal function is intelligent. By that definition some computer systems have been AI for decades. They’re just getting better.
I think there’s intelligence all around us. We just don’t get the wow factor from it unless it talks.
I personally would prefer "AI" to be "AGI" but there's no point fighting the way people use language (see: every damned pedantic comment about English usage ever!! :-)
But beyond the pedantry and appeals to authority, I think keeping the term AI distinct from AGI is just useful, so it can be an umbrella term for all the human-like smart-ish things computers do. And so its Wikipedia page doesn't have to be rewritten.
But on the substance they're equally vapid. Dwarkesh's interview with Richard Sutton was especially cringe.
First phrase: "you're saving on energy by putting data centers in space". What?
2:08 "It's harder to scale on the ground than it is in space" what?
Didn't Starship explode like 10 times by now? But in 30 months they'll be launching 1 per hour? What?
I actually do. The math is more strained than anything at present. But a lot of people are rejecting it out of hand without doing anything back of the envelope. Truth is, barring a seismic shift in how we permit data centers on the ground, it takes a within-the-envelope decrease in launch costs to make space-based data centers profitable. Which is then just a cheat code for building a Dyson sphere.
> Didn't Starship explode like 10 times by now?
They all explode all the time. Starship has also been consistently improving its suborbital flight characteristics. I don’t see a good argument for a fundamental design fuckup in the data we have.
> But in 30 months they'll be launching 1 per hour?
This is nonsense. But within ten years? I think so. At least, we don’t have a good reason to reject that with current data. And that would make the cost equation flip to favoring space-based infrastructure. Which, honestly, is not the answer I expected. (I’ve done aerospace stuff for a while. Most of the back-of-the-envelope math fails. It failed for space-based solar power. It failed for asteroid mining. And it currently fails for space-based data centers. But let launch costs dip a bit, or permitting delays and risks rise a bit, and the equation balances sooner than one would think.)
Alright, show me the back of the envelope maths.
Having done a little bit of both, the latter around data centers, I’ll say they’re different kinds of hard.
> Alright, show me
Fair question. But no, I’m still refining my math and making bets on this. But I’ll start working on an HN comment in a few weeks and try to remember to post it back to this thread.
My basic argument is to try pinning out current datacenter costs, pin out lifted costs, and then work out what cost/kg you need to balance the two. Hint: approval time and interest rates are meaningful variables.
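A toy sketch of that comparison, for the curious. Every number below is a placeholder assumption I made up for illustration (capex, opex, mass per MW, revenue); the point is only the shape of the calculation: amortize ground costs (including the permitting delay as lost revenue), subtract the orbital hardware cost, and divide by launched mass to get the break-even launch price per kg. Interest rates and discounting are deliberately omitted to keep it short.

```python
# Toy break-even sketch for ground vs. space data centers.
# ALL figures are placeholder assumptions, not real cost data.

def breakeven_launch_cost_per_kg(
    ground_capex_per_mw,        # $ to build 1 MW of ground capacity
    ground_annual_opex_per_mw,  # $/year (grid power, cooling, land)
    space_capex_per_mw,         # $ of orbital hardware per MW, excluding launch
    mass_kg_per_mw,             # kg launched per MW of orbital capacity
    years,                      # amortization horizon
    ground_permit_delay_years,  # years of revenue lost waiting on permits
    annual_revenue_per_mw,      # $/MW/year
):
    """Launch $/kg at which total ground and space costs are equal.

    Space-side opex is assumed negligible (solar power, radiative
    cooling); discounting and interest are omitted for brevity.
    """
    ground_total = (
        ground_capex_per_mw
        + ground_annual_opex_per_mw * years
        + ground_permit_delay_years * annual_revenue_per_mw
    )
    return (ground_total - space_capex_per_mw) / mass_kg_per_mw

# Made-up illustrative inputs:
be = breakeven_launch_cost_per_kg(
    ground_capex_per_mw=10e6,
    ground_annual_opex_per_mw=1e6,
    space_capex_per_mw=12e6,
    mass_kg_per_mw=20_000,
    years=10,
    ground_permit_delay_years=2,
    annual_revenue_per_mw=3e6,
)
print(f"break-even launch cost: ${be:,.0f}/kg")  # $700/kg with these inputs
```

With these invented inputs the answer lands in the hundreds of dollars per kg, and you can see directly how the permitting-delay term and the amortization horizon move the break-even point, which is the parent's hint about approval time and interest rates.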
IIRC HN threads automatically close based on time since the original post and/or inactivity. I wasn't able to find a thread from 16 days ago with comments still open, let alone one a "few weeks" old, but in good faith I'm assuming you already know that and aren't using it as an out to avoid replying - not that anyone is "owed" a reply by you, or by anyone.
This is all to say, I appreciate the thread as a bystander, and would eagerly await your reply if and when it arrives before this post's comment section closes.
I'm personally very glad that Dwarkesh isn't like that. He's not perfect, but I think he's doing a way better job than other podcasters in the field right now.
Not sure if this is true, maybe someone who went to MIT around the same time can shed some light on this?