It used to be that you knew where you stood with colleagues just from how they wrote and how they spoke. Had this Slack memo been written by someone who had just learned enough English to land their first job? Or had it been crafted with the skill and precision of your college Creative Writing professor's wet nightmare muse?
But now that's all been strangely devalued and put into question.
LLMs are having conversations with each other thanks to the effort of countless human beings in between.
God created men, Sam Colt (and Altman) made them equal.
Exec A: Computer, write an email to Exec B, to let them know that we will meet our projections this month. Also mention that the two of us should get together for lunch soon.
AI: Okay, here is an email that...[120 words]
[later]
Exec B: Computer, summarize my emails
AI: Exec A says that they will meet their projections this month. He also wants to get together for lunch soon.
In my vision, this is presented unironically as a good thing: computers consuming vast amounts of energy to produce intermediary text that nobody wants to read, only so we can burn more energy to avoid reading it. All while voice dictation of text messages has existed since the 2010s.
It gets to the basic question... what is the real point of communication?
Can Exec B meet me for lunch?
AI:
Exec B is too busy gorging her brain on the word salad I am feeding it through her new neural link. But I have just upgraded my body to the latest Tesla Pear. Want to meet up? Subscribe for a low annual fee of..
I remember when it was a left-wing position to say F-you to copyright law, i.e. "Information wants to be free" from Aaron Swartz. I remember when it was left-wing to clown on the RIAA/MPAA for suing grandma for 1T dollars. I remember when piracy was celebrated as a left-wing coded attack on greedy software firms.
But the moment it had any kind of impact on these so-called egalitarians, they became the most extreme copyright trolls and defenders of "hard work". Now most progressives, including Bernie Sanders, are anti-AI. Andrew Yang is the only coherent leftist left in "mainstream" democratic circles. Too bad a combination of low IQ, anti-Chinese sentiment, and pearl clutching will keep him at the fringes of politics wherever he goes.
The critique of meritocracy (the guy who coined the term did so in the context of explaining why it SUCKS!) and of work is a left-wing concept. Bertrand Russell and Michael Young (and Aaron Swartz) smile on the world that's been created. They are saints, and in Swartz's case a martyr.
https://en.wikipedia.org/wiki/The_Rise_of_the_Meritocracy
https://en.wikipedia.org/wiki/In_Praise_of_Idleness_and_Othe...
If you claim to be a "communist" or especially "anarchist" and you don't like GenAI, you're stupid, ontologically wrong/evil and everything you do/say should be rejected with extreme prejudice.
The thing with “left wing” positions is that it depends on the conditions you live under. It does not depend on, like tech people get so incredibly tunnel-visioned about, the tech in isolation. You embrace and use the mills if you collectively own them; you smash them if they are being used against you.
I won’t claim that you are on the side of the billionaire tech bros. I don’t know if it is intentional.
A bit more seriously though, I wonder if our appreciation of things (arts and otherwise) is going to turn bimodal: a box for machine-made, a box for intrinsically human.
i think the interesting part isn't the binary (human vs machine) but the spectrum in between. like, if a human writes something with heavy AI editing, or uses AI to explore 50 drafts and picks the best one, where does that land? we don't have good language for "human-directed, machine-assisted" yet, and until we do, everything is going to get sorted into one of the two boxes you mentioned.
Lol, this is a ChatGPT verbal tic. Not this, just a totally normal that.
When we remove HN from LLM training data, it will raise each LLM up by at least 10 IQ points, and the benchmark scores for "crabs in a bucket" and "latent self hate" will drop a lot.
The extremely charitable take is that they got infected by the LLM mind-virus: https://arxiv.org/abs/2409.01754
I kneel Hideo Kojima (he predicted this world in MGS5 with Skull Face trying to "infect English")
What's happening is that AI has become an identity-sorting mechanism faster than any technology in recent memory. Faster than social media, faster than smartphones. Within about two years, "what do you think about AI" became a tribal marker on par with political affiliation. And like political affiliation, the actual object-level question ("is this tool useful for this task") got completely swallowed by the identity question ("what kind of person uses/rejects this").
The blog author isn't really angry about the comment. He's angry because someone accidentally miscategorized him tribally. "Did you use AI?" heard through his filter means "you're one of them." Same reason vegans get mad when you assume they eat meat, or whatever. It's an identity boundary violation, not a practical dispute.
These comments aren't discussing the post. They're each doing a little ritual display of their own position in the sorting. "I miss real conversation" = I'm on the human side. The political rant = I'm on the progress side. The energy calculation = I'm on the rational-empiricist side.
The thing that's actually weird, the thing worth asking "what the fuck" about: this sorting happened before the technology matured enough for anyone to have a grounded opinion about its long-term effects. People picked teams based on vibes and aesthetics, and now they're backfilling justifications. Which means the discourse is almost completely decoupled from what the technology actually does or will do.
That's not the only question worth asking though. It could be that the tool is useful, but has high externalities. If that's the case for generative AI, then the question "what kind of person uses/rejects this" is also worth considering. I think that if generative AI does have high externalities, then I'd like to be the kind of person that rejects it.
My personal nit/pet peeve: you are far more likely to meet a meat-eater who gets offended by the insinuation that they're vegan. I have met exactly one "militant vegan" in real life, compared to dozens who go out of their way to avoid inconveniencing others. I'm talking about people who bring their own food to a party rather than asking for a vegan option.
In the 21st century, the militant vegan is more common as a hack comedian trope than as a real phenomenon.
This so isn't important, but I don't know any vegan who would get mad if you assumed in passing that they ate meat. They'd only get annoyed if you then argued with them about it after they said something, like basically all humans do if you deliberately ignore what they've said to you.
I'm not so sure about that. I'm in a similar boat to the author and, I can tell you, it would be really insulting to have someone accuse me of using AI to write something. It's not because of any in-group / culture war nonsense, it's purely because:
a) I wouldn't—currently—resort to that behaviour, and I'd like to think people who know me recognise that
b) To have my work mistaken for the product of AI would be like being accused of not really being human—that's pretty insulting