mjk didn't have anything to add to the socioeconomic conversation around LLM usage, to the point of even being a bit tone deaf (especially if 100R were part of his intended audience).
A bit more open, constructive conversation around mjk's post on Mastodon would've helped more people understand the 100R philosophy more intimately; but... it seems the battle lines are drawn. Who wins/loses?
There's some semi-apologetic interest in ML, esp. smaller local models, in the "permacomputing" (don't like the term but whatev) sphere. But I don't know if there's much of a conversation around LLMs. With all the hype and how resource-intensive and externalities-heavy they are, I can see wanting to draw a line, but it's sad to see it become a purity test.
Lately the discussion around this has had me thinking of the William Köttke quote: "not only is it ethical to use the resources of the current system to construct the next one; ideally, all the resources of the current system would be used to that end".
I think that if the situation were as dire as it's made out to be (I think it is), and projects like uxn were a serious attempt at a mitigating response (less convinced, as cool as they are), there'd be room for a conversation about beneficial vs. detrimental (rather than good vs. evil). Then we could discuss whether it's a good idea to use LLM-based tools, when they're available, to help build out infrastructure that runs without them; whether there's nuance in where we draw the line on automation (Ivan Illich, tools vs. machines, etc.); human augmentation vs. replacement; the cognitive-load stuff Keeter's post touches on; and so on.
Unfortunately, part of the polycrisis seems to be a difficulty in discussing things clearly.
> a bit tone deaf
Agree