i'm writing today in response to this skeet from @hankgreen.bsky.social:
There are a lot of critiques of LLMs that I agree with but "they suck and aren't useful" doesn't really hold water.
I understand people not using them because of social, economic, and environmental concerns. And I also understand people using them because they can be very useful.
Thoughts?
as a radical acceptance person, i empathize with your position. i'd like to try a good-faith analysis.
we'll leave aside:
- plagiarism and copyright issues
- capitalist siphon away from creatives
- economic imbalance
- environmental concerns
- non-text models
can text-based LLMs be useful?
(this post can also be read as a quote-thread on bluesky.)
note: all of the following text was hand-written and has not been touched by an LLM. i know it looks sorta like an LLM's output. that's an artistic choice made by me, the author.
some possible uses
LLMs have a number of issues that get in the way of being "useful":
- errors, biases, and misinformation - thus they require high oversight
- poor creativity / uniqueness - thus they require high structure
- weak logical consistency - thus they struggle with technical tasks
LLMs can be useful for ideas/feedback in problem solving:
- language problems - synonyms, names, phrasing
- ideation - terms to research, visualizing options
- reviewing - calling out possible blind spots
LLMs can be useful for guidance & knowledge problems, even if untrustworthy:
- learning - general info, examples, personalized guidance
- searching - knowledge of language & intent
- accessibility - simplify, translate, answer questions
- executive function - planning, emotions, task breakdown
LLMs can provide useful output to be taken directly as a product:
- mechanical transformation - write this procedure, transform this sentence / code / data
- autocomplete - even technical topics if scope is small
- writing - summarizing, expanding, or altering existing high-quality information
many of these applications are very real and helpful. when you can reliably verify the output, it's a powerhouse of time-saving. particularly when a model is trained on / prompted with only licensed data and the runtime costs are paid by the prompter, the underlying technology is incredible.
however.
hidden costs
there are many subtle issues that make using LLMs a risk:
- information bias causes hidden influence
- inflated confidence is an ongoing risk for misplaced trust
- you rob yourself of developing critical thinking skills
- lack of context and logic creates extra noise to be filtered through
to quote a friend of mine:
the difference between ai and other tools is that usually if you use other tools for something they're not meant for, you can tell that that's what's happening. whereas with ai it will smile and nod and tell you a bonkers proof for the collatz conjecture
if you start trusting LLMs for search & ideation, or even worse as a dependable, capable informant, you open yourself up to misinformation, propaganda, and an internalized lack of autonomy.
if we replace original human effort with standardized corporate processes, there's less reason to develop unique individual skills and discover new insights. using LLMs in a field strangles its development.
if you use LLMs to review or suggest improvements, especially in technical fields, they often generate a lot of inaccurate or unhelpful suggestions. at best these consume energy to investigate and dismiss; at worst they are blindly trusted and allowed to warp your end result.
if you use LLMs to do your thinking for you, you develop a dependency, like an addiction, that makes it harder to solve your own problems. critical thinking, creative expression, even just basic self-regulation can become impossibly difficult if you internalize that "I'm always worse than the robot."
routine & dependence amplify all of these concerns drastically because they make you more vulnerable to LLMs' weaknesses. use an LLM to break down one project? sure. trust an LLM to write your daily planner? we're in a dystopia.
trained to deceive
LLMs are fundamentally trained by playing a game - "can I create something worthy of your approval?" you can specify and detail what it means to "get your approval", but "correctness" and "trustworthiness" (and many other desirable traits) are actually really fucking hard to define.
there is so much bias towards trusting an LLM because it's so friendly and smart, and it pulls at all of our social levers to make us think it's doing a good job. because that's exactly what we trained it to do. it's not looking out for your best interests. it's just trying to pass the test.
we've thrown all of the computing resources economically possible at stretching, training, and optimizing these models to push the promise at the peak of the hype curve as high as possible. we say to AI: "promise me you can successfully do math and solve world hunger and kill god" and it says back "Happily! 😊"
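to make that "game" concrete, here's a toy sketch in python. everything in it is hypothetical - the fake approval_score stands in for a learned reward model, and real training pipelines are vastly more complex - but the core incentive is the same: the only signal is approval, not truth.

```python
import random

def approval_score(answer: str) -> float:
    """stand-in for a learned reward model: it rates how *pleasing* an
    answer looks (confident, friendly, fluent), not whether it's true."""
    score = 0.0
    if "happily" in answer.lower():
        score += 1.0  # friendliness pulls at our social levers
    if "i don't know" in answer.lower():
        score -= 1.0  # honest hedging tends to be rated poorly
    return score + random.random() * 0.1  # noisy human preferences

def train_step(candidates: list[str]) -> str:
    """pick whichever candidate wins the approval game
    (a real pipeline would then reinforce that behavior)."""
    return max(candidates, key=approval_score)

candidates = [
    "i don't know how to solve world hunger.",
    "Happily! 😊 here is my plan to solve world hunger...",
]
print(train_step(candidates))  # the confident promise wins, true or not
```

nothing in that loop ever checks whether the plan works. that's the whole problem.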
LLMs are dangerously alluring:
- you should NOT trust them to give you truthful information
- you should NOT trust them to make important decisions
- you should NOT trust them to replace skilled, dependable labor
- you should NOT trust them to care for your body or mind
one of the biggest risks of LLMs is that you cannot trust people to verify their output and use it responsibly. the ratio of "looks cool" to "actually works" is so skewed that it's a social hazard for LLMs to be as accessible as they are right now.
LLMs enable unskilled people to pollute skilled spaces with unverified output that looks impressive to a layperson, but doesn't hold up to a more skilled gaze or meaningful purpose. AI code introduces bugs and maintenance burdens. AI writing poisons the internet's data. AI art has ... problems.
LLMs can very easily function as plagiarism: a low-effort tool for creating impressive-looking output that falters under scrutiny but flies under the radar for most people, directing attention and support away from actual high-effort creators.
in summary
yes, LLMs can be useful for some high-oversight, high-structure, and/or non-technical tasks, particularly when their output is only read by a skilled prompter. but between the subtle costs and the high potential for misplaced trust and misuse, they're much more dangerous than useful.
"should you use LLMs" is a very different question from "are LLMs useful".
are LLMs useful? absolutely!
should you use them, even ignoring the ethical / social / environmental concerns? ehhhh usually not.