What is happening to writing? Cognitive debt, Claude Code, the space around AI
by benbreen
I won't ever put my name on something written by an LLM, and I will blacklist any site or person I see doing it. If I want to read LLM output I can prompt it myself, subjecting me to it and passing it off as your own is disrespectful.
As the author says, there will certainly be a number of people who decide to play with LLM games or whatever, and content farms will get even more generic while having fewer writing errors, but I don't think that the age of communicating thought, person to person, through text is "over".
It's easy to output LLM junk, but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved. I'm not talking about a 10-turn chat to whip out some junk. I'm talking about deep research and thinking with Opus to develop ideas. Chats where you've pressure-tested every angle, backed it up with data pulled in from a dozen different places, and have intentionally guided it towards an outcome. Opus can take these wildly complex ideas and distill them down into tangible, organized artifacts. It can tune all of that writing to your audience, so they read it in terms they're familiar with.
Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.
Our customers don't care how we communicate internally. They don't care if we waste a bunch of our time rewriting perfectly suitable AI content. They care that we move quickly on solving their problems - AI lets us do that.
> Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.
I find it difficult to skim AI writing. It's persuasive even when there's minimal data. It'll infer or connect things that flow nicely, but simply don't make sense.
To build what, though? I’m truly curious. You talk about researching and developing ideas — what are you doing with it?
I hear stories like this a lot (on here anyway) but I haven't seen any output that backs it up. Any day now I guess.
I don't really understand this retort. I assume most of us work in a professional environment where it's difficult, if not impossible, to share our work.
We've been discussing these types of anecdotes with code patterns, management practices, communication styles, pretty much anything professionally for years. Why are the LLM conversations held to this standard?
Well, because I've worked in different places, and with different organizations, and can see for myself how different approaches to professional conduct manifest in the finished product, or the flexibility of the team, effectiveness of communication, etc.
Especially with things like code and writing, I assess the artifacts: software and prose. These stories of incredible facility of LLMs with code and writing are never accompanied by artifacts that back up these claims. The ones that I can assess don't meet the bar that is being claimed. So everyone who has it working well is keeping it to themselves, and only those with bad-to-mediocre output are publishing them, I am meant to believe? I can't rule it out entirely of course, but I am frustrated at the ongoing demands that I maintain credulity.
FWIW I have sat out many other professional organization and software development trends because I wanted to wait and assess for myself their benefits, which then failed to materialize. That is why I hold LLMs to this standard, I hold all tools to this standard: be useful or be dismissed.
Pretty sure people are trying to prompt ChatGPT to write Brandon Sanderson-like stories, and we'll see their successful print runs any time now.
It's really interesting that I've only seen a few actual pieces of large-scale LLM output by people boasting about it, and most of them (e.g. the trash fire of a "web browser" by Anthropic) are bad.
> but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved
...Which part is impossible? "Writing a bunch of ideas down" was definitely possible before.
I assume if someone used an LLM to write for them that they must not be comfortably familiar with their subject. Writing about something you know well tends to come easily and is usually enjoyable. Why would you use an LLM for that, and how could you be okay with its output?
Writing a first draft may come easy, but there's more to the process than that. An LLM can go from outline to "article" in one step. I can't.
I don't write often, so revising and rewriting is very slow for me. I'm not confident in my writing and it looks clunky to my eye.
I see the appeal, though I want to keep developing my own skills.
> An LLM can go from outline to "article" in one step. I can't.
But the point is that the results tend to be very grating.
> I'm not confident in my writing and it looks clunky to my eye.
AI writing is clunky!
> I don't write often, so revising and rewriting is very slow for me.
This is totally fair, but maybe consider editing the AI output once it's given you a second draft?
I agree entirely. Seeing all the LLM garbage being published made me realize how insecure people are about their writing.
Since realizing, I've been stubbornly improving my own writing and not touching LLMs. Takes a bit of work though.
"maybe consider editing the AI output once it's given you a second draft?".
I would completely rewrite the LLM output. Use it as a researcher or idea generator.
> I assume if someone used an LLM to write for them that they must not be comfortably familiar with their subject.
This statement assumes that the writer is a native speaker of the language in which he writes the text.
If you're not a good enough speaker to write it, you're not good enough to proofread it, either.
some people might be better at prompting an LLM than you
just like when you go to a restaurant to have a chef cook for you when you can cook yourself
a chef can only do so much with a frozen microwave meal
Most restaurants, by volume, these days churn out ultra-processed, mass-marketed slop.
It's true there is the occasional Michelin-starred place or an amazing local farm-to-table place. There is also the occasional excellent use of LLMs. Most LLM output I have to read, though, is straight up spam.
Axios got traction because it heavily condensed news into more scannable content for the Twitter, Insta, TikTok crowd.
So AI is this on massive steroids. It is unsettling, but there seems to be a recurring need to point out that, across the board, many of the "it's because of AI" things were already happening. "Post truth" is one I'm most interested in.
AI condenses it all on a surreal and unsettling timeline. But humans are still humans.
And to me, that means that I will continue to seek out and pay for good writing like The Atlantic. btw I've enjoyed listening to articles via their auto-generated NOA AI voice thing.
Additionally, not all writing serves the same purpose. The article makes these sweeping claims about "all of writing". Gets clicks I guess, but to the point: most of why and what people read serves some immediate and functional need. Like work, like some way to make money, indirectly. Some hack. Some fast-forwarding of "the point". No wonder AI is taking over that job.
And then there's creative expression and connection. And yes I know AI is taking over all the creative industries too. What I'm saying is we've always been separating "the masses" from those that "appreciate real art".
Same story.
> Additionally, not all writing serves the same purpose.
I think this is a really important point and, to add on, there is a lot of writing that is really good, but only in a way that a niche audience can appreciate. Today's AI can basically compete with the low-quality stuff that makes up most of social media, but it can't really compete with higher-quality stuff targeted at a general audience, and it's still nowhere close to some more niche classics.
An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.
I kind of think... there is still something fundamental that would get in the way, but that it is still totally achievable to overcome that some day? I don't think it's impossible for an AI to be creative in a humanlike way, they don't seem optimized for it because they are completely optimized for the sort of analytical mode of reading and writing, not the creative/immersive one.
> An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.
I am sure it could, but then what is the point? Consider this: let's assume that someone did manage to use an LLM to produce a very well-written novel. Would you rather have the novel that the LLM generated (the output), or the prompts and process that lead to that novel?
The moment I know how it's made, the exact prompts and process, I can then have an infinite number of said great novels in 1000 different variations. To me this makes the output way, way less valuable compared to the input. If great novels are cheap to produce, they are no longer novel and become the norm; expectations rise and we will be looking for something new.
I'm inclined to believe that the difference that makes the upper bound of human writing (or creativity) higher than that of an LLM comes from having experiences in the real world. When someone is "inspired" by others' work or is otherwise deriving ideas from them, they inevitably and unavoidably insert their own biases and experiences into their own work, i.e. they also derive from real-world processes. An LLM, however, is derived directly and entirely from others' work, and cannot be influenced by the real world, only a projection of it.
> Would you rather have the novel that the LLM generated (the output), or the prompts and process that lead to that novel?
The "process", in many cases, is not necessarily preferable to the novel. Because an important part of the creative process is real-world experiences (as described above), and the real world is often unpleasant, hard, and complex, I'd often prefer a novel over the source material. Reading Animal Farm is much less unpleasant than being caught in the Spanish Civil War, for example.
I agree with you.
I also think it's a matter of time before we start constructing virtual worlds in which we train AI. Meaning, representations of simulated world-like events, scenarios, scenery, even physics. This will begin with heavy HF, but will move to both synthetic content creation and curation over time.
People will do this because it's interesting and because there's potential to capitalize on the result.
I thought of this in jest, but I now see this as an eventuality.
> People will do this because it's interesting and because there's potential to capitalize on the result.
I don't know why anyone admits to thinking this. For one, there's nothing stopping you from making movies or writing stories now. You're not suddenly going to develop creativity or interesting ideas using LLMs, either.
Also, think it through. If everyone can yell at computer until movie fall out, there will be millions of them and nobody will pay for anything.
I don't want AI content, but there's a market in the belief that people do.
> The "process", in many cases, is not necessarily preferable to the novel. Because an important part of the creative process is real-world experiences (as described above), and the real world is often unpleasant, hard, and complex, I'd often prefer a novel over the source material. Reading Animal Farm is much less unpleasant than being caught in the Spanish Civil War, for example.
I think you misunderstood what I meant by "prompts and process that lead to that novel". I am talking about the process that the "author" used to generate that novel output. I am more interested in the technique that they use. And the moment that technique is known? Then, I can produce billions of War And Peace.
I suppose the argument is that, the moment there's an LLM that can produce a unique and interesting novel, what stops it from generating another billion similarly interesting novels?
> Then, I can produce billions of War And Peace
You cannot and will never lol.
This so fundamentally misunderstands (1) the point of writing a novel and (2) what makes a novel interesting.
A novel isn't just a buncha words slapped together, bing bam slop boom, done.
What makes a novel interesting is the author and the author's choices, like all art. It's the closest you can get to experiencing what it's like to be someone else. You can't generate that, it's specific to a person.
The GP assumes that an LLM is able to write such a novel. So I was working from there. My thesis is that even IF LLMs are able to produce "novelty", it will become the norm and we will simply demand even more exotic novelty.
> An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.
> Today's AI can basically compete with the low-quality stuff that makes up most of social media, but it can't really compete with higher-quality stuff
But compete in what sense? It already wins on volume alone, because LLM writing is much cheaper than human writing. If you search for an explanation of a concept in science, engineering, philosophy, or art, the first result is an AI summary, probably followed by five AI-generated pages that crowded out the source material.
If you get your news on HN, a significant proportion of stories that make it to the top are LLM-generated. If you open a newspaper... a lot of them are using LLMs too. LLM-generated books are ubiquitous on Amazon. So what kind of competition / victory are we talking about? The satisfaction of writing better for an audience of none?
> if you get your news on HN, significant portion that make it to the top are LLM-generated.
You mean this anecdotally I assume.
This makes me think of the split between people who read the article and people who _only_ read the comments. I'm in the second group. I'd say we were preemptive in seeking the ideas and discussion, less so achieving "the point" of the article.
FWIW, AI infiltrates everything, I get that, but there's a difference between engagement with people around ideas and engagement with the content. It's blurry, I know, but it helps to be clear on what we're talking about.
edit: in this way, reading something a particular human wrote is both content engagement and engagement with people around an idea. Lovely. Engaging with content only is something else. Something less satisfying.
There are very few things worth reading submitted to this site. The only meaningful thing I'm glad to have read was the "I sell onions on the internet" blog post. Everything else I've forgotten, mostly VC marketing fluff or dev infighting in open source; hardly anything worth noting.
This place is up there with reddit, it's all lowish calorie info; 90% forgettable, 10% meaningful but you have to dig quite quite deep to find it.
To be fair, it has gotten harder, but when the meaningful stuff does happen, it is hard to beat. Some of the audience can have rather pointed takes. And if it is then somehow topped by 'off the beaten path' guy, it really makes it for me (in the sense that maybe not all is lost quite yet). I still sometimes reel from 'manifest bananas' guy.
>The satisfaction of writing better for an audience of none?
The satisfaction of writing for an engine. The last of what could still be recognized as a real human being writing. There’s no competition with AI, but also no resignation and no fear of being limited compared to the vast knowledge of an LLM. Even in a context of an "audience of none", somewhere there will be a scraper tool interested in my writing. And if it gets hallucinated... wow!
This is the part nobody wants to say out loud: most writing was already bad before LLMs. AI didn't kill good writing, it just made bad writing free. The people who were reading AI-generated slop on Amazon are the same people who were reading ghostwritten garbage before. Good writers still have audiences, they're just not competing in the SEO content farm game anymore. And honestly they never should have been.
<< most writing was already bad before LLMs.
I am not sure this is the problem. The problem, as it were, is that writing muscles will atrophy, and in a year or two we will be looking at those TikTok reels as long-lost havens of enlightenment. Personally, if anything, I write a lot more now, but then I am fascinated by LLMs and how they work, so... I test, and that requires writing. I might be bad, but there is hope I won't need an ugh-to-English LLM translator.
Tens of millions of people, if not hundreds of millions now thanks to the popularity of the television adaptation, have been waiting 15 years for Winds of Winter to get published. If AI is such a good writer and can replace anything, write Winds of Winter for George. I don't really give a shit what's ubiquitous on Amazon. Nobody will remember any of it in a century the way we remember War and Peace. People will remember the Song of Ice and Fire books.
I think it's fine. As said above, most reading isn't done because people are looking for thought-provoking, deeply emotional multi-decade experiences with nearly parasocial relationships to major characters. They're just looking to avoid the existential dread of being alone with their thoughts for more than a few minutes. There's room for both twinkies and filet mignon in the world and filet mignon alone can't feed the entire world anyway. By the same token, if we expected all journalists to write like H.L. Mencken, a lot of people wouldn't get any news, but the world still deserves to have at least a few H.L. Menckens and I don't think they'll have an audience of "none" even if their audience is smaller than Stephenie Meyer or whoever is popular today.
If it were me, I don't know man, does nobody on Hacker News still care about actually being good at anything as opposed to just making sales and having reach? Personally, I'd rather be Anthony Joshua than Jake Paul, even though Jake Paul is richer. Shit, I think Jake Paul himself would rather be Anthony Joshua.
> "Post truth" is one I'm most interested in.
I have this theory that the post-truth era began with the invention of the printing press and gained iteratively more traction with each revolution in information technology.
Doesn't matter when post-truth started because it's now over, and it's more accurate to characterize this era as "post-rationality". Most people do seem to understand this, but we are in different stages of grief about it.
Maybe I’m viewing truth too narrowly, but I feel like the printing press brought us as close as we could come to a “truth era”. Authorship of text, and the friction and cost involved with publishing seems to bend towards transmitting truth. I guess how are you evaluating or measuring truth?
So slightly before 1440 was peak Truth for humanity?
I think you're right, but I also think it's worthwhile to look at Edward Bernays in the early 1900s and his specific influence on how companies and governments to this day deliberately shape public opinion in their favor. There's an argument that his work and the work of his contemporaries was a critical point in the flooding of the collective consciousness with what we would consider propaganda, misinformation, or covert advertising.
> There's an argument that his work and the work of his contemporaries was a critical point in the flooding of the collective consciousness with what we would consider propaganda
I would rather say that Bernays was a keen observer and understood mass behavior and the potential of mass media like no one else in his time. Soren Kierkegaard wrote about the role of public opinion and mass media in the 19th century and had a rather pessimistic outlook on it. You have stuff like the Dreyfus Affair, where mass media already played a role in polarizing people and playing into the ressentiments of the people. There were signs that people were overwhelmed by mass media even before Bernays. I would say that Bernays observed these things and used those observations to develop systematic methods for influencing the masses. The problem was already there; Bernays just exploited it systematically.
Same. The New Yorker is the other mag I subscribed to.
Until 3 weeks ago I had a high-cortisol morning read: NYT, WSJ, Axios, Politico. I went on a weeklong camping trip with no phone and haven't logged into those yet. It's fine.
People think I'm nuts when I tell them I ditched subscriptions for those sites and only check them maybe once a week, if that.
But what you said is 100% true, it's fine. When things in your life provide net negative value it's in your best interest to ditch them.
> When things in your life provide net negative value it's in your best interest to ditch them.
Let's ditch politicians. :-)
I agree with this in general but with caveats. For example, I think reading national-sized news every day sucks. But if you're of a specific demographic it might be useful to keep pretty up to date on nuanced issues; like if you're a gun owner you will probably want to keep up to date on gun licensing in your area. Or if you're a trans person it's pretty important nowadays to be very aware of laws being passed that dictate which bathroom you can legally use, or something.
[flagged]
you don't need any of the mentioned periodicals for that.
Fair point. But I was addressing leaving the phone at home to "check out". Because without a phone you'll just have to hope you see the masked men before they see you.
"Is Claude Code junk food, though? ... although I have barely written a line of code on my own, the cognitive work of learning the architecture — developing a new epistemological framework for “how developers think” — feels real."
Might this also apply to learning about writing? If I have barely written a line of prose on my own, but spent a year generating a large corpus of it aided by these fabulous machines, might I also come to understand "how writers think"?
I love the later description of writing as a "special, irreplaceable form of thinking forged from solitary perception and [enormous amounts of] labor", where “style isn’t something you apply later; it’s embedded in your perception" (according to Amis). Could such a statement ever apply to something as crass as software development?
My current bugbear is how art is held up as creativity and worthy of societal protection, with scorn for AI muscling in on it,
while the same people in the same comments say it's fine to replace programming with it.
When pressed they talk about creativity, as if software development has none…
I haven't heard writers take any kind of stance on software engineering, but Brandon Sanderson has very publicly renounced AI writing because it lacks any kind of authentic journey of an author's own writing. Just as we would cringe at our first software projects, he cringes at his first published novel.
I think that's a reasonable argument to make against generative art in any form.
However, he does celebrate LLM advancements in health and accessibility, and I've seen most "AI haters" handwave away its use there. It's a weird dissonance to me too that its use is perfectly okay if it helps your grandparents live a longer, higher-quality life, but not okay if your grandparents use that longer life to write an AI-assisted novel that Brandon would want to read.
The easiest job to automate is someone else’s.
Art has two facets. First is if you like it. If you do, you don't need to care where it came from. Second is the art as cultured and defined by the artistic elites. They don't care if art is liked or likable; they care about the pedigree, i.e. where it came from, and that it fits what they consider worthy art. Between these two is what I call filler art: stuff that's rather indifferent and not very notable, but often crosses some minimum bar such that it's accepted by, and maybe popular among, average people who aren't that seriously interested in art.
In the first category, AI is no problem. If you enjoy what you see or hear, it doesn't make a difference which kind of artist, or AI, created it. In the second category, for the elite, AI art is no less unacceptable than current popular art or, for that matter, anything at all that doesn't fit their own definition of real art. Makes no difference. Then the filler art... the bar there is not very high, but it will likely improve with AI. It's nothing that's been seriously invested in so far, and it's cheaper to let AI create it rather than have poorly paid people do it.
Commercial art has literally nothing to do with art, and everything to do with commerce. Art is not stored in freeport bunkers and used as collateral for loans.
All art aspires to the condition of music. It evokes an emotional reaction. If it does that, it doesn't matter where it came from.
> If it does that, it doesn't matter where it came from.
Personally, it matters to me quite a lot where art comes from, especially music. I have a hard time "separating the art from the artist". If I find out a musician is a creep/abuser/rapist, I can't enjoy their music anymore.
This belief obviously isn't widespread given artists like Michael Jackson, Chris Brown, R. Kelly, and Jimmy Page are still wildly popular. But I assume I'm not alone in this.
As for AI music, it's hard for me to imagine an "AI Musician" ever becoming very popular because I reckon most humans want some human-ness in their music. And I think if an existing artist ever put out AI music as their own, they'd lose some fans pretty quickly.
No, fair point. I'm the same, I can't enjoy the music if I know the artist is not a good person. Though I do think this gets taken too far; I can enjoy Pink Floyd even though I have huge disagreements with Roger Waters' politics.
I'm not sure I could tell the difference between AI and human music already. In a few years I'm pretty sure I couldn't. This is the bit where I'm not sure it matters. I mostly listen to music for the nostalgic emotions now anyway.
My dude, there is no artistic elite deciding what art is. I think you just don't understand the critiques around this topic, and so it sounds like snobbery ("real art") to you
a lot of artists don't mind using AI for art outside their field
I was in a fashion show in Tokyo in 2024.
I noticed their fashion was all human-designed, but they had a lot of posters, video, and music that was AI-generated.
I point-blank asked the curator why he used AI for some stuff but didn't enhance the fashion with AI. I was a bit naive, because I was actually curious to see whether AI wasn't ready for fashion or maybe they were going for an aesthetic. I genuinely was trying to learn and not point out a hypocrisy.
He got mad and didn't answer. I guess it's because they didn't want to pay for everything else. Big lesson learned in what to ask, lol.
How do you know he used AI in one area but not another?
Maybe that's because AI "art" looks just as cringe as written AI slop.
Thank you, this sort of insight is exactly why I've felt such kinship with what software engineers like Karpathy and Simon Willison have been writing lately. It seems obvious to me that there is something special and irreplaceable about the thought processes that create good code.
However, I think there is also something qualitatively different about how work is done in these two domains.
Example: refactoring a codebase is not really analogous to revising a nonfiction book, even though they both involve rewriting of a sort. Even before AI, the former used far more tooling and automated processes. There is, e.g., no ESLint for prose which can tell you which sentences are going to fail to "compile" (i.e., fail to make sense to a reader).
The special taste or skillset of a programmer seems to me to involve systems thinking and tool use in a different way than the special taste of a writer, which is more about transmuting personal life experiences and tacit knowledge into words, even if tools (word processor) and systems (editors, informants, primary sources) are used along the way.
Sort of half-formed ideas here, but I find this a really rich vein of thought to work through. And one of the points of my post is that writing is about thinking in public and with a readership. Many thanks for helping me do that.
I don't have a good answer to your question, but I do think it might be comparable, yes. If you had good taste about what to get Opus 4.6 to write, and kept iterating on it in a way that exposes the results to public view, I think you'd definitely develop a more fine grained sense of the epistemological perspective of a writer. But you wouldn't be one any more than I'm a software developer just because I've had Claude Code make a lot of GitHub commits lately (if anyone's interested: https://github.com/benjaminbreen).
> Could such a statement ever apply to something as crass as software development?
Absolutely. I think like a Python programmer, a very specific kind of Python programmer after a decade of hard lessons from misusing the freedom it gives you in just about every way possible.
I carry that with me in how I approach C++ and other languages. And then I learned some hard lessons in C++ that informed my Python.
The tools you have available definitely inform how you think. As your thinking evolves, so does your own style. It's not just the tool, mind, but also the kinds of things you use it for.
"My AI usage is justified, but what others are doing is generating slop."
I'm still waiting for a famous person to say this so we can have a name for this psychological phenomenon.
I overheard a conversation between a uni professor and a PhD student the other day. The professor was complaining that 99% of his students use ChatGPT to write essays. He seemed genuinely distressed about the effect this was having on all of them.
Not surprised, I work in Academia and there is a push from the Business side to start marking essays and performing lectures with ChatGPT/AI.
I have my own personal reservations about it all.
I predict the future of the web will be no-bots-allowed spaces, a bit like Discord is right now. Once you know what you are reading was created by a bot, it loses all its appeal, except when it's purely informational.
I can't wait for the reverse effect to happen, where everyone starts sounding like a large language model... a true singularity where AI colonizes the noosphere instead of Earth.
Why can't you wait for that? I picture that as everyone sounding the same
Maybe I'm a psycho who would enjoy the ease of manipulating a uniformly thinking populace, or maybe I was being a bit facetious, because that sort of world is the one we are on track towards, and it sounds even more like hell than the world we currently live in.
> Anyone who has led a class discussion — much less led students on a tour of Egypt or Okinawa, as my colleagues regularly do — knows that there is a huge gap between solo learning online and collective learning in meat space
One thing this author misses, which I fear, is that it may become less important in the eyes of stakeholders to educate the masses when they have LLMs to do jobs instead. That is, it is fully possible that one of the futures we may see is one where education declines because it is perceived as not important for most. Yes, meat-space education may be better, but who decides if it is necessary?
Maybe vocational schools become more important instead? Jobs where you, for all intents and purposes, build out the infrastructure for the tertiary industry, mostly automated by LLMs.
You may disagree with this, but the key here is to realize that even if we disagree, others don't. Education is also power; there's a perverse incentive to avoid educating people and to feed them your narrative of how the world works instead. We are very much possibly on the way towards a Buy-n-Large-style future.
This type of cadence.
You know the one.
Choppy. Fast. Saying nothing at all.
It's not just boring and disjointed. It's full-on slop via human-adjacent mimicry.
Let’s get very clear, very grounded, and very unsentimental for a moment.
The contrast to good writing is brutal, and not in a poetic way. In a teeth-on-edge, stomach-dropping way. The dissonance is violent.
Here's the raw truth:
It’s not wisdom. It’s not professional. It’s not even particularly original.
You are very right to be angry. Brands picking soulless drivel over real human creatives.
And now we finish with a pseudo-deep confirmation of your bias.
---
Before long everyone will be used to it and it'll evoke the same eugh response
Sometimes standing out or quality writing doesn't actually matter. Let AI do that part.
Only thing missing is "The raw truth?" instead of "Here's the raw truth:".
the LinkedIn register of English
I don't really remember Claude 3.5 doing this, but it seems increasingly worse, with 4.6 being so bad I don't like using it for brainstorming. My shitty idea isn't "genuinely elegant".
Why would anyone get sick of it if people have been happily doing it to each other for so many years prior?
Does the fact that a machine can ape it so easily somehow reveal its vacuousness in a way that wasn't obvious already?
I keep hearing people with job titles like "SEO growth hacker" saying it's depressing that AI can do their jobs better than they can.
Really? That's the depressing part?
No worse than "junior developer" assuming they were looking to move on from it
Writing SEO content for random sites was of course the lowest skilled writing job. Ideally they'd have higher aspirations than that though.
Maybe those people didn't even want to be writers. They just wanted an easy job.
This is what I don't grok...
Your sample sounds exactly like an LLM. (If you wrote it yourself, kudos.)
But, it needn't sound like this. For example, I can have Opus rewrite that block of text into something far more elegant (see below).
It's like everyone has a new electric guitar with the cheapo included pedal, and everyone is complaining that their instruments all sound the same. Well, no shit. Get rid of the freebie cheapo pedal and explore some of the more sophisticated sounds the instrument can make.
----
There is a particular cadence that has become unmistakable: clipped sentences, stacked like bricks without mortar, each one arriving with the false authority of an aphorism while carrying none of the weight. It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.
Set this against writing that breathes, prose with genuine rhythm, with the courage to sustain a sentence long enough to discover something unexpected within it, and the difference is not subtle. It is the difference between a voice and an echo, between a face and a mask that almost passes for one.
What masquerades as wisdom here is really only pattern. What presents itself as professionalism is only smoothness. And what feels, for a fleeting moment, like originality is simply the recombination of familiar gestures, performed with enough confidence to delay recognition of their emptiness.
The frustration this provokes is earned. There is something genuinely dispiriting about watching institutions reach for the synthetic when the real thing, imperfect, particular, alive, remains within arm's length. That so many have made this choice is not a reflection on the craft of writing. It is a reflection on the poverty of attention being paid to it.
And if all of this sounds like it arrives at a convenient conclusion, one that merely flatters the reader's existing suspicion, well, perhaps that too is worth sitting with a moment longer than is comfortable.
----
(prompt used: I want you to revise [pasted in your text], making it elegant and flowing with a mature literary-style. The point of this exercise is to demonstrate how this sample text -- held up as an example of the stilted LLM style -- can easily be made into something more beautiful with a creative prompt. Avoid gramatical constructions that call for m-dashes.)
>It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.
It still can't help itself from doing "it's not X it's Y". Changing the em-dash to a semi-colon is just lipstick
Yep. But that prompt I used was just a quirky one. You can explicitly force it to avoid THAT structure as well. Just do what the smart (i.e., devious) middle-schoolers do: find a list of all the tell-tale 'marks' of AI content, and explicitly include them as prohibitions in your prompt... it's the most basic workaround to the 'AI spotters' the teacher uses for grading your essay. (And, of course, be sure to include an instruction to include a grammatical or spelling error every few sentences for added realism.)
You're right, a lot of the style can be changed from its default. I don't think you can get rid of the soulless aspect though - the lack of underlying relatable consistency.
Especially once you go past a page or two.
When you get to the actual content so much of it just doesn't make sense past a superficial glance
Soulless drivel is very accurate
well done. :)
and at the same time the chop becomes long-form slop, stretching out a little seed of a human prompt into a sea of inane prose.
The "cognitive debt" framing is slightly mislocated. The debt isn't from using AI — it's from confusing editorial fluency with generative fluency.
When you write, you discover what you think in the act of arranging words. That's why writing feels hard — it's thinking, not the output of thinking. When you prompt an AI and refine its output, you're doing editorial work, which builds different muscles. You get better at recognizing good prose without getting better at producing it.
That's the real debt: the growing gap between your ability to evaluate writing and your ability to generate it. Same thing happens when programmers who only use AI-assisted code generation start losing the ability to reason about systems from scratch.
Are you an LLM yourself?
As much as the general public seems to be turning against AI, people only seem to care when they're aware it's AI. Those of us intentionally aware of it are better tuned to identify LLM-speak and generated slop.
Most human writing isn't good. Take LinkedIn, for example. It didn't suddenly become bad because of LLM-slop posts - humans pioneered its now-ubiquitous style. And now even when something is human-written, we're already seeing humans absorb linguistic patterns common to LLM writing. That said, I'm confident slop from any platform with user-generated content will eventually fade away from my feeds because the algorithms will pick up on that as a signal. (edit to add from my feeds)
What concerns me most is that there's absolutely no way this isn't detrimental to students. While AI can be a tool in STEM, I'm hearing from teachers among family and friends that everything students write is from an LLM.
Leaning on AI to write code I'd otherwise write myself might be a slight net negative on my ability to write future code - but brains are elastic enough that I could close an n-month gap in n/2 months' time or something.
From middle school to university, students are doing everything for the first time, and there's no recovering habits or memories that never formed in the first place. They made the ACT easier 2 years ago (reduced # of questions) and in the US the average score has set a new record low every year since then. Not only is there no clear path to improvement, there's an even clearer path to things getting worse.
I spent several years trying to get ground truth out of digital medical records and I would draw this parallel to AI slop:
With traditional medical records, you could see what the practitioner did and covered because only that was in the record.
With computerized records, the intent, the thought process, most of the signal you would use to validate internal consistency, was hidden behind a wall of boilerplate and formality that armored the record against scrutiny.
Bad writing on LinkedIn is self-evident. Everything about it stinks.
AI slop is like a Trojan Horse for weak, undeveloped thoughts. They look finished, so they sneak into your field of view and consume whatever additional attention is required to finally realize that despite the slick packaging, this too is trash.
So "AI slop," in this worldview, is a complaint that historical signals of quality based simply on form are no longer useful gatekeepers for attention.
re: traditional vs electronic medical records, if you haven't read Seeing Like a State, I highly recommend checking it out. The book is all about the unexpected side effects of improving the legibility of information for decision makers - these attempts can erase or elide important local detail, which ultimately sabotages the bureaucracy's aim of improving the system.
Did we lose something when we invented the calculator and stopped teaching the times table in schools? There have been millions of words discussing this, and the general consensus amongst us crusty old folks was that yes, the times table was useful and losing the ability to do mental arithmetic easily would be bad.
Turns out we were wrong. Everyone carries a calculator now on their phone, even me. Doing simple maths is a matter of moments on the calculator app, and it's rare that I find myself doing the mental arithmetic that used to be common.
I can't remember phone numbers any more. I used to have a good 50+ memorised, now I can barely remember my own. But the point is that I don't need to any more. We have machines for that.
Do we need to be able to write an essay? I have never written one outside of an educational context. And no, this post does not count as an essay.
I was expelled from two kindergartens as a kid. I was finally moved to a Montessori school where they taught individually by following our interests, and there I thrived. Later, I moved back into a more conventional educational environment and I fucking hated every minute of it. I definitely learned despite my education, not because of it. So if LLMs are about to completely disrupt education, then I celebrate that. This is a good thing. Giving every kid a personal tutor that can follow their interests and teach them things that they actually want to learn, at the pace they want to learn them, is fucking awesome.
Any competent thinker should be able to structure an argument and present it in written form, that's an important skill to have.
If someone is unable to write an essay arguing something, unable to articulate complex thoughts and back them up with evidence, what does that indicate about their thinking?
I don't write essays either, but I'm sure I could. And maybe some of those docs or emails I write at work are made more effective by that.
There are literally hundreds of millions of people in the Anglosphere who have graduated from their education unable to coherently structure an argument and present it in written form.
It indicates nothing about their thinking. One of the smartest people I've known left school at 14 and couldn't read or write.
We mistake education for intelligence often. We mistake erudition for capability often. The thing you need to get a PhD is not intelligence, but the ability to follow directions and persevere. You certainly don't need to have any original thoughts, in fact they will only get in your way.
Being able to read or write, if given an opportunity to learn, certainly IS a marker of intelligence. That’s not a very high bar to pass considering toddlers can usually read. But it’s obviously not the only way to measure intelligence!
You claim the smartest person you ever met couldn’t read or write. So what kind of smarts did this person have? Genuinely curious. A really good memory? Emotional intelligence? Extremely persuasive?
I knew him for about 3 months, hung out with him regularly, before I figured out that he couldn't read. He was very good at manipulating the conversation to make me read things for him without me guessing it.
He paid off his mortgage by his mid-30's.
He taught himself to read and write alongside his eldest daughter when she learned. Keeping up with a kid while learning an entirely new skill is no minor thing.
He built his own house in the corner of a field without planning permission so that no-one knew he was there, and lived in it for long enough that he then didn't need planning permission.
He effectively retired in his 40's, and keeps bees for fun.
They call it "street smarts". He has it in spades. Also just genuinely a fun person to be around.
Thanks for sharing. Sounds like a very disciplined person with lots of street smarts, as you say.
> One of the smartest people I've known left school at 14 and couldn't read or write.
Really? What a bummer. Imagine how much more they could've done if the education system had taught them to read and write.
Maybe. But the education system was never designed for this kind of person. Challenging authority and doing things in unconventional ways is not tolerated in school. I think he dodged a bullet.
Calculators are good. But we still teach times tables and long division and prohibit calculators until kids learn how to do it the “hard way.”
We can’t give a generation of kindergarteners calculators and expect them to produce new math when they’re adults: how will they ever form mathematical problem solving skills?
I think the same principle applies for LLMs - they can be a tool but learning how to do things without them is still essential. Otherwise we might not have any more good authors in 10 years.
Before CAD, engineers had to draw designs on drafting boards. Similar concept here, I believe most classes still find it valuable for students to start with pencil and paper and grasp something at its most fundamental level, even if obsolete, before moving on to modern tools.
LLMs (and calculators, and CAD) should be used as a tool once the underlying mechanisms and skills are understood by its user; otherwise it's like driving a car without knowing how to replace a flat tire. Sure you can call AAA, but eventually if nobody learns to change a tire with their own two hands, humanity won't be able to drive. This is obviously hyperbole, but I hope it illustrates my point.
I’m fairly confident LLMs will be a net positive on society in the long run, just as calculators have been. But just like calculators are restricted at certain times in math classes, LLMs should be restricted in writing classes.
> We can’t give a generation of kindergarteners calculators and expect them to produce new math when they’re adults: how will they ever form mathematical problem solving skills?
Arithmetic has nothing, literally nothing, to do with "new maths". A calculator won't help you with algebra, or shortcut any mathematical problem solving. It will just help you with dividing up the restaurant bill, which is the hardest maths problem the vast majority of humans will encounter.
> I think the same principle applies for LLMs - they can be a tool but learning how to do things without them is still essential. Otherwise we might not have any more good authors in 10 years.
Did we stop having any good new portrait painters once we'd invented the camera? The people who really want to write a book will still write a book.
About the article that's referenced in the beginning: the sentiment presented in it honestly sounds like the AI version of cryptocurrency euphoria just as the bubble burst. "You are not ready for what's going to happen to the economy", "crypto will replace tradfi, experts agree". The article is sitting at almost 100M views after just a week and has strong FOMO vibes. To be honest, it's very conflicting for me to believe that, because I've been using AI, and compared to crypto, it doesn't just feel like magic, it also does magic. However, I can't help but think of this parallel and the possibility that somehow the AI bubble could right now be starting to stall/regress. The only problem is that I just don't see how such a scenario would play out, given how good and useful these tools are.
The same snobs who were telling us that "The Old Man and the Sea" (written in the style of a fifth-grader) is 'art'...
the same people telling us that "Finnegans Wake" (written in the style of a fifth-grader with a brain injury) is 'art'...
the same people telling us the poetry of Maya Angelou (written in the style of a fifth-grader with a brain injury and self-esteem issues) is 'art'...
the same people telling us that the works of Jackson Pollock, Mark Rothko, Piet Mondrian, etc., etc. are 'art'...
seem to be the ones complaining the most about AI generated content.
Those are all art, though? Your insults to them don't make them not.
I wonder whether we will see a shift back toward human-generated, organic content: writing that is not perfectly polished or exhaustively articulated. For an LLM, it is effortless to smooth every edge and fully flesh out every thought. For humans, it is not.
After two years of reading increasing amounts of LLM-generated text, I find myself appreciating something different: concise, slightly rough writing that is not optimized to perfection, but clearly written by another human being.
If LLMs presently aren't capable of matching the style quirks you're describing, isn't it likely they'll be able to in the near future? To me this feels like a problem that'll either need to be addressed legally or left to authors to somehow convince their audiences to trust that their work is their own.
I think people hate AI-generated writing more than they like human-curated writing. At the same time, I find that people like AI content more than my writing. I write, comment, and blog in many different places, and I notice that my AI-generated content does much better in terms of engagement. I'm not a writer, I code, so it might be that my writing is not professional. Whereas my code-by-hand still edges out AI.
We need to value human content more. I find that many real people eventually get banned while the bots are always forced to follow rules. The Dead Internet hypothesis sounds more inevitable under these conditions.
Indeed we all now have a neuron that fires every time we sense AI content. However, maybe we need to train another neuron that activates when content is genuine.
How do you know if your engagement was by real humans or not? I'd also assume bot traffic is way more accepted on platforms like Facebook, Instagram, and Twitter. Especially any Meta owned platform, they have a history of lying to people about numbers and were never punished for it:
https://en.wikipedia.org/wiki/Pivot_to_video#Facebook_metric...
I agree with the assessment that pure writing (by a human) is over. Content is going to matter a lot more.
It's going to be tough for fiction authors to break through. Sadly, I don't think the average consumer has sufficiently good taste to tell when something is genuinely novel. People often prefer the carefully formulated familiar garbage over the creative gems; this was true before AI and, IMO, will continue to be true after AI. This is not just about writing, it's about art in general.
There will be a subset of people who can see through the form and see substance and those will be able to identify non-AI work but they will continue to be a minority. The masses will happily consume the slop. The masses have poor taste and they're more interested in "comfort food" ideas than actually novel ideas. Novelty just doesn't do it for them. Most people are not curious, new ideas don't interest them. These people will live and breathe AI slop and they will feel uncomfortable if presented with new material, even if wrapped in a layer of AI (e.g. human-written core ideas, rewritten by AI).
I feel like that about most books, music and pop culture in general; it was slop and it will continue to be slop... It was the same basic ideas about elves, dragons, wizards, orcs, kings, queens, etc... Just reorganized and mashed with different overarching storylines "a difficult journey" or "epic battles" with different wording.
Most people don't understand the difference between pure AI-generated content (seeded by a small human input) and human-generated content which was rewritten by AI (seeded by a large human input) because most people don't care about and never cared about substance. Their entire lives may be about form over substance.
Who or what is "the masses" actually?
Reminded of this clip.
https://www.youtube.com/watch?v=KHJbSvidohg
But as much as it pains me to admit... the current state of America is the slopocalypse. A slopalanche. A slopnado. AI cats waking people up in the middle of the night, blasting down doors, glitching out. All produced by slop-slingers. It's rather bleak for long form attention content, human created or not.
It's a war of/on attention. A war to secure your attention during the time that you would otherwise think for yourself. Keep off the short-form content, is my advice.
What is the difference between writing and content?
I would guess he's looking to compare the equivalent of fast-food to fine-dining or nutritious eating.
Yeah, I meant 'content' in terms of the intrinsic value, the 'nutritional value' underlying the writing... The message, the story, the information content.
Actually, it's kind of dystopian to think about: the word 'content' has been appropriated to refer to an arbitrarily broad range of media products...
The word 'content' used to be associated with the word 'substance', but in a modern context it's actually more closely associated with the concept of 'form', as the word emphasizes a variety of media... What happened to the term "multimedia"? IMO that is what some people actually mean when they say content... I mean, there's no content in 'content'... It's empty, it's all smoke and mirrors.
[dead]
That is a shallow piece of the new genre: I am a concerned academic who nevertheless uses these new tools to create vibe coded slop and has to tell the world about it.
Everything is inevitable but my own job is secure. Have I already told you how concerned I am?
No novelty. No intellectual challenge. No spirit. Just AI advertisements! /s
[flagged]
You vibe-coded a computer-vision product that is to be used in monitoring industrial cranes? And people are using it?
The account is 47 minutes old and with the writing style plus the hefty dose of em dashes, I think they are an LLM.
Appreciate this take. It makes a lot of sense, and I can see this happening all over right now.
[flagged]
That crosses into personal attack, which is not allowed here, regardless of how bad someone's website is or you feel it is.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
[flagged]
[dead]
My spidey-sense: the "it isn't X, it's Y" construct and the dreaded em dash.
[dead]
It's interesting how much this comment feels like it's AI-written.
If it isn't, then it has seeped into your writing style, and it's quite a turn-off as a reader; I don't care much to engage.
If it is, then why should I read it? Why come to this website and even bother reading AI bot comments?
what is happening to writing indeed.
I think it's a joke comment intended to be as stereotypically AI as possible. It even has the em-dash!
It’s 100% AI. lol it might even be a bot
I had this worry at first, but at this point we have hundreds of years of books written using legacy methods; the best of what was possible already exists. It's time for a change.
In the near future we will not even need to read anyway.
For hundreds of years we've avoided eating rocks, just based on so-called "conventional wisdom". Witness all the problems we now have in the world. Well I, for one, am ready for a change. It's time to do things differently. If you're fed up with the status quo, it's time to start eating rocks.