11 Comments
Kent:

The phrase "writing with AI" is ambiguous between two meanings: (A) doing all the actual writing yourself, and only using AI as a critic to improve your writing; (B) prompting AI to say some stuff and calling that your writing.

You are saying (A) is good. And it is! It makes sense that it would improve your writing. But the people who pledge never to write with AI are talking about (B). I approve of everyone who takes that pledge, and I imagine that you do as well.

Quico Toro:

But that distinction is too neat by half. Sometimes the robot comes up with a formulation that I quickly recognize as better than what I'd written in an earlier draft. Am I supposed to excise it from the finished essay because the machine came up with it? That doesn't seem right. More often I'll tweak it instead, and now it's my tweak on a machine improvement on a clumsily expressed idea I'd put in an early draft. Which the machine will then correct, and whose correction I'll tweak again. After the 18th round of this kind of back-and-forth, the ideas have become hybridized to the point where, if you ask me which partner originally put together a given phrase, my honest answer will often be "I can't remember."

Deferring to the robot without thinking is inadmissible. But refusing to defer to it when your own experience as a writer is screaming at you that it’s on to something is perverse.

Kent:

In that case I think I just might disagree with you, though I'm not sure. Hear me out; it's just a possibility to consider.

What if the chatbot always (or even mostly) suggests things that pull the human writer toward the ordinary, the mediocre, the seen-a-thousand-times-before in the data set? What if relying on the chatbot rather than thinking for oneself degrades the ability to think for oneself? What if taking a suggestion from a chatbot makes it impossible for you to come up with a more interesting, more original idea, one that would have come to you if you had sat in your own discomfort for another few minutes?

Everything done is at the same time a thing not done; a choice to do X is a choice not to do Y. "This is better than what I would have come up with" is a counterfactual claim that you can never test, because you're never going to write what you would have come up with.

Maybe you're right and it's all good with no downside. But I think there are risks here, and it's not obvious to me that you (or anyone else) can be aware of what they are.

I do thank you for posting this, both the original piece and your reply to my reply; it's interesting to think through.

Quico Toro:

"What if the chatbot always (or even mostly) suggests things that pull the human writer toward the ordinary, the mediocre, the seen-a-thousand-times-before in the data set?"

Well, in that case I wouldn't use it!

Kent (edited):

I guess you're much more intelligent than I am, then. I don't know that I would be able to tell, quickly and easily, what is good thinking versus what merely seems good because I've seen the pattern a thousand times before. I have to sit with my thoughts and really let them plague me before I can even start to make those kinds of judgments.

I guess in the long run the proof will be in the pudding: how good is your writing going to turn out to be? Are you going to amaze people?

Good luck. Honestly, good luck to you. We could all use a world where people are smarter rather than stupider. If AI can help us get there, I'll be delighted and surprised. I hope I'll be mature enough to be happy that I was wrong to be skeptical.

Edited to add: I have not played with the chatbots in the way you suggest. Maybe it'll work better for me than I expect. Maybe I'll learn something. Thanks for your optimism.

Quico Toro:

I don't think this is about intelligence so much as experience. I've spent my whole career as a writer: hours and hours, day after day, year after year, honing this one skill. What skill have you spent a long time developing? Whatever it is, I bet you can use an LLM to help you with it.

Quinn Que ❁:

I love this essay so much. Couldn't agree more.

Andrew Currall:

Hmm.

I have vibe-coded four games using Claude.

None of them is actually "good". But the problems with them are entirely down to the design and ideas, not the implementation.

I have also vibe-coded countless little, mostly one-and-done data manipulation utilities. They are very good: well-structured and very efficient.

In none of these cases did I ever look at or edit any of Claude's code, except for a few filenames near the top of some of the utilities.

Now, I am definitely not a software developer. I've never worked in software development and never published, or been involved in publishing, an application of any kind. But I am a coder; I used to spend ~20% of my working day writing code (it's now more like 5%; I prompt Claude instead). I'm not sure how, or whether, this helps me get Claude to work well. There is certainly a bit of a learning curve, though; my very first attempt to vibe-code a game with Claude did not work terribly well.

Ulysses Outis:

Thank you for being an intelligent person, as usual. There are not many left. And it is not because of AI.

Terzah Becker:

I love this comparison! My husband is a coder and I am a writer. I never thought to use AI the way he does, but I will. The only kind of writing I've used it for is when I have to write something bland and corporate for my day job. That proves your point about bad writing... but it also saves me time and boredom.

Brian Charlebois:

Thank you. Now I know exactly how badly I am screwed.

At one time I was kind of hoping that AI would be helpful in writing, but I was slowly coming to the same conclusion.

Once again, you are reaffirming my beliefs.