On my use of AI for writing

Tags
Creativity, GenAI
Published

Context

Who I am (a product of)

A bit about me. My mom has been, among other things, an English Language Arts teacher. It is for that reason that, for much of high school and all of college, the copy of the MLA Style Guide that she gave me was fixed to my desk. My father, who attained a PhD in Philosophy/Theology, was also a skilled writer and orator. Today, in addition to the utilitarian writing that I do for work, I also write this blog to sharpen and share my thinking.

Why I write

The primary goal of writing, for me, is to turn reflections on my experiences into actionable, shareable insights. So it would be meaningless to even try to have an AI do that for me.

Blogging for me is (thus far and likely indefinitely) a “small web” activity, not a commercial one. It isn't a source of revenue for me - there are neither ads nor referral links - so there's no monetary incentive for me to just create "content".

💬 I don’t want/need AI to scale my “content creation” output because the upper bound for that pace is the pace of my learning.

So if I want to produce more, I need to learn more and capture more of what I’m learning – and that's where I would welcome AI-assisted scaling.

Pride and prejudice

While I'm not aware of any “AI slop” accusations being leveled against me, unfair accusations aren't new to me.

I once had a teacher accuse me of plagiarism. His assertion? "A 9th grade kid didn't write this.” The situation was resolved when I invoked the school's own stated policies: "If you're accusing me of academic dishonesty, you have to report me to the principal's office and call my mom. Let's go call her now." That accusation stung because it came not in spite of but perhaps because of the level of effort I put into the work.

My worry is that claims of inappropriate AI use – a legitimate concern in itself – will become a new attack surface for bias and prejudice.

I have – perhaps due to stereotype threat – increasingly had this uncanny feeling of writing something, reading it back, and thinking, "Someone who doesn't know me well would assume AI wrote this." Bullet lists and metanoia with parallel structure ("It's not just A, it's B") – these are ingrained aspects of the writing style that I've cultivated.

While I probably can't do much to convince folks who are primed to make offhand dismissals for whatever reason, I can show my hand, as it were, about how I'm using LLMs in my writing today.

Where AI fits in

Where I do leverage LLMs is in covering the gap between my final solitary drafts and my first shared drafts. When I’m looking for early (human) feedback, it’s usually at a point in my writing lifecycle where I don’t want to distract my reviewers from the substance of the material with typos, grammatical errors, or stylistic weaknesses. It’s also a point where I’ve usually stared at the text long enough that I’ve started to go blind to those very issues. LLMs, in my experience, have been really good at spotting and suggesting fixes for those categories of issues.

A framework for AI involvement

Killing one’s darlings

Work flow

Here are some of the prompts that I'll use.

Initial prompt

Once I have a decent draft (typically around the 1000-1500 word mark based on my usual post length – ~1300 for this particular essay), I'll usually throw an LLM against it with some variation of the following prompt:

After iteration

Once I've worked through the feedback and implemented the suggestions that I agree with, I'll throw the updated draft against a prompt such as this:

Perform a final check to verify that grammatical, typographical, and stylistic issues have been resolved and that the major opportunities to tighten, organize, or otherwise strengthen the draft have been implemented.

When ready to publish

Then finally, when I have a near-final draft, I'll prompt for:

  • Feedback on my working title and suggestions for alternatives
  • Suggestions for an SEO-friendly meta description

Conclusion

In discourse, I see people's willingness to confidently assert "dead giveaways" and tells – even in short-form writing like comments that provide limited data.

And it's not that they are necessarily wrong; it's that these assertions are often made with certainty instead of as sliding probabilities.

  • If LLMs tend to produce a certain style, that implies the style was prevalent in the training data – meaning there were, and still are, humans who write that way. It's unlikely to be some fully emergent behavior used exclusively by AI.
  • Being "too good" doesn't mean something is slop. Some people follow style guides or otherwise take care when they write and shouldn't be punished for it.
  • Being bad doesn't mean something is slop.
  • Disagreeing with an argument doesn't mean it's slop.

LLMs in general, but especially when paired with tools for autonomy like OpenClaw, have led to content being produced at unprecedented scale. Some of it exists to manipulate or deceive. In general, try to engage with the substance of an argument.
