On writing in the age of AI: Transparency, workflows, and principles


Tags: Creativity, GenAI

"A 9th grader didn't write this" – a plagiarism accusation in high school, LLM-assisted editing today with practical examples, and why claims of AI authorship risk becoming a new vector for bias.


Context

Of pride and prejudice

While I'm not aware of any “AI slop” accusations leveled against me, unfair accusations are nothing new to me.

I once had a teacher accuse me of plagiarism. Their assertion, as I recall? “A ninth-grade kid didn't write this.” They reversed their decision after I invoked the school's stated academic integrity policies: “If you're accusing me of academic dishonesty, you have to report me to the principal's office and call my mom. Let's go call her now.” That accusation stung because it came not in spite of but perhaps because of the level of effort I put into the work.

I worry that claims of inappropriate AI use – a legitimate concern in itself – will become a new vector for bias and prejudice.

I have – perhaps due to stereotype threat – increasingly had this uncanny feeling of writing something, reading it back, and thinking, “Someone who doesn't know me well might assume AI wrote this.” Bullet lists and metanoia with parallel structure (“It's not just A, it's B”) – these are ingrained aspects of the writing style I've cultivated.

Of whom I am product

My mom has been, among other things, an English Language Arts teacher. It is for that reason that, for much of high school and all of college, the copy of the MLA Style Guide that she gave me was fixed to my desk. My father, who attained a PhD in Philosophy/Theology, was also a skilled writer and orator. Today, in addition to the utilitarian writing that I do for work, I also write this blog to sharpen and share my thinking.

Why I write

The primary goal of writing, for me, is to turn reflections on my experiences into actionable, shareable insights. So it would be meaningless to even try to have an AI do that for me.

Blogging, for me, is a “small web” activity. Since it isn't a source of revenue for me – there are neither ads nor referral links – there's no monetary incentive to just create content.

💬

The upper bound for my writing output is the pace of my learning.

So if I want to produce more, I need to learn more and capture more of what I’m learning – and that's where I would welcome AI-assisted scaling.

While I probably can't do much to convince folks who are primed to make offhand dismissals for whatever reason, I can show my hand, as it were, about how I'm using LLMs in my writing today.

Where AI fits in

Where I do leverage LLMs is in closing the gap between my final drafts produced in solitude and my first drafts shared with others. When I'm ready for early (human) feedback, I don't want to distract reviewers from the substance of the material with surface-level typos, grammatical errors, or stylistic issues. It's also the point where I've usually stared at the text long enough to go blind to those very issues. LLMs, in my experience, are very good at spotting and suggesting fixes for those categories of problems.

Qualifying AI involvement

I’ve found Daniel Miessler’s AI Influence Level framework for labeling AI involvement in content creation useful for thinking about this categorically. In my process, AI use stays at Level 0 (“Human Created, No AI Involved”) or Level 1 (“Human Created, Minor AI Assistance”). Though if you include traditional grammar-checking tools (think MS Word) under the umbrella of AI, then you could argue that none of it is Level 0.

Level 3 and up, I'd conjecture, is for folks who have monetary incentives tied to their writing. For example, those with affiliate links or courses to sell. I’m willing to operate at those levels for code, particularly for personal projects, but not when teasing insights out of reflections.

Killing one’s darlings

Where I've found AI helpful beyond catching typos is in tightening up my essays.

💡

It takes more effort to publish fewer words

Reducing word count without losing substance means greater impact per word. Doing that without losing your style – a sense of authentic connection to and ownership of the words – is a tricky balance. Which is why I selectively incorporate suggestions – wholesale copying is antithetical to my goals.

Roughly speaking, given a ~1,500-word draft, it usually takes me about an hour to trim ~10% (about 150 words) – and that’s with AI writing assistance 🫣. I've found LLMs useful for flagging wordiness, whether or not I apply their specific suggestions.

And with that context in mind, I'll share how I actually use these tools today.

Workflow

Here are some of the prompts that I use.

Initial prompt

Once I have a decent draft (typically around the 1000-1500 word mark based on my usual post length – ~1300 for this particular essay), I'll usually throw an LLM against it with some variation of the following prompt:

Editorial review

Here's the result of running that prompt through Claude (Opus 4.6) on an earlier draft of this very essay. You may note the mix of error fixes and stylistic suggestions.

Here’s an example of structural feedback that I found helpful:

The prompts are useful to include, but the surrounding prose thins out — consider adding a sentence or two around each prompt block about what kind of feedback you typically get back, or a specific example of a suggestion you accepted vs. rejected.

This section – including the decision to share the whole editorial output – is the result of implementing that feedback. In another case, the draft had the sentence,

My worry is that claims of inappropriate AI use – which is a legitimate concern – will become a new attack surface for bias and prejudice.

Claude’s feedback was,

Consider whether "attack surface" (a security term) is the metaphor you want. It works for a tech audience but might feel slightly jargon-y.

This validated an iteration I was already mulling over: “attack surface” may imply active malice, whereas I think the primary issue will be unconscious/semi-conscious bias – making the “vector” metaphor used in the final draft more apt.

A suggestion on tone I did reject is this,

A few spots veer slightly academic ("antithetical to my goals," "I'd conjecture") where plainer language would land harder.

…That’s just what I sound like.

After iteration

Once I've worked through the feedback and implemented the parts I agree with, I'll throw the updated draft against a prompt such as this:

Perform a check to verify that grammatical, typographical, and stylistic issues have been resolved and the major opportunities to tighten, organize, or otherwise strengthen the draft have been implemented.
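That verification pass lends itself to scripting. Below is a minimal sketch using the Anthropic Python SDK – the model name, function names, and token limit are my own assumptions for illustration, not part of the workflow described above:

```python
# Minimal sketch: scripting the post-iteration verification pass with the
# Anthropic Python SDK (third-party: `pip install anthropic`). The model
# name below is an assumption -- substitute whichever model you actually use.

VERIFY_PROMPT = (
    "Perform a check to verify that grammatical, typographical, and "
    "stylistic issues have been resolved and the major opportunities to "
    "tighten, organize, or otherwise strengthen the draft have been "
    "implemented.\n\n---\n\n"
)


def build_messages(draft: str) -> list:
    """Assemble the single-turn payload: verification prompt, then the draft."""
    return [{"role": "user", "content": VERIFY_PROMPT + draft}]


def run_verification(draft: str, model: str = "claude-sonnet-4-5") -> str:
    """Send the draft for a verification pass and return the model's reply."""
    # Imported here so the sketch still loads without the SDK installed.
    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=build_messages(draft),
    )
    return response.content[0].text
```

In practice you would paste the reply back into your editor and triage its suggestions the same way as any other round of feedback.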

When ready to publish

Then finally, when I have a near-final draft, I'll prompt for

  • Feedback on my working title and suggestions for alternatives.
  • Suggestions for an SEO-friendly meta description.

That’s the workflow of going from rough draft to final draft. Here’s why I felt compelled to write this.

Conclusion

In online discourse, I see people confidently asserting "dead giveaways" and “tells” of AI usage – even in short-form writing like comments, which provides limited data to go on.

And it's not that they are necessarily wrong; it's that these assertions are often made with certainty instead of as probabilities. For instance, the teacher who accused me of plagiarism must have thought there were “tells” and “giveaways” in my paper as well. And yet they turned out to be wrong, and I, the student, would have been the one to pay for it. So it’s with that personal experience in mind that I share these observations:

  • If LLMs tend to produce a certain style, that implies the style was likely prevalent in the training data – meaning there were, and still are, humans who write that way. It's unlikely to be fully emergent behavior exclusive to AI.
  • Being "too good" doesn't mean something is slop. Some people follow style guides or otherwise take care when they write and shouldn't be punished for it.
  • Disagreeing with an argument doesn't mean it's slop.
  • Not liking something doesn’t mean it’s slop.
  • Even being bad doesn't necessarily mean something is slop.

LLMs in general, especially when paired with tools for autonomy like OpenClaw, have led to content of unclear provenance being produced at unprecedented scales. Some of it to manipulate or deceive. I don’t fault readers for wanting quick heuristics to weed out low-effort, low-value work; time and cognitive resources are precious.

Yet we can still temper justified vigilance with consideration that incorrect accusations can have a cost, too. In practice that means:

  • As authors, be transparent about the provenance of our work and diligent about attributions.
  • As readers, try to engage with the substance of what we consume.
    • If it’s bad, say that.
    • If someone is arguing in bad faith or otherwise violating the norms of your community, disengage, correct, or moderate accordingly.
    • Refrain from accusations of slop unless it's literally obvious (e.g., a leaked prompt).
  • Ask yourself: how does the value of the work change with the authorship?