There’s something a bit odd happening in academia right now, and I can’t quite shake it off.

We’re being told – quite rightly, I think – that we need to declare when we use AI tools to help write grant applications. Transparency matters. I get that. The funders have been clear in their joint statement on generative AI tools: if you use these tools, you must cite and acknowledge them.

But here’s the thing that’s been niggling at me: many universities now employ entire teams of professional bid writers. These are specialists whose job is to help academics craft compelling proposals. They’ll restructure your arguments, polish your prose, suggest better ways to frame your impact. Sometimes they’ll practically rewrite whole sections.

And that’s considered best practice. No disclosure required. In fact, it’s often encouraged.

So we have two scenarios:

Scenario 1: You use Claude or ChatGPT to help refine a paragraph, tighten your methodology section, or suggest clearer phrasing. You must declare this.

Scenario 2: A professional bid writer does essentially the same thing – restructuring, refining, suggesting better language. No declaration needed. If anything, you’re seen as savvy for making good use of institutional resources.

Same function. Different rules.

I’m not saying the AI disclosure requirement is wrong. As I covered in My rough guide for the Responsible Use of AI in Research, there are legitimate concerns about using generative AI responsibly. These tools can’t take responsibility for their output, which is why they shouldn’t be listed as authors. The transparency requirement makes sense.

But then… why doesn’t the same logic apply to human writing support? If we’re concerned about who’s really authoring the work, shouldn’t we be equally transparent about extensive professional editing and restructuring?

The uncomfortable questions

Where exactly do we draw the line on “authorship” in grant writing? Is it about:

  • The origin of ideas? (Still yours in both cases)
  • Who does the actual typing? (Seems arbitrary)
  • Whether support comes from silicon or carbon-based intelligence? (Is that really the key distinction?)
  • How much the text changes from your original? (This happens with both AI and human editors)

I suspect what’s really happening here is that we’re uncomfortable with the new and unfamiliar (AI) whilst being perfectly comfortable with established practices (professional bid writers), even when they serve remarkably similar functions.

What actually matters?

Perhaps the real question isn’t “who wrote this?” but rather:

  • Are the ideas and research genuine?
  • Is the methodology sound?
  • Can the applicant actually deliver what’s promised?
  • Is the work being presented honestly?

Professional bid writers don’t make bad research good, and neither does AI. They just help you communicate more effectively. In both cases, you’re still responsible for the content, the ideas, and the delivery.

My take

I’m not arguing against AI disclosure requirements. Transparency is important, and these tools are new enough that we’re still figuring out the implications. But I do think we need to examine why we’re comfortable with some forms of writing support and not others.

Maybe the answer is that we should be more transparent about all forms of significant editorial support, whether it comes from AI or professional writers. Or maybe we need to accept that getting help to communicate your ideas more clearly isn’t somehow cheating, regardless of the source.

What I’m certain of is that the current situation – where functionally similar support requires different levels of disclosure depending on whether it comes from software or a staff member – feels less like a coherent policy and more like we’re making it up as we go along.

Over to you

I’d genuinely love to hear what others think about this. Where do you draw the line? Should universities employ bid writers if we’re worried about authentic authorship? Should all significant editorial support be disclosed? Or am I overthinking this entirely?

Drop your thoughts in the comments – I promise I’ll respond (and I’ll write the responses myself, AI or otherwise).

Full disclosure: I used Claude to help edit this post for clarity and flow. See how easy that was?