Earlier this month I had the pleasure of presenting at the Advance HE AI Symposium alongside Dr Pauline Campbell, Senior Research Fellow here at GCU. We presented our work on using creative methods – enhanced by AI – to support evidence synthesis, and specifically to help researchers tackle that notoriously tricky task: the systematic review. I always find conferences like this a great opportunity to reflect on what we’re doing and why, so I wanted to write up our key ideas here for anyone who couldn’t make it.

The Problem We Were Trying to Solve

Let me start with a bit of context. Evidence synthesis, and systematic reviews in particular, requires researchers to hold enormous amounts of information in their heads at once: spotting patterns across dozens or sometimes hundreds of studies, identifying gaps, and then communicating all of that in a coherent, well-structured argument. That is genuinely hard cognitive work.

And yet, academic writing is still largely taught as a linear, text-based process. We hand researchers a template, point them at a pile of papers, and say: synthesise! It’s no wonder so many of them end up staring at a blank screen. The result is cognitive overload, writer’s block, and a real struggle to see relationships across studies, not because the researchers aren’t capable, but because the approach doesn’t match the complexity of the task.

There’s also a diversity issue. One-size-fits-all instruction fails a lot of learners. Not everyone thinks in the same way, and not everyone finds it easy to externalise complex thinking through text alone.

Reframing Writing as Thinking

One of the things Pauline and I feel strongly about is that synthesis begins long before words appear on the page. Researchers are already doing the thinking required for synthesis when they go for a walk and let ideas percolate, when they sketch connections between concepts, when they talk something through with a colleague, or even when they sleep on a particularly tricky problem. Writing is thinking made visible, and if we accept that, then we should be supporting researchers to do that thinking in whatever form works best for them, not just the one we’re most comfortable assessing.

This is the core idea we’ve been applying in our MRes Critical Review in Research module.

Creative Methods as Structured Thinking Tools

We introduced five creative methods into the module: zines, storyboards, podcasting, animation, and data visualisation. These aren't gimmicks; they're what we'd call structured visual, narrative, and dialogic approaches that help make complex thinking visible. Each one forces a different kind of engagement with the material, and each one surfaces gaps and connections that can be surprisingly hard to spot when you're writing linearly.

In the symposium session, we focused on the two methods we used as classroom activities: zines and storyboards.

Zines are self-published, lo-fi booklets with roots in punk and activist movements. In a research context, they're a brilliant constraint. Distilling 20 to 100 studies into four to six panels forces you to identify your core argument. You can't hide behind vague language when you've got a small box to fill and it needs to make sense visually. Students who used zines found they had to articulate their argument in a way that made the structure (or the gaps in it) very obvious, very quickly.

Storyboards, borrowed from film and animation, ask you to sequence your research as a narrative arc: the inciting incident, the climax, the resolution. This helps students structure their argument and communicate complex ideas in an accessible, audience-friendly way. It also turns out to be great preparation for related tasks, like conference posters, infographics, and thesis summaries, so the skills transfer nicely.

Where AI Comes In

We positioned AI as a “creative amplifier” rather than a replacement for critical thinking, and I think that distinction matters a lot. The tools we suggested (Claude, ChatGPT, Copilot, Canva AI, and a few others) can help with things like brainstorming visual metaphors, generating layout ideas, or structuring a narrative arc. But the researcher has to stay in the driving seat at every stage.

For zines, the workflow looked something like this: use AI to brainstorm several possible visual metaphors for your key finding; choose the one that actually fits your research; use an AI-assisted design tool to get a layout template; and then draw or design the panels yourself. The AI helps you get unstuck and move faster; the critical judgement about what's accurate and what's meaningful stays with you.

For storyboards, a really useful prompt is to ask AI to turn your systematic review methodology into a hero’s journey narrative structure. That sounds a bit bonkers, I know, but it’s a great way to see whether your research actually has a coherent narrative arc, and where the gaps are. You then refine it to reflect what actually happened in your research process, which requires real engagement with your own methodology.

What We Observed in the Classroom

The classroom activity itself ran for 25 minutes: five minutes generating AI suggestions, fifteen minutes creating a physical zine or storyboard, and five minutes of sharing and reflection. The reflection prompts asked students to consider which parts were AI-inspired, which were entirely their own, and how the combination worked. Our main critical reflection question for the session was: what did AI help you see about your research that you hadn't noticed before, and what did it miss that only you knew was important?

The early classroom insights were really encouraging. Students started with quite a bit of hesitation, a lot of it driven by uncertainty about what appropriate AI use looks like, which is understandable given how much guidance varies across institutions and disciplines. But that caution shifted to curiosity pretty quickly. We saw strong engagement with the synthesis tasks and increased expressive confidence.

Scaffolding Responsible AI Use

Something we were keen to address throughout was the ethical dimension of using AI in research contexts. We framed this around five key principles: transparency (disclose when AI contributed to your outputs), accuracy (always fact-check against your own data), authorship (you remain responsible; AI is a tool, not a co-author), bias awareness (AI reflects its training data, so critically evaluate what it suggests), and academic integrity (follow your institution’s policies and when in doubt, ask).

There’s a phrase from the presentation that I keep coming back to: speed without critical thinking creates noise, not knowledge. AI can help you move faster, but only if you’re bringing your own critical expertise to bear on what it produces. The goal is academic rigour plus creative expression plus responsible AI use. And that combination is where the real research impact comes from.

What’s Next

Pauline and I are continuing to develop this work. We’re planning a scoping review to examine AI-augmented creative methods in evidence synthesis more broadly, running methodological workshops for researchers, and starting to build an interdisciplinary network in this space.

At its heart, this work is about preparing researchers for a future where evidence synthesis will demand new ways of thinking. The volume of research being published, nearly 80 systematic reviews a day at the last count, isn’t going to slow down. So we need to help researchers develop the cognitive tools to work at that scale, and creative methods supported by responsible AI use are a genuinely promising part of that picture.

If any of this resonates with your own practice or research, I’d love to hear from you. You can get in touch via the contact page or find me on LinkedIn. And if you’re interested in collaborating or finding out more about the work, Pauline and I are both very open to conversations — drop either of us an email.

You can watch our presentation and the whole Advance HE AI Symposium on YouTube.