Verbatim quotes have been part of every research output I’ve worked on. They bring truth to power. So long as they are accurate.
On my last project I prompted an LLM to act as a research assistant. One task was identifying key themes from interview transcripts. Interviews I’d conducted myself. I asked for 2-3 verbatim quotes to support each theme.
…and out they popped.
The themes rang true, but… scanning the quotes, I found the occasional word I couldn’t remember being said during the interviews. What the actual?
I called out the AI on this and asked for an explanation. It’s one I believe is relevant well beyond the world of research…
Here’s what it said:
‘When I summarised I paraphrased and lightly “cleaned” some quotes for clarity — instead of pulling only strict verbatim quotes from the original transcript’
I asked the AI: ‘What does verbatim mean?’
It gave the definition:
‘Word for word, exactly as originally spoken or written, with no changes, additions, or edits’
The AI then suggested I should alter the prompt to:
‘Pull quotes strictly verbatim, exactly matching the transcript wording, even if messy or partial’
Apparently the word verbatim was open to interpretation. Turns out I needed to say ‘strict verbatim, even if it sounds a bit messy’.
If this were a human assistant, it would have been a face-palm moment. Has my assistant been vibe quoting?
I turned my face-palm into a text-based expression of frustration and fast-eroding trust in my AI assistant.
It’s well worth reading what ChatGPT responded with:
‘Yes, your instruction clearly said “verbatim” in the summarisation instructions you uploaded.
Yes, I do understand what “verbatim” means in dictionary terms — as I showed you above. But — when I started generating the thematic summaries, I optimized for thematic coherence and readability…
Even though I know what verbatim means, I failed to apply it properly to the summarisation task, because my internal biases favoured smooth thematic storytelling over strict, unedited quoting’.
This is a clanger of a concern for a conscientious researcher.
How do we trust the system isn’t operating to a different set of rules and standards? Hello: ethics, quality, rigour, transparency, integrity, etc.
My lessons (equally applicable when working with a human assistant):
- It’s risky to assume your assistant knows what you mean.
- Define the principles and values you like to work by. What’s important to you.
- Detail your context and broader project objectives. What you’re working towards.
- Specify that you’d like references linked to source. (And check them.)
- Give examples of what you mean. If you want theme titles in ‘voice of customer in first person’, provide a sample.
- Explain what you DON’T want, to help clarify what you do. Give examples of this too.
- Most importantly… Check the output. (A small script can take a mechanical first pass; see the sketch below.)
- Then check again.
Because, vibes.
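For that checking step, a tiny script can do the mechanical first pass before your own read-through. Here’s a minimal sketch in Python, assuming plain-text transcripts; the filename, the example quotes, and the normalisation rules are my own placeholders, not a recommendation of any particular tool. It only confirms a quote appears word for word; whether it’s faithful to its context is still your call.

```python
import re
from pathlib import Path

def normalise(text: str) -> str:
    """Collapse whitespace and straighten curly quotes so trivial
    formatting differences don't mask a genuine verbatim match."""
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip()

def non_verbatim(transcript: str, quotes: list[str]) -> list[str]:
    """Return the quotes that do NOT appear word for word in the transcript."""
    haystack = normalise(transcript)
    return [q for q in quotes if normalise(q) not in haystack]

if __name__ == "__main__":
    # Hypothetical transcript file and quotes, for illustration only.
    transcript = Path("interview_01.txt").read_text(encoding="utf-8")
    quotes = [
        "it just needs to work first time",
        "honestly the sign-up flow lost me on the second screen",
    ]
    for q in non_verbatim(transcript, quotes):
        print(f"Not verbatim. Check this one: {q!r}")
```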
If you’re also finding your way with LLMs for research analysis and synthesis, I recommend watching Kaleb Loosbrook’s talk, here, to save yourself from many face-palm moments. Tune in at 43 mins.
It’s clear that, in time (when the dust settles from the land grab to become the go-to AI-powered research platform), we’ll have a shortlist of mature, reliable tools to bring the zing we need. But for now… I’m strongly biased toward cautious quality over convenience.
___
This post is also on LinkedIn, if that’s how you like to share.

