What to Do When You're Picked On for Using AI
If we're being honest, AI is currently the loudest guest at the party, the one who's changed the vibe entirely (for better or worse), and suddenly everyone's either dancing or glaring.

In clinical trials and eClinical work, using LLMs or AI tools to draft, analyse, or even just brainstorm ideas is fast becoming... normal. But for some, admitting it is like confessing you used GPS to get home. Cue the raised eyebrows and muttered “This sounds very GPT…” comments.
So what do you do when someone calls your AI usage a crutch, or worse, a faux pas? Let's unpack where the negativity comes from, why it can be harmful, and how to turn the conversation into something constructive.
Why That Negative Bias Exists (And Why It’s Harmful)
1. Algorithm Aversion: Humans Distrust Algorithms (Even When They’re Right)
Clinicians and researchers are trained to trust their own judgment, so when an algorithm recommends something, there's a reflex to second-guess it. Egala & Liang (2024) found that clinicians were significantly less likely to adopt mobile clinical decision support tools if they feared "ceding authority" to an algorithm, even when those tools were demonstrably accurate [1].
This is called algorithm aversion, and in practice it means people may dismiss AI outputs not because they are wrong, but because they feel foreign or threatening to professional autonomy.
2. Transparency Can Turn Skeptics into Users
One of the most promising solutions is simple: show your work. Bohlen et al. (2025) found that when users could see how an algorithm reached its output (and were given minimal control over parameters), adoption rates and trust increased [2].
In other words, AI works better when it stops acting like a “black box” and starts behaving like a collaborator that explains itself.
3. The One-Mistake Problem
Mahmud et al. (2022) showed that people often hold AI to higher standards than they do humans. When AI makes a single visible error, trust can plummet even if its overall accuracy remains higher than human decision-making [3].
This “one-strike-you’re-out” effect is why AI sometimes feels like an all-or-nothing proposition: people tolerate sloppy human work but demand perfection from the machine.
What You Should Do Instead
Reframe: AI as Your Co-Pilot (Pun Intended), Not Your Boss
When challenged, try saying:
“AI gave me a first draft but my expertise guided the final version.”
This makes it clear that you remain the decision-maker, and AI is just part of your toolkit.
Be Transparent... It Builds Trust
If you used AI to produce a report, draft an email, or analyse data, disclose it. You’ll build credibility by showing you’re not hiding your methods.
“This summary was AI-assisted, then reviewed, corrected, and validated by me.”
Lean into Your Human Edge
AI saves time, so use that time to do the things AI can't. Have the nuanced conversation with the site lead. Interpret messy real-world data in context. Walk the sponsor through the emotional and operational risks of adopting a new platform.
That’s what turns a “robot output” into a trusted, human recommendation.
Educate Without Preaching
If someone criticises you for using AI, use it as a teaching moment. Explain that AI isn't "cheating"; it's accelerating. The responsibility and judgment still sit squarely with you.
Conclusion
There's a negative connotation attached to using AI in professional settings, and sometimes it feels personal. But AI isn't a crutch. It's a lever.
By being transparent, reframing its use, and highlighting the irreplaceable human judgment you bring to the table, you can turn criticism into a conversation about smarter, faster, and better work.
References
Egala, S.B., & Liang, D. (2024). Algorithm aversion to mobile clinical decision support among clinicians: a choice-based conjoint analysis. European Journal of Information Systems, 33(6), 1016-1032. https://ideas.repec.org/a/taf/tjisxx/v33y2024i6p1016-1032.html
Bohlen, L., Zschech, P., Rosenberger, J., Kruschel, S., et al. (2025). Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior? arXiv preprint. https://arxiv.org/abs/2508.03168
Mahmud, H., et al. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change. https://www.sciencedirect.com/science/article/pii/S0040162521008210
No senior management were harmed in the research and writing of this article.
