What to Do When You're Picked On for Using AI

If we're being honest, AI is currently the loudest guest at the party, one who's changed the vibe entirely (for better or worse), and suddenly everyone's either dancing or glaring.

In clinical trials and eClinical work, using LLMs or AI tools to draft, analyse, or even just brainstorm ideas is fast becoming... normal. But for some, admitting it is like confessing you used GPS to get home. Cue the raised eyebrows and muttered “This sounds very GPT…” comments.

So what do you do when someone calls your AI usage a crutch, or worse, a faux pas? Let's unpack where the negativity comes from, why it can be harmful, and how to turn the conversation into something constructive.


Why That Negative Bias Exists (And Why It’s Harmful)

1. Algorithm Aversion: Humans Distrust Algorithms (Even When They’re Right)

Clinicians and researchers are trained to trust their judgment, so when an algorithm recommends something, there's a reflex to second-guess it. Egala & Liang (2024) found that clinicians were significantly less likely to adopt mobile clinical decision support tools if they feared "ceding authority" to an algorithm, even when those tools were demonstrably accurate [1].

This is called algorithm aversion, and in practice it means people may dismiss AI outputs not because they are wrong, but because they feel foreign or threatening to professional autonomy.


2. Transparency Can Turn Skeptics into Users

One of the most promising solutions is simple: show your work. Bohlen et al. (2025) found that when users could see how an algorithm reached its output (and were given minimal control over parameters), adoption rates and trust increased [2].

In other words, AI works better when it stops acting like a “black box” and starts behaving like a collaborator that explains itself.


3. The One-Mistake Problem

Mahmud et al. (2022) showed that people often hold AI to higher standards than they do humans. When AI makes a single visible error, trust can plummet even if its overall accuracy remains higher than human decision-making [3].

This “one-strike-you’re-out” effect is why AI sometimes feels like an all-or-nothing proposition: people tolerate sloppy human work but demand perfection from the machine.


What You Should Do Instead

Reframe: AI as Your Co-Pilot (Pun Intended), Not Your Boss

When challenged, try saying:

"AI gave me a first draft, but my expertise guided the final version."

This makes it clear that you remain the decision-maker, and AI is just part of your toolkit.


Be Transparent... It Builds Trust

If you used AI to produce a report, draft an email, or analyse data, disclose it. You’ll build credibility by showing you’re not hiding your methods.

“This summary was AI-assisted, then reviewed, corrected, and validated by me.”


Lean into Your Human Edge

AI saves time, so use that time to do the things AI can't. Have the nuanced conversation with the site lead. Interpret messy real-world data in context. Walk the sponsor through the emotional and operational risks of adopting a new platform.

That’s what turns a “robot output” into a trusted, human recommendation.


Educate Without Preaching

If someone criticises you for using AI, use it as a teaching moment. Explain that AI isn't "cheating"; it's accelerating. The responsibility and judgment still sit squarely with you.


Conclusion

There's a negative connotation attached to using AI in professional settings, and sometimes it feels personal. But AI isn't a crutch. It's a lever.

By being transparent, reframing its use, and highlighting the irreplaceable human judgment you bring to the table, you can turn criticism into a conversation about smarter, faster, and better work.


References

  1. Egala, S.B., & Liang, D. (2024). Algorithm aversion to mobile clinical decision support among clinicians: a choice-based conjoint analysis. European Journal of Information Systems, 33(6), 1016-1032. https://ideas.repec.org/a/taf/tjisxx/v33y2024i6p1016-1032.html

  2. Bohlen, L., Zschech, P., Rosenberger, J., Kruschel, S., et al. (2025). Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior? arXiv preprint. https://arxiv.org/abs/2508.03168

  3. Mahmud, H., Islam, A.K.M.N., Ahmed, S.I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390. https://www.sciencedirect.com/science/article/pii/S0040162521008210


No senior management were harmed in the research and writing of this article.

The eClinical Edge is an independent voice focused on the technology, systems, and decisions shaping modern clinical trials.

© 2026 The eClinical Edge. All rights reserved.