“It’s Official: The Dutch Are the Only Nation Still Smarter Than ChatGPT…”
I knew it... ;) Somewhere between my Garmin telling me to stand up every hour and my laptop telling me what to write, I had a hunch that humanity still had a fighting chance against the bots.

Clickbait title aside, it’s apparently the Dutch who are carrying the torch for us.
At least according to a fascinating study by Joseph Henrich and colleagues at Harvard, which compared GPT models with 31 human populations worldwide using a “triad task” to measure holistic thinking. The punchline? GPT beat almost every country except the Netherlands.
But what does this really mean? And why should anyone in clinical research care?
1. The Triad Task: GPT vs Humanity
The study used a cognitive task in which participants group objects by relationship versus category. In a classic triad, given monkey, panda, and banana, pairing monkey with panda is the analytic choice (same category: animals), while pairing monkey with banana is the holistic choice (a relationship: monkeys eat bananas). GPT-4 (purple bar in Figure 4) performed very similarly to most human groups, showing a more analytic, category-based thinking style; the Dutch were the exception, outperforming it on holistic reasoning.
This is less about “IQ” and more about thinking patterns. The Dutch tended to see relationships, context, and systems more holistically, a thinking style that is often correlated with collaborative problem-solving and design thinking.
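For the curious, the triad task is simple enough to score yourself. Here is a minimal sketch (hypothetical items and scoring, not the study's actual materials or code) that tallies relational versus taxonomic choices into a "holistic thinking" score:

```python
# Minimal sketch of triad-task scoring (illustrative only; not the
# study's actual items, scoring code, or data).
# Each triad has a target plus two candidates: one taxonomic match
# (same category) and one relational match (functional/contextual link).

TRIADS = [
    # (target, taxonomic match, relational match) -- hypothetical items
    ("monkey", "panda", "banana"),
    ("doctor", "teacher", "stethoscope"),
    ("notebook", "magazine", "pencil"),
]

def holistic_score(choices):
    """Fraction of triads where the relational match was chosen.

    `choices` lists the picked word per triad, in order.
    1.0 = fully holistic/relational, 0.0 = fully analytic/taxonomic.
    """
    relational = sum(
        1 for (_, _, rel), choice in zip(TRIADS, choices) if choice == rel
    )
    return relational / len(TRIADS)

# A respondent pairing monkey-banana, doctor-stethoscope, notebook-magazine:
print(holistic_score(["banana", "stethoscope", "magazine"]))  # 2 of 3 relational
```

The study aggregates choices like these across populations; the sketch just shows how a single respondent's leaning toward relational pairings becomes a number you can compare.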
2. What This Means for Decision-Making
In clinical research, we’ve been pushing for systems thinking: risk-based monitoring, quality-by-design, participant-centric endpoints.
If AI models default to more analytic, “category first” thinking, they may miss the contextual nuances that matter, like why a site struggles with recruitment, or how patient burden interacts with protocol design.
Holistic thinkers (apparently, go team Netherlands!) bring a wider lens, considering multiple moving parts and relationships at once, an approach still hard to automate.
3. How to Apply This Insight in Clinical Trials
Use AI for the pieces, not the whole: Let LLMs categorise, cluster, summarise, but let humans (preferably diverse teams) sense-check for contextual fit.
Inject human factors early: When designing protocols, recruitment strategies, or digital endpoints, combine AI insights with panels of diverse clinicians, patients, and operations staff.
Measure systems, not just widgets: Build dashboards that link recruitment, burden, retention, cost, and diversity so you don’t just optimise one metric while harming another.
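As a toy illustration of the "systems, not widgets" point (entirely hypothetical metrics, values, and thresholds), a linked dashboard rule might refuse to call a recruitment win a win if burden or retention slipped at the same time:

```python
# Toy "systems view" check (hypothetical metrics and thresholds, not a
# real trial dashboard): a change only counts as an improvement if no
# linked metric degraded beyond its tolerance.

# Baseline vs current values for linked trial metrics.
BASELINE = {"recruitment_rate": 0.40, "retention": 0.85,
            "patient_burden_hrs": 6.0, "cost_per_patient": 12000}
CURRENT = {"recruitment_rate": 0.55, "retention": 0.78,
           "patient_burden_hrs": 9.5, "cost_per_patient": 11000}

# Direction of "good" and allowed slippage for each metric.
HIGHER_IS_BETTER = {"recruitment_rate": True, "retention": True,
                    "patient_burden_hrs": False, "cost_per_patient": False}
TOLERANCE = {"recruitment_rate": 0.0, "retention": 0.02,
             "patient_burden_hrs": 1.0, "cost_per_patient": 500}

def degraded(metric):
    """True if `metric` got worse by more than its tolerance."""
    delta = CURRENT[metric] - BASELINE[metric]
    if HIGHER_IS_BETTER[metric]:
        return delta < -TOLERANCE[metric]
    return delta > TOLERANCE[metric]

flags = [m for m in BASELINE if degraded(m)]
print(flags)  # metrics that slipped despite the recruitment "win"
```

Here recruitment jumped, but the check flags that retention and patient burden moved the wrong way, which is exactly the connected view a single-metric dashboard hides.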
Conclusion
If GPT is “smarter” than most of us, that’s good news: it means we have a very powerful assistant.
But as the Dutch prove, contextual, holistic thinking isn’t dead... it’s just rare.
Our job is to keep that lens alive in clinical trials: to use AI where it’s strong, but still ask the bigger, connecting questions that keep trials patient-centric, feasible, and fit for purpose.
Reference
Atari, M., Xue, M. J., Park, P. S., Blasi, D., & Henrich, J. (2023). Which Humans? Harvard University. https://scholar.harvard.edu/sites/scholar.harvard.edu/files/henrich/files/which_humans_09222023.pdf
None of my Dutch colleagues paid me to write this.