Editorials Christmas 2021: Get Lucky

Of research and robots: making sense of chance findings

BMJ 2021; 375 doi: (Published 16 December 2021) Cite this as: BMJ 2021;375:n2915
  1. Juan Víctor Ariel Franco, editor in chief, BMJ Evidence-Based Medicine1 2
  2. Santiago Esteban, chair, Information Management and Health Statistics Office2 3

  1. Research Department, Instituto Universitario Hospital Italiano de Buenos Aires, Argentina
  2. Family and Community Medicine Division, Hospital Italiano de Buenos Aires, Argentina
  3. Information Management and Health Statistics Office, Health Ministry of the City of Buenos Aires, Argentina

  Correspondence to: J Franco juanfranco{at}

Taking our cue from QT-1, we must use AI to help, not harm, when interpreting healthcare research

“I, a reasoning being, am capable of deducing truth from a priori causes. You, being intelligent but unreasoning, need an explanation of existence supplied to you,” says the robot QT-1 (Cutie) to his perplexed human masters, Donovan and Powell, in Isaac Asimov’s classic book I, Robot.1 Powered by artificial intelligence (AI), the rebellious robot takes control of a space station, believing that humans are too weak to have created such a superior creature.

Users of medical research might feel as perplexed as Donovan and Powell when faced with new and sophisticated analytical methods, especially those related to AI. So, what do we currently understand about AI’s role in clinical research—and how can we make it work better for everyone?

AI can be used in descriptive, diagnostic, and predictive research, in knowledge discovery, and in investigating causal inference.1 It can also provide sophisticated tools to automate tasks usually done by humans, such as going through thousands of electronic health records and analysing large, routinely collected datasets (the infamous “real world evidence”). AI can even predict the titles of The BMJ’s Christmas articles with some success,2 and more importantly it can be used to guide clinicians in interpreting medical imaging, one of the rapidly growing fields of implementation.3

While we might regard AI as “rocket science”—a descriptor with its own set of problems, as noted by Usher and colleagues4—patients and the public have understandable reservations about its use in clinical care.56 AI is often seen as a “black box” by clinicians using its variably mysterious outputs to guide care78: for instance, the way that AI identifies patterns in chest x rays to help diagnose pneumonia is clearer to most clinicians than the way it estimates ejection fraction from an electrocardiogram in patients with dyspnoea.9

Improving diagnostic accuracy with AI also has other challenges, particularly regarding overdiagnosis, which can be amplified by the power to detect minor changes with unclear clinical importance.3

Correlation and causation

Some of AI’s potential lies in knowledge discovery, such as seeking correlations and associations in large bodies of data, an application that makes many scientists even more uncomfortable. It’s all too tempting to interpret correlations and associations as evidence of causation, rather than as a way to generate new and potentially fruitful hypotheses. Of course, the spurious use of correlations and associations to infer causality is a problem across most of medical science, not just AI—as clearly illustrated by The BMJ’s Christmas offerings linking heavy metal music with reduced mortality10 or associating a person’s digit ratio with good luck.1112
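The multiple-comparisons trap behind such chance findings can be shown with a short simulation (an illustrative sketch, not part of the editorial, with made-up variable counts): screen enough unrelated variables against an outcome and an impressive-looking correlation will almost always turn up by chance alone.

```python
import random
import statistics

random.seed(0)

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# 200 unrelated "exposures" measured on 30 "patients": pure noise.
n_vars, n_obs = 200, 30
exposures = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
outcome = [random.gauss(0, 1) for _ in range(n_obs)]

# Screen every noise variable against the outcome and keep the strongest hit.
best = max(abs(corr(v, outcome)) for v in exposures)
print(f"strongest correlation found by chance alone: r = {best:.2f}")
```

With no real signal anywhere in the data, the best of 200 screens still reports a correlation strong enough to tempt a causal story—exactly the kind of finding that should be treated as hypothesis generating, not hypothesis confirming.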

AI has the potential to find correlations and generate hypotheses beyond the reach of traditional human methods.13 But, as QT-1 says, “don’t feel bad . . . You poor humans have your place, and though it is humble, you will be rewarded if you fill it well.” He’s right: exploring causal inferences within datasets remains a human domain, for now.

Causal inference starts with counterfactual thinking (“What would happen if . . .”), which might then be tested in a randomised trial, the most reliable way to investigate cause and effect.14 In observational datasets, however, we depend more on models and assumptions that try to emulate the balanced comparisons seen in clinical trials. Until AI incorporates traditional epidemiological approaches for conceptualising causal relations, we still need humans and their counterfactual thinking for this part of the process. QT-1 may need guidance on how to interpret the underlying assumptions and the relation between key variables when analysing the data, ideally thinking about how a hypothetical trial would answer this causal question.15
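Why observational comparisons need those models and assumptions can be sketched in a toy simulation (hypothetical numbers chosen for illustration): a confounder that drives both treatment and outcome makes a useless drug look harmful, and a crude stand-in for trial-like balance—stratifying on the confounder—makes the spurious effect vanish.

```python
import random

random.seed(1)

# Hypothetical setup: older patients are both likelier to receive the
# drug and likelier to die; the drug itself does nothing to the outcome.
n = 20000
rows = []
for _ in range(n):
    old = random.random() < 0.5
    treated = random.random() < (0.7 if old else 0.2)
    died = random.random() < (0.30 if old else 0.05)  # independent of treatment
    rows.append((old, treated, died))

def risk(subset):
    """Proportion of deaths in a subset of (old, treated, died) rows."""
    return sum(d for _, _, d in subset) / len(subset)

treated_rows = [r for r in rows if r[1]]
control_rows = [r for r in rows if not r[1]]

# Naive comparison: treatment looks harmful, purely through confounding by age.
naive_diff = risk(treated_rows) - risk(control_rows)

# Comparing like with like within each age stratum removes the artefact.
strata_diffs = []
for old in (True, False):
    t = [r for r in rows if r[0] == old and r[1]]
    c = [r for r in rows if r[0] == old and not r[1]]
    strata_diffs.append(risk(t) - risk(c))

print(f"naive risk difference: {naive_diff:+.3f}")
print("age-stratified risk differences:", [f"{d:+.3f}" for d in strata_diffs])
```

Randomisation achieves this balance across all confounders, measured and unmeasured, automatically; an observational analysis only achieves it for the confounders a human has thought to specify—which is the counterfactual reasoning that, for now, remains our job rather than QT-1’s.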

For clinicians and patients, AI arguably offers the most when combined with precision medicine, and new AI supported methods for personalising treatment decisions are already being developed, although robust evaluation is still lacking.

Ideally, all interventions supported by AI, including diagnostics, would be tested in randomised controlled trials. In a field of research prone to problems with transparency, reproducibility, ethics, and effectiveness, reporting should be informed by recent AI extensions to the SPIRIT and CONSORT statements,1617 rather than by the Christmas BMJ’s noble but misguided BOGUS guide to spinning underwhelming results.18 We might next consider a BOGUS AI extension, highlighting inadequate reporting of the development and validation of AI models used in healthcare. These models are often associated with eyecatching performance metrics, generated by opaque methods and rarely validated in clinical settings.19

In Asimov’s story, Cutie the robot ended up saving the Earth from a catastrophic electron storm by overriding human orders. His human masters argued that QT-1 was just following the laws of robotics: disobeying human orders only to prevent harm. Now we must guide AI firmly in that direction.


  • Competing interests: We have read and understood BMJ policy on declaration of interests and declare the following interests: Juan Franco is a clinical editor for The BMJ.

  • Provenance and peer review: Commissioned; not peer reviewed.

