For decades, they've set the record straight in biology. Next up: science's reproducibility crisis


Even after decades of research, scientists remain divided on what drives diseases like Alzheimer's. Why? The scientific literature is filled with competing, sometimes contradictory hypotheses. How can we determine which ones are credible?

University of Maryland professor John Moult has long sought to answer this question—how to decide which evidence in the literature is trustworthy. For example, which papers support specific hypotheses about how the APOE4 gene influences Alzheimer’s? Which experiments back up those papers’ conclusions? Among those experiments—whether conducted in humans, mice, or cells—which have more reliable conditions and statistical analyses, and which are more questionable? Could we eliminate some major hypotheses about the APOE4 gene and accelerate treatment development if we knew which experiments to trust?

Moult brings over 30 years of experience distinguishing reliable findings from less credible ones in structural biology. He helped found the Critical Assessment of Structure Prediction (CASP), the blind challenge benchmark that paved the way for DeepMind to demonstrate the true power of its AlphaFold tool, which ultimately contributed to the 2024 Nobel Prize in Chemistry. With the rise of large language models, Moult believes it may now be possible to apply a similar objective standard to evaluate the scientific literature.

STAT+ Exclusive Story
