Evaluation of SURUS: a named entity recognition NLP system to extract knowledge from interventional study records

  • Casper Peeters*
  • Koen Vijverberg
  • Marianne Pouwer
  • Bart Westerman
  • Maikel Boot
  • Suzan Verberne

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Background: Medical decision-making is commonly guided by evidence-based analyses from systematic literature reviews (SLRs), which require large amounts of time and subject matter expertise to perform. Automated extraction of key datapoints from clinical publications could speed up the assembly of systematic literature reviews. To this end, we built SURUS, a named entity recognition (NER) system comprising a Bidirectional Encoder Representations from Transformers (BERT) model trained on a fine-grained dataset. The aim of this study was to assess the quality of SURUS classifications of PICO (patient, intervention, comparator and outcome) and study design elements of clinical study abstracts.

Methods: The PubMedBERT-based model was trained and evaluated using a dataset of 39,531 labels across 400 clinical abstracts, with an inter-annotator agreement of 0.81 (Cohen's κ) and 0.88 (F1). The labels were manually annotated following a strict annotation guide. We evaluated the quality of the dataset and tested the utility of the model in the practice of systematic literature screening by comparing SURUS predictions to expert PICO and design classifications. Additionally, we tested the out-of-domain quality of the model across 7 other therapeutic areas and another study design.

Results: The SURUS NER system achieved an overall F1 score of 0.95, with minor deviation between labels. In addition, SURUS achieved an NER F1 of 0.90 and 0.84 for out-of-domain therapeutic area and observational study abstracts, respectively. Finally, the F1 of PICO and study design classifications was 0.89, with a recall of 0.96, compared to expert classifications.

Conclusion: The system reaches an F1 score of 0.95 across 25 contextually different medical named entities. This high-quality in-domain medical entity prediction by a fine-tuned BERT-based model was the result of a strict annotation guideline and high inter-annotator agreement. This prediction accuracy was largely preserved during extensive out-of-domain evaluation, indicating its utility across other indication areas and study types. Current approaches in the field lack the fine-grained training data and versatility demonstrated here. We think this approach sets a new standard in medical literature analysis and paves the way for creating fine-grained datasets of labelled entities that can be used for downstream analyses outside of traditional SLRs.
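The inter-annotator agreement of 0.81 reported in the Methods is Cohen's κ, which corrects raw agreement between two annotators for agreement expected by chance. As an illustration only (not the authors' code), κ can be computed from two annotators' label sequences like this; the toy PICO-style tags are invented for the example:

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items on which the annotators agree.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement: from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n)
              for lab in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Toy example: two annotators labelling six tokens (hypothetical tags).
a = ["O", "I", "I", "O", "P", "O"]
b = ["O", "I", "O", "O", "P", "O"]
print(round(cohens_kappa(a, b), 3))  # → 0.714
```

Here observed agreement is 5/6 but chance agreement is 5/12, so κ = (5/6 − 5/12) / (1 − 5/12) = 5/7 ≈ 0.714, noticeably lower than the raw agreement.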

Original language: English
Article number: 184
Journal: BMC Medical Research Methodology
Volume: 25
Issue number: 1
DOIs
Publication status: Published - Dec 2025

Keywords

  • Bi-directional encoder representations from transformers
  • Evidence-based medicine
  • Language model
  • Named entity recognition
  • Natural language processing
  • PICO
  • Systematic literature review
