Accelerate systematic reviews through AI-driven active learning and automated abstract screening.
Abstrackr is a specialized, web-based, semi-automated tool for researchers conducting systematic reviews. Developed by the Center for Evidence-Based Medicine (CEBM) at Brown University, the platform uses active learning (a branch of machine learning) to reduce the manual labor required during the citation screening phase. Under the hood, Abstrackr trains a Support Vector Machine (SVM) or a similar classifier that continuously learns from a reviewer's decisions: as the user labels citations 'relevant' or 'irrelevant,' the system re-ranks the remaining unscreened abstracts, prioritizing those with a higher predicted probability of inclusion.

As of 2026, newer LLM-based competitors have entered the market, but Abstrackr remains a standard open-source benchmark thanks to its transparent methodology and free access for the global academic community. It supports multi-reviewer collaboration, imports standard bibliographic formats such as RIS and XML, and provides visual analytics on screening progress. Its workflow is optimized for high-recall tasks in which missing a single relevant study is unacceptable, making it a preferred choice for Cochrane-style evidence synthesis.
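The re-ranking loop works roughly as follows. This is a minimal sketch in Python with scikit-learn, not Abstrackr's own code; the abstracts, labels, and model settings are illustrative assumptions, and the raw SVM margin stands in for the probability of inclusion.

# Minimal sketch of the screening-prioritization loop described above.
# Not Abstrackr's actual implementation; all data and settings are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical citation pool: the first four have already been screened.
abstracts = [
    "randomized trial of statin therapy in adults",              # labeled relevant
    "case report of a rare dermatological condition",            # labeled irrelevant
    "meta-analysis of statins and cardiovascular risk",          # labeled relevant
    "survey of veterinary clinic staffing levels",               # labeled irrelevant
    "cohort study of lipid-lowering drugs and stroke incidence", # unscreened
    "editorial on hospital parking fees",                        # unscreened
]
labels = [1, 0, 1, 0]  # reviewer decisions for the screened citations

# Vectorize all titles/abstracts, then fit only on the screened subset.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(abstracts)
model = LinearSVC()
model.fit(X[:4], labels)

# Score the unscreened abstracts and surface the likely includes first.
# Abstrackr reports a probability of inclusion; here the SVM decision
# score is used as a simple relevance ranking.
scores = model.decision_function(X[4:])
for score, text in sorted(zip(scores, abstracts[4:]), reverse=True):
    print(f"{score:+.2f}  {text}")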
Uses a machine learning backend to recalculate the probability of inclusion for every unscreened abstract after each user decision.
The system assigns a confidence score to its predictions, allowing users to focus on borderline cases.
Dynamic NLP-based highlighting of user-defined inclusion and exclusion terms within the abstract text.
A workflow module that identifies discrepancies between two independent reviewers for third-party adjudication (sketched below, after the feature descriptions).
Allows researchers to test their inclusion/exclusion criteria on a subset of data to refine the model.
Aggregates common terms found in accepted vs. rejected abstracts for linguistic insight.
Allows users to replicate project structures and settings for update reviews or related studies.
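As a rough illustration of the conflict-detection workflow mentioned above, the following sketch flags citations on which two independent reviewers disagree. The citation IDs, decision labels, and data layout are hypothetical and do not reflect Abstrackr's internal data model.

# Minimal sketch of dual-reviewer conflict detection; decisions are hypothetical.
reviewer_a = {"pmid:101": "include", "pmid:102": "exclude", "pmid:103": "include"}
reviewer_b = {"pmid:101": "include", "pmid:102": "include", "pmid:103": "exclude"}

# Citations where the two independent decisions disagree are flagged
# for third-party adjudication.
conflicts = [
    cid for cid in reviewer_a.keys() & reviewer_b.keys()
    if reviewer_a[cid] != reviewer_b[cid]
]
print("Needs adjudication:", sorted(conflicts))  # ['pmid:102', 'pmid:103']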
Researchers need to screen 10,000+ PubMed results for a new clinical guideline quickly.
Public health officials require an evidence synthesis on a new virus within 72 hours.
Identifying specific chemical compounds across thousands of patent abstracts.