Consensus
Search 200M+ papers and get citation-backed summaries of what peer-reviewed research actually says.
Reviewed by Mathijs Bronsdijk · Updated Apr 13, 2026

What is Consensus?
Consensus is an academic search engine built for one job: finding what peer-reviewed research actually says. Instead of acting like a general chatbot that answers from memory, it searches a database of more than 200 million papers first, then generates a synthesis with citations attached. That design choice is the whole story here. Consensus exists because literature review is slow, technical, and often overwhelming, even for experienced researchers using PubMed, Google Scholar, or Scopus. For students, clinicians, journalists, and curious non-specialists, it can be even harder.
The company launched in 2022 with a mission to democratize expert knowledge. In practice, that means giving people a way to ask natural-language questions like “Does zinc reduce cold duration?” and get back an answer grounded in real papers, not invented references. The platform has grown quickly, reaching more than 400,000 monthly active users by 2024 and nearly $2 million in annualized revenue. In August 2024, the company raised an $11.5 million Series A led by Union Square Ventures, with participation from Nat Friedman and Daniel Gross.
What we found in our research is that Consensus sits between old-school databases and general AI tools. It is easier to use than a traditional scholarly index, but more evidence-bound than ChatGPT-style assistants. That makes it appealing to graduate students trying to map a field, clinicians needing a quick evidence summary, journalists checking a scientific claim, and universities looking for an AI research tool they can actually justify putting in front of students. At the same time, it is still a research aid, not a substitute for reading papers carefully.
Key Features
- Search-first AI answers: Consensus searches its corpus before it writes anything. This matters because every synthesis is tied to real peer-reviewed sources, which sharply reduces the fake-citation problem common in general AI tools.
- 200M+ paper index: The platform aggregates content from Semantic Scholar, OpenAlex, and its own web crawl, covering more than 200 million papers. For users, that breadth means it can handle both mainstream biomedical questions and narrower interdisciplinary topics without feeling like a niche database.
- Quick, Pro, and Deep Search modes: Quick Search summarizes up to 10 papers, Pro Search analyzes the top 20, and Deep Search synthesizes up to 50 papers through an iterative review process. The difference matters because a journalist doing fast fact-checking needs a different workflow than a PhD student building a literature review chapter.
- Consensus Meter: For yes-or-no style questions, Consensus classifies top papers into categories like yes, no, possibly, or mixed, then visualizes the distribution. It is one of the product’s most recognizable features because it gives users an immediate feel for how evidence lines up, though the company estimates about a 10% classification error rate.
- Study Snapshots: This feature extracts key details from individual papers, including sample size, population, methodology, duration, and outcomes. That saves time during triage, especially when sorting through 20 or 50 papers and deciding which ones deserve full reading.
- Ask Paper: Users can ask questions of individual papers, or compare up to 5 papers at once, through a conversational interface. This is especially useful when a paper is technical or outside your home discipline and you need help locating the relevant section fast.
- Deep Search review outputs: Deep Search does more than summarize. It can show search iterations, methodology notes, claims-and-evidence tables, and color-coded citations indicating whether papers support, contradict, or show mixed evidence on a claim.
- Advanced filters: Consensus lets users refine results by publication year, citation count, journal rank, study design, sample size, field, and geography. In our research, this stood out as one of its strongest practical advantages over many AI-native research tools.
- Research Hub and saved lists: Users can bookmark papers, save searches, upload PDFs, and organize work inside the platform. It does not replace Zotero or EndNote, but it gives researchers a workable home base for active discovery.
- Library and citation manager integrations: Consensus integrates with tools like Zotero and EndNote, and through LibKey it can connect users to institutional full-text access. For university users, that cuts down the friction between “I found the paper” and “I can actually read it.”
- Medical Mode: For clinical use, Consensus offers a mode focused on roughly 8 million papers from high-quality medical journals and guidelines. That narrower scope matters when a clinician wants signal, not a giant mixed bag of loosely relevant studies.
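The Consensus Meter idea above can be illustrated with a toy sketch: each retrieved paper gets a stance label, and the visualization simply shows the distribution of labels. The stance labels below are invented for illustration; this is not Consensus’s actual classifier or data, just the general shape of the feature.

```python
from collections import Counter

# Hypothetical stance labels for 10 retrieved papers on a yes/no question.
# Real labels would come from an AI classifier (with roughly a 10% error
# rate, per the company's own estimate), not a hand-written list.
paper_stances = ["yes", "yes", "possibly", "no", "yes",
                 "mixed", "yes", "possibly", "no", "yes"]

meter = Counter(paper_stances)   # tally papers per stance category
total = len(paper_stances)

for stance in ("yes", "no", "possibly", "mixed"):
    share = meter[stance] / total
    print(f"{stance:>8}: {share:.0%}")
```

Running this prints a simple distribution (here 50% yes, 20% no, 20% possibly, 10% mixed), which is the kind of at-a-glance summary the meter visualizes, and also why a per-paper misclassification rate matters: a few flipped labels can visibly shift the picture.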
Use Cases
Consensus is easiest to understand through the kinds of work people use it for. Graduate students are one of the clearest examples. A student starting a thesis chapter can ask a plain-English question, get a map of the main papers, then use Study Snapshots and filters to narrow toward randomized trials, recent meta-analyses, or highly cited work. The value is not that Consensus writes the literature review for them. It cuts down the hours spent building search strings, skimming irrelevant abstracts, and figuring out which papers matter first.
Clinicians use it differently. A busy doctor might not have time to manually review dozens of papers on vitamin D supplementation in older adults or the evidence for cognitive behavioral therapy in treatment-resistant depression. Consensus gives them a fast evidence summary with citations and, in Medical Mode, focuses attention on stronger medical sources. The Consensus Meter is especially useful here because it gives a quick visual read on whether the literature is broadly aligned or still mixed.
Journalists are another strong fit. When reporting on vaccine safety, climate claims, nutritional supplements, or pharmaceutical efficacy, they often need to understand the state of evidence quickly and under deadline. Consensus helps them move from “people are arguing online” to “here is what peer-reviewed research says, and here are the papers behind it.” That is a meaningful shift in reporting quality.
At the institutional level, adoption has been notable. More than 100 university libraries have licensed the platform, and the company says it has reached over 5 million students, researchers, and faculty across more than 10,000 universities. Consensus also offered free access for the 2024-25 academic year to university libraries, with plans to continue through 2025-26. That tells you something important about how the product is being positioned, not just as an individual productivity tool, but as part of academic infrastructure.
Strengths and Weaknesses
Strengths:
- Grounded answers instead of invented ones: The biggest advantage over ChatGPT and similar tools is structural. Consensus searches first and cites real papers, which makes it far more usable in academic, clinical, and policy settings where fake citations are unacceptable.
- Fast orientation on unfamiliar topics: For students, clinicians, and journalists, Consensus can compress the first stage of a literature review from hours into minutes. Our research repeatedly pointed to this “orientation” value, especially when entering a new field or working across disciplines with unfamiliar terminology.
- Excellent filtering for research quality: Compared with many AI research tools, Consensus gives users unusually strong filters around study design, citation count, journal rank, publication year, and sample size. That matters because relevance alone is not enough; most users also need ways to separate stronger evidence from weaker evidence.
- Accessible interface for non-specialists: Natural-language search lowers the barrier for people who do not know Boolean syntax or database conventions. For undergraduates and general users, that can be the difference between actually using research and giving up.
- Strong academic distribution strategy: The university library push has helped Consensus spread quickly. LibKey integration also means that for many academic users, the platform is not just a discovery layer; it is a gateway into papers their institution already pays for.
Weaknesses:
- AI summaries can still be wrong: Consensus avoids many hallucination problems, but it does not eliminate error. Study Snapshots can misread methods or sample sizes, and the Consensus Meter classifications have an estimated error rate of about 10%, which is high enough that users should never treat the output as final truth.
- Full-text access is uneven: Consensus indexes more than 200 million papers, but it only has full text for a portion of them, mostly open-access papers and some publisher-partner content. In paywalled fields, the tool may rely heavily on titles and abstracts, which limits how deep its analysis can go.
- Not enough independent benchmarking yet: One 2024 systematic review found only five peer-reviewed studies on the tool’s use, and none rigorously benchmarked recall, precision, or speed against traditional databases. That means many of the strongest performance claims are still based more on product experience and anecdote than on external validation.
- Consensus Meter simplifies messy science: The meter is intuitive, but science often does not fit clean yes/no categories. A treatment may help one subgroup, fail in another, or produce modest effects that look more decisive in a visual summary than they really are.
- Not a replacement for systematic review workflows: For rigorous evidence synthesis, researchers still need tools like PubMed, Scopus, or Web of Science, plus manual screening and full-text review. Consensus is best understood as an accelerator and discovery layer, not the whole process.
Pricing
- Free: $0. Includes 25 Pro Searches per month, 3 Deep Searches, 10 Study Snapshots, 10 Ask Paper messages, and 25 Pro Analyses. For occasional users, this is a real free tier, not just a teaser.
- Pro: $15/month, or $120/year. Annual billing brings the effective monthly price to $10. This tier includes unlimited Pro Searches, 15 Deep Searches per month, 50 Study Snapshots, 50 Ask Paper messages, and 100 Pro Analyses.
- Deep: $65/month, or $540/year. Annual billing lowers the effective monthly price to $45. Users get 200 Deep Searches per month, effectively unlimited Pro Searches, 200 Study Snapshots, unlimited Ask Paper messages, and 500 Pro Analyses.
- Institutional / Enterprise: custom pricing. Consensus sells licenses to universities and organizations. More than 100 university libraries have adopted it, and the company offered free university-wide access for the 2024-25 academic year.
In practice, most individual users will likely sit between Free and Pro. The free plan is enough for light academic work or occasional evidence checks. Pro is the more realistic tier for graduate students, clinicians, and researchers who use the product weekly. The Deep plan is priced for power users doing repeated literature reviews. Students, faculty with school emails, and US healthcare professionals with valid NPI numbers can get up to 40% off, which changes the math quite a bit compared with other paid research assistants.
The main pricing gotcha is usage limits. Consensus is not charging by token in the usual consumer AI sense, but feature caps still shape the experience. If you rely heavily on Deep Search or Ask Paper, you can hit ceilings faster than expected during an active project.
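A quick back-of-the-envelope check of the annual-billing math, using the published Pro and Deep figures. Applying the up-to-40% academic discount to the annual Pro price is purely illustrative; eligibility and the exact rate depend on Consensus’s current terms.

```python
# Effective monthly prices under annual billing, from the published plans.
PLANS = {
    "Pro":  {"monthly": 15, "annual": 120},
    "Deep": {"monthly": 65, "annual": 540},
}

for name, p in PLANS.items():
    effective = p["annual"] / 12                  # annual price spread over 12 months
    savings = 1 - effective / p["monthly"]        # discount vs. month-to-month
    print(f"{name}: ${effective:.0f}/mo effective ({savings:.0%} below monthly billing)")

# Illustrative only: the maximum 40% academic/clinician discount
# applied to the annual Pro price.
discounted_pro_annual = PLANS["Pro"]["annual"] * (1 - 0.40)
print(f"Pro annual with 40% discount: ${discounted_pro_annual:.0f}/year")
```

The arithmetic confirms the quoted $10 and $45 effective rates (roughly a third off month-to-month pricing), and shows why the academic discount changes the comparison with other paid research assistants.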
Alternatives
Elicit
Elicit is one of the closest alternatives, especially for researchers doing structured evidence synthesis. Where Consensus shines in quick, natural-language evidence summaries, Elicit is stronger when you want to extract fields across many papers into tables, such as sample size, intervention, outcome, or methodology. If your work looks like systematic review prep, Elicit may feel more methodical. If your work starts with “what does the literature say about this question?”, Consensus often feels faster.
Semantic Scholar
Semantic Scholar is a strong free option and one of the major data sources behind tools like Consensus. It is especially good for discovery, citation networks, and paper summaries. Someone might choose Semantic Scholar if they want a no-cost research engine with broad coverage and strong paper exploration tools. They might choose Consensus if they want more guided synthesis and a more opinionated answer layer on top.
Scite
Scite takes a different angle by focusing on citation context. It shows whether later papers support, contradict, or merely mention a cited claim. That makes it valuable for researchers who care deeply about how a finding has held up over time. Consensus is better for fast question answering. Scite is better when the next question is, “How has the field responded to this paper?”
Research Rabbit
Research Rabbit is built around visual exploration of literature. It helps users see relationships between papers, authors, and topics, which can be especially helpful early in a project when you are trying to map a field rather than answer one question. Researchers who think spatially or want to follow citation trails may prefer it. Consensus is stronger when the goal is evidence synthesis rather than literature mapping.
Google Scholar
Google Scholar remains the default starting point for many researchers because it is broad, familiar, and free. It often captures gray literature, conference papers, and many edge-case sources that more curated tools miss. But it also puts more of the burden on the user to search well, screen results, and synthesize findings manually. Consensus is easier and faster. Google Scholar is still harder to replace when completeness matters.
PubMed, Scopus, and Web of Science
These are not direct substitutes so much as serious complements. For systematic reviews, formal academic work, and highly controlled search strategies, traditional databases still matter. They offer better field-specific indexing, controlled vocabularies, and more rigorous search construction. Consensus wins on speed and accessibility. These tools win on formal completeness and methodological defensibility.
FAQ
What is Consensus used for?
Consensus is used to search academic literature and generate cited summaries of what peer-reviewed research says about a question. People use it for literature review, fact-checking, clinical evidence lookups, and early-stage topic exploration.
Is Consensus better than ChatGPT for research?
For academic sourcing, usually yes. Consensus searches real papers first and cites them, while ChatGPT can produce convincing but false references unless you verify everything carefully.
How do I get started?
Start with a plain-English research question, then review the cited papers rather than stopping at the summary. If you are new to the platform, the free tier gives enough access to test Pro Search, Deep Search, and Study Snapshots.
How long does it take to set up?
Almost no setup is required. You can create an account and start searching right away, and most users can get useful results within a few minutes.
Does Consensus only search peer-reviewed papers?
Its core positioning is peer-reviewed academic literature. That is one reason many researchers trust it more than broad web-based AI tools.
How many papers does Consensus search?
Consensus says it indexes more than 200 million papers. That coverage comes from sources including Semantic Scholar, OpenAlex, and its own scholarly web crawl.
Can I use Consensus for systematic reviews?
You can use it as part of a systematic review workflow, especially for discovery and early synthesis. But it should not be your only search tool if completeness and formal methodology are essential.
What is the Consensus Meter?
The Consensus Meter is a visual summary for yes-or-no style questions. It classifies top papers as yes, no, possibly, or mixed, so you can quickly see how evidence appears to line up.
Is the Consensus Meter always accurate?
No. The company estimates roughly a 10% error rate in classifications, and the format can flatten important nuance. It is useful for orientation, not for final judgment.
Can I upload my own papers?
Yes. Consensus lets users upload PDFs and analyze them with the same AI features used for papers in its database.
Does Consensus have full-text access to every paper?
No. It indexes many more papers than it can read in full text. For a lot of paywalled literature, it may rely mainly on abstracts and metadata unless there is open access or institutional access available.
Is Consensus free for students and universities?
There is a free individual tier, plus discounts of up to 40% for students, faculty, and eligible US healthcare professionals. Consensus has also worked directly with university libraries, and more than 100 have adopted institutional access.