AI tools like Elicit, ResearchRabbit, and ChatGPT can speed up parts of the literature review process, but they also come with serious limitations. They can generate inaccurate, incomplete, or biased information. Unlike scholarly databases that index peer-reviewed literature, AI tools don't follow systematic indexing rules or provide clear evidence of where their information comes from.
Using AI without careful evaluation risks:
Relying on fabricated or inaccurate citations
Oversimplifying complex research debates
Missing key articles and perspectives
Introducing bias into your own review
Takeaway: AI is a starting point, not an end point. Always verify and critically assess before including AI-supported content in your literature review.
1. Accuracy of Citations
Verify every citation in a scholarly database such as PsycINFO, Sociological Abstracts, or OneSearch.
Check author names, journal titles, publication years, and DOIs.
Beware of “hallucinated” articles that look real but don’t exist; a quick DOI lookup (sketched below) can catch many of these.
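If you are comfortable with a little scripting, one way to triage a long AI-generated reference list is to check each DOI against the public Crossref API, which only returns a record for DOIs it actually knows. This is a rough first pass under stated assumptions: the function name and example DOI below are purely illustrative, and a DOI that resolves can still be attached to the wrong title or authors.

```python
# Minimal sketch: check whether a DOI resolves to a real record in the
# public Crossref REST API. Requires the third-party "requests" package.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    candidate = "10.1000/example-doi"  # hypothetical DOI, for illustration only
    if doi_exists(candidate):
        print("DOI found in Crossref -- still confirm authors, title, and year match.")
    else:
        print("DOI not found -- the citation may be fabricated or mistyped.")
```

Even when a DOI resolves, open the record and confirm that the authors, title, and year match what the AI gave you; resolution alone is not proof the citation is accurate.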
2. Reliability of Summaries
Compare AI-generated summaries with the actual article abstract or full text.
Watch for overgeneralizations or misrepresentation of findings.
3. Completeness of Coverage
Ask: Did the AI return the most relevant studies, or only a partial view?
Supplement with traditional searching to ensure comprehensive coverage.
4. Transparency of Methods
Remember: AI doesn’t reveal how it selected or prioritized results.
Unlike a database search, an AI “search” can’t be replicated exactly.
5. Ethical Use
Avoid copying AI text directly into your work without revision.
Disclose AI use if your professor or discipline requires it.
Reflect on whether the tool might reinforce biases (e.g., privileging certain journals, viewpoints, or authors).
Cross-check: Always verify information in a scholarly database before relying on it.
Triangulate: Compare AI results across multiple tools and with traditional searches.
Annotate: Keep notes on when and how you used AI in your workflow.
Reflect: Ask whether AI is helping you think critically or just saving time.