Using ChatGPT for academic research raises problems around accuracy, bias, transparency, and academic integrity, so it should be treated as a tool for brainstorming and drafting rather than as an authoritative source. tandfonline
1. Accuracy and hallucinations
- ChatGPT can generate incorrect, made‑up, or outdated “facts” that sound plausible, a phenomenon often called hallucination. pmc.ncbi.nlm.nih
- A systematic review of studies on ChatGPT found accuracy and reliability to be the most common limitation, especially problematic in fields like healthcare and other evidence‑heavy domains. pmc.ncbi.nlm.nih
2. Missing sources and broken citations
- The model often fabricates references, misquotes articles, or mixes up details such as authors, years, and journal titles, which can corrupt literature reviews or theoretical frameworks if not independently checked. pmc.ncbi.nlm.nih
- It has no inherent mechanism to verify citations against real databases, so all references it suggests must be cross‑checked in primary sources such as library databases or Google Scholar. pmc.ncbi.nlm.nih
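The cross-checking advice above can be partially automated. Below is a minimal Python sketch of a purely syntactic sanity check for DOIs in AI-suggested references; the function name and regex are illustrative assumptions, and a string passing this check still must be resolved in Crossref, a library database, or Google Scholar before being trusted.

```python
import re

# Common "10.XXXX/suffix" DOI shape. Matching this pattern only means
# the string LOOKS like a DOI -- it says nothing about whether the
# reference actually exists.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """Return True if the string is shaped like a DOI (not verified)."""
    return bool(DOI_PATTERN.match(candidate.strip()))
```

A check like this can quickly flag obviously malformed citations (e.g., `looks_like_doi("chapter 3, p. 12")` returns `False`), but it cannot catch plausible-looking fabricated DOIs, which is why resolving each reference in a real database remains essential.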
3. Bias and shallow reasoning
- Because ChatGPT is trained on large text datasets, any social, cultural, gender, or racial biases present in that data can be reproduced in its answers and examples. direct.mit
- Studies note that it struggles with tasks demanding deep critical thinking, original problem‑solving, or nuanced disciplinary judgment, tending instead toward generic, surface‑level responses. pmc.ncbi.nlm.nih
4. Effects on learning and critical thinking
- Over‑reliance on ChatGPT can weaken students’ development of independent analytical and writing skills, as they may default to AI‑generated text instead of doing their own reasoning. sciencedirect
- Researchers warn that users may skip evaluating evidence and simply accept AI output as correct, undermining core research competencies such as scrutinizing methods and argument quality. pmc.ncbi.nlm.nih
5. Academic integrity and authorship
- Using ChatGPT to write assignments, theses, or papers can blur the line between assistance and ghostwriting, creating risks of unacknowledged AI use, plagiarism, or contract cheating. gchumanrights
- Major publishers (e.g., Springer Nature) specify that ChatGPT cannot be a co‑author because it cannot take responsibility; they require authors to disclose AI assistance and remain accountable for all content. pmc.ncbi.nlm.nih
6. Privacy, policy, and ethical concerns
- Entering unpublished data, sensitive participant details, or confidential documents into ChatGPT can raise data protection and privacy issues, particularly under institutional or legal frameworks. kpcrossacademy.ua
- Many universities are still developing or lack clear policies, leading to uncertainty about what forms of AI use are acceptable in coursework, exams, and publications. pmc.ncbi.nlm.nih
7. How to use it more safely
- Use ChatGPT mainly for idea generation, outlining, clarifying concepts, or improving structure and language, not as a substitute for reading and citing peer‑reviewed sources. pmc.ncbi.nlm.nih
- Always verify factual claims and all references in library databases, and follow your institution’s AI guidelines and disclosure requirements when using it in any assessed work. arxiv
AI Use Statement
The author used Perplexity AI in the research and development of this blogpost.
