People who learn a topic from LLM-generated summaries tend to develop shallower understanding than people who learn by clicking through traditional web search results, even when both groups receive the same factual information. The reason is that LLMs do the synthesizing for the user, which removes the active work of discovering, comparing, and integrating sources.
As a result, when asked to give advice based on what they learned, LLM-learners feel less ownership of their knowledge, produce advice that is less detailed and less original, and offer advice that others find less persuasive.
Seven studies with over 10,000 participants support this pattern, including conditions in which the LLM summaries contained real web links.
Why should educators be aware of this research?
Educators should care about this research because it reinforces something we already know from the classroom: deep learning comes from doing the mental work, not just receiving polished answers. When students rely on LLM summaries, they skip the struggle of sorting sources, weighing evidence, and building their own understanding. That active synthesis is what constitutes real learning: it strengthens memory, sharpens judgment, and leads to original thinking.
If we want learners who can analyze, reason, and create, not just repeat, then we need to design tasks that keep them in the driver's seat. This may be especially important when AI tools are involved.