We increasingly receive “tailored” information online. From videos to news to products, algorithms try to predict what we will likely want to see. Convenient, because it saves time. Less convenient, because you are surprised less often by things you do not yet know. You may also drift into a kind of filter bubble.
A new study by Giwon Bahg and colleagues in the Journal of Experimental Psychology: General examined what this type of personalisation might mean for learning. It is important to stress that this was not done in an educational setting. The study took place in a fully controlled lab environment with fictitious aliens and artificial features. That is clearly a limitation for practical conclusions. But for that very reason, it becomes interesting. All prior knowledge, opinions and emotions are stripped away. What remains is a kind of pure form of learning. And that produces a confusing and relevant picture.
So what did the researchers do? Participants had to learn new categories: eight types of “aliens,” each built from six features. In the control group, everyone saw the same sequence of examples and had to inspect all features. In the experimental groups, a personalisation algorithm influenced which examples participants saw and in which order the features appeared. The system learned from participants’ behaviour and offered items they were likely to click on again. Put simply, the algorithm tried not to maximise knowledge but to maximise consumption by encouraging participants to keep clicking on features.
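To make that design concrete, here is a minimal sketch in Python. This is not the authors' code; the class names, the click-count scoring and the simulated preference are all illustrative assumptions. It only shows the general shape of an engagement-driven selector, which feeds a user more of what they already click on, next to a uniform control that always shows everything.

```python
import random
from collections import defaultdict

# Illustrative sketch, not the study's implementation: an engagement-driven
# selector that ranks features by past clicks, versus a control that shows
# all features in a fixed order.

class EngagementDrivenSelector:
    """Orders features by how often this user has clicked them before."""

    def __init__(self, features):
        self.features = list(features)
        self.clicks = defaultdict(int)  # per-feature click counts

    def order_features(self):
        # Most-clicked features come first, so the user keeps seeing (and
        # re-clicking) familiar features and exploration narrows over time.
        return sorted(self.features, key=lambda f: -self.clicks[f])

    def record_click(self, feature):
        self.clicks[feature] += 1


class UniformControl:
    """Control condition: every feature, every time, in a fixed order."""

    def __init__(self, features):
        self.features = list(features)

    def order_features(self):
        return list(self.features)


if __name__ == "__main__":
    features = [f"feature_{i}" for i in range(1, 7)]  # six alien features
    selector = EngagementDrivenSelector(features)

    # Simulate a user with a mild preference for the first two features;
    # the selector's ranking then reinforces that preference.
    for _ in range(50):
        shown = selector.order_features()
        clicked = random.choice(shown[:2] + features[:2])
        selector.record_click(clicked)

    print(selector.order_features())  # preferred features dominate the ranking
```

The point of the sketch is the objective: nothing in `record_click` rewards seeing something new, so a narrowing feedback loop is the expected behaviour of such a system, not a bug.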
What happened next? In short:
- Participants in the personalised conditions inspected fewer features and did so more selectively.
- As a result, they built narrower and more distorted category representations.
- They made more errors when classifying new aliens.
- And most striking: when they had never seen an example of a particular alien type, they were very confident about their incorrect answers.
This last point may be the most surprising. The less participants had seen, the more confident they sometimes became. It is not quite the Dunning-Kruger effect, but it feels related. It looks like a form of overconfidence caused by personalisation.
It is tempting to draw a line from this study to personalised or adaptive learning in education. I caught myself making that connection. Still, that would be too quick. The researchers tested a personalisation algorithm optimised for content consumption, not for learning. We may assume that the algorithms used in learning platforms are designed differently.
What this study shows is how quickly human information seeking is influenced when a system narrows the available options. People explore less, see less variation and become more confident about their limited knowledge. That makes the study relevant for anyone working with personalisation, from news platforms to learning technologies.
It is an elegant piece of work that shows how fragile learning can be when a system determines the input you receive. Perhaps that is the main message. Personalisation can be useful, but only if we ensure learners also see what they would not encounter spontaneously. Diversity of examples remains important, and so do feedback and metacognition.
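As a thought experiment, that safeguard could be as simple as forced exploration: mixing examples the learner has not chosen to inspect into the personalised stream. The sketch below is hypothetical, not taken from the study or any specific platform; the function name and the exploration rate are my assumptions.

```python
import random

def select_example(personalised_pool, unseen_pool, explore_rate=0.3):
    """Hypothetical mitigation sketch: with probability `explore_rate`,
    serve an example the learner has not sought out, instead of the
    algorithm's personalised pick. The 0.3 rate is an arbitrary choice."""
    if unseen_pool and random.random() < explore_rate:
        return random.choice(list(unseen_pool))    # forced exploration
    return random.choice(list(personalised_pool))  # personalised pick
```

The design choice here is that exploration is enforced by the system rather than left to the learner, since reduced self-directed exploration is precisely the behaviour the study documents.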