This post first appeared in Dutch on Remco’s LinkedIn channel.
In the debate on digitalisation, the focus is increasingly shifting to its social impact. And rightly so. How can we help students and teachers become more aware of that impact? Fortunately, the draft core objectives for digital literacy address this, but only as one component; the emphasis remains mainly on using technology (and, oh yes, doing so thoughtfully and responsibly).
The Technoskepticism Iceberg framework offers a solution. It approaches digitalisation and digital literacy from a critical perspective.
The model, developed by American researchers Pleasants, Krutka and Nichols, shows how technology goes beyond what is immediately visible. The metaphor of an iceberg emphasises that technology is not just a practical tool but is embedded in broader systems and societal values.
The visible layer: technology is often presented as a neutral tool that solves simple tasks.
The underlying layers: technology is part of complex systems that have unintended effects and reflect values about what is desirable and good, and for whom.
The framework distinguishes three dimensions to explore the deeper layers:
1. Technical: how does a technology work? What design choices have been made, and why does its use work differently for different people?
2. Psychosocial: how does technology change how we think, behave and live together?
3. Political: who determines the rules and laws around technology, and whose interests are central to these decisions?
The “iceberg” also offers a practical entry point to critically examine AI with students. For example, you can do this as follows:
▪️ Technical dimension: have students investigate how generative AI, such as ChatGPT, works. Which algorithms and data make this technology possible? Can we see that? How exactly do they influence the output? Think of bias and ecological consequences.
▪️ Psychosocial dimension: discuss how AI influences our thinking and behaviour. For example, how does our idea of creativity change if texts and images can be generated by AI? Is our view of human expression changing?
▪️ Political dimension: hold a discussion about legislation and regulation around AI. Who decides how AI may be used? How does the lobbying of big tech companies work?
By integrating these dimensions into lessons, students learn not only how (generative) AI works but also what its broader social consequences are.