Lost Voices: Student Voices on AI in Education

Discourse around AI and education is dominated by EdTech leaders and consultants, AI companies, teachers and educators (whose views are often polarised) and, occasionally, parents. In previous blogs we have explored the importance of casting our nets wide when gathering the ideas (and voices) of our communities, and of mapping the territory at the edges of, and between, the dominant, often loud, camps in educational debates.

In this post, we would like to share some reflections from students and pupils across educational settings, all of whom have links with us and our CATALYST community, including alumni from our courses and participants in our mentoring projects.

AI as a reflection of ourselves

Benjamin Creasey has just finished his degree in Philosophy at the University of Winchester and gained work experience with us at CATALYST as he was keen to explore ways to facilitate philosophical dialogue. Here are Ben’s reflections on AI in Education:

Stanley Kubrick’s 1968 science fiction classic 2001: A Space Odyssey features the character of HAL, an advanced, sentient AI that operates and maintains the Discovery One spacecraft. The most intriguing aspect of HAL is his natural interaction with the crew: we observe HAL engaging in conversation and even playing chess with one of the crew members. When the crew are asked whether the addition of an artificial intelligence to their team feels at all strange, the central character, Dave Bowman, replies that while it may have been initially, they have all come to see HAL as an equally valued member.

This depiction of artificial intelligence, which remains foundational to the cultural archetype of AI, highlights the idea of organically integrating technology into human life and raises important questions about how we understand empathy, agency and, ultimately, the ways we form relationships with things.

In 2011, when Apple released its virtual personal assistant Siri, analyst debate centred on whether this new development was a ‘gimmick or game-changer’, with many sceptical of the optimism that it could be the next revolutionary jump in our interaction with technology, analogous to the invention of the mouse and, later, the touchscreen.

Research analyst Shannon Cross remarked at the time that ‘the use of natural language could one day change the way we interact with electronic devices’.

The virtual assistant

I recall that my own introduction to AI came when Microsoft, a few years later, released Cortana, a virtual assistant that could perform tasks and engage in conversation. The object of particular fascination was that Cortana appeared to have a personality: talking in a friendly tone, responding in humorous or witty ways, and developing through interactions. Its abilities and functions were limited, however, and it still often felt robotic and detached, leading me to believe that the organic integration of AI may indeed have been more of a gimmick than a revolution.

It was in late 2022, when OpenAI released ChatGPT, that conversations surrounding AI significantly resurfaced. Much of the debate around AI, particularly models like ChatGPT, quickly became focused on its economic impact; understandably, as this is where the most immediate and visible changes are likely to occur. However, this focus arguably sidelines a more philosophical and sociological concern. While attention has centred on AIs designed for efficiency, data analysis and pattern recognition, less noticed are the chatbots designed for companionship, built to simulate care, empathy and friendship. These models, though less prominent, raise profound questions about the nature of human relationships, returning us to Shannon Cross’ assertion that our interaction with technology will begin to change fundamentally.

Fuelling loneliness? Or the potential to ease our loss of connection?

One of the most prominent figures contributing to the debate is Sherry Turkle, an American sociologist at MIT, who argues that our deepening relationship with technology has, simultaneously, created a deeper sense of alienation between individuals. This sentiment is embodied in her intriguing qualitative study Who Do We Become When We Talk To Machines? Turkle notes that many individuals claimed that the programs do a ‘better job than people’ at relationships, using AI regularly to combat loneliness, anxiety and insecurity. The study emphasises that, while appearing on its surface to be a radical shift, this kind of interaction with technology can be considered a logical progression from the prevalence of social media. Turkle notes that social media made three distinct promises to the individual: ‘You can put your attention wherever you want it to be. You will always be heard. And you will never be alone.’ Social media, in this sense, provided us with the illusion of companionship without the demands of intimacy. AI, for Turkle, extends this promise further, as it can now represent ‘an audience that will always be on your side’, reflecting your interests and conforming to your tastes.

A key takeaway from Turkle’s work is that human beings have a curious ability to attribute ‘more to conversational machines than is “really” there’. Turkle terms this phenomenon the ‘Eliza effect’, named for the chatbot developed in 1966 that imitated a psychotherapist, whose developer was greatly alarmed when his students asked to be alone with it to discuss intimate matters.

The prevalence of AI chatbots that are increasingly personalised, simulating empathy and even responding in specifically friendly and compassionate tones, appears to suggest a rare point of optimism regarding the human condition. Our inherent capacity to humanise and empathise is so profound that we may extend it even to machines, even when our rationality tells us they do not feel.

HAL, in its final moments, as it is gradually shut down, begins to softly sing Daisy Bell, the first song ever performed by a computer-synthesised voice. This moment in the film is comparable to the way in which Žižek describes film as a crucial medium through which we can encounter elements of ourselves. The scene evokes a strange feeling on its surface as HAL pleads for its life, made almost tragic by the realisation that it can express this plea only through a voice restricted to its purely functional tone. A deeper understanding of the human question at the centre of AI reveals that the scene is not tragic because HAL feels, but because we feel for him.

AI - a strange disparity

Henrietta is a Year 12 pupil studying for her A levels.

I am interested in this topic area and in fact made it the research focus of my EPQ. It is clear that AI has rapidly changed the educational landscape, and it seems to pop up almost everywhere I look - from creating schedules to writing essays. I’m interested in how this has also rapidly changed the job landscape, and yet there is an odd tension: a disparity in the rate of change between education and work. The workplace has rapidly adapted to the use of new technology, such as ChatGPT, increasing the productivity of many workers; in schools, however, these changes have been slow and often forbidden. JCQ have now published two different versions of their guidance on AI use in assessments, which raises questions about whether exams are still assessing students on the skills they will need in the real-world job market, and how this disparity can be addressed.

It is a bit odd when, on work placements, you witness many workers embracing and openly using AI - even for writing - whereas education and academia take a very cautious approach, understandably so.

The useful tutor?

AI as study partner

Eddie is a Year 11 pupil who has just finished his GCSE qualifications.

AI has helped me revise for most of my GCSE subjects.

In English Literature, I’ve asked ChatGPT to choose the best quotes for certain characters and themes in each novel or play, with advanced analysis of language devices and structural techniques, as well as how to link them to the inner messages of the stories and the intent of the authors and playwrights, to ensure high marks in each Assessment Objective for the exam boards I sat. With some quickfire flashcards using the information ChatGPT gave me, I was well prepared for any potential question in English Literature, giving me confidence in an exam that’s widely regarded as one of the more stressful ones.

Additionally, ChatGPT can rapidly mark thesis statements, paragraphs, or even whole essays, allowing me to refine a bulletproof plan for one of my English Language questions. My teacher may not have marked all of this, let alone marked it proficiently. ChatGPT can instantly critique any sentence, any phrase, or even any word, helping me increase the accuracy and fluidity of my writing. For complex subjects, I asked AI to teach me the parts of the specification I struggled with. Compared to a lesson at school, where the teacher may brush over my weak areas but go in depth into parts of the subject I already know well, I could ask AI to target my weak points, maximising my learning and minimising the time I needed to spend revising. To sum up, AI’s speed and seemingly infinite knowledge allowed me to spend less time revising for my GCSEs, which meant I could spend more time on less monotonous and dull activities.

AI stops me from thinking for myself

Will is a 10-year-old pupil who completed his SATs this year and saw a noticeable rise in homework during Year 6!

I hadn’t really thought much about AI before the CATALYST after-school programme and reading a book on Alan Turing. I don’t want AI to think for me and I did tell one of my friends not to use it for his homework as it is just getting credit for something you haven’t done.

If I am struggling with homework I can ask my parents or grandparents, but if I was really struggling I might ask ChatGPT, as I’m not sure it’s right that someone should be very upset about not being able to do their homework when there is a way to help. I’d still want to do the work myself ideally, but it could be good for asking what exactly the question means, as some homework is easy once you understand what is being asked of you, and AI could help you with this first step.

Summary

As with much discourse, there is a real danger that we talk in homogenous terms about what students want, need and think! As this short reflective piece shows, there is real diversity of perspective, and a need to think about age, stage and context, and the very different motivations we may have for engaging with AI or not.

If you enjoyed this piece you may like to read Philip Hand’s (Horizons July 2024) blog post Enslaved by Tech?
