Last month our reading focused on the ChatGPT frenzy, and we posted our collated Introduction to ChatGPT. We’ve continued to read along as people have tested, experimented, and started to think about ways to embed it in their practice.

This month we’re sharing two perspectives we’ve read on ChatGPT, and – since accessibility is one of our priorities as a team and as an institution – a reflective piece on inclusive practice that supports neurodiverse students.

February Reading

A Reflective Analysis on Neurodiversity and Student Wellbeing: Conceptualising Practical Strategies for Inclusive Practice

Neurodivergence is a term that encompasses a range of differences in the ways people’s brains process and react to the world. Covering conditions such as Autism, ADHD, Traumatic Brain Injury (TBI) and Dementia, the umbrella term takes in learners with widely different accessibility needs. Academics Elliot Spaeth and Amy Pearson draw on their first-hand experience of studying while neurodivergent to bring both empathy and insight to their useful and widely applicable advice.

“We emphasise the importance of questioning normative assumptions around expected student learning behaviours, and the negative impact that these assumptions can have upon neurodivergent students. We then provide several practical strategies that can be used to develop more inclusive practice, drawing upon principles embedded within a Universal Design for Learning approach.”


ChatGPT and Good Intentions in Higher Ed

This post summarises two different reactions the author is observing in the emerging online discourse about ChatGPT in Higher Education – concern about the implications for “cheating,” and enthusiasm from those already starting to use it as an educational tool with their students. Autumm Caines highlights problems with both the reactionary and the humanist pedagogical approaches. The piece acknowledges the complicated and ambivalent relationship the author is already developing with this technology, which reflects the experience of many of us – uncertain, sceptical, but also a little optimistic, and perhaps resigned to the limited control we have over how technology affects our work and lives.

“The all out rejection of this tech is appealing to me as it seems tied to dark ideologies and does seem different, perhaps more dangerous, than stuff that has come before.”

On Equity and Trust and AI

Maha Bali covers a lot of ground in this blog post, and references some of the reading we’ve been doing as a team to grow our knowledge of recent developments in AI and their implications for education. As with any new technology, it’s important to consider its impact on equity and access. Between considering the dynamics of our roles as educators and what this new technology means for our students, there’s lots of food for thought – including some questions for further consideration:

“Who has access to it, literally, to learn or to teach? Who has the digital literacies to work with it and produce something good/useful? In ways that are useful to humans?”

Coming up

We’ve also been doing some reading in preparation for the next TILT reading group, which had a great turnout last month with representatives from different departments across the institution. We welcome our Nottingham Trent University colleagues to sign up for the next Digital Reading Group Meeting, where we will be chatting about David White and Alison Le Cornu’s thoughts on Using ‘Visitors and Residents’ to visualise digital practices.