[Image: a colourful abstracted image of a bookshelf with books, electronic devices and plants.]

Join us as we try to broaden our understanding of the new generative technologies impacting the work of staff and students across Higher Education. This is the first of a series of short posts that aim to highlight a piece that we’ve found in some way thought-provoking, and our response(s) to it. Obviously, things are changing apace, and we want to emphasise that our thinking is also in development, always subject to fluctuation and new influences as we continue to learn – just as it should be!

We’ve been reading “Resisting Dehumanization in the Age of ‘AI’” by Emily M. Bender (2024)¹, a US computational linguist and an influential voice in conversations around artificial intelligence. The article examines how the ways AI is discussed and used can contribute to dehumanisation, and issues calls to action that might help researchers resist these effects. Bender notes that the data sets used to create AI products are sometimes “mythologized as being representative,” when in fact the outputs are shaped by decisions about “…where to collect data from, how to collect it, how to filter it, what labels to apply, who should apply the labels, how the labels are verified, and more.”

While it’s well documented that AI outputs can contain bias, this breakdown of the processes that narrow the range of global experience and knowledge being drawn on has made us think even more carefully about how we might want to use generative AI in our own work, and when we would avoid using it. We’ve also been discussing what we can do to extend the scope of experience and knowledge that we ourselves draw on – something we’d already identified as a continuing work in progress, but which has now taken on fresh urgency.

We also found it refreshing that Bender outlines actions that can be taken at a personal, rather than a systemic, level to confront the problems she describes. While these are addressed to cognitive scientists, they make enlightening reading for those of us without an insider’s knowledge of AI and machine learning (ML) research. Here are a few that we’ve been thinking about in relation to our own contexts:

Critically analysing claims about what GenAI can do

We have to remember that these products are sold to us, figuratively and literally, on the strength of their sometimes astonishing-sounding abilities. As Bender points out, if we can’t verify, by evaluating the output, that a tool can actually do what is claimed, it would be dangerous to rely on it.

Engaging with public scholarship

Bender emphasises the need for “an informed public and informed policymakers” as a counterweight to the way commercial interests are “shaping the regulatory landscape.” We might not be researchers or experts in cognitive or computer science, but we still aspire to form part of that “informed public.” Bender’s article strengthens our sense that keeping up with developments, and sharing what we’re learning (as well as seeking out diverse voices to learn from), is a small but worthwhile contribution.


¹ If your organisation/institution doesn’t have access to the article, or you’re not attached to one, give this gift link a try:

Bender, E. M. (2024). Resisting Dehumanization in the Age of “AI”. Current Directions in Psychological Science, 33(2), 114–120. https://doi.org/10.1177/09637214231217286

It will expire at some point (either after a certain time or a certain number of openings), but we wanted to offer as wide access as we’re able.

Image credit: we generated this decorative image using a GenAI tool, as an experiment that forms part of our learning. We may share our thought process and reflections on this on the blog at a later date.