Right now our team is doing a lot of reading and thinking about the new generative technologies impacting the work of staff and students across Higher Education. We want to highlight that these are our developing thoughts within a developing situation, and our thinking remains open to change as we continue to learn.

What we’re up to

Our team is just finishing up a toolkit of reflective questions that is designed to support our colleagues in engaging students in conversation about Generative AI (GenAI). The reflective questions and considerations aim to create space for our colleagues to consider the impact of GenAI tools, and to help structure their thinking and planning around how they will use them with students.

Since this was such a reflective task, with so much critical appraisal built into the work we were doing (and since we consider critical engagement to be essential when using GenAI tools), we felt it would be a great opportunity for us to have a go at using GenAI ourselves and examine how useful the output might be.

How we prepared

As a team, we’re pausing to reflect before using GenAI tools, given the significant sustainability costs and other ethical issues, including bias, data privacy, and accuracy of outputs.

Ultimately, we decided that this was an occasion where we wanted to try using GenAI as an opportunity to learn more about how we might use it in our work, and to better understand its capabilities. This helps inform our support for colleagues who are considering its use in their contexts.

We felt we had enough expertise and access to good quality information sources to evaluate its outputs in this area. We were also explicitly addressing bias as part of this project, with input from other colleagues in order to include a variety of perspectives, which helps us to recognise and mitigate bias that might be present in the output generated by the GenAI tool.

Finally, we decided we would share our reflections as a way to be transparent about our use, and in the hopes that this could make helpful reading for others who are also exploring and reflecting on their AI use.  

We were careful not to include any sensitive data when writing our prompt, and used Microsoft Copilot (our institutionally supported tool) whilst signed in, to ensure the highest level of data privacy available to us.

What we asked for

A suggested approach for prompting GenAI tools at NTU is the RTCD approach: defining the Role the GenAI tool should take, the Task it should fulfil, the Constraints of the prompt (such as limitations on length, style or tone), and the Data that should be considered (such as specific context or information).

Our prompt:

“You are an educator of Higher Education Arts and Humanities students. Generate a list of no more than 25 reflective questions that will help you to prepare for a student conversation about the use of AI in your course through encouraging consideration of important angles to be covered in that conversation. Topics could include critical AI literacy, assessment and feedback, inclusivity, ethics and positionality, but you should not be limited to these topics. Divide the questions into sections by topic. The questions should be open and reflective, allow for multiple possible answers.”
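For readers who want to experiment with the RTCD approach themselves, the structure can be sketched as a small template. This is purely illustrative (the function name and assembly are our own, not an NTU-provided tool), but it shows how the four components combine into a single prompt like the one above.

```python
# Illustrative sketch of the RTCD prompt structure: Role, Task,
# Constraints, Data. The helper below is hypothetical, not an
# official NTU or Microsoft Copilot tool.

def build_rtcd_prompt(role: str, task: str, constraints: str, data: str) -> str:
    """Assemble a single prompt string from the four RTCD components."""
    return " ".join([
        f"You are {role}.",  # Role: who the tool should act as
        task,                # Task: what it should produce
        constraints,         # Constraints: length, style, tone
        data,                # Data: specific context to consider
    ])

prompt = build_rtcd_prompt(
    role="an educator of Higher Education Arts and Humanities students",
    task="Generate a list of no more than 25 reflective questions "
         "to help prepare for a student conversation about the use of AI.",
    constraints="Divide the questions into sections by topic; "
                "keep them open and reflective.",
    data="Topics could include critical AI literacy, assessment and "
         "feedback, inclusivity, ethics and positionality.",
)
print(prompt)
```

Keeping the four parts separate like this makes it easier to iterate on one component (say, tightening the Constraints) without rewriting the whole prompt.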

What GenAI offered

Below we have shared some of the questions that were output by Microsoft Copilot, and our thoughts on their usefulness for this task.

Questions without reflection

Some of the questions lacked the important component of reflection. For example:

  1. How do you define AI, and what are its key components?
  2. How can AI be used to support diverse learning needs and styles?

(The mention of learning styles was also a red flag to us – the concept of VAK learning styles is persistent, but now widely understood to lack evidence of effectiveness).

These sorts of questions, we felt, didn’t support a reflective engagement with the use of GenAI; they felt more like a knowledge assessment, which is not the purpose of the toolkit.

Areas we had considered

Some of the questions that the GenAI tool offered were in areas that we had already considered, such as:

  1. How can we address potential biases in AI algorithms that may affect marginalized groups?
  2. In what ways can AI enhance or hinder your learning experience?

With these questions, the GenAI suggestions were less nuanced than the questions we had already composed, and understandably less tailored to our context and our colleagues. Since GenAI tools output the statistically most likely text given the constraints of the prompt, we expect this would be a common issue. We would have to spend more time developing prompts to see whether we could generate something better suited to our context.

Perhaps some of the questions would have been of some use as a starting point, so timing – when to use GenAI – is something we’ll consider in future.

Buzzword soup

In some areas, the output combined educational buzzwords with GenAI uses, offering questions that seemed to suggest potential uses of GenAI which might not be possible, practicable or supportive.

Examples:

  1. Can we foster a collaborative learning environment that integrates AI effectively?
  2. How can AI help bridge the gap between different cultural and linguistic backgrounds in the classroom?

Here, we couldn’t identify pedagogical reasoning behind the suggestions, but the implication that these are legitimate uses risks eliding the evaluative step, moving directly to the ‘how’ of GenAI tool use without considering the ‘whether’. Given the considerable evidence of bias and stereotyping of marginalised groups present in GenAI outputs, for example, the idea that it might be of use in bridging cultural gaps seems a complex proposition.

Our conclusions

Whilst we didn’t end up using the suggested questions that were output by Microsoft Copilot, our experience did solidify some of our thoughts about how we might use it, and gave us some things to consider for future use.

We see even more clearly the vital role of critical thought as well as curiosity when evaluating and editing outputs. We will also consider the timing of our GenAI use in any future projects – more general suggestions might be more useful at an earlier stage. We also recognise the importance of factoring in the time required to iterate our prompts in order to try to create outputs which are more relevant to our intended use and context.

So, for this project, our knowledge of the uses we wanted to make of the toolkit, our familiarity with our audience’s context, and our experience with the practice in our School meant that the outputs didn’t add to what we had created.

We could have developed our prompt, including more of our context and specificity about the intended use of the toolkit, to see whether it was possible to generate more relevant outputs. But for this task we felt the time we might have spent doing so was better spent in crafting and refining our reflective questions ourselves. It is the precision of our context, the critical thought with which we evaluate, and the richness of our experience that shape our work and ensure our support is designed with our colleagues in mind.

Author: Bethany Witham

Learning Technologist

Learning and Teaching Support Unit (LTSU)
School of Arts and Humanities
Nottingham Trent University