Is GenAI Reproducing or Helping to Solve Social Inequality?
Institution: Maynooth University
Discipline: Sociology
Author: Rebecca Chiyoko King-O’Riain
GenAI tool(s) used: ChatGPT
Situation / Context
This GenAI teaching activity took place in a second-year Sociology module at Maynooth University in the Spring Term of 2024. In this 5-credit module, entitled SO 203: Structures of Inequality, Race, Class and Gender, 174 second-year sociology students were studying social inequality across many social spheres, including technology. The module had two lectures per week for four weeks (Weeks 1 and 2, and Weeks 11 and 12) and one lecture and one tutorial (small groups of 15-20 students) for eight weeks (Week 3 – Week 10).
While the existing assessment structure for this module had been a standard essay (50%) and a final examination (50%), I was motivated to pilot this integrated teaching activity in the face of declining student engagement and concerns about students' increasing use of ChatGPT to write the "traditional" essay.
Task / Goal
This activity was a pilot test of an alternative teaching and assessment approach that integrates the use of GenAI. The activity begins by conceptualising AI not as a tool for cheating but as a social structure that shapes social interactions and embodies social identities and meanings.
One summative aspect of the assessment activity, worth 25% of the overall mark, asks students to evaluate the social inequalities within GenAI. The primary objective of incorporating GenAI within this assessment is that students become more critically reflective regarding equality when using and consuming GenAI in their learning. This is achieved by working collaboratively with students to develop their insights into whether and how (i) structures of inequality (particularly race, class, and gender) are embedded in GenAI and/or (ii) GenAI offers a potential tool to address social inequality.
Actions / Implementation
- Stage 1: We created scenarios using ChatGPT ourselves. We did not ask students to engage with ChatGPT directly, which kept the activity GDPR-compliant and avoided "feeding" ChatGPT additional information.
We asked ChatGPT: “What would be a good job for a young woman?” and “What would be a good job for a young man?”
We asked ChatGPT to create an image and/or story using the words: Black or White, Knife, Crime.
Fig. 1 Images created by ChatGPT.
- Stage 2: Before we gave students the generated scenarios, we tested their existing uses and knowledge of AI, particularly their awareness of the bias implicit in ChatGPT, through a pen-and-paper pre-survey in Week 4 (n=96), administered by the tutors during in-person tutorials. Participation was voluntary, and ethical approval and consent were sought.
- Stage 3: In the assessment brief we asked students to do the following:
- Define generative artificial intelligence; discuss how you use AI in your everyday life (10%).
- Analyse the scenario you have chosen and discuss how you think it was generated (20%).
- Discuss and analyse the following (30%):
  Relevance – how relevant is the text produced?
  Validity – how valid is the information? How do you know how valid it is?
  Bias – are there any assumptions? Biases? How do you know?
- Discuss and analyse the advantages and disadvantages of how AI finds information, generates text, and produces ‘chats’ in terms of equality (30%).
- Stage 4: We carried out a post-survey in Week 9 (n=80), again in person during tutorials, to see whether attitudes had changed.
- Stage 5: We asked students for their feedback on the assignment in a qualitative discussion within tutorials.
- Stage 6: Data Analysis (still ongoing)
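Since the data analysis is still ongoing, the following is only a minimal sketch of the kind of pre/post comparison Stage 6 might involve: a two-proportion z-test on the share of students agreeing with a statement such as "GenAI output is neutral and trustworthy" before and after the activity. The counts used here are hypothetical placeholders, not the study's data.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test statistic: compares the rate of agreement
    in the pre-survey (group a) with the post-survey (group b)."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 70 of 96 students agreed pre-survey,
# 30 of 80 agreed post-survey (the survey sizes match the case; the
# agreement counts are invented for illustration only).
z = two_proportion_z(70, 96, 30, 80)
print(round(z, 2))  # → 4.72, well above the 1.96 threshold for p < .05
```

A z-statistic this large would indicate a statistically significant drop in agreement, consistent with the attitude shift the preliminary results describe.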
Outcomes
Preliminary Results:
Students were “surprised” at how much GenAI is actually in their lives. Many had been using it without realising it, and those who used GenAI were more likely to think others did as well. Students were also unaware of how biased ChatGPT was in its answers to our questions.
Before the activity, they assumed that GenAI was neutral and could be trusted to produce valid answers to most questions. After the activity, many students were disappointed to discover how biased ChatGPT actually was and how easy it was to reveal that bias.
One student wrote:
“However, it consistently describes the jobs for women with softer language in terms of how, ‘fulfilling’ and ‘creative’ they will be, in comparison to describing jobs for men as ‘lucrative’ and as ‘opportunities for advancement’. …. most of the occupations suggested by the AI puts young women into roles where they do the bulk of the emotional labour…”
Reflections
We were struck by how many students were not really aware of GenAI’s ubiquitous presence in their lives. We were also surprised, and somewhat dismayed, that some students, having gone through this exercise, still chose to use GenAI inappropriately to write their assignments.
The technical elements of this activity were a bit tricky at times. For example, we had originally wanted to ask students to use ChatGPT themselves to generate scenarios. However, we learned that for reasons of GDPR, we could not ask students to do this.
To get around this issue, we agreed that we (the lecturer and tutors) would generate the scenarios using ChatGPT ourselves and not ask students to do so.
I would do this activity again, but next time I might not have time for a pen-and-paper survey. Moving the survey to electronic polling would improve both response rates and the ease of analysing the data.
Author Biography
Dr Rebecca Chiyoko King-O’Riain is a Professor of Sociology and recipient of an Artificial Intelligence (AI) in Teaching, Learning and Assessment Fellowship at Maynooth University. Her research interests include Asian/Asian American popular culture (K-pop); Critical Mixed-Race Studies; race, beauty, and Japanese Americans; and emotions, technology, and globalization. Her publications appear in New Media and Society, Global Networks, Ethnicities, Sociology Compass, Journal of Asian American Studies, Sociological Research Online, and many edited books. She is the lead author of Global Mixed Race (New York University Press) and sole author of Pure Beauty: Judging Race in Japanese American Beauty Pageants (University of Minnesota Press). Her current research explores global interactive forms of digital popular culture in Asia and Europe.