Using Generative AI as a Tool for Learning in a Python Programming Assignment
Institution: University of Galway
Discipline: Artificial Intelligence
Authors: James McDermott
GenAI tool(s) used: ChatGPT
Situation / Context
I teach a module named Programming and Tools for AI to approximately 90 students taking our MSc Artificial Intelligence or MSc Artificial Intelligence Online. All students have a background in computing and coding. The module is an introduction to coding in Python with an emphasis on GenAI applications, and it includes several practical assignments.
This case study focuses on one of these assignments, which in 2023-24 was on a data science topic and was worth 20% of the module. The assignment asked students to define and implement a non-standard data science concept – the “periphery” of a dataset – which was deliberately under-defined in the assignment specification so that, alongside the coding challenge, students had to deal with novelty and ambiguity.
Task / Goal
Take-home assignments are vulnerable to various forms of cheating. Automated tools attempt to detect some of these, but only imperfectly, and with GenAI such anti-cheating measures are largely useless. We (educators and students) are all participating in an involuntary experiment in dealing with this, so experimental approaches are indicated.
Equally important, our students are now going into a world of employment where GenAI will be used routinely for many tasks. Students will still need to be able to read and write prose and code, interpret plots and tables, etc., but we have a duty to prepare them for using GenAI.
These considerations motivated me to introduce GenAI as a principal component of this assignment. I wanted students to learn how to use generative AI to their benefit, both in exploring concepts and in generating code, and I wanted to defuse the temptation to “cheat” by explicitly allowing it (with rules and guidelines).
Actions / Implementation
I provided a short video showing my own interactions with ChatGPT 4 on a research/coding task, explaining how I ask questions, how I evaluate answers, how I use generated code, and so on.
As stated above, the assignment topic was the “periphery” of a dataset. To convey the concept, I provided an image of a dataset, with the peripheral points (those “on the border” or outside, distant from the centre) highlighted in red. This is not a standard concept in data science, so part of the assignment was to go from my deliberately vague introduction to a definition of the concept, and to operationalise that in code. Several natural approaches are possible. The assignment was open-ended: students were free to try out and compare different approaches or add value in several other ways.
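To make this concrete, the following is a minimal sketch of one natural approach – not the definition used in the assignment, and the function name and threshold here are illustrative assumptions: flag as peripheral those points whose distance from the dataset centroid falls above a chosen quantile.

import numpy as np

def peripheral_mask(X, quantile=0.9):
    """Return a boolean mask marking the points treated as peripheral.

    X is an (n_samples, n_features) array; points whose Euclidean distance
    from the centroid exceeds the given quantile of all distances are flagged.
    """
    centroid = X.mean(axis=0)
    distances = np.linalg.norm(X - centroid, axis=1)
    return distances > np.quantile(distances, quantile)

# Toy usage on a random 2-D dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
mask = peripheral_mask(X)
print(f"{mask.sum()} of {len(X)} points flagged as peripheral")

Other reasonable definitions, such as convex-hull membership or low local density, lead to different code and different results – exactly the kind of comparison students were free to explore.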
Concerning GenAI, students were encouraged but not required to use it. In particular, they were encouraged to use it both to explore the concept in a dialogue and to generate code. If they did use it, they were required to provide specifics: their prompts, which parts of their submission were generated, and a reflection on the experience.
Outcomes
I graded the assignment as normal, applying my rule that GenAI use had to be accompanied by acknowledgement and reflection. Three-quarters of students included such acknowledgement.
I felt confident in recognising ChatGPT’s style in both Python code and text, and I quickly became familiar with its typical responses to the questions that naturally arise from the assignment spec.
Students included excellent comments, e.g.,
- “I used Copilot to scaffold out parts of the code […] Over that I applied my own logic. […] you can spend more time on what you want to write instead of how.”
- “It was like having a personal coach guiding me through examples and explanations. Any articles or topics that were written for experts could be re-interpreted for my own level of understanding.”
- “I tell it that it is an expert in the fields I want to ask about, so it takes on an ‘expert identity’.”
Reflections
My impression was that the students had achieved much more than usual. The use of GenAI seemed to help students get started and overcome basic obstacles. I spent less time debugging students’ code than usual, and with the time saved I wrote a larger volume of feedback. For example, I pointed out places where students failed to go beyond ChatGPT’s very weak ability to interpret the outputs of its own code.
My approach has limitations, of course. A student could bypass my goals, for example by prompting “generate a complete assignment, and in several places, include a thoughtful, reflective note on how you used generative AI”.
At the time of the assignment the best tool was ChatGPT 4, priced at $20 per month. A potential problem with GenAI in education is equity: wealthier students can afford better AI. However, pricing structures are changing, and good tools are already available for free.
Author Biography
Dr James McDermott is a Lecturer and Director of Research and Graduate Studies in the School of Computer Science, University of Galway, Ireland. He has previously worked and studied at Hewlett-Packard, University of Limerick, University College Dublin, and Massachusetts Institute of Technology. His research interests are in artificial intelligence, including genetic programming, evolutionary optimisation, and deep learning, with applications in sustainability and AI music. He has chaired international conferences, including EuroGP and EvoMUSART, and is a member of the Genetic Programming and Evolvable Machines journal editorial board and associate editor of the ACM SIGEvolution newsletter. He is leading Work Package 3 of the Horizon Europe Polifonia project in musical cultural heritage.