"

What is Intelligence Anyway? AI and Higher Education in the 21st Century

K. C. O’Rourke.

Technological University Dublin.

My younger son recently finished secondary school, where he enjoyed computer science and did very well in that subject in his Leaving Certificate. When it came to his choices for university study, he listed computer science courses as his top three options. He had wanted to study classics, but that subject was not available to him at his school, so he had opted for computer science instead. Earlier in the summer, my partner had a conversation with him and asked if he was happy with his stated choice of university courses. After some to-ing and fro-ing, he revised his application form and placed classics as his first choice. He was offered a place on a classics course, which he accepted, and has now started at university. Some of my friends have expressed disbelief at his decision. Computer science is surely a far better option in the twenty-first century than studying dead languages and ancient history, right? What are his job prospects going to be with a degree in classics? I am aware that this is a luxury afforded to him, not least because in Ireland our university fees are very low compared to other countries. Nonetheless, I think he made the correct choice. His future career is not exactly clear, but he is pursuing something that he really wants to do. Which brings me to the question I was trying to address in writing this piece throughout the time he was changing his mind: what is the value, if any, of higher education?

Purpose of Higher Education

In these days of perpetual-crisis news headlines – exacerbated by attention-hungry algorithms and profit-hungry social media corporations – the purpose of a university education is not always clear. The example of conspicuous billionaires who abandoned their studies at private colleges in favour of entrepreneurial endeavours may suggest that university education is unnecessary, possibly even irrelevant, for success in the world of business. So why bother with university education at all? For most people, income and education levels are inextricably linked – medicine, engineering, computer science, among others, all attract high earnings – but does possession of a university degree signify anything beyond employability? Do humanities subjects, and those with less practical application, have any relevance to twenty-first-century life? What is a degree worth anyway?

From the US to the EU, many of the traditional assumptions about higher education are now regularly questioned, with much scrutiny of the internal functioning of our universities, not least with regard to our use of public funds. Inside the academy, rather than resisting this, we should take heed. We need to ask whether working in publicly-funded higher education is a privilege or a public good (or, indeed, both). What is the point of maintaining academic departments that teach subjects with little practical application, such as classics, philosophy, and the humanities more generally? Should our universities prioritise career skills for the economy over intellectual and personal development? This questioning extends beyond the scope and purpose of higher education to issues of academic freedom and the content of the university curriculum: should academics be sanctioned if their pronouncements extend to criticism of financial benefactors or of government policy, or if they wish to express support for (or indeed opposition to) foreign wars and regimes?

The impact of GenAI

Add to all this the arrival on the scene of ChatGPT and other Generative Artificial Intelligence (GenAI) tools. Just when academia had got used to the idea that internet access is here to stay and may even be a good thing – thanks in part to the continued connectivity achieved throughout the COVID-19 pandemic – GenAI landed as an even bigger bombshell. The breakneck speed at which all things digital are evolving has brought with it the integration of artificial intelligence (AI) into almost all human activity. This digital revolution has caused alarm, even outrage, at many levels. From questions of bias and surveillance to the potential existential risks posed by superintelligent systems, the unknown implications of AI loom large in everyday discourse. Across many industries, there is a widespread fear that certain professions will soon be rendered redundant, while in higher education the debate has largely been one of suspicion, with fears of AI-generated student work diluting standards, prompting heightened vigilance in assessment and a tightening of academic integrity criteria.

Clearly, we cannot simply ignore GenAI. And so, the big question has become how we respond. After the initial panic, discussion within the academy has moved on somewhat, and now centres on whether GenAI is a blessing or a curse, and how we might control and direct its use by students. But it is clear that if an algorithm can produce top-quality reports and read medical scans with greater precision and speed than a human can, many of the traditional approaches to teaching and learning will need to be revised in a way that actively facilitates the use of such tools. And that could be painful. And relentless. As humans, we tend to stick with what we know, and as educators we tend to rely on the methods and techniques used when we ourselves were students. But as long as our internet connections hold, and while the undersea cables that connect the web remain intact and rogue programs are prevented from disabling our systems, GenAI is here to stay. As intelligent humans, we must learn to live with it. And we should help our students to use GenAI for the greater good, in a manner that helps rather than hurts the world, in turn bringing us back to the initial question regarding the value of higher education itself. To do that, we must understand what it is that we mean by artificial intelligence, and how it differs from what is achievable by humans.

Intelligence

So, what do we mean by intelligence? Britannica succinctly says that human intelligence is a “mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.” Wikipedia, the bête noire of academia, helpfully adds that intelligence differs from learning, as the latter “refers to the act of retaining facts and information or abilities and being able to recall them for future use,” while intelligence “is the cognitive ability of someone to perform these and other processes.” Intelligence, in turn, is related to the possession of knowledge and truth. As humans, not only do we know things, but we know that we know things: this belief, although not unchallenged, has often been held up as the characteristic that distinguishes humans from other animals. Such metacognition can take us into the realms of philosophy and psychology, as well as the worlds of pedagogy and even computer science. And it may well be that these disciplines are not so different after all – a revelation long known to many, but one which may provide the foundation for a new way of thinking about the university and its disciplines.

For most of the twentieth century, Anglo-American philosophy was dominated by what has been called the linguistic turn, an idea rooted (ironically enough) in mathematics and logic. Put simply, it is the idea that human language and its relationship to the world are all that we can use to constitute and represent (and thereby understand) all reality. In the words of the Austrian philosopher Ludwig Wittgenstein’s Tractatus (1922), a work composed in part during his time as a soldier in World War I, “The limits of my language mean the limits of my world”. This was also the era of James Joyce’s Ulysses (also 1922 and composed during wartime) and of Finnegans Wake (1939), both dense texts which are either works of art and genius or utter nonsense, depending on your perspective. Joyce’s use of language, certainly in the latter work, breaks all the rules of logic and is therefore considered to be art: some have made the same claim for the Tractatus, which ends: “Of that which we cannot speak, we must pass over in silence.” Wittgenstein’s Tractatus is dense, and some would say impenetrable, both in the original German and in its English translations. In fact, he later repudiated some of his earlier work in his posthumous Philosophical Investigations, a work composed in part during his time in Ireland (a plaque commemorating him sits on the steps of the Glasshouse at Dublin’s Botanic Gardens). In a similar vein, the discredited German philosopher Martin Heidegger, in his 1947 “Letter on Humanism”, observed that somehow all of human reality is contained in language: “Language is the house of Being. In its home man dwells”. The twentieth century can indeed be characterised as the century of language (in Ireland marked, among other things, by attempts to reintroduce the Irish language as the vernacular). And so it is not really surprising that language – specifically, large language models – rests at the foundation of what has come to be known as artificial intelligence.

The phrase Artificial Intelligence (abbreviated as AI) was coined by John McCarthy, an American mathematician, in 1955 in support of a funding application for a proposed summer workshop at Dartmouth College. The study, he proposed, was based on the idea that “a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture”, and would “proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Consequently, he proposed that “an attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Essentially, McCarthy was putting into practice what the philosophers had been saying during previous (and subsequent) decades: that language contains all human reality. Such an idea of reality, while attractive at one level, would appear quite limiting at another. To my mind, it seems a rather reduced way of looking at the world (to adopt academic language, it appears epistemically plausible but ontologically impoverished). Nonetheless, the essence of this idea has made its way into other disciplines, and by the late twentieth century the linguistic turn had become very evident across the humanities and social sciences.

But this “Artificial Intelligence” is not intelligence at all, or certainly not intelligence of the kind we attribute to humans. In fact, in many circles AI is known as Machine Learning, a far better term for what is at base a pattern-matching function. AI software is designed to reprogram itself in the act of pattern-matching: the more accurate data the program is exposed to, the more likely it is to reflect patterns correctly, in a manner recognisable by humans. For example, as English-speaking, literate humans we instantly recognise the letter sequences “ABCDEFG” and “abcdefg” as the first seven letters of the alphabet in upper and lower case. We also recognise that “Abacus” and “abacus” are correct spellings, while “aBBacuS” is almost certainly incorrect except, perhaps, as an imagined brand name (apt, perhaps, for an emergent software development company). We learned this as part of our education in primary school, and our ability to recognise these patterns as words that may mean something in the context of other words and sentences remains with us as long as that part of our brain continues to function. If we encounter sentences such as “The abacus looks like Gavagai” or “The gavagai penholders the Abacus”, we are given pause for thought, because the sentences seem to make sense but ultimately do not (unless, perhaps, we are familiar with the philosophy of Ludwig Wittgenstein and/or W.V.O. Quine). This is clearly a part of (but not all of) human intelligence.

Software can be programmed to recognise patterns by exposure to the billions of examples available to it on the internet, placed there by humans in the act of communication in both commercial and non-commercial contexts. Such programs can recognise that capital letters are used in certain places and lowercase in others, and that certain letters and words follow patterns, which allows them to predict the next letter or word. The software “knows” this (or, more accurately, it identifies this pattern) because of the billions of examples it has matched in the data that has been made available to it1. As anticipated by McCarthy, AI can go further and correct errors based on probability, via the sheer number of pattern-matching examples it is presented with. It can go further still and create new patterns of letters and words in sentences that reflect the identified patterns and make sense to humans. And it can go further yet and create sentences that are recognisable to humans as grammatically and syntactically correct, but which are either inaccurate or bear no relation to reality: asked to summarise the plot of Ulysses (March 2024), ChatGPT stated that “The novel culminates in a surreal episode at the seashore, where the two protagonists briefly converge before parting ways”, which readers of Joyce’s masterpiece will recognise as incorrect. If challenged about such mistakes (or hallucinations), the software is programmed to reply with words such as “My responses are based on patterns and associations found in the data used to train me”. In a reversal of Wittgenstein’s claim, we can say that for AI, at this stage, the limits of its world are indeed the limits of its language. But the more material the software is exposed to, in both written and visual forms, the better it becomes: improved versions of these publicly available systems are already being deployed by all the major software companies, each attempting to outdo the others in the pursuit of market share and profits.
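To make this idea of pattern-based prediction concrete, here is a minimal illustrative sketch in Python (my own hypothetical example: the sample text and the predict_next function are invented purely for illustration and bear no relation to how any commercial system is actually built). It counts which word follows which in a tiny sample text and then “predicts” the most frequent follower – a drastic simplification of the large language models described above, which operate over billions of learned parameters rather than raw counts, but the underlying principle is the same: prediction here is statistics over previously seen patterns, nothing more.

from collections import Counter, defaultdict

# A tiny corpus standing in for the billions of examples a real system is trained on.
sample_text = (
    "the abacus looks like an abacus "
    "the abacus sits on the desk "
    "the desk holds the abacus"
)

# Count, for each word, how often each candidate next word follows it.
follower_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if the word was never seen."""
    followers = follower_counts.get(word)
    if not followers:
        return None  # no pattern to match: the word lies outside the limits of the model's "world"
    return followers.most_common(1)[0][0]

print(predict_next("the"))      # 'abacus' -- the most frequent follower of 'the' in the corpus
print(predict_next("desk"))     # 'the' -- ties are broken by whichever pattern was seen first
print(predict_next("gavagai"))  # None -- the word never appears, so nothing can be "said" about it

Like its vastly larger cousins, the sketch has no idea what an abacus is: it can only report the patterns it has already seen, and it falls silent when asked about a word that lies outside its data.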

However, the software does not actually know anything. Unlike humans, it cannot engage with questions of truth. It has no knowledge of what any of these words or patterns mean and no ability to evaluate them (“I do not have the ability to evaluate the moral implications of a particular action or decision”). The software may appear to understand by issuing such responses but, unlike humans, it is not aware that it is performing any function at all. It is simply an advanced pattern-matching machine. Put very simply, in the words of Kate Crawford (Atlas of AI), AI is neither artificial nor intelligent. However, by telling the software that it is incorrect, we are in turn helping it to build a more accurate reflection of reality, in essence helping to feed the monster that is AI. And the intriguing thing is that, as the software is exposed to more and more data, it can begin to program itself in ways that humans cannot understand. And this, of course, is where it can begin to get frightening. The idea that software might independently execute commands to jeopardise a competitor’s software program, or, even more seriously, autonomously launch missiles in an attack on a physical location, is not implausible given the prevalence of AI software in all aspects of civil and military equipment2. And so now is certainly the time to take stock, and to investigate what we cannot fully foresee at present but may yet be able to anticipate in the future.

Artificial Intelligence: potential and challenges in Higher Education

Artificial Intelligence is already being used to excellent effect in medical diagnostics, identifying illness and disease from images and scans with greater accuracy than humans can achieve. However, as noted, the AI software does not itself know what the patterns actually signify: human intelligence determines their meaning. In this manner also, through programming and responding, as humans we imbue software with our own particular interpretations, values and biases, whether consciously or unconsciously. And as software programming and deployment is overwhelmingly a for-profit, male enterprise, AI reflects a for-profit, male world unless there is a conscious effort to the contrary. This recognition, equally, is where things can begin to get disturbing. AI systems, while designed to optimise efficiency and accuracy, can perpetuate biases or produce unethical outcomes. For instance, biased algorithms used in hiring processes or loan approvals might discriminate against certain demographic groups, exacerbating societal inequalities. Our recognition of the prevalence of such biases underscores the importance of ensuring that AI systems are designed and rigorously tested to mitigate harmful consequences. Trust is essential for the ethical adoption and acceptance of AI technologies across various domains, from healthcare to finance to criminal justice.

It is clear that AI has the potential to revolutionise the workplace and higher education for the better, making learning more personalised, accessible and effective, while also improving administration and supporting research processes. However, in a democratic society, attempts to achieve this should be made in a manner that is fair and transparent. We must address ethical concerns, data privacy and equity issues to ensure that AI technologies are deployed responsibly and inclusively, and not just for the benefit of the billion-dollar tech industry. The 2024 introduction of AI legislation by the EU goes some way towards achieving this ambition, but enterprise and higher education both need to take it a step further and educate our academic community regarding both the threats and possibilities which AI holds. And it is here that the necessity for a broader approach to education becomes evident. Just because we can do certain things does not mean that we should do them. But how can we make such decisions? Re-enter the humanities, those disciplines that explore issues concerning the human condition, whether through the lens of Greek tragedy or the philosophy of Aristotle. According to one GenAI tool (drawing heavily on Britannica and Wikipedia), “The humanities encompass branches of knowledge that focus on human beings, their culture, and the methods of inquiry derived from an appreciation of human values. These disciplines include the study of languages, literature, arts, history, and philosophy. Essentially, the humanities explore the unique ability of the human spirit to express itself through various forms of expression and critical analysis.” This, I believe, is where we find the true value of higher education and its relevance to contemporary society.

So, what does academia need to do in order to respond effectively to GenAI and its widespread use in education and in society more generally? In the first instance, we need to know what it is, and to what end it is being used. To date, such information has been the preserve of private enterprises, which have jealously guarded the secrets of their algorithms and sought to outperform each other and to bypass regulation for the sake of their own bottom line. But this lack of transparency holds us all hostage and is detrimental to the promotion of accountability, trust, ethical decision-making, and regulatory compliance in the deployment of AI. Stakeholders, especially universities, need to understand how AI systems make decisions and the factors influencing those decisions. Transparency fosters trust between AI developers and end-users: when we understand how AI systems work and the data they use, we are more likely to trust the outcomes produced by those systems. Furthermore, as AI technologies become increasingly autonomous and capable of making decisions without human intervention, questions of accountability and transparency come even more to the fore. Autonomous AI systems raise concerns about who bears responsibility for their actions and how to ensure accountability in cases of error or wrongdoing. Without clear guidelines and mechanisms for accountability, the deployment of AI in critical domains such as healthcare, criminal justice and transportation risks undermining public trust and confidence.

Privacy and surveillance represent another ethical frontier in the age of AI. Technologies such as facial recognition and predictive analytics enable unprecedented levels of data collection and analysis, something which can be used to reduce crime, but which equally raises concerns about mass surveillance and privacy infringement. The proliferation of AI-powered surveillance systems can pose significant risks to civil liberties and individual freedoms, necessitating robust ethical frameworks to safeguard privacy rights and prevent abuse. (Again, the EU laws regulating AI go some way towards achieving this.) Moreover, the potential for job displacement and economic disruption stemming from AI automation fuels debates about societal fairness and equity. While AI promises to enhance productivity and efficiency, the widespread adoption of automation technologies may exacerbate income inequality and socioeconomic disparities. Considerations surrounding job displacement highlight the need for policies that prioritise retraining and reskilling initiatives to ensure a smooth transition to an AI-driven economy. Beyond these immediate concerns, there appear to be existential risks associated with the development of AI systems, raising questions about the future of humanity and our role in shaping the trajectory of technological advancement. Such existential risks, if real, underscore the imperative of adopting a precautionary approach to AI deployment and ensuring that ethical considerations are paramount in guiding research and innovation.

But the question of ethics is more than a matter of compliance and box-ticking, of knowing which rules apply and where. Whose values are important? For example, does the belief that environmental concerns are more important than profit accumulation, or that individual liberty is more important than state control, have a basis beyond personal interests or religious belief? These are not questions of science but of the humanities. Which in turn brings us back to the question of higher education. If we merely train our students – our future workforce – to create and use AI, we are doing them a disservice. As citizens of the world, our students need to understand not just how AI works; they also need to know how to make it work for the greater good. And just what that good is has always been somewhat moot, a grey area. And grey areas are not always looked upon favourably from a STEM perspective, which, like some religious and state agencies, tends to favour black-and-white, right-or-wrong answers.

So, what does higher education need to do in order to make sense of our digital world of artificial intelligence? To begin, we must start from a position of helping students to think more broadly about their chosen fields. And to do that, as educators, we too must think more broadly about a world changed by all things digital. Existing boundaries between disciplines need to be questioned and, if necessary, breached: classicists need to have an understanding of computer science, and scientists need to have a grasp of the humanities that is more than superficial. We must actively revisit our curriculum in almost every discipline in order to prepare our graduates for a world of uncertainty that is imbued with GenAI. We need to understand that education and training, while not mutually exclusive, are not the same, and that everything we do as humans is imbued with our particular values. We have to acknowledge that there may be a difference between the needs of our students as future contributors to the economy and their understanding of what it means to be human in the twenty-first century. And to achieve that we need a change in mindset among us, the academic staff. This will not be easy: in survey after survey, academics have cited insufficient time and inadequate levels of support as reasons for their low levels of engagement with digital technologies to date. This will need to change. To be successful we need to work together, from the top down and from the bottom up, across disciplines and departments, to agree a way forward, and we need to do that now. The companion pieces in this volume demonstrate some of the ways in which we are already doing this. But we need more. More imagination. More creativity. Only in this manner can we help our students to find meaning and values relevant to the contemporary world. Along the way, we might also begin to clarify the role and purpose of our universities in the twenty-first century. And thus we can lay the foundations for a better society, indeed a better world, for future generations.

(October 2024)

Suggested further reading:

Goodlad, L. M. E. and Stone, M. (2024) “Beyond Chatbot-K: On Large Language Models, ‘Generative AI,’ and Rise of Chatbots—An Introduction”, Critical AI, 2(1).

Footnotes:

1 The legality and ethics of using such data in this manner, given that much of it is protected by copyright, remain contested. The recent revelation that Microsoft has paid large sums of money to academic publishers in order to train its AI software on the publishers’ output proved particularly controversial, drawing outrage from authors who had not been informed of the arrangement.

2 One response from MIT is the AI Risk Repository. Launched in August 2024, this documents over 700 potential risks advanced AI systems could pose and describes itself as “the most comprehensive source yet of information about previously identified issues that could arise from the creation and deployment of these models.”

License


Using GenAI in Teaching, Learning and Assessment in Irish Universities Copyright © 2025 by Dr Ana Elena Schalk Quintanar (Editor) and Dr Pauline Rooney (Editor) is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.