AI study in universities must be broad, multidisciplinary

What are some key responsibilities that emerge for higher education in the context of the rise of artificial intelligence? One narrow definition often used is that AI involves machine-based systems that rely in the first instance on human input and produce, with varying degrees of independence, predictions or recommendations.

Moving beyond this technical definition, AI can also be described in relation to its societal impact as an imperial power.

Rather than focusing on the impact of AI on education or research, which are important areas that are being well researched, my focus is more philosophical. What roles does AI play in society, and in light of that, what responsibilities emerge for higher education?

Discussion of the impact of AI on society is often sharply polarised between an AI utopian and an AI dystopian view.

In relation to the future of work, the utopians believe that dangerous and repetitive tasks will be replaced by robots. Human beings will be freed to be more creative. We will even have the freedom of not working because we will all receive a universal basic income.

In terms of planetary scale, AI will solve the climate crisis. For example, there are claims that AI can be used to combat carbon emissions through a data-driven approach, contributing to a greener society.

The AI dystopians on the other hand warn us that AI and automation will replace human workers, leading to job loss and economic insecurity. They cite privacy concerns and show us how workers are increasingly subjected to hyper-surveillance.

They also point to an existential threat: that we may be laying the foundations for a super intelligent AI that is capable of learning at an exponential rate. It could surpass human intelligence and control and evolve to dominate or wipe out humanity.

AI and power

The problem with both of these views is that AI is presented as a disembodied entity, floating above the material world, engaging in advanced techno-computations and orchestrating grand designs across our planet.

As researchers such as Professor Kate Crawford reveal, AI is embodied and material: it feeds off the labour of people in the lithium mines, and it draws on our planet’s energy. It does not come to life independently but is born inside dominant beliefs and political will in the context of digital capitalism, reinforcing, disrupting and producing new social relations.

AI has been referred to as an imperial power because it seeps and spreads into the fabric of our society, exerting greater control. AI is used to make decisions in cybersecurity, health, banking, crime, what we read, how we shop, who we connect with – sometimes with our permission and often without our knowledge.

While there is much fear that AI will fall into the hands of unscrupulous dictators, much of the AI infrastructure and superstructure is already dominated by a small number of mega corporations working with powerful governments engaged in an AI race against other countries.

There are pressures for state regulation to be reduced to allow disruptive innovation for geopolitical advantage, even when this erodes the common good. This form of digital capitalism has the potential to centralise power to refashion the world to the benefit of this powerful minority in various ways.

First, AI introduces new forms of capitalist accumulation through mass data harvesting, including the products of human intellectual and creative labour such as art. The most powerful companies such as Microsoft, Apple, Google and Meta unilaterally seize human knowledge and seal it off to sell for profit in machine-recomposed form, without the consent or compensation of the humans whose labour trained the machines.

Second, there is a concern that AI can negatively impact democracy and reinforce political divisions as algorithms steer people into echo chambers of repeated and reinforced media and political content.

Third, AI sanitises discrimination for profit, leading to what researchers have called data-centric oppression. Unverified, untracked models trained on data that use skin colour, class, gender or even accent as a proxy limit how much money a person can borrow, select people for jobs and even predict whether people will commit a crime.

These deeply flawed predictions are made on the basis of already existing prejudices in society with no consequences for the creators but huge consequences for people’s lives.

AI takes on what has been termed the role of an oracle – the decision-making is mystical, but because it relies on big data and technology, it appears to be real and neutral, and so goes uncontested. In this way AI accelerates technocratic power to intensify digital capitalism as the newest frontier of predatory capitalism.

AI – An autonomous artificial actor?

There is, however, a further issue to think about beyond political context.

AI theorists are also considering whether AI can be classified as an autonomous artificial actor with quasi-autonomous decision-making capacities. They refer in particular to deep neural networks, whose complex layered architectures are non-linear, highly compressed and capable of an enormous number of configurations.

It is not possible for human beings to analytically comprehend what nodes and layers have learned and how they have interacted to transform a representation at one level into another representation at a higher, more abstract level.

The result is that the designer of the program can see the input and the output but will have no idea of how the solution was arrived at.
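To make the point concrete, below is a minimal sketch (my own illustration, not drawn from the keynote) of a tiny neural network trained with plain numpy on the XOR problem. Even at this toy scale, the ‘knowledge’ the network acquires is spread across arrays of numbers: the designer can inspect the inputs, the outputs and every weight, yet no human-readable rule emerges from them.

```python
# Minimal sketch: a 2 -> 8 -> 1 neural network trained on XOR with numpy.
# Everything here is illustrative; the point is that the learned parameters
# remain opaque even though the whole system is visible to its designer.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: input -> hidden representation -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient layer by layer
    # and nudge every parameter a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# The designer can see the inputs and the outputs ...
print("predictions:", out.round(2).ravel())
# ... but the learned weights carry no human-readable explanation.
print("input-to-hidden weights:\n", W1.round(2))
```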

Universities and AI

Given the issues raised above, what responsibilities do universities bear in relation to AI? Universities have a responsibility to embrace rather than resist AI. Simply banning the use of generative forms of AI in education is counterproductive.

Indeed, the use of AI in universities has huge potential. For example, the ability to personalise learning offers numerous benefits in relation to equity. The use of AI in research can also be profound. Just one example is scientists from McMaster University and MIT using AI to discover a new antibiotic in a very short space of time to treat a deadly superbug.

AI has also been used to develop innovations for people with disabilities. For example, the Swiss start-up Biped has developed an AI co-pilot for people who are visually impaired, which films the surroundings with 3D cameras and warns the wearer with immersive sounds transmitted through bone conduction earphones.

There is also research such as that of Elon Musk’s Neuralink, which implants chips into the brain. Neuralink has made progress toward its goal of using its N1 chip to intercept signals from the brain and route them past spinal cord damage so that people who are paralysed can walk again. This work is at a very early stage and is fraught with biological and social dangers, particularly in the hands of those not concerned with the common good, but it also holds great promise.

However, universities also need to highlight the dangers and co-construct solutions in contributing to making AI a force for the global common good.

Understanding AI

First, we need to push back against the fantasies of AI as an independent technological force by developing comprehensive frameworks to understand AI in its socio-political and technological context.

In other words, we need a theory that accounts for the corporations that drive and dominate AI, the extractive mining and exploitative labour practices that sustain it, and the mass capture of data and the biases it carries; and we need to develop mechanisms to counteract these global harms.

Beyond that, we also have to think about the technological puzzle of how to break open the black box of machine learning to solve what Professor Stuart Russell has termed the problem of control. This means bringing together multidisciplinary teams comprising machine learning scientists, social scientists, philosophers and legal scholars.

This also means resisting efforts to downgrade research and teaching in the humanities and social sciences, as we cannot undertake this work without these disciplines. We also need to find ways to deepen public engagement by working with journalists, poets and actors to explain what the problems are, how they arise and what we can do to ameliorate potentially hazardous effects.

Second, the ethics of AI needs to be embedded in education for all students, not merely for those undertaking courses in AI.

We already know from our work on inclusivity and diversity that a didactic approach to issues of ethics and social justice does not work. Such courses need to be taught in a learning-by-doing manner and by bringing students into close contact with the lived experiences of those whose lives are altered, positively or negatively, by AI. We need to lessen the distance between coders and machine learning scientists on the one hand and the real people, both near and far, affected by their work on the other.

Third, in relation to greater transparency, we can call for corporations and governments to be more transparent about algorithms rather than hiding behind trade secrets and competitive advantage. However, the lack of transparency that stems from the differences in scale and form between machine learning and human reasoning is much harder to resolve.

There is emerging work on whether we can find an interface or a translation device between machine and human computations, generating meaningful explanatory dialogues for end users that can help in our quest for fair, accountable, transparent AI. Interpretability via human-machine translation is, for instance, the explicit goal of the Defense Advanced Research Projects Agency’s (DARPA’s) Explainable AI (XAI) programme, which treats explainability as essential if human users are to understand, trust and manage AI systems.
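As one very simple illustration of the kind of post-hoc explanation such work builds on, the sketch below (my own, not DARPA’s method) applies permutation importance: the model is queried purely as a black box, and we measure how much its accuracy falls when each input feature is scrambled. The feature names and the toy ‘model’ are invented for the example.

```python
# Minimal sketch of a model-agnostic explanation technique: permutation
# importance. We treat the model purely as a black box f(X) -> predictions
# and ask how much accuracy degrades when each feature is shuffled.
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 500 loan applicants with three numeric features (invented names).
# Only "income" and "existing_debt" actually drive the hidden rule.
feature_names = ["income", "existing_debt", "postcode_digit"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # hidden ground-truth rule

def black_box(X):
    """Stand-in for an opaque model: we can query it but not inspect it."""
    return (0.9 * X[:, 0] - 1.1 * X[:, 1] + 0.01 * X[:, 2] > 0).astype(int)

baseline = (black_box(X) == y).mean()

# Shuffle one feature at a time and record the drop in accuracy.
# Large drops suggest the model relies heavily on that feature.
for j, name in enumerate(feature_names):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    drop = baseline - (black_box(X_shuffled) == y).mean()
    print(f"{name:>15}: accuracy drop {drop:+.3f}")
```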

Fourth, we need to focus on resolving AI bias by calling for more representative data while at the same time addressing the systemic biases in society that are reflected in algorithms. Another way to address bias and algorithmic discrimination in the future is to invest in recruiting, and ensuring the success of, as diverse a student body as possible with a variety of life experiences.
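A small sketch of what auditing for bias can look like in practice is given below (again my own illustration, with synthetic data and invented group labels): it simply compares the rate of positive decisions a model makes across two demographic groups, one common starting point for detecting algorithmic discrimination.

```python
# Minimal sketch: a "demographic parity" check on synthetic model decisions.
# A large gap between groups is a signal to investigate the training data
# and the systemic biases it reflects, not proof of fairness or unfairness
# on its own.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic decisions for 1,000 applicants and a sensitive attribute (A or B).
group = rng.choice(["A", "B"], size=1000, p=[0.6, 0.4])
approved = np.where(group == "A",
                    rng.random(1000) < 0.55,   # group A approved ~55% of the time
                    rng.random(1000) < 0.35)   # group B approved ~35% of the time

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate, group A: {rate_a:.2f}")
print(f"approval rate, group B: {rate_b:.2f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")
```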

Finally, AI raises important questions for us about what it is to be human. The dominance of AI reduces human beings, in their totality as living, thinking, feeling, complicated beings, to numerous data points that serve as inputs for machines.

At the same time, AI is being humanised. When AI systems get things wrong we are not told that there is a computational problem or a glitch in the system but that the machine is ‘hallucinating’. This gives us the sense of a quirky, edgy, trippy being.

More worrying for me is when a deep neural network turns back on itself to change its first inputs or its initial parameters to make its predictions more accurate based on what it has ‘learnt’, with the suggestion that this means it has consciousness.

We are thus left with big questions for university research, and for education itself, to grapple with: what is thinking, what is learning, what is consciousness and what does it mean to be human?

Professor Rajani Naidoo is vice-president (community and inclusion), UNESCO Chair in Higher Education Management and co-director of the International Centre for Higher Education Management at the University of Bath in the United Kingdom. This is an edited version of her keynote presentation to the recent CIHE Biennial Conference on International Higher Education.