Ethical AI: challenges and opportunities

Blog
Published: 03/02/2023
Dr Vaishak Belle, Young Academy of Scotland member, Reader at the University of Edinburgh, Alan Turing Faculty Fellow, Royal Society University Research Fellow

Artificial intelligence provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in diverse areas such as computational biology, law and finance.

However, such a highly positive impact is coupled with significant challenges. Machine learning techniques have become especially pervasive in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing, with far-reaching consequences. This has raised concerns about the potential for learned algorithms to become biased against certain groups.

Attributes with respect to which an algorithm should be fair include protected characteristics such as ethnicity, sex, age, nationality and marital status: such attributes should not affect any prediction made by a truly unbiased machine learning algorithm. The process of “de-biasing”, however, is not simply a matter of deleting such attributes from the data. Even if an algorithm is not given the protected attribute directly, it can still discriminate via proxy variables, which carry information about the protected attribute. For example, the name of an individual may not be regarded as a protected attribute, but it is likely to contain information about that individual’s ethnicity or sex.
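
As a minimal illustration, consider the hypothetical sketch below, written in Python with invented data: a “model” is never shown the protected attribute, yet recovers it accurately from a title extracted from a name, so any decision keyed on that proxy can still discriminate.

```python
from collections import Counter

# Toy records: (title extracted from a name, sex). "title" is the proxy;
# all records and the title-to-sex mapping are invented for illustration.
records = [
    ("Mr", "M"), ("Mr", "M"), ("Mr", "M"),
    ("Ms", "F"), ("Ms", "F"), ("Mrs", "F"),
    ("Dr", "M"), ("Dr", "F"),
]

# Tally how each title co-occurs with the protected attribute.
by_title = {}
for title, sex in records:
    by_title.setdefault(title, Counter())[sex] += 1

# Predict the majority sex for each title: a "model" that was never
# shown the protected attribute, only the proxy.
correct = sum(counts.most_common(1)[0][1] for counts in by_title.values())
print(f"proxy-only accuracy: {correct / len(records):.2f}")  # prints 0.88
```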

Unfair algorithmic biases can also be introduced when an algorithm is trained on historical data, particularly where the training labels were allocated at human discretion. In such a scenario, the algorithm can inherit the historical biases of those who allocated the labels, and so learn to discriminate against attributes prevalent in a particular group.
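
The following hypothetical sketch, with an invented bias mechanism and simulated data, illustrates the point: merit is identically distributed across two groups, but a labeller who holds one group to a higher bar produces training labels from which a model learns unequal approval rates.

```python
import random

random.seed(0)

def historical_label(group, merit):
    # A hypothetical biased labeller: holds group "B" to a higher bar.
    threshold = 0.5 if group == "A" else 0.7
    return int(merit > threshold)

# Merit is drawn identically for both groups; only the labelling differs.
data = [(group, random.random()) for group in "AB" * 500]
labels = [historical_label(group, merit) for group, merit in data]

# A trivial "model": the per-group approval rate learned from the labels.
for g in "AB":
    approved = sum(l for (group, _), l in zip(data, labels) if group == g)
    print(g, "approval rate learned from history:", round(approved / 500, 2))
# Prints roughly 0.5 for A and 0.3 for B: merit was identically
# distributed, yet the labeller's bias has become the model's bias.
```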

What is also very troubling is that there are multiple (and often mutually exclusive) definitions of what it means for an algorithm to be unbiased, and there remains a stark lack of agreement on this subject within the academic community. Implementing one or more formal definitions, moreover, might fail altogether to capture more holistic fairness ideals such as egalitarianism.
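
To see how two widely used formal criteria can conflict, consider the hypothetical sketch below, with invented numbers: demographic parity (equal selection rates across groups) can hold while equal opportunity (equal true positive rates) fails, once the groups’ base rates differ.

```python
# (group, true_label, predicted_label) for an invented toy classifier.
outcomes = [
    # Group A: 3 of 4 truly positive.
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 1),
    # Group B: 1 of 4 truly positive.
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 1), ("B", 0, 0),
]

def selection_rate(group):
    # Demographic parity compares this quantity across groups.
    preds = [p for g, _, p in outcomes if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    # Equal opportunity compares this quantity across groups.
    preds = [p for g, y, p in outcomes if g == group and y == 1]
    return sum(preds) / len(preds)

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g),
          "TPR:", round(true_positive_rate(g), 2))
# Both groups have a selection rate of 0.75 (demographic parity holds),
# yet the true positive rates are 0.67 vs 1.0 (equal opportunity fails),
# because the groups' base rates differ.
```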

Many feel that a formal notion of fairness may not be appropriate at all, given the complex social contexts in which human-machine interactions occur.

What then lies ahead for the science of AI? A couple of avenues seem like good starting points. There is considerable excitement about so-called human-in-the-loop systems, where there is continuous interaction between a human expert and the system. However, care needs to be taken: simply delegating responsibility for critical decisions to humans in an ad hoc fashion can be problematic. Critical actions can be hard to identify immediately, and it is often only the ramifications of those actions that raise the alarm, by which point it may be too late for the human to intervene.

Moreover, understanding the AI system’s rationale is a challenge in itself, as reflected in the burgeoning field of explainable artificial intelligence. A better thought-out option is to balance data-driven learning with common-sense background knowledge. Such knowledge might relate concepts, categories and properties of the physical world, enabling the system to reason computationally about the consequences of actions and therefore provide an intelligent interface to the human.

These are just small steps towards automation that is accountable. Legal and regulatory work is needed to understand the impact of such applications on human capital, and whether such automation is desirable in the first place. Computational abstractions need to be part of a larger ecosystem in which accountability and responsibility are understood more broadly, so that we can build a better future together.


Dr Vaishak Belle is a Young Academy of Scotland member, a Reader at the University of Edinburgh, an Alan Turing Faculty Fellow, and a Royal Society University Research Fellow.

This article originally appeared in ReSourcE Winter 2022.

The RSE’s blog series offers personal views on a variety of issues. These views are not those of the RSE and are intended to offer different perspectives on a range of current issues.