How to avoid the ethical pitfalls of artificial intelligence and machine learning
While many organisations are implementing artificial intelligence and machine learning solutions, there are costs and risks that need to be carefully considered.
Victoria Ticha
+61 2 9065 1744
v.ticha@unsw.edu.au
The modern business world is littered with examples of organisations that hastily rolled out artificial intelligence (AI) and machine learning (ML) solutions without due consideration of ethical issues, and that have learned very costly and painful lessons as a result. Internationally, for example, IBM is being sued, while Goldman Sachs is under investigation for using an allegedly discriminatory AI algorithm. A closer homegrown example was the Robodebt scheme, in which the federal government deployed ill-thought-through algorithmic automation to send out letters to recipients demanding repayment of social security payments dating back to 2010. The government settled a class action over the scheme after the automated mailout system targeted many legitimate social security recipients.
"That targeting of legitimate recipients was clearly illegal," says UNSW Business School's Peter Leonard, a Professor of Practice in the School of Information Systems & Technology Management and the School of Management and Governance. "Government decision-makers are required by law to take into account all relevant considerations and only relevant considerations, and authorising automated demands to be made of legitimate recipients was not a proper application of discretions by an administrative decision-maker."
Prof. Leonard says Robodebt is an important example of what can go wrong when due care and consideration are not factored into the use of algorithms. "When automation goes wrong, it usually does so quickly and at scale. And when things go wrong at scale, you don't need each payout to be much for it to be a very large amount when added together across a cohort."
Technological developments very often run ahead of both government laws and regulations and organisational policies around ethics and governance. AI and ML are classic examples of this, and Prof. Leonard explains there is major "translational" work to be done to bolster companies' ethical frameworks.
"There's still a very large gap between government policymakers, regulators, business, and academia. I don't think there are many people today bridging that gap," he observes. "It requires translational work, with translation between those different spheres of activity and ways of thinking. Academics, for example, need to think outside their particular discipline, department or school. And they have to think about how businesses and other organisations actually make decisions, in order to adapt their view of what needs to be done to suit the dynamic and unpredictable nature of business activity nowadays. So it isn't easy, but it never was."
Prof. Leonard says organisations are "feeling their way to better behaviour in this space". He thinks that many organisations now care about adverse societal impacts of their business practices, but don't yet know how to build governance and assurance to mitigate risks associated with data and technology-driven innovation. "They don't know how to translate what are often pretty high-level statements about corporate social responsibility, good behaviour or ethics – call it what you will – into consistently reliable action, to give practical effect to those principles in how they make their business decisions every day. That gap creates real vulnerabilities for many corporations," he says.
Data privacy serves as an example of what should be done in this space. Organisations have become quite good at working out how to evaluate whether a particular form of corporate behaviour is appropriately protective of the data privacy rights of individuals. This is achieved through "privacy impact assessments", which are overseen by privacy officers, lawyers and other professionals who are trained to understand whether or not a particular practice in the collection and handling of personal information about individuals may cause harm to those individuals.
"There's an example of how what can be a pretty amorphous concept – a breach of privacy – is reduced to something concrete and given effect through a process that leads to an outcome with recommendations about what the business should do," Prof. Leonard says.
Disconnects also exist between the key functional stakeholders required to make sound, holistic judgements around ethics in AI and ML. "There is a gap between the bit that is the data analytics and AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes," says Prof. Leonard. "So, you have to look not only at what the technology and the AI is doing, but how that is integrated into the making of the decision by an organisation."
This problem exists in many fields, and one in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that depend on the use of advertising technology, which is in turn managed by a technology team. Data privacy, meanwhile, is managed by yet another team, and Prof. Leonard says these teams often don't speak the same language as each other, which makes it difficult to arrive at a strategically cohesive decision.
Some organisations are addressing this issue by creating new roles, such as a chief data officer or customer experience officer, who is responsible for bridging functional disconnects in applied ethics. Such individuals will often have a background in or experience with technology, data science and marketing, in addition to a broader understanding of the business than is often the case with the CIO.Â
"We're at a transitional point in time where the traditional view of IT and information systems management doesn't work anymore, because many of the issues arise out of analysis and uses of data," says Prof. Leonard. "And those uses involve the making of decisions by people outside the technology team, many of whom don't understand the limitations of the technology and the data."
Prof. Leonard was recently appointed to a new NSW Government advisory committee on AI – the first of its kind for any federal, state or territory government in Australia – to advise the NSW Minister for Digital, Victor Dominello, on how to deliver on key commitments in the state's AI strategy. One focus for the committee is how to reliably embed ethics in how, when and why NSW government departments and agencies use AI and other automation in their decision-making.
Prof. Leonard said governments and other organisations that publish aspirational statements and guidance on ethical principles of AI – but fail to go further – need to do better. "For example, the Federal Government's AI ethics principles for adoption by public and private sector entities were published over 18 months ago, but there is little evidence of adoption across the Australian economy, or that these principles are being embedded into consistently reliable and verifiable business practices," he said.
"What good is this? It is like the Ten Commandments. They are a great thing. But are people actually going to follow them? And what are we going to do if they don't?" Prof. Leonard said it is not worth publishing statements of principles unless they are supplemented with processes and methodologies for assurance and governance of all automation-assisted decision-making. "It is not enough to ensure that the AI component is fair, accountable and transparent: the end-to-end decision-making process must be reviewed."
While some regulation will also be needed to build the right incentives, Prof. Leonard said organisations first need to know how to assure good outcomes, before they are legally sanctioned and penalised for bad outcomes. "The problem for the public sector is more immediate than for the business and not-for-profit sectors, because poor algorithmic inferences leading to incorrect administrative decisions can directly contravene state and federal administrative law," he said.
In the business and not-for-profit sectors, the legal constraints are more limited in scope (principally anti-discrimination and consumer protection law). Because the legal constraints are limited, Prof. Leonard observed, reporting of the Robodebt debacle has not created the same urgency in the business sector as it has in the federal government sector.
Organisations need to be empowered to think methodically across and through possible harms, while there also needs to be adequate transparency in the system – and government policy and regulators should not lag too far behind. "A combination of these elements will help reduce the reliance on ethics within organisations, as they are provided with a strong framework for sound decision-making. And then you come behind with a big stick if they're not using the tools, or they're not using the tools properly. Carrots alone and sticks alone never work; you need the combination of the two," said Prof. Leonard.
The Australian Human Rights Commission's final report on human rights and technology was recently tabled in Federal Parliament. Human Rights Commissioner Ed Santow stated that the combination of learnings from Robodebt and the report's findings provides a "once-in-a-generation challenge and opportunity to develop the proper regulations around emerging technologies to mitigate the risks around them and ensure they benefit all members of the community". Prof. Leonard observed that "the challenge is as much to how we govern automation-aided decision-making within organisations – the human element – as it is to how we assure that technology and data analytics are fair, accountable and transparent".
A good example of the need for this can be seen in the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry. It noted that key individuals who assess and make recommendations in relation to prudential risk within banks are relatively powerless compared with those who control profit centres. "So, almost by definition, if you regard ethics and policing of economics as a cost within an organisation, and not an integral part of the making of profits by an organisation, you will end up with bad results because you don't value highly enough the management of prudential, ethical or corporate social responsibility risks," says Prof. Leonard. "You name me a sector, and I'll give you an example of it."
While he notes that larger organisations "will often fumble their way through to a reasonably good decision", another key risk exists among smaller organisations. "They don't have processes around checks and balances, and haven't thought about corporate social responsibility yet because they're not required to," says Prof. Leonard. Small organisations often work on the mantra of "moving fast and breaking things", and this approach can have a "very big impact within a very short period of time", thanks to the potentially rapid growth rate of businesses in a digital economy.
"They're the really dangerous ones, generally. This means the tools that you have to deliver have to be sufficiently simple and straightforward that they are readily applied, in such a way that an agile 'move fast and break things' type of business will actually apply them and give effect to them before they break things that really can cause harm," he says.