As a society we’re often slow to understand the impact that new technology will have on our lives. Sometimes the unexpected consequences are positive; at other times they are negative: just think of the damage done by adding lead to petrol, or the heart-breaking consequences of prescribing thalidomide as a treatment for morning sickness.
The problem lies in the fact that the full impact of new technology takes time to emerge, and the jury is still out on the long-term effects of social media – or, for that matter, artificial intelligence.
Ethical questions unanswered
We live in a time of rapid and profound technological change that affects every aspect of our lives, so why pick on AI as deserving particular scrutiny? Ask most technologists and they will tell you that AI, of all the technologies currently being developed, will likely bring the biggest changes to our world in the next few decades. It will revolutionize our jobs, the services we use and even the way that we think; it could also fundamentally alter humanity’s relationship with the machines it creates.
As things stand, we are rushing heedlessly into the future with a blithe disregard for the unsolved ethical questions of AI. That’s not true of everyone: Microsoft is well aware of the racist output produced by its self-teaching Tay and Zo chatbots, while autonomous vehicle developers are grappling with the Trolley Problem – an ethical dilemma whose answer will ultimately determine who lives and who dies in an unavoidable road accident.
While not every AI application will involve life-or-death decisions, a failure to examine and answer ethical questions will lead to damaging consequences for businesses and other organizations that deploy AI-based technologies.
Peculiar challenges of AI
If this seems like a scare tactic, consider last year’s story about Compas (Correctional Offender Management Profiling for Alternative Sanctions), a machine learning tool used in U.S. courts to assess the risk of re-offending. The tool was found to mistakenly label black defendants as likely to re-offend, flagging them as future recidivists at twice the rate of white defendants.
Or take the issue of autonomous weapons systems. We already have pilotless (in fact, remotely piloted) aircraft, but should we leave the decision to launch a Hellfire missile to an algorithm?
The list of problematic questions is almost infinite: we’ve already looked at the issue of driverless cars, but what about AI applications dealing with sensitive data? The Cambridge Analytica scandal has shown what happens when organizations take a cavalier approach to personal information; without an ethical foundation, future AI applications could wreak the same damage on an unimaginable scale.
We can scoff at Terminator-style scenarios where AI gains self-awareness and turns against humanity, but the fact remains that machines are only as ethical as they are programmed to be. How then can we create an ethical framework for AI – and whose job is it to do so?
A delicate balance
There will be some who say that the answer to these difficult questions is to create a raft of legislation setting out the parameters for ethical AI, but, in my view, this would be a grave mistake.
The problems with this approach are numerous: legislation is often heavy-handed, and a government-mandated set of rules would stifle technological advances in an area where the UK enjoys an enviable lead over other nations. Moreover, politicians (no matter how well-briefed) are not the best people to decide complex, fluid questions about technologies that they do not fully understand.
That’s not to say that politicians can’t play an important role in shaping our future relationship with artificial intelligence. An example of the positive effect that parliament can have is the publication of the House of Lords AI Select Committee’s report in April. This document proposed a cross-sector code of ethics for AI based on five principles. These represent sensible proposals that would provide an ethical foundation for future AI projects, including the principles that artificial intelligence should not be used to diminish the data rights or privacy of individuals or groups, and that the autonomous power to hurt, destroy or deceive human beings should never be vested in AI.
This is a very welcome development, but we can’t leave it to politicians to shape the future of artificial intelligence. Instead, we must show businesses that making AI ethical is a matter of enlightened self-interest. It’s an argument that should resonate with any free-marketer. Businesses need a moral compass if for no other reason than that their customers, suppliers and other partners expect them to protect their interests. Being aware of the topic and some of the leading thinking on digital ethics is a good place to start – the IEEE’s second iteration of its Ethically Aligned Design framework, for example, is a rich source of insights, particularly for software engineers and data scientists.
We’re all aware of the reputational damage done to major brands in recent times by poor cybersecurity practices that have led to massive data leaks. Businesses should be approaching AI with ethics at the forefront of their strategy. We don’t want to see organizations hamstrung by fear of what could go wrong, but rather considering the ethical implications of the applications and services they create.
Every business needs to understand where it faces potential risks from AI, and having a code of ethics is an essential foundation to ensure that this technology brings as much good and as little evil as possible.

Jonathan Ebsworth
Partner, Infosys Consulting
Jonathan is an automation and artificial intelligence expert at Infosys Consulting, working with clients to explore potential use cases for innovative technology and to ensure that they establish strong centers of excellence while addressing ethical concerns. He is also one of the firm’s leading design thinking practitioners. Jonathan is a career technology specialist and has spent much of the last 30 years blending the management of successful solution delivery with providing strategic advice to clients on technology portfolio management and adoption. Prior to joining Infosys Consulting in 2015, Jonathan held senior leadership positions in large technology and consulting organizations, including VP at CSC and Capgemini, and partner at Arthur Andersen. He is a regular keynote speaker at technology conferences around the world.