Monthly Research & Market Commentary


The Winter of AI Discontent: Emergent Trends in Algorithmic Ethics

We’re not fighting a battle on tech ethics: we’re fighting a battle on ethics, period. 

Ethics is a hot-button topic of discussion for the tech industry right now, especially in the rapidly developing fields around artificial intelligence, machine learning and automated decision systems.

Why is this important for your business? What are the key risks on the battlefield for tech ethics? These are the dogs of war that may slip past your defences:

  • Your customers will abandon you for more ethical service providers.
  • Your talent will leave you for more ethical employers.
  • You will do great public harm by unintentionally exacerbating systemic inequalities.
  • You will lose the competitive edge of technical prowess to companies or countries with ethics that endanger your employees, your customers or the public at large.

Consumers and tech industry workers alike are raising their voices to influence what kinds of automated decision-making systems get designed, what decisions they should be allowed to make (both in terms of industry verticals and specific applications within them), and what data sets should be used for building the models that power those systems. Google is pledging $25 million towards ‘AI for Social Good’, yet some critics are “filled with dread”1 at the prospect of efforts spearheaded by the tech giant that may accelerate the dominance of one mode of thinking to the detriment of other groups.

As an example of this in practice, a recent global survey of autonomous driving ethics, the results of which will be used to design many of the algorithms that make life-or-death decisions in driverless cars, was criticized for not having sufficient samples from the developing world. “How important is it to include those missing perspectives? The worry is that any decisions about how autonomous vehicles (AV) ought to be designed, if influenced by the MIT survey, won’t be fully informed without those unheard voices.”2

There is a growing emphasis on mitigating the risk of negative public perception of the tech industry through robust policies on data privacy and data security. One emergent strategy is developing self-checking mechanisms in the form of ‘Chief Ethics Officers’. Much like the ‘Chief Diversity Officers’ at companies that continue to struggle to make meaningful changes in their makeup, particularly at the highest levels of leadership, these officers may end up like the eponymous character in Lois Lowry’s 1993 young adult novel The Giver: bearing alone the responsibility for remembering and viscerally experiencing all the painful lessons of history while the rest of the population exists in a state of blissful ignorance. However, some companies, like Mastercard, Bloomberg and JP Morgan, are putting their data where their conscience is by adopting proactive stances on using data philanthropically. Not only mitigating potential risks to the consumer but proactively seeking ways to use data for societally valuable change seems to be a new avenue for corporate social responsibility.

Swimming in the ethics sea

All these conversations around AI ethics reflect a wider social polarization about who benefits from systems of power and who is underrepresented, excluded or left behind.

Technology platforms, including automated decision-making systems, can exacerbate these rifts, but technology does not create them in isolation. How can we build an ethics for AI when we are not even aligned on ethics in technology as a broader field, or indeed, in most other forms of human endeavour?

The data science community is beginning to use conversations around the risks of automating systemic biases to push for a better understanding of how existing biases show up: when we can measure the impact of systemic inequality on particular populations, we are better able to understand and measure the impact of AI interventions on those populations. Cathy O’Neil’s book Weapons of Math Destruction offers three examples of pre-existing systemic inequality exacerbated by well-publicized automated systems: underrepresentation of minorities in particular industries or at particular levels of seniority, higher rates of incarceration and harsher sentencing, and consistently higher risk ratings for credit and insurance.
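To make ‘measuring the impact’ concrete, below is a minimal sketch of one measurement commonly used in algorithmic auditing, the disparate impact ratio, applied to hypothetical credit-approval outcomes. The data, the group labels and the 0.8 ‘four-fifths’ threshold are illustrative assumptions, not a prescription:

    from collections import Counter

    def selection_rates(decisions):
        """Per-group approval rates from (group, approved) pairs."""
        totals, approvals = Counter(), Counter()
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += approved  # True counts as 1, False as 0
        return {g: approvals[g] / totals[g] for g in totals}

    def disparate_impact(decisions, privileged):
        """Each group's approval rate relative to the privileged group's.
        Ratios below 0.8 are often flagged under the 'four-fifths' rule."""
        rates = selection_rates(decisions)
        return {g: rate / rates[privileged] for g, rate in rates.items()}

    # Hypothetical outcomes: group A approved 80/100, group B approved 50/100.
    decisions = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 50 + [("B", False)] * 50)
    print(disparate_impact(decisions, privileged="A"))
    # {'A': 1.0, 'B': 0.625} -> group B falls below the 0.8 threshold

Even a measurement this simple gives an audit something concrete to track before and after an intervention.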

Despite – or perhaps because of – the lack of consensus about whose ethics we are building for, a raft of approaches is being developed to create more ethical AI and more ethical technology in general. These range from data literacy campaigns to flight-plan-like checklists, from codes of practice to the emergent industry of algorithmic auditing. Notable examples include the Open Data Institute’s Data Ethics Canvas3; the IEEE’s Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)4; the UK government’s Data Ethics Workbook5 for interpreting the Data Ethics Framework designed for the public sector; A People’s Guide to AI6; and Accenture’s Fairness Tool, leading the way ahead of Microsoft, Facebook and Google, which are all developing their own automated tools in this area. But as Rumman Chowdhury, Accenture’s global ethical AI lead, says, in order for the tool to work, “Your company also needs to have a culture of ethics ... if I don’t have a company culture where I can go to my boss and say, ‘Hey, you know what, we need to get better data,’ then selling all the tools in the world wouldn’t help that problem.”7

Quis custodiet?

Is it desirable or even feasible for the tech industry to lead the way on ethics? At Web Summit 2018, Mitchell Baker, Executive Chairwoman of Mozilla, argued: “I think almost nothing at the size and scale and power of big tech and big pharma and big oil and big auto is capable of regulating itself. The incentives are not set up that way.”8

In other industries, new regulation and new regulatory bodies have been set up in response to public exposures of hazards to the social and environmental good (e.g. the US Environmental Protection Agency and the Food and Drug Administration). Why, then, are we content with proposals for Chief Ethics Officers or self-assembled ethics review boards within tech companies to oversee the emerging problem of AI scandals?

[Figure: timeline of AI scandals. Credit: Varoon Mathur, AI Now Institute 9]

Conversely, at Web Summit 2018 Palmer Luckey, founder of Oculus and Anduril, presented the opposing view: rather than waiting for legislators to set the agenda, technologists have a moral imperative to lead the way on building ethical technologies, particularly in controversial industries such as defence. “The US never would have been able to define the rules around the use of nuclear arms if we didn’t have any and our adversaries did. You can’t just be equal with people if you want to lead the discussion, you have to be the leader. Technological superiority is a prerequisite for ethical superiority. That’s why it’s important that we’re not just equals to these other places, but that we’re so far ahead that we’re not just one of the seats at the table but at the head of the table.”10

On the global stage, the UK considers ethics to be its unique contribution in a rapidly evolving industry. Examples like the Open Data Institute, CognitionX and Doteveryone certainly speak to the UK’s existing ecosystem of non-profit and private-sector organizations advocating for greater ethical development in AI. With the forthcoming establishment of the Centre for Data Ethics and Innovation, the UK is clearly seeking to capitalize on this growing capability. But the Centre is an advisory body only: will it have sufficient tools to make the necessary impact both at home and abroad? And will this come in time to establish the technological dominance that Luckey and others have stated is a prerequisite for writing the rule book?

Where do we go from here?

With so many ethical frameworks, toolkits and consulting services to choose from, what steps do industry leaders need to take to keep the dogs of AI firmly on the ethical leash?

  • Conduct an ethics audit. What are the internal perceptions of ethical responsibility within your company? Do managers and team members feel empowered to make and take ownership of ethical decisions? Do they recognize areas where their decisions have an ethical impact? Are parts of your organization already using ethical frameworks to guide their decision-making?
  • Decide what your company ethics are. Drafting an ethical framework must be more than an opinion survey, but it necessarily involves coming to a consensus about what the guiding principles of the group or organization must be. How do these relate to your core company values? What are your considerations for the ethics of obedience, the ethics of care and the ethics of reason? What internal and external resources can you draw upon to create a framework that will protect your employees and customers?
  • Design ethical feedback loops in your projects. Depending on the level of autonomy in the management style of your organization, this may entail getting team members on board to build their own canvases to support a discussion-led approach, or you may evolve a more directive checklist of responsibilities (a minimal sketch of such a checklist encoded as a release gate follows this list). Different phases of a project may also benefit from different types of intervention: during the design and experimentation stage, a design-thinking model that encourages teams to ask “What if ...?” about the tools they are building may help them anticipate red flags that would otherwise only emerge during a formal algorithmic audit at a later stage.
  • Have a plan. Above all, think about what tools you have in place for when ethical risks and ethical breaches are identified. You probably have a business continuity management (BCM) plan that outlines procedures for what happens in the event of fire, floods and other calamities. What will you do when your company becomes another dot on the AI Now scandals timeline pictured above?
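As a sketch of the more directive end of that spectrum, here is a hypothetical flight-plan-like checklist encoded as a pre-deployment release gate. The checklist items and the release_gate function are illustrative assumptions; the real items would come out of your own ethics audit and framework:

    # Hypothetical checklist items; a real list comes from your own audit.
    ETHICS_CHECKLIST = [
        "Training data reviewed for known systemic biases",
        "Per-group impact metrics computed and signed off",
        "Escalation path exists for raising ethical concerns",
        "Response plan in place for identified ethical breaches",
    ]

    def release_gate(signoffs):
        """Block release until every checklist item has an explicit sign-off."""
        missing = [item for item in ETHICS_CHECKLIST if not signoffs.get(item)]
        for item in missing:
            print(f"BLOCKED: {item}")
        return not missing

    # Example: one item left unchecked, so the gate blocks the release.
    signoffs = {item: True for item in ETHICS_CHECKLIST[:-1]}
    assert release_gate(signoffs) is False

Like a flight plan, the value lies less in the code than in producing an explicit, auditable record of who signed off on what before a system goes live.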

Do not do this alone.

If there is one thing to take away from this article, it is this: do not do this alone. Ethics is a system-level conversation about what the right thing to do is. Here at LEF we will continue to follow the emerging discourse on AI ethics and the broader field of tech ethics so that we can make informed contributions to this evolving dialogue. At a time when conversations about what is right are becoming increasingly heated, both within the tech industry and outside it, the worst thing we can do is abdicate our responsibility to shape our ethical future. Start talking.

1. https://twitter.com/DocDre/status/1057042346838245377?s=19
2. https://www.forbes.com/sites/patricklin/2018/10/29/does-ai-ethics-need-to-be-more-inclusive/#2ad239eb11f3
3. https://theodi.org/article/data-ethics-canvas/
4. https://standards.ieee.org/industry-connections/ecpais.html
5. https://www.gov.uk/government/publications/data-ethics-workbook
6. https://store.alliedmedia.org/products/a-peoples-guide-to-ai
7. https://www.fastcompany.com/40583554/this-tool-lets-you-see-and-correct-the-bias-in-an-algorithm
8. https://www.youtube.com/watch?v=gC0ATYKKoqA
9. https://www.technologyreview.com/s/612318/establishing-an-ai-code-of-ethics-will-be-harder-than-people-think/
10. https://www.youtube.com/watch?v=Eu1SHsLNN6I
