Piecemeal approaches to AI that don’t consider ethical practices are exposing businesses to both reputational and financial risk, according to PwC.
Speaking at the World Economic Forum in Dalian, China, this week, Anand Rao, global AI leader at PwC, said failing to balance the economic benefits of AI with the impact it could have on society “represents fundamental reputational, operational and financial risks.” He added that there needs to be a fully integrated strategic plan in place to guide responsible use of AI, rather than the inconsistent, informal and ad hoc procedures the global advisory firm has observed.
“AI brings opportunity but also inherent challenges around trust and accountability,” he said. “To realize AI’s productivity prize, success requires integrated organisational and workforce strategies and planning. There is a clear need for those in the C-Suite to review the current and future AI practices within their organisation, asking questions to not just tackle potential risks, but also to identify whether adequate strategy, controls and processes are in place.”
The ethics of AI have been hotly debated, both inside and outside the business world. At its core, AI is at risk of being exposed to bias, be it through the unconscious biases of programmers or the inherent bias in the data sets chosen to power AI systems.
Beyond that, the ways in which AI is used can present its own moral minefield. AI used to create highly realistic “deepfake” audio and video recordings is raising both privacy and national security concerns, but some advertisers have nonetheless leapt at the opportunity to use it in their campaigns. There are growing concerns that AI used in facial recognition could not only create privacy problems but also increase racial profiling and political persecution. Since businesses are the ones leading investment in AI development, they have a responsibility to consider how it might be used in the real world.
To that end, PwC has launched a Responsible AI Toolkit. In addition to a free diagnostic tool to assess a business’s AI approach, it includes a set of frameworks, tools and processes that address the needs of responsible AI use, and can also be customized to a business’s needs and progress when it comes to implementation.
The kit covers the five “dimensions” businesses need to consider in order to use AI responsibly: governance (who is accountable for risk and controls on an AI solution), interpretability and explainability (giving leaders the ability to explain, understand and defend critical business decisions an AI makes), bias and fairness (being able to recognize bias in an AI system’s data source and correct it), robustness and security (ensuring AI systems meet requirements that prevent errors associated with performance issues or hacking) and ethics and regulation (contextualizing ethical considerations to ensure AI solutions are not only legal, but moral).
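To make the “bias and fairness” dimension a little more concrete, the sketch below shows one common screening check, a demographic parity gap, which compares an AI system’s positive-outcome rates across groups. It is illustrative only: the data, column names and function are hypothetical assumptions for this example and are not part of PwC’s toolkit, which does not publish implementation code.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the largest gap in positive-outcome rates across groups.

    A gap near 0 suggests the system produces favourable outcomes at similar
    rates for each group on this metric; a large gap flags a potential
    fairness issue worth investigating further.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical scored loan applications: 1 = approved, 0 = declined.
scores = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(scores, "applicant_group", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33: group A approved at 67%, group B at 33%
```

In practice a check like this would be only one of several fairness metrics applied, alongside a review of how the underlying data was collected, before deciding whether and how to correct the system.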
“AI decisions are not unlike those made by humans,” Rao explained. “In each case, you need to be able to explain your choices, and understand the associated costs and impacts. That’s not just about technology solutions for bias detection, correction, explanation and building safe and secure systems. It necessitates a new level of holistic leadership that considers the ethical and responsible dimensions of technology’s impact on business, starting on day one.”
To highlight the need for such a toolkit, PwC also released the results of a survey of 250 global senior executives. Only 25% of respondents said they would prioritize considering the ethical implications of an AI solution before implementing it. Only 20% have clearly defined processes for identifying risks associated with AI, with 60% either relying on developers, using informal processes or having no documented procedures. More than half (56%) said they would find it difficult to explain the cause if their AI did something wrong, and 39% of those who have applied AI at scale said they were only “somewhat” sure they know how to stop their AI if it goes wrong. And over half of respondents have not formalized their approach to assessing AI for bias, citing a lack of knowledge and tools and a reliance on ad hoc evaluations.