Editor’s Note: Today’s article is a guest post from VentureRadar, which helps organizations innovate and generate new growth by connecting them to the emerging technologies and expertise that can solve their challenges. If you or someone you know would like to become a guest contributor, please contact us at editor at cporising dot com. Thanks!
When reflecting on the most potentially disruptive technologies of our time, it’s clear that artificial intelligence (AI) has no rival. AI is having a seismic effect on virtually every industry, changing entire business models and the way companies operate. Over the past four years alone, enterprise use of AI grew by 270%. And while the market was valued at $1.4 billion in 2016, it is projected to reach $60 billion by 2025.
Such growth has resulted in increasingly complex AI setups that introduce new complications, including reduced interpretability and traceability. This makes it harder to prevent, or remedy, flawed AI decision-making. After all, AI isn’t perfect; and while some errors, such as failing to predict the results of the 2018 World Cup, are quite harmless, others could have fatal consequences.
This concern has brought the development of explainable AI (XAI) back into focus: the effort to explain the decisions, recommendations, predictions, and other actions made by AI systems. By working directly towards greater transparency, trust, and accountability, XAI holds great potential, with important roles to play across industries, from healthcare to finance.
Why should companies pair their AI with explainability?
Trust in Results
As with many emerging technologies, initial excitement and widespread adoption tend to run well ahead of the development of transparency and control mechanisms. AI is no exception. In fact, the technology is often said to face a “trust barrier”: 80% of people say they wouldn’t trust AI to manage their personal finances.
This is bad news for AI: the lack of trust is directly tied to acceptance of the technology. Without the ability to rely on automated outputs, and with skepticism among the users it affects, companies can never fully implement AI and reap its benefits.
That’s why companies should put explainability high on their priority list. It will only become more pressing as more advanced setups are introduced. Deep learning approaches, which are increasingly popular but inherently complex, often deliver opaque decisions. Explainable AI can provide a window into them, shedding light on individual steps and interpreting them in a way that builds trust in automation.
A New Level of Accountability
In some industries, it’s not feasible – in terms of both ethics and compliance – to provide advice or take action without some understanding of the reasoning behind the decision. In fact, a comprehensive report on the UK’s national AI policy clearly states that it is unacceptable to deploy any AI system which could have a substantial impact on an individual’s life unless it can generate a full and satisfactory explanation for the decisions it will take.
Whether in the public sector or industries like healthcare, every company aiming to connect customers with AI needs to be aware of such standards. And it may prove a very beneficial practice, too: Microsoft research entitled Maximising the AI opportunity suggests that companies that implement an ethical framework within their AI structures outperform those that don’t.
Explainable AI could also become indispensable for meeting different expectations and regulations. Companies typically have to explain to their clients or stakeholders how their AI models work, so the more data-driven and transparent these insights are, the better. For example, a company called Flowcast has developed models that transform fragmented credit statistics and globally sourced alternative data into functional datasets, which can be leveraged to build trust with banking partners’ credit risk officers.
Understand and Eliminate Bias
It’s no news that AI systems have often proven biased. For example, Amazon had to scrap its recruitment tool after the algorithm displayed clear gender bias. The venture failed because the machine learning models were trained on historical data that showed a preference towards male candidates, and the automation followed this tendency.
Explainable AI can show value wherever there’s a risk of AI partiality. It can be used to dive deep into the models and reveal built-in biases. This has many benefits, one of which is improving customer relationships that could otherwise be strained by AI usage.
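As an illustration of what such a “dive” might look like, here is a minimal sketch of a group-level bias audit in Python. The classifier outputs, the sensitive attribute, and the data are hypothetical placeholders, not a reference to any particular vendor’s tooling; the 0.8 threshold mentioned in the comments is only a commonly cited rule of thumb.

```python
# A minimal sketch of auditing a trained model for group-level bias, assuming a
# binary classifier and a hypothetical sensitive attribute ("gender"). The data
# below are synthetic stand-ins for a model's real decisions.
import numpy as np
import pandas as pd

def selection_rates(predictions: np.ndarray, groups: pd.Series) -> pd.Series:
    """Share of positive decisions (e.g. 'shortlist' or 'approve') per group."""
    return pd.Series(predictions, index=groups.index).groupby(groups).mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate; 1.0 means parity."""
    return rates.min() / rates.max()

# Hypothetical model outputs and applicant groups.
rng = np.random.default_rng(1)
groups = pd.Series(rng.choice(["female", "male"], size=1000))
predictions = (rng.random(1000) < np.where(groups == "male", 0.55, 0.40)).astype(int)

rates = selection_rates(predictions, groups)
print(rates)  # per-group selection rates
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
# A ratio well below ~0.8 is a common red flag that warrants deeper review.
```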
Banks now commonly rely on AI to carry out the analyses behind loan decisions. The models assess each candidate’s payment history, defaulted loans, personal data, and credit score. Any unfair distribution of loans could irreparably damage the institution’s reputation. With explainable AI, companies can guide their customers through the decision-making logic and demonstrate that the process is fully controlled, objective, and transparent.
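To make the loan scenario concrete, below is a minimal sketch of a per-applicant explanation, assuming a deliberately simple logistic regression model whose coefficient-times-value contributions can be read off directly. The feature names, data, and model are hypothetical stand-ins, not any bank’s actual system.

```python
# A minimal sketch of explaining a single loan decision: each feature's
# contribution (in log-odds) is coefficient * standardized value, which a loan
# officer could walk a customer through. Data and features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["payment_history", "defaulted_loans", "credit_score", "income"]

# Hypothetical training data standing in for a bank's historical records.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + X[:, 2] - 2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant: np.ndarray) -> None:
    """Print the approval probability and each feature's contribution."""
    x = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * x  # per-feature push toward approval
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    print(f"Approval probability: {prob:.2f}")
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:>16}: {c:+.3f}")

# Example: walk one (hypothetical) applicant through the decision logic.
explain_decision(np.array([1.2, 0.0, 0.8, -0.3]))
```

For more complex models, open-source libraries such as SHAP or LIME play a similar role, attributing each prediction to the inputs that drove it.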
Improved AI Performance
Apart from increased trust, better customer relations, and higher accountability, explainable AI can directly improve the performance of AI models. By understanding every aspect of decision-making, companies can identify optimization opportunities and redesign the process to make it more effective.
This is crucial for preventing negative, or even fatal, consequences. The more we rely on AI, be it self-driving cars or robot-guided surgery, the more we need to understand how it reaches its decisions in case something goes wrong. With careful data management, companies can ensure their models don’t make dubious correlations and, ultimately, flawed decisions.
While this may not seem vital in the case of a simple AI chatbot, in aerial navigation, for instance, explainable AI helps guarantee that the process is safe and reliable for all parties involved. AI faces a number of challenges, and many of them can be tackled with more explainable systems, enabling widespread adoption and trust. Empowered by this knowledge, companies can stop asking themselves, “Why did AI do that?” and instead take confident steps to improve their data-driven outcomes.
About the Author
Andrew Thomson is the founder of VentureRadar.