New technology, including AI systems, must be transparent and explainable. For the general public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithms’ recommendations. If we are to use AI to help make important decisions, it must be explainable. The public should be able to understand the behaviour of the algorithm without holding an advanced degree in computer science.
The purpose of AI and cognitive systems developed and applied by IBM is to augment – not replace – human intelligence. Our technology is and will be designed to enhance and extend human capability and potential. We believe AI should make all of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few.
All new and emerging transformational technologies require society to consider the ethics of their development and deployment. We need to understand and anticipate how and when they will be used, and act accordingly to ensure that their usage conforms with societal norms and values.
As with any system developed by human beings, there is potential for bias in AI. Bias can be introduced through the algorithms, the training methods, and the data sets themselves. To manage this, IBM is committed to being transparent about all of its efforts to develop and deploy these technologies, in combination with our existing best practices around data integrity. We are also consulting with leading researchers, ethicists and experts on bias from around the world to inform our thinking and approaches to these technologies.
In many cases, the biases of AI systems simply reflect the hidden biases of society. But because AI systems can often uncover these biases as they are engaged, they provide a unique opportunity to identify and remove prejudice from many of our social systems.
AI systems offer great promise for societal benefit. For the first time, we have the opportunity to transform complex and unstructured data into actionable insight. We can reveal hidden patterns, advance science, and intimately understand how some of the fundamental systems that facilitate life on this planet work. This insight can and will be used to inform decisions on everything from business strategy to government policy.
As an industry leader, IBM has a responsibility to ensure that AI systems are developed in the right way, deployed for the right reasons, and without unintended consequences. This requires developing an ethical framework that guides both how we design and how we use AI solutions, giving businesses, consumers and societies the confidence to trust these systems and fully benefit from their capabilities.
IBM’s trust and transparency capabilities provide explanations in easy-to-understand terms, showing which factors weighed the decision in one direction versus another, the confidence in the recommendation, and the factors behind that confidence. In addition, records of the model’s accuracy, performance and fairness, and the lineage of the AI systems, can be easily traced and recalled for customer service, regulatory or compliance reasons – such as GDPR compliance.
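The idea of factor-level explanations can be illustrated with a minimal sketch. This is not IBM’s actual service; it assumes a hypothetical linear scoring model whose coefficients (the weights and feature names below are invented for illustration) let us report each factor’s contribution to the decision alongside the model’s confidence:

```python
import math

# Hypothetical coefficients from an illustrative credit-approval model
# (the feature names and values below are invented for this sketch).
WEIGHTS = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.5}
BIAS = -0.3

def explain(features):
    """Return each factor's contribution to the score and the model's confidence."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    confidence = 1 / (1 + math.exp(-score))  # logistic probability of a favourable outcome
    return contributions, confidence

contribs, conf = explain({"income": 1.5, "debt_ratio": 0.4, "years_employed": 1.0})
# Positive contributions pushed the decision one way, negative ones the other.
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"confidence: {conf:.2f}")
```

Listing factors by the magnitude of their contribution is what makes such an explanation readable without a computer science degree: the customer sees which inputs mattered most, not the internals of the model.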
Additionally, the fully automated software service explains decision-making and detects bias in AI models at runtime – as decisions are being made – capturing potentially unfair outcomes as they occur. Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected.
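Runtime bias detection of this kind can be sketched in a few lines. The following is an illustrative example, not the actual service: it tracks favourable-outcome rates per group as decisions stream in and flags the model when the disparate-impact ratio drops below the common “four-fifths rule” threshold (the class name and threshold are assumptions of this sketch):

```python
from collections import defaultdict

class BiasMonitor:
    """Track favourable-outcome rates per group as decisions are made,
    flagging when the disparate-impact ratio falls below a threshold.
    Illustrative sketch; the 0.8 default follows the 'four-fifths rule'."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]

    def record(self, group, favourable):
        stats = self.counts[group]
        stats[0] += int(favourable)
        stats[1] += 1

    def disparate_impact(self):
        """Ratio of the lowest group's favourable rate to the highest group's."""
        rates = {g: fav / total for g, (fav, total) in self.counts.items() if total}
        if len(rates) < 2:
            return None  # nothing to compare yet
        return min(rates.values()) / max(rates.values())

    def is_biased(self):
        ratio = self.disparate_impact()
        return ratio is not None and ratio < self.threshold

# Feed in decisions as they occur; group "B" is approved far less often.
monitor = BiasMonitor()
for group, approved in [("A", True), ("A", True), ("A", True), ("A", False),
                        ("B", True), ("B", False), ("B", False), ("B", False)]:
    monitor.record(group, approved)
print(monitor.is_biased())
```

A real service would go further, for example by recommending additional training data for the under-served group once the monitor fires, but the core mechanism is the same: compare outcome rates across groups while decisions are being made, not after the fact.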
In addition to creating better consumer experiences, the ad unit gives marketers insights they can use to improve future campaigns. The unit works like running a focus group at unprecedented scale: the more a consumer interacts with the ad, the more the brand learns about how to serve that customer in the future. Consumers, in turn, get a superior experience with the brand. This increases the value of the ad for both the marketer and the consumer.