I read an article by Prof Enkel about AI and how to improve consumer trust in order to quicken the pace of adoption (appended below). It takes a marketing angle: you have to properly show consumers the benefits and solve their actual problem. There is no point launching the next shiny new mousetrap (controlled by a mobile app via Bluetooth and linked to various apps that can tell you the weather, traffic conditions, air quality…) when there are no mice in the house. You might end up with a substandard solution to the greater problem. And that's not all: the cause of the problem could be something totally different and further down the line. Engineers, delve into the cause of the problem and then try to come up with something that solves it, or at least reduces it. Marketers, craft the message to communicate that clearly.
I then digressed, and it got me thinking about the role of artificial intelligence and how it will play out in our personal and professional lives. There is growing interest in deploying AI within our everyday lives to improve productivity, service, and the solutions we use. Robots may possibly come in the future, but I am not necessarily referring to that; I mean deployment within existing solutions to quicken the response and pace of service.
Take the travel industry, for example: embedding machine learning and chatbots is all the rage at the moment. Several startups, including one from the ex-Siri founders, have launched prototypes (I deem them prototypes because most have not passed the full Turing Test). To pass the Turing Test, the solution must ensure that a human interacting with it has no idea that they are actually dealing with a "robot". Read more about the Turing Test from the folks at Stanford over here. If you are really interested, a good movie to catch on this is Ex Machina.
As a resident geek, I signed up for the waiting list for x.ai, a virtual assistant, after interacting with a business contact who was happily using it. The solution helps busy business folks locate times for and schedule meetings. Dennis Mortensen, who founded the company, was determined to reduce the need for repetitive tasks, and hence developed and launched the solution. And yes, there is a waiting list, perhaps to fine-tune the solution so that responses stay prompt and the backend can cope. My dealings with Amy (yes, the chatbot's name is Amy) have been nothing less than fantastic. Responsive and able to detect the little tricks I have been throwing at her, the solution is pretty good. Springing changes in dates and requesting out-of-context items (e.g. can we meet on a boat in the desert?) have been met with relevant responses. So overall, kudos to Dennis. You should try it out. But yes, there is a waiting list, so take a number and stand in line.
Last but not least, something totally trivial: wouldn't it be cool to have a startup call itself Darth Vader and deliver an AI solution? I wouldn't want him standing beside me while I wash the dishes, but that deep breathing might be reassuring during yoga sessions!
To Get Consumers to Trust AI, Show Them Its Benefits
APRIL 17, 2017
Artificial intelligence (AI) is emerging in applications like autonomous vehicles and medical assistance devices. But even when the technology is ready to use and has been shown to meet customer demands, there’s still a great deal of skepticism among consumers. For example, a survey of more than 1,000 car buyers in Germany showed that only 5% would prefer a fully autonomous vehicle. We can find a similar number of skeptics of AI-enabled medical diagnosis systems, such as IBM’s Watson. The public’s lack of trust in AI applications may cause us to collectively neglect the possible advantages we could gain from them.
In order to understand trust in the relationship between humans and automation, we have to explore trust in two dimensions: trust in the technology and trust in the innovating firm.
In human interactions, trust is the willingness to be vulnerable to the actions of another person. But trust is an evolving and fragile phenomenon that can be destroyed even faster than it can be created. Trust is essential to reducing perceived risk, which is a combination of uncertainty and the seriousness of the potential outcome involved. Perceived risk in the context of AI stems from giving up control to a machine. Trust in automation can only evolve from predictability, dependability, and faith.
Three factors will be crucial to gaining this trust: (1) performance, meaning the application performs as expected; (2) process, meaning we have an understanding of the underlying logic of the technology; and (3) purpose, meaning we have faith in the design's intentions. Additionally, trust in the company designing the AI, and the way the firm communicates with customers, will influence whether the technology is adopted by customers. Too many high-tech companies wrongly assume that the quality of the technology alone will persuade people to use it.
In order to understand how firms have systematically enhanced trust in applied AI, my colleagues Monika Hengstler and Selina Duelli and I conducted nine case studies in the transportation and medical device industries. By comparing BMW's semi-autonomous and fully autonomous cars, Daimler's Future Truck project, ZF Friedrichshafen's driving assistance system, as well as Deutsche Bahn's semi-autonomous and fully autonomous trains and VAG Nürnberg's fully automated underground train, we gained a deeper understanding of how those companies foster trust in their AI applications. We also analyzed four cases in the medical technology industry, including IBM's Watson as an AI-empowered diagnosis system, HP's data analytics system for automated fraud detection in the healthcare sector, AiCure's medical adherence app that reminds patients to take their medication, and the Care-O-bot 3 of Fraunhofer IPA, a research platform for upcoming commercial service robot solutions. Our semi-structured interviews, follow-ups, and archival data analysis were guided by a theoretical discussion on how trust in the technology, and in the innovating firm and its communication, is facilitated.
Based on this cross-case analysis, we found that operational safety and data security are decisive factors in getting people to trust technology. Since AI-empowered technology is based on the delegation of control, it will not be trusted if it is flawed. And since negative events are more visible than positive events, operational safety alone is not sufficient for building trust. Additionally, cognitive compatibility, trialability, and usability are needed:
Cognitive compatibility describes what people feel or think about an innovation as it pertains to their values. Users tend to trust automation if the algorithms are understandable and guide them toward achieving their goals. This understandability of algorithms and the motives in AI applications directly affect the perceived predictability of the system, which, in turn, is one of the foundations of trust.
Trialability points to the fact that people who were able to visualize the concrete benefits of a new technology via a trial run reduced their perceived risk and therefore their resistance to the technology.
Usability is influenced by both the intuitiveness of the technology, and the perceived ease of use. An intuitive interface can reduce initial resistance and make the technology more accessible, particularly for less tech-savvy people. Usability testing with the target user group is an important first step toward creating this ease of use.
But even more important is the balance between control and autonomy in the technology. For efficient collaboration between humans and machines, the appropriate level of automation must be carefully defined. This is even more important in intelligent applications that are designed to change human behaviors (such as medical devices that incentivize humans to take their medications on time). The interaction should not make people feel like they’re being monitored, but rather, assisted. Appropriate incentives are important to keep people engaged with an application, ultimately motivating them to use it as intended. Our cases showed that technologies with high visibility — e.g., autonomous cars in the transportation industry, or AiCure and Care-O-bot in the healthcare industry — require more intensive efforts to foster trust in all three trust dimensions.
Our results also showed that stakeholder alignment, transparency about the development process, and gradual introduction of the technology are crucial strategies for fostering trust. Introducing innovations in a stepwise fashion can lead to more gradual social learning, which in turn builds trust. Accordingly, the established firms in our sample tended to pursue a more gradual introduction of their AI applications to allow for social learning, while younger companies such as AiCure tended to choose a more revolutionary introduction approach in order to position themselves as a technology leader. The latter approach has a high risk of rejection and the potential to cause a scandal if the underlying algorithms turn out to be flawed.
If you're trying to get consumers to trust a new AI-enabled application, communication should be proactive and open in the early stages of introducing the public to the technology, as it will influence the company's perceived credibility and trustworthiness, which in turn shapes attitude formation. In the cases we studied, when firms effectively communicated the benefits of an AI application, users perceived less risk, which resulted in greater trust and a higher likelihood of adopting the new technology.