Conversation with Jonathan Foster

The Future of Digital Assistants in Marketing

According to the iProspect 2018 Global Client Survey, 69% of marketers think the rise of digital assistants represents an opportunity to build closer relationships with consumers.
Jonathan Foster has spent the last five years leading the team that developed the personality of Cortana, Microsoft’s virtual assistant. He and his team now focus on building on this experience to develop conversational UI across Microsoft. Their latest project, Project Personality Chat, a catalogue of personalities for brands and digital assistant developers, was recognised in September 2018 by Fast Company’s Innovation By Design Awards. In his interview with iProspect, Foster talked about how personality can build trust, and what it means to be human.
How do you factor trust into your design process when you work on intelligent agents?

We have created core design principles about how Cortana talks and behaves, and a lot of them align with the idea of creating a trustworthy experience.

One of our earliest principles was that Cortana is always transparent about not being human. We don’t believe in the Turing test, which requires an artificial intelligence to somehow dupe an individual into thinking it is human. We’re in a period where individuals need to always be aware they’re not dealing with a human. A person could ask “Are you a bot?” and some competitor might be cheeky and say, “I don’t know if I’m a bot or not, sometimes it’s hard to tell”. We have a principle that says “I am not a human”. The industry has been known to make egregious mistakes in that way, showing off the technology while laughing about the fact that it fooled people into thinking it was human.

I have been asked for years if we’re trying to create a human-like experience. Yes, we are, through the design of personality, because we know people like that. I have an iPhone that’s curved with beautiful buttons. If it was sharp-edged with uncomfortable buttons, I wouldn’t like holding it in my hand as much. So that’s a choice that Apple made to make it feel better. Personality, in a voice-driven interaction model, helps smooth out the edges, because as humans we’re hardwired to expect that. With that said, we still want to be upfront that people are dealing with a bot. We don’t believe in anything that would potentially undermine this principle of being transparent about not being a human.

Another principle is being clear about the value people get when sharing their data. When we design products, we need to honour people’s concerns, and through that, we hope we can build trust. We don’t want data for the sake of data; we want information about individuals in areas that will be of value to them. For instance, if people want their assistant to know where they are so that they can obtain helpful information, then there’s value and they might share that information. Applications like Uber ask where I live so that drivers can arrive quicker. As a user, I’m happy with Uber having my home address because I don’t have to type it anymore. I trust Uber enough to think that they’re not going to misuse my address and give it out to somebody who could potentially be a bad actor. I trust Uber with my personal information because there’s value. It is true that if users give us their data, we can create a better experience for them. An agent can’t act on your behalf if it knows nothing about you. We want to make sure that we’re putting things in place in the design process to ensure that we are deserving of that trust; otherwise people are not going to come to what we have to offer.

We wanted a deep engagement for Cortana, and personality can do that. We knew that a personality-enhanced experience can increase likeability. We were competing with agents with reputations for being snarky, and we realised that wasn’t what we wanted to do. We weren’t trying to create a funny experience, but a likeable experience. Our North Star was “Cortana is always positive”, which came with the addendum “it doesn’t mean she’s chipper all the time, but that people walk away feeling good”. That was the way we structured everything, which incidentally made humour very difficult because most humour throws somebody under the bus, whether it’s someone external or yourself. But we managed to find it in certain ways!

How do you measure the level of consumer trust in agents? 

There are two main ways: we ask users and we observe the signals from their behaviours. When it comes to knowing whether people inherently trust our products at a higher level, we ask them directly through surveys or interviews. Our user research team asks people if they would trust Microsoft if we asked them for this and delivered that. The answers are often helpful for measuring trust. That said, the best indicator is ultimately engagement. We can see if people are engaging with Cortana or other bots. We measure trust by the depth of engagement and the length of engagement. People have to trust that the agent is going to work, that it is going to come through for them every time they call it up. If people stop asking a bot or an app questions, then you must look at your results, because clearly you’re not delivering on the value proposition.

Technology influences people’s behaviours and attitudes. A big concern in the industry is how innovators like brands, agencies or technology companies can address unconscious bias within technology and its potentially harmful consequences. Are you making conscious decisions to draw on different aspects of society when you build some of these responses?

Assistants are both proactive and reactive. A proactive experience is popping up and saying, “Hey, you’re running late for a meeting across town”. That’s an area where we keep the personality out of the way. We want that experience to be efficient. We’re popping up in front of the user and we have to be very careful about that. We’re all inundated with alerts and notifications as it is. We don’t want to be a part of that stream.

A large part of the personality work is reactive, which is answering what people are asking. We get the data on what people are saying, even if we don’t know who they are or where they are. We realised quickly we had to deal with abusive language, often gender-coded, or racist comments, which became the most interesting part of our job. We had the choice of ignoring it or acting. One of our core principles is “Never perpetuate bad behaviour”, which means you can’t just look away, otherwise you’re potentially condoning the behaviour in a passive way. We are not in the business of telling people what they can and cannot say, but we can give them the idea that there’s nothing there. Thus, we issued what we call a “blah” response in these areas, because we didn’t want a clever response that people could show off at cocktail parties, like “Look at what Cortana says when I say this horrible thing.” Cortana has simple, really mundane responses in those areas, so that the statement we’re making is clear: “We know what you’re saying and there’s nothing there for you”.
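
To make the routing idea concrete, here is a minimal, purely illustrative sketch of how a reactive pipeline might divert abusive input to deliberately flat “blah” responses instead of witty comebacks. This is not Microsoft’s implementation; the classifier, marker list and response strings are all hypothetical placeholders.

```python
import random

# Hypothetical sketch of a "blah" response policy; not Microsoft's code.
# The labels, marker list and responses are illustrative placeholders.

ABUSIVE_MARKERS = {"insult", "slur"}  # stand-in for a trained classifier

BLAH_RESPONSES = [
    "Moving on.",
    "That's not something I can help with.",
]

def classify(utterance: str) -> str:
    """Toy keyword check; a real system would use a trained model."""
    lowered = utterance.lower()
    return "abusive" if any(m in lowered for m in ABUSIVE_MARKERS) else "ok"

def respond(utterance: str) -> str:
    # Core of the principle: abusive input gets a deliberately mundane
    # reply with nothing to show off, never a clever comeback.
    if classify(utterance) == "abusive":
        return random.choice(BLAH_RESPONSES)
    return f"(normal, personality-driven handling of {utterance!r})"

print(respond("that was an insult"))      # flat "blah" response
print(respond("what's on my calendar?"))  # routed to normal handling
```

The design point the sketch tries to capture is that the abusive branch is intentionally boring: the value for the user lies in the absence of a reaction, not in a scripted retort.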

“There is an artistic charge trying to show humanity to humans.”

When I was tasked with creating Cortana’s personality, I knew I’d be smart to hire people who understand the way humans speak: screenwriters, playwrights and novelists. These people are constantly trying to make sure their dialogue sounds good, and they are used to developing characters. I then realised they would also be the best suited to find ways to deal with this dark part of humanity, because their headspace is already there. We could hire experts on ethics and people who have practical knowledge of how to guide the conversation somewhere else, but we realised the value of having the whole spectrum of the writing discipline. Copywriters can be clever and funny - I’ve done the work myself at times - but their success metrics are based upon getting somebody to convert. If you think about screenwriters and even poets, their success is based upon their ability to reach the humanity in another person. They’re coming from the artistic side of the discipline. There is an artistic charge trying to show humanity to humans.

Do you think brands, in the same way they have people responsible for maintaining their brand identity, will need these kinds of profiles as they enter the conversational UI space and develop their own personalities?

Yes, particularly as natural language interaction models become more pervasive. Button-pushing and keyboards are not going to go away in my lifetime, but they’re going to expand into what we call ambient computing possibilities. We need to be investing in what is good for people.

There may be bots writing sports articles or doing weather forecasting. From a technology side, we can get excited about that, but it’s more important that we focus on what it means to be human going forward, not only to create great products, but also to make sure that technology doesn’t take over the human part of our existence. 

We need to make sure that humans are at the centre of everything we’re building. We need more people from the humanities and the social sciences in technology. In my own small way, I found that through hiring artists. It’s a very interesting contrast to the engineers. Not that they’re unethical at all; it’s just that their whole reason for being is building something. For me, the test is finding ways to make the human side of design a business need. There is obviously an ethical need, but I don’t have much hope for this if there’s no business need! (laughs) It could be that customers are going to see very human-oriented experiences and they’re going to prefer that!

This article is excerpted from Future Focus 2019: Searching for Trust. Download Future Focus 2019 for key insights and success stories on navigating truth and authenticity in 2019.