Personality and Personalization: How Human Should a Bot be?
Voice-controlled digital assistants offer brands a new opportunity to get in touch with their customers. The “personality” of an assistant is key because it will play the role of brand ambassador, representing its values to the outside world. If you clearly define an assistant’s profile, you can make sure that it acts consistently across all channels. Its behavior becomes predictable, which not only builds trust but also creates more personal experiences and makes it possible to adjust interactions to individual preferences. How then can you strike the right balance between being consistent and offering personalization? And what does the concept of “personality” mean when it comes to digital assistants?
Digital Assistants at Deutsche Telekom
Deutsche Telekom develops digital assistants for various usage scenarios: helping customers solve their problems and thereby increasing the quality of our service, leveraging algorithms to recommend the right contracts or cell phones in a sales situation, or serving the smart home with use cases such as controlling the lighting, heating or home entertainment systems.
All those digital assistants have two things in common. First, they are driven by voice interaction: the user engages in a natural language conversation, either by speaking or typing. Second, they use artificial intelligence.
Simply put, digital assistants are computer programs designed to interact with a customer so naturally that they appear almost human. No wonder, then, that the topic of personality deserves some serious attention. Once you start looking into digital assistants, voice assistants or chatbots, you’ll quickly realize that it’s a hot topic. So how much personality does a bot need – and what exactly constitutes a bot’s personality?
Taking a detour through movie history
Perhaps it helps to take a few cues from movie history. Intelligent machines dreamed up in Hollywood have always had a personality; just look at the R2-D2 and C-3PO droids from the Star Wars saga. C-3PO has clearly distinguishable human traits and is always a bit anxious and nervous. Little R2-D2, on the other hand, displays his emotions sparingly through blinking lights and sounds – but we still know exactly what’s going on inside the mind of this more courageous and daring robot.
The famous HAL 9000 computer in “2001: A Space Odyssey” has feelings, too. Stanley Kubrick tried hard to present him as a cold, calculating intelligence by giving him a monotonous voice, but the famous sentence “I’m sorry, Dave, I’m afraid I can’t do that” still drips with human emotion. Regret and fear are tell-tale signs of a being that possesses consciousness and a personality. We can also find intelligent machines with an attitude in modern films such as “Her” or “Ex Machina.” Here, the bots surprise their human creators with their personalities and literal “self”-confidence.
Now one could be tempted to conclude that Hollywood has nailed it and simply give digital assistants more of a human attitude. But we shouldn’t forget that the computers that gain self-awareness in movies usually pose a threat to humans. Roboticists have even coined a term for the discomfort and fear we experience when we encounter robots with very realistic human traits: the “uncanny valley.” When a computer pretends to be human, it somehow triggers our creepiness alarm.
There’s also a big difference between examples shown in movies, which exhibit true consciousness, and today’s existing chatbots and voice assistants. In our user tests, we found time and again that behavior that comes across as too human-like elicits a negative reaction. Testers are very aware that they’re talking to a computer that doesn’t know human emotions and therefore can’t be happy or feel sorry for them.
This has led us to a basic rule when designing a digital assistant: “Be honest with your users!” We don’t try to simulate emotions a machine can’t have, and we adhere to this rule when constructing dialogs.
Personality = consistent and predictable behavior
Why, you might wonder, do we even bother to write about personality? Trait theory defines personality as the traits that describe a person’s behavior while remaining constant over time. That gives us two factors that are also relevant for digital assistants: behaving in a consistent and predictable fashion over a longer period of time and in different contexts.
It’s important to distill an assistant’s clear behavioral profile from the brand values and positioning of an enterprise -- and then stick to it. How it speaks and reacts, what it looks and sounds like: these features should stay the same across all contexts and channels. That’s how customers can recognize an assistant regardless of the specific situation. They know, in other words, what they can expect.
The brand and design guidelines for digital assistants should therefore define these basic, immutable properties. It starts with simple things such as the name, voice, and visual appearance or icon of an assistant. More complex rules, such as instructions on how to write dialog or design interactions, are also important elements.
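Guidelines like these can also be captured in a machine-readable form so that every channel draws on the same definition. A minimal sketch of such a profile -- all names and values here are hypothetical, not Deutsche Telekom’s actual system:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen: brand-level properties must not change at runtime
class AssistantProfile:
    """Immutable brand/design properties shared by all channels."""
    name: str
    wake_word: str
    voice_id: str
    icon_path: str
    dialog_rules: tuple = field(default_factory=tuple)

# A hypothetical example profile
profile = AssistantProfile(
    name="ExampleBot",
    wake_word="Hey ExampleBot!",
    voice_id="de-DE-neutral-01",
    icon_path="/assets/examplebot.svg",
    dialog_rules=(
        "be honest about being a bot",
        "do not simulate emotions the machine cannot have",
    ),
)
```

Freezing the dataclass enforces the “immutable” part in code: any attempt to reassign a property raises an error, mirroring the rule that the core profile stays the same across all contexts and channels.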
What about personalization?
While it’s important that an assistant behaves consistently, personalization can’t be overlooked, either. By analyzing previous interactions and through training, an assistant should be able to constantly improve its responses to a customer and solve their problems. To do that, the assistant has to build a “relationship” and conduct individual conversations with each user.
How does this fit with the idea of overarching guidelines which are supposed to ensure a consistent appearance?
We’re using a multilevel model that I call the personalization pyramid. At its base are the immutable properties such as name, visualization, voice and some fundamental rules for writing and behavior. One such immutable property is the “hot word” or wake word for an assistant, such as “Hey Siri!” or “Alexa!”
One layer up sits the contextual level, which is determined by the channel through which you interact with an assistant. The channel alone introduces slight differences in appearance and behavior: a website allows for a visualization, while a hotline offers only a voice. The same goes for addressing the customer: on social media, the customer is greeted with the informal German “Du,” and on the official company site with the more formal “Sie.”
One more level up, we deal with the interaction content. What exactly does a user talk about with the assistant, what does he or she want?
If it’s a service request, it has to be dealt with differently than if it’s about using a product. Does the customer expect an explanation, or just a brief and concise confirmation? Is the situation relaxed and informal, or does it require a formal and courteous reaction? This level is incredibly important to make sure you react the right way in each situation.
Personalization, responding to each individual customer, resides at the very top of the pyramid. It requires knowledge of previous interactions, presets and personalized recommendations, although it’s not always possible to take the context or the individual customer into account. Missing log-in data, for instance, can thwart that intention, as can systems that aren’t well-connected enough or a request that couldn’t be clearly identified. In those cases, the foundation has to be solid -- which means basic rules for behavior and language have to be written in such a way that they work just as well for a serious service interaction as for a relaxed chat at home.
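The layering described above can be sketched as a simple resolution function: each layer of the pyramid may refine the one below it, but the immutable base is always present, so the assistant falls back gracefully when personalization data is missing. All channel names, keys and values here are illustrative assumptions, not an actual implementation:

```python
# Base layer: immutable properties, always applied first.
BASE = {
    "name": "ExampleBot",          # hypothetical assistant name
    "wake_word": "Hey ExampleBot!",
    "address": "Sie",              # default: formal German address
    "tone": "neutral",
}

# Contextual layer: the channel adjusts appearance and address.
CHANNEL_RULES = {
    "social_media": {"address": "Du", "tone": "casual"},
    "company_site": {"address": "Sie"},
    "hotline": {"visualization": None},  # voice only, no icon or avatar
}

def resolve_behavior(channel, interaction=None, user_prefs=None):
    """Layer the pyramid bottom-up; higher layers refine, never replace, the base."""
    behavior = dict(BASE)
    behavior.update(CHANNEL_RULES.get(channel, {}))
    if interaction:    # content layer: e.g. service request vs. product question
        behavior.update(interaction)
    if user_prefs:     # top layer: only available if the user is identified
        behavior.update(user_prefs)
    return behavior

# Known channel: informal address, casual tone.
social = resolve_behavior("social_media")
# No log-in data and an unknown channel: the solid base still applies.
fallback = resolve_behavior("unknown_channel", user_prefs=None)
```

The design choice worth noting is that lower layers are never removed, only overridden: if the top layers contribute nothing, the result is exactly the base profile, which is why those basic rules have to work in every situation.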
This article is based on a presentation I gave at World Usability Day 2017 in Berlin. Watch the video.