
Voicebot’s personality

Piotr Kempa, the creator of Primebot, talks about how to create a smart conversation bot and what qualities to give it.  

Let me start by saying that bots do have a personality, but it is a personality that we give them while creating the bot. First, we choose the sound of a synthesized voice or record a selected voiceover. This is the first stage of personality creation: at this point we decide whether the bot will have a male or a female voice. The second stage is usually giving it a name. Despite appearances, this is not an easy task, because the name must carry good associations, sound good in speech synthesis and be consistent with the brand image. The third stage of building the bot's personality is shaping its speech. We have to decide what language it will use, how it will address the interlocutor and how it should react when it does not understand. Everything I have mentioned should always be consistent with the client's brand image; it is this image that answers many of the questions that arise when creating the bot and its personality.


It is important for the bot to “be itself”

An important question, often discussed at conferences, is whether a voicebot should pretend to be human or present itself as a robot. At Voice Contact Center we are of the opinion that it is best when a person realizes that they are talking to a robot. Interlocutors then use simpler formulations, as if taking into account that a robot will not understand everything. So it is good for the bot to introduce itself as an "automatic assistant" or something similar. This is also where a completely unintended benefit of speech synthesis comes to light: it often sounds a bit more "robotic", thanks to which more people notice that they are talking to a bot. With a recorded voiceover this may be harder to achieve. Another aspect of speech synthesis is writing utterances in such a way that they sound good when passed through the synthesizer. Most often you should simply listen to the bot and correct the utterance until it sounds as natural as possible; speech synthesis systems can sometimes surprise us with unexpected stumbles in pronunciation.
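To make that concrete, here is a minimal, illustrative sketch in Python of how an utterance might be tuned for a speech synthesizer using SSML markup. The article does not prescribe any particular tool; SSML tags such as break are widely supported, but every TTS engine honours a slightly different subset, so treat this purely as an assumption-laden example:

    # Illustrative only: wrapping a bot utterance in SSML so it can be
    # "corrected" for the synthesizer (e.g. by inserting a short pause).
    def to_ssml(greeting: str, pause_ms: int = 300) -> str:
        """Build an SSML utterance with a pause after the greeting so the
        synthesized voice does not rush straight into the next sentence."""
        return (
            "<speak>"
            + greeting
            + f'<break time="{pause_ms}ms"/>'
            + "I am an automatic assistant. How can I help you?"
            + "</speak>"
        )

    print(to_ssml("Good morning."))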


Teaching the bot

Teaching a bot is a very important part of creating it. First we have to teach the bot what to say, and second, what a human may say to it; in other words, we train its "brain" to understand intentions. Another skill is dialogue management: how to react to what a human says and how to direct the conversation in a given situation. Finally, we teach the bot how to retrieve and store data in the customer's systems.

Using the metaphor of growing up, we can say that the school-age stage of a bot is its first independent attempts at conversation. The bot has been prepared by its creators, tested and is potentially ready for independent life, although it still needs support and education. We can start talking to it as a test; after all, it is worth seeing how what we are buying works, and therefore how the bot will ultimately represent us to our customers on the other side of the handset. During these conversations you need to watch the bot closely: pay attention to how it performs, whether it makes mistakes, in what way, and what its shortcomings are. It needs to be educated constantly, not only at this stage but always! Bots work best in repeatable, preferably simple processes. The ideal is when the bot immediately knows what the conversation is about, handles a specific business need or guides the caller through a specific process. In a customer service office we can use bots for satisfaction surveys, debt collection, making appointments, hotline service and so on.
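As an illustration of the two skills mentioned above, understanding intentions and managing the dialogue, the sketch below shows a deliberately naive Python version. The intents, training phrases and actions are invented for this example; real voicebots such as Primebot use far more sophisticated language understanding:

    # Hypothetical intents and training phrases, for illustration only.
    TRAINING_PHRASES = {
        "make_appointment": ["I'd like to book a visit", "can I make an appointment"],
        "talk_to_human": ["connect me to a consultant", "I want to talk to a person"],
    }

    def recognise_intent(utterance: str):
        """Very naive intent matcher: pick the intent whose training phrase
        shares the most words with the caller's utterance."""
        words = set(utterance.lower().split())
        best, best_score = None, 0
        for intent, phrases in TRAINING_PHRASES.items():
            score = max(len(words & set(p.lower().split())) for p in phrases)
            if score > best_score:
                best, best_score = intent, score
        return best

    def next_action(intent, misunderstandings: int) -> str:
        """Simple dialogue policy: handle known intents, ask the caller to
        rephrase, and after repeated misunderstandings hand over to a human."""
        if intent == "make_appointment":
            return "ask_for_preferred_date"
        if intent == "talk_to_human" or misunderstandings >= 2:
            return "transfer_to_consultant"
        return "ask_caller_to_rephrase"

    print(next_action(recognise_intent("I want to talk to a person please"), 0))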


The key is the cooperation between a bot and a human

We combine the work of bots and humans in several ways. The first and most obvious connection is switching the call to a consultant, which can happen in several cases. The first case is simply designing the process so that at some point the call is routed to a human as standard.

The second case is connecting with a consultant when a human has difficulty getting along with the bot, for example when they are calling about a matter the bot does not support. This usually happens after an announcement such as "sorry, I didn't understand your answer; I will connect you to a consultant who will deal with your case". The third case is switching to a consultant at the explicit request of the caller. In the fourth case, the switch results from the assumptions of the business process, for example when the caller gives us a registration number that is not in our database. Generally speaking, when switching to a consultant it is extremely important to pass on the so-called "context" of the conversation, so that the consultant has a complete set of data and a summary of the conversation with the bot. In this way we avoid situations in which the transferred caller has to repeat everything from the beginning or go through authentication again, which is usually annoying for customers. Finally, the last example of combining human and bot work is constantly supervising the bot and retraining it wherever we find it is not performing well. This, too, has to be done by a human with the right skills and experience.
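The idea of passing the conversation "context" along with the transfer can be sketched as follows. The field names and the transfer function are hypothetical, chosen only to show the kind of data a consultant would need so that the caller does not have to start over:

    # Hypothetical handover payload; real deployments would push this to the
    # agent desktop or CRM before bridging the call to a consultant.
    from dataclasses import dataclass, field

    @dataclass
    class HandoverContext:
        caller_id: str
        authenticated: bool               # so the caller is not re-authenticated
        intent: str                       # e.g. "complaint", "make_appointment"
        collected_data: dict = field(default_factory=dict)
        transcript_summary: str = ""      # short summary of the bot conversation
        reason: str = "caller_request"    # why the bot is transferring the call

    def transfer_to_consultant(context: HandoverContext) -> None:
        """Placeholder for the actual transfer step."""
        print(f"Transferring call from {context.caller_id}: {context.reason}")
        print(f"Summary for the consultant: {context.transcript_summary}")

    transfer_to_consultant(HandoverContext(
        caller_id="+48 500 000 000",
        authenticated=True,
        intent="complaint",
        collected_data={"order_number": "A-1023"},
        transcript_summary="Caller reports a damaged parcel and wants a refund.",
        reason="case_not_supported_by_bot",
    ))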


Piotr Kempa has been developing voicebot technology since 2013. He has worked on research projects related to artificial intelligence, funded by the National Centre for Research and Development. Piotr is the creator of Primebot, an intelligent conversation bot prepared for non-linear conversations, i.e. ones that go beyond the script. Since 2019 he has been working on a permanent basis with Voice Contact Center from the OEX group.