Chatbots have become an increasingly accepted way to help visitors find what they are looking for. Satisfaction is an essential factor in keeping people coming back to a chatbot. According to a survey by the Dutch consumer association (n=10,000), 78% of people did not get a satisfying answer when talking to a chatbot. So what determines satisfaction with a bot? One very important determinant is how the bot handles questions it can’t answer.

“I don’t understand, please rephrase your question.” 

One major frustration among chatbot users is a (too) hard fallback. Because of that, chatbot designers try to soften these fallbacks as much as possible, for example, by directing the user to a page that might answer their question.

But this type of fallback is still quite ‘hard’, since people must search for the answer themselves. And that is exactly what they started the conversation with the bot to avoid!

We can’t avoid this, can we? Manually training the bot takes a long time to do well, and we only want to give perfect answers.

Semantic fallbacks

Yes, in a perfect world every bot answer would be perfectly conversational and in line with the user’s context. But just blurting out a generic ‘I dunno’ as soon as someone leaves the happy path? That can be done differently.

Why don’t we add a third possibility to the mix? So the bot is either:
a) Very sure about the answer, and gives the pre-defined answer;
b) Completely unclear on what the user asked, and asks them to rephrase the question; or
c) Moderately sure, and searches company data sources for an answer to return.
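To make that three-way split concrete, here is a minimal sketch of the routing logic in Python. The confidence thresholds, the intent classifier, and the helper functions are illustrative assumptions, not part of any particular chatbot platform.

```python
# Minimal sketch of the three-way routing described above.
# Thresholds and helper functions are assumptions for illustration only.

HIGH_CONFIDENCE = 0.85  # assumed: confident enough for the pre-defined answer
LOW_CONFIDENCE = 0.40   # assumed: below this, ask the user to rephrase

FALLBACK = "I don't understand, please rephrase your question."


def respond(user_message, classify_intent, predefined_answers, semantic_search):
    """Route a message to one of the three response types by intent confidence."""
    intent, confidence = classify_intent(user_message)

    if confidence >= HIGH_CONFIDENCE:
        # a) Very sure: return the curated, conversational answer.
        return predefined_answers[intent]

    if confidence < LOW_CONFIDENCE:
        # b) Completely unclear: hard fallback.
        return FALLBACK

    # c) Moderately sure: try to find an answer in company data sources.
    found = semantic_search(user_message)
    return found if found is not None else FALLBACK
```

The interesting part is option c): instead of treating everything below the ‘very sure’ threshold as a failure, the bot gets a middle band in which it is allowed to go and look for an answer.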

The concept

Let’s clarify this concept using the diagram below. In this example, a chatbot user wants to return a purchased product. In the ideal situation, this topic is covered by the bot’s content, and the bot returns a perfect (conversational) answer.

However, if the topic is not covered by the bot, the user is not immediately confronted with the last resort: a hard fallback response. Instead, semantic search queries a database consisting of all website data and other relevant sources.

In this case, it found a suitable piece of information somewhere on the organization’s website and can extract exactly the sentence that answers the user’s question. Of course, this is not as polished and conversational as the pre-defined answer, but it’s much better than the traditional default fallback response.
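To give an idea of how such a semantic fallback could work under the hood, here is a small sketch using sentence embeddings and cosine similarity. The model name, the example corpus, and the similarity threshold are assumptions for illustration; the actual plug-in may be implemented differently.

```python
# Sketch of a semantic fallback: embed sentences from the organisation's
# website, embed the question, and return the best match only when the
# similarity is "moderately sure". All names and thresholds are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# In practice, this corpus would be built from website pages, manuals,
# knowledge-base articles and other relevant company sources.
corpus = [
    "You can return a purchased product within 30 days using the return form.",
    "Our customer service is available on weekdays from 9:00 to 17:00.",
    "Shipping is free for orders above 50 euros.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

MODERATELY_SURE = 0.5  # assumed similarity threshold


def semantic_search(question):
    """Return the best-matching sentence, or None if nothing is close enough."""
    question_embedding = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(question_embedding, corpus_embeddings)[0]
    best_idx = int(scores.argmax())
    best_score = float(scores[best_idx])
    return corpus[best_idx] if best_score >= MODERATELY_SURE else None


# Expected to return the return-policy sentence, assuming the similarity
# clears the threshold for this question.
print(semantic_search("How do I return my order?"))
```

In this sketch, the ‘answer’ is simply the best-matching sentence from the source data, just like in the example above: not as conversational as a hand-written reply, but far more helpful than a hard fallback.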

For which situations?

The smarter fallback plug-in for chatbots is most suited when:

- You are starting to design a chatbot and have not yet trained it on a lot of data.

- There are many data sources within the organisation that the bot cannot answer questions about.

- Lots of content is being created that the bot should be able to answer questions about right away.

So, smarter fallbacks might be a good idea if you are working on a chatbot with a (rapidly developing) knowledge base or manual as a data source. If you are creating a chatbot for a small legal firm that only has its website as a data source, smarter fallbacks are not as necessary.

Want to read more?

We know the struggles of chatbot design. In a perfect situation, the bot would have all the answers, but let’s face it: it doesn’t. This solution might make your customers more satisfied with your bot, and it’s available out of the box. Want to hear more? Read more about it here.