

In this blog series, our developers and conversation designers will share what they learned from building our first bot that incorporates ChatGPT.
Along the way, we will explain the relevant concepts, such as prompt engineering.
Let's start at the beginning: why do we want to incorporate ChatGPT into chatbots?
If you work in the conversational AI field and have to explain what you do to someone who isn't a techie, you probably use the word chatbot in that conversation. What usually happens is the following:
Everybody will get out their torches and pitchforks, and many anecdotes will be hurled in your general direction. This chatbot kept going on and on about the same thing. That chatbot only had broad answers I could have found on the website anyway. I even heard about a chatbot that put pineapple on pizza!
Chatbots are not very well liked by humans. One of the main reasons is that they mimic a conversation with a human while only being able to answer a few very specific questions. People feel betrayed by them.
Chatbots can only answer a few specific questions because every answer has to be defined by a human: if the user says this, then tell them that. Very controlled, very boring, and very frustrating.
This is where ChatGPT comes in. Gone are the days in which chatbots were mere ‘if this then that’ machines. The challenge for conversational AI teams will shift from trying to add the relevant question-answer combinations to telling the AI in general terms what can and can’t be said.
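The shift described above can be sketched in a few lines of code. This is a minimal illustration, not our actual implementation: the keywords, answers, company name, and prompt text are all hypothetical.

```python
# The classic 'if this then that' chatbot: every answer must be
# written out by a human in advance (illustrative values).
RULES = {
    "opening hours": "We are open Monday to Friday, 9:00-17:00.",
    "refund": "You can request a refund via your account page.",
}

def rule_based_reply(user_message: str) -> str:
    """Return a canned answer if a known keyword appears, else a fallback."""
    for keyword, answer in RULES.items():
        if keyword in user_message.lower():
            return answer
    return "Sorry, I don't understand. Could you rephrase that?"

# With a large language model, the team instead writes a prompt that
# states in general terms what the bot may and may not say
# (hypothetical prompt text for a hypothetical company):
SYSTEM_PROMPT = (
    "You are a support assistant for ExampleCo. "
    "Answer only questions about opening hours and refunds. "
    "If you are unsure, ask the user to contact support."
)
```

Note the difference in effort: the rule-based bot needs a new entry for every question it should handle, while the prompt-based bot needs guardrails describing what is in and out of scope.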
Sounds fairly easy, but is it?
That’s what we are going to find out in this blog series. Stay tuned for part 2.