Webinar: 6 Steps to Conversation-Driven Development

The challenge with building great AI assistants is that it’s impossible to anticipate all the things your users could say. But the opportunity is that in every conversation, users are telling you—in their own words—exactly what they want.

Agile, DevOps, and user testing are established methods for building reliable applications and a better user experience, but conversational software presents unique opportunities and challenges. Conversation-driven development, or CDD, is a blueprint for applying these practices to building AI assistants.

In this recorded 45-minute webinar led by Rasa co-founder and CTO Alan Nichol, you'll learn the 6 steps that make up CDD. Alan will discuss how product teams can use CDD to learn what users want and turn insights into meaningful improvements. CDD can be used by product managers, designers, and developers: anyone interested in building AI assistants with lower abandonment rates, higher resolution rates, and continuous improvement over time.

Originally aired: July 23, 2020

Transcript:

Karen:
All right. Welcome everyone. Thank you for joining us on the call today. I am Karen, with the marketing team at Rasa, and I'm joined by Alan Nichol, our co-founder and CTO. So we'll be spending the next 45 minutes talking about conversation-driven development and the six steps that make up CDD and Alan's going to tell you a lot more about that in a moment.

Karen:
We've got a lot of exciting content to cover, but before we begin, I wanted to quickly go over the agenda and how to participate during today's webinar.

Karen:
We'll start with a little bit of background about Rasa, and then we'll dive into conversation-driven development. This isn't just theoretical, though: we'll talk about how you can put CDD into practice with your AI assistant today. Then we'll save some time at the end for Q&A.

Karen:
A few housekeeping notes. You'll notice that you're muted. Please go ahead and keep your mic muted during the webinar, but do ask questions in the Zoom chat. We'll be gathering questions for the Q&A portion throughout the webinar, so please go ahead and ask your question at any time. We'll add it to the list and then we'll ask your question on air at the end. Last but not least, we are recording the webinar and will upload it to YouTube in a few days, as soon as we get it edited. We'll email you a link to that recording after the webinar, along with a short survey.

Karen:
With that, Alan, I will hand things over to you.

Alan:
Thank you very much, Karen. Great. So thanks everyone for joining. I wanted to start with a quiz so we can gauge where everyone is. So if you type this URL into your browser, it will take you 20 seconds to do the quiz. If you're feeling brave, please post your score into the chat. I'd love to see how people are doing, but the idea behind this is just to get a feeling for ... to what extent you're all already practicing CDD and if any of the questions don't make sense to you, don't worry too much about it. Hopefully it's all clear once we get to the end. So I'll just give everyone another 20 seconds or so to do the quiz, and if you choose to, post your score in the chat.

Alan:
All right. Well, it doesn't look like anyone's brave enough to share their score, but hopefully you had enough time to complete the quiz. Oh, someone came in with 130 points. Incredible. We've got a 95. Very respectable. Excellent. I'll just give everyone a few more seconds then, since we're getting different people finishing the quiz now.

Alan:
This is really fun to watch. Some familiar names coming here in the group chat. Welcome, everyone. Thanks for showing up. So I'm sure a few more scores will trickle in, but I'll just get started then.

Alan:
So, thanks for playing along. That was fun. Great place to start. Looks like lots of people are already embracing part of CDD, but there's also room to improve, so that's a great place to be for this webinar.

Alan:
About Rasa, I'll cover this very quickly. Probably a lot of you are already familiar with us. We build the standard infrastructure for conversational AI, and we do that by doing three things. Of course, if you want to be the standard, you'd better be Open Source, and the Rasa framework is Open Source. We invest very heavily in our community and make sure that people have the educational material and the tools they need to build great conversational AI. And we have a big research team working on a lot of applied research, because this is very difficult and there are a lot of unsolved problems in this space.

Alan:
We have a huge community, and it's really fantastic. In almost every single country in the world, we have developers building things with Rasa. We have a huge number of users on our forum, and we have large companies in every kind of industry using Rasa. That's fantastic, and thank you to everyone who has contributed to the code base or created content or anything like that.

Alan:
The way we think about making progress on conversational AI is through this idea of five levels, which we first introduced in 2018. Just recently, last month, we published an update at the L3-AI conference, where we presented the 2020 edition of the five levels of conversational AI. The key part of getting towards level five is being more and more accommodating about the way that users think about problems and lowering the burden on them to translate what they want into your language. If you're interested in that, there's a video on the Rasa YouTube channel and a blog post on the Rasa blog. The key connection between those two pieces is that conversation-driven development, or CDD, is our thesis on how to get towards level five.

Alan:
That's the point here, how these two things connect. The key idea to keep in mind about CDD is that if you have an assistant out there in production, you're already gathering all the information and all the data that you need to climb through levels three, four, and five. The key point is to listen, right? So CDD, or conversation-driven development, is the process of listening to your users and using those insights to drive the development of your AI assistant. There's really just one idea, and CDD is a framework for making use of that idea.

Alan:
Okay. One step back: why do we need something like this? Those of you here, especially those of you with high scores on the quiz, have clearly been building conversational AI for a while, and if you've been doing that, you know that it's very hard. You also know that building the first prototype is not the hard part. That part's easy. The hard part is going from the initial prototype to something that's really great, that you're ready to ship and show to people. All the hard parts show up when you're trying to bridge that gap, so I want CDD to serve two purposes. One is to help all of us build better conversational AI, but even more importantly, to save people who are new to the field from having to learn all of that themselves. We want to give new people a shortcut.

Alan:
So CDD is made up of six actions, and they're called share, review, annotate, test, track, and fix. What we'll do in this webinar is I'll introduce what each of them is and the motivation and lessons behind them, and then show, practically, what they look like in a real development workflow.

Alan:
Okay, the first one is share, and this one is super, super important. It's probably the most important one. It doesn't matter what you're building; users will always surprise you with what they say. You cannot anticipate what users will say to your chatbot, so you should get some test users to try your prototype as early as possible. Way too many teams go off into the mountains for months at a time, try to polish something, and then give it to users, and it breaks. Of course it breaks. The only way to build good conversational AI is to build bad conversational AI, give it to people to break, and improve from there. There are no shortcuts and there are no exceptions.

Alan:
The second one is to review. Spend time, at every stage of the project, reading the conversations that people have with your assistant; it doesn't matter if you've been in production for a year or if you're just getting started. Spend time looking at them. The human brain is still the best pattern matcher, and you can empathize; you can understand what this user is looking for, what they want. Again, too many teams get caught up right away in trying to look at metrics: how many times is this intent being predicted? That's all good and useful, but it's not everything, and you're missing a huge amount of insight if you're not using your brain and looking at some of these conversations.

Alan:
The next one is to annotate. Your NLU model is going to make predictions on the real user messages that come in, so that's the data it should be trained on as well. A really common banana peel is that folks think they can use a script or some kind of package to generate a bunch of synthetic training data and that that's going to work well in production. It doesn't. The model needs to be trained on the same kind of data that it's going to be asked to make predictions on, so use those messages as they come in, as people talk to your assistant. Annotate them and turn them into new training data.

Alan:
The fourth one is test, also something I care deeply about. Just because we're building an AI assistant with a machine learning component doesn't mean that we're off the hook for good software engineering practices. Professional teams don't ship software without tests. The way we like to do it is to have end-to-end tests: message in, message out, check that the whole conversation still works. Whatever works for you, make sure that you have them. If you're working with a conversational AI platform that doesn't support creating and running tests, then what you have is a prototyping tool. It's not fit for production if you can't run tests.

Alan:
Track is also a really key piece. Make sure that you have some way of measuring which conversations are going well and which aren't, which users you're helping and which you're not. Quite often that's information you need from the outside world, from outside of the conversation. Did this user end up converting, or even, did they not do something? Did this user talk to the customer service bot and then not get back in touch with customer support? It's not perfect, but it's a decent proxy measure for saying, "We probably helped this person," and if the number of people getting back in touch is going down, then that's a good thing. So find some kind of proxy, even an imperfect one, to measure which conversations are going well and which aren't, because you want to make sure that all the other effort you're putting into improving your assistant correlates with your end users actually having a better, more successful experience.

Alan:
And then the final one is fix. Embrace it. It doesn't matter what you do; you will always encounter situations where your AI assistant fails. So study those conversations: look at the ones that went smoothly, look at the ones that failed. They're usually unexpected situations. If a conversation went perfectly, fantastic. Turn it into a test and add it to your test cases. If you find an issue, maybe you need to annotate some more data, or maybe there's an issue with your code or your API calls or something like that, but go ahead and fix it, and use the regular software development process. Have an issue tracker and have a way of prioritizing the issues that come up.

Alan:
Cool. So that's the quick theoretical introduction to CDD and its six actions. I have two comments about CDD in practice, and then we'll switch to the demo.

Alan:
So the first comment about CDD is that it's not a linear process. It's not something that you just kind of follow a set of six steps and then you're done. You are going to find yourself jumping between each of these things and that's great. That's fantastic. That's the way it should be.

Alan:
The second is that it takes a mix of skills to practice CDD. Some of these actions require data science and machine learning skills. Some require software engineering skills, if you go to debug those API calls. Some require a deep understanding of the domain, of the user, of the use case, and of what their expectations are. Rarely does one person have all of those skills, so it's usually a team effort. So think about the team that you have and the team that you need to practice CDD.

Alan:
Okay. So we're going to go into the demo. I'll show you some things that we've implemented to help people practice CDD, using Rasa X, but I think it's worth mentioning that CDD is completely independent of what tool you're using. It's an approach, a philosophy, and it's not tied to a particular tool. We just built some things to help you do it.

Alan:
So just to clarify, for those who are maybe new to Rasa or who use Rasa Open Source but not Rasa X: Rasa Open Source and Rasa X work together. It's not an either/or situation. They're two separate products. Rasa Open Source is a framework which you use to build AI assistants, to train your models, et cetera, and Rasa X is a tool to help you practice conversation-driven development: to look at what people are saying, listen to your users, and use those conversations to improve your bot.

Alan:
Cool. Okay. So we're going to go into the demo, and I'm still sharing my screen. This part is quite techy, so if you're into demos and coding and all of that, you'll get a kick out of it. The first thing we'll do is run Rasa X in local mode, which is really about getting those first CDD steps in as early as possible in the development process of your assistant, and then from there we'll go on to server mode.

Alan:
So if I'm in local mode, what I need to get started with CDD is a first version of my assistant. I need something to get started with. It doesn't need to be sophisticated, but I need to have something, so I've decided to start with one of our starter packs. If you go to the Rasa docs, we have a couple of starter packs that you can check out. There's a financial services one, which is a banking use case. You can ask questions about ... you can make transactions and check your balance and those kinds of things. Then there's the help desk one, which is like an IT help desk setup. It has a ServiceNow integration. Neither of these is intended to be a fully-fledged bot or assistant that you would put in production, but they're starting points; you can fork from there and go and develop your own thing. It's just enough that you can get started.

Alan:
So this Help Desk Assistant is on GitHub; you can go check it out. All the code is obviously Open Source, and I've just cloned the repository here. It's a typical Rasa project. You can also just type rasa init and use the moodbot example that you start off with when you're a new Rasa user, but here we've got this Help Desk Assistant running for us. I've installed Rasa X. You can install it via pip; the instructions are in the docs. To run it, you just move into the directory where your assistant lives and type rasa x, and we'll give this a second to start up.

Alan:
Cool. So the key thing you want to be able to do in local mode with Rasa X is get those first test conversations in with your users. Of course you can go in here and talk to the bot yourself. You can see which intents are being predicted, et cetera, and try it out in a nice UI, but the really key piece is being able to share it with different folks, right? Even before you have a Slack integration set up or a Twilio number set up or a Facebook Messenger webhook integrated or anything like that, you don't have to have anything. You just have the first basic assistant and you can share it with people. So you can go here and generate a link, which you can then send to people so that they can try it out.

Alan:
I'll open this up in an incognito window. This is what the user experience looks like. Here, I don't get all the complexity of the things we saw on the other screen. This is just the place where I can try out this assistant and see what it can do for me, what's happening here. There's one little caveat, which is that this link points to localhost, which will only work on my machine, of course. So we want to use something like ngrok, if you're familiar with it ... oh, here we go. I have the command ready: ngrok. This just creates a public URL which forwards to my localhost, port 5002, and then I can just replace the host here in the URL. Then this is a link that I can send to anyone in the world, and they can go and have this conversation.

Alan:
Now, all these conversations will show up in Rasa X here, and it doesn't matter if I already have an integration with Facebook or something like that, or if I'm just talking to the bot myself. I see everything show up, and I can start that process of review. So we've done the share step, and we can start to do the review step and the annotation step. We can start to practice CDD, which I'll show you more of in just a second. This is really just the basics, in these very early versions of your bot.

Alan:
So I'm using this in local mode. This is obviously not going to be my production deployment, but I can already start to explore some of this functionality. I can look through all the conversations and I can annotate all the messages. We have this NLU Inbox, which contains all the messages that aren't already part of the training data, where Rasa NLU had to make a prediction, and I can either confirm these, delete them, or change the intent, mark them up, and add them to my training data. So I can already start to practice CDD locally, even before I have a production assistant.

Alan:
Those are the first few steps, share, review, and annotate, which you can already start to practice locally before you have a server set up. Then the real fun obviously comes in when we have a Rasa X server, so that's the next piece we're going to look at.

Alan:
So this Rasa X server that I'm showing you here is connected to the bot which lives in our documentation, which is called Sara. The code for that is also Open Source; it's the rasa-demo repository in our GitHub org. So if you go into our documentation and talk to the bot there, you'll get to talk to this one, and you can check out all the code and how it all works, and we can review those conversations in here and see how people are doing.

Alan:
The home page of the Rasa X product is the conversations screen, and that's to encourage you to review conversations. Here we have conversations coming in from different channels. Socket.IO is the one that connects to our website; that's the little chat widget. Of course, if I have test users, they'll show up here, and if I have Facebook connected, those conversations will show up here too. Now, I have a much larger number of conversations, because lots of people talk to this assistant all the time, so the question becomes, "How do I review conversations in a way that actually scales?" So we give you a kind of Swiss Army knife of filters that you can use to dive down into the conversations that are really relevant and interesting to you.

Alan:
Let's have a look. Quite a few people will just open up the chat, say hi, and then kind of abandon it, so one of the things that I like to filter for is a certain number of user messages, maybe five or something like that, and ... wait a second. That should take a big chunk out of the list. Yeah. Or I can look for particular intents being expressed, because I want to know if those conversations are going well or not. Actions, specific channels, entities, slots, the confidence of the NLU predictions, and the confidence of the Rasa Core predictions as well, which can be very helpful for spotting errors. I can look for the fallback action. All sorts of things. I can slice and dice all these conversations and start to look at the relevant ones, rather than every single one.

Alan:
So that's the review step, and we have some quick actions here: you can mark conversations as reviewed, you can save them for later, you can filter by that status as well, and you can add tags if you want to keep track of things. So that's the key part. We have share, review, and annotate; you've seen those parts, and now we have test, track, and fix.

Alan:
For test, we have our tests set up on a CI/CD server; we're using GitHub Actions for the Rasa demo. So if I look at the pull requests, at some that were recently merged ... if I go and check, for example, this one, and I look at all the checks that were run ... not that one. This one ... yes, we do a whole bunch of things. We run the rasa data validate command to make sure everything works as expected, so we don't have any clashes in our training data. We train a model and run through a bunch of test stories. We do a cross-validation of the NLU model to check how the model's performing, and then we post those results in a comment on the pull request. And then we have a continuous deployment setup where, if the model's good, we push it out to the server, and it's updated in real time.
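
For illustration, the CI checks described here boil down to a handful of Rasa CLI commands. Below is a minimal sketch of that pipeline, driven from Python for convenience; the tests path and project layout are assumptions, so adjust them to your own setup:

```python
import subprocess

# A sketch of the CI steps described above. In the real setup these run as
# separate GitHub Actions steps; a plain script makes the sequence explicit.
steps = [
    ["rasa", "data", "validate"],                    # catch clashes in the training data
    ["rasa", "train"],                               # train a model from the current data
    ["rasa", "test", "core", "--stories", "tests"],  # run the end-to-end test stories (path assumed)
    ["rasa", "test", "nlu", "--cross-validation"],   # cross-validate the NLU model
]

for step in steps:
    # check=True raises on a non-zero exit code, which fails the CI job
    subprocess.run(step, check=True)
```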

Alan:
So if I go back and look at that pull request, firstly I can see that all my tests have passed, so I get a lot of green checks. And I actually get a comment on the pull request which tells me how my different intents are performing. What's the overall F1 score for all my intents? Which ones are doing well, which ones are not, and which are they most likely to be confused with? And the same for the entities: what's the precision and recall for all of the different entities that I have? So I can check that and, like any other piece of software, check that when I'm annotating things and improving things, they're not getting worse. So if I'm working through my NLU inbox here and annotating some new examples ("I want to use Rasa on my website." Okay, FAQ. Wonderful.), I'm told here that I have changes. I can push them to Git, they show up in a new pull request, and I can check that my changes actually improved the system before I merge, right?

Alan:
So that's the testing step: testing, CI/CD, the same as any other piece of software development. Then tracking and fixing. For track, if we go back to the conversations view, you'll see some of these conversations have tags. These tags can be added manually here, but you can also use the API to add them. When we talk about bringing external information in, that's a really key piece. If I know that my user has given positive feedback, or they converted, they upgraded to premium, or they haven't gotten back in touch, something has happened in the outside world that I want to bring back into Rasa X, I can make an API call that tags that conversation, and then I can filter by those tags. So I can track how many people are in fact converting or having a successful interaction over time.
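
For a sense of what that outside-world API call might look like, here is a minimal sketch using Python's requests library. The endpoint path, payload shape, server URL, and token are all assumptions, so check the Rasa X HTTP API docs for your version:

```python
import requests

# Hypothetical Rasa X server and API token; replace with your own.
RASA_X_URL = "https://rasa-x.example.com"
API_TOKEN = "my-api-token"

def tag_conversation(conversation_id: str, tag: str) -> None:
    """Attach a tag (e.g. "converted") to a conversation from outside Rasa X.

    The endpoint path and payload shape are assumptions; consult the
    Rasa X HTTP API documentation for the exact contract.
    """
    response = requests.post(
        f"{RASA_X_URL}/api/conversations/{conversation_id}/tags",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=[{"value": tag}],
    )
    response.raise_for_status()

# e.g. called from your billing system when a user upgrades to premium
tag_conversation("8a2b9c1d", "converted")
```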

Alan:
Then the final step is fix. Very simple and straightforward. A nice little feature is that each conversation has a unique URL, which means that if you create a GitHub issue, you can paste that URL in there, and people can go and see exactly the conversation you were looking at. In fact, you can select a specific message (each message actually has its own URL), so you can point people to exactly the thing you were looking at and say, "Hey, what was going on here?", and you can use your typical issue-tracking tool to go and smash out all of the bugs.

Alan:
Cool. So that's obviously once you have your assistant ready to give to users. You deploy it to a server; you can do that with the one-line deploy script, which you can find in the Rasa X docs. It's really easy to set up, and then you have a full production-grade server which you can hook up to as many channels as you want: your website, Facebook, Slack. Then you can go and review all those conversations and practice CDD.

Alan:
So yeah. That's it for the demo, so I think we're going to go on to Q&A. Karen?

Karen:
Awesome. Thank you, Alan, and thanks to everyone who submitted questions. Let's go through those. Our first one is from Yassine, who asked, "Is reviewing conversations just available in Rasa X, or also in Rasa Open Source?" Maybe there's something you could clarify there.

Alan:
Mm-hmm (affirmative). Yeah. Great. I mean, of course, if you like, you can look at your tracker store if you're just using Rasa Open Source and try to review conversations in there, but Rasa X gives you a nice UI. You can see all the conversations in there, and it's obviously much more pleasant than trying to read through logs. It's not theoretically impossible without Rasa X, but it's much nicer with it. It's free, so you might as well.

Karen:
Nice. The next question is from Andy, who noticed that in the demo you filtered for a minimum of five user messages. The question is: wouldn't filtering for below five actually tell you more about why users abandon ship, because they had a frustrating experience or something was unhelpful?

Alan:
That's a very good point. It's a cool feature request. I'll add it to the back log for sure.

Karen:
Nice. Then our next question is from Will. Are there any plans to add a filter for the user ID?

Alan:
Oh, well, you can filter by user ID by just attaching it to the URL. I'm still sharing my screen, right? So here in the URL, this is just the user ID, so if you type it in, you should find the one that you're looking for. I presume that solves the use case you're describing.

Karen:
Nice. Then our next question is from Joan. So is semantic clustering used in any of the filters, or is it most often by confidence scores and intents?

Alan:
That's a great question. So far, in the latest version of Rasa X that's out, which is 0.31, we have filters for confidence, specific intents, actions, length, all of that. A really exciting new filter that's coming in the next release is being able to filter for conversations which have a novel conversation turn that you haven't seen before, something that isn't in your training data: a user did something you hadn't expected, or said something in a place you hadn't seen them say it before. You can specifically filter for conversations that contain novel turns, which is obviously a very powerful filter for understanding where you don't have coverage in your training data.

Karen:
Very cool. Then our next question is from Vaibhav, and the question is, "Can you search on the tags later, once you create them?"

Alan:
You can filter by the tags, for sure. Let's have a look at the tag filtering experience, if we go to tags here. Yeah. You can search through, if you have a big list of tags like we have here, and then, yes, of course, you can very much filter conversations by which tags are there. In this case, if you've ever talked to Sara: if you ask a technical question and the bot doesn't know the answer, it will try searching the documentation for you and then ask if you found an answer there. If you didn't, the conversation gets tagged with this "doc search unhelpful" tag, and you can filter by that to see where users did not get a good answer to their question.

Karen:
Perfect. The next question is from Vaishnavi, and this might be maybe tied back to the user ID, but they ask, "How do we know which user is having the conversation?"

Alan:
Well, you don't. From Rasa's perspective, Rasa gets a conversation ID, which, in this case, on the website, is randomly generated for each user. We don't know who that user is. If you're on Facebook or Slack, then that user ID will be something you can tie back to a specific user, and you can certainly do that, but Rasa doesn't do it for you, so you'll have to fill that in somehow.
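
One way to make that link yourself is to use your application's own user ID as the sender when you pass messages to Rasa, for example through the REST channel. A minimal sketch, assuming the REST channel is enabled in your credentials and Rasa is served on the default port:

```python
import requests

# Rasa Open Source's REST input channel; localhost:5005 is the default.
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

response = requests.post(
    RASA_URL,
    json={
        "sender": "user-42",             # your application's own user ID
        "message": "Where is my order?",
    },
)

# The bot's replies come back as a list of messages for that sender.
for bot_message in response.json():
    print(bot_message.get("text"))
```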

Karen:
Awesome. Then Cooper asks, "Is the team working on additional starter packs for different verticals?"

Alan:
Definitely. We definitely want to do way more starter packs, and if you have suggestions for ones you'd like to see, please create an issue, just in the Rasa repo, with ideas for new starter packs. I would love to hear them.

Karen:
Great. Another question from Vaishnavi. How many users can Rasa handle at once?

Alan:
Mm-hmm (affirmative). Well, like any of these questions, the answer is, of course, "It depends." When you deploy Rasa X with the one-line deploy script, you deploy it on a tiny Kubernetes cluster, but you can also use the one-line deploy script with a full Kubernetes-as-a-service offering on Google Cloud or AWS or something like that. You can scale up the number of Rasa servers that are actually handling users, so the real question is how many users you can handle simultaneously. With, let's say, two to three Rasa Open Source containers on reasonable hardware, like a CPU or two each, you can already handle a lot of traffic. Of course, it depends a little bit on the NLU pipeline components that you're using.

Alan:
Oftentimes, the big bottleneck is custom actions that are very slow; waiting for those to return can be a bit of a hassle. There's no obvious straight answer, but you have to run at a really big scale before two or three Rasa Open Source containers can no longer handle your load.

Karen:
Awesome. Okay. Our next question is from Jonathan. Test results seem to be at a summary level; for example, an intent might have 90% accuracy. Is there a way to identify specific messages that you want to track or that need to pass? It isn't necessarily meaningful that an intent has 90% accuracy if the 10% that fail are the most common messages.

Alan:
100% agree. That's, of course, the beauty of Open Source. The way we've set up the CI, the GitHub Action for the Sara demo bot, we just check and print the summary statistics. But what the GitHub Action actually does is run the rasa test command, which does all of this: it prints the output to the console and writes results files, and specifically it writes an output file with all the mistakes the model made during testing. So you can add literally one more line in your CI, doing a regex search in that file, checking that there's nothing in there that you really, really need to pass. That's definitely something you can customize, if that's what's important to you, which makes perfect sense. If the bot gets 99% accuracy but doesn't understand "hello," then that's not great.
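
As a concrete sketch of that "one more line" idea: a small script that scans the failure report for messages that absolutely must pass and fails the build if any appear. The results path and the patterns are assumptions; check where your Rasa version writes its test output:

```python
import pathlib
import re
import sys

# Messages the assistant absolutely must handle; these patterns are examples.
CRITICAL_PATTERNS = [r"\bhello\b", r"reset my password"]

# `rasa test` writes a report of failed test stories; the path is assumed here.
failures = pathlib.Path("results/failed_stories.md").read_text()

for pattern in CRITICAL_PATTERNS:
    if re.search(pattern, failures, re.IGNORECASE):
        # a non-zero exit code fails the CI job
        sys.exit(f"A must-pass message matching '{pattern}' failed testing.")
```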

Karen:
Awesome. That's a really good segue into our next question, which is from Sanjay. Is there an option, or maybe a plan for the future, where on a GitHub merge, below a certain intent accuracy, let's say 20%, the PR check fails automatically?

Alan:
Oh, yeah. Absolutely. Let's see. One of the tests run in CI runs through the test stories, and there the rasa test command has a --fail-on-prediction-errors flag. If you add that flag, then if any of the test stories fail, the CI check fails. We don't yet have a feature for failing based on some threshold of how much you want to pass, but that's a really cool idea for a feature request. You could either hack that into the script yourself, or add it as a post-processing step: check that score, write it to a file, check that it's higher than a certain number, and if not, fail the build. It's a cool suggestion.
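
That post-processing step could look something like the sketch below: read the intent report that rasa test nlu writes (it follows scikit-learn's classification-report format) and fail the build below a threshold. The file path and the 0.8 threshold are assumptions:

```python
import json
import sys

THRESHOLD = 0.8  # assumed minimum acceptable overall F1

# intent_report.json is the per-intent report written by `rasa test nlu`;
# confirm the exact output path for your Rasa version.
with open("results/intent_report.json") as f:
    report = json.load(f)

f1 = report["weighted avg"]["f1-score"]
if f1 < THRESHOLD:
    # exiting with a message sets a non-zero exit code and fails the build
    sys.exit(f"Overall intent F1 {f1:.2f} is below the {THRESHOLD} threshold.")
print(f"Overall intent F1 {f1:.2f} meets the threshold.")
```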

Karen:
Nice. We have another question from Yassine: how would you recommend deleting the history of a chatbot? In this case, Yassine has a course deployed on Udemy with more than ten thousand students, which is very cool. The students who are using the chatbot sometimes ask why the session is stuck. He's already set session persistence to false on Socket.IO, but is there maybe something else you would recommend checking?

Alan:
So my guess is that the chatbot is sitting on a website, using Socket.IO, and persisting that user ID across sessions by saving it in a cookie, so when the user comes back, it's still the same conversation, even if the session expired. The easiest thing to do is to just generate a new user ID every time the user comes back. I'm sure that in the web chat component you can turn off the use of that cookie, so that every time the user comes back, they generate a fresh ID and have a brand-new conversation. That's probably the easiest way to do it.

Karen:
Okay.

Yassine:
Sorry, can I ask for clarification on this one? I'm Yassine; I'm the one who asked this question. As you say, I can't recreate the user's current state. I set session persistence to false for Socket.IO, and the problem is, for example, if I send a message from one browser, the other browser gets the same message, so the same problem persists: I have the same message. This is what I didn't understand. I tried posting this in the forum, but I haven't gotten an answer yet. Sorry for the interruption.

Alan:
Okay, I'll take this answer, but please, everyone, keep yourselves muted. I'm not sure I fully understood what's going wrong. It sounds like you have a user in two different browsers who's getting the same answer. I would suggest you tag me in your question in the forum, and I'll gladly take a look.

Yassine:
Okay. Thank you so much.

Karen:
Yeah, sounds like that one might need a little more time offline. So I think we've got time for maybe just one or two more. Our next question's from Reuben. How does the recently released GPT-3 affect Rasa's development?

Alan:
Yeah, great question. We've been playing around with it. It's very fun, lots of interesting things, and I think we still don't know all the things it can do. Personally, I certainly wouldn't just take GPT-3-generated output and send it to my users. If you go on the Rasa blog, you can check out a recent post by Vincent, a research advocate here, who's been playing with it and exploring interesting things you can do with GPT-3 as it relates to conversational AI. I would say these are first impressions so far, but it's an exciting development, for sure.

Karen:
I think this will probably be our last question, because Alan, I know you had some other resources that you wanted to share at the end, and I think this is a good one to end on. This is from Vlad, and it's a little subjective: in your personal opinion, what's still missing in conversational AI technology? So not specifically about Rasa, but across the broader industry, all of the companies and researchers working on this problem.

Alan:
Yeah. I mean, obviously this is something I think about all the time. The biggest one, the drum I've been beating for the last six months or so, is that we have to break free from this idea of intents, of having an intent for every single message. If you Google "we have to get rid of intents," there's a blog post that I published at the end of last year, and I think that's one of the really big bottlenecks. As long as we limit ourselves to saying that every single user message has to fit into one of these categories, we're never going to get to true level three conversational AI or beyond. So I think that's one of the biggest things we have to get rid of.

Karen:
Yes. And Alan, do you want to say a few words to send us out?

Alan:
Yes, of course. A quick recap, going back to motivation: why do we need an idea like CDD? It's an acknowledgment that conversational AI is hard and it's not going to magically solve itself, and I think those of us who are experienced owe it to folks who are new to the community to give them a shortcut, so they don't have to bang their heads against the same problems but can go further than we did, and we can all make progress together.

Alan:
So, an invitation to join the conversation in different ways. Share the knowledge you have, especially those of you who crushed the quiz, and help other teams move faster. A few initiatives we've been starting to get the conversation going around CDD: you can add a tag to your GitHub repo. There's now a conversation-driven development tag, and anyone who goes and checks out that tag can see your repo in the list, so you can compare and see what other people are doing.

Alan:
There's also a LinkedIn group that I would encourage everybody to join. It's called Conversation Driven Development, and it should be easy to find; some great conversations have been started in there. If you enjoyed the quiz, and I hope you did, share it with your friends. It might start an interesting conversation about how you as a team are building your assistants, and the URL is easy to remember, but I've got it up here in any case.

Alan:
As always, if you disagree or if you have thoughts, please email me. I'd love to hear about it. Start the conversation, and thanks for joining today.