Webinar

How Rasa’s Research Team Uses Rasa X to Build Carbon Bot

Originally aired: December 17, 2019

In this recorded 1-hour webinar, Alan Nichol, co-founder and CTO of Rasa, presents the Rasa research team's work building and fine-tuning Carbon Bot, a contextual assistant that helps travelers purchase carbon offsets for flights.

The demo includes:

  • Reviewing conversations and annotating training data in Rasa X
  • Pushing changes to GitHub using Integrated Version Control
  • Deploying updates via continuous delivery and running end-to-end tests

Transcript:

Hi, everyone. Can you hear me? I hope so. Thanks, everyone, for phoning in. Getting some comments. Does that mean people can hear us?

Yes, people can hear us.

Excellent. All right. Thanks for the quick feedback, everyone. So it's Alan and Ty here from Rasa. Great that all of you joined. We'll be recording this webinar and making it available online in case you want to share it with your friends later on. So we're going to talk a little bit about Carbon Bot, which is a project that we're working on in the research team, and specifically we're going to be talking about how we use Rasa and Rasa X together to improve the assistant based on the conversations that people have with it. So I'll be showing a mix of things. I'll be showing the GitHub repo, which is open source, I'll be showing the command line where I have the code running, and I'll be showing the server where we're running Rasa X and just do some live annotation and look at things, and we'll give you the chance to ask lots of questions. So if you have a question, just make a note of it. Ty's here, keeping track of all of them, and we'll make sure that there's plenty of time to answer everybody's questions about things that maybe don't make sense.

Cool. So I'll go and start with a very quick intro to Rasa and why we exist just for a couple of minutes and then we'll go straight into looking at Rasa X. So the reason why this company exists is to empower all makers to create AI assistants that work for everyone. That's our mission because we believe it's an important piece of technology. We think that voice and chat interfaces are extremely important, and it would be a real shame if we only get to use that technology in the context of being an Amazon customer or an Apple customer or anything like that. There are many important, valuable use cases for this technology, which the big tech companies will never build. So we want to make sure that everybody has the ability to build great AI assistants to help themselves and others in their lives.

The way that we aim to achieve that mission is by doing three things. We open-sourced the Rasa framework, which is an open source, machine learning based framework for building AI assistants. Then we invest heavily in our community. We have a very active forum, we have lots of active contributors. If you're in this webinar, there's a good chance you're already part of the community, so thank you for being part of it. It's really a core, foundational piece of our mission. We also do lots of applied research, and that's what we're doing with Carbon Bot. We have a research team, and our focus is really on finding out where people get stuck when they build an AI assistant. That can also mean just building one ourselves and seeing where we get stuck, and then figuring out what things we can build that help you build something that you couldn't build today. How do we push the boundaries of what's possible? All those things together are how we want to achieve our mission.

Lots of people use Rasa, which is great. We just had our third birthday this week, and since we originally open-sourced Rasa we've had over 2 million downloads. We have companies in all sorts of different industries using it for all sorts of use cases. We also recently launched a showcase on our website, so if you go to Rasa.com/showcase, you can see some of the things that people are building with Rasa. I really like the showcase because it shows just how different some of the things are that people are building, and it's really inspiring. The way we think about building AI assistants and making them more sophisticated is to split it up into five levels. This is something we talk a lot about at Rasa. We borrowed this idea from self-driving cars. So we don't just say there are dumb chatbots and artificial general intelligence, there are lots of layers in between. The same way that in a level five self-driving car you can just fall asleep in the back and wake up at your destination, we think of level five conversational AI as building an autonomous organization with lots of different AI assistants coordinating and handling large parts of your operations.

But where the advanced teams are today, and what we're really focused on right now at Rasa, is level three. Level three means that you can have fluid conversation within a narrow domain. So you don't have a chat bot that can talk about anything, you don't have an AI assistant that can help you with anything, you're focused on a specific domain, but you can handle users going off the happy path. That's a really crucial part of building AI systems that really help people rather than just frustrate them. Just to add a bit more color on the difference: a real level two assistant is one where you're rigidly interpreting a message, passing it through your NLU system, saying, this is what the intent is, these are the entities that I extracted, and I'm always going to give the same response. Whenever a user says this, this is what I'm going to say. Or maybe adding a little bit of a follow-on intent or a little bit of conditional logic, but really just basic interactions.

Then the next level up is kind of a basic version of level three, where you do some form filling. So you ask the user a couple of questions in order to help them. But of course if the user has follow-on questions, or they don't comply, or they want to know why you'd want to know that, or they're not sure about something, you get a little stuck. So what we think of as advanced level three is where you can guide a user to complete their goal, you can collect the information that you need from them, and you can also help them decide what it is they really want: choose between different options, ask follow-on questions, compare things, and have a more interesting dialogue around a narrow topic.

So one of the pieces of tech that's here to enable these advanced level three assistants is a model called TED, the Transformer Embedding Dialogue policy. We wrote a paper about this, and it's available in Rasa as of version 1.3, which was released this summer. What the TED model does is it looks at the history of your conversation at every turn and decides which parts are relevant at any point in time. So if the user has been going off topic for a bit, having some follow-on questions, and then returns to the original topic, TED knows which parts of the conversation to skip over because they're not relevant, and goes back and completes the topic that was addressed earlier on. So that's advanced level three, where you can have quite a natural, fluid set of conversations with nested sub-conversations and referring back to earlier things.
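For reference, enabling the TED model comes down to a policy entry in your config.yml. The sketch below is illustrative rather than Carbon Bot's actual configuration, and note that the Rasa 1.x releases discussed here shipped the model under the name EmbeddingPolicy; later releases renamed it TEDPolicy.

```yaml
# Minimal policy configuration sketch for the TED model.
# Values are illustrative, not Carbon Bot's actual settings.
policies:
  - name: TEDPolicy        # called EmbeddingPolicy in Rasa 1.x
    max_history: 5         # how many conversation turns the policy can look back over
    epochs: 100
  - name: MemoizationPolicy
```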

Cool. So we'll get back to level three in a little bit, but I also want to introduce Carbon Bot very briefly. Carbon Bot is a very simple thing. It just tries to encourage people to buy carbon offsets when they take a flight. So if you're flying, as opposed to taking the train somewhere, you're causing a great deal of emissions that could be avoided. If you can't avoid it, if flying is your only option, then you can at least buy some carbon offsets, which kind of counteract the carbon dioxide that's emitted by your flight. Recently we got a bit of press that gave a sarcastic take on the Carbon Bot idea, but I actually quite like the way that it was written up.

So if you want to give it a try, you can go to this link here, or you can scan the QR code on your phone and have a chat with Carbon Bot right away. It's deployed on Facebook Messenger right now, so if you want to have a go at it, just feel free to follow that link. The code for Carbon Bot is open source. So if you want to follow along, maybe have a look at the code base, also to ask some questions later in the webinar, feel free to go to GitHub now, have a poke around the repo, and look at how it's set up and the training data that's in there. We'll be looking at the GitHub repo in a few minutes in any case. Just giving you another second to grab the link in case you want to type it in.

Cool. So then I want to talk a little bit about Rasa X and how we use it together with Rasa to build Carbon Bot. Rasa X is not a replacement for Rasa, it's a tool that you use together with Rasa Open Source. The way we intend it to be used is that you build the minimal, simplest possible version of your bot using Rasa Open Source, and then, as quickly as possible, you give it to some people to test. There's really no substitute for getting test users to try your bot and learning from those conversations. A mistake we see developers make all the time is that a team of people goes off into the mountains for two or three months working on an assistant, building out lots of things, coming up with lots of training data themselves. And then as soon as they put it in front of a test user, it just breaks down completely and doesn't work. There's just no substitute for getting people to talk to your bot and figuring out the language your users use, the kinds of questions they have, the things they're interested in. Your users will always surprise you, so you want to optimize for getting them to talk to your bot as early as possible.

So we think of this as different stages. You have your minimal viable assistant. Once you have that running, you can set up Rasa X, you can talk to the assistant yourself, then you can share the link with some friends or some colleagues and get them to test it out. And then once you have it at a level that you're happy with, you can start involving real users and hook it up to Facebook Messenger or Slack or wherever it is you want to deploy your bot. So we really see these tools as working together. An important point is that Rasa X is not meant to be a kind of all-in-one, point-and-click bot building platform. Rasa X is there to do the things that are hard to do just in code and on the command line. Some of the things I'll be showing today are things that you really just need a server collecting data and a UI for.

So the way Rasa X works is you collect the conversations that people are having with your assistant. That's the first step. You then review them: you go through, see what people are saying, and understand which parts of your bot you need to improve. Then you set it up together with your continuous integration and deployment to continuously improve the assistant as you invest time and energy into it. So that's what I'll be showing in the webinar today, this process and how we use it for Carbon Bot. Cool. So that's a nice segue into showing some Rasa X stuff. I'll just go right ahead and minimize this window, and we can look at Rasa X.

So I have the Rasa X UI open. It's deployed on a server here at carbon.rasa.com, and this is just where I go and log in and see everything that's going on with the bot. If you want to figure out how to deploy Rasa X on a server, the latest episode of the Rasa Masterclass is episode nine, and it goes into a huge amount of detail on provisioning a server, installing Docker Compose, getting everything up and running, and getting it hooked up to the right channels so that people can actually chat to your bot. So whenever I go here and look at what people are doing, the first thing I do is go to the conversation screen, and that's the most important screen there is.

So if I just look, it looks like some of you have been chatting to Carbon Bot just now, which is cool. So your conversations are coming up here. I can see that, in total, over 400 people have chatted with Carbon Bot, which is very cool and very encouraging. That's obviously then a challenge for me to figure out: which of these conversations contain something interesting that I can learn from, and which ones are going well and which ones aren't? So we'll be doing all of that. But the first thing I want to do is show you, for those of you who haven't tried Carbon Bot for yourselves, what it's like to talk to it. Like I said, one of the things we want to achieve with Rasa X is making it as easy as possible to get people to test out the earliest version of your bot, because the only way to build a good assistant is to build a bad assistant, give it to people to test, and figure out where it breaks.

So we have this very simple feature in Rasa X where you can share your assistant with guest users. I can go here, generate a link, and share it with anybody I want to test it out. That might be some of my colleagues, my friends, my partner, whoever, and I can simulate their experience. If I copy the link and open it in an incognito tab, then it's like I'm a guest. We'll give this a second to load. Right. So this is what my friends will see. They'll get this very simple UI where they can talk to Carbon Bot, and I can actually fill out the information here on the left about what instructions they're given and what they've been told to do.

So I'll just say hi to get started. I've cheated here a little bit, in that mostly the conversations are happening on Facebook, and on Facebook you have this get started payload. So when I press the Christmas button, it sends the get started payload, so this is the point where someone on Facebook Messenger starts off. At this point, I'm having the same experience as someone who's opened up the bot on Facebook Messenger. They're told, first of all, that all of this data is becoming open source, so please don't tell us anything sensitive about yourself. Then it prompts them that they're traveling somewhere for Christmas and will be flying there, and asks if they're interested in buying UN certified carbon offsets.

So I'll just go down the happy path so you get a sense of what this bot does. I'll say yes, and it tells me a bit about the greenhouse gases which are emitted by air travel, and that a typical flight will actually emit close to a ton of carbon dioxide, which is quite a large amount. So if people want to get a more accurate estimate for their particular flight: yes, let's do that. Will you be traveling in economy class? Yes, I will. I will be flying from Berlin, and I'll be heading to San Francisco. I've just put the airport code in this case, so hopefully it should pick that up. Excellent. Right. So very simple. This is just the happy path. It's telling me that my one-way flight from Berlin Schönefeld to SFO emits 5.3 tons of carbon dioxide.

The point at which I had this working was exactly the point at which I started giving it to people to test. Obviously, there are lots of things that can go wrong. People can ask questions, people might not know things, people can change their mind, people can tell you all sorts of things and have questions, and all those things have been happening, and I've been using them to improve the assistant. But if you get something really basic like this working, that's already a great moment to give the link to people and get them to test it out, because no matter what, they will surprise you with what they say. Right. That's a very simple case. So let's go back and look at our conversations. Oh, somebody said hi Alan, that's nice. Hi, whoever that was. I don't know who you are, and I'm happy about that.

So now I have 400 and something conversations, and the question is: are these going well? Are these not going well? Which ones should I look at? The first thing I'll probably do is filter these conversations, because what I see quite often is that people open up the link on Facebook, maybe say hello or something, but then abandon the conversation and don't really engage. So let's say that I want to insist that there are at least three user messages in here. I'll apply that filter, and that's already knocked out about 100 of these conversations. So that's filtered things down quite a bit, and I know that these are actual conversations where something relatively interesting is happening.

If I want to go a little further, then the question is: which conversations really include something interesting? One of the questions I get a lot is people asking how that number is calculated, those five tons of carbon dioxide. Where on earth do you get that number from? People are, of course, right to be skeptical. So I could do that by filtering for an intent, but I'm not going to. I'm going to filter by the action where the assistant explains how this calculation is done. It says here, the action that explains the offset calculation, so we'll have a look for all the conversations where at least this happens. We now have a much more manageable number of 33 conversations.

So this one looks very recent. It's someone who's been having a conversation with our bot just now. Apparently this action took place here. Unfortunately... ah, okay. So this looks like an error that the bot made. It looks like I misunderstood, or rather the bot misunderstood, what the person said, and this is an interesting case, because in this particular scenario... well, yes, it means affirm. I have an affirm intent, but I'm not super comfortable with the idea of annotating this as affirm. So I'm going to deal with this later. I'm just going to mark this with a flag, and then I can filter by flagged messages later on and deal with the ones that are difficult and that I'm not sure how to handle.

So I go to the next one and see if I can make some progress on something that worked a little better. Here the person got their quote. Yes, they were going from Edinburgh to Poznan. Excellent. So we told them... The reason this looks this way, by the way, is that it's a custom Facebook message; if you're chatting to the bot on Facebook, you get a nice preview with a hyperlink, and that's why it's rendered this way. So this person is surprised that their flight is emitting almost a ton of carbon dioxide. I think it's a pretty appropriate response here to say you can find out how these offsets are calculated. So the person got their quote, they then got an explanation, then they say, "Thank you." Okay, the bot says, "Goodbye." They ask if this is all it can do. I can help you calculate... Okay, that's useful. A little smiley. Thank you again. Goodbye.

I'm actually really happy with how this conversation went. I'm really chuffed. So what I'm going to do is, if I select the last action in the conversation, the whole conversation is selected, and on the right I can see the story that corresponds to this conversation. I'm happy that this worked, but I want to make sure that this always works. So I'm going to copy it, and I'm actually going to add it to my test stories. That's something I'm just going to do on the command line. So if I go here, I'm in the Carbon Bot repo. I have all the usual files in here for a Rasa bot: a config, a domain, an actions.py file, and a test stories folder with some stories in it. So I'll have a look at that, and I'll add a test story here.

So I know that this works now, but the point of tests is to make sure that you don't break things. That's why I'm going to add a test story here. I'll create a new branch, add the new test story in, and push that up to the... oh, I always forget this. I'll push that up to GitHub. Wonderful. So I think one of the most important things about building a good assistant is just having a decent set of tests, because just because we're building something which has machine learning components in it doesn't mean that we should give up all our good software engineering practices. You should write tests, you should do CI, you should check against regressions, you should do versioning and branch-based development, and all those things are important.
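For reference, a test story along the lines of the one added in this step could look roughly like the sketch below. All intent, action, and message names here are invented for illustration and are not taken from the Carbon Bot domain; the Rasa 1.x versions used in this webinar wrote test stories as Markdown files, while Rasa 2.x and later use the YAML format shown.

```yaml
# Hypothetical test story sketch (tests/test_stories.yml in Rasa 2.x+).
# Intent and action names below are placeholders, not Carbon Bot's.
stories:
- story: quote, then explanation, then goodbye
  steps:
  - user: |
      yes please
    intent: affirm
  - action: action_calculate_offsets   # hypothetical custom action
  - user: |
      how is that calculated?
    intent: ask_how_calculated         # hypothetical intent
  - action: utter_explain_calculation  # hypothetical response
  - user: |
      thank you, bye
    intent: goodbye
  - action: utter_goodbye
```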

When you think about what workflows we want people to use with Rasa X, they should fit into good software engineering practice. In any case, I've created a new test story, which is cool. If any of these were misannotated, I could also annotate them in here. But let's maybe look at a different screen now. If we go and check out the NLU training screen, I have an inbox here under the annotate new data tab, and this includes all the predictions that NLU made. I can check whether the predictions look correct and decide whether I want to add these messages to my training data. For example, this person is saying, you're a bit dumb, aren't you? They're obviously not particularly impressed. Actually, if I want to go look for conversations later that are going badly, I should probably look for this negative emotion intent, because that seems like a pretty good indicator that things aren't going particularly well.

So let's look at some examples. Colombo, the city, yes, that's an inform. These look pretty good, so I'm going to mark these as correct, and I'm going to mark this one as correct as well. Also, this person says, I'm not flying for Xmas. They're denying that they're flying, so that also looks correct. So anyway, I've just confirmed a few of these predictions. Calculate my emissions, this also looks pretty good. This is an interesting one: I am using retrieval intents, and this is one of my retrieval intents for general chitchat and discourse markers like "so" and so on. So that's probably fine. But in any case, I've now got an indication here, because I've got the new Integrated Version Control feature set up, and I've got a yellow status here, which is telling me that I have changes, which are these annotations I've just made, which should now be here in my training data. So these are the ones I just annotated. These are not in Git yet. I have this deployed on the server, so they're perfectly safe, they're in the database, but they're not yet part of Git, and I want to make them part of Git.

So what I'll do is say here that I want to add these changes to the Git server, and I can collect as many changes as I want here. Of course, I can also add some new stories, change my domain, whatever it might be. But in any case, I'm going to add these changes to the Git server. The way we implemented this is basically the same as if you go on GitHub and edit the README file directly: you can choose whether you want to commit to master or create a new branch. My preferred workflow is to create a new branch, so I'll do that. I'll add these changes, and what this will do is actually create a new branch and push that up to GitHub. So you can see here I'm back to green status, which means that Rasa X has pushed my branch up to GitHub and has checked out the master branch again. We did that so that you can never, ever end up with version conflicts in your Rasa X server, because we didn't want to rebuild GitHub inside of Rasa X.

So this is the new branch that was just pushed. It has some metadata in the branch name: the time and the person who created the annotations. So I can create a pull request, and let's have a look. Here you go. This is a nice diff, just as if I had changed things locally or done anything else. I have a nice diff here showing the new training examples that I've created. So I'm going to go ahead and create that pull request. This is now going to run some checks. Because I have tests and CI set up, I know that these tests will run and will tell me if these are good changes that I want to integrate into my bot, or if some of my annotations have broken something or decreased the performance. These tests will take about 15 or 20 minutes to run, so I'll check a PR I made a few hours ago instead. Here I added a couple more new examples, something very similar.

So it ran a bunch of checks, and they all passed in this case. Let's have a look at what they do. One of the outcomes is that my GitHub Action here posts the results of a cross-validation on my intent classification data set. What this does is take all of my NLU training data and do five folds, or five different runs: each time it'll take one chunk of the data, split it off, use it as a test set, and train on the remaining chunk. It'll do that five times and then average out the scores. So it gives me some sense of how well my model is generalizing, which intents are easy to predict, which ones are difficult to predict, and whether my configuration is a good one or whether I've introduced something which is degrading performance.

So it's telling me here that my overall F1 score is in the 80s, which is not amazing but pretty decent, and it gives me the F1 score for each of these intents, along with the other intents that are the most common false positives. So it's a kind of very lightweight version of a confusion matrix, just as a Markdown table. This is not in any way a Rasa X feature, it's just part of the workflow that I use with Rasa X. I'm using GitHub Actions here, but I could just as well be using Jenkins or Travis CI or anything like that. So I'll show you how it's set up. If we look here in the GitHub workflows, I just define my CI job the same way I would on Travis CI or Jenkins or anything else: I set things up, I install the requirements, and I run through my test stories. This is very important: I run through all my test stories and make sure that any time I change a configuration, change my max history, or add new stories, I don't create a change which breaks this nice story that was predicted well earlier. It also runs this cross-validation and then posts the results back as a comment on the PR.
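As a rough sketch, a workflow along these lines does the same kind of thing. The file path, job names, Python version, and data paths below are assumptions rather than the actual Carbon Bot workflow, and the step that posts the cross-validation results back as a PR comment is omitted here.

```yaml
# .github/workflows/ci.yml (hypothetical sketch, not the Carbon Bot workflow)
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.8"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Train a model
        run: rasa train
      - name: Run the test stories
        run: rasa test core --stories tests/ --fail-on-prediction-errors
      - name: Cross-validate the NLU data
        run: rasa test nlu --cross-validation --folds 5
      # Posting the cross-validation results back to the PR is omitted here.
```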

So there's no magic going on here. It's a very simple configuration. You can copy it; this repo is open source. You can apply this to your own Rasa projects independently of whether you're using Rasa X. But the way I use it with Rasa X is that I create one of these pull requests using the Git integration, and then, no matter who's been creating the annotations, I know before I merge them that we haven't done anything that's going to break something. That's a really key part of the workflow. I'm a big believer in having tests, in running tests, and tests should be able to fail. That's the most important thing: when you change things further down the line, you want to know you haven't broken something and caused a regression. So, in any case, we'll go back.

So if I think, well, look, I added a few more NLU examples here, I'm pretty happy with this performance, and I haven't degraded any of the intents particularly, then I'll just merge that in, and I can delete this branch now. I don't need it anymore. My Rasa X server, here you go, it just pulls, and now it has the latest version of the master branch. So any changes that I make, whether I'm pushing a test story locally on the code side or creating annotations in the Rasa X server, everything ends up back in Git, which is the single source of truth. Everything's tested, everything gets run through CI.

There are lots more things that I could show around Rasa X and other features I could show off, but I really just wanted to bring home a couple of key messages around how you integrate a Rasa X server with good software engineering practices, like version control, writing tests, checking all these things, being able to roll back easily. So I think I'll stop the demo there and go back to the slides. I have a couple more slides to show and then we'll go right ahead and switch to your questions. So we should have plenty of time for questions. Cool.

So I just want to point you all to some more resources. There's one which I don't have a slide for because it's so new, which is a post on our blog. If you go to blog.rasa.com or Google the Rasa blog, you'll find the most recent post up there about Rasa and Rasa X being better together, and it explains in a lot more detail this process of building the minimal viable assistant, the first version of your assistant, and then improving it over time based on real conversations using Rasa and Rasa X together. So that's a really good resource which explains a lot of these things in more detail.

I already mentioned at the start that if you're interested in deploying Rasa X on a server, which is really where it's most useful and most powerful, check out the episode of the Rasa Masterclass that was posted just a few days ago. It's on YouTube, and it goes into a lot of detail about how to set up Rasa X on a server so that you can use it too, get people talking to your bot right away, see all the conversations from all those different channels in one place, and use them to improve your system. And then finally, if you have questions about usage or anything like that, I can always recommend going to our forum. I'm guessing that a lot of people in this webinar are already forum members, so I probably don't have to tell you this, but people are very responsive on there, people are extremely helpful. So if you have any questions about how to do something, how to set something up, or what the best way to use something is, just sign up, it's free, and ask any question that you like. That's at forum.rasa.com.

So with that, I'll switch over to the questions, which Ty's been collecting. Thanks for paying attention so far. Is the first question that top one there? So the first question is: what is the best and recommended way to deploy Rasa on AWS? It's a very good question. If you want to deploy Rasa and Rasa X on AWS, you have a couple of different options. The option Juste described in the YouTube video is to use Docker Compose, which you can essentially do on a bare metal server. So if you just have an EC2 instance, you can install Docker Compose on there and spin up all the Docker containers to run a Rasa X deployment. If you want to use a managed Kubernetes service, I think EKS is the name of the AWS offering, and Google Cloud has one too, the Google Kubernetes Engine. Most cloud providers now have a managed Kubernetes service, and we provide Helm charts. There's an updated Helm chart coming out in the Rasa X release coming this week, 0.24. The Helm charts give you a nice way to install, with just minor configuration, a full Kubernetes deployment with all the good stuff that comes with Kubernetes. For Rasa X 0.24, you'll need to be using the latest version of Rasa Open Source, which is 1.6.0.

Cool. Next question. It is sometimes difficult to get test data for a bot. What would be the best way to get a sample data set for a use case from a set of random crowd users? So crowdsourcing is a common thing, and actually the first 50 people that I had talking to Carbon Bot I found via Mechanical Turk. You can use Rasa X and share your guest tester link on Mechanical Turk or any other crowdsourcing platform and get people to talk to it there. Just be sure to write them clear instructions about what they should do. We also have a resource: if you search for NLU training data, we have a GitHub repo where community members can donate their training data to the community for common use cases. So there's already a bunch of PRs there from people, so you can bootstrap and get started faster.

Oh, Manuel asked, any reason why you picked this website specifically for offsetting carbon emissions? It seems that the price per ton is super low, around one to two US dollars, compared to other renowned websites such as Gold Standard. That is a very good question. I went for the UN one because I thought the UN had the most credibility. I have no intention of us making any money off Carbon Bot, we don't get a referral fee or anything like that, and the UN felt like the most neutral place to send people: just sending them off to that website and telling them how many tons they should purchase to offset their flying. But you're quite right that there are lots of open questions about carbon offsetting, and I've actually seen lots of conversations where people aren't necessarily convinced that carbon offsetting is effective. I've also seen a whole bunch of conversations where people aren't convinced that climate change is a thing, so it's been very educational from that perspective as well.

Cool. William is asking, is it possible to extract the Rasa X logs in bulk to create custom views and reports based on custom metrics? Yes, it is. When you're running Rasa Open Source, it creates an event stream which you can tap into, and you can dump that into any data warehouse that you like. You can also query anything you like from Rasa X directly. If you go to rasa.com/docs and look under Rasa X, the REST API for Rasa X is fully documented, and that's actually the documentation that we use to develop the front end of Rasa X. So everything is in there. It's really all there.
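For reference, the event stream mentioned here is configured through an event broker in endpoints.yml. The sketch below uses the built-in RabbitMQ (pika) broker with placeholder connection details; SQL and Kafka brokers are also supported, and the exact key names can vary slightly between Rasa versions.

```yaml
# endpoints.yml sketch: stream all conversation events to RabbitMQ.
# Connection details are placeholders.
event_broker:
  type: pika
  url: localhost
  username: rasa_user
  password: rasa_password
  queue: rasa_events   # newer Rasa versions use a "queues:" list instead
```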

Vishu asks, which of these Rasa X features are community and which are enterprise? Or put another way, what's the difference between the enterprise version and the free, closed source version of Rasa X? So on how we differentiate between Rasa X Community Edition and Enterprise: none of the things I'm showing you today are enterprise features. The way that we think about it is that Rasa X Community Edition helps you deal with technical complexity, so it gives you the tools you need to build a better assistant. The enterprise features are the things that you need if you have organizational complexity.

So if you have different people on your team who need different permission levels: in Rasa X Community Edition, there's just one shared password to log in, and everybody who uses it has admin rights to everything. For a small team, that doesn't matter. If you're a large organization, you'll need role-based access control, and that's something that we offer; you can customize the roles and the permissions that each user has. We have single sign-on integration, we have extra security features like SSL between containers, and we have an extra analytics feature for reporting. So, things that you need when you're in a larger organization that you don't need if you're just a small team. Those are the things that go into the enterprise product.

So the question is, how well does Rasa X scale for reviewing and annotating a large number of conversations? We actually just merged a huge performance improvement to the conversation screen, so thanks, Emma, for that big PR. It makes things much more performant, which means that we do fewer renders of the UI, it's much leaner, and we send fewer requests. Together with the filters, you can boil down quite quickly to the conversations that you actually want to look at. That said, the question of how we can make it easier to find the conversations that are really relevant, that are really worth looking at, and that contain some new information is a very interesting one and one that we think about a lot. It's very much on our roadmap as well: if you really have 10,000 conversations every day and you can only look at 50 or even 20, which 20 should you look at because they really contain something interesting? So expect some more features in that vein soon.

The question is, can we use the Community Edition directly at work? Yes, you can. The license for the Community Edition allows you to use it for commercial purposes. If you go to the Rasa X docs, there's also a link to the terms and conditions, both a plain-English FAQ and the detailed legalese version. The short version is: you can use it at work, you can use it for commercial projects, and you can run it in production. The only thing you can't do is run it as a SaaS service for other people, so you can't build a third-party SaaS business using it. That's pretty much the only restriction.

The question is, if we only use Rasa NLU and not Rasa Core, can we still integrate with Rasa X to annotate data, test, and train? The answer is yes, you can. As I just showed, with everything in Git, you can have everything hooked up so that your training data shows up in the Rasa X UI. What won't happen automatically if you're only using Rasa NLU is that you won't have this conversation screen filled with all your new conversations. But like I said, the whole REST API for Rasa X is documented, so if you want to manually post messages in from your Rasa NLU server, you can do that.

Then the next question: if we're not able to use Rasa X, do we need to write a custom tracker store which will store all the conversations, and then manually correct our bot's NLU and story data? Essentially, yes. We built Rasa X because there is this need: people need a tool that's better than dumping a log file and manually going through it to try and figure things out. We think having the server set up and having everything integrated in a slick UI makes it much easier to filter down conversations and look at them. If you don't want to or can't use Rasa X, then unfortunately, you'll have to do things manually.
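For reference, if you do go that route, you often don't need to write a tracker store from scratch; Rasa Open Source ships with several that you can configure in endpoints.yml. The sketch below uses the built-in SQL tracker store with placeholder connection details.

```yaml
# endpoints.yml sketch: persist conversation trackers in PostgreSQL.
# Connection details are placeholders.
tracker_store:
  type: SQL
  dialect: postgresql
  url: localhost
  db: rasa
  username: rasa_user
  password: rasa_password
```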

The question is, when will the recording be shared? As soon as possible. We have to do a little bit of editing, and then it will be posted on YouTube. And then the final thing I just want to mention again is that if you go to our blog at blog.rasa.com and check out the most recent posts, there's one on using Rasa Open Source and Rasa X together, with lots more information about all these things. I'll just plug it one more time: check out episode nine of the Rasa Masterclass on YouTube. That's got detailed instructions on how to set everything up on the server and get everything hooked up. Cool. All right. Then I think we're through all the questions. Thank you all for joining. I hope this was interesting and informative. If you have further questions, I will see you on the forum. Thanks, everyone. Bye-bye.

Speakers
Alan Nichol

Co-Founder & CTO

Rasa

Ty Dunn

Product Manager

Rasa
