Share your Assistant




Bringing users into the development process, early and often, is one of the cornerstones of conversation-driven development. We typically think of user testing as a method to validate whether the user’s problem is being solved, and this is certainly a good reason to get your assistant into the hands of users. A user test can provide incredibly valuable insight into users’ expectations and behaviors that can’t be uncovered in a simple interview, and understanding this feedback early on allows you to course-correct and modify your design before changes become too costly or technically complex.

But there’s another good reason to test your assistant with users while you’re still in the early stages of development: it helps you build a representative data set. User testing doesn’t just measure how effective your design is; the process also generates messages, which can be annotated and turned into training data, helping you build your assistant at the same time.

If possible, you should recruit testers who reflect the demographics of your end users, but early tests can also be conducted using volunteers from your organization. The most important thing is that testers should not have first-hand knowledge of the assistant’s development. Testers who were involved in building or designing the assistant tend to subconsciously stick to the messages and conversation flows they know the assistant can handle. Even though it might be painful to see your assistant fail, test users who stretch the limits of its capabilities will give you the best insight into how to prepare your assistant for the real world.

We recommend running this play as soon as your assistant can handle a few basic conversation paths, and we also recommend using Rasa X to share your assistant with test users while it’s still in the early stages. Rasa X provides a simple chat interface, so you can test before you connect a messaging channel, and your testers’ conversations are automatically saved for you to analyze. You can run this play as often as you need: while your assistant is in development, and again after you release it to production.


 

Play 2: Conduct a User Test

Materials:
Rasa X instance, running locally or on a server
Laptop or mobile phone for each tester

Time:
2-3 hours

People:
3-10 testers
1 note taker
1 facilitator

Step 1: Prep for the user test

Before you begin, decide on a list of tasks you want testers to complete. Provide enough contextual information for testers to complete the tasks without leading them or influencing their behavior. Prepare a list of 10-15 questions you’d like to ask during the session.

Designate a note taker from your team to record observations during the session. This allows the facilitator to concentrate on asking questions and guiding the session, and it brings other team members into the process as active participants.

Step 2: Explain the process

At the beginning of the session, warm up your testers by providing background information about your project and outlining what you’ll be doing during the test session. Encourage your testers to open up and be candid—it’s important to convey that you’re testing the assistant, not them. If you’re conducting your user test remotely, this step can be accomplished via email.

Step 3: Share a test link

If you’re running Rasa X locally, you’ll need to run ngrok to make your assistant reachable from your testers’ own devices. If you’re running Rasa X in server mode, the test link will be publicly accessible by default.
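
If you’d prefer to script the tunnel rather than run the ngrok CLI by hand, one option is the third-party pyngrok wrapper. The sketch below is minimal and makes an assumption: that Rasa X local mode is listening on its default port, 5002. Adjust the port if your setup differs.

    # Minimal sketch: expose a locally running Rasa X instance through ngrok
    # using the third-party pyngrok wrapper (pip install pyngrok, version 5+).
    # Assumption: Rasa X local mode is listening on port 5002.
    from pyngrok import ngrok

    RASA_X_PORT = 5002

    tunnel = ngrok.connect(RASA_X_PORT)          # open an HTTP tunnel to localhost:5002
    print("Public URL for your testers:", tunnel.public_url)

    input("Press Enter to close the tunnel once the session is over...")
    ngrok.disconnect(tunnel.public_url)          # tear the tunnel down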

Generate a Share your bot link using Rasa X and distribute the link to each tester. If your testers are on mobile devices, we recommend converting the link to a QR code.
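
If you’re comfortable with a little Python, the qrcode package can generate one from the share link. The URL in the sketch below is only a placeholder for the link Rasa X gives you.

    # Minimal sketch: turn the "Share your bot" link into a QR code image
    # using the qrcode package (pip install "qrcode[pil]").
    # The URL is a placeholder -- substitute the link generated by Rasa X.
    import qrcode

    share_link = "https://example.com/your-share-link"   # placeholder
    img = qrcode.make(share_link)                        # build the QR code image
    img.save("assistant-test-link.png")                  # print or display this for testers to scan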

Ask your testers to complete the tasks you’ve outlined and encourage them to describe their thought process out loud, especially if they find something frustrating or surprising. 

Step 4: Interview the testers

Interview your testers using the questions you prepared. Let your testers do most of the talking; you can follow up with prompts such as, “and then what happened?” to encourage testers to keep going and open up.

Step 5: Discuss and document

As soon as the session has ended, reflect on what you’ve learned as a team while the session is still top of mind. Then, review the conversations you’ve collected. Check out Play 3 for more tips on reviewing and annotating conversations.  

Discuss important feedback and publish the notes and summary of the test session in a place where they’re accessible to others, both on your team and across the organization. Create a list of next steps for turning what you’ve learned into improvements for your assistant.

Discussion Questions

  1. Were users able to return to the happy path if they got stuck?
  2. Did users have a clear starting point for how they should begin to complete a task?
  3. Were users able to successfully complete the tasks they were asked to do?
  4. What were your testers’ expectations for what the assistant should be able to do? How did testers approach the assistant based on these expectations?
  5. How do users structure their requests to the assistant—in short commands or longer paragraphs? 
  6. Under what conditions will your assistant be used? Will users be on a mobile phone or at a desktop? Focused or multitasking? Does this influence the way users interact with your assistant?
  7. How do testers respond to the assistant’s tone and voice?
  8. How will the team publish the summary of testing sessions to make the research accessible to others in the organization?

Next: Review and Annotate Conversations

Back: CDD Self-Assessment