Improving conversation quality of Manifest AI assistant
In this project, I will walk through how we at Manifest improved the conversation quality of the Manifest AI shopping assistant by increasing user engagement with the training experience.
About Manifest AI
Manifest AI Shopping Assistant helps shoppers find what they need & convert faster through chat conversations. Functioning as a virtual sales agent on Shopify storefronts, it answers questions, recommends products, reduces cart abandonment, and promotes upselling and cross-selling. Additionally, it engages visitors with gamification and quizzes.
Context
Many brands were dropping off because the conversation quality was not satisfactory. Businesses were installing the app and, after testing it for a while, uninstalling it without exploring the full capabilities of the platform.
We wanted to reduce these drop-offs, help businesses realise the full potential of the tool, and improve the conversation quality.
User research
What our users were saying
“Assistant was only good for basic questions, disappointed with response quality.”
“Couldn’t answer many questions, unclear training process without support.”
“Assistant’s responses were basic, struggled with more detailed inquiries.”
“AI unable to answer some questions despite information being available on the site.”
Summary
- Response quality fell short of expected standards.
- The AI assistant was proficient primarily at addressing basic inquiries and failed to handle complex ones.
- In some instances, the assistant failed to provide answers despite relevant data being available on the store.
- Understanding the training process proved challenging without direct support from the team.
Reasons for poor conversation quality
- Bot not trained on enough data to cover different types of use cases.
- Merchants were not able to correct responses to improve future conversations.
- Poor discovery of message sources, which would let merchants add more data to a source if they wanted to improve a response.
- Merchants had no idea what customers usually ask and what data needs to be added.
Goals
To improve the response quality, the platform needed more data to train on, so we needed our users to add more data to the platform. The more the data, the better the response. We decided to focus on two things —
- Improve discovery of the message source so that users can add data to the source if a message needs improvement.
- Improve the structure of data addition in the training tab.
Solutioning
Improving the discovery of message source
In Manifest, merchants can view bot conversations in the messages tab. There they can give feedback on whether they are satisfied with a conversation and view the message source by hovering over the “i” icon. Since the message source was only visible on hover, its discovery was very low.
We figured out that just showing the message source is not enough to improve responses. There should also be a way to add data or corrections while reviewing conversations in the messages tab.
Instead of the “i” icon, we added two buttons: message source and update response. These text buttons clearly indicate what actions merchants can take to improve a response.
Message source: The new message source shows three pieces of information —
- The intent of the message.
- The data used to form that response.
- A CTA to add relevant data that can be used to improve the response.
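To make the shape of this information concrete, here is a minimal sketch of the data the message source panel could surface. The interface and field names (MessageSource, sourcesUsed, addDataUrl) are hypothetical illustrations, not Manifest's actual API.

```typescript
// Hypothetical shape of the data behind the "Message source" panel.
// Field names are illustrative, not Manifest's actual API.
interface MessageSource {
  intent: string;             // intent detected for the shopper's message
  sourcesUsed: {              // data the assistant used to form the response
    title: string;            // e.g. a page, PDF, or Q&A entry
    snippet: string;          // the excerpt that informed the answer
  }[];
  addDataUrl: string;         // CTA target where merchants can add relevant data
}

// Example: summarising the panel content as plain text.
function describeSource(source: MessageSource): string {
  const sources = source.sourcesUsed
    .map((s) => `- ${s.title}: ${s.snippet}`)
    .join("\n");
  return `Intent: ${source.intent}\nData used:\n${sources}\nAdd data: ${source.addDataUrl}`;
}
```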
Update response: This action allows merchants to correct a response for future conversations by directly omitting the original response.
Improving the training tab
The training tab had two sections —
- Data sources: Allows users to add data sources based on the type of data. After adding them, users can choose to assign an intent to that data.
- Improve your responses: In this section, data is added based on intents. Users first decide the intent of the data they want to add and then add data under that intent. Adding data here improves the performance of the bot.
Here we mainly focused on improving the “Improve your responses” section. To improve accuracy and quality, we wanted users to add data through this section.
Issues with the section:
- The UI was confusing; users weren’t able to understand what they were supposed to do here and were skipping to the “Data sources” section.
- The UI wasn’t motivating enough to engage users in adding data here.
- Some sections here, like Agent handover and Products, shouldn’t have been here as they provided functions different from the other intents.
We broke down the training tab into three sections — Data sources, Improve your responses, and Products.
In the “Improve your responses” section, we added a training quality score that informs users where the bot is weak and where it is strong, and the same is reflected in the conversations. This also motivates them to add more data when the score is low.
Alongside the training quality score, we also show the number of data sources added to each intent, which keeps users better informed about whether they need to take any action to improve conversation quality.
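As an illustration only, a per-intent training quality score could be derived from how much data has been added to each intent. The saturation target, the 0–100 scale, and the names below (IntentTraining, TARGET_SOURCES_PER_INTENT, trainingQualityScore) are hypothetical assumptions, not Manifest's actual scoring.

```typescript
// Hypothetical per-intent training quality score.
// Assumes the score grows with the number of data sources added to an intent
// and saturates once a target amount of data is reached.
interface IntentTraining {
  intent: string;           // e.g. "Shipping", "Returns", "Product details"
  dataSourceCount: number;  // number of data sources added to this intent
}

const TARGET_SOURCES_PER_INTENT = 10; // assumed saturation point

function trainingQualityScore(training: IntentTraining): number {
  const ratio = Math.min(training.dataSourceCount / TARGET_SOURCES_PER_INTENT, 1);
  return Math.round(ratio * 100); // 0 = weak intent, 100 = well trained
}

// Example: surface weak intents so merchants know where to add data.
const intents: IntentTraining[] = [
  { intent: "Shipping", dataSourceCount: 2 },
  { intent: "Returns", dataSourceCount: 12 },
];
const weakIntents = intents.filter((i) => trainingQualityScore(i) < 50);
console.log(weakIntents.map((i) => i.intent)); // ["Shipping"]
```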
For each intent, we added three sections — Generated Q&A, Unsatisfactory responses, and Add data sources.
Generated Q&A: We saw that even when merchants added data sources through links and PDFs, they were missing out on basic information that customers frequently asked about. So we added this section, which includes template questions that are frequently asked by customers. Users can use AI to generate responses or simply add responses themselves.
Unsatisfactory responses: Sometimes the assistant doesn’t have enough data to answer a particular query. Manifest can mark those responses as unsatisfactory. These unsatisfactory responses can be fixed from the messages tab or in the training tab under the “Improve your responses” section.
Results 📊
Implementing these solutions increased the volume of data added to Manifest, which improved the quality of responses for the brands. These solutions were implemented over multiple sprints and, within 4 months, helped drive monthly conversions from 8% to around 24%.