Dynamic Follow-Up Suggestion
UX Design · 2024
Chegg, a leading higher-education platform with millions of subscribers, revamped its Q&A product into an AI-powered conversational learning experience in 2023. One key challenge was low engagement with follow-up questions. As the sole UX designer, I introduced a feature that lifted follow-up engagement by 4.67% in an A/B test.
My role
Sole UX designer & UX researcher
Contribution highlights
- Increased the share of subscribers asking follow-up questions by 4.67% in an A/B test
- Redesigned information hierarchy
- Crafted prototypes for leadership review
- Conducted UX research
Duration
11 weeks
Team
UX designer (Me), UX content designer, Product manager, ML and software engineers, Learning science designer
Tools
Figma, FigJam, UserTesting.com
Impact
4.67%
Increase in subscribers who ask follow-ups
3.93%
Increase in follow-ups per subscriber
Context
The new chat functionality was underutilized by students
Historically, most students used Chegg to search for homework solutions and left the platform without further engagement. A major reason Chegg revamped the legacy product into an AI-powered chat experience in 2023 was to provide a more personalized and engaging Q&A experience, especially by enabling students to ask follow-up questions about homework solutions.
In the new experience, after users submit a homework question and receive a solution—either from archived expert answers or the LLM—they can continue to ask follow-up questions to deepen their understanding. These follow-ups are seamlessly answered by the LLM.
However, asking follow-up questions was not yet a natural behavior among students using Chegg. Data revealed that only 19% of user inputs focused on digging deeper into the same question, indicating significant room for improvement.
19%
of user inputs are follow-ups.
Goal alignment
How might we increase engagement with follow-up questions on homework problems?
To tackle the challenge of low engagement with follow-up questions in Chegg's AI-powered chat experience, we kicked off the project by identifying and aligning on the target users, their needs, and the business goals.
👥 Target Users
Chegg subscribers who have been allocated to the new AI-powered chat experience, primarily STEM college students in the U.S.
⛳ User Need
Subscribers are either unaware of or underutilizing the follow-up functionality within the AI-powered chat experience, a feature designed to enhance learning effectiveness based on learning science. Previous user research also indicates that students want to understand the process behind solving their homework problems.
💼 Business Goal
Increase the percentage of subscribers who ask follow-up questions and the number of follow-ups asked by each subscriber.
Strategies & early ideas
More guided and discoverable experiences
I spearheaded the ideation process to enhance user engagement with the follow-up functionality, focusing on two key areas:
- Increasing awareness: Ensuring users discover the feature, as it could easily be overlooked due to their established usage patterns on Chegg.
- Reducing friction: Assisting users in overcoming the learning curve associated with using the feature.
While my teammate worked on onboarding solutions, I concentrated on returning users. My design solutions were guided by these strategies:
Prominent
Enhancing proximity and discoverability of follow-up entry points.
Effortless
Minimizing user effort required to engage with follow-up functionality.
Show vs. Tell
Guiding users through the learning curve by showing potential questions they could ask the AI.
To help the team strategize and prioritize, I categorized my design ideas into three types:
Type I: Suggestions based on solution types
Suggestions tailored to problem types—Facts, Concepts, Procedures, Processes, or Principles—with guidance from learning science. For example (a rough sketch of this mapping follows the examples below):
- For a factual question like “What is the formula for calculating kinetic energy?”, the system might suggest asking, “Can you give me an example?”
- For a process question like “How does the water cycle work?”, it might recommend, “Can you explain in more detail?”
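To make this concrete, here is a minimal sketch of how such a type-to-suggestion mapping could be represented, assuming a simple static lookup keyed by solution type. The type names, suggestion copy, and function name are illustrative placeholders, not Chegg's implementation.

```typescript
// Illustrative sketch only: a static lookup from solution type to suggested
// follow-up prompts. Type names and suggestion copy are placeholders.
type SolutionType = "fact" | "concept" | "procedure" | "process" | "principle";

const FOLLOW_UP_SUGGESTIONS: Record<SolutionType, string[]> = {
  fact: ["Can you give me an example?", "Why is this the case?"],
  concept: ["How does this relate to similar concepts?", "Can you give me an example?"],
  procedure: ["Can you walk me through each step?", "What happens if a step is skipped?"],
  process: ["Can you explain in more detail?", "What triggers the next stage?"],
  principle: ["When does this principle not apply?", "Can you show a worked example?"],
};

// Given the classified type of a solution, return up to three suggestions.
function suggestFollowUps(type: SolutionType, limit = 3): string[] {
  return FOLLOW_UP_SUGGESTIONS[type].slice(0, limit);
}
```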
Type II: Suggestions based on solution content
Suggestions dynamically generated by the LLM based on the solution's content with guidance from learning science, providing more specific and contextual prompts. For example:
- If the homework question is “At the end of Jan, Mineral Labs had an inventory of 735 units, which cost $8 per unit to produce. During Feb the company produced 700 units at a cost of $12 per unit. If the firm sold 1,100 units in Feb, what was the cost of goods sold?”, the system might suggest asking “How did you determine which units were sold first and why?”
Type III: Suggestions based on key terms
Suggestions derived from key terms in the solution, selected by the LLM with guidance from learning science. For example:
- If the Q&A is about the steps of metabolism, and the solution states “The first step in metabolism is the ingestion of food. This is where you consume food which is then broken down into smaller molecules during digestion,” the system might highlight “molecules” in the solution and prompt users to ask about the term to learn more.
While content-specific and key-term-based suggestions provide more personalized and dynamic learning experiences, we prioritized the first type because it offered a balanced trade-off between personalization and delivery speed. This approach allowed us to launch the feature sooner while still delivering meaningful value. We planned to explore the more advanced types of suggestions in future iterations to further enhance the learning experience.
Design iteration
Follow-up suggestions based on solution types
Where should the feature be placed?
I considered three potential placement options for the feature: in the right rail (A), inline under each step (B), or grouped with the chat input box (C). After evaluating these options, I chose the right rail placement (A). It was more prominent and allowed for easier layout of multiple prompts compared to the other options.
Refining the interaction
Once the placement was determined, I iterated on the interaction design to ensure the feature was both intuitive and easily discoverable. To enhance discoverability, I added a visual affordance to the right side of each step in the solution. When users hover over it, follow-up suggestions appear seamlessly.
To showcase the end-to-end flow to stakeholders, I built an interactive prototype that integrates both onboarding tooltips for follow-ups and dynamic follow-up suggestions.
Project pivot
We have more time now — what's next?
Navigating ambiguity and adapting quickly were crucial as priorities shifted in response to the fast-evolving AI landscape. When the program's focus changed, the project was temporarily paused but later resumed with an extended timeline. We reassessed our approach and revisited ideas that had been put on hold. The 2x2 matrix I created earlier helped the team pivot quickly to a more personalized option, which offered the potential for a greater impact.
Design iteration
We need a better information hierarchy
How to maintain clear information hierarchy as new features and content are added?
Adding new functions and content without considering the overall user experience can quickly make a product overwhelming. A key challenge I faced was structuring the layout to align with users' mental models.
Below homework solutions, we already suggested actions like “Let’s practice,” “Give me an example,” and “Send to expert.” However, students requested more solutions so they could compare options and find the best fit. My task was to integrate similar solutions from the archive to support this primary goal; without meeting it, students were less likely to engage with other features like follow-up questions.
Balancing these additions while avoiding a cluttered interface required thoughtful organization to maintain a clear and intuitive experience.
The previous solution layout had a disorganized information hierarchy, with functions scattered throughout the interface.
What’s the right information hierarchy? Let’s talk to users to find out.
What should the information hierarchy prioritize? PMs initially argued that follow-up questions should take priority to evaluate the feature’s performance, with actions like reviewing similar solutions or sending questions to experts ranked lower. However, user feedback indicated these “secondary” actions were crucial to students. I believed testing assumptions shouldn’t compromise usability.
To resolve this, I conducted user research to understand how students approach finding homework help on Chegg. I presented the current design and observed how students identified the best solutions. Here are the key findings:
- When dissatisfied with the initial Q&A, 75% of students compare similar Q&As, and if that doesn’t resolve their issue, 42% ask for expert help.
- They might ask follow-up questions for clarification after identifying the best Q&A.
I redesigned the information hierarchy, with follow-up suggestions placed at the bottom. From both a UX and a learning science perspective, this layout best matches the user's mental model and fosters effective learning.
That said, my PM and I agreed that making the new feature more visually prominent was crucial for better evaluating its performance in the A/B test. Ultimately, the team decided to test a second variant, placing the follow-up suggestions as a sticky element on the right rail for improved discoverability.
Design iteration
Refining the details
Visual treatment for feature grouping
When presenting my design iteration to the UX team, one key suggestion I received was to group the follow-up suggestions with the existing Next Best Actions (e.g., "Let’s practice"), as both represent subsequent actions users can take. I went through multiple rounds of iteration, continuously evaluating the trade-offs of each option.
Choice of icons
I explored different icon options to find the one most relevant to the follow-up suggestion feature.
Spark icon: This icon is commonly used to represent AI. However, one of our content strategies was to focus on the value proposition of the feature—learning through chatting with AI—rather than overemphasizing the technology itself.
Arrow icon: While it suggests the action of "send," it doesn't clearly communicate the feature’s value proposition.
Chat icon: This icon directly relates to the value of the feature and is easily understood, conveying the concept of "conversation".
Since our design system only included a plain chat icon at the time, I designed a more illustrative chat icon to better align with the feature’s purpose and enhance the visual appeal.
Backend
Designing with generative AI
Prompt engineering
I worked closely with the learning science team and LLM engineers to refine the prompt strategy. Initially, we experimented with offering a set of predefined options (e.g., help me memorize, create an analogy, give an example) and let the LLM choose the most appropriate one for the given question. However, prompt engineering tests showed that the more specific the instructions, the less relevant and lower-quality the output became, because the LLM tried to follow them too rigidly. As a result, we shifted to a more generic prompt, as follows:
Your goal is to provide suggestions of follow-up questions that a student might ask regarding a question and response you are helping them with: [original question and response context]
Write 3 follow-up questions, no longer than one sentence, tailored and relevant to the original question the student asked and the content of the answer they received. These follow-up questions should be from the perspective of a student, should be engaging and inviting for a student to ask, and span a variety of types of follow-up questions.
I estimated the typical length of follow-up questions generated by the LLM and set a character limit to ensure the output wouldn’t overwhelm the user or constrain the quality of the suggestions. The limit was added to the prompt as follows:
Each follow-up question should be brief and no longer than 120 characters.
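To make the pipeline concrete, here is a minimal sketch of how this prompt and the character constraint might be wired together. It assumes the OpenAI Node SDK and a placeholder model name purely for illustration—Chegg's actual vendor, model, and service code are not part of this case study—and the post-processing is a simplified stand-in.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Generic prompt plus the character-limit constraint described above.
const SYSTEM_PROMPT = `Your goal is to provide suggestions of follow-up questions that a student might ask regarding the question and response you are helping them with.
Write 3 follow-up questions, no longer than one sentence, tailored and relevant to the original question the student asked and the content of the answer they received.
These follow-up questions should be from the perspective of a student, should be engaging and inviting for a student to ask, and span a variety of types of follow-up questions.
Each follow-up question should be brief and no longer than 120 characters.`;

export async function generateFollowUps(question: string, answer: string): Promise<string[]> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name for illustration
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: `Original question:\n${question}\n\nAnswer the student received:\n${answer}`,
      },
    ],
  });

  // Simplified post-processing: strip list numbering, enforce the
  // 120-character limit defensively, and keep at most three suggestions.
  const text = completion.choices[0]?.message?.content ?? "";
  return text
    .split("\n")
    .map((line) => line.replace(/^\d+[.)]\s*/, "").trim())
    .filter((line) => line.length > 0 && line.length <= 120)
    .slice(0, 3);
}
```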
Adoption of AI means less control
As we integrated generative AI into the product, we realized that we had less control over the output. I became more aware of potential friction and proactively designed for possible scenarios, such as: What if it takes too long to generate follow-up suggestions? What if an error occurs and the generation fails? I made sure to address these cases within the design to ensure a smooth user experience.
Show the shimmer for the follow-up suggestions before the content is fully loaded. Hide the follow-up suggestion section if an error occurs.
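As a rough illustration of those two fallback states, here is a minimal React sketch; the component name, props, and API endpoint are hypothetical. It renders shimmer placeholders while suggestions load and unmounts the section entirely if the request fails.

```tsx
import { useEffect, useState } from "react";

// Hypothetical component; names and the fetch endpoint are illustrative only.
export function FollowUpSuggestions({ solutionId }: { solutionId: string }) {
  const [suggestions, setSuggestions] = useState<string[] | null>(null);
  const [failed, setFailed] = useState(false);

  useEffect(() => {
    fetch(`/api/solutions/${solutionId}/follow-ups`)
      .then((res) => (res.ok ? res.json() : Promise.reject(res.status)))
      .then((data: string[]) => setSuggestions(data))
      .catch(() => setFailed(true));
  }, [solutionId]);

  // Error state: hide the entire section rather than show a broken block.
  if (failed) return null;

  // Loading state: show shimmer placeholders until the LLM response arrives.
  if (suggestions === null) {
    return (
      <div aria-busy="true">
        {[0, 1, 2].map((i) => (
          <div key={i} className="shimmer-line" />
        ))}
      </div>
    );
  }

  return (
    <ul>
      {suggestions.map((s) => (
        <li key={s}>
          <button type="button">{s}</button>
        </li>
      ))}
    </ul>
  );
}
```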
Final output and result
A/B testing with two variants
The solution provides dynamic, contextually relevant follow-up suggestions to help students engage more meaningfully, encouraging them to move beyond quick answers and achieve deeper learning outcomes. When a student receives a homework solution from Chegg, they will be presented with three dynamic suggestions, generated by the LLM through prompt engineering. Upon selecting a suggestion, students will receive a tailored response from Chegg.
Feature highlights:
- Dynamic, context-aware follow-up suggestions
- Clean information hierarchy that matches the user’s mental model
- Improved discoverability through strategic visual treatment and placement
- Seamless, effortless interaction
Variant A - bottom placement
In variant A, the follow-up suggestions are placed inline at the bottom of the solutions. While this placement might appear less discoverable, it aligns closely with the user’s mental model—they typically consider asking a follow-up only after identifying the right question and solution to their homework problem. This alignment proved crucial, as this variant outperformed the other variant in the A/B test.
Variant B - sticky right rail placement
Variant B features a more prominent placement, designed to ensure the new feature is highly visible and not easily overlooked by users. As users scroll through the screen, the follow-up suggestions remain visible, sticking to the right side of the solution as long as the solution body is in view. While this approach yielded positive results in the A/B testing, its performance was slightly lower compared to the other variant.
Reflection
Bridging user needs and business goals
This project enhanced my business acumen and refined my A/B testing skills. While I previously focused primarily on advocating for user needs in my designs, this experience allowed me to gain a deeper understanding of the business goals and potential impacts behind the task. I incorporated these insights into my design decisions and proactively collaborated with PMs to define success metrics and create an instrumentation plan. By designing two A/B test variants, I successfully aligned design decisions with both user needs and business objectives.