Discovery Research @ Noom

Reducing Friction Without Losing Personalization: Noom's 23-Minute Buyflow Challenge


Overview

Led comprehensive UX research to redesign Noom's pre-signup experience, shortening a 23-minute survey without sacrificing personalization quality. The research directly informed a new buyflow structure that set clearer user expectations and increased conversion rates.

Key Impact: The new buyflow design is currently the baseline at Noom.com, with sequencing and messaging changes grounded in user mental models rather than internal assumptions.

The Challenge: When Personalization Becomes a Barrier

Business Context

Noom's competitive advantage lies in personalized behavior change programs, but its pre-signup survey had grown into a 23-minute ordeal covering psychology, health, habits, and preferences. The length was causing user fatigue and drop-offs, directly hurting the ratio of estimated Lifetime Value to Customer Acquisition Cost (eLTV/CAC).
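
To make the stakes concrete, the arithmetic behind that ratio is sketched below: when survey fatigue depresses signup conversion, the same ad spend buys fewer customers, CAC rises, and eLTV/CAC falls. Every figure in the sketch is hypothetical, not Noom data.

```python
# Illustrative eLTV/CAC arithmetic -- every number here is hypothetical.

def eltv_cac_ratio(eltv: float, cac: float) -> float:
    """Estimated lifetime value earned per dollar of acquisition spend."""
    return eltv / cac

ad_spend = 100_000   # hypothetical monthly acquisition budget ($)
visitors = 50_000    # visitors entering the buyflow
eltv = 180.0         # hypothetical estimated lifetime value per customer ($)

# Compare a healthier conversion rate against one eroded by drop-offs.
for conversion in (0.04, 0.03):
    customers = visitors * conversion
    cac = ad_spend / customers
    print(f"conversion={conversion:.0%}  CAC=${cac:.2f}  "
          f"eLTV/CAC={eltv_cac_ratio(eltv, cac):.2f}")
```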

The Core Problem

The Tension: Noom needed detailed user information for personalization, but the current approach was driving potential customers away before they could even see their program.

My Challenge: Determine what information users actually want to provide versus what the business thinks it needs, and find the optimal way to collect it.

My Approach: Challenging Internal Assumptions

Reframing the Research Strategy

While stakeholders initially wanted to optimize the existing survey, I pushed to step back and question fundamental assumptions:

Instead of: "How do we make our current questions faster to answer?"
I asked: "What questions do users actually expect and value in a weight loss program signup?"

Instead of: "How do we collect all the data we want?"
I asked: "What information do users need to feel confident this program will work for them?"

Why I Chose Card Sorting Over Traditional Usability Testing

I recognized this wasn't just a UX problem—it was a mental model problem. Users had preconceived notions about weight loss programs, and our survey needed to align with their expectations, not fight against them.

My Methodology:

  • Card sorting exercise with 12 participants interested in weight loss

  • Three categories: Must have, Nice to have, Not necessary

  • Sequencing exercise to understand user mental models

  • Deliberate bias reduction: we didn't reveal we worked for Noom, so responses would be authentic
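
As an illustration of how responses from an exercise like this can be aggregated, the sketch below tallies hypothetical card placements into a single ranking. The example cards, the scoring weights, and the data structure are my assumptions, not the study's actual analysis.

```python
from collections import defaultdict

# Hypothetical card-sort responses: one dict per participant mapping
# card -> chosen category. Categories match the study's three buckets.
WEIGHTS = {"Must have": 2, "Nice to have": 1, "Not necessary": 0}

responses = [
    {"Weekly schedule": "Must have", "Marital status": "Not necessary",
     "Food preferences": "Nice to have"},
    {"Weekly schedule": "Must have", "Marital status": "Not necessary",
     "Food preferences": "Must have"},
    # ...one entry per participant (the study had n=12)
]

# Aggregate perceived value per card across participants.
scores = defaultdict(int)
for response in responses:
    for card, category in response.items():
        scores[card] += WEIGHTS[category]

# Rank cards by aggregate perceived value, highest first.
for card, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{card}: {score}")
```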

Critical Insights: What Users Actually Want

1. Users Have Strong Mental Models About Weight Loss

Key Finding: Participants brought their own beliefs about what's necessary for weight loss success, which didn't always match our business assumptions.

Why This Mattered: If our questions didn't align with user expectations, we were creating friction AND setting wrong expectations about what the program would deliver.

Example: Users expected extensive schedule questions but found marital status irrelevant, yet our survey emphasized demographics over lifestyle logistics.

2. Context Drives Question Relevance

User Quote: "There's no point asking about nutrition and exercise if you don't know when I'm actually available to do these things."

The Insight: Users viewed schedule as the foundation that makes all other personalization possible. Without understanding their time constraints, other questions felt superficial.

Impact: This led to restructuring the entire question hierarchy around lifestyle context first.

3. Trust Must Be Earned Before Asking Personal Questions

Critical Discovery: Users expected psychology and behavior questions to come last, not first, because those questions require the highest level of trust.

Their Logic: "Show me you understand my practical needs first, then I'll share my emotional relationship with food."

My Recommendation: Sequence questions to build trust progressively, explaining the "why" behind sensitive questions.

[Image: Sequencing pattern]
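
Since the sequencing diagram itself isn't reproduced here, the sketch below captures the recommended pattern in code: practical context first, the most personal questions last, and every stage paired with the "why" shown to users. Stage names and copy are illustrative, assembled from the findings above rather than taken from Noom's production flow.

```python
# Illustrative question sequence assembled from the research findings --
# stage names and "why" copy are hypothetical, not Noom's production flow.
SEQUENCE = [
    ("Schedule & lifestyle", "We build the plan around when you're actually free."),
    ("Food & activity preferences", "So suggestions fit what you already enjoy."),
    ("Goals & motivation", "To tailor pacing and milestones to you."),
    ("Psychology & habits", "Asked last: these are the most personal questions, "
                            "and we want to earn your trust first."),
]

for step, (stage, why) in enumerate(SEQUENCE, start=1):
    print(f"{step}. {stage}\n   Why we ask: {why}")
```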

Navigating Stakeholder Resistance

The Pushback Moment

When I recommended removing several "personalization" questions that users deemed unnecessary, the product team worried about losing competitive advantage.

My advocacy: Personalization that users don't value isn't actually personalization; it's just data collection. If users don't see the connection between a question and their results, it undermines trust in the entire system.

Cross-Functional Impact

With Product Management: Helped prioritize questions based on user value rather than internal appetite for data

With Design: Collaborated on new information architecture that matched user mental models

With Marketing: Provided insights on messaging that would set appropriate expectations

With Data Science: Worked to identify which data points were truly necessary for personalization versus "nice to have"

Systems Thinking: Considering the Bigger Picture

The Ripple Effects I Considered

User Journey Impact: How would changes to signup affect in-app experience and retention?

Business Model Impact: How might reducing questions affect our ability to create truly personalized programs?

Competitive Positioning: How does our signup experience compare to other programs users might consider?

Team Dynamics: How could a better-organized buyflow reduce errors introduced by rapid experimentation and improve cross-team collaboration?

My Framework for Evaluation

  1. User Value: Does this question help users feel confident about the program?

  2. Personalization ROI: Does the answer meaningfully change their experience?

  3. Trust Building: Does this question build or erode confidence in our expertise?

  4. Sequence Logic: Does this follow users' natural mental progression?
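
One way to operationalize this framework is as a simple audit checklist, sketched below. The field names and the keep-or-cut rule are illustrative assumptions, not a formal Noom tool.

```python
from dataclasses import dataclass

@dataclass
class QuestionAudit:
    """Audit record for one buyflow question -- fields mirror the framework."""
    text: str
    user_value: bool           # 1. helps users feel confident in the program
    personalization_roi: bool  # 2. answer meaningfully changes their experience
    builds_trust: bool         # 3. builds rather than erodes confidence
    fits_sequence: bool        # 4. follows users' natural mental progression

    def keep(self) -> bool:
        # Illustrative rule: a question must clear every criterion to stay.
        return all((self.user_value, self.personalization_roi,
                    self.builds_trust, self.fits_sequence))

audit = [
    QuestionAudit("When during the week are you free?", True, True, True, True),
    QuestionAudit("What is your marital status?", False, False, False, True),
]
for q in audit:
    print(f"{'KEEP' if q.keep() else 'CUT '}: {q.text}")
```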

Impact & Outcomes

Immediate Results

  • New buyflow structure implemented based on user-preferred sequencing

  • Enhanced question explanations that clarify purpose and impact

  • Empathetic copy improvements that create psychological safety for personal questions

  • Progress indicators that set clear expectations about survey length and next steps

Strategic Impact

  • Shifted company mindset from "data collection" to "user value creation"

  • Established user-centered prioritization framework for future buyflow optimization

  • Improved cross-team collaboration by providing shared understanding of user needs

  • Created foundation for ongoing A/B testing with larger sample sizes

What I Learned: Research as Change Management

1. Challenging Sacred Cows

Sometimes the most valuable research challenges fundamental business assumptions. The hardest part wasn't running the study; it was helping stakeholders see that our "personalization" strategy might be working against us.

2. Mental Models Trump Business Logic

Users don't care about our internal data needs. They have their own logic about what questions make sense, and fighting that logic creates unnecessary friction.

3. Sequencing Matters as Much as Content

The order of questions communicates as much as the questions themselves. By putting schedule questions first, we signaled that we understood their real constraints.

4. Trust Is Built Through Transparency

Users were more willing to answer personal questions when we explained why we needed the information and how it would benefit them.

Validation Through Implementation

Real-World Testing: The research recommendations weren't just theoretical; they were implemented as the baseline experience on Noom.com, demonstrating stakeholder confidence in the findings.

Ongoing Optimization: The new structure provides a foundation for continued A/B testing with larger sample sizes, allowing us to refine the approach based on behavioral data while maintaining the user-centered foundation.
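
For that ongoing testing, a back-of-envelope two-proportion power calculation shows why larger samples matter when conversion lifts are small; the baseline rate and minimum detectable lift below are made-up numbers for illustration.

```python
# Back-of-envelope sample size per arm for a two-proportion A/B test.
# Baseline conversion and minimum detectable lift below are hypothetical.

def sample_size_per_arm(p1: float, p2: float,
                        z_alpha: float = 1.96,       # two-sided alpha = 0.05
                        z_beta: float = 0.84) -> int:  # power ~= 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2
    return int(n) + 1

baseline = 0.030   # hypothetical baseline buyflow conversion
target = 0.033     # smallest lift worth detecting (+10% relative)
print(f"~{sample_size_per_arm(baseline, target):,} visitors per variant")
```

At a 3% baseline, detecting even a 10% relative lift takes roughly 50,000 visitors per variant, which is why a stable baseline buyflow matters as the control for trustworthy experiments.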

Future Considerations

Personalization Evolution: How can we collect user data progressively throughout the app experience rather than front-loading everything in signup?

Competitive Differentiation: How might our streamlined approach influence user expectations across the industry?

Measurement Strategy: What metrics best capture the relationship between signup experience and long-term program success?
