Overview

Financial Advisor Feedback Intelligence Analysis

Transforming scattered feedback into actionable product insights

Project Duration

Q1 2023 – Q1 2025

My Role

As Lead UX Researcher, I worked to transform scattered user feedback into actionable intelligence. My responsibilities included defining research objectives, designing the dashboard's information architecture, conducting interviews with product managers, designers, executives, and other stakeholders, and performing quantitative analysis. I translated these findings into actionable requirements, directly enabling data-informed product decisions and significant efficiency gains for the team.

Team

UX Researcher (Me)
Product Manager
Senior Product Manager
Data Scientist
Director of Product
UXR Manager
VP of Product

Project Overview

This quantitative UX research initiative addressed fragmented voice-of-customer data by delivering a feedback intelligence dashboard. Across 18.5K+ submissions (2020–2025), we combined user interviews, textual analysis, descriptive statistics, time series analysis, user segmentation, ML-driven sentiment analysis, and CSAT surveys. The dashboard centralized insight generation and operationalized prioritization: an 83% reduction in manual feedback processing, a 47% improvement in feature prioritization accuracy (segmentation plus 19K votes), 28% faster feature time-to-market, 32% shorter development cycles, CSAT stabilized around 88–90, and a 27% uplift in overall user sentiment.

Technologies & Methods

User Interviews
Textual Analysis
Descriptive Statistics
Time Series Analysis
User Segmentation
Sentiment Analysis
CSAT Surveys

Tools

Power BI
Microsoft SQL Studio
Snowflake
Python
Hugging Face
Miro

The Challenge: From Feedback Chaos to Clarity

Advisor360 faced a significant operational hurdle: user feedback, a goldmine of insights, was scattered across numerous channels, including emails, support tickets, survey responses, and various internal systems. This disorganization made it nearly impossible to get a clear, holistic view of emerging needs and pain points, user sentiment, and satisfaction.

Overwhelming & Siloed Feedback

How could we unify fragmented user voice?


Feedback was trapped in disparate systems (18.5K+ items), preventing a unified view and leading to missed insights and duplicated effort in manual collation.

Unquantified Sentiment & Trends

What did users *really* feel, and how was it changing?


Without robust analysis, it was difficult to quantify overall user sentiment and satisfaction, track their evolution over time, or identify statistically significant trends versus anecdotal noise.

Reactive & Slow Insight Generation

Could we move from slow manual review to proactive insights?


Product Managers spent excessive time (avg. 12hrs/week) manually sifting through feedback, delaying insight generation and leading to reactive rather than proactive product decisions.

The High Cost of Disconnected Feedback

  • Significant team time consumed by manual feedback processing.
  • Delays in product decisions due to lack of timely feedback insights.
  • Missed opportunities to proactively address user pain points or capitalize on positive trends.
  • Inability to accurately gauge user satisfaction or the impact of product changes.
  • Product roadmap decisions were often based on assumptions or the loudest voices, not comprehensive data, costing effort and resources.

Our Research Aimed To:

  • Create a centralized, single source of truth for all user feedback.
  • Develop an interactive dashboard for Stakeholders to easily explore and understand feedback data.
  • Drastically reduce manual effort and empower data-driven prioritization.
  • Implement automated feedback sentiment analysis and trend identification.
  • Provide an overall CSAT metric for Stakeholders to track the impact of their product decisions and to benchmark against the CSAT of other companies and of our other internal products and services.
"We're drowning in data, but we cant comprehend what the users are saying. We need to be able to understand the data and make decisions based on it."
VP of Product

Orchestrating User Feedback into Actionable Intelligence

The core challenge was to transform this cacophony of fragmented feedback into a harmonized, intelligent system. This research initiative set out to build a solution that could not only collect and organize feedback but also analyze and visualize it, turning raw user comments into clear, actionable insights for strategic product development.

Key Research Questions

To address the challenge of fragmented feedback and build an effective intelligence dashboard, our research was guided by key questions aimed at understanding current processes, defining actionable insights, and leveraging technology.

Understanding the Current Feedback Landscape

To map existing feedback mechanisms and identify critical gaps and pain points for Stakeholders.

  • Q1:

    How are stakeholders currently collecting, processing, and utilizing user feedback across various channels?

  • Q2:

    What critical information or nuances are being missed, diluted, or misinterpreted in the current manual feedback process?

  • Q3:

    What are the primary frustrations and inefficiencies stakeholders experience when trying to synthesize and act on user feedback?

Defining Actionable Insights & Dashboard Needs

To determine the most valuable metrics and visualizations for a dashboard that empowers data-driven decisions.

  • Q1:

    Which specific feedback metrics (e.g., sentiment scores, trend velocity, CSAT levels) would provide the most actionable insights for product strategy and UX improvements?

  • Q2:

    How can feedback patterns, sentiment distributions, and CSAT movements be most effectively visualized to help PMs quickly identify priority areas and emerging trends?

  • Q3:

    What level of data granularity, filtering, and drill-down capability do stakeholders require to confidently analyze feedback related to specific product features or user segments?

Leveraging Technology for Enhanced Analysis

To explore how advanced technologies like Machine Learning could automate and elevate the feedback analysis process.

  • Q1:

    How can machine learning (e.g., NLP, sentiment analysis, CSAT trend modeling) be employed to automate the categorization, tagging, and summarization of large feedback volumes and quantify satisfaction impact over time?

  • Q2:

    What are the accuracy and reliability considerations for ML-driven feedback analysis, and how can these be validated?

  • Q3:

    Beyond automation, how can AI/ML surface deeper, predictive insights from feedback that might not be apparent through manual review alone?

From Questions to Methods: A Strategic Approach

These focused inquiries were crucial in shaping our research strategy. They ensured that the methods chosen—like Time Series Analysis for trends, Sentiment Analysis for user feelings, and CSAT benchmarking—were precisely targeted to deliver the comprehensive insights needed for the feedback intelligence dashboard and ultimately address the core challenge.

Research Methods

To transform scattered feedback into actionable insights, we employed these specialized quantitative techniques:

Time Series Analysis

Identifying patterns over time

Analysis of 18.5K+ feedback submissions from 2020 to 2025, showing peak volumes of 2.7K in 2023.

What this means: We analyzed how user feedback volume changed over several years. This helped us spot trends, like peak feedback times (e.g., after new releases), enabling us to anticipate loads and understand the impact of product updates.

Feedback Volume Trends (18.5K+ items)
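To make the method concrete, here is a minimal sketch of the kind of aggregation behind the volume-trend view, done in pandas; the file and column names are hypothetical placeholders, not the project's actual schema.

```python
import pandas as pd

# Hypothetical export of the unified feedback dataset; the file and column
# names are assumptions, not the project's actual schema.
df = pd.read_csv("feedback_unified.csv", parse_dates=["created_at"])

# Submissions per quarter surface volume trends (e.g., spikes after releases).
quarterly = df.groupby(df["created_at"].dt.to_period("Q")).size()

# Yearly totals make peak years (like 2023) easy to spot.
yearly = df.groupby(df["created_at"].dt.year).size()
print(yearly)
```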

User Segmentation

Categorizing feedback by user & topic

Categorization of feedback by user type (Advisors, BAA, Fee Only Advisors) and category (Client Services: 7.3K, Practice: 6.3K, Technology: 3.0K).

What this means: We grouped feedback based on who sent it and its subject. This showed if specific user groups had particular types of feedback more often, helping tailor solutions.

CS: 7.3K · Practice: 6.3K · Tech: 3.0K

Feedback by Category Segments
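A rough sketch of the segmentation step as a pandas cross-tabulation; the column names (`user_type`, `category`) are assumptions standing in for the real schema.

```python
import pandas as pd

# Assumed columns: "user_type" (Advisor, BAA, Fee Only Advisor) and "category"
# (Client Services, Practice, Technology); names are placeholders.
df = pd.read_csv("feedback_unified.csv")

# Cross-tabulate who is saying what: rows are user segments, columns are categories.
segments = pd.crosstab(df["user_type"], df["category"])
print(segments)

# Row-normalized shares reveal whether a segment over-indexes on a category.
print(segments.div(segments.sum(axis=1), axis=0).round(2))
```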

CSAT Surveys

Measuring satisfaction with structured surveys

Customer Satisfaction (CSAT) surveys used to establish benchmarks, monitor weekly shifts, and contextualize changes with product releases.

What this means: We monitored CSAT to quantify how changes affected user satisfaction and prioritized work that moved the needle the most.

Mean 88 · Median 90 · Low 79

CSAT survey benchmarks & weekly shifts
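The benchmark and weekly-shift numbers reduce to a few lines of pandas; this is a sketch under an assumed export format (a `csat_responses.csv` with a `score` column), not the production pipeline.

```python
import pandas as pd

# Hypothetical CSAT export: one row per response; column names are assumptions.
csat = pd.read_csv("csat_responses.csv", parse_dates=["submitted_at"])

# Benchmarks: central tendency of the satisfaction score.
print("mean:", csat["score"].mean(), "| median:", csat["score"].median())

# Weekly rollup to monitor shifts and line them up with product release dates.
weekly = csat.set_index("submitted_at")["score"].resample("W").mean()
print(weekly.tail())
```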

Sentiment Analysis

Gauging emotion in feedback

Implementation of machine learning algorithms to categorize feedback on a five-point scale: Very Negative, Negative, Neutral, Positive, and Very Positive.

What this means: We used AI to automatically determine the emotion in each piece of feedback. This gave a quick overview of user satisfaction and helped prioritize issues causing strong negative reactions.

Negative: 15% · Neutral: 79% · Positive: 6%

Sentiment Score Distribution
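A minimal sketch of the ML step with a Hugging Face pipeline. The model named here is an assumption (the case study does not document the actual model); it was picked because its 1–5 star output maps cleanly onto the five buckets above.

```python
from transformers import pipeline

# Assumed model, not necessarily the one used in the project; it rates text
# from "1 star" to "5 stars", which maps onto the five sentiment buckets.
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

STARS_TO_BUCKET = {
    "1 star": "Very Negative", "2 stars": "Negative", "3 stars": "Neutral",
    "4 stars": "Positive", "5 stars": "Very Positive",
}

feedback = ["The new reporting screen is confusing to navigate."]
for item, result in zip(feedback, classifier(feedback)):
    print(STARS_TO_BUCKET[result["label"]], f"({result['score']:.2f})", "-", item)
```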

Foundation for Process: These analytical methods were key to our research process. They enabled structured data discovery (Time Series, Segmentation), drove in-depth analysis (Sentiment, CSAT), and directly informed the actionable insights feeding into the dashboard design.

Research Process

From raw feedback to actionable insights: our data-driven approach

1. Challenge Identification → 2. Data Source Mapping → 3. SQL Querying & Cleaning → 4. Data Visualization → 5. Dashboard Implementation

1. Challenge Identification

Identified the key challenge: Stakeholders were struggling to efficiently process large volumes of user feedback spread across multiple channels.

2. Data Source Mapping

Located and documented all feedback sources including feedback tickets, support tickets, idea tickets, user interviews, and internal feedback channels.

3. SQL Querying & Cleaning

Extracted raw feedback data using SQL queries, then cleaned and normalized the data to create a unified dataset for analysis.
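A sketch of what this step might look like against Snowflake from Python; the credentials, table, and column names below are placeholders, not the project's actual configuration.

```python
import pandas as pd
import snowflake.connector

# Placeholder credentials; the real connection details are not part of this study.
conn = snowflake.connector.connect(
    user="<user>", password="<password>", account="<account>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)

# Pull feedback records into one frame (table and columns are assumptions).
query = """
    SELECT id, source, created_at, user_type, category, body
    FROM feedback_items
    WHERE created_at >= '2020-01-01'
"""
df = pd.read_sql(query, conn)

# Basic cleaning: strip whitespace, drop empty bodies and duplicates, normalize casing.
df["body"] = df["body"].fillna("").str.strip()
df = df[df["body"].astype(bool)].drop_duplicates(subset=["source", "body"])
df["category"] = df["category"].str.title()
```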

4. Data Visualization

Developed visualizations to identify patterns, sentiment trends, and key insights from the aggregated feedback data.
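The production visuals were built in Power BI; as a rough Python stand-in, a few lines of matplotlib reproduce one of the simpler charts, using the sentiment shares reported in the findings.

```python
import matplotlib.pyplot as plt

# Sentiment distribution from the findings; the real chart lived in Power BI.
labels = ["Negative", "Neutral", "Positive"]
shares = [15, 79, 6]  # percent of 18.5K+ items

plt.bar(labels, shares)
plt.ylabel("Share of feedback (%)")
plt.title("Sentiment Score Distribution")
plt.show()
```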

5. Dashboard Implementation

Delivered a comprehensive feedback management dashboard that Stakeholders could use in their daily workflow to track, categorize, and act on user feedback.

Key Findings

Our analysis of 18.5K+ feedback submissions revealed these critical insights about the advisor experience:

  • User Experience Issues: 43% of items (vs. other issue types)
  • Feedback Sentiment Gap: −8pp (more negative feedback than positive)
  • CSAT Benchmarks: mean of 82 (a good score, but not industry-leading)
  • Efficiency Gain: 83% faster feedback processing

User Experience Issues

43% of feedback pointed to experience issues (navigation, clarity, task flows) versus other issues. Focusing fixes here contributed to a measurable lift in overall sentiment.

UX 43% · Other 57%

Feedback Sentiment Gap

Negative feedback outweighed positive by 8 percentage points, with Neutral dominating. This gap narrowed after targeted UX fixes.

Neg 14% • Neutral 79% • Pos 7% (−8pp gap)

CSAT Benchmarks

Initial CSAT sat at a mean of 82 and a median of 85. Weekly monitoring and release guardrails reduced post‑release dips and improved stability.

Mean 82 · Median 85

Research Efficiency Gains

Our automated feedback analysis process reduced processing time from 12 hours to 2 hours weekly while improving categorization accuracy by 87%.

Before automation: ~12 hrs/week of manual review.
After automation: ~2 hrs/week of automated processing plus QA.

Key Insight

By implementing our advanced feedback analysis system, the product team was able to identify and prioritize improvements that led to a 27% increase in positive advisor sentiment and a more targeted product roadmap.

Impact & Outcomes

How our feedback analysis drove concrete engineering decisions, design changes, and measurable business results

1

Research Findings

43% of feedback concerned user experience issues

User segmentation revealed distinct needs across advisor types

Unclear goals and scattered requests caused extra engineering work and slowdowns

2

Implementation

Experience-first fixes: navigation, clarity, task flows, customization

Segmentation-driven initiatives, redesigns, and roadmaps

Centralized feedback in a dashboard, reducing miscommunication, handoffs, and rework

3

User/Business Impact

Experience issues reduced; positive feedback up 27%

Feature prioritization accuracy improved by 47% (segmentation)

32% reduction in development cycle duration, measured across engineering teams actively using the dashboard

Time Efficiency

~10 hrs/week saved

By reducing workflow friction, each product manager gained back more than a full workday each week to focus on higher-value product work.

Operations Impact

28% faster time-to-market

Streamlined feedback processing improved confidence and cut the average concept-to-ship timeline from ~8 to ~6 weeks.

CSAT Stabilization

88–90 mean range

After actively analyzing feedback and addressing the root causes of dissatisfaction, the signal stabilized.

Impact on Product Roadmap

Before Research

  • Fragmented requests & unclear criteria caused rework and delays

  • Prioritization driven by anecdotes; idea votes scattered and inaccessible; user/product segmentation unused

  • Manual collation and an unclear picture of the product increased decision latency and delayed releases

After Research

  • Centralized backlog + structured requirements + weekly triage reduced development cycles by 32%

  • Segmentation-driven scoring + 19K‑vote ideas dashboard (+47% prioritization accuracy)

  • Automated feedback data collection + weekly triage cut decision latency; 28% faster time‑to‑market

35% increase in roadmap efficiency

Visuals

Client Analysis Dashboard

Interactive time series visualizations tracking feedback volume across quarters from 2020-2025

Total of 18.5K+ feedback submissions with detailed client-based segmentation analysis

Breakdown showing 83% of feedback (~15.3K items) from one segment and 17% (~3.2K items) from another


For more information, feel free to contact me.
