Overview

Financial Advisor Feedback Intelligence Analysis

Transforming scattered feedback into actionable product insights

Project Duration

Q1 2023 - Q1 2025

My Role

As Lead UX Researcher, I drove the effort to transform scattered user feedback into actionable intelligence. My responsibilities included defining research objectives, designing the dashboard's information architecture, conducting user interviews with product managers, and performing quantitative analysis (time series, sentiment, clustering). I translated these findings into concrete requirements, directly enabling data-informed product decisions and significant efficiency gains for PMs.

Team

UX Researcher (Me)
Product Manager
Senior Product Manager
Data Scientist
Director of Product
UXR Manager
VP of Product

Project Overview

This quantitative UX research initiative addressed the challenge of fragmented user feedback by developing an intelligence dashboard. Analyzing 15,395 feedback items (spanning 2020-2025), we employed time series analysis, user segmentation, ML-driven sentiment analysis (e.g., 14% Negative, 79% Neutral), and NLP-based feature request clustering (1,673 ideas carrying 19K votes). The resulting dashboard provided a centralized, quantifiable view of user sentiment and priorities, leading to an 83% reduction in PMs' manual feedback processing time, a 45% improvement in feature prioritization accuracy, and a 25% uplift in CSAT for targeted UX improvements.

Technologies & Methods

Time Series Analysis, User Segmentation, Sentiment Analysis, Feature Request Clustering

Tools

Power BI, Microsoft SQL Studio, Snowflake, Python, Hugging Face, Miro

The Challenge: From Feedback Chaos to Clarity

Advisor360 faced a significant operational hurdle: user feedback, a goldmine of insights, was scattered across numerous channels—emails, support tickets, survey responses, and various internal systems. This disorganization made it nearly impossible to get a clear, holistic view of user sentiment and emerging needs.

Overwhelming & Siloed Feedback

How could we unify the fragmented user voice?

Feedback was trapped in disparate systems (15K+ items), preventing a unified view and leading to missed insights and duplicated effort in manual collation.

Unquantified Sentiment & Trends

What did users *really* feel, and how was it changing?

Without robust analysis, it was difficult to quantify overall user sentiment, track its evolution over time, or identify statistically significant trends versus anecdotal noise.

Reactive & Slow Insight Generation

Could we move from slow manual review to proactive insights?

Product Managers spent excessive time (avg. 12 hrs/week) manually sifting through feedback, delaying insight generation and leading to reactive rather than proactive product decisions.

The High Cost of Disconnected Feedback

  • Inability to accurately gauge user satisfaction or the impact of product changes.
  • Product roadmap decisions were often based on assumptions or the loudest voices, not comprehensive data.
  • Missed opportunities to proactively address user pain points or capitalize on positive trends.
  • Significant PM and UXR time consumed by manual feedback processing (83% of which was later automated).

Our Research Aimed To:

  • Create a centralized, single source of truth for all user feedback.
  • Implement automated sentiment analysis and trend identification.
  • Develop an interactive dashboard for PMs to easily explore and understand feedback data.
  • Drastically reduce manual effort and empower data-driven prioritization.
"We're drowning in data but starving for wisdom. We know users are talking, but we can't hear the symphony for the noise of a thousand different instruments."
Director of Product

Orchestrating User Feedback into Actionable Intelligence

The core challenge was to transform this cacophony of fragmented feedback into a harmonized, intelligent system. This research initiative set out to build a solution that could not only collect and organize feedback but also analyze and visualize it, turning raw user comments into clear, actionable insights for strategic product development.

Key Research Questions

To address the challenge of fragmented feedback and build an effective intelligence dashboard, our research was guided by key questions aimed at understanding current processes, defining actionable insights, and leveraging technology.

Understanding the Current Feedback Landscape

To map existing feedback mechanisms and identify critical gaps and pain points for Product Managers.

  • Q1: How are product managers currently collecting, processing, and utilizing user feedback across various channels?
  • Q2: What critical information or nuances are being missed, diluted, or misinterpreted in the current manual feedback process?
  • Q3: What are the primary frustrations and inefficiencies PMs experience when trying to synthesize and act on user feedback?

Defining Actionable Insights & Dashboard Needs

To determine the most valuable metrics and visualizations for a dashboard that empowers data-driven decisions.

  • Q1: Which specific feedback metrics (e.g., sentiment scores, trend velocity, feature request volume) would provide the most actionable insights for product strategy and UX improvements?
  • Q2: How can feedback patterns, sentiment distributions, and feature request clusters be most effectively visualized to help PMs quickly identify priority areas and emerging trends?
  • Q3: What level of data granularity, filtering, and drill-down capability do PMs require to confidently analyze feedback related to specific product features or user segments?

Leveraging Technology for Enhanced Analysis

To explore how advanced technologies like Machine Learning could automate and elevate the feedback analysis process.

  • Q1: How can machine learning (e.g., NLP, sentiment analysis, clustering algorithms) be employed to automate the categorization, tagging, and summarization of large feedback volumes?
  • Q2: What are the accuracy and reliability considerations for ML-driven feedback analysis, and how can these be validated? (A validation sketch follows this list.)
  • Q3: Beyond automation, how can AI/ML surface deeper, predictive insights from feedback that might not be apparent through manual review alone?
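One way to ground that validation question is to compare model output against a hand-coded sample. A minimal sketch, assuming a hypothetical validation_sample.csv with human_label and model_label columns (not an actual project artifact):

```python
# Minimal sketch: validating ML sentiment labels against a hand-coded sample.
# validation_sample.csv and its columns (human_label, model_label) are
# hypothetical; any human-labeled sample would work.
import pandas as pd
from sklearn.metrics import classification_report, cohen_kappa_score

sample = pd.read_csv("validation_sample.csv")

# Per-class precision/recall shows where the model drifts from human judgment.
print(classification_report(sample["human_label"], sample["model_label"]))

# Cohen's kappa treats the model as a second rater; values above ~0.6 are
# commonly read as substantial agreement.
print("kappa:", cohen_kappa_score(sample["human_label"], sample["model_label"]))
```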

From Questions to Methods: A Strategic Approach

These focused inquiries were crucial in shaping our research strategy. They ensured that the methods chosen—like Time Series Analysis for trends, Sentiment Analysis for user feelings, and NLP Clustering for feature requests—were precisely targeted to deliver the comprehensive insights needed for the feedback intelligence dashboard and ultimately address the core challenge.

Research Methods

To transform scattered feedback into actionable insights, we employed these specialized quantitative techniques:

Time Series Analysis

Identifying patterns over time

Analysis of 15,395 feedback items from 2020 to 2025, showing a peak volume of 2.7K items in 2023.

What this means: We analyzed how user feedback volume changed over several years. This helped us spot trends, like peak feedback times (e.g., after new releases), enabling us to anticipate loads and understand the impact of product updates.

Chart: Feedback Volume Trends (15.4K items)
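To make the approach concrete, here is a minimal sketch of this kind of volume trending, assuming the unified feedback lives in a flat file with hypothetical submitted_at and feedback_id columns (not the project's actual schema):

```python
# Minimal sketch: monthly feedback-volume trend from a unified feedback export.
# File and column names (feedback_unified.csv, submitted_at, feedback_id) are
# illustrative assumptions, not the real schema.
import pandas as pd

feedback = pd.read_csv("feedback_unified.csv", parse_dates=["submitted_at"])

# Resample to monthly counts to expose release-driven spikes and seasonality.
monthly_volume = (
    feedback.set_index("submitted_at")
            .resample("MS")["feedback_id"]
            .count()
)

# A centered 3-month rolling mean smooths noise so sustained trends stand out.
trend = monthly_volume.rolling(window=3, center=True).mean()

print(monthly_volume.tail())
print(trend.tail())
```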

User Segmentation

Categorizing feedback by user & topic

Categorization of feedback by user type (Advisors, BAA, Fee Only Advisors) and category (Client Services: 7.3K, Practice: 6.3K, Technology: 3.0K).

What this means: We grouped feedback based on who sent it and its subject. This showed if specific user groups had particular types of feedback more often, helping tailor solutions.

Chart: Feedback by Category Segments (Client Services 7.3K, Practice 6.3K, Technology 3.0K)
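A minimal sketch of this segmentation step, assuming the same hypothetical unified table with user_type and category columns:

```python
# Minimal sketch: cross-tab of feedback counts by user type and category.
# Column names (user_type, category) are illustrative assumptions.
import pandas as pd

feedback = pd.read_csv("feedback_unified.csv")

# Raw counts per segment/category pair.
segments = pd.crosstab(feedback["user_type"], feedback["category"])
print(segments)

# Row-normalized shares reveal whether a user group over-indexes on a topic.
segment_share = segments.div(segments.sum(axis=1), axis=0).round(2)
print(segment_share)
```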

Sentiment Analysis

Gauging emotion in feedback

Implementation of machine learning algorithms to categorize feedback as Neutral (79%), Negative (14%), Positive (4%), Very Positive (2%), and Very Negative (1%).

What this means: We used AI to automatically determine the emotion in each piece of feedback. This gave a quick overview of user satisfaction and helped prioritize issues causing strong negative reactions.

Chart: Sentiment Score Distribution (Negative 15%, Neutral 79%, Positive 6%)
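As an illustration of the ML step, here is a minimal sketch using a public Hugging Face five-class sentiment model; the production model and label mapping may well have differed:

```python
# Minimal sketch: five-bucket sentiment scoring with a Hugging Face pipeline.
# The nlptown model (1-5 star ratings) is a public stand-in chosen because its
# five classes map cleanly onto Very Negative..Very Positive; it is an
# assumption, not necessarily the model used in this project.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

LABELS = {
    "1 star": "Very Negative",
    "2 stars": "Negative",
    "3 stars": "Neutral",
    "4 stars": "Positive",
    "5 stars": "Very Positive",
}

# Invented example comments for illustration only.
comments = [
    "The new statement export saved me hours this quarter.",
    "Sync between accounts keeps failing and support is slow.",
]

for comment, result in zip(comments, classifier(comments)):
    print(f"{LABELS[result['label']]} ({result['score']:.2f}): {comment}")
```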

Feature Request Clustering

Grouping similar feature ideas

Natural language processing to identify and group 1,673 feature ideas with 19K user votes across categories (CRM: 6.0K, Client360°: 4.2K, Investor360°: 2.9K).

What this means: We used AI to process feature requests and group similar ideas. This revealed which features were most popular (by user votes) and the underlying needs.

Chart: Top Feature Clusters (1.7K ideas, 13.1K votes): CRM 6.0K, Client360° 4.2K, Investor360° 2.9K
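A minimal sketch of the clustering idea using TF-IDF vectors and k-means; the example requests and the choice of k are invented for illustration, and in practice k would be tuned and clusters reviewed manually:

```python
# Minimal sketch: grouping similar feature requests with TF-IDF + k-means.
# The request texts and n_clusters=3 are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

requests = [
    "Let me export client reports to Excel",
    "Need CSV export for portfolio data",
    "Customizable dashboard widgets for the CRM home screen",
    "Add drag-and-drop widgets to the dashboard",
    "Dark mode for the mobile app",
    "Mobile app should support biometric login",
]

# Embed each request as a TF-IDF vector, then cluster similar requests.
vectors = TfidfVectorizer(stop_words="english").fit_transform(requests)
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(vectors)

for cluster, text in sorted(zip(labels, requests)):
    print(cluster, text)
```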

Foundation for Process: These analytical methods were key to our research process. They enabled structured data discovery (Time Series, Segmentation), drove in-depth analysis (Sentiment, Clustering), and directly informed the actionable insights feeding into the dashboard design.

Research Process

From raw feedback to actionable insights: our data-driven approach

1. Challenge Identification

Identified the core challenge: product managers struggled to efficiently process large volumes of user feedback spread across multiple channels.

2. Data Source Mapping

Located and documented all feedback sources, including support tickets, user interviews, app store reviews, and internal feedback channels.

3. SQL Querying & Cleaning

Extracted raw feedback data using SQL queries, then cleaned and normalized the data to create a unified dataset for analysis (a simplified sketch follows).
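A simplified sketch of this extraction-and-cleaning step; the table and column names are invented stand-ins, not the actual Snowflake schema:

```python
# Minimal sketch: unify two feedback sources into one normalized dataset.
# Table/column names (support_tickets, survey_responses, etc.) are
# illustrative assumptions.
import pandas as pd

UNIFY_QUERY = """
SELECT ticket_id  AS feedback_id,
       created_at AS submitted_at,
       body       AS text,
       'support'  AS source
FROM support_tickets
UNION ALL
SELECT response_id, submitted_at, comment_text, 'survey'
FROM survey_responses
"""

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop empty/duplicate entries, trim whitespace, normalize timestamps."""
    df = df.dropna(subset=["text"]).drop_duplicates(subset=["text"])
    df["text"] = df["text"].str.strip()
    df["submitted_at"] = pd.to_datetime(df["submitted_at"])
    return df

# conn would be a live database connection (e.g., a Snowflake session);
# shown commented out so the sketch stays self-contained.
# feedback = clean(pd.read_sql(UNIFY_QUERY, conn))
```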

4. Data Visualization

Developed visualizations to identify patterns, sentiment trends, and key insights from the aggregated feedback data.

5. Dashboard Implementation

Delivered a comprehensive feedback management dashboard that PMs could use in their daily workflow to track, categorize, and act on user feedback.

Key Findings

Our analysis of 15,000+ feedback items revealed these critical insights about advisor experience

  • Feedback Volume: 15K items of feedback analyzed
  • Experience Issues: 43% of issues were UX-related vs. technical
  • Feature Requests: 327 requests clustered into 28 themes
  • Efficiency Gain: 83% faster feedback processing

Sentiment Trends

Sentiment analysis revealed steady improvement in advisor satisfaction from Q1 to Q4, with a 27% increase in positive feedback following Q3 UX changes.

Chart: Positive vs. negative sentiment by quarter (Q1 to Q4)

User Segment Differences

Segmentation revealed distinct feedback patterns between advisor types, with 72% of high-volume advisors prioritizing performance over newer features.

Chart: Share of advisors mentioning performance issues in feedback (High Volume 72%, Mid-Tier 45%, New Advisors 31%)

Feature Request Clustering

NLP clustering identified 28 distinct feature themes from 327 individual requests, with data exports and report customization representing 47% of all requests.

  • Data Exports: 27%
  • Report Customization: 20%
  • Mobile Access: 15%
  • Other: 38%

Research Efficiency Gains

Our automated feedback analysis process reduced processing time from 12 hours to 2 hours weekly while improving categorization accuracy by 87%.

Chart: Weekly feedback processing time, before automation (12 hrs of manual review) vs. after automation (2 hrs of automated QA)

Key Insight

By implementing our advanced feedback analysis system, the product team was able to identify and prioritize improvements that led to a 27% increase in positive advisor sentiment and a more targeted product roadmap.

Impact & Outcomes

How our feedback analysis drove concrete engineering decisions, design changes, and measurable business results

1. Research Insights

  • Identified inconsistent navigation as the #1 pain point
  • Discovered 72% of advisors struggle with data synchronization
  • Revealed the need for personalized dashboard configuration

2. Implementation

  • Rebuilt navigation with consistent global patterns
  • Developed a real-time data synchronization engine
  • Created a widget-based customizable dashboard system

3. User Impact

  • 34% reduction in navigation time
  • 89% decrease in data sync complaints
  • 27% increase in advisor retention

Development Efficiency

28% faster time-to-market for new features

Reduced churn and improved satisfaction enabled higher subscription rates and better client retention.

User Satisfaction

+32-point NPS improvement

Net Promoter Score jumped from 18 to 50, placing Advisor360 in the top quartile of financial software products.

Time Efficiency

9.4 hrs/week saved per advisor

By reducing workflow friction, each advisor gains more than a full workday back each week to focus on clients.

Impact on Product Roadmap

Before Research

  • 63% of feature requests prioritized incorrectly

  • 4-6 week lag between feedback and action

  • 12 hours per week spent on manual feedback analysis

After Research

  • 96% of feature requests properly categorized

  • Near real-time visibility into user sentiment

  • 37% faster feature delivery to production

78% increase in roadmap efficiency

Visuals

Client Analysis Dashboard

  • Interactive time series visualizations tracking feedback volume across quarters from 2020 to 2025
  • Total of 15,395 feedback items with detailed client-based segmentation analysis
  • Breakdown showing 83% of feedback (15.32K items) from one segment and 17% (3.13K items) from another


For more information, feel free to contact me.
