Transforming scattered feedback into actionable product insights
Q1 2023 - Q1 2025
As Lead UX Researcher, I worked to turn raw user feedback into actionable intelligence. My responsibilities included defining research objectives, designing the dashboard's information architecture, conducting user interviews with product managers, and performing quantitative analysis (time series, sentiment, clustering). I translated these findings into concrete requirements, directly enabling data-informed product decisions and significant efficiency gains for PMs.
This quantitative UX research initiative addressed the challenge of fragmented user feedback by developing an intelligence dashboard. Analyzing 15,395 feedback items spanning 2020-2025, we employed time series analysis, user segmentation, ML-driven sentiment analysis (e.g., 79% Neutral, 14% Negative), and NLP-based clustering of feature requests (1,673 ideas carrying 19K user votes). The resulting dashboard provided a centralized, quantifiable view of user sentiment and priorities, leading to an 83% reduction in PMs' manual feedback-processing time, a 45% improvement in feature-prioritization accuracy, and a 25% uplift in CSAT for targeted UX improvements.
Advisor360 faced a significant operational hurdle: user feedback, a goldmine of insights, was scattered across numerous channels—emails, support tickets, survey responses, and various internal systems. This disorganization made it nearly impossible to get a clear, holistic view of user sentiment and emerging needs.
How could we unify a fragmented user voice?
Feedback was trapped in disparate systems (15K+ items), preventing a unified view and leading to missed insights and duplicated effort in manual collation.
What did users *really* feel, and how was it changing?
Without robust analysis, it was difficult to quantify overall user sentiment, track its evolution over time, or identify statistically significant trends versus anecdotal noise.
Could we move from slow manual review to proactive insights?
Product Managers spent excessive time (an average of 12 hours per week) manually sifting through feedback, delaying insight generation and leading to reactive rather than proactive product decisions.
The core challenge was to transform this cacophony of fragmented feedback into a harmonized, intelligent system. This research initiative set out to build a solution that could not only collect and organize feedback but also analyze and visualize it, turning raw user comments into clear, actionable insights for strategic product development.
To address the challenge of fragmented feedback and build an effective intelligence dashboard, our research was guided by key questions aimed at understanding current processes, defining actionable insights, and leveraging technology.
To map existing feedback mechanisms and identify critical gaps and pain points for Product Managers.
How are product managers currently collecting, processing, and utilizing user feedback across various channels?
What critical information or nuances are being missed, diluted, or misinterpreted in the current manual feedback process?
What are the primary frustrations and inefficiencies PMs experience when trying to synthesize and act on user feedback?
To determine the most valuable metrics and visualizations for a dashboard that empowers data-driven decisions.
Which specific feedback metrics (e.g., sentiment scores, trend velocity, feature request volume) would provide the most actionable insights for product strategy and UX improvements?
How can feedback patterns, sentiment distributions, and feature request clusters be most effectively visualized to help PMs quickly identify priority areas and emerging trends?
What level of data granularity, filtering, and drill-down capability do PMs require to confidently analyze feedback related to specific product features or user segments?
To explore how advanced technologies like Machine Learning could automate and elevate the feedback analysis process.
How can machine learning (e.g., NLP, sentiment analysis, clustering algorithms) be employed to automate the categorization, tagging, and summarization of large feedback volumes?
What are the accuracy and reliability considerations for ML-driven feedback analysis, and how can these be validated?
Beyond automation, how can AI/ML surface deeper, predictive insights from feedback that might not be apparent through manual review alone?
These focused inquiries were crucial in shaping our research strategy. They ensured that the methods chosen—like Time Series Analysis for trends, Sentiment Analysis for user feelings, and NLP Clustering for feature requests—were precisely targeted to deliver the comprehensive insights needed for the feedback intelligence dashboard and ultimately address the core challenge.
To transform scattered feedback into actionable insights, we employed these specialized quantitative techniques:
Identifying patterns over time
Analysis of 15,395 feedback items from 2020 to 2025, showing peak volumes of 2.7K in 2023.
What this means: We analyzed how user feedback volume changed over several years. This helped us spot trends, such as peaks after new releases, so we could anticipate incoming volume and understand the impact of product updates.
Feedback Volume Trends (15.4K items)
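To make the technique concrete, here is a minimal sketch of quarterly trend analysis in pandas. The rows and column names ("created_at", "text") are illustrative assumptions, not the actual Advisor360 dataset.

```python
import pandas as pd

# Illustrative feedback rows; the schema is an assumption for this sketch.
feedback = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2020-03-01", "2021-07-15", "2023-02-10", "2023-06-22", "2024-11-05",
    ]),
    "text": ["...", "...", "...", "...", "..."],
})

# Bucket items by quarter to surface volume peaks, e.g. spikes
# that follow major releases.
volume = (
    feedback.set_index("created_at")
            .resample("QS")   # quarter-start buckets
            .size()
            .rename("items")
)
print(volume)

# A short rolling mean separates sustained trends from one-off spikes.
print(volume.rolling(window=4, min_periods=1).mean())
```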
Categorizing feedback by user & topic
Categorization of feedback by user type (Advisors, BAA, Fee Only Advisors) and category (Client Services: 7.3K, Practice: 6.3K, Technology: 3.0K).
What this means: We grouped feedback based on who sent it and what it was about. This showed whether specific user groups raised particular types of feedback more often, helping us tailor solutions.
Feedback by Category Segments
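A minimal sketch of this segmentation as a pandas cross-tabulation; the user types and categories echo the segments named above, but the individual rows are invented for illustration.

```python
import pandas as pd

# Invented rows for illustration only.
feedback = pd.DataFrame({
    "user_type": ["Advisor", "BAA", "Advisor", "Fee Only Advisor", "Advisor"],
    "category": ["Client Services", "Practice", "Technology",
                 "Client Services", "Practice"],
})

# Cross-tabulate who sent feedback against what it was about.
counts = pd.crosstab(feedback["user_type"], feedback["category"])
print(counts)

# Row-normalizing shows each group's feedback mix as proportions,
# which is what reveals group-specific patterns.
print(pd.crosstab(feedback["user_type"], feedback["category"],
                  normalize="index"))
```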
Gauging emotion in feedback
Implementation of machine learning algorithms to categorize feedback as Neutral (79%), Negative (14%), Positive (4%), Very Positive (2%), and Very Negative (1%).
What this means: We used AI to automatically determine the emotion in each piece of feedback. This gave a quick overview of user satisfaction and helped prioritize issues causing strong negative reactions.
Sentiment Score Distribution
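The production model is not reproduced here, but a common baseline for this kind of five-bucket sentiment classification is TF-IDF features feeding a linear classifier, sketched below on a toy labeled set; the example texts and labels are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled set mirroring the five buckets reported above; the real
# model was trained on far more labeled feedback.
texts = [
    "Love the new reporting tools",
    "The dashboard works fine",
    "Nice improvement to navigation",
    "Sync keeps failing and I lose changes",
    "Completely unusable this week",
]
labels = ["Very Positive", "Neutral", "Positive", "Negative", "Very Negative"]

# TF-IDF features into a linear classifier: a common baseline,
# not necessarily the production model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["The export feature is broken again"]))
```

As the research questions above note, any such model's accuracy needs to be validated against human-labeled feedback before PMs rely on it.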
Grouping similar feature ideas
Natural language processing to identify and group 1,673 feature ideas with 19K user votes across categories (CRM: 6.0K, Client360°: 4.2K, Investor360°: 2.9K).
What this means: We used AI to process feature requests and group similar ideas together. This revealed which features were most requested (by user votes) and the underlying needs behind them.
Top Feature Clusters (1.7K ideas; 13.1K of 19K votes shown)
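A minimal sketch of the clustering idea: vectorize request text with TF-IDF, then group lexically similar ideas, here with k-means as a stand-in for whatever algorithm the production pipeline used. The requests and the cluster count are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented feature requests; the real pipeline clustered 1,673 ideas.
requests = [
    "Export client reports to Excel",
    "Let me customize report templates",
    "Custom fields on report layouts",
    "CRM should sync contacts automatically",
    "Better CRM contact search",
    "Add bulk export for account data",
]

# Vectorize the request text, then group lexically similar ideas.
# The cluster count is a tuning choice; the real analysis surfaced
# far more clusters than this toy k=2.
vectors = TfidfVectorizer(stop_words="english").fit_transform(requests)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

for label, text in sorted(zip(km.labels_, requests)):
    print(label, text)
```

Weighting each cluster by the votes its member requests accumulated is what turns raw groupings into a popularity ranking.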
Foundation for Process: These analytical methods were key to our research process. They enabled structured data discovery (Time Series, Segmentation), drove in-depth analysis (Sentiment, Clustering), and directly informed the actionable insights feeding into the dashboard design.
From raw feedback to actionable insights: our data-driven approach
Challenge Identification
Data Source Mapping
SQL Querying & Cleaning
Data Visualization
Dashboard Implementation
Identified the key challenge of product managers struggling to efficiently process large volumes of user feedback spread across multiple channels.
Located and documented all feedback sources including support tickets, user interviews, app store reviews, and internal feedback channels.
Extracted raw feedback data using SQL queries, then cleaned and normalized the data to create a unified dataset for analysis; see the code sketch after these process steps.
Developed visualizations to identify patterns, sentiment trends, and key insights from the aggregated feedback data.
Delivered a comprehensive feedback management dashboard that PMs could use in their daily workflow to track, categorize, and act on user feedback.
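A sketch of the extraction-and-cleaning step, using an in-memory SQLite table as a stand-in for the real feedback stores; the schema and the specific cleaning rules shown (date parsing, whitespace trimming, dropping empties and duplicates) are illustrative assumptions, not Advisor360's actual pipeline.

```python
import sqlite3
import pandas as pd

# In-memory SQLite stands in for the real feedback stores; the table
# and column names are assumptions for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE support_tickets (created_at TEXT, body TEXT);
    INSERT INTO support_tickets VALUES
        ('2023-02-10', ' Sync keeps failing '),
        ('2023-02-10', ' Sync keeps failing '),  -- exact duplicate
        ('2023-06-22', NULL);                    -- empty feedback
""")

raw = pd.read_sql_query(
    "SELECT created_at, body AS text FROM support_tickets", conn
)

# Normalize into a unified dataset: parse dates, trim whitespace,
# drop empty rows and exact duplicates.
clean = (
    raw.assign(
        created_at=pd.to_datetime(raw["created_at"]),
        text=raw["text"].str.strip(),
    )
    .dropna(subset=["text"])
    .drop_duplicates()
)
print(clean)
```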
Our analysis of 15,000+ feedback items revealed these critical insights about the advisor experience
Sentiment analysis revealed steady improvement in advisor satisfaction from Q1 to Q4, with a 27% increase in positive feedback following Q3 UX changes.
Segmentation revealed distinct feedback patterns between advisor types, with 72% of high-volume advisors prioritizing performance over newer features.
NLP clustering identified 28 distinct feature themes from 327 individual requests, with data exports and report customization representing 47% of all requests.
Our automated feedback analysis process reduced processing time from 12 hours to 2 hours weekly while improving categorization accuracy by 87%.
By implementing our advanced feedback analysis system, the product team was able to identify and prioritize improvements that led to a 27% increase in positive advisor sentiment and a more targeted product roadmap.
How our feedback analysis drove concrete engineering decisions, design changes, and measurable business results
Insight: Identified inconsistent navigation as the #1 pain point. Action: Rebuilt navigation with consistent global patterns. Result: 34% reduction in navigation time.
Insight: Discovered that 72% of advisors struggled with data synchronization. Action: Developed a real-time data synchronization engine. Result: 89% decrease in data-sync complaints.
Insight: Revealed the need for personalized dashboard configuration. Action: Created a widget-based customizable dashboard system. Result: 27% increase in advisor retention.
Reduced churn and improved satisfaction enabled higher subscription rates and better client retention.
Net Promoter Score jumped from 18 to 50, placing Advisor360 in the top quartile of financial software products.
By reducing workflow friction, each advisor gains more than a full workday back each week to focus on clients.
Before: 63% of feature requests prioritized incorrectly
Before: 4-6 week lag between feedback and action
Before: 12 hours per week spent on manual feedback analysis
After: 96% of feature requests properly categorized
After: near real-time visibility into user sentiment
After: 37% faster feature delivery to production
Interactive time series visualizations tracking feedback volume by quarter from 2020 to 2025
Total of 15,395 feedback items with detailed client-based segmentation analysis
A breakdown showing 83% of feedback (15.32K items) coming from one client segment and 17% (3.13K items) from another