Transforming scattered feedback into actionable product insights
Q1 2023 - Q1 2025
As Lead UX Researcher, I strove to transform user feedback into actionable intelligence. My responsibilities included defining research objectives, designing the dashboard's information architecture, conducting user interviews with product managers, designers, executives, and other stakeholders, and performing quantitative analysis. I translated these findings into actionable requirements, directly enabling data-informed product decisions and significant efficiency gains for the team.
This quantitative UX research initiative addressed fragmented voice-of-customer data by delivering a feedback intelligence dashboard. Across 18.5K+ submissions (2020–2025), we combined user interviews, textual analysis, descriptive statistics, time series analysis, user segmentation, ML-driven sentiment analysis, and CSAT surveys. The dashboard centralized insight generation and operationalized prioritization: 83% reduction in manual feedback processing, 47% improvement in feature prioritization accuracy (segmentation + 19K votes), 28% faster feature time-to-market, 32% shorter development cycles, stabilized CSAT around 88–90, and a 27% uplift in overall user sentiment.
Advisor360 faced a significant operational hurdle: user feedback, a goldmine of insights, was scattered across numerous channels—emails, support tickets, survey responses, and various internal systems. This disorganization made it nearly impossible to get a clear, holistic view of emerging needs and pain points, user sentiment, and satisfaction.
How could we unify fragmented user voice?
Feedback was trapped in disparate systems (18.5K+ items), preventing a unified view and leading to missed insights and duplicated effort in manual collation.
What did users *really* feel, and how was it changing?
Without robust analysis, it was difficult to quantify overall user sentiment and satisfaction, track their evolution over time, or identify statistically significant trends versus anecdotal noise.
Could we move from slow manual review to proactive insights?
Product Managers spent excessive time (avg. 12hrs/week) manually sifting through feedback, delaying insight generation and leading to reactive rather than proactive product decisions.
The core challenge was to transform this cacophony of fragmented feedback into a harmonized, intelligent system. This research initiative set out to build a solution that could not only collect and organize feedback but also analyze and visualize it, turning raw user comments into clear, actionable insights for strategic product development.
To address the challenge of fragmented feedback and build an effective intelligence dashboard, our research was guided by key questions aimed at understanding current processes, defining actionable insights, and leveraging technology.
To map existing feedback mechanisms and identify critical gaps and pain points for stakeholders.
How are stakeholders currently collecting, processing, and utilizing user feedback across various channels?
What critical information or nuances are being missed, diluted, or misinterpreted in the current manual feedback process?
What are the primary frustrations and inefficiencies stakeholders experience when trying to synthesize and act on user feedback?
To determine the most valuable metrics and visualizations for a dashboard that empowers data-driven decisions.
Which specific feedback metrics (e.g., sentiment scores, trend velocity, CSAT levels) would provide the most actionable insights for product strategy and UX improvements?
How can feedback patterns, sentiment distributions, and CSAT movements be most effectively visualized to help PMs quickly identify priority areas and emerging trends?
What level of data granularity, filtering, and drill-down capability do stakeholders require to confidently analyze feedback related to specific product features or user segments?
To explore how advanced technologies like Machine Learning could automate and elevate the feedback analysis process.
How can machine learning (e.g., NLP, sentiment analysis, CSAT trend modeling) be employed to automate the categorization, tagging, and summarization of large feedback volumes and quantify satisfaction impact over time?
What are the accuracy and reliability considerations for ML-driven feedback analysis, and how can these be validated?
Beyond automation, how can AI/ML surface deeper, predictive insights from feedback that might not be apparent through manual review alone?
These focused inquiries were crucial in shaping our research strategy. They ensured that the methods chosen—like Time Series Analysis for trends, Sentiment Analysis for user feelings, and CSAT benchmarking—were precisely targeted to deliver the comprehensive insights needed for the feedback intelligence dashboard and ultimately address the core challenge.
To transform scattered feedback into actionable insights, we employed these specialized quantitative techniques:
Identifying patterns over time
Analysis of 18.5K+ feedback submissions from 2020 to 2025, showing a peak volume of 2.7K in 2023.
What this means: We analyzed how user feedback volume changed over several years. This helped us spot trends, like peak feedback times (e.g., after new releases), enabling us to anticipate loads and understand the impact of product updates.
Feedback Volume Trends (18.5K+ items)
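For illustration, a minimal sketch of this step in Python, assuming a hypothetical export with one row per submission and a submitted_at timestamp column (not our actual schema):

```python
import pandas as pd

# Assumed export: one row per feedback item with a 'submitted_at' timestamp.
df = pd.read_csv("feedback_export.csv", parse_dates=["submitted_at"])

# Quarterly counts surface peaks (e.g., spikes after major releases).
quarterly = (
    df.set_index("submitted_at")
      .resample("QS")
      .size()
      .rename("submissions")
)

# Year totals make a peak year easy to spot.
yearly = quarterly.groupby(quarterly.index.year).sum()
print(quarterly.tail())
print(yearly)
```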
Categorizing feedback by user & topic
Categorization of feedback by user type (Advisors, BAA, Fee Only Advisors) and category (Client Services: 7.3K, Practice: 6.3K, Technology: 3.0K).
What this means: We grouped feedback based on who sent it and its subject. This showed if specific user groups had particular types of feedback more often, helping tailor solutions.
Feedback by Category Segments
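A minimal segmentation sketch, again assuming hypothetical user_type and category columns rather than the real schema:

```python
import pandas as pd

# Assumed columns: 'user_type' (e.g., Advisor, BAA) and 'category' (e.g., Client Services).
df = pd.read_csv("feedback_export.csv")

# Raw counts of feedback per user type and topic.
segments = pd.crosstab(df["user_type"], df["category"])

# Row-normalized shares show whether a segment over-indexes on a topic.
shares = pd.crosstab(df["user_type"], df["category"], normalize="index").round(2)

print(segments)
print(shares)
```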
Measuring satisfaction with structured surveys
Customer Satisfaction (CSAT) surveys used to establish benchmarks, monitor weekly shifts, and contextualize changes with product releases.
What this means: We monitored CSAT to quantify how changes affected user satisfaction and prioritized work that moved the needle the most.
CSAT survey benchmarks & weekly shifts
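A minimal sketch of the weekly CSAT tracking, assuming a hypothetical survey export with responded_at timestamps and 0–100 csat scores:

```python
import pandas as pd

# Assumed survey export: 'responded_at' timestamp and a 0-100 'csat' score per response.
surveys = pd.read_csv("csat_responses.csv", parse_dates=["responded_at"])

# Weekly mean CSAT, compared against the long-run benchmark.
weekly = surveys.set_index("responded_at")["csat"].resample("W").mean()
benchmark = surveys["csat"].mean()

# Flag weeks that dip noticeably below benchmark (e.g., around a release).
dips = weekly[weekly < benchmark - 3]

print(f"Benchmark CSAT: {benchmark:.1f}")
print(dips)
```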
Gauging emotion in feedback
Implementation of machine learning algorithms to categorize feedback as Neutral, Negative, Positive, Very Positive, and Very Negative.
What this means: We used AI to automatically determine the emotion in each piece of feedback. This gave a quick overview of user satisfaction and helped prioritize issues causing strong negative reactions.
Sentiment Score Distribution
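To show the shape of this approach, here is a minimal five-class baseline (TF-IDF features plus logistic regression); the texts, labels, and model are illustrative only, not the production pipeline or training data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative examples; the real training data was thousands of tagged items.
texts = [
    "This workflow is completely broken and unusable",
    "The new navigation is confusing",
    "It works, nothing special",
    "The dashboard view is helpful",
    "Absolutely love the new reporting, huge time saver",
]
labels = ["Very Negative", "Negative", "Neutral", "Positive", "Very Positive"]

# TF-IDF features + multinomial logistic regression as a simple baseline classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["Exports keep failing and support has not helped"]))
```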
Foundation for Process: These analytical methods were key to our research process. They enabled structured data discovery (Time Series, Segmentation), drove in-depth analysis (Sentiment, CSAT), and directly informed the actionable insights feeding into the dashboard design.
From raw feedback to actionable insights: our data-driven approach
Challenge Identification
Data Source Mapping
SQL Querying & Cleaning
Data Visualization
Dashboard Implementation
Identified the key challenge: stakeholders were struggling to efficiently process large volumes of user feedback spread across multiple channels.
Located and documented all feedback sources including feedback tickets, support tickets, idea tickets, user interviews, and internal feedback channels.
Extracted raw feedback data using SQL queries, then cleaned and normalized it into a unified dataset for analysis (a minimal sketch follows these steps).
Developed visualizations to identify patterns, sentiment trends, and key insights from the aggregated feedback data.
Delivered a comprehensive feedback management dashboard that stakeholders could use in their daily workflow to track, categorize, and act on user feedback.
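As referenced in the querying step above, a minimal sketch of the extract-and-normalize pass, with placeholder table and column names standing in for the real internal systems:

```python
import sqlite3
import pandas as pd

# Placeholder schema: the real feedback, support, and idea tickets lived in
# separate internal systems and were pulled with channel-specific queries.
QUERY = """
SELECT id, created_at, source, user_type, body
FROM feedback_items
WHERE created_at >= '2020-01-01'
"""

with sqlite3.connect("feedback.db") as conn:
    raw = pd.read_sql_query(QUERY, conn, parse_dates=["created_at"])

# Normalize into one unified dataset: trim text, standardize source labels,
# and drop empty or duplicated submissions before analysis.
clean = (
    raw.assign(
        body=raw["body"].str.strip(),
        source=raw["source"].str.lower(),
    )
    .dropna(subset=["body"])
    .drop_duplicates(subset=["source", "body"])
)
clean.to_csv("feedback_export.csv", index=False)
```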
Our analysis of 18.5K+ feedback submissions revealed these critical insights about advisor experience
43% of feedback pointed to experience issues (navigation, clarity, task flows) versus other issues. Focusing fixes here contributed to a measurable lift in overall sentiment.
Negative feedback outweighed positive by 8 percentage points, with Neutral dominating. This gap narrowed after targeted UX fixes.
Initial CSAT sat at a mean of 82 and a median of 85. Weekly monitoring and release guardrails reduced post‑release dips and improved stability.
Our automated feedback analysis process reduced processing time from 12 hours to 2 hours weekly while improving categorization accuracy by 87%.
By implementing our advanced feedback analysis system, the product team was able to identify and prioritize improvements that led to a 27% increase in positive advisor sentiment and a more targeted product roadmap.
How our feedback analysis drove concrete engineering decisions, design changes, and measurable business results
43% of feedback was User Experience Issues
User segmentation revealed distinct needs across advisor types
Unclear goals and scattered requests caused extra engineering work and slowdowns
Experience-first fixes: navigation, clarity, task flows, customization
Segmentation-driven build initiatives, redesigns, and roadmaps
Centralized feedback in a dashboard reduced miscommunication, handoffs and rework
Experience issues reduced; positive feedback increased by 27%
Feature prioritization accuracy improved by 47% (segmentation)
32% reduction in development cycle duration, measured among engineers actively using the dashboard
By reducing workflow friction, each advisor gains more than a full workday back each week to focus on clients.
Streamlined feedback processing improved confidence and cut the average concept-to-ship timeline from ~8 to ~6 weeks.
After actively analyzing feedback and addressing the root causes of dissatisfaction, the signal stabilized.
Fragmented requests & unclear criteria caused rework and delays
Prioritization driven by anecdotes; idea votes scattered and hard to access; user/product segmentation unused
Manual collation and unclear product understanding increased decision latency and delayed releases
Centralized backlog + structured requirements + weekly triage reduced development cycle time by 32%
Segmentation-driven scoring + 19K‑vote ideas dashboard (47% improvement in prioritization accuracy)
Automated feedback data collection + weekly triage cut decision latency; 28% faster time‑to‑market
Interactive time series visualizations tracking feedback volume across quarters from 2020-2025
Total of 18.5K+ feedback submissions with detailed client-based segmentation analysis
Breakdown showing 83% of feedback (~15.3K items) from one segment and 17% (~3.2K items) from another