Enhancing proposal tools through data-driven insights
Q2 2024 - Q3 2024
As Lead UX Researcher, I designed and executed the research plan to analyze financial proposals. I applied machine learning (hierarchical clustering) and quantitative methods (correlation analysis) to uncover usage patterns, template co-occurrence, and categorization issues. I facilitated stakeholder workshops and translated findings into actionable recommendations that improved categorization accuracy and template reuse.
This quantitative UX research analyzed 6,793 financial proposals (created July 2022 - July 2024) to optimize template effectiveness and streamline proposal creation. Applying machine learning and statistical techniques (hierarchical clustering, correlation analysis), we found that 30.7% of proposals were miscategorized as 'Other' and 437 were named 'test'. Key findings included a Q1 usage peak (1,193 proposals in Q1 2023) and a high update rate (90% of proposals). The insights led to a 43% improvement in categorization accuracy, a 51% increase in template reuse, a 26% reduction in search time, and a 17% faster average proposal creation time.
The existing financial proposal system, despite high usage (6,793 proposals created), suffered from significant inefficiencies. Advisors lacked clear guidance on template selection, leading to inconsistent proposal quality and suboptimal use of the platform's capabilities.
Were advisors using the *right* templates effectively?
Advisors often struggled to find or select the most appropriate templates, leading to many proposals being poorly categorized (30.7% as 'Other') or inconsistently named (437 as 'test').
How could data improve proposal strategy?
The team lacked a systematic understanding of proposal effectiveness, template co-usage, and usage variation, which hindered data-driven optimization and strategic enhancements.
Why was proposal creation and retrieval cumbersome?
Poor categorization and naming made it difficult to find existing proposals or create new ones efficiently, wasting time and duplicating advisor effort.
These challenges highlighted a clear need to move beyond anecdotal feedback. A deep, data-driven analysis using quantitative methods and machine learning was essential to understand actual usage, identify inefficiencies, and provide a foundation for a more intelligent and effective proposal system.
To transform the proposal system from a mere document generator into a strategic tool, we centered our quantitative UX research on understanding actual usage, identifying inefficiencies, and pinpointing opportunities for ML-driven enhancements. Our inquiry focused on how advisors create, manage, and leverage proposals in their daily workflows.
To map out when, how often, and by whom proposals are created and modified, identifying peak periods and update behaviors.
What are the primary usage patterns of proposals across quarters and months, and are there discernible seasonal trends?
How frequently do advisors update existing proposals, and what is the typical lifecycle or modification frequency over time?
What are the dominant proposal categories by volume, and how does usage vary across different company segments or advisor types?
To identify which templates are most utilized, how they relate to each other, and how to improve their discovery and application.
Which specific proposal templates are most frequently used, and are there underutilized templates with high potential value?
What correlations exist between different proposal template types, suggesting common co-usage or bundling opportunities?
How can template categorization and naming conventions be improved to enhance discoverability and reduce misclassification (e.g., 'Other', 'test')?
These research questions were pivotal in dissecting the complex proposal ecosystem. The answers formed the bedrock for applying machine learning techniques—like hierarchical clustering and correlation analysis—to not only understand current behaviors but also to architect a more intelligent, efficient, and data-driven proposal generation and management system. The goal was to transform raw usage data into actionable strategies for platform enhancement.
To unlock insights from 6,793 financial proposals and optimize template effectiveness, we employed these advanced analytical techniques:
Revealing Template Structures
Applied agglomerative clustering, visualized as dendrograms, to map proposal template relationships and identify natural groupings for consolidation.
What this means: By mapping these relationships, we found natural clusters of templates (e.g., 'IRA' and 'IRA, Proposal') that guided a consolidation which improved categorization accuracy by 43%. A minimal code sketch of the clustering approach follows the figure.
Hierarchical Template Relationships
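For readers who want the mechanics, here is a minimal sketch of the clustering step on a toy co-usage table. The column names (proposal_id, template_type) and the sample rows are illustrative assumptions, not the production schema.

```python
# Minimal sketch: hierarchical clustering of templates by co-usage (assumed schema).
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# One row per (proposal, template) pairing; toy data standing in for real usage logs.
usage = pd.DataFrame({
    "proposal_id": [1, 1, 2, 2, 3, 4, 4, 5],
    "template_type": ["IRA", "IRA, Proposal", "Investment Strategy",
                      "Performance Review", "IRA", "Investment Strategy",
                      "Performance Review", "Other"],
})

# Binary template-by-proposal matrix: 1 if a template appears in a proposal.
matrix = (usage.assign(used=1)
               .pivot_table(index="template_type", columns="proposal_id",
                            values="used", fill_value=0))

# Ward linkage over Euclidean distances between template usage vectors.
links = linkage(pdist(matrix.values), method="ward")

# The dendrogram shows which templates fall into natural families.
dendrogram(links, labels=matrix.index.tolist())
plt.tight_layout()
plt.show()
```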
Uncovering Template Co-Usage
Calculated and visualized correlation coefficients between 19 proposal template types to uncover co-usage patterns and inform template design.
What this means: This identified templates frequently used together (e.g., 'Investment Strategy' often paired with 'Performance Review'), leading to a 51% increase in effective template reuse through better suggestions. A short code sketch of the correlation approach follows the figure.
Template Co-usage Matrix
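A compact illustration of the co-usage computation, assuming a one-row-per-proposal indicator table; the three template columns shown here are placeholders for the 19 real template types.

```python
# Minimal sketch: correlations between template indicator columns (assumed layout).
import pandas as pd
import matplotlib.pyplot as plt

# One row per proposal, one column per template type (1 = template used).
proposals = pd.DataFrame({
    "IRA":                 [1, 0, 1, 0, 1],
    "Investment Strategy": [0, 1, 0, 1, 0],
    "Performance Review":  [0, 1, 0, 1, 1],
})

# Pairwise Pearson correlations reveal which templates tend to co-occur.
co_usage = proposals.corr()

# Simple heatmap; strongly positive cells suggest bundling or cross-suggestion
# opportunities in the template picker.
plt.imshow(co_usage, vmin=-1, vmax=1, cmap="PuOr")
plt.xticks(range(len(co_usage)), co_usage.columns, rotation=90)
plt.yticks(range(len(co_usage)), co_usage.columns)
plt.colorbar(label="correlation")
plt.tight_layout()
plt.show()
```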
Forecasting Proposal Demand
Developed predictive models based on historical data (6,793 proposals) to forecast future proposal usage trends by category and company.
What this means: The models forecast peak proposal demand in Q1 (used for resource planning) and showed that 90% of proposals are updated after creation, highlighting the need for efficient editing tools. A simple forecasting sketch follows the figure.
Usage Forecast by Quarter
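At its core, the forecast can be as simple as a seasonal average. The sketch below uses synthetic timestamps and an assumed created_at column to show the shape of that calculation, not the production model.

```python
# Minimal sketch: seasonal-naive forecast of quarterly proposal volume.
import numpy as np
import pandas as pd

# Synthetic creation dates standing in for the real proposal table (assumed
# created_at column); the real data spanned July 2022 - July 2024.
rng = np.random.default_rng(0)
created_at = pd.to_datetime("2022-07-01") + pd.to_timedelta(
    rng.integers(0, 730, size=1000), unit="D")
proposals = pd.DataFrame({"created_at": created_at})

# Count proposals per calendar quarter.
per_quarter = proposals["created_at"].dt.to_period("Q").value_counts().sort_index()

# Seasonal-naive forecast: expected volume for Q1-Q4 is the historical average
# for that quarter of the year, which is what surfaces a recurring Q1 peak.
forecast_by_quarter = per_quarter.groupby(per_quarter.index.quarter).mean()
print(forecast_by_quarter)
```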
Tracking KPIs & User Segments
Leveraged dashboard analytics on 6,793 proposals and 6,102 updates to track KPIs and user segments and to surface key trends (e.g., Q1 peaks); a KPI rollup sketch follows the figure.
What this means: Dashboards showed 30.7% of proposals were miscategorized as 'Other' and 437 named 'test', directly leading to UI changes that reduced search time by 26%.
Key Dashboard Metrics
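The dashboard KPIs reduce to a few aggregations. This sketch assumes name, category, and update_count columns and uses toy rows in place of the real extract.

```python
# Minimal sketch: the KPI rollup behind the dashboard view (assumed columns).
import pandas as pd

proposals = pd.DataFrame({
    "name":         ["Q1 retirement plan", "test", "test", "Smith IRA rollover"],
    "category":     ["IRA", "Other", "Other", "IRA"],
    "update_count": [3, 0, 1, 5],
})

kpis = {
    "total_proposals": len(proposals),
    # Share of proposals landing in the catch-all 'Other' category.
    "pct_other": round((proposals["category"] == "Other").mean() * 100, 1),
    # Placeholder names such as 'test' that hurt search and retrieval.
    "named_test": int(proposals["name"].str.strip().str.lower().eq("test").sum()),
    # Proportion of proposals edited at least once after creation.
    "pct_updated": round((proposals["update_count"] > 0).mean() * 100, 1),
}
print(kpis)
```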
Foundation for Process: These analytical methods (Clustering, Correlation Analysis, Predictive Modeling, Dashboard Analysis) were crucial for our research. They enabled us to identify template relationships, understand co-usage patterns, forecast trends, and derive actionable insights, directly informing the optimization of the proposal generation system.
Our systematic approach to analyzing proposal system usage and informing engineering decisions
Usage Analysis
Data Extraction
Power BI Analysis
Advanced Analytics
Executive Delivery
Collected quantitative and qualitative data on how financial advisors were using the proposals system and identified key usage patterns.
Navigated complex database structures to locate and extract the necessary proposal usage data for comprehensive analysis.
Imported data into Power BI for cleaning, transformation, and creation of foundational statistical analyses and visualizations.
Applied regression analysis, machine learning, and LLMs to uncover deeper patterns in how advisors used proposal features together; a regression sketch follows these steps.
Presented findings to VP of Product with clear recommendations about engineering resource allocation for the proposals platform.
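As one example of the advanced-analytics step, a quarter-dummy regression can quantify how much Q1 volume exceeds the rest of the year. The monthly counts below are synthetic stand-ins, not the production data.

```python
# Minimal sketch: OLS with quarter dummies to size the Q1 seasonality.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = pd.period_range("2022-07", "2024-07", freq="M")
# Synthetic monthly proposal counts with an inflated first quarter.
counts = 250 + 120 * (months.quarter == 1) + rng.normal(0, 20, len(months))

df = pd.DataFrame({"count": counts, "quarter": months.quarter.astype(str)})
X = sm.add_constant(
    pd.get_dummies(df["quarter"], prefix="Q", drop_first=True, dtype=float))
model = sm.OLS(df["count"], X).fit()

# With Q1 as the baseline, negative Q_2-Q_4 coefficients measure how far
# non-Q1 months fall below the Q1 level, i.e. the size of the seasonal peak.
print(model.params)
```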
Our machine learning analysis of 6,793 financial proposals revealed these critical insights about advisor proposal creation and usage patterns.
Analysis showed consistent Q1 peaks in proposal creation (e.g., 1,193 in Q1 2023), indicating predictable seasonal demand for proposal tools.
Category analysis showed 30.7% of proposals sitting in 'Other'; hierarchical clustering dendrograms revealed 4-5 natural template families suitable for consolidation.
Text analysis found 437 proposals named 'test', highlighting poor naming practices; consistently named proposals also showed clearer template co-usage patterns in the correlation matrix.
Correlation matrix analysis revealed strong positive correlations (dark purple areas) between specific template types, indicating common advisor workflows.
By consolidating fragmented categories based on the ML clusters and implementing AI-assisted naming, we reduced proposal search time by 26% and increased template reuse by 51%, significantly boosting advisor efficiency.
How our financial proposal analysis transformed document creation workflows and delivered measurable efficiency improvements.
Identified 30.7% of proposals in the 'Other' category
Found 437 proposals named 'test', hindering search
Revealed template co-usage patterns via correlations
Consolidated categories using ML cluster analysis
Implemented AI-assisted naming suggestions
Developed guided template selection system
43% improvement in categorization accuracy
26% reduction in proposal search time
51% increase in template reuse
17% reduction in average proposal creation time due to better template discovery and a 51% increase in template reuse.
Improvement in categorization accuracy provided cleaner data for more reliable business reporting and trend analysis.
Reduction in time spent searching due to standardized naming conventions, facilitating better knowledge sharing.
Fragmented categories, large 'Other' group
Inconsistent naming ('test', duplicates)
Limited template reuse, high search times
Consolidated, ML-derived categories (43% higher accuracy)
AI-assisted naming & standardization
51% increased template reuse, 26% faster search
Shows proposal counts by quarter (6,793 total) and updates (6,102) with company distribution.
Reveals Q1 trends with highest proposal creation (1,193 in Q1 2023).
Displays top company concentration with a single company accounting for 659 proposals.