Playbooks Reports
Understanding the performance of a sales organization is critical for any sales leader. With Playbooks Reports, we built a modern, sleek, and engaging visualization engine for breaking down sales activities and the conversion metrics associated with those activities.
To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study. The information in this case study is my own and does not necessarily reflect the views of XANT.
My Role
The concept of a reporting engine for Playbooks was first discussed nearly three years before the release of Playbooks Reports. I served as the lead Product Designer for the first version of reporting, working with a junior designer to determine layouts, colors, and reports. That first version was built on top of a PowerBI theme, and while we were able to build out some great reporting features with little need for engineering resources, we found the platform limited our ability to realize the full potential of our reports.
After transitioning to Director of Product Design, I oversaw the work of two Product Designers who came up with the original concepts for the new reporting engine. I provided direction in terms of user experience, flow, and user interface. I also provided resources and design inspiration to help guide the designers in seeing and understanding the complete vision of the product.
Finally, as the project reached the front-end engineering stage of development, our Product Manager was transitioning away from our organization. I took the opportunity to absorb the product management role and carry the project through to production.
Feature & Use Case Discovery
After the release of our first iteration of reporting, we already knew we had significant challenges. That version was built on top of PowerBI, a visualization engine built by Microsoft. It was extremely limiting in how we could arrange pages, present options, and allow customization. We wanted a highly flexible reporting tool, but PowerBI could not give us everything we needed.
Reporting is a crucial part of the sales process. Sales leaders must have insight into the activities and efforts of their sales teams. A team must be as efficient as possible, and a report showing where efforts are going and how effectively they convert into sales is a must-have. Many large organizations that use our Playbooks tool have internal sales ops teams who can handle this reporting for them, but a good portion of our user base had no such access to their data.
After interviewing our current user base as well as sales leaders not familiar with our product, we learned that every organization wanted to report differently. Each organization had a different way of selling, different levels of expected activity, and a different concept of what a "conversion" was. To meet the needs of as many users as possible, we needed flexibility in our reporting.
Our competitors were frequently mentioned as delighting users with their reporting: it was solid, streamlined, and easy to comprehend. They were brought up to us again and again, so we took notice and studied how they offered reporting in their products. We believed we could match their feature levels and then go even further with our reporting.
Personas
Front-Line Sales Managers
Front-line sales managers have a crucial need for reporting. Their job depends on the success of their immediate sales team and they need their team to perform to certain metrics and KPIs. They need to see day-to-day activities and understand how the actions those sales reps are taking lead to success.
Sales Leadership
Sales Leadership needs to understand how their teams are performing as a whole so they can make adjustments to their teams and organizations. They want to see which areas need focus, which areas are performing best, and which are not. Their reporting is far higher level than a front-line manager's, but it demands the same accuracy.
Design Process
Design Sprint
Our initial design process started with a Design Sprint. I brought in experts from inside the company who understood our core personas well and who could help guide us in the right direction for our tools. We spent 5 days together as a team, listening to interviews with sales leaders and managers, writing notes, and discussing as a group. I organized and facilitated the sprint, guiding our team to a prototype we could test with 5 sales managers at the end of the week.
This design sprint taught us what NOT to do. We realized we had gone in the wrong direction and needed to change course after the sprint. Spending a little time learning what wasn't right for our product turned out to be a great opportunity.
Looking for Inspiration
Following the design sprint, I assigned a designer from our team to tackle the problem, with the help of other team members. We first gathered together and brought inspiration to discuss. We highlighted great design efforts across the web, looked deeply at our competition, and found pieces that solved interesting challenges we believed we would be facing.
A Second Failed Effort
Our designer took these designs and began working closely with the product manager and engineers to come up with a unique and effective design approach for our tool. After a few weeks, the designer came back with their solution and presented it to the team and stakeholders.
Unfortunately, the result was unimpressive. The designs felt stale and tired. They were too rigid and didn't meet the expectations we felt our users would have for the product. Everything was too boxed in, and users would not be able to adjust reports the way they wanted.
Last-Ditch Concept Work
Following the second failed effort, our designer was feeling frustrated and lost. I brought the rest of the design team together in a mini-session to help solve the challenges we were facing. We each took time to understand the problem and then rapidly mocked up potential solutions.
Finally, we landed on a concept and solution that would work far better for our users. Stakeholders were ecstatic with the new direction, and we moved quickly to finalize the first phase of designs.
One interesting side note from this project is that the approach to reporting is an extremely gray area. We found we could offer anywhere from one single chart to hundreds of separate charts to report on the activities of users. We eventually settled on a small group of high-level tabs that break down reports in similar ways, each with its own core metrics.
Continued Iterations
Once we felt our approach was on target, we did some informal testing with a few internal users and then began to iterate on our concept, refining the final designs as we went along.
Rebranding
As we were nearing the end of the design phase and we were starting to work with front-end engineering, our company decided to pivot and introduced a new brand for the organization. As part of this, we had to take our existing designs and quickly alter them to match the updated branding. We worked hard to minimize changes to the UX while we adjusted colors, fonts, and the overall look-and-feel.
Concept Testing
In the heat of the design process, we began to test our designs with users. While we knew stakeholders were excited about the design direction, we wanted to validate it with our user base. We ran user tests through interviews while showing an InVision prototype, spending time with 10 different sales leaders and managers to gain insight into our designs. We came away very confident in our final designs.
Development
As we neared the end of the design process, we began working very closely with the front-end engineering team. While they had been part of the design process from the beginning, our team had stayed ahead of them during the first phase to ensure we were making the right choices on such a large project. Our company didn't have the resources to absorb large mistakes or changes made on the fly.
Engineering began building out components and the core structure of the app. As they worked, they stayed in close contact with the project designer, sitting no more than 10 feet apart. The communication was constant, both in person and over messaging. Reviews happened early and often, with the designer sometimes spending minutes or even hours at the engineer's screen, pointing out errors or working through unforeseen challenges together. The development process took a number of months to complete, but the design and engineering teams stayed close throughout, building an amazing product that everyone was proud to be a part of.
Beta Program
As the engineering team neared the final stretch of development, we began beta testing the product. Our first beta tests were conducted via remote interviews. We demonstrated the process of creating and modifying reports, fielded questions, and solicited feedback. Participants were excited, and feedback was extremely positive. We were confident we were moving in the right direction.
Beta Environment
Soon, our engineers were able to push a production-ready version to our beta environment. Our product researcher (a direct member of our design team) recruited over a dozen organizations to participate in our beta program, tirelessly handling kickoff calls, answering questions, and facilitating setup with engineering and CSMs. We were able to launch quickly and get the program off the ground.
Event Tracking
We had also meticulously instrumented the new product with event tracking. This gave us a highly detailed look into usage of the product and allowed us to understand which metrics were being chosen and how often certain views were being selected.
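As a rough illustration of the kind of instrumentation involved, here is a minimal TypeScript sketch of what one of these tracking events might look like. The event name, fields, and wrapper function are illustrative assumptions on my part, not the actual Playbooks schema or analytics client.

```typescript
// Illustrative sketch only: the event name and fields are assumptions,
// not the actual Playbooks Reports tracking schema.
interface ReportViewedEvent {
  event: "report_viewed";          // hypothetical event name
  userId: string;
  orgId: string;
  tab: string;                     // which high-level report tab was open
  activityMetric: string;          // e.g. "calls" or "emails"
  conversionMetric?: string;       // optional layered conversion metric
  breakdown?: "team" | "person";   // comparison dimension, if any
  timestamp: number;               // Unix epoch milliseconds
}

// A thin wrapper around whatever analytics client is in use.
// In practice this would forward to an analytics SDK; console.log stands in here.
function trackReportViewed(event: ReportViewedEvent): void {
  console.log(JSON.stringify(event));
}

trackReportViewed({
  event: "report_viewed",
  userId: "u_123",
  orgId: "org_456",
  tab: "activities",
  activityMetric: "calls",
  conversionMetric: "conversations",
  breakdown: "team",
  timestamp: Date.now(),
});
```

Each interaction of interest (selecting a metric, switching tabs, applying a filter) would emit a structured event along these lines, which is what makes it possible to see which metrics and views users actually choose.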
Satisfaction Surveys
Partway through the beta program, our product researcher sent in-product surveys out to the beta users to measure satisfaction with the feature. Scores came in extremely high, though we were made aware early on that a number of core metrics and ways to break down the data were not yet available in the product. We prioritized them quickly.
User Interviews
We also scheduled interviews with a number of beta users to collect additional feedback and to test concepts for upcoming releases. These interviews were exceptionally insightful, teaching us what was working and what needed attention. The feedback grew more positive with each interview, and we knew we had built the right product.
Exit Strategy
Within just a few weeks, we had cleared the bulk of the last-minute bugs found in the product. Our usage metrics were strong, and satisfaction scores were some of the highest of any feature in the product. We removed the beta tag and pushed to production.
v1 Release
Activity Metrics
The first version of our reporting product was largely focused on activities. A user can break down sales activity by type, disposition, or email interaction. They can choose from a dozen different metrics, set custom historical timeframes for reporting, and decide how they want to view the data.
Conversion Metrics
After choosing a specific primary activity metric, a user can then layer one of a dozen different conversion metrics on top of it. These include call result metrics, conversations, email interactions, or whether the activity resulted in a success.
Compare by Team or Person
Data can be analyzed even further by comparing across teams or individual people. This allows the user to visualize which teams are working harder and which need to increase their effort. Additionally, the user can filter by specific teams or people, giving front-line managers a look into just their own teams.
Flexibility in Reporting
Flexibility is the core tenet of reporting in Playbooks. Our offering provides dozens of activity metrics alongside an equally impressive list of conversion outcomes. Reports can be generated by timeframe, play used, person, or team, and broken down even further to fit the way the user wants to report.
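To make the shape of that flexibility concrete, here is a minimal TypeScript sketch of how a report configuration along these lines could be modeled. The type names, field names, and metric values are illustrative assumptions, not the actual Playbooks data model.

```typescript
// Illustrative sketch: names and values are assumptions, not the
// actual Playbooks Reports data model.
type ActivityMetric = "calls" | "emails" | "dials" | "tasks";       // ~a dozen in the real product
type ConversionMetric = "conversations" | "emailInteractions" | "successes";
type Breakdown = "team" | "person" | "play" | "timeframe";

interface ReportConfig {
  activityMetric: ActivityMetric;        // the primary metric being charted
  conversionMetric?: ConversionMetric;   // optional layered conversion outcome
  breakdown: Breakdown;                  // how the data is compared
  timeframe: { start: Date; end: Date }; // custom historical reporting window
  teamIds?: string[];                    // optional filters for front-line managers
  personIds?: string[];
}

// Example: calls converted into conversations, compared by team, for one quarter.
const q1CallConversions: ReportConfig = {
  activityMetric: "calls",
  conversionMetric: "conversations",
  breakdown: "team",
  timeframe: { start: new Date("2020-01-01"), end: new Date("2020-03-31") },
};
```

Treating each report as a single declarative configuration like this is what would let any activity metric combine with any conversion outcome and breakdown, rather than building each of the potential hundreds of charts by hand.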
Competitive Analysis
Our first version of reports brought our product very close to feature parity with our competitors, and our future roadmap will take us well past their capabilities, giving our product a competitive advantage. Additionally, our tool has been specifically designed for flexibility, giving our users access to potentially hundreds more reports than competing offerings can serve.
Roadmap & Iterations
We are continuing to build onto this product. During the beta process, shortcomings in our reporting metrics were surfaced, and we immediately prioritized them into our engineering efforts. We have continued to collect feedback from our users as we move forward with the product.
Now that I am serving as product manager for the reporting tool, I spend a lot of my time discussing the offerings directly with our users. I capture feedback and maintain a long list of requests and requirements as I consider the future direction of the roadmap.