Developing the Qmlativ Data Miner
Several months ago, I shared a post about the user-centered development process at Skyward. While it's important to understand how the process works on a conceptual level, it's hard to do justice to just how much of a difference it makes to your experience in the software.
I've been waiting for a good, solid example, and I think I found the perfect candidate in our Qmlativ data miner. This is that story, from beginning to end and back again—several times over. I hope you'll take note of how the input of users just like you shaped what we did at every step along the way.
Why is it important to loop back to the beginning? After spending time defining the project, ideating solutions, testing, and learning, shouldn’t we be done and have a feature ready to ship to users? Not quite. You'll see why later in this post.
Beginning with the end in mind
Data miner began with a team of Skyward experts from multiple departments. This cross-functional team of product owners, product managers, programmers, systems managers, and UX designers covers the “iron triangle” of user-centered product development: business value, technical feasibility, and user experience are all represented to ensure a highly valuable product.
Deb, the product manager for data miner, collaborated with several internal subject matter experts to validate her assumptions and define the “what” and the “why” of the data miner feature. We’d create a powerful but easy-to-use reporting tool for Qmlativ based on how customers already used the data miner in our more-established SMS 2.0 solution. Deb gathered feedback from customers who had used SMS 2.0 over the years, and once our assumptions were in place, we asked for more customer feedback to confirm we were on the right track. This collaboration helped ensure an accurate, meaningful definition of success.
Sketching the skeleton
From there, we fast-tracked ideation—the brainstorming phase of the project. We started building a skeletal version of data miner, called a minimum viable product (MVP), based on our initial assumptions.
Observing users firsthand gave us the context to keep development focused directly on the end user. Programmers had to create the basic building blocks to meet both user needs and business needs while considering how those needs translate into existing elements in the software.
“This kept our code base robust and maintainable, which did two things: it made troubleshooting easier and allowed additional features to be added quickly,” Leslie, one of our programmers, explained.
From the beginning, the team understood they’d be creating several versions, so we strategized to maximize time and effort. One particularly tricky part of data miner was the field selection, which performs the important task of finding and selecting data fields for a report. A very basic field selector was used in the early versions of the feature. Here are three ideas that didn't make it into the final product:
A simple field selector mock-up
Another option for selecting fields
A proposed wizard
“We knew field selection was going to take a long time, and we wanted to get something out there,” Adam, another programmer, commented. “Then we could get feedback and make it a lot better.”
This is how user-centered development stories go: we consistently circle back to our customers for feedback that drives our decision-making. Our team had to be comfortable showing customers a skeletal product, a foundation we knew was limited, as we continued to develop it. We were eager to hear whether we were on the right path before we continued to build functionality.
The sooner we can get customer eyes on our product, the sooner we know if we are creating a good user experience. We’re constantly showing users what we’re developing, then asking whether our assumptions hold.
Gathering user data
Once we had a first iteration to show customers, we formulated our strategy for obtaining feedback. Ten customers and a handful of internal stakeholders participated in several testing methods:
- Contextual interviews: We observed customers completing assigned tasks and asked each user for individual input.
- Diary studies: Users were encouraged to try an early version of the software at their convenience for one week and were asked to journal their experience in a Google Doc to give live feedback.
- Follow-up interviews: After usability enhancements were made, we followed up to see if we had improved their experience.
In addition, we visited user groups to ask current users of SMS 2.0 how they used data miner. We learned that many of these customers, close to 90%, were self-taught. This really informed how we looked at Qmlativ’s data miner and how intuitive it needed to be.
Adding up all our user testing opportunities, we spoke to over 100 customers about data miner in 2017.
Learning from feedback
All this feedback resulted in about 15 projects: enhancements to address usability issues customers helped us uncover during testing. Even small things, like swapping a couple of buttons or renaming a field selector, could result in a better experience. Thanks to the cross-functional team, everyone was aware when a “small” change could add up to many hours of programming on the back end. One example was totaling reports: to a user it looks like a checkbox, but it involves far more programming than meets the eye.
Even as we’re constantly checking in with customers, we’re also checking in with internal stakeholders to make sure our team stays aligned with company goals. Continuous feedback loops provide great data to share with stakeholders. Taylor, a product owner for data miner, spent a lot of time creating harmony among the different functional groups on the team. When ideas conflicted, Taylor took feedback from one group to another and helped the team settle on solutions.
The 15 user-inspired projects resulted in a second version (V2) of data miner. Benchmark data, called key performance indicators (KPIs), showed a significant improvement between the different versions.
Qmlativ's data miner in action
More people indicated that V2 was easier to use, had a more appealing design, and performed better, and their overall satisfaction increased. This is how we track data to determine whether a release was successful—in other words, whether users are happy and satisfied.
Customer feedback comes in many forms!
But it wouldn’t be a proper UX story if it ended here. Remember, these initial versions of data miner were shared with customers but not released on a grand scale to all of Skyward’s users. Another step we took was to map out the different versions we envisioned with team members and stakeholders across the company; of course, we’d run these plans by customers later. Seeing the big picture of where the software is going helps cut down on extra programming work.
“We pushed back exporting and importing specifically,” Adam explained. “If we do it early on, we’re still going to be making fluid changes and it’s going to result in a lot of maintenance. Everything we do from now on would need to be able to be imported and exported.” The team decided to wait on this feature to avoid slowing down the production of other features.
As more and more users start dabbling in Qmlativ’s data miner, we hope we’ll receive more usage data and feedback. Then we’ll discover even more opportunities to improve. And the cycle of user-centered design will continue.
Follow-up Resource: We need you
Want to help create the next big feature? Help us build a better experience by joining our User Research Panel.