Here at Oildex, we adopt our users’ success as our own. We do that by giving our users a voice in our design process, constantly testing and validating our OpenInvoice, Field Ticket, Data Exchange, and Business Relations products.
When our customers are building their businesses on top of our software, we don’t want to make guesses. We have made significant investments in our usability research practice to ensure customers have a productive, secure, and error-free experience. If you attended one of our Community Conferences in September, you got to experience some of our research techniques firsthand. These techniques fall into two buckets:
- Qualitative Research
This is where we dig into who our users are and what they need to be successful in their work. This type of research is broader in scope and less task-focused. Qualitative methods expose market opportunities, demographic data, industry trends, and other high-level insights that we use to keep our products on pace with customer needs.
During qualitative testing, we ask a lot of open-ended questions and try to uncover users’ higher-level objectives. This type of research drives some of our long-term roadmap prioritization.
- Quantitative Research
When we’re really fine-tuning our user interface, we’ll get into more specific usability testing. Here we’re looking for numbers and objective data. As part of our quantitative efforts, we might be timing tasks, optimizing layouts and workflows, and investigating errors.
Our quantitative research focuses on uncovering opportunities to optimize our products, and it typically contributes to fine-tuning the user interface.
The three most common types of studies we do with our users are:
Surveys
Surveys help us measure both qualitative and quantitative data. We’ll send out questionnaires to get insight into what our users want, what they don’t want, and how well our systems are performing against expectations.
Eye Tracking
We run eye-tracking software that shows us where users look on the screen. This is helpful when we’re considering page layouts, interface elements, colors, typography, and so on. An eye-tracking study gives us insight into which elements stand out most to the user, so we can make adjustments that simplify workflows.
Moderated Usability Testing
Over the past year, we’ve sat with users and observed as they complete tasks using our software. Occasionally we’ll do this on a video conference call, but the best sessions are when we’re physically in your office with you. The “moderated” part means a researcher is present with the user, either in person or on webcam, observing and asking questions as they complete tasks. We’ll go into these sessions with a few questions or some prescribed tasks we’re looking to observe. We’ll ask participants to “talk aloud” as they navigate a workflow; this gives us invaluable insight into what our users need and do not need in our software.
This level of observation, combined with comments from users in the context of their work, is critical to assessing our product’s usability. The data we gather in moderated usability sessions directly inform some of the enhancements and features in our product releases.
We structure all of our test sessions to be easy and quick, lasting between 45 minutes and an hour. Surveys and click tests are more limited in scope, so they can be completed in just a few minutes.
If you would like to help us improve our products by participating in testing sessions, we’d love to talk. Email firstname.lastname@example.org and we can get started.