User research is a field that studies user behaviors, needs, and motivations through observation, task analysis, and feedback. The goal is to improve usability, using experimental and observational research to guide how a product is built, how it is designed, and which development priorities come first. This might be done through observation, running experiments, interviews, and similar methods.
Research work should be infused into all aspects of the product lifecycle. This allows for rapid prototyping, which in turn lets organizations focus on building the things customers want, in the way customers need them built. When done right, this shortens the time to market, informs the prioritization of development, and helps explain user behaviors.
So if UX research involves a lot of observation and feedback, Handrail is a repository for storing the findings of those research projects. If you think about it, each project is a research study, much as it would be at an academic institution, and so Handrail organizes data into studies. These might map well to an Epic in Jira.
Each study has a plan and a guide. The plan identifies the name, the purpose, and who will be participating. Participant criteria define who should participate, and from there you can see a list of the people involved as well as a timeline for when the study will be run. Here, you can also import contacts to involve them in the study.
The guide lists the questions or topics that will be covered. You can add about as many questions as you can think of, but the point is to lay out the topics or tasks that will be repeated with each participant in the study. For example, you might ask a researcher to observe a specific action and define how that information gets reported back. A little extra time spent being deliberate about your line of questioning here goes a long way toward getting consistent results, especially if multiple researchers are involved in the study.
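To make the plan and guide concrete, here is a minimal sketch of how a study's pieces might be represented as data. The class and field names are assumptions made for illustration, not Handrail's actual data model.

```python
# Illustrative sketch only: names and fields are assumptions, not Handrail's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Plan:
    name: str
    purpose: str
    participant_criteria: str                               # who should participate
    participants: list[str] = field(default_factory=list)   # imported contacts
    start: date | None = None                                # study timeline
    end: date | None = None

@dataclass
class Guide:
    # The ordered topics, tasks, or questions repeated with every participant.
    questions: list[str] = field(default_factory=list)

@dataclass
class Study:
    plan: Plan
    guide: Guide

study = Study(
    plan=Plan(
        name="Onboarding flow usability",
        purpose="Find where new users stall during setup",
        participant_criteria="Signed up in the last 30 days",
        participants=["alice@example.com", "bob@example.com"],
        start=date(2021, 3, 1),
        end=date(2021, 3, 12),
    ),
    guide=Guide(questions=[
        "Walk me through creating your first project.",
        "Observe: does the participant find the invite button unprompted?",
        "What did you expect to happen after clicking Save?",
    ]),
)
```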
Within each study, there are a number of sessions. These sessions capture the topics covered and the feedback from each person participating in the study. There's a timer, so the researcher can see how long each session lasted and how long was spent on each topic. You can tag something covered as an Insight and then, when you're done, go to the Analysis Board and see sorted cards for the study.
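Along the same lines, here is a rough sketch of what a session might look like as data, with per-topic timing and feedback that can be tagged as an Insight. Again, the names are illustrative assumptions rather than Handrail's schema.

```python
# Illustrative sketch only: field names are assumptions, not Handrail's schema.
from dataclasses import dataclass, field

@dataclass
class Note:
    topic: str
    feedback: str
    seconds_spent: int
    is_insight: bool = False   # tagged Insights surface on the Analysis Board

@dataclass
class Session:
    participant: str
    notes: list[Note] = field(default_factory=list)

    @property
    def total_seconds(self) -> int:
        # How long the whole session lasted, summed from per-topic timers.
        return sum(n.seconds_spent for n in self.notes)

    def insights(self) -> list[Note]:
        return [n for n in self.notes if n.is_insight]

session = Session(
    participant="alice@example.com",
    notes=[
        Note("Create first project", "Hesitated at the template picker", 240),
        Note("Invite a teammate", "Could not find the invite button", 420,
             is_insight=True),
    ],
)
print(f"Session length: {session.total_seconds}s, insights: {len(session.insights())}")
```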
How might other people within an organization consume the output of a Research Ops team? This is different for every organization. But if a study can be considered an epic, think of the topics as defining what's in the study, with each question feeding into the summary and specific insights or findings referenced in a given Jira card. If a developer then wants to drill down into the raw research to see where those Insights were derived, they can do so pretty easily given a link to the study.
The raw data for each study can then be exported as a standard CSV file and brought into a number of other tools, whether to track research performance over time or to apply quantitative measures to the qualitative data: tagging, categorization, sentiment analysis, or visualizations.
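As a sketch of what that downstream analysis might look like, the snippet below reads an exported CSV, counts tags, and computes a naive keyword-based sentiment score per topic. The file name and column names (topic, response, tags) are assumptions rather than Handrail's actual export schema, and a real pipeline would likely use a proper NLP library for sentiment.

```python
# Minimal post-export analysis sketch. The column names ("topic", "response",
# "tags") are assumptions -- check the actual export schema before using this.
import csv
from collections import Counter

POSITIVE = {"love", "easy", "great", "helpful", "clear"}
NEGATIVE = {"confusing", "slow", "broken", "frustrating", "hard"}

tag_counts = Counter()
sentiment_by_topic = {}

with open("study_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Count how often each tag appears across all sessions.
        for tag in row.get("tags", "").split(";"):
            if tag.strip():
                tag_counts[tag.strip().lower()] += 1

        # Naive keyword-based sentiment: +1 per positive word, -1 per negative.
        words = row.get("response", "").lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        sentiment_by_topic.setdefault(row.get("topic", "unknown"), []).append(score)

print("Most common tags:", tag_counts.most_common(5))
for topic, scores in sentiment_by_topic.items():
    print(f"{topic}: mean sentiment {sum(scores) / len(scores):+.2f}")
```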
Analytics isn't as necessary for those doing limited 5-subject usability studies (as Jakob Nielsen wrote about back in 2000 at https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/), but it does become necessary when you're looking for higher confidence in answers to analytical questions, like which features people would like added to a piece of software, or when gaining insight into new product lines, potentially by persona, a job frequently taken on by Research Ops.
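The reasoning in the linked article comes from a simple model: each test user uncovers roughly 31% of the usability problems in an interface, so n users uncover about 1 - (1 - 0.31)^n of them. The quick calculation below shows why five users is often enough for usability findings, but not for estimating how widespread a preference is across a whole customer base.

```python
# Nielsen's model: each user finds about L = 31% of the usability problems,
# so n users find roughly 1 - (1 - L)**n of them.
L = 0.31

for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> ~{found:.0%} of usability problems found")

# Five users lands around 85%, which is plenty for spotting usability issues,
# but sizing demand for a feature still calls for a larger sample.
```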
Handrail also has an option for personas, so you can map research findings to the persona they best fit. Here, you can name the persona and use the imagery customers already use for their global personas, then build a story and define the motivations, goals, frustrations, and pain points for each persona.
There are a number of methodologies that researchers use to analyze information, and Handrail supports both qualitative and quantitative approaches. On the qualitative side, these include Prototyping, Personas, Task analysis, Ethnographic studies, Guerrilla testing, Scenarios, Expert review, Focus groups, Card sorting, Contextual design, Parallel design, and Content analysis. Quantitative methods that pair well include Surveys, First click testing, Eye tracking, Web analytics, and A/B testing.
Handrail is one tool for doing this type of data entry. You can also use custom Confluence pages, Airtable, or specialized Research Ops tools like Aurelius and Glean.ly. And then there are visualization tools like Kumu or Miro for mapping relationships, and so on.