A guide to research questions

As one of the lead product designers at Delphia, I developed a series of guides to help our team with user research and discovery.

What is a user research question?

A user research question articulates what you want to learn about your users in your study. These questions determine the methods you use, the insights you uncover, and the decisions you make based on those insights. Good research questions should be: specific, practical, and actionable.

  • Specific: You know when you’ve found an answer and you can find that answer within the scope of a study.

  • Practical: You can reasonably answer the question with the time and resources you have available.

  • Actionable: Whatever you learn will be used to make decisions or changes.

✅ Do this

What are the primary motivating factors behind why young professionals begin budgeting?

How many times per month do young professionals check their spending?

❌ Not this

Why do people budget?

⚠️ This question is too broad. When you do not have a defined audience, your insights will vary greatly and it will be difficult to answer this question within a study.

What is an assumption?

We almost always have assumptions at the beginning of a new project because we usually don’t know everything about our users. Assumptions are beliefs we expect to be true but do not yet have evidence to support.

It’s important to avoid treating assumptions as fact. Making decisions based on incorrect assumptions can have consequences. The best way to mitigate this is to document your assumptions and create a plan to validate them through research, turning them into facts.

💫 Tracking and turning assumptions into facts

  1. Document your research questions about user behaviours, attitudes, or motivations

  2. Document your assumptions about these behaviours

  3. Execute research to test assumptions

  4. Document your facts based on collected user data

Types of research questions

Descriptive

Evaluate behaviours that already exist or are ongoing. They ask: “what does X look like?” or “what is X?”

Example: What percentage of young professionals use budgeting software?

Causal

Evaluate whether one or more variables cause or affect one or more outcome variables. They ask: “what effect does X have on Y?” or “how does X influence Y?”

Example: How does offering access to premium features affect user engagement?

Comparative or relational

Evaluate the relationships between two or more variables. They ask: “how does X compare or contrast with Y?”

Example: What is the difference in spending tracking between male and female young professionals between the ages of 20 to 30?

Choosing your research methods

We write out our research questions because well-written questions help us determine which methods to use.

Attitudinal vs. Behavioural

“What people say” vs “what people do”.

  • Attitudinal research assesses users’ preconceived attitudes or feelings toward an experience.

  • Behavioural research assesses what the user does.

Qualitative vs. Quantitative

Questions that ask “how,” “what,” or “why” vs. questions that ask “how much” or “how many”.

  • Qualitative studies generate data about behaviours or attitudes based on observing or hearing them directly (think interviews, usability tests).

  • Quantitative studies gather data about behaviours or attitudes indirectly (think surveys, A/B testing).

In-study context of use

Context of use has two definitions. The first focuses on how the product is used within the study itself.

Natural or near-natural use of the product

Gathering data about behaviours or attitudes towards the product as close to reality as possible; in these cases interference is minimized.

Example:

We shipped our new MVP for monthly budgeting.

We want to measure monthly engagement in-app and assess customer satisfaction after using the new feature.

We have set up analytic events to assess user engagement and set up intercept surveys to gather qualitative data on customer satisfaction.
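To make the analytics side of this example concrete, here is a minimal sketch of what building one of those engagement events might look like. The event and property names (e.g. "monthly_budget_viewed") are purely illustrative, not an actual schema from the product described above.

```python
# Illustrative sketch of an analytics event payload for the budgeting MVP.
# All event and property names here are hypothetical.
import json
from datetime import datetime, timezone

def build_event(user_id: str, event_name: str, properties: dict) -> str:
    """Serialize an analytics event, ready to send to a tracking endpoint."""
    event = {
        "user_id": user_id,
        "event": event_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "properties": properties,
    }
    return json.dumps(event)

# Example: record that a user opened the new monthly-budget view.
payload = build_event("user-123", "monthly_budget_viewed", {"source": "home_tab"})
```

Counting events like this over a month gives the in-app engagement measure; the intercept surveys then add the qualitative satisfaction data alongside it.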

Scripted

Gathering data on insights for specific product areas.

Example:

We redesigned the experience for adding daily expenses due to various customer complaints about how difficult it was to use.

We want to measure and assess how quickly and easily users can now add their expenses.

We will use a scripted and consistent usability test to produce reliable usability metrics.
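Two of the usability metrics such a scripted test commonly produces are task success rate and time-on-task. A small sketch, using made-up participant results for the “add a daily expense” task:

```python
# Illustrative sketch: computing task success rate and mean time-on-task
# from scripted usability-test results. The data below is invented.
from statistics import mean

# Each tuple: (did the participant complete the task?, seconds taken)
results = [
    (True, 42.0),
    (True, 35.5),
    (False, 90.0),  # gave up / timed out
    (True, 51.2),
    (True, 38.8),
]

success_rate = sum(1 for done, _ in results if done) / len(results)
# Time-on-task is conventionally averaged over successful attempts only.
time_on_task = mean(seconds for done, seconds in results if done)

print(f"Success rate: {success_rate:.0%}")        # 80%
print(f"Mean time on task: {time_on_task:.1f}s")  # 41.9s
```

Because every participant follows the same script, these numbers are comparable across rounds of testing, which is what makes them reliable before-and-after measures for the redesign.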

Limited

Gathering data using a limited form of the product. Users interact with or rearrange design elements that could be part of a product experience, and are given the opportunity to discuss how their proposed solutions would meet their needs and why they made certain choices.

Example:

We want to explore a new concept for family budgeting because we heard from our users that it’s difficult for them to budget as their expenses are shared.

We want to validate that this feature solves their problem before we build it.

We use concept testing to assess if they want or need this product or service.

Decontextualized

These studies do not involve using the product at all; they gather data beyond usage and usability.

Example:

We have refined our value proposition.

We want to validate its alignment with solving our users’ problem.

We conduct interviews to gather qualitative insights about what users value and assess how well that aligns with our value proposition.

Circumstantial context of use

The second definition focuses on the circumstances of use. These are the external factors which may impact the way users use or think about your product. The following questions should be answered:

  • Where do your users engage with your product? (e.g. mobile, desktop)

  • What is happening to the user when they are using it? (e.g. social or emotional influences)

  • What is physically or socially preventing users from completing their tasks? (e.g. a third party has to act first)

  • When does usage happen and what triggers it? (e.g. timing and coordination)

  • What expectations do users bring to the task? (e.g. mental model)

  • Why do users want to do things in the order that they do? (e.g. workflow, motivation, flow)

  • What makes sense to users, and why does that differ from how you think about it? (e.g. content, labeling, problem-solving)