Doing UX Research in the AI Space

As the world of Artificial Intelligence (AI) continues to advance, teams are increasingly engaged in rapid prototyping activities to drive innovation. However, it’s essential to keep in mind the purpose behind these endeavors. The question we must ask is, “What problem are we trying to solve?”

It’s not uncommon for groundbreaking technologies to lead to product innovations. However, there is a risk that teams build technical solutions in search of a user problem rather than solutions that address existing needs. To avoid this, it’s crucial to prioritize the user perspective throughout the design and development of AI solutions.

UX Researcher Support

If your team has an assigned UX Researcher, you already have the support you need for your AI-specific research topic. Reach out to your assigned stage UX Researcher and follow the research prioritization process. They will make sure your research is prioritized alongside the other projects identified within their stage.

If your team does not have an assigned UX Researcher, Nick Hertz is managing research requests for groups without a designated researcher. You can still prioritize your AI research topic by opening a research issue and then adding it to the AI research-specific prioritization calculator.

The Guidelines

To successfully integrate the user’s perspective into AI solution development, we have outlined several guidelines:

Guideline 1: Problem Validation – Identify and Understand User Needs

An AI solution does not, by itself, reveal the user problem it is meant to solve. To identify and understand user needs and determine whether an AI solution addresses a genuine problem, consider the following approaches:

  • Review existing research: Start by digging into existing research knowledge. Platforms such as Dovetail and the UX Research Drive are good starting points, and your assigned stage researcher can also point you to relevant insights. Don’t forget to explore research conducted outside of GitLab as well.

  • Use case definition (recommended): Leverage existing research and your domain expertise to formulate assumptions about the user problem the AI solution seeks to address. Phrase the problem statement in the Jobs to be Done (JTBD) format, for example: “When [circumstance a person is in when they want to accomplish something], I want to [something the person wants to accomplish], so I can [expected outcome].” Validate these problem statements through a quantitative online survey that assesses how frequently users encounter the problem and how important solving it is to them. Include additional parameters such as how users currently solve the problem and how difficult that is for them.

  • Extended solution validation: Extend solution validation studies with questions that uncover user needs before participants engage with the AI-powered prototype or feature. Include questions about job tasks, workflows, tools, expectations, and pain points. Moderated sessions are recommended for this approach.

  • Generative research (recommended for low confidence): If your confidence in the problem statement is low or you don’t yet understand user needs well, conduct generative research to gain deeper insight into a user group and their needs. While this approach requires more time, it yields a rich understanding of users’ needs, goals, and pain points, which you can use to ideate on new solutions.

Guideline 2: Pre-Solution Validation – Collect User Feedback Before Building

You can validate a future AI-powered feature while it’s still being developed by using Wizard of Oz prototyping, in which a human simulates the AI’s responses behind the scenes. This approach allows you to capture users’ expectations and requirements early on, which can inform the engineering effort of training the AI.

When preparing the prototype, consider the following:

  • Data collection: If your solution includes personalization, collect relevant user data to create a more realistic experience in the prototype. Ensure that users are aware of how their data will be used.

  • Include “wrong” recommendations: As AI technology is probabilistic and not always accurate, include recommendations that may not be directly connected to the user’s needs. This helps gauge what users find acceptable and where their frustration lies.

Instead of asking users if they would use the AI feature, focus on understanding their problem or need and evaluating how helpful the solution is in addressing it.

Guideline 3: Solution Validation and More – Collect Feedback on Usability and Beyond

During solution validation of an AI-powered prototype, it’s essential to collect feedback not just on usability but also on other dimensions:

  • Baseline data: Collect information on how users currently solve the problem at hand. This allows us to assess the impact and helpfulness of the AI solution.

  • Trust: Evaluate if users trust the information provided by the AI. Understanding their level of trust is crucial, as lack of trust may deter users from employing the solution. Ask questions like “How much do you trust the [feature name] provided?” or “Do you trust [feature name] with [task]? Why/Why not?”

  • Feedback acceptance: Check if users feel comfortable providing feedback on the system, such as when a code suggestion is not helpful. User feedback helps improve AI performance, so it’s vital to enable users to provide feedback on the AI’s recommendations.

  • Attitudes towards third-party AI services: If the AI solution relies on third-party services, it’s important to understand user awareness and their attitude toward these services. This insight sheds light on users’ mental models and the impact on GitLab as a brand.

During solution validation, aim to collect at least three data points to account for the variability in AI output. This can be achieved by assigning similar tasks and observing how participants react to the AI’s responses in different scenarios.

Guideline 4: Learning from AI Errors

As AI systems are probabilistic, mistakes are inevitable. It’s essential to understand how these mistakes may impact users and their perception of the AI system. Some recommended actions include:

  • Research activities: Plan research activities to assess which mistakes are acceptable and which should be avoided at all costs.

  • Prototype setup: Design your prototype in a way that it includes “wrong” recommendations, allowing you to capture user reactions to AI errors.

Guideline 5: Planning for Longitudinal Research

AI evolves as users interact with it over time, and their mental models change as a result. To ensure the continuous delivery of valuable AI solutions, it’s crucial to understand how these mental models develop and to evaluate the performance of AI solutions as use cases and user numbers grow.

We are currently piloting a set of AI metrics that enable the evaluation and tracking of user experiences with AI-powered features over time.

AI User Experience Metrics (Pilot)

To assess how well AI-powered features meet user needs, we have developed a set of metrics. These metrics focus on eight constructs identified in a literature review and comprise 11 survey questions. The constructs are:

  • Accuracy
  • Trustability/Fallibility
  • Value
  • Control
  • Error handling
  • Guardrails
  • Learnability
  • AI limits

We have created a survey using these metrics, which can be sent to participants using AI features. If you wish to use this survey, please contact Anne Lasch for access to the Qualtrics project.

References

  • People + AI Guidebook by Google
  • User research for machine learning systems – a case study
  • Testing AI concepts in user research
  • Human-centered machine learning