[Empathy with AI] Discover the Problem, the Competitor, and the Target User Using ChatGPT
đź”­

[Empathy with AI] Discover the Problem, the Competitor, and the Target User Using ChatGPT


Design thinking has always been our preferred method for addressing problems in a creative and compassionate way. As we explore the possibilities of AI, we’re discovering new ways to enhance this process. Think of AI as a helpful companion that can support us throughout our design journey. Together with AI, we can explore more possibilities while staying true to our design principles.
Before diving deeper into the process, we must be mindful of AI’s limitations.

AI limitations

  1. Visual input processing
    1. Most AI tools cannot process visual input, and neither humans nor AI tools can analyze usability-testing sessions based solely on transcripts. So for now, do not trust AI tools that claim to be able to do exactly that.
  2. Context understanding
    1. This is still a major problem. AI insight generators cannot take into account the study objectives, research questions, insights or tags from previous research cycles, participant context, product or user-group background information, new versus existing users, task lists, or interview questions.
  3. Lack of citation and validation
    1. The tools are unable to differentiate between the real session transcript and the researcher’s notes, which is a major ethical issue: participants’ words and actions must always be clearly distinguished from our own interpretations or presumptions. The absence of citations also makes accuracy verification extremely hard, because information generated by AI systems can seem very credible while being incorrect.
  4. Unstable performance
    1. Another issue is unstable performance and usability. None of the tools the NN/g researchers tested performed well or was easy to use; they reported outages, errors, and generally unstable performance, at least for now.
  5. Bias
    1. In AI, bias can originate from algorithms (computational bias), data collection (statistical bias), training data (systematic bias), or human judgment (human bias). Training data can introduce systematic biases, such as historical and institutional bias, and statistical biases, such as dataset sampling that is not representative enough. People who use AI-powered results in decision-making can then add human biases such as anchoring bias. As a result, bias can infiltrate research efforts at multiple levels, and these tools do not yet have mechanisms in place to prevent that.
While AI is still unable to truly determine which problems humans should solve, it can provide us with some curious places to start. We are the ones who can recognize the actual issues people face by applying design thinking and combining abilities such as psychology, observation, critical thinking, domain knowledge, empathy, and curiosity.
Source: Accelerating Research with AI, https://www.nngroup.com/articles/research-with-ai/
AI-generated information should always be handled with caution, but if we cross-reference it, fact-check it, or even ask the AI to cite its sources, it can be a fascinating starting point for our research.

The problem space

General question

Let us imagine a scenario where we are unsure what makes a VR headset comfortable for our upcoming VR gym application, and we want to consider the usability factors of using virtual reality technology on a daily basis.
Let’s see how ChatGPT can be used to explore this problem space. We can start with a broad query that gives GPT some context, and then concentrate on parts of its answer, using the following prompt:
“As an extended reality user experience designer, help me to identify the usability factor of daily experiences of using virtual reality technology. I want to know what the problems are with using virtual reality on a daily basis. And please list the problems from the most challenging to the least.”
 
The result:
[Screenshot of ChatGPT’s response]
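If we prefer to script these explorations rather than use the chat interface, the same role-plus-task prompt can be sent through the OpenAI API. The sketch below is a minimal, hypothetical example in Python: it assumes the official `openai` package, an `OPENAI_API_KEY` environment variable, and uses `gpt-4o` purely as a stand-in model name.

```python
# Minimal sketch: sending the same role-plus-task prompt through the OpenAI API.
# Assumes the official `openai` Python package, an OPENAI_API_KEY environment
# variable, and "gpt-4o" as a stand-in model name.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "You are an extended reality user experience designer."},
    {"role": "user",
     "content": ("Help me to identify the usability factors of the daily "
                 "experience of using virtual reality technology. I want to "
                 "know what the problems are with using virtual reality on a "
                 "daily basis. Please list the problems from the most "
                 "challenging to the least.")},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

Keeping the prompt in a `messages` list also makes the follow-up questions below easier to script, because each new question can simply be appended to the same conversation.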

Focus on a smaller context

As we can see, GPT’s response was quite extensive, and we will begin by focusing on a smaller portion of it. To do that, we could try a prompt such as:
“I am especially curious to learn more about the specific factors that cause physical discomfort. Please help me to expand it.”
The result:
[Screenshot of ChatGPT’s response]
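In the API version of this workflow, “focusing on a smaller portion” simply means appending ChatGPT’s previous answer and our follow-up question to the same `messages` list, so the model keeps the earlier context. A hedged sketch, continuing the hypothetical example above (where `client`, `messages`, and `response` were defined):

```python
# Continue the same conversation: append the assistant's earlier answer and our
# follow-up question, then ask again. `client`, `messages`, and `response` come
# from the previous sketch.
messages.append({"role": "assistant",
                 "content": response.choices[0].message.content})
messages.append({"role": "user",
                 "content": ("I am especially curious to learn more about the "
                             "specific factors that cause physical discomfort. "
                             "Please help me expand on that.")})

follow_up = client.chat.completions.create(model="gpt-4o", messages=messages)
print(follow_up.choices[0].message.content)
```

The same append-and-ask pattern covers the later steps as well: asking for sources, exploring the competitive space, and describing the target persona.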

Dig deeper

We can now begin delving deeper into the body of content we have at this point. Before we proceed, however, we should pay attention to the first limitation AI is already showing: what is the source of this information? Let’s challenge it by saying something like:
“Can you point me toward some research that was done on why motion sickness, eye strain and fatigue, and neck and muscle pain are the factors that contribute to the physical discomfort of using virtual reality technology?”
The result:
[Screenshot of ChatGPT’s response]
GPT has already provided us with some guidance as we attempt to analyze the usability factors of using a virtual reality headset on a daily basis. Let us see how we can take this further.

The competitive space

We can continue by trying to understand the competitive environment, perhaps conducting some research on our options to find the most comfortable headset, which will help us increase the time users spend in our future VR gym application. Let us see if we can do that by simply asking GPT something like:
“Are there any virtual reality headsets that are currently helping users with the physical discomfort problem? Help me to list all the available VR headsets from the most comfortable into the least.”
The result:
[Screenshot of ChatGPT’s response]
We can see that ChatGPT generates a lot of content here and points us to a number of headsets. At this point, it makes sense to research those headsets, make sure they really exist, and build our own understanding of what the competitive space looks like to complement what GPT provided. Let’s narrow the scope by asking ChatGPT to point us toward the most popular hardware.
“According to the result, what’s the most popular VR headset?”
The result:
[Screenshot of ChatGPT’s response]
From here, we can see that the most comfortable and most popular headset among users is the Meta Quest 3, so we can use this as a preliminary finding to combine with real data when convincing stakeholders to develop future apps for Meta Quest 3 hardware.

The target user

Next, we will try to understand what kind of audience is suitable for using the Meta Quest 3 in a virtual gym activity. What would the person we are targeting look like? Let’s find out how GPT would describe our ideal persona by simply prompting it:
“Can you describe our ideal persona for a VR gym that uses MetaQuest 3 as its hardware?”
The result:
[Screenshot of ChatGPT’s response]
We have gone far enough in this story to get a sense of how we might investigate the problem space of a design problem we are attempting to resolve. Keep in mind that the information generated by GPT or other tools is limited and should be questioned.
Fact-checking everything and paying close attention to what we find is advised; the quality of the responses we receive depends on the quality of our prompts and on our persistence until we get the refined answers we are looking for.

Tips

Here are a few tips, taken from the NN/g article cited above, for keeping our research sharp when we bring AI into our workflow (a small prompt-helper sketch that applies a few of these tips follows the list):
  1. Always ask AI systems to cite primary sources, and then go check those sources.
  2. Use tools specifically designed for information seeking (such as Perplexity or ScholarAI), but remember that no generative AI system will be free of misinformation, bias, or hallucinations.
  3. Ask your AI tool to follow established best practices when generating options for tasks or questions. If you don’t like the results, you may need to explicitly list the characteristics you want the output to have.
  4. Ideally, have a human research expert review your final list of ideas. If you’re an expert, that could be you. If you’re new to research, contact a more experienced researcher for guidance.
  5. When asking AI tools to complete research-related documentation, provide them with a template as a starting point.
  6. Watch out for mistakes in how the system completes the documentation. For example, double-check that the correct data collection permissions are outlined in your consent form.
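As a small illustration of tips 1, 3, and 5, here is a hypothetical helper that asks the model to follow established best practices, requests primary sources, and passes a documentation template as a starting point. It continues the earlier Python sketches; the function name, the persona template, and the `gpt-4o` model name are illustrative assumptions, not a standard API.

```python
# Hypothetical prompt helper applying tips 1, 3, and 5: ask for best practices,
# request primary sources, and provide a documentation template as a start.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative persona template (tip 5): the fields are an assumption.
PERSONA_TEMPLATE = """Name:
Age range:
VR experience level:
Fitness goals:
Frustrations:
Preferred workout context:"""

def ask_with_sources(task: str, template: str | None = None) -> str:
    """Send a research task, asking for best practices and verifiable citations."""
    content = (
        f"{task}\n\n"
        "Follow established UX research best practices. "
        "Cite primary sources for every claim so I can verify them."
    )
    if template:
        content += f"\n\nUse this template as a starting point:\n{template}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

print(ask_with_sources(
    "Describe our ideal persona for a VR gym app running on Meta Quest 3.",
    PERSONA_TEMPLATE,
))
```

The citations it returns still need to be checked by hand, as tip 1 and the limitations section above point out.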
 

References:

  1. Design thinking is an iterative process in which you seek to understand your users, challenge assumptions, redefine problems, and create innovative solutions that you can prototype and test, https://en.wikipedia.org/wiki/Design_thinking
  2. Crafting Product-Specific Design Principles to Support Better Decision Making, https://www.nngroup.com/articles/design-principles/
  3. Usability Testing 101, https://www.nngroup.com/videos/usability-testing-101/
  4. Computational bias, https://en.wikipedia.org/wiki/Algorithmic_bias
  5. Statistical bias, https://en.wikipedia.org/wiki/Bias_(statistics)
  6. Systematic bias, https://en.wikipedia.org/wiki/Observational_error
  7. Accelerating Research with AI, https://www.nngroup.com/articles/research-with-ai/
  8. Writing Tasks for Quantitative and Qualitative Usability Studies, https://www.nngroup.com/articles/test-tasks-quant-qualitative/
  9. 6 Mistakes When Crafting Interview Questions, https://www.nngroup.com/articles/interview-questions-mistakes/
 
🤓
 ⛳ If you found this helpful, please: Follow me on Twitter | LinkedIn | Instagram | Medium