In software development, researchers collect data about end users' needs to make sure the resulting system supports the work those users are trying to accomplish. One way to do this is the interview, a method of collecting qualitative data that teams can use to define system requirements. The data from interviews then feeds into the agile and user-centered design lifecycles as an input for creating personas, task analyses, task scenarios, user stories, and usability testing scripts, and for understanding users' mental models. These artifacts inform the ideation sessions that occur throughout the project and are critical for moving it along the lifecycle. It should be noted that interviews are not the only means of gathering information for these artifacts; a mixed-method approach of interviews, contextual inquiries, and surveys is recommended.
User interviews are important and need to be taken seriously. Interviews are not conversations. A conversation consists of two or more people mutually sharing information of equal social weight. In a user interview, the interviewer's sole intention is to understand the interviewee: the conversation is entirely about them. Good interviewers are empathetic and try to see the world from the perspective of their interviewees in an attempt to understand how they think and feel about their work (i.e., their mental model) and how the technology they use may or may not be conducive to what they are trying to accomplish.
Conducting user interviews for a software project can be a daunting task, especially for the first time. Here are six tips to help you out: 1) Know the objective, 2) Stay on topic, 3) Don't ask leading questions, 4) Don't break the silence, 5) Don't interrupt, 6) Collect A.E.I.O.U.
Research methods are a means of getting answers to specific questions that are related to business objectives. Researchers use different methods based on the questions they are trying to answer. Before conducting user interviews, or any other research method, make sure you know the business objectives. What is the project trying to accomplish? What is the researcher trying to understand? How will that knowledge help the stakeholder make a positive impact on their business? Once the objectives are understood, recruit a representative sample of interviewees and formulate an interviewing script that you think will get you the answers you are seeking.
Interviewing scripts should be partitioned by topic. For projects focused on reimagining how an enterprise uses software, topics may include: daily workflow, exceptions, team organization and collaboration, and common mistakes, workarounds, or bottlenecks experienced on the job. When initiating the interview, start by focusing on a single topic. When asking follow-up questions, always stay on the topic being discussed, and don't move to another until you have exhausted the current one.
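A topic-partitioned script can be kept as simple structured data so the interviewer always knows what remains in the current topic before moving on. The sketch below is hypothetical; the topic names come from the list above, but the questions and the `next_question` helper are illustrative assumptions, not a prescribed tool.

```python
# Hypothetical interview script, partitioned by topic.
# Questions are illustrative placeholders, not from a real project.
script = {
    "daily workflow": [
        "Walk me through a typical day.",
        "What do you do next?",
    ],
    "exceptions": [
        "Tell me about a day that didn't go as planned.",
    ],
    "team organization and collaboration": [
        "Who do you work with to get this done?",
    ],
    "mistakes, workarounds, and bottlenecks": [
        "What slows you down the most?",
    ],
}

def next_question(topic, asked):
    """Return the next unasked question for the current topic,
    or None when the topic is exhausted and it's safe to move on."""
    for question in script.get(topic, []):
        if question not in asked:
            return question
    return None
```

Keeping the questions grouped this way makes it obvious when a topic is exhausted: only when `next_question` returns `None` is it time to open the next topic.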
For example, when discussing the topic of workflow, users often mention a number of things related to different topics that are all important. They may say something along the lines of, "Well first I meet with the team, then I check my to-do list, then I go on my route … ". When listening, make note of all the different steps in the workflow.
In this case, proper follow-up questions would be, "And then what?", or "What do you do next?", in order to stay on the workflow topic. Only after exhausting this topic would it be appropriate to go back and ask follow-up questions that dive deeper into each sub-topic, like "What is discussed at the team meeting?" or "How do you know what to discuss?".
Why stay on topic? Staying on topic makes it easier for interviewees to recall facts. Something of note: cognitive psychologists believe the human brain categorizes and stores information in schemas. When people recall information to answer a question, they do so by scanning information stored in schemas. Theoretically, each conversation topic is stored in its own schema. Asking a series of questions related to a single topic, or schema, makes it easier for interviewees to recall information about the topic being discussed. This allows the researcher to get the most information possible. In short, to get the most out of each interview, always stay on topic. The end of the interview will provide an opportunity to ask questions about how the different topics relate and to dive deeper into the sub-topics of each parent topic.
It should also be mentioned that it is impossible to know everything about a subject before you start studying it. Because of this, you shouldn’t stick to the script too rigidly during a user interview. Be ready and willing to adapt to surprises! If the interviewee introduces a topic you didn’t anticipate, be ready to learn.
When asking questions that start a new topic of conversation, it's best to keep them broad. That is, don't ask leading questions. Ask open-ended questions so that interviewees aren't prompted to think about issues in a particular way and the scope of possible answers isn't limited.
For example, the leading question, "What do you dislike about smartphones?", prompts users to think negatively about smartphones and limits the scope of possible answers to negative thoughts and experiences. By contrast, the open-ended question, "What do you think about smartphones?", extends the scope of possible responses to include positive, negative, and neutral thoughts, without prompting the interviewee to think about smartphones in a particular way. In fact, the direction the interviewee chooses is an indicator of what is most top-of-mind and important to them. That freely chosen direction is insight that is lost when interviewers ask leading questions, and it can be very informative when analyzing responses across a sample.
When asking non-leading questions, it’s natural for interviewees to pause and think about responses. It’s common for novice researchers to break this silence by giving the interviewee example answers before the interviewee responds on their own. This is bad practice. The urge to break the silence stems from the fact that pauses like these are awkward in normal conversations, but what needs to be remembered is that an interview is not a normal conversation. The rules and customs are different.
For example, when asking the question, "What do you think about smartphones?", the novice interviewer may wait a few moments and then break the silence with something like, "Do you download many apps? Which ones are your favorites?". Breaking the silence with filler questions like these can completely derail the interview. If filler questions are asked in every interview over the course of a project, they can completely skew the findings.
Silence after a good question is actually a good thing. It means the interviewee is thinking about their response and is engaged in the conversation. It means that the researcher asked a question that provoked thought and reflection. Don’t break the silence.
Interruptions aren’t just bad when interviewees are thinking about how to respond to a question. They’re also inappropriate when interviewees are speaking. When people talk about a topic, what they say oftentimes spurs the memory of something else that allows them to answer the question in more detail, which gives the researcher more insight than expected. This directly relates to the importance of staying on topic, or schema, until it is time to continue.
Doing this may sound easy, but it's something that actually takes practice. The best way to learn not to do this is to record and re-listen to interviews. When I first started interviewing users and transcribing audio files, I noticed that there were times when interviewees were about to say something really interesting, but I interrupted by asking a follow-up question prematurely, losing insight into what they were going to say. From experience, I’ve learned that all follow-up questions need to be written down and asked only when the interviewee completely finishes their thought, and it is almost never appropriate to interrupt.
If you’re new to using interviews as a research method for informing the design of technology, it is not uncommon to feel a little lost. While each project is different, generally information about A.E.I.O.U. should be collected.
Attitudes: What are users' attitudes towards things relevant to the project? What do they like and dislike about the technology they are currently using for work?
Environment: Learn about the physical domain where the work is conducted. What about the environment supports or constrains the work? How can technology be used in the given environment to support work processes? What are the lighting conditions like? How might they impact the design of the interface?
Interactions: How do users interact with technology, artifacts, objects, etc. to perform their work? Can any interactions be altered for efficiency and simplicity? For instance, are hourly workers standing in line at a kiosk to check in to work by typing a serial number on a keypad? Can this interaction be altered so that an employee is automatically checked in by breaking a geofence? If so, this can eliminate the downtime associated with checking in at the kiosk.
Objects: What are the objects a worker group uses to do their job? Does any of it cause re-work? For example, are employees filling out paper forms and then transcribing the forms to a program on a desktop at a later time? Can this be streamlined? What are the common errors associated with this workflow? How long do they take to fix?
Users: Who are all the players involved with getting work done? If the goal of the project is to promote efficiency, user groups can't be studied in isolation. Team dynamics need to be understood and supported through good design.
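The geofence check-in idea mentioned above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the site coordinates, radius, and function names are all assumptions, and a real system would also need to handle GPS noise and repeated boundary crossings.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def auto_check_in(worker_lat, worker_lon, site_lat, site_lon, radius_m=100.0):
    """Check the worker in automatically once they are inside the geofence,
    replacing the manual serial-number entry at the kiosk."""
    return haversine_m(worker_lat, worker_lon, site_lat, site_lon) <= radius_m
```

A worker's phone would report its position periodically, and the first `True` result of `auto_check_in` would record the check-in, eliminating the kiosk queue described in the interview data.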
While conducting interviews requires forethought and preparation, considering things like sample size, question sets, and data collection tools, the real insights don't surface until the data is analyzed. Many techniques are available to the qualitative data analyst, including affinity diagramming, interpretative phenomenological analysis, and, for the really ambitious, grounded theory. Some methods are too unwieldy for small projects, while others are too simple for big projects. Researchers can always find guidance in remembering to answer the specific research questions related to the project's business objectives, and in keeping in mind that the outputs should be data for constructing personas, task scenarios, user stories, and usability testing scripts. They should also give designers a sense of users' mental models. The aim will always be to construct these artifacts in only enough detail to quickly move the project to the next step in the agile and user-centered design lifecycle.
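At its simplest, the analysis step resembles an affinity diagram: coded excerpts from interview transcripts are clustered by theme so patterns across participants become visible. The sketch below assumes a hypothetical tagging scheme; the participant IDs, theme tags, and quotes are invented for illustration.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant_id, theme_tag, quote).
excerpts = [
    ("P1", "workflow", "First I meet with the team, then I check my to-do list."),
    ("P2", "bottlenecks", "The kiosk line wastes fifteen minutes every morning."),
    ("P1", "bottlenecks", "Retyping paper forms into the desktop app takes an hour."),
    ("P3", "workflow", "I plan my route before leaving the depot."),
]

def affinity_groups(items):
    """Group coded excerpts by theme, mimicking the clusters
    that emerge from an affinity diagramming session."""
    groups = defaultdict(list)
    for participant, theme, quote in items:
        groups[theme].append((participant, quote))
    return dict(groups)
```

Counting how many distinct participants appear under each theme is one quick way to see which findings recur across the sample rather than belonging to a single interviewee.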