Interviews are a real-time interaction between an interviewer (who sets the topic and generally poses the questions) and one or more participants (who respond to the questions). Generally, interviews are conducted verbally in person, over Zoom, or on the phone, and they might include materials for the participants to interact with or work on. They are usually recorded with video, audio, or both. Sometimes the recordings are transcribed for analysis; other times, the recordings are analyzed directly. During the interview or immediately thereafter, the interviewer makes field notes about what happened. Afterwards, these notes can be used as metadata about the interview, as a primary data stream for analysis, or to augment the analysis of the recordings.
This is not a primer about job interviews. If you are looking to get a (faculty) job, you might look here instead.
Why do interviews?
Interviews are great if you want to learn something from or about people. Let’s unpack some of the affordances here.
Interviews are an opportunity to engage as a human with another human. Engaging humans has a couple of subtle affordances: people are generally more willing to talk about their ideas or experiences if they’re thinking about you as a person, and you can check their vibe in real time and adjust accordingly. As human interactions, interviews also hold a wealth of “non-word” information like tone and pacing of words; you might also see body language, facial expression, and gesture. All of these are important parts of human communication. These affordances mean that interview data can be rich and nuanced, yet targeted to your research questions.
Kai worries that their graduating seniors aren’t well prepared for professional life after graduation. Their department has asked them to make recommendations about how the department could better support students in planning for post-grad life.
Kai believes that connecting with these students on a human level is very important to understanding how they think about life after graduation. They think the nuances of how students think about their experiences and build their plans are important, so Kai decides to use interviews as a primary data stream in their project.
Asking follow-up questions
In an interview, you can ask follow-up questions that help your participant expand on their reasoning. This feature is great when you’re interested in the processes of their thinking: how they connect ideas together, why those particular ideas, and how their ideas unfold in sequence. If your research study is about student conceptual understanding or problem solving, then you probably want to use interviews to probe their thinking as they work through specific problems.
Follow-up questions are also great if you’re still working out how to pose a question, because you can probably figure out in real time whether your participant understands what you’re asking for and adjust your questioning accordingly. Follow-ups are also great for asking participants to expand on previous responses (or parts of their response). If your study involves asking about participants’ perceptions or experiences, follow-up questions can help you probe more deeply into the parts of those perceptions or experiences that are meaningful to your study, using the language your participants bring up.
Alyssa is a second year PhD student, and she would like to study how introductory courses in biomedical engineering affect undergraduate students’ motivation to continue in the field. Alyssa plans to conduct interviews with students towards the end of their first year, looking for a variety of experiences across different genders and grades.
Because Alyssa is putting together her dissertation study, she isn’t sure yet what kinds of responses she’ll get from her interview respondents. She’s planning to ask follow-up questions so that she can probe their thoughts in an emergent way.
Comparing interviews and other data streams
As a data stream, interviews are often compared to written responses, like surveys, homework or exam responses, or essays. Interviews are great when the thing you want to learn about is nuanced, process-oriented, or you’re not entirely sure how to ask. Written responses are generally poor at those three things. However, interviews are a lot more time-expensive for researchers, especially if you want a lot of them: it’s more time to collect the data, and it might take more time to analyze it well. Depending on who your research participants are, interviews might also be more difficult to schedule or more time consuming for your participants than surveys or copies of their coursework.
Julian is curious about how students in general physics connect vector diagrams with how vectors decompose and add. His department has 15 sections of recitation. He plans to collect students’ homework from 3 sections of general physics to see which representations they use to solve problems with vectors.
Julian’s students’ homework won’t tell him what they’re thinking or why they include those particular diagrams. However, it’s a lot easier to collect three sections’ worth of homework than to interview three sections of students! Julian thinks he might augment the data he gets from students’ homework with a few interviews to probe their ideas more deeply.
Interviews are also often compared to in situ observations, like classroom video. Classroom video is great when you want to know what students or instructors really do in real classrooms, especially if your research questions are about processes of human interaction. However, classrooms are generally noisy, you can’t ask follow-up questions in the moment to probe what happens, and sometimes you need to account for extra people who aren’t part of your research study. Interviews have none of these limitations.
You might need to consider how interview data connects with other data streams for your project. While interviews can be used as the sole source of data, they are more commonly used in conjunction with other data streams: as the primary source of data supported by other data streams; or as a supporting data stream with other sources of data. Generally speaking, the larger or more complex a research project is, the more likely it is to include multiple streams of data.
What purposes can interviews serve?
As you design your research project, your research questions will help you decide your purposes in interviewing. Ask yourself what you’re hoping to learn about or from your participants, and how to best elicit that information. Your purpose in interviewing can help determine what kinds of interviews are appropriate.
In education research, interviews are generally used for the following purposes.
Probe how your participants think in a controlled setting.
Perhaps you are curious about student problem solving or topic understanding. You might want to explore how students think about your topic, how they connect different representations, or which pieces of knowledge they bring together. Asking participants to solve problems in an interview will let you probe how they think in more depth than either a written test or classroom observations. Alternately, perhaps you are curious about participants’ experiences and how they shape their ideas or identities. In an interview, you can elicit their thoughts and probe their connections.
Test and refine your questions.
For example, if you’re working on developing or validating written questions for a survey, you should test their validity (are people answering what you’re asking?) and applicability (are their responses relevant to your research questions?) by asking people to explain their thinking as they answer each question. If you’re still developing the questions, you can change them to improve their validity and applicability; if your survey is already fixed, you can adjust your analysis to reflect your new understanding of how people like your survey participants are likely to respond.
Triangulate with other data streams from the same participants.
Perhaps your study involves classroom observations of teachers, and you want to check with them about how they prepare for class or reflect on how class went. Or perhaps you have students’ grades and demographic information, and you want to investigate how they think about choosing or changing their majors. If you have several kinds of data from the same participants, interviews can be a great opportunity to connect different pieces of information and build a richer picture.
Prototype and test your developed materials.
If your project includes developing curricula or other resources, testing your materials in a controlled setting can help you understand if they are working the way you intend. If you’re developing curricula, these are sometimes known as “teaching-learning interviews”; if you’re developing websites, these are sometimes known as “usability testing interviews”.
Learn information from experts.
These “informational interviews” help you, the interviewer, learn something about a topic or process. For example, you might be curious about the history of physics education research, so you ask some of the progenitors of the field about what happened in the 1980s and 90s. Or you might need to learn about how the institutional budget model works, in preparation for asking faculty about how the model affects them. A key difference between these interviews and other kinds of interviews is the focus: you are learning something about a topic by asking an expert; your research is not about the person you are interviewing or their experience.
These different purposes for interviews will affect what kind of interview data you want to collect: the kinds of questions you ask, whether and how you bring in materials for your participants to work with, how many people you want to interview, and what characteristics you want in your interview participants.
What kind of interviews should I do?
In education research, your research questions and purpose in interviewing can help you decide what kind of interviews to perform. The most common kinds of interviews in education research are:
Type | Description |
---|---|
Semi-structured | You have a set of questions to ask, but follow-up questions can emerge naturally from your conversation with your participant. These are the most common kinds of interviews. They’re used for all of the purposes above, especially near the beginning stages of a project when you’re not sure how participants are going to respond. |
Fully-structured | You have a set of questions to ask, perhaps including follow-ups, but the specifics of each question are known ahead of time and there’s a pre-determined decision tree about which questions to ask under which circumstances. When a nurse asks you about your medical history, this is a fully-structured interview protocol. These protocols are usually used near the end stages of a project, when you have a really good idea of the space of responses. |
Think-aloud | You have specific tasks that you want your participants to perform, and you want them to perform the tasks while you watch. You want to know what they’re thinking, so you ask them to talk while they’re working. You don’t want to change their thinking, so you avoid asking follow-up questions or giving hints or directions. These protocols are usually used to test questions you have already developed (e.g. for surveys) or materials that are nearly complete (e.g. websites), which will later be used without an interviewer present to guide the respondent. |
Because semi-structured interviews are so common, there are myriad ways to do them. Here are some common sub-types.
Type | Description |
---|---|
Clinical | You want to deeply understand how your participants think about a topic, but you don’t particularly want to change their thinking about it. Because you want to deeply understand, you have follow-up questions to probe their thinking; because you’re exploring the space of possible ideas, those follow-up questions might be emergent. However, you’re not trying to teach them anything, so you’d like your follow-up questions to merely elicit their ideas, not change their ideas. These are used extensively across all of the purposes above, though they are less common for testing developed materials. |
Teaching-learning | You have specific problems that you want your participants to work on, and you want to see how they solve them with your help. For example, you’re working on a new in-class activity, so you “teach” it to interview participants to see how it helps them learn more. These interviews are a kind of semi-structured interview: you have some questions planned out, and you’re planning to ask follow-up questions to help them progress if they get stuck. You might ask participants to share their thoughts as they solve the problems, but unlike true think-aloud interviews, you’re allowed to ask follow-up questions to probe their thinking in specific ways. Unlike clinical interviews, which try to minimize teaching, your follow-up questions can often be aimed at helping your participants learn. These interviews are almost exclusively used to test and refine your materials, and they are extensively used for that purpose. |
Conversational | You want to explore an idea or experience together with your participant. As the interviewer, you generally set the topic of your exploration. However, unlike clinical interviews, it’s ok for your participant to ask you follow-up questions or for you to bring your personal experiences into the interview. Conversational interviews tend to focus on human topics like motivation or persistence, and less commonly on science topics like electric fields or mitosis. They usually have a substantial relationship-building component between the interviewer and the participant. These interviews are generally used to learn from experts or to probe your participants’ ideas on a topic. |
Prompted reflective | You have another piece of data from the same participant, and you want them to reflect on it with you. For example, you might look at a faculty member’s syllabus with them to ask why they structured it that way, or you might watch a recording of an athlete’s performance with the athlete to dissect their movements together. A key feature of this kind of interview is that you’re looking at the piece of data together while they reflect. If you’re asking them about something that happened in the past without the thing in front of you, it’s not a prompted reflection. |
In larger projects, it’s common to use multiple kinds of interviews. You might find that at different stages in your research project, you have slightly different purposes in mind, and therefore you need to do different kinds of interviews. For example, you might start with a need to learn information from experts using conversational interviews, move to a need to probe how students think using clinical interviews, and finish with a need to test materials you’ve developed using teaching-learning interviews. In this case, you will probably have three different interview protocols over the lifetime of the project, with different sets of questions, materials, and participant characteristics.
How many interviews is enough?
This is an important question because it affects the logistics and timing of your research project. Interviews are time-expensive for researchers to perform and analyze, and they’re work for your participants. They can be complicated to schedule, and you might need to plan for interview incentives in your budget. How many do you need to do?
Ideally, you want to do the minimum number of interviews that you need to answer your research questions. From a planning perspective, it’s easier to perform fewer interviews than you planned than to add additional interviews later, so plan for a few extra and put procedures in place to analyze your data quickly enough that you can stop early. Conversely, you don’t want to dramatically over-estimate your needs, because then you’re misallocating your resources at the planning stage.
How many should I plan for?
It depends on your purpose in doing each round of interviews, and where you are in your research project. Iterative design can help you make good choices about how many interviews you need, because early phases of research will help you make good plans for later phases. Not every project needs to aim for prevalence research, but every project that does prevalence research needs to start with exploratory research.
Getting started
In the beginning parts of your research project, you’re still learning about the space of ideas and developing your interview protocol. At this phase, you should plan for small numbers of interviews in each round so that you can rapidly make good decisions about how to proceed. These interviews are not data: they are helping you understand the issues, develop a protocol, and/or prototype some materials so that you can get good data later.
- If you’re looking to gather information from experts, it might be enough to perform 2-3 informational interviews.
- If you’re getting started in developing your interview protocol, practice it on 2-3 people between protocol revisions; plan 1-2 sets of practice interviews before you submit your IRB protocol.
Exploratory research
If your research questions are about exploring the space of ideas, building case studies, or developing corroboratory interview data to support other data streams, then you’re looking for enough interviews to find interesting or actionable information, but not so many interviews that you need to fully determine the space of possibilities.
- If you’re looking to get a broad sense of how people think about a particular topic, generally 5-6 is enough to elicit the most common parts of the space of ideas.
- If you are looking to develop contrastive case studies of individuals, generally you want to plan for about twice as many interview participants as case studies that you want to develop. For 2-3 contrastive cases, that’s 5-6 interviews; for 1 case study, 2 interviews. If possible, you should perform at least one more interview than you plan to have cases, even if your first few interviews are super awesomely interesting.
- If you’re looking to use interview data to corroborate other data streams (for example, you have a large survey corpus and you’re looking to attach descriptive stories to particular patterns in the data), then 2-3 interviews per major pattern is probably enough; however, if you discover that these interviews are wildly disparate and you expected them to be very similar, then you need to either reconsider the pattern or do more interviews.
- If you’re looking to conduct usability testing on your website, generally 4-6 interviews is a good idea for each major revision or piece, and 2-3 for minor revisions or follow-ups where you have conflicting information.
- If you are looking to develop personas in preparation for developing resources, plan for generally 2-3x as many participants as personas, spread widely around the kinds of demographics you’re looking to serve. If you’re looking for a set of 3-5 personas, that’s 8-15 interviews. Choose a larger number if you’re looking to serve multiple different demographics, or a smaller number if you’re serving a more cohesive group of people. (The sketch just after this list shows how these rules of thumb add up.)
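If it helps to see how these rules of thumb add up, here’s a minimal sketch in Python. The function name and the exact multipliers are mine, chosen from the ranges above; treat them as illustrations, not prescriptions.

```python
# Rough planning heuristics from the list above -- illustrative only, not hard rules.

def exploratory_interview_budget(case_studies=0, personas=0,
                                 major_patterns=0, usability_rounds=0):
    """Return a rough count of exploratory interviews to plan for."""
    budget = 0
    if case_studies:
        # ~2x as many participants as contrastive cases,
        # and always at least one more interview than cases
        budget += max(2 * case_studies, case_studies + 1)
    if personas:
        # ~2-3x as many participants as personas; 3x shown here
        budget += 3 * personas
    if major_patterns:
        # ~2-3 corroboratory interviews per major pattern in your other data
        budget += 3 * major_patterns
    if usability_rounds:
        # ~4-6 participants per major revision of a resource; 5 shown here
        budget += 5 * usability_rounds
    return budget

# Example: 3 contrastive cases plus 4 personas -> 6 + 12 = 18 interviews
print(exploratory_interview_budget(case_studies=3, personas=4))
```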
Prevalence research
If your research questions are about how prevalent some ideas are, then you are looking to saturate. Saturation has occurred when you have so much data that conducting your most recent interview did not bring in new information to answer your research question. If your goal is saturation, then you can only know whether you have achieved it in retrospect, which is frustrating from a planning perspective. Different research questions saturate at different points; however, there are some broad commonalities which can aid your planning.
If you are looking for student ideas around a particular STEM topic or faculty ideas about teaching within a particular discipline, 15-20 participants is a good number to budget for. I would plan to schedule for 7-10 and check quickly for early saturation, with an intended second round to occur ASAP. In my grant proposal, I would budget for 20, and in my IRB application I would say 30.
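One low-tech way to do that quick check is to track how many genuinely new codes or ideas each successive interview contributes, and watch for that number to flatline. Here’s a minimal sketch, assuming you keep a running set of codes per interview; the codes shown are made-up placeholders for whatever emerges in your own analysis.

```python
# Toy saturation check: how many *new* codes does each successive interview add?
# The codes below are hypothetical placeholders, not real data.
codes_per_interview = [
    {"cost of tuition", "family expectations"},   # interview 1
    {"family expectations", "peer comparison"},   # interview 2
    {"cost of tuition", "advisor influence"},     # interview 3
    {"peer comparison", "family expectations"},   # interview 4: nothing new
    {"advisor influence", "cost of tuition"},     # interview 5: nothing new
]

seen = set()
for i, codes in enumerate(codes_per_interview, start=1):
    new = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s)")

# If the count of new codes stays at zero for several interviews in a row,
# you may be approaching saturation -- though, as noted above, you only
# know for sure in retrospect.
```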
If you are looking to subset your participants by categorical characteristic (e.g. gender, prior grades, tenure status, etc), then 12-15 participants per category (e.g. 12 pre-tenure + 12 tenured) is a good idea. If you are not planning to subset by category, but you think categories might be important anyways, you should collect this information and check to see if it is consequential midway through your data collection. If it is, then you can decide quickly to pursue subsetting or not.
Conversely, if you plan to subset by category and partway through your data collection you don’t find variation in that characteristic, then you can probably drop that characteristic from your analysis. Altogether, this suggests that when you are scheduling your interviews, you should (as much as possible) interleave participants from your different categories rather than do all of one category first and then another category.
If you want to investigate multiple categorical characteristics (e.g. gender AND tenure status), then you should aim for 10-12 participants for each combination of categories in order to saturate (gender: M/W/NB x tenure: pre/post = 6 combinations -> 60-72 participants).
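The arithmetic compounds quickly as you add characteristics. Here is a small sketch of that calculation; the category labels are just examples, so swap in your own.

```python
from itertools import product

# Hypothetical categorical characteristics -- swap in your own.
characteristics = {
    "gender": ["men", "women", "non-binary"],
    "tenure": ["pre-tenure", "tenured"],
}
per_cell_low, per_cell_high = 10, 12   # rough per-combination target for saturation

cells = list(product(*characteristics.values()))
print(f"{len(cells)} combinations -> plan for roughly "
      f"{per_cell_low * len(cells)}-{per_cell_high * len(cells)} participants")
# prints: 6 combinations -> plan for roughly 60-72 participants
```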
The more characteristics you have, the more unwieldy it becomes to saturate on all of them. Consider carefully: do you need to saturate on all these characteristics, or can some of them be more exploratory? If there are systemic effects in your population (e.g. there are very few non-binary tenured professors), saturation might be fundamentally impossible.
Generally speaking, if you are conducting a study where finding the prevalence of ideas is important and there are lots of categories to consider, then you should strongly consider mixing your methods: use interviews for exploratory or corroboratory information, and use surveys for prevalence information.