Section 2 Becoming a Researcher/Scholar / Chapter 5 The Nature of Inquiry

4. Framework of Scientific Research


Creswell's (2014) research design framework will be used to explore the research process. This framework was chosen for two reasons. First, Creswell's framework is a clear and accessible introduction for beginning researchers. Second, the scope of Creswell's text is comprehensive enough to offer an overview of each of the major scientific methodologies. Creswell's (2014) framework has three components: research paradigms (positivist, post-positivist, constructivist, transformative, and pragmatic), research methods (quantitative, qualitative, and mixed methods), and specific research designs (nonexperimental survey research, experimental, ethnographic, phenomenological, case study, narrative, grounded theory, convergent, and sequential). Although the terms quantitative and qualitative will be used early in this section, they will not be defined in detail until later. For now, it is sufficient to think of quantitative research as using numbers to describe data, and qualitative research as using words and qualities to describe data.

To hold a view of the world is to operate under a paradigm, a lens through which a person interprets the world. For science, paradigms are the tacit rules under which researchers operate when taking particular approaches to scientific inquiry. The word paradigm was given a specific meaning by Thomas Kuhn, a philosopher of science. As Kuhn (1962/1996) conceived it, “the study of paradigms … is what mainly prepares the student for membership in the particular scientific community with which he will later practice” (p. 10). Accepting a paradigm allows the researcher to embrace the traditions within a specific scientific approach, the beliefs that underlie the approach, and the practices that keep it coherent. In this way, philosophical worldviews, or paradigms, provide researchers with a means of formulating approaches to scientific inquiry that are consistent with prior practices in the field.


Creswell (2014) outlined four paradigms that form the basis of current scientific inquiry, namely post-positivist, constructivist, transformative, and pragmatic. With the exception of the pragmatic paradigm, these research paradigms are roughly consistent with those put forth by other prominent researchers (Denzin & Lincoln, 2000). In addition, positivism will be reviewed as a precursor to post-positivism. Positivist and post-positivist paradigms view inquiry as the measurement of phenomena and discovery of facts, with post-positivism adding the need for hypothesis testing. In contrast, constructivist and transformative paradigms generally view inquiry as the exploration of individual differences, the social construction of meaning, and the means to empower individuals (transformative) to participate in discovery. Finally, the pragmatic paradigm views inquiry as the employment of all practical means to obtain knowledge, including the use of both quantitative and qualitative methods. While Creswell's (2014) list of paradigms is useful as a heuristic for understanding dominant perspectives in social science research, it is neither exhaustive nor an entirely accurate depiction of every kind or combination of scientific approach. As Shulman (1981) stated, “research methods are not merely different ways of achieving the same end. They carry with them different ways of asking questions and often different commitments to educational and social ideology” (p. 10).


Shulman’s insight underscores the value of understanding paradigms within scientific research. Although most scientific studies will include a review of literature, a methods section, and an analysis of data collected, decisions made about which studies to include in the review, how research questions are to be presented (with a prediction or not), and how the data are to be analyzed depend in part on the paradigm under which the researcher operates. For example, a researcher operating under a post-positivist paradigm would use research questions to guide the study, make one or more predictions about likely research outcomes, and then seek to confirm those predictions. Conversely, a researcher operating under a constructivist paradigm would also use research questions to guide the inquiry but would not make such predictions. In each of these cases, the decisions made by the researcher are governed by the conventions traditionally used within the respective paradigm.


Two terms relevant to discussing paradigms are epistemology and ontology. Epistemology is the study of knowledge; it outlines how researchers obtain facts and justify their belief in those facts. Within social science, ontology refers to the study of being, specifically how researchers define reality and to what degree personal perception and values matter to inquiry about human existence (Quine, 1948). Because developing a deep understanding of human behavior and social interaction is a complex task, qualitative researchers often explicitly use these terms to guide and explain their inquiry. As will become clear in this section, the epistemological and ontological perspective of the researcher may determine the kind of method used and how that method is employed. The following is a historical account of the primary research paradigms.



In the nineteenth century, Auguste Comte coined the term positivism (Schmaus, 2008). This paradigm framed inquiry as the discovery of obtainable facts. Tacq (2011) explained that the term positive meant something that is real, has use, and can be measured. In the social sciences, this implied that facts were not significantly encumbered by context, that understanding the context of facts was not of prime importance. As Schmaus (2008) stated, however, the sense of discovering the real world still connoted a sense that researchers were the discoverers, and, because of this, research under positivism did not completely abandon contextual considerations. For positivist researchers, context simply was not the primary focus, nor considered substantially relevant to obtaining knowledge about the world. Within social science, positivism is no longer widely accepted as a valid approach to inquiry. There are many reasons for this, but one important reason is its neglect of context. Post-positivism and later paradigms all, to some degree, treat context as an important, and in some cases necessary, component of social science research.



Positivism was succeeded by post-positivism. In this paradigm, although the goal is still to discover facts, such facts are deemed falsifiable, which introduces the need for rigorous hypothesis testing. As Onwuegbuzie, Johnson, and Collins (2009) stated, “they [post-positivists] assert that all observation is inherently theory-laden and fallible and that all theory can be modified” (p. 121). Popper (1959/2005) qualified the shift away from positivism by proposing that, to be considered scientific, a hypothesis must be capable of being falsified, that is, subject to the process of falsification through hypothesis testing. In effect, researchers operating under the post-positivist paradigm use hypotheses and require that they be tested in the field. Quantitative researchers predominantly operate under the post-positivist paradigm.



A constructivist conception of reality refers to the idea that reality is constructed through social interaction and interaction with the environment. This social construction implies that there are multiple accounts of reality, that reality is pluralistic (Onwuegbuzie et al., 2009). In terms of epistemology, researchers operating under a constructivist paradigm view knowledge as socially constructed or socially mediated, as convention, rather than factual in the sense of either positivist or post-positivist paradigms. A researcher operating under this paradigm focuses on the unique qualities of individuals and socially constructed experience. This shift from inquiring about facts to a focus on uniquely experienced realities and how these experiences create unique or singular phenomena is not a trivial one. Within social science, the shift is from seeing the world as theory-laden but real and measurable to seeing the world as subjectively unique and socially mediated. In this view, unique experience cannot be glossed over or referred to as kinds or categories. For a researcher operating under the constructivist paradigm, experience frames and substantially alters the meaning of knowledge that is discovered. One implication of this is that, under this paradigm, scientific knowledge obtained about human experience can only be similar but not the same for any two or more people or cultures. The very idea of generalizing uniqueness, in this view, is contradictory. To generalize the results from a scientific study to populations outside of the sample, there is a requirement, among other things, to assume sufficient similarities between the two populations. Under the constructivist paradigm, even if the age, gender, and other characteristics of the participants were kept the same in the two populations, the individual differences in one population would make using the results in the other complicated, and in some cases not even viewed as possible. Qualitative researchers typically operate under the constructivist paradigm.



The transformative paradigm is also well aligned with qualitative research. This paradigm is a combination of several perspectives that Denzin and Lincoln (2000) identified as including critical theory and participatory approaches to inquiry. The overarching assumption in this perspective is the presumed existence of human oppression, and the resulting need to alleviate such oppression. Creswell (2014) explained that these perspectives necessarily link scientific inquiry with its historical and political roots and serve as foundations for the empowerment of people through the discovery of knowledge. The transformative perspective focuses on identifying the constraints placed on people by race, gender, and socioeconomic status in order to increase awareness of inherent oppression. As with the constructivist paradigm, there is a focus on subjective concerns, but the primary difference is that this approach views reality through the lens of power structures. A transformative scientific inquiry seeks to engage individuals in the process of empowerment by lifting the constraints that limit human potential. Discovery, in this view, is bracketed by the necessity to enfranchise individuals in the production of socially situated meaning making. This perspective views human collaboration as a way to emancipate socially oppressed people. An example of such meaning making is action research, which generates knowledge in a real-life setting for the purpose of improving practice (Creswell, 2014).



The pragmatic paradigm centers on the pragmatic maxim that scientific inquiry is for practical purposes (McKaughan, 2008). Pragmatic approaches to inquiry situate the inquirer in a natural setting, and allow the employment of all practical means to obtain knowledge, including the analysis of both quantitative and qualitative data. This approach acknowledges the tentative and tension-filled nature of human existence. In the pragmatic approach, reality is viewed as experience dependent, and knowledge is obtained within the context of inquiry. Dewey (1938/1986) defined this type of approach to inquiry as “the controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole” (p. 109). As Onwuegbuzie et al. (2009) and Bryman (2006) have pointed out, pragmatism typically is associated with mixed-methods inquiry.


Research Methodology

As stated earlier, a simple way of grasping the difference between quantitative and qualitative research is to think of quantitative as using numbers and qualitative as using words to describe phenomena. Viewed this way, a mixed-methods approach would use both numbers and words. Although not entirely accurate, this heuristic is a good starting point. Creswell (2014) and Wiersma (2000) provide additional scaffolding to this heuristic in stating that method differences are based upon both the philosophy of the researcher (paradigm) and the techniques employed in data collection. As such, differences in method of inquiry rest, in part, on beliefs about facts and whether such facts can be separated from the values researchers hold (Wiersma, 2000). The fact-value distinction is another way of using paradigms to characterize different types of research. In quantitative research, separating facts from values is not only normal practice but necessary to determine the validity of research. Conversely, in qualitative research, facts are viewed as inseparable from both researcher and participant values. For qualitative research, separating facts from personal values would be viewed as eliminating part of the context of inquiry, not something a qualitative researcher would want to do. The credibility of qualitative research rests in part on its value transparency.


Quantitative Research

There are two accessible ways to think about quantitative research. The first is to distinguish between describing data and inferring relationships from data; this approach compares descriptive and inferential statistics. The second is to consider differences in the ways data are collected; this approach compares the practical techniques used to collect data. Because this chapter does not assume the reader has had either basic or advanced statistical preparation, the latter approach, comparing techniques, will be used in this section. Both experimental and nonexperimental techniques will be explored. Before discussing each, there is a need to define the framework of quantitative inquiry, which includes the concepts of variables, measurement, and operational definitions.


In quantitative research, a variable represents characteristics of people, places, things, or ideas by taking on different attributes (Freedman, Pisani, & Purves, 2007). People, for example, can have different kinds of characteristics, such as gender or age. Abstract concepts such as motivation or passion can also be variables. Within the research framework, there are two primary ways of describing variables, as either independent or dependent. An independent variable represents a presumed cause or predictor of an outcome, and a dependent variable is the outcome or effect. Take, for example, teacher-student interaction. Wiersma (2000) offered the case of teaching method and student achievement in science: the teaching method is the independent variable, and student achievement is the dependent variable. The purpose of such a study would be to determine in what way, if any, teaching method affects student academic outcomes. To do this, the researcher needs to be clear about the definitions assigned to the variables.


Operationally defining variables within the research context has two benefits. First, by making clear the purpose of the study, the characteristics of the variables being explored, and how such variables are to be measured, a researcher strengthens the validity of a study. Second, by providing operational definitions, the results can be compared to studies with similar conditions, thus providing either additional support or contrary evidence for previous research. Take teaching methods, for example. Teaching methods might include giving a lecture, moderating a group discussion, or conducting in-class practical activities. Without clarity on the components that make up the phenomenon of teaching methods, a researcher could not be certain which part, if any, affected student achievement. Conversely, without defining student achievement, for example as a score of 90% or better on a standardized test, the researcher could not establish that an effect occurred. Participants also need to be defined. When a researcher states that students are sampled, a definition needs to be provided that makes clear what kind of students are sampled, such as currently enrolled sophomore-level students who have completed 30 or more college-level credits. Lastly, an operational definition needs to explain how the variables will be measured. In the case of students, a researcher might assess student achievement by analyzing multiple-choice test scores. For the teaching method variable, surveys might be used to determine the types of teaching methods used, as well as to measure students' perceived ratings of each method.
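As a rough illustration of the precision an operational definition imposes, the hypothetical thresholds from the example above (achievement as 90% or better; participants as sophomores with 30 or more college-level credits) can be written as explicit rules. The Python sketch below is illustrative only; the function names and cutoffs are assumptions drawn from the example, not part of any published instrument.

```python
# Hypothetical operationalization of "student achievement" from the example
# above: a score of 90% or better on a standardized test.
ACHIEVEMENT_CUTOFF = 0.90

def is_high_achiever(score: float) -> bool:
    """Apply the operational definition: 90% or better counts as achievement."""
    return score >= ACHIEVEMENT_CUTOFF

def is_eligible_participant(class_level: str, credits: int) -> bool:
    """Operational definition of the sample: enrolled sophomores
    who have completed 30 or more college-level credits."""
    return class_level == "sophomore" and credits >= 30
```

Writing the definitions this explicitly is exactly what allows two studies to be compared: a replication either uses the same cutoffs or documents how its definitions differ.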


Nonexperimental: Survey Research

Survey research, a nonexperimental design, uses surveys to obtain data from sample participants. Surveys are instruments that contain questions for participants to answer. Surveys can contain yes/no, numerical-rating, multiple-choice, or open-ended questions, and each of these response types, except open-ended, is scored numerically. Creswell (2014) stated that survey research describes “trends, attitudes, or opinions of a population by studying a sample of that population” (p. 155). When a survey is referred to as a scale, it uses an exclusively numerical rating system. The most prevalent numerical rating system is the Likert scale, which typically includes at least 5 points, such as 1 to 5, with 1 meaning strongly disagree and 5 meaning strongly agree (Edmondson, 2005; Wiersma, 2000). For surveys to be useful in scientific research, they must be valid and reliable.
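To make the scoring of a Likert scale concrete, the sketch below averages each respondent's item scores on a 5-point scale. The responses and the reverse-keyed item are hypothetical; real instruments specify their own keying and scoring rules.

```python
# Illustrative scoring of a 5-point Likert scale (1 = strongly disagree,
# 5 = strongly agree). Responses and the reverse-keyed item are hypothetical.
def score_likert(responses, reverse_items=()):
    """Return each respondent's mean item score; reverse-keyed items
    are flipped (1 <-> 5, 2 <-> 4) before averaging."""
    scored = []
    for resp in responses:
        items = [6 - v if i in reverse_items else v
                 for i, v in enumerate(resp)]
        scored.append(sum(items) / len(items))
    return scored

responses = [[5, 4, 2], [3, 3, 3]]  # two respondents, three items
means = score_likert(responses, reverse_items={2})  # third item reverse-keyed
```

Reverse-keyed items, which state the construct negatively, are flipped so that higher scores consistently indicate stronger agreement with the construct.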


There are three forms of survey validity: content, concurrent, and construct validity (Creswell, 2014). Construct validity is the most prevalently used measure of survey validity (Creswell, 2014). To be considered content valid, a survey's items must measure the content they were intended to measure. Concurrent validity refers to an instrument correlating with the results of an established survey, and construct validity refers to whether the individual questions have been confirmed to measure specific constructs (cognitive traits). Take the construct of passion, for example. The passion scale was developed using exploratory and confirmatory factor analysis to establish the factors underlying the passion construct and to create a tool for detecting them (Vallerand et al., 2003). Factor analysis is used “to explore the possible underlying structure in a set of interrelated variables without imposing any preconceived structure on the outcome” (Child, 2006, p. 6). In establishing construct validity, a survey is confirmed as valid for measuring a specific construct, such as passion.


If a survey does not measure the same way each time, the results obtained are not reliable. A common measure of instrument reliability is Cronbach's alpha (Peterson, 1994). The statistic eliminates the need to obtain multiple samples to establish the reliability of a survey (Cronbach, 1951). Cronbach's alpha expresses the internal consistency of the survey on a scale of 0 to 1; the higher the score, the more reliable the survey. The conventionally accepted threshold is a score of .70 or higher (Peterson, 1994). Keep in mind that doctoral students performing survey research typically use statistical software packages, such as SPSS, to perform validity and reliability tests for surveys. In addition to survey validity and reliability, the sampling design is integral to the validity of the results.
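Although packages such as SPSS report Cronbach's alpha automatically, the statistic itself is simple: alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal Python sketch, using hypothetical scores:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one list per survey item, each holding that item's
    scores across respondents. Implements
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three hypothetical items answered by five respondents.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 3]]
alpha = cronbach_alpha(items)  # roughly .89, above the .70 threshold
```

When items move together across respondents, the variance of the totals dwarfs the summed item variances and alpha approaches 1; unrelated items drive it toward 0 or below.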


Sampling refers to selecting a sample of participants from a larger population. Although there are many sampling designs, random, stratified, and convenience sampling are three common approaches. These designs lie on a spectrum from random to nonrandom selection. For example, with a target population of undergraduate students at a given university, a random sample would give every enrolled undergraduate at that university an equal chance of being part of the sample group. Suppose a researcher were interested only in male sophomore students: this would be a stratified sample, selected by gender and college level. A convenience sample would result if, say, only a portion of the students had publicly accessible e-mail addresses; it would be neither random nor randomly stratified, but based merely on conveniently available participants. Lastly, there are many factors to consider in determining a sufficient sample size. In general, the larger the sample, the more likely it is to be representative of the larger population. Response rate is one consideration, because not every person sent a survey will complete it. Fowler (2009) offers a good discussion of conventional and formulaic means of determining an appropriate sample size.
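The three sampling designs can be contrasted in a few lines of code. The population below is simulated and the sample size of 50 is arbitrary; the point is only the difference in how each sample is drawn.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Simulated population of 1,000 undergraduates.
population = [{"id": i, "level": random.choice(
    ["freshman", "sophomore", "junior", "senior"])} for i in range(1000)]

# Simple random sample: every student has an equal chance of selection.
simple = random.sample(population, 50)

# Stratified sample: draw randomly, but only from the sophomore stratum.
sophomores = [s for s in population if s["level"] == "sophomore"]
stratified = random.sample(sophomores, 50)

# Convenience sample: whoever is easiest to reach -- here, the first 50
# records. No randomness is involved, which is why it is the weakest design.
convenience = population[:50]
```

Note that the stratified draw is still random within its stratum; the convenience sample is the only one in which selection depends on accessibility rather than chance.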


Experimental Research

Experimental research is the most reliable method for determining causal relationships, but, as Pearl (2001) explained, the idea of cause and effect was not a central, or even desirable, component of initial conceptions of quantitative research. Early statisticians such as Neyman, Pearson, and Fisher all preferred to focus on correlation, not causation (Biau, Jolles, & Porcher, 2010; Pearl, 2001). Correlation refers to the degree of relationship between two or more variables (Freedman et al., 2007; Wiersma, 2000). Correlation does not imply that cause and effect exist between variables; it simply indicates that a relationship exists. For example, a researcher might determine that college level (junior, senior, etc.) correlates with GPA: the higher the college level, the higher the GPA. It would seem strange to say that being a junior rather than a sophomore causes a higher GPA, but a person could suppose that maturity, study skills, and consistent preparation in some way cause a higher GPA. To determine the cause of increased GPA, one might design an experiment.
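The GPA example can be made concrete with a small computation. The data below are invented solely to illustrate a positive correlation between college level and GPA; the Pearson coefficient is the standard correlation measure for interval data of this kind.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: college level (1 = freshman ... 4 = senior) and GPA.
level = [1, 1, 2, 2, 3, 3, 4, 4]
gpa = [2.8, 3.0, 3.0, 3.1, 3.2, 3.3, 3.4, 3.6]
r = pearson_r(level, gpa)  # roughly 0.94: a strong positive correlation
```

A coefficient near 1 indicates a strong positive relationship, but, as the paragraph above stresses, it says nothing about whether college level causes the higher GPA.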


A simple experimental design entails comparing two randomly assigned groups within a sample population. One rigorous version of this design is the randomized controlled trial (Freedman et al., 2007). A seminal, although imperfect, example of this technique is the Salk polio vaccine field trials of the 1950s (Freedman et al., 2007). In a simple experimental design, one group is the control, which does not receive the stimulus, and the other is the intervention group, which receives a testable stimulus. Within experimental research, the researcher endeavors to eliminate threats to validity, factors that might interfere with determining whether an observed change in the intervention group, compared to the control group, is genuine. Campbell's causal model (as cited in Shadish & Sullivan, 2012) essentially rests on eliminating threats to validity within the framework of an experimental design. This model strives to rule out merely chance causes for an effect. As Shadish and Sullivan (2012) explained, Campbell's formulation rests on two assumptions: the establishment of internal validity and the establishment of external validity. “Internal validity threats are experimental procedures, treatments, or experiences of participants that threaten the researcher's ability to draw correct inference” (Creswell, 2014, p. 174).
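The mechanics of random assignment, the step that distinguishes a true experiment, can be sketched briefly. The participant IDs and outcome scores below are simulated, and a real trial would use a formal statistical test rather than a bare difference of group means.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

participants = list(range(20))       # hypothetical participant IDs
random.shuffle(participants)         # random assignment, not researcher choice
control = participants[:10]          # receives no stimulus
intervention = participants[10:]     # receives the testable stimulus

# Simulated outcomes: a baseline score plus a 5-point effect for the
# intervention group, standing in for real post-trial measurements.
outcome = {p: random.gauss(70, 5) + (5 if p in intervention else 0)
           for p in participants}

def group_mean(group):
    return sum(outcome[p] for p in group) / len(group)

effect = group_mean(intervention) - group_mean(control)
```

Because the shuffle, not the researcher, decides who lands in which group, systematic differences between the groups can be attributed to chance alone, which is what licenses the causal reading of the observed effect.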


The second assumption, external validity, refers to how the observed change in the intervention group can be generalized to the larger population (Shadish & Sullivan, 2012). As Cartwright (2004) stated, generalizing results to different populations is not an easy task, as it requires nearly perfect alignment of population characteristics, including contextual and situational variables. For example, one threat to external validity is selection bias (Wiersma, 2000). Selection bias refers to researchers using personal preference to select and assign participants to control and intervention groups. This introduction of human bias in assignment decreases external validity, so there is a need to remedy this threat. One such solution is Rubin's (2004) causal model, which eliminates selection bias by using a mathematical process for assigning units and inferring causes.


The experimental design presented in this chapter is simple in form. In designing more advanced experiments, there are several considerations. For example, if it is not possible or preferable to assign participants to groups randomly, the design is considered quasi-experimental (Creswell, 2014). Other, more advanced experimental designs might include observing a single subject over time, a pretest-posttest configuration, or multiple control and intervention groups with an arrayed pretest-posttest configuration. Creswell (2014) offers an accessible treatment of these more advanced experiments.


Qualitative Research

Qualitative research primarily entails the analysis of symbols to explain human experience and interaction. The reason for this is that qualitative research focuses on individuals or groups of individuals, and people use symbols to describe contexts and situations. The constructivist paradigm might be useful here to explain the scope of qualitative research. If knowledge is constructed socially, and if knowledge creation is about meaning making and is affected substantially by social interaction, then there is a need for an expansive description to define the phenomena of inquiry. As Ponterotto (2006) explained, the term thick description has come to define the way qualitative researchers acquire and explain knowledge about social interaction. Only through observing the thoughts, feelings, and entire context of experience does the qualitative researcher capture such a thick description of reality.


A thick description is used in qualitative research to obtain a holistic or extensively detailed expression of a context or situation. Describing an apple on a table is a thin description. It is thin in that it does not describe the observer's perspective, nor does it provide details about the physical objects, such as the nuances of variation in color, shape, or texture. Stating that there is an apple on a table does not capture qualities such as why the observer was there, how those items came to be where they were observed, or where the room was located, such as in a specific house in a specific geographic location. A thick description would include all of these qualities.


There are a variety of ways qualitative researchers use a thick description to inquire about phenomena. The main approaches include ethnography, phenomenology, case study, narrative, and grounded theory.



Phenomenological Research

In phenomenological research, researchers inquire about the unique thoughts, feelings, and experiences that help describe people within situations (van Manen, 2002). For example, both researchers and sample participants might want to know about the experience of traveling to work. Do commuters get frustrated when it takes longer than usual to travel to work? How does a person travel to work? The lived experience of traveling to work in this way can be viewed as unique to an individual, with a unique means of transportation, unique route driven, or unique experience of actually driving in an automobile, such as sitting in front of the steering wheel and steering. To explore these phenomena, a researcher might observe people traveling to work, interview one or more individuals found to travel to work, or write down thoughts in a field journal, expressing personal reflections of observing and interviewing these people. In each of these examples, the focus is on unique lived experience, and the lived experience is not only the person traveling to work but also the researcher inquiring about such practices.


Case Study

Whereas an ethnographic study aims at discovering cultural artifacts and a phenomenological study focuses on individual lived experience, a case study examines time-sensitive activities that have explicit and tacit rules that affect human experience and interaction. The case study approach was first defined as inquiry for obtaining knowledge about decision making within particular cases; a more complete definition explains that a case study “investigates a contemporary phenomenon in depth and within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident” (Yin, 2009, p. 18). Case studies are time and activity dependent, which means a case study explores a set of activities within a particular time frame (Creswell, 2014). As with other qualitative approaches, the researcher could utilize artifacts, interviews, and observations to explain a particular case or series of cases. In addition, focus groups and surveys with open-ended questions are often used to collect information from participants. Although numerical surveys are not often used in qualitative research, descriptive information might be collected in a case study to aggregate demographic information about participants, such as the total number of participants, average age, or gender percentages. Yin (2009) cited Tally's Corner as a seminal example of a single-case design. The study explored the experience of African American men who were frequently found at Tally's corner in Washington, D.C., in the late 1960s (Coles, 1968). The study provided information about a subculture of individuals, “their coping behavior, and in particular their sensitivity to unemployment and failure” (Yin, 2009, p. 49). Although case studies typically include interviews with participants, such interviews might tell only part of the story, or confirm the story a researcher expects to find.



Narrative Research

In narrative research, the focus is on telling the life stories of participants. This type of inquiry entails expressing the constantly changing but meaningful experience of people. Whereas a case study has well-defined parameters, such as being time and activity dependent, narrative research tells a story that may not have such parameters or restrictions. Connelly and Clandinin (2000) offered a framework for how this might be implemented. A narrative researcher attempts to realize and embrace the idea that experiences become isolated only when reflected upon. In experience, people hold remnants from past experiences, create new unified experiences, and carry forward these new remnants, all while interacting with others in situations (Connelly & Clandinin, 2000). Narrative research then uses interviews, artifacts, and researcher observations, as co-participants in the inquiry, to help tell life stories. Connelly and Clandinin (2000) offered the example of teacher knowledge, demonstrating how two groups of teachers obtained, retained, and expressed the knowledge they used as teachers. Narrative research can be a useful tool for reflecting on experiences in particular contexts, with the realization that the retelling of stories creates a new story in and of itself. In this way, narrative research can be transformative.


Grounded Theory

In contrast, a grounded-theory approach focuses less on transformation and more on reducing the effect of preconceived notions in the research activity. Glaser and Strauss (1967/2006) outlined the components of grounded-theory research. As the term implies, grounded theory builds or discovers theory from the ground up, not by imposing theory on participants. This approach views all encountered data as possibly useful to an emerging understanding of the topic being researched. This differs significantly from quantitative research, which has a specific focus, and most qualitative approaches, which at least use a literature review to establish a theoretical framework. As previously stated, a literature review is an efficient way of determining the right questions to ask. In grounded theory, the right questions emerge from encountering data in the field. No preconceived theory is imposed on grounded-theory data. Instead of seeking out other studies completed on a topic, a grounded-theory researcher may interview and observe participants to generate themes, examine artifacts, or perform content analysis of written texts, which involves analyzing idea or word frequency in written material. The researcher then interprets the collected data to create a theoretical framework. As Glaser and Strauss (1967/2006) stated, this is called “discovering theory from the data” (p. 1). In effect, grounded theory is an inductive approach to inquiry, because it creates propositions based upon exemplars found in experience. Its approach to provisionally acknowledging all found data as valuable intersects with the approach employed by mixed-method researchers.
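The word-frequency form of content analysis mentioned above is straightforward to sketch. The transcript below is invented; in practice a grounded-theory researcher would also filter out common function words and work from far larger bodies of text.

```python
from collections import Counter
import re

def word_frequencies(text, top=3):
    """Naive content analysis: count how often each word appears."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

transcript = ("The class felt rushed. The pacing was rushed and the "
              "examples felt rushed too.")
top_words = word_frequencies(transcript)
# [('the', 3), ('rushed', 3), ('felt', 2)]
```

Even this naive count hints at how a theme ("rushed") might emerge from the data themselves rather than from a theory imposed in advance, which is the grounded-theory commitment.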


Mixed-Methods Research

A mixed-methods researcher may utilize both quantitative and qualitative methods to obtain knowledge. In one sense, exploring mixed-methods approaches brings this chapter full circle, because using a mixture of methods is the way people inquire in everyday life. People use numbers, observed phenomena, and information from others to make decisions. There is still some disagreement among researchers as to the usefulness of a mixed-methods approach, as it combines paradigmatic lenses, but the pragmatic paradigm does offer a coherent way of framing mixed-methods research (Bryman, 2006). Several types of mixed-method approaches have been developed, such as convergent and sequential designs (Creswell, 2014). In using a mixed-methods approach, the researcher has to be clear on the goals of the inquiry, because the goal of the project will determine the research design, data analysis, and application of the findings. For example, in convergent designs the quantitative and qualitative portions are conducted simultaneously; the researcher could convert the data into either quantitative or qualitative formats or display the data side by side (Creswell, 2014). As for application of the findings, approaches might include comparing and contrasting quantitative and qualitative findings, explaining quantitative findings in more detail, or providing support for an intervention in a program evaluation (Creswell, 2014). Depending upon the intended purpose, a mixed-methods approach will have varying degrees of value for a researcher and the research community.