Critical Interpretive Research: An Introduction

Block 7: Integrating Empirical-Analytic and

Interpretive Inquiry in a Critical Paradigm


The final Block addressed integration at the level of paradigm. Critical inquiry into educational phenomena draws upon techniques from both the empirical-analytic and interpretive paradigms, and rests upon ontological assumptions that integrate these two. Another look was taken at the stance, role, and effects of the researcher on the phenomenon being investigated, at research ethics and the evaluation of an interpretive account.

Week 1: Critical Inquiry Introduced

Week 1 introduced the aim of critical inquiry: to go beyond the common sense constructs of everyday life and identify the operation of obscured relations of power and force. These relations cause systematic distortions in people's understanding and produce patterns and regularities that are empirically describable, but do not reflect the intentions of the people caught up in them. The regularities detectable by measurement and statistical testing do not follow universal laws but are historical and cultural products of human practices and institutions. Nonetheless, they are generally taken as natural by those whose lives they govern.

Critical inquiry rethinks the relations between research and practice. It presumes that to understand how a social system functions one must try to change it. Theory provides a critique of existing forms of social and political reality with practical implications. In critical inquiry "the development of knowledge is comprehended as social and political action which must be understood and justified as such" (Carr & Kemmis, 1986, p. 152). It follows that the researcher practices neither naive detachment, nor participant-observation, but "participant objectivation" (Bourdieu & Wacquant, 1992), becoming an agent of change. This in turn involves the people studied as participants in the research process. (Readings: Kincheloe, 1991; Lather, 1986.) (There was no Project for Block 7, since students were completing their Collaborative Research Project.)

Week 2: Change-Oriented Research and Research Ethics

Week 2 explored ways to take research beyond theory-building and theory-testing. Change-oriented research approaches such as action research and collaborative research are critical approaches (Awbrey, 1987; Oberg & McCutcheon, 1987; Oja & Pine, 1987). An integrated analysis of an educational phenomenon (dropping out of school; caring between teachers and students; socialization of graduate students; problem solving in science classes--to pick a few topics from recent journals) requires attention to the way the phenomenon is understood by the people who are involved with it.

Attention must be paid both to their reflective understanding (Block 2) and to the way they grasp and accomplish the phenomenon in their everyday activity (Block 4). Analysis also requires attention to the field or social setting in which the phenomenon occurs (Block 6). This setting provides (some of) the conditions for the possibility of the interaction. And quantitative techniques are needed to objectify the social whole in which people are situated, and of which they have a partial and incomplete grasp; to characterize the social institutions which exert a constitutive causality on people's conduct; and to identify systematic biases in the distribution of resources of which participants are unaware.

Ethical issues in educational research, discussed in Week 2, are especially important and complex in critical inquiry.

Flinders (1992) usefully distinguishes among four ethical perspectives: utilitarian, deontological, relational, and ecological. These perspectives are concerned, respectively, with minimizing risk of harm, avoiding wrong by appeal to moral standards, avoiding imposition of the researcher's values on participants, and recognizing the researcher's responsibilities in a larger system.... The perspectives identify different ethical issues that can arise in the research practices of recruitment, fieldwork, and reporting, and provide different ways of thinking about ethical conduct of research.

Week 3: Validity and Reliability

Issues of validity and reliability, discussed in Week 3, are also important.

Validity cannot be guaranteed by proper procedure for it is not simply a technical problem (Packer & Addison, 1989; Mishler, 1990). Even natural science doesn't have objective validity procedures; these would in fact stifle research. Instead there are "criteria that influence decisions without specifying what those decisions must be" (Kuhn, 1977, p. 330); values that remain imprecise and open to interpretation, so that scientists will legitimately differ in their evaluation of a specific theory: values like accuracy, consistency, scope, simplicity, and fruitfulness. Approaches to evaluation of interpretive and critical explanations will seem flawed if judged against a traditional notion of truth and validity that seeks a procedure of evaluation, but viewed as reasonable checks on interpretation they provide as good an approach to evaluation as one can expect. (Readings: Mishler, 1990; Packer & Addison, 1989; Caputo, 1987. Also recommended: Lieberman, 1992; Kaestle, 1993.)

There are four ways of combining an interpretive component with an empirical-analytic component, depending on whether each component is descriptive or explanatory:

  1. Descriptive interpretive component, descriptive empirical-analytic component: descriptive statistics plus first person accounts treated statistically. This is one of the most common ways of combining the two kinds of material: code the material and then do statistical analysis of the codes, at least descriptive (frequency counts, etc.) and often explanatory too. This approach loses sight of (a) the perspectival character of what is coded; (b) the improvisatory character of its production; and (c) the question of whose categories these are: the participants' (how do we know?) or the researchers' (why? what's the point?).

  2. Descriptive interpretive component, explanatory empirical-analytic component: first person accounts plus hypothesis-testing. This is the second most common way. There are empirical regularities and causal relations which are not apparent to the participants; their views are "subjective" and need have little relation to the "objective" relations. On this view, qualitative analysis is at best a hypothesis-generating phase of research; the explanatory phase is properly dealt with quantitatively, in the form of hypothesis-testing and falsification of causal relationships.

  3. Explanatory interpretive component, descriptive empirical-analytic component: descriptive statistics plus social process explanations. The empirical-analytic component, in applying statistical tests, makes assumptions that are generally false. In quantifying its data (instrument measures, test scores), it also loses the semiotic character of a phenomenon: its openness to several interpretations, its ability to be viewed from several perspectives. And it ignores the question "value for whom?" On this view, quantitative material is at best a hypothesis-generating phase of research; the explanatory phase is best dealt with qualitatively, in the form of an articulation of the way people understand and make sense of the phenomenon.

  4. Explanatory interpretive component, explanatory empirical-analytic component: social process explanations plus statistical analysis of social fields of power and capital. There are influences of power (and coercion and violence) that shape people's perceptions and actions in ways they are not aware of: systematic discrimination against women or ethnic minorities, for example; "bias" in the sense of a lack of equity, of unfairness. Less dramatically, people cannot immediately observe the whole of the social order of which they are a part; they have only an insider's picture. The researcher can use various forms of quantitative analysis to objectify the whole.
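The first combination, coding first person material and then computing descriptive statistics of the codes, can be sketched in a few lines. (The code labels and segments below are invented for illustration; a real analysis would start from researcher-assigned codes on transcript segments.)

```python
from collections import Counter

# Hypothetical codes assigned to segments of interview transcripts
# (both the labels and their distribution are invented).
coded_segments = [
    "peer_pressure", "family_support", "peer_pressure",
    "teacher_caring", "family_support", "peer_pressure",
]

# Descriptive statistics of the codes: frequency counts and proportions.
counts = Counter(coded_segments)
total = len(coded_segments)
for code, n in counts.most_common():
    print(f"{code}: {n} ({n / total:.0%})")
```

Note that the counts say nothing about whose categories the code labels are, or how each segment came to be produced: exactly the losses this combination is charged with.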

Towards the Integration of Quantitative and Qualitative Techniques

One can integrate the techniques of quantitative and qualitative analysis while remaining within either the empirical-analytic paradigm or a non-critical interpretive paradigm. Or one can integrate them in a manner that moves one into a critical interpretive paradigm.

Staying within the Empirical-Analytic Paradigm
Goal:

    To generate empirical explanatory theories of human behavior; basic invariant laws of human behavior; generalizable knowledge to enable prediction and control. To search for empirical regularities and correlations through detached and impartial observation.

Treatment of qualitative material:

Transcripts and videos are rated and coded, to transform them to numerical data.

Uses for qualitative techniques:

Hypothesis generation (but not hypothesis testing).

Monitoring treatment implementation.

Triangulation (to investigate validity of quantitative measures).

Illustrative examples of quantitatively identifiable relationships (case studies).

Staying within the Interpretive Paradigm
Goal:

    To describe the ways people interpret their own behavior and the behavior of others, both reflectively (in interviews) and in action. To articulate the structures, categories and interpretive schemes that organize these interpretations.

Treatment of quantitative material:

Measurement is an interpretive process: it is guided by a practical purpose, and shaped by a tacit fore-knowledge or theory of the phenomenon. This fore-knowledge is embodied in the instrument. (Consider how even a thermometer is designed in accordance with theories of thermal expansion and conduction, inscribed with numbers that embody a definition of a temperature scale, and standardized so reading it will be unproblematic and routine.) Instruments -- in both the physical and the social sciences -- are read, just as texts are read.

Numerical quantities are the products of social convention and normative exchange. They are more like price than weight (at least as we usually think of the latter).

Quantitative concepts and analyses shape the ways people understand themselves and other people, and the ways they relate to one another.

Uses for quantitative techniques:

Hypothesis generation (but not hypothesis testing).

Selection of cases for detailed study.

Triangulation (to investigate validity of qualitative interpretations).

Indices of the outcomes (products) of social processes.

Summary descriptions of characteristics identified in case studies.

Conjunctural analysis of characteristics identified in case studies.
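One of the uses listed above, selection of cases for detailed study, can be sketched as a quantitative screen that picks out the most typical and the most atypical members of a group for subsequent qualitative work. (The names and scores are invented.)

```python
# Hypothetical test scores for a small class; the screen selects the
# student closest to the class mean (a "typical" case) and the student
# farthest from it (an "atypical" case) for detailed qualitative study.
scores = {"ana": 82, "ben": 55, "carla": 78, "dev": 97, "eli": 80}

mean = sum(scores.values()) / len(scores)
typical = min(scores, key=lambda s: abs(scores[s] - mean))
atypical = max(scores, key=lambda s: abs(scores[s] - mean))
print(f"mean = {mean:.1f}; typical case: {typical}; atypical case: {atypical}")
```

The numbers here only guide the sampling; the interpretive work on the selected cases remains qualitative.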

Moving to Critical Inquiry
Goal:

    To go beyond the common sense constructs of everyday life, to identify the operation of systematic influences that go unnoticed, and systematic distortions in people's understanding of what they are doing. To question the assumptions and beliefs of the researcher. To collaborate with participants in the resolution of practical conflicts and dilemmas.

Uses for quantitative techniques:

Objectification of the social whole in which people are situated, and of which they have a partial and incomplete grasp.

Characterization of the social fields which exert a constitutive causality on people's conduct.

Identification of systematic biases in the distribution of resources of which participants are unaware.
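The last of these, identifying systematic bias in the distribution of a resource, might be sketched as a chi-square test of independence on a 2x2 table. (All counts are hypothetical; 3.841 is the .05 critical value for one degree of freedom.)

```python
# Hypothetical counts: did group membership affect who was admitted to
# a resource-limited program? Rows are groups; columns are outcomes.
observed = [[90, 10],   # group A: admitted, rejected
            [60, 40]]   # group B: admitted, rejected

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (o - expected) ** 2 / expected

print(f"chi-square = {chi2:.2f}; evidence of bias: {chi2 > 3.841}")
```

A statistic this far above the critical value objectifies a pattern, here differential admission by group, that no single participant could observe from inside.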

Uses for qualitative techniques:


© Martin Packer, 1999
