Wednesday, 10 April 2013
How to read research papers? Papers that go beyond numbers (qualitative research)
How to read a paper: Papers that go beyond numbers (qualitative research)
What is qualitative research?
Epidemiologist Nick Black has argued that a finding or a result
is more likely to be accepted as a fact if it is quantified (expressed in
numbers) than if it is not.1 There is little or no scientific evidence, for
example, to support the well known “facts” that one couple in 10 is infertile,
or that one man in 10 is homosexual. Yet, observes Black, most of us are happy
to accept uncritically such simplified, reductionist, and blatantly incorrect
statements so long as they contain at least one number.
Researchers who use qualitative methods seek a deeper truth.
They aim to “study things in their natural setting, attempting to make sense
of, or interpret, phenomena in terms of the meanings people bring to them,”2 and they use “a holistic perspective which
preserves the complexities of human behaviour.”1
Summary points
· Qualitative methods aim to make sense of, or interpret, phenomena in terms of the meanings people bring to them
· Qualitative research may define preliminary questions which can then be addressed in quantitative studies
· A good qualitative study will address a clinical problem through a clearly formulated question and using more than one research method (triangulation)
· Analysis of qualitative data can and should be done using explicit, systematic, and reproducible methods
Questions such as “How many parents would consult their general
practitioner when their child has a mild temperature?” or “What proportion of
smokers have tried to give up?” clearly need answering through quantitative
methods. But questions like “Why do parents worry so much about their
children's temperature?” and “What stops people giving up smoking?” cannot and
should not be answered by leaping in and measuring the first aspect of the
problem that we (the outsiders) think might be important. Rather, we need to
listen to what people have to say, and we should explore the ideas and concerns
which the subjects themselves come up with. After a while, we may notice a
pattern emerging, which may prompt us to make our observations in a different
way. We may start with one of the methods shown in box 1, and go on to use a selection of others.
Box 1: Examples of qualitative research methods
Documents—Study of documentary accounts of events, such
as meetings
Passive observation—Systematic watching of behaviour and talk in
naturally occurring settings
Participant observation—Observation in which the researcher also
occupies a role or part in the setting, in addition to observing
In depth interviews—Face to face conversation with the purpose of
exploring issues or topics in detail. Does not use preset questions, but is
shaped by a defined set of topics
Focus groups—Method of group interview which explicitly
includes and uses the group interaction to generate data
Box 2 summarises (indeed, overstates) the
differences between the qualitative and quantitative approaches to research. In
reality, there is a great deal of overlap between them, the importance of which
is increasingly being recognised.4
Box 2: Qualitative versus quantitative research—the overstated dichotomy
Quantitative research should begin with an idea (usually
articulated as a hypothesis), which then, through measurement, generates data
and, by deduction, allows a conclusion to be drawn. Qualitative research, in
contrast, begins with an intention to explore a particular area, collects
“data” (observations and interviews), and generates ideas and hypotheses from
these data largely through what is known as inductive reasoning.3 The strength of the quantitative approach lies
in its reliability (repeatability)—that is, the same measurements should yield
the same results time after time. The strength of qualitative research lies in
validity (closeness to the truth)—that is, good qualitative research, using a
selection of data collection methods, really should touch the core of what is
going on rather than just skimming the surface. The validity of qualitative
methods is greatly improved by using a combination of research methods, a
process known as triangulation, and by independent analysis of the data by more
than one researcher.
The so called iterative approach (altering the research methods
and the hypothesis as the study progresses, in the light of information gleaned
along the way) used by qualitative researchers shows a commendable sensitivity
to the richness and variability of the subject matter. Failure to recognise the
legitimacy of this approach has, in the past, led critics to accuse qualitative
researchers of continually moving their own goalposts. Though these criticisms
are often misguided, there is, as Nicky Britten and colleagues have observed, a
real danger “that the flexibility [of the iterative approach] will slide into
sloppiness as the researcher ceases to be clear about what it is (s)he is
investigating.”5 These authors warn that qualitative
researchers must, therefore, allow periods away from their fieldwork for
reflection, planning, and consultation with colleagues.
Evaluating papers that describe qualitative research
By its very nature, qualitative research is non-standard,
unconfined, and dependent on the subjective experience of both the researcher
and the researched. It explores what needs to be explored and cuts its cloth
accordingly. It is debatable, therefore, whether an all-encompassing critical appraisal
checklist along the lines of the Users' Guides to the Medical Literature6 7 8 9 10 11 12 13 14 15 16 17 18 19 could ever be developed. Our own view, and
that of a number of individuals who have attempted, or are currently working
on, this very task,3 5 is that such a checklist may not be as
exhaustive or as universally applicable as the various guides for appraising quantitative
research, but that it is certainly possible to set some ground rules. The list
which follows has been distilled from the published work cited earlier,2 3 5 and also from our own research and teaching
experiences. You should note, however, that there is a great deal of
disagreement and debate about the appropriate criteria for critical appraisal
of qualitative research, and the ones given here are likely to be modified in
the future.
Question 1: Did the paper describe an important clinical problem
addressed via a clearly formulated question?
A previous article in this series explained that one of the
first things you should look for in any research paper is a statement of why
the research was done and what specific question it addressed.20 Qualitative papers are no exception to this
rule: there is absolutely no scientific value in interviewing or observing
people just for the sake of it. Papers that cannot define their topic of
research more closely than “We decided to interview 20 patients with epilepsy”
inspire little confidence that the researchers really knew what they were
studying or why.
You might be more
inclined to read on if the paper stated in its introduction something like,
“Epilepsy is a common and potentially disabling condition, and up to 20% of
patients do not remain free of fits while taking medication. Antiepileptic
medication is known to have unpleasant side effects, and several studies have
shown that a high proportion of patients do not take their tablets regularly.
We therefore decided to explore patients' beliefs about epilepsy and their
perceived reasons for not taking their medication.”
Question 2: Was a qualitative approach appropriate?
If the objective of the research was to explore, interpret, or obtain
a deeper understanding of a particular clinical issue, qualitative methods were
almost certainly the most appropriate ones to use. If, however, the research
aimed to achieve some other goal (such as determining the incidence of a
disease or the frequency of an adverse drug reaction, testing a cause and
effect hypothesis, or showing that one drug has a better risk-benefit ratio
than another), a case-control study, cohort study, or randomised trial may have
been better suited to the research question.19
Question 3: How were the setting and the subjects selected?
The second box contrasts the statistical sampling methods of
quantitative research with the theoretical sampling methods of qualitative research. In
quantitative research, it is vital to ensure that a truly random sample of
subjects is recruited so that the results reflect, on average, the condition of
the population from which that sample was drawn.
In qualitative
research, however, we are not interested in an “on average” view of a patient
population. We want to gain an in depth understanding of the experience of
particular individuals or groups; we should therefore deliberately seek out
individuals or groups who fit the bill. If, for example, we wished to study the
experience of non-English speaking British Punjabi women when they gave birth
in hospital (with a view to tailoring the interpreting or advocacy service more
closely to the needs of this patient group), we would be perfectly justified in
going out of our way to find women who had had a range of different birth
experiences—an induced delivery, an emergency caesarean section, a delivery by
a medical student, a late miscarriage, and so on—rather than a “random” sample
of British Punjabi mothers.
Question 4: What was the researcher's perspective, and has this
been taken into account?
It is important to
recognise that there is no way of abolishing, or fully controlling for,
observer bias in qualitative research. This is most obviously the case when
participant observation is used, but it is also true for other forms of data
collection and of data analysis. If, for example, the research concerns the
experience of asthmatic adults living in damp and overcrowded housing and the
perceived effect of these surroundings on their health, the data generated by
techniques such as focus groups or semistructured interviews are likely to be
heavily influenced by what the interviewer believes about this subject and by
whether he or she is employed by the hospital chest clinic, the social work
department of the local authority, or an environmental pressure group. But
since it is inconceivable that the interviews could have been conducted by
someone with no views at all and no ideological or cultural perspective, the
most that can be required of the researchers is that they describe in detail
where they are coming from so that the results can be interpreted accordingly.
Question 5: What methods did the researcher use for collecting
data—and are these described in enough detail?
I once spent two years
doing highly quantitative, laboratory based experimental research in which
around 15 hours of every week were spent filling or emptying test tubes. There
was a standard way to fill the test tubes, a standard way to spin them in the
centrifuge, and even a standard way to wash them up. When I finally published
my research, some 900 hours of drudgery was summed up in a single sentence:
“Patients' serum rhubarb levels were measured according to the method described
by Bloggs et al [reference to Bloggs et al's published paper].”
The methods section of
a qualitative paper often cannot be written in shorthand or dismissed by
reference to someone else's research techniques. It may have to be lengthy and
discursive since it is telling a unique story without which the results cannot
be interpreted. As with the sampling strategy, there are no hard and fast rules
about exactly what details should be included in this section of the paper. You
should simply ask, “have I been given enough information about the methods
used?”, and, if you have, use your common sense to assess, “are these methods a
sensible and adequate way of addressing the research question?”
Question 6: What methods did the researcher use to analyse the
data—and what quality control measures were implemented?
The data analysis
section of a qualitative research paper is where sense can most readily be
distinguished from nonsense. Having amassed a thick pile of completed interview
transcripts or field notes, the genuine qualitative researcher has hardly
begun. It is simply not good enough to flick through the text looking for
“interesting quotes” which support a particular theory. The researcher must
find a systematic way of analysing his or her data, and, in particular, must
seek examples of cases which appear to contradict or challenge the theories
derived from the majority.
One way of doing this
is by content analysis: drawing up a list of coded categories and “cutting and
pasting” each segment of transcribed data into one of these categories. This
can be done either manually or, if large amounts of data are to be analysed,
via a tailor-made computer database. The statements made by all the subjects on
a particular topic can then be compared with one another, and more
sophisticated comparisons can be made such as “did people who made statement A
also tend to make statement B?”
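To make the mechanics of this concrete, here is a minimal sketch, in Python, of how coded segments might be filed into categories and then compared across subjects. It is not taken from the paper being described: the category names, subject identifiers, and transcript fragments are invented for illustration, and a real "tailor-made computer database" would be far richer.

```python
# Minimal illustration of content analysis "cut and paste" coding.
# Codes and transcript fragments below are invented for the example.
from collections import defaultdict

# Each transcribed segment is filed under a coded category, per subject.
coded_segments = [
    ("patient_01", "fear_of_side_effects", "I stopped the tablets when I read the leaflet."),
    ("patient_01", "forgets_doses", "Some mornings it just slips my mind."),
    ("patient_02", "fear_of_side_effects", "My cousin came out in a rash on them."),
    ("patient_03", "forgets_doses", "I only remember when I feel unwell."),
]

# Group segments by category so all statements on a topic can be compared,
# and keep the set of codes assigned to each subject.
by_category = defaultdict(list)
codes_by_subject = defaultdict(set)
for subject, code, text in coded_segments:
    by_category[code].append((subject, text))
    codes_by_subject[subject].add(code)

def co_occurrence(code_a, code_b):
    """Of the subjects who made statement A, what fraction also made statement B?"""
    made_a = [s for s, codes in codes_by_subject.items() if code_a in codes]
    if not made_a:
        return 0.0
    return sum(code_b in codes_by_subject[s] for s in made_a) / len(made_a)

print(co_occurrence("fear_of_side_effects", "forgets_doses"))  # 0.5 on this toy data
```

The point of the sketch is simply that the categories are explicit and every quote stays traceable to an identifiable subject, which is what separates systematic content analysis from flicking through transcripts for congenial quotes.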
In theory, the paper
will show evidence of “quality control”—that is, the data (or at least, a
sample of them) will have been analysed by more than one researcher to confirm
that they are both assigning the same meaning to them, although in practice
this is often difficult to achieve. Indeed, when researching this article, we
could find no data on the interobserver reliability of any qualitative study to
illustrate this point.
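As a hedged illustration of what such a quality control check could look like, the sketch below computes raw percentage agreement and Cohen's kappa for two researchers who have independently coded the same segments. The codes and data are hypothetical; the article itself reports no such calculation.

```python
# Hypothetical intercoder agreement check: two researchers have each
# assigned one code to the same ten transcript segments.
from collections import Counter

coder_1 = ["fear", "forgets", "fear", "cost", "fear", "forgets", "cost", "fear", "forgets", "fear"]
coder_2 = ["fear", "forgets", "cost", "cost", "fear", "fear",    "cost", "fear", "forgets", "fear"]

n = len(coder_1)
observed = sum(a == b for a, b in zip(coder_1, coder_2)) / n  # raw agreement

# Chance-expected agreement from each coder's marginal code frequencies.
freq_1, freq_2 = Counter(coder_1), Counter(coder_2)
expected = sum(freq_1[c] * freq_2[c] for c in set(coder_1) | set(coder_2)) / n**2

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(f"agreement {observed:.2f}, kappa {kappa:.2f}")  # agreement 0.80, kappa 0.68
```

Even a crude figure like this makes the claim "the data were independently coded by two researchers" verifiable rather than ornamental.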
Question 7: Are the results credible, and if so, are they
clinically important?
We obviously cannot
assess the credibility of qualitative results through the precision and
accuracy of measuring devices, nor their significance via confidence intervals
and numbers needed to treat. It usually takes little more than plain common
sense to determine whether the results are sensible and believable, and whether
they matter in practice.
One important aspect
of the results section to check is whether the authors cite actual data. Claims
such as “general practitioners did not usually recognise the value of audit”
would be infinitely more credible if one or two verbatim quotes from the
interviewees were reproduced to illustrate them. The results should be
independently and objectively verifiable—after all, a subject either made a
particular statement or (s)he did not—and all quotes and examples should be
indexed so that they can be traced back to an identifiable subject and setting.
Question 8: What conclusions were drawn, and are they justified
by the results?
A quantitative research paper should clearly distinguish the
study's results (usually a set of numbers) from the interpretation of those
results (the discussion). The reader should have no difficulty separating what
the researchers found from what they think it means. In qualitative research, however, such a distinction
is rarely possible, since the results are by definition an interpretation of
the data.
It is therefore
necessary, when assessing the validity of qualitative research, to ask whether
the interpretation placed on the data accords with common sense and is
relatively untainted with personal or cultural perspective. This can be a
difficult exercise, because the language we use to describe things tends to
impute meanings and motives which the subjects themselves may not share.
Compare, for example, the two statements, “three women went to the well to get
water” and “three women met at the well and each was carrying a pitcher.”
It is becoming a
cliché that the conclusions of qualitative studies, like those of all research,
should be “grounded in evidence”—that is, that they should flow from what the
researchers found in the field. Mays and Pope suggest three useful questions
for determining whether the conclusions of a qualitative study are valid:
· how well does this analysis explain why people behave in the way they do?
· how comprehensible would this explanation be to a thoughtful participant in the setting? and
· how well does the explanation cohere with what we already know?
Question 9: Are the findings of the study transferable to other
clinical settings?
One of the commonest criticisms of qualitative research is that
the findings of any qualitative study pertain only to the limited setting in
which they were obtained. In fact, this is not necessarily any truer of
qualitative research than of quantitative research. Look back at the example of
British Punjabi women described above. You should be able to see that the use
of a true theoretical sampling frame greatly increases the
transferability of the results over a “convenience” sample.
Conclusion
Doctors have traditionally placed high value on numerical data,
which may in reality be misleading, reductionist, and irrelevant to the real
issues. The increasing popularity of qualitative research in the biomedical
sciences has arisen largely because quantitative methods provided either no
answers or the wrong answers to important questions in both clinical care and
service delivery.1 If you still feel that qualitative research is
necessarily second rate by virtue of being a “soft” science, you should be
aware that you are out of step with the evidence.
In 1993, Pope and
Britten presented a paper to the BSA Medical Sociology Group conference
entitled “Barriers to qualitative methods in the medical mindset,” in which
they showed their collection of rejection letters from biomedical journals. The
letters revealed a striking ignorance of qualitative methodology on the part of
reviewers. In other words, the people who had rejected the papers often seemed
to be incapable of distinguishing good qualitative research from bad. Somewhat
ironically, qualitative papers of poor quality now appear regularly in some
medical journals, whose editors have climbed on the qualitative bandwagon
without gaining an ability to appraise such papers. Note, however, that the
critical appraisal of qualitative research is a relatively underdeveloped
science, and the questions posed in this chapter are still being refined.
The articles in this series are excerpts from How to read a
paper: the basics of evidence based medicine. The book includes chapters on searching the
literature and implementing evidence based findings. It can be ordered from the
BMJ Publishing Group: tel 0171 383 6185/6245; fax 0171 383 6662. Price £13.95
UK members, £14.95 non-members.