In fields such as epidemiology, the social sciences, psychology and statistics, an observational study draws inferences from a sample to a population in which the independent variable is not under the control of the researcher, because of ethical concerns or logistical constraints. A common type of observational study examines the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator. This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group.
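The consequence of this difference can be sketched with a small simulation on hypothetical data (only `numpy` is assumed): when treatment uptake depends on a confounder, a naive comparison of group means no longer recovers the true effect, whereas random assignment does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0

# A confounder (e.g. baseline health) that also affects the outcome.
confounder = rng.normal(size=n)

# Randomized experiment: treatment assigned by coin flip,
# independent of the confounder.
treated_rct = rng.random(n) < 0.5
outcome_rct = true_effect * treated_rct + confounder + rng.normal(size=n)
est_rct = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

# Observational setting: subjects with a higher confounder value
# are more likely to end up in the treated group.
treated_obs = rng.random(n) < 1 / (1 + np.exp(-confounder))
outcome_obs = true_effect * treated_obs + confounder + rng.normal(size=n)
est_obs = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

print(f"true effect:            {true_effect}")
print(f"randomized estimate:    {est_rct:.2f}")   # close to 2.0
print(f"observational estimate: {est_obs:.2f}")   # biased upward
```

Because the treated group starts out systematically different from the control group, the raw difference in means mixes the treatment effect with the confounder's effect.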
The independent variable may be beyond the control of the investigator for a variety of reasons:
- A randomized experiment would violate ethical standards. Suppose one wanted to investigate the abortion–breast cancer hypothesis, which postulates a causal link between induced abortion and the incidence of breast cancer. In a hypothetical controlled experiment, one would start with a large subject pool of pregnant women and divide them randomly into a treatment group (receiving induced abortions) and a control group (not receiving abortions), and then conduct regular cancer screenings for women from both groups. Needless to say, such an experiment would run counter to common ethical principles. (It would also suffer from various confounds and sources of bias, e.g. it would be impossible to conduct it as a blind experiment.) The published studies investigating the abortion–breast cancer hypothesis generally start with a group of women who already have received abortions. Membership in this "treated" group is not controlled by the investigator: the group is formed after the "treatment" has been assigned.
- The investigator may simply lack the requisite influence. Suppose a scientist wants to study the public health effects of a community-wide ban on smoking in public indoor areas. In a controlled experiment, the investigator would randomly pick a set of communities to be in the treatment group. However, it is typically up to each community and/or its legislature to enact a smoking ban. The investigator can be expected to lack the political power to cause precisely those communities in the randomly selected treatment group to pass a smoking ban. In an observational study, the investigator would typically start with a treatment group consisting of those communities where a smoking ban is already in effect.
- A randomized experiment may be impractical. Suppose a researcher wants to study the suspected link between a certain medication and a very rare group of symptoms arising as a side effect. Setting aside any ethical considerations, a randomized experiment would be impractical because of the rarity of the effect. There may not be a subject pool large enough for the symptoms to be observed in at least one treated subject. An observational study would typically start with a group of symptomatic subjects and work backwards to find those who were given the medication and later developed the symptoms. Thus a subset of the treated group was determined based on the presence of symptoms, instead of by random assignment.
Types
- Case-control study: a study, originally developed in epidemiology, in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute.
- Cross-sectional study: involves data collection from a population, or a representative subset, at one specific point in time.
- Longitudinal study: correlational research study that involves repeated observations of the same variables over long periods of time.
- Cohort study or Panel study: a particular form of longitudinal study where a group of patients is closely monitored over a span of time.
- Ecological study: an observational study in which at least one variable is measured at the group level.
Degree of usefulness and reliability
Although observational studies cannot be used to make definitive statements of fact about the "safety, efficacy, or effectiveness" of a practice, they can still be useful in other ways:
- "[T]hey can: 1) provide information on “real world” use and practice; 2) detect signals about the benefits and risks of...[the] use [of practices] in the general population; 3) help formulate hypotheses to be tested in subsequent experiments; 4) provide part of the community-level data needed to design more informative pragmatic clinical trials; and 5) inform clinical practice."
Bias and compensating methods
In all of those cases, if a randomized experiment cannot be carried out, the alternative line of investigation suffers from the problem that the decision of which subjects receive the treatment is not entirely random and thus is a potential source of bias. A major challenge in conducting observational studies is to draw inferences that are acceptably free from influences by overt biases, as well as to assess the influence of potential hidden biases.
An observer of an uncontrolled experiment (or process) records potential factors and the data output: the goal is to determine the effects of the factors. Sometimes the recorded factors may not be directly causing the differences in the output. There may be more important factors which were not recorded but are, in fact, causal. Also, recorded or unrecorded factors may be correlated which may yield incorrect conclusions. Finally, as the number of recorded factors increases, the likelihood increases that at least one of the recorded factors will be highly correlated with the data output simply by chance.
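The last point, that chance alone produces strong correlations as more factors are recorded, can be illustrated with a quick simulation on hypothetical data (only `numpy` is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 50

# Outcome and factors are all generated independently: by construction,
# no factor has any causal connection to the outcome.
outcome = rng.normal(size=n_subjects)

strongest = {}
for n_factors in (1, 10, 100, 1000):
    factors = rng.normal(size=(n_factors, n_subjects))
    corrs = [abs(np.corrcoef(f, outcome)[0, 1]) for f in factors]
    strongest[n_factors] = max(corrs)
    print(f"{n_factors:>4} recorded factors -> "
          f"strongest |correlation| = {strongest[n_factors]:.2f}")
```

With 50 subjects, the strongest spurious correlation typically climbs from roughly 0.1 with a single factor to above 0.4 with a thousand factors, purely by chance.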
In lieu of experimental control, multivariate statistical techniques allow the approximation of experimental control with statistical control, which accounts for the influences of observed factors that might influence a cause-and-effect relationship. In healthcare and the social sciences, investigators may use matching to compare units that nonrandomly received the treatment and control. One common approach is to use propensity score matching in order to reduce confounding.
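As a concrete sketch of this idea (hypothetical data and a hand-rolled logistic fit, so that only `numpy` is assumed), the following estimates propensity scores and then matches each treated unit to the control unit with the nearest score:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# One observed covariate that drives both treatment uptake and the outcome.
x = rng.normal(size=n)
treated = rng.random(n) < 1 / (1 + np.exp(-1.5 * x))
true_effect = 1.0
y = true_effect * treated + 2.0 * x + rng.normal(size=n)

# Naive comparison of group means is confounded by x.
naive = y[treated].mean() - y[~treated].mean()

# Step 1: estimate propensity scores P(treated | x) with a logistic
# regression fitted by plain gradient ascent (no external libraries).
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (treated - p) / n
scores = 1 / (1 + np.exp(-X @ w))

# Step 2: match each treated subject to the control subject with the
# nearest propensity score, and average the matched outcome differences.
ctrl_idx = np.flatnonzero(~treated)
diffs = []
for i in np.flatnonzero(treated):
    j = ctrl_idx[np.argmin(np.abs(scores[ctrl_idx] - scores[i]))]
    diffs.append(y[i] - y[j])
matched = np.mean(diffs)

print(f"naive difference:   {naive:.2f}")    # badly biased
print(f"matched difference: {matched:.2f}")  # closer to the true effect 1.0
```

Matching compares treated units only with control units that were similarly likely to receive treatment, which removes most of the bias that the naive comparison suffers from.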
A report from the Cochrane Collaboration in 2014 concluded that observational studies and similarly conducted randomized controlled trials report very similar results. In other words, it found little evidence for significant differences in effect estimates between observational studies and randomized controlled trials, regardless of specific observational study design, heterogeneity, or inclusion of studies of pharmacological interventions. It therefore recommended that factors other than study design per se need to be considered when exploring reasons for a lack of agreement between results of randomized controlled trials and observational studies.
In 2007, several prominent medical researchers issued the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, in which they called for observational studies to conform to 22 criteria that would make their conclusions easier to understand and generalize.
References
- "Observational study". Retrieved 2008-06-25.
- Porta, M. (ed.): A Dictionary of Epidemiology, 5th edn. New York: Oxford University Press (2008). ISBN 9780195314496.
- Nahin, R.: "Observational Studies and Secondary Data Analyses To Assess Outcomes in Complementary and Integrative Health Care". National Center for Complementary and Integrative Health, June 25, 2012. Source of the quotation: "Although observational studies cannot provide definitive evidence of safety, efficacy, or effectiveness, they can: 1) provide information on 'real world' use and practice; 2) detect signals about the benefits and risks of complementary therapies use in the general population; 3) help formulate hypotheses to be tested in subsequent experiments; 4) provide part of the community-level data needed to design more informative pragmatic clinical trials; and 5) inform clinical practice."
- Rosenbaum, P.R.: Design of Observational Studies. New York: Springer (2009).
- Anglemyer, A., Horvath, H.T., Bero, L.: "Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials". Cochrane Database Syst Rev 4: MR000034 (2014). doi:10.1002/14651858.MR000034.pub2. PMID 24782322.
- von Elm, E., Altman, D.G., Egger, M., Pocock, S.J., Gøtzsche, P.C., Vandenbroucke, J.P.: "The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies". PLoS Med 4(10): e296 (2007). doi:10.1371/journal.pmed.0040296. PMC 2020495. PMID 17941714.