Essays
-
Ambulatory Assessment: Methods for Studying Everyday Life - Conner, Tamlin S.
Ambulatory assessment is a class of methods that use mobile technology to understand people's biopsychosocial processes in natural settings, in real time, and on repeated occasions. In this essay, we discuss the rationale for ambulatory assessment, including the benefits of measuring people in the real world (greater ecological validity, better understanding of people in contexts), in real time (avoidance of memory bias, greater sensitivity for capturing change), and over time (capturing within‐person patterns and temporal trends). Then, we review the latest ambulatory assessment techniques for measuring experiences, behaviors, and physiology in daily life. Experiences such as emotions, physical pain, and daily stressors can be tracked using daily diaries and smartphone‐based experience sampling. Behaviors such as activity, movement, location, and natural language use can be tracked using accelerometers, portable actigraphs, global positioning system (GPS) coordinates, and the electronically activated recorder (EAR). Physiological processes such as heart rate, blood pressure, and electrodermal activity can be measured using an array of ambulatory biosensors. Ambulatory assessment will continue to be revolutionized by smartphones, which are becoming integrated seamlessly into people's lives. Emerging trends include social sensing applications that make inferences about users' psychological processes based on multi‐channel information collected from smartphones, “big data collection” whereby ambulatory assessment data are gathered en masse from large populations, and the growing field of mobile health. These trends raise questions about the protection of participants' privacy and the synthesis of immense amounts of digital data. Ultimately, these developments will narrow the separation between science and everyday life as ambulatory assessment becomes an integrated part of people's mobile lives. -
Content Analysis - Stemler, Steven E.
In the era of “big data,” the methodological technique of content analysis can be the most powerful tool in the researcher's kit. Content analysis is versatile enough to apply to textual, visual, and audio data. Given the massive explosion in permanent, archived linguistic, photographic, video, and audio data arising from the proliferation of technology, the technique of content analysis appears to be on the verge of a renaissance. In this essay, I discuss cutting‐edge examples of how content analysis is being applied or might be applied to the study of areas as diverse as education, criminology, and social intelligence. -
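As a minimal illustration of the kind of quantitative coding that content analysis involves, the Python sketch below counts keyword hits for two coding categories in a tiny corpus. The documents, category names, and keyword lists are all invented for the example and are not drawn from the essay.

```python
from collections import Counter
import re

# Hypothetical mini-corpus and coding dictionary (for illustration only).
documents = [
    "Students reported high stress before the exam but felt supported by teachers.",
    "The school climate survey showed low stress and strong peer support.",
]
categories = {
    "stress": {"stress", "anxiety", "pressure"},
    "support": {"support", "supported", "help"},
}

def code_document(text, categories):
    """Count how often each category's keywords appear in one document."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, keywords in categories.items():
            if token in keywords:
                counts[category] += 1
    return counts

for i, doc in enumerate(documents, start=1):
    print(f"Document {i}:", dict(code_document(doc, categories)))
```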
Data Mining - Murray, Gregg R.
This essay introduces data mining as an analytical technique for novice to professional social and behavioral scientists. It presents data mining, which is also known as, among other things, data analytics and predictive analytics, as an effective tool for researchers who are interested in the analysis of “big data” as well as small, unique data sets. It addresses foundational elements of data mining such as how to avoid “data dredging” and the importance of theory as embodied in researcher domain expertise. It also briefly defines and describes classification analysis, association rules, and clustering, which are among the major methodologies that constitute data mining. This essay identifies analytical problems and data for which the techniques are best suited. It goes on to highlight a number of cutting‐edge studies that relied on data mining techniques in disciplines such as criminal justice, education, health sciences, linguistics, political science, and sociology. This essay concludes with a review of key considerations for future research, including discussions of the burgeoning of new analytical techniques and new data sets and sources, the importance and protection of data‐source privacy, and the ethical obligation researchers have to exploit to their fullest extent the costly data on social and behavioral issues collected by scientists and society. -
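A minimal sketch of two of the data-mining methodologies named above, classification and clustering, using scikit-learn on simulated data; the dataset, model choices, and parameter values are illustrative assumptions, not the essay's own analysis.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Simulated data; with real data, domain expertise should guide feature
# choice to avoid "data dredging".
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classification: predict a labeled outcome from features.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, tree.predict(X_test)))

# Clustering: discover groups without using the labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", [int((clusters == k).sum()) for k in (0, 1)])
```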
Digital Methods for Web Research - Rogers, Richard
Digital methods are techniques for the study of societal change and cultural condition with online data. They make use of available digital objects such as the hyperlink, tag, timestamp, like, share, and retweet, and they seek to learn from how these objects are treated by the methods built into the dominant devices online, such as Google Web Search and Facebook's Graph Search. They endeavor to repurpose online methods and services with a social research outlook. Ultimately the question is where the baseline is located, and whether the findings may be grounded online. Digital methods as a research practice are part of the computational turn in the humanities and social sciences, and as such may be situated alongside other recent approaches, such as cultural analytics, culturomics, and virtual methods, where distinctions may be made about the types of data employed (natively digital and digitized) as well as method (written for the medium, or migrated to it). The limitations of digital methods are also treated. Digital methods recognize the problems with web data, such as the impermanence of web services and the instability of data streams, where, for example, APIs (application programming interfaces) are reconfigured or discontinued. They also grapple with the quality of web data and the challenges of longitudinal study, where, for instance, all of Twitter's tweets may be archived by the Library of Congress, but new types of gaps emerge owing to changes over the years in the company's terms of service. -
Ethnography in the Digital Age - Howard, Alan
This essay explores the ways in which ethnography, both as a methodology and a product of research, has adapted to the rapid growth of digital technology and the new venues for research that it has spawned. On the one hand, digital technology affords social scientists new means of recording, storing, and analyzing data. On the other hand, digital media have been responsible for the creation of new venues for research, mostly on the Internet in the form of websites, blogs, social networks, and multiplayer online games. As a methodology, ethnography, with its beginnings in the anthropological study of non‐Western societies, has proved to be highly adaptable to the task of making sense of, and giving meaning to, computer‐mediated communication in its various forms. This has led to its adoption in the study of online sites by researchers from a number of different disciplines attempting to come to grips with the cultural nuances of digitally formed communities. Ethical problems posed by more powerful forms of surveillance and access to personal information are discussed. The boundaries between public and private domains have become increasingly blurred, resulting in complex issues relating to informed consent. As a product, digital ethnographies allow for nonlinear, hyperlinked presentations that permit new forms of engagement between authors and readers not afforded by traditional published monographs. -
Ethnography: Telling Practice Stories - O'Reilly, Karen
In this essay I argue that the central emerging trend in ethnography is the telling of practice stories, that is, narrative (or story‐like) accounts that make sense of social phenomena by understanding how people respond to constraints and opportunities but in turn create the cultures, constraints, and opportunities within which others act. Drawing either overtly or implicitly on different versions of what has become known as practice theory, contemporary ethnographers increasingly aspire to unravel the processes involved in the ongoing constitution of social life. This constitution is made up of free will as well as structures that restrict action. The key principles of ethnography, established to challenge preconceptions and to yield complex understandings, remain fundamental to its methodology. This is despite massive social change and the emergence of “new ethnographies” to understand such things as globalization and technological change. These key principles are exactly what is required for the analysis of social life as practice. Ethnography pays attention to people's feelings and emotions, their experiences and their free choices, but also to the wider constraints and opportunities that frame their agency. And it does this always in the context of people's daily lives, cultures, and communities, using the key methods of watching, taking part, sharing in conversations, and listening. -
Hierarchical Models for Causal Effects - Feller, Avi
Hierarchical models play three important roles in modeling causal effects: (i) accounting for data collection, such as in stratified and split‐plot experimental designs; (ii) adjusting for unmeasured covariates, such as in panel studies; and (iii) capturing treatment effect variation, such as in subgroup analyses. Across all three areas, hierarchical models, especially Bayesian hierarchical modeling, offer substantial benefits over classical, non‐hierarchical approaches. After discussing each of these topics, we explore some recent developments in the use of hierarchical models for causal inference and conclude with some thoughts on new directions for this research area. -
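A minimal sketch of the kind of hierarchical model described above, assuming a simple two-level setting (individuals nested in sites) with a randomized treatment; the simulated data, effect size, and use of statsmodels' MixedLM are illustrative assumptions rather than the essay's own examples.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site = 20, 30
site = np.repeat(np.arange(n_sites), n_per_site)
treatment = rng.integers(0, 2, size=site.size)        # randomized within sites
site_effect = rng.normal(0, 1, n_sites)[site]         # unmeasured site-level variation
y = 0.5 * treatment + site_effect + rng.normal(0, 1, site.size)
data = pd.DataFrame({"y": y, "treatment": treatment, "site": site})

# Random-intercept model: y ~ treatment with a site-level intercept.
model = smf.mixedlm("y ~ treatment", data, groups=data["site"]).fit()
print(model.params["treatment"])   # estimated average treatment effect
```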
How Brief Social‐Psychological Interventions Can Cause Enduring Effects - Kenthirarajah, Dushiyanthini (Toni)
In recent years, several studies have shown that brief, theory‐based social‐psychological interventions can cause large, enduring effects on important outcomes, such as school achievement and marital relationships. How are such effects possible? We propose a field‐theory model: this model distinguishes “nudge” interventions—interventions designed to change a “snapshot” in time such as a particular decision or behavior—from interventions designed to change a “movie”—core beliefs or other aspects of the self and thus people's behavior as it unfolds over time in diverse settings. Movie interventions target underlying social‐psychological processes—such as students' confidence that they belong in school or individuals' felt security in close relationships. These psychological processes can interact with naturalistic variables—such as how people interact with one another and the relationships they build—to propel intervention effects forward in time. In this model, real‐world factors can serve as proximal outcomes that catalyze long‐term effects. An important implication is that such interventions can sometimes amplify their effects over time, if the targeted recursive process “snowballs.” A second implication is that the long‐term effects of movie interventions are dependent on the context—specifically, on whether the context affords naturalistic variables that can catalyze changes in the self forward in time. To illustrate this field‐theory model, we compare it to Mortensen and Cialdini's (2010) full‐cycle model. Although both models share important features, including an emphasis on laboratory research, the latter treats forces in the world as “noise” and predicts that the effects of psychological interventions will dissipate, not strengthen with time. In addition to their applied potential, movie interventions raise profound new theoretical questions, such as how psychological processes unfold over time and do so in interaction with social contexts. Exploring these questions represents an exciting direction for future research. -
Longitudinal Data Analysis - Little, Todd D.
In this essay we review some of the emerging trends in modeling repeated measures data. Three general forms of longitudinal models are discussed: panel model designs, growth curve models, and intensive within‐person assessments. Each section discusses design elements that should be considered when using each of these types of longitudinal models, and introduces some emerging trends. In the section on panel designs, continuous time models and planned missing data models are introduced; these ideas will revolutionize the modeling and collection of panel data. In the section on growth curve models, the necessity of separately evaluating mean and covariance model fit is discussed. This section also introduces methods being used to carefully consider the time of measurements in temporal designs. Finally, the budding analysis of intensive within‐individual observations is considered, including recent work from mathematics that limits the generalizability of interindividual studies to individual outcomes. -
Meta‐Analysis - Hedges, Larry V.
Meta‐analysis is the use of statistical methods to combine the results of independent research studies. The results of each study are summarized by one or more indices of effect size and a sampling uncertainty (variance) for each effect. Representing study results by effect sizes permits the use of statistical methods to synthesize these results across studies. This essay describes the most frequently used effect sizes and their properties. It describes how the two principal types of analytic methodology in meta‐analysis (fixed and random effects models) are used to estimate an average effect across studies. It also discusses how heterogeneity of effects across studies can be detected via a heterogeneity test and modeled as a function of study characteristics. In addition, this essay describes areas of current research in meta‐analysis. One area is the development of methods to handle dependencies that can arise when the results of studies are described by several effect sizes computed from data on the same individuals. Another area involves methods for detecting and correcting publication bias. A third is the development of methods to incorporate more complex study designs into meta‐analyses, including multilevel experiments and single‐case designs used in behavioral psychology, special education, and some areas of medicine. -
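A minimal worked sketch of the fixed-effect logic and heterogeneity test described above; the four effect sizes and variances are invented for illustration.

```python
import numpy as np
from scipy import stats

effects = np.array([0.30, 0.10, 0.45, 0.25])     # hypothetical standardized mean differences
variances = np.array([0.02, 0.03, 0.05, 0.01])   # their sampling variances

weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect estimate
se_pooled = np.sqrt(1.0 / np.sum(weights))

Q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q heterogeneity statistic
df = len(effects) - 1
p_Q = stats.chi2.sf(Q, df)
I2 = max(0.0, (Q - df) / Q) * 100                      # percent of variation beyond sampling error

print(f"Pooled effect = {pooled:.3f} (SE {se_pooled:.3f}); "
      f"Q = {Q:.2f}, p = {p_Q:.3f}, I2 = {I2:.1f}%")
```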
Models of Nonlinear Growth - Coulombe, Patrick
Models for nonlinear growth are not new, but have not been widely applied in the social and behavioral sciences. In this essay, we describe the fundamental issues relevant to choosing and using a nonlinear growth model. We discuss how researchers can go about choosing a model and then focus on the application of two specific nonlinear models: the fractional polynomial model and the piecewise model. We highlight recent work in reparameterization that allows researchers to choose models with parameters tailored specifically to research questions. We also review recent work on the topic of growth rates in nonlinear models that will allow researchers to obtain richer information from the application of nonlinear models. We conclude by pointing out some of the unresolved issues in the use of nonlinear growth models. -
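A minimal sketch of the piecewise model discussed above, assuming a single known knot and a linear slope on each side; the simulated data, knot location, and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
time = np.tile(np.arange(0, 8), 50).astype(float)        # 50 cases, 8 occasions each
knot = 4.0
true_y = 1.0 + 0.8 * np.minimum(time, knot) + 0.2 * np.maximum(time - knot, 0)
y = true_y + rng.normal(0, 0.5, time.size)

# Design matrix: intercept, pre-knot slope, post-knot slope.
X = np.column_stack([
    np.ones_like(time),
    np.minimum(time, knot),
    np.maximum(time - knot, 0.0),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope before knot, slope after knot:", np.round(coef, 3))
```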
Network Research Experiments - Linton, Allen L.
This essay attempts to lay the foundation of modern social networks research with a jolt toward innovative ways to create data or to access newly available data to address meaningful political questions. We focus on outlining potential new resources for data, discuss the emergent theoretical arguments involving political networks, and present some current empirical estimates for the magnitude of the effects of political networks. With the rise of social media and new technology, ordinary citizens socialize online with old friends from elementary school, siblings across the country, and local neighbors. While these relationships have long been part of the social fabric of ordinary life, the ability to observe these exchanges directly and on a daily basis is new, for both researchers and citizens. Records of our social interactions have the potential to transform our academic understanding of the relationship between communication among family, friends, and coworkers and how we become informed about politics and act politically. Whether the relationship occurs on or offline, the social element of the relationship can be vital in understanding the way individuals react to and interact with their political environments. Processing and understanding these interactions, however, can be difficult without knowing where to look for new information, what patterns to look for, and how to interpret data in the context of other findings on the effects of social and political networks. We conclude by considering the new and exciting directions this research may take in the future. -
Participant Observation - Jorgensen, Danny L.
Investigating the meanings of human existence as they are constructed and enacted by people in everyday life situations and settings presents serious challenges for all forms of human studies. Participant observation, whereby the researcher interacts with people in everyday life while collecting information, is a unique method for investigating the enormously rich, complex, conflictual, problematic, and diverse experiences, thoughts, feelings, and activities of human beings and the meanings of their existence. Use of this distinctive method emerged with the professionalization of anthropology and sociology where it gradually was formalized and later spread to a full range of human studies fields. Its practice nevertheless remains artful, requiring creative decision making about problems and questions to be studied, appropriate settings and situations for gathering information, the performance of membership roles, establishing and sustaining trusting relationships, ethics, values, and politics, as well as record making, data analysis and interpretation, and reporting results. This essay provides a brief sketch of the method of participant observation and an overview of a few of the more central issues of its practice, including its location historically within the framework of different views of social scientific methodology. -
Person‐Centered Analysis - Von Eye, Alexander
The majority of data analyses in the empirical sciences that are concerned with humans proceeds at the level of variables. Typical results relate variables to each other, for example, in correlational or regression‐type statements. In these analyses, individuals are considered random data carriers, replaceable without damage by other individuals, also random data carriers. This type of research is known as variable‐oriented. It has been shown that statements at the aggregate level, that is, variable‐oriented statements, are rarely applicable to the individual case. In contrast, person‐oriented research, also known as person‐centered research, proposes focusing on the individual. Analyses in person‐oriented research differ from procedures that are customary in variable‐oriented research. In person‐oriented research, parameters are estimated first at the level of the individual. If generalization is the goal of analysis, aggregation takes place at the level of parameters instead of raw data. Implications of this strategy are major. Data need to be collected in a way different than in variable‐oriented research, data analysis is different, and the resulting statements are different as well. This article introduces readers to person‐oriented research and gives two examples of person‐oriented data analysis, that is, configural frequency analysis and item response modeling. -
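A minimal sketch of one of the two examples named above, configural frequency analysis, on an invented 2x2x2 cross-classification; expected counts assume variable independence, and configurations whose observed counts exceed (fall below) expectation are candidate "types" ("antitypes"). Real applications would use more appropriate tests and corrections for multiple comparisons.

```python
import itertools
import numpy as np
from scipy import stats

# Observed counts for configurations (A, B, C), each variable coded 0/1.
observed = {
    (0, 0, 0): 40, (0, 0, 1): 12, (0, 1, 0): 15, (0, 1, 1): 8,
    (1, 0, 0): 10, (1, 0, 1): 9,  (1, 1, 0): 6,  (1, 1, 1): 30,
}
N = sum(observed.values())

# Marginal probabilities for each variable.
margins = []
for axis in range(3):
    p1 = sum(n for cfg, n in observed.items() if cfg[axis] == 1) / N
    margins.append((1 - p1, p1))

for cfg in itertools.product((0, 1), repeat=3):
    expected = N * np.prod([margins[axis][val] for axis, val in enumerate(cfg)])
    z = (observed[cfg] - expected) / np.sqrt(expected)   # simple standardized residual
    p = 2 * stats.norm.sf(abs(z))
    print(cfg, f"obs={observed[cfg]:3d} exp={expected:6.1f} z={z:5.2f} p={p:.3f}")
```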
Quantile Regression Methods - Fitzenberger, Bernd
Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. -
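A minimal sketch of the idea that quantile-specific slopes can differ even when the mean regression hides the pattern, using statsmodels' quantreg on simulated data whose spread grows with the regressor; the data-generating process is an assumption made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 1000)
y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.3 * x)   # heteroscedastic errors
data = pd.DataFrame({"x": x, "y": y})

# Slopes at the 10th, 50th, and 90th conditional quantiles.
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ x", data).fit(q=q)
    print(f"quantile {q}: slope = {fit.params['x']:.3f}")

# The conditional-mean (OLS) slope summarizes only one aspect of the distribution.
print(f"OLS (mean) slope: {smf.ols('y ~ x', data).fit().params['x']:.3f}")
```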
Quasi‐Experiments - Reichardt, Charles S.
Quasi‐experiments are research designs used to estimate treatment effects when treatments are not assigned at random. Research in quasi‐experimentation will advance on four fronts. First, researchers will elaborate the complete array of quasi‐experimental comparisons. Second, researchers will refine statistical methods for taking account of initial selection differences. Third, researchers will both improve sensitivity analyses to take account of biases and create empirically based theories of the degree to which biases are removed. And fourth, researchers will assess how well quasi‐experiments address the full panoply of complications that arise in practice. -
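One common statistical method for taking account of initial selection differences is propensity-score weighting; the sketch below is a generic illustration on simulated data (the data-generating process, effect size, and use of scikit-learn are assumptions), not a method the essay itself prescribes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)                                   # pre-treatment covariate
p_treat = 1 / (1 + np.exp(-(0.8 * x)))                   # selection depends on x
treated = rng.binomial(1, p_treat)
y = 1.0 * treated + 1.5 * x + rng.normal(size=n)         # true effect = 1.0

naive = y[treated == 1].mean() - y[treated == 0].mean()

# Estimate propensity scores and form inverse-probability weights.
ps = LogisticRegression().fit(x.reshape(-1, 1), treated).predict_proba(x.reshape(-1, 1))[:, 1]
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
ipw = (np.average(y[treated == 1], weights=w[treated == 1])
       - np.average(y[treated == 0], weights=w[treated == 0]))

print(f"naive difference = {naive:.2f}, IPW-adjusted = {ipw:.2f} (true effect 1.0)")
```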
Regression Discontinuity Design - Meredith, Marc
Social scientists search for interventions in the real world that approximate the conditions of an experiment. One form of such natural experiments that is increasingly used in social science research is regression discontinuity (RD). RD designs are possible when there are thresholds that cause large changes in the assignment of treatments on the basis of small differences in a variable. For example, a high school junior in the state of Pennsylvania who scored 214 out of 240 on the 2012 PSAT test received the treatment of being a National Merit Semi‐Finalist, whereas a comparable student who scored 213 did not. The intuition behind an RD design is that we often can learn something about the effects of a treatment by comparing observations that barely receive a treatment (e.g., individuals with scores of 214 and just above on the PSAT) to observations that barely miss receiving a treatment (e.g., individuals who score 213 and just below on the PSAT). We discuss the assumptions under which the effects of treatments that are assigned based on a discontinuous threshold can be estimated using an RD design. We then show how graphical analysis can be used to assess whether these assumptions are likely to hold. We conclude by discussing two examples of cutting‐edge research that employ RD designs and identifying areas for future research. -
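A minimal sharp-RD sketch in the spirit of the PSAT example: simulate a running variable and a cutoff, then compare local linear fits just above and just below the threshold; the score scale, bandwidth, and effect size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
score = rng.uniform(180, 240, 5000)                       # running variable (e.g., a test score)
cutoff, effect, bandwidth = 214, 2.0, 10
treated = (score >= cutoff).astype(float)
outcome = 0.05 * score + effect * treated + rng.normal(0, 1, score.size)

def predict_at_cutoff(mask):
    """Fit a line to observations on one side of the cutoff, predict at the cutoff."""
    slope, intercept = np.polyfit(score[mask], outcome[mask], 1)
    return slope * cutoff + intercept

below = (score >= cutoff - bandwidth) & (score < cutoff)
above = (score >= cutoff) & (score < cutoff + bandwidth)
rd_estimate = predict_at_cutoff(above) - predict_at_cutoff(below)
print(f"Estimated treatment effect at the cutoff: {rd_estimate:.2f} (true {effect})")
```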
Repeated Cross‐Sections in Survey Data - Brady, Henry E.
Examples of repeated cross‐sections (RCS) include daily tracking polls of political opinions during campaigns, monthly Current Population Surveys of unemployment, yearly national health interview surveys, and quadrennial election studies of presidential voting. Each iteration is a distinct sample, as opposed to panels in which the same people are interviewed two or more times. By asking the same questions on repeated survey samples from the same population, RCS studies allow us to track trends and to establish causal inferences. One analytic challenge is to maintain both the representativeness and the comparability of samples as fieldwork methods or sources change. The longer the span covered by an RCS, the likelier it is that the universe will change. For an RCS spanning decades, populations can change in fundamental ways. The universe of content also changes, as issues of one period are redefined or even rendered irrelevant in another. Extracting trends from RCS data typically requires smoothing to separate signal from noise, especially where samples or subsamples are small, but this can lead to bias due to excessive smoothing or to mistaking noise for signal because of sampling variability when there is not enough smoothing. By deploying time, the RCS design enables certain kinds of causal inference, but many alternative micro‐processes are observationally equivalent, and so the RCS benefits from being combined with the panel design. -
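A minimal sketch of the smoothing tradeoff described above: each survey wave yields a noisy estimate of a population quantity, and wider smoothing windows reduce sampling noise at the risk of flattening real change. The series, noise level, and window sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
waves = 48                                               # e.g., monthly surveys over four years
true_trend = 50 + 5 * np.sin(np.linspace(0, 3 * np.pi, waves))
sample_means = true_trend + rng.normal(0, 2.0, waves)    # sampling noise per wave

def moving_average(series, window):
    """Simple centered moving average (shorter at the edges)."""
    return np.array([series[max(0, t - window // 2): t + window // 2 + 1].mean()
                     for t in range(len(series))])

for window in (3, 7, 15):
    smoothed = moving_average(sample_means, window)
    rmse = np.sqrt(np.mean((smoothed - true_trend) ** 2))
    print(f"window {window:2d}: RMSE against true trend = {rmse:.2f}")
```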
Statistical Power Analysis - Aberson, Christopher L.
Statistical power refers to the probability of rejecting a false null hypothesis (i.e., finding what the researcher wants to find). Power analysis allows researchers to determine adequate sample size for designing studies with an optimal probability for rejecting false null hypotheses. When conducted correctly, power analysis helps researchers make informed decisions about sample size selection. Statistical power analysis most commonly involves specifying statistical test criteria (type I error rate), the desired level of power, and the effect size expected in the population. This article outlines the basic concepts relevant to statistical power, factors that influence power, how to establish the different parameters for power analysis, and determination and interpretation of the effect size estimates for power. I also address innovative work such as the continued development of software resources for power analysis and protocols for designing for precision of confidence intervals (also known as accuracy in parameter estimation). Finally, I outline understudied areas such as power analysis for designs with multiple predictors, reporting and interpreting power analyses in published work, designing for meaningfully sized effects, and power to detect multiple effects in the same study. -
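A minimal power-analysis sketch using statsmodels: specify the expected effect size, the type I error rate, and the desired power, then solve for the required per-group sample size of an independent-samples t test. The input values are illustrative, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for sample size given effect size (Cohen's d), alpha, and desired power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required n per group: {n_per_group:.1f}")

# The same object can return achieved power for a planned sample size.
print(f"Power with n = 50 per group: "
      f"{analysis.power(effect_size=0.5, nobs1=50, alpha=0.05):.2f}")
```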
Structural Equation Modeling and Latent Variable Approaches - Liu, Alex
Structural equation modeling and latent variable approaches (SEM) are experiencing rapid development and wide application as a result of big data and modern computing technologies. This essay first gives an introduction to SEM and then summarizes foundational research on developing better fit indices and more efficient computing algorithms. We also review two of the most important areas of cutting‐edge research: using SEM for causal analysis and managing SEM workflows. For future SEM research, we discuss issues of big data, new applications, equivalent models, and hybrid modeling. -
Text Analysis - Roberts, Carl W.
Even once words have been counted, or their themes and semantics quantitatively rendered as networks or grammars, it remains unclear what they reveal. Are the texts windows into historical facts that the analyst cannot experience in person, or are they windows into their authors' perspectives? A choice is needed here, because authors' perspectives may alter their renderings of “the facts” and, conversely, changes in an author's surroundings may prompt changes in her or his perspective. Next, is the researcher a novice who strives for fidelity to authors' perspectives, or is the researcher an expert whose perspective affords insights unknown to the authors? With contemporary growth in both world population and communication technologies, increasing contacts among peoples with disparate perspectives afford the social sciences an opportunity both to improve our understanding of these perspectives (or cultures) and to discontinue mining words for evidence consistent with theoretical perspectives of our own choosing. Modality analysis is a promising method for performing historical‐comparative analyses of political cultures based on the volumes of texts only recently available to us. -
The Experimental Approach to Studying Employers' Hiring Behavior - Gërxhani, Klarita
This essay advocates the use of experimental methods to study labor demand. Experimentation contributes to a better understanding of employers' hiring behavior by establishing what is cause and what is effect in observed behavior and allows for a better grip on the mechanisms underlying the hiring process. Given the difficulties in obtaining information from employers, experiments offer a fruitful alternative route to collecting information about the hiring process. The limited existing research provides a basis for new and promising steps into the future. To address research questions related to employers' hiring behavior, I propose combining experimental methods; implementing cross‐country experimental designs; conducting experiments on online labor markets; and using experimental control to explore the interaction between social context and biological factors. Taking these steps will give employers' decisions the attention they deserve when it comes to the important role that hiring plays in generating labor market (in)equalities. -
The Rise of Experimentation in Political Science - Rogowski, Ronald
Experimental research has expanded markedly in political science over the past 30 years: the number of experimental articles in the American Political Science Review has almost quintupled since the mid‐1980s. The main reason is intellectual: most scholars by now agree that random assignment of cases to “treatment” provides the most (perhaps the only) convincing evidence of causation. The second reason is technical advances that permit kinds of experimentation that, before about 2000, hardly existed: field, natural, and survey experiments. These have grown, while laboratory experiments have receded. While concerns remain about the external validity of these experiments, both journals and funding agencies will likely move increasingly in this direction. -
The Use of Geophysical Survey in Archaeology - Horsley, Timothy J.
This essay aims to introduce readers to geophysical methods that are currently employed to help archaeologists study the past. Geophysical techniques exploit differences between the physical properties of buried remains and the natural soil to allow their detection and characterization without—or in advance of—digging. When successfully applied, they have the potential to dramatically enhance archaeological investigations by providing a map of buried remains that can (i) help to assess an area for its archaeological potential; (ii) guide subsequent excavation; or (iii) be used as a tool to define and test research questions in their own right. Given the relatively rapid and noninvasive nature of these methods, it is possible to examine entire sites and landscapes, in some instances detecting features as small as individual post holes. While these techniques are routinely integrated into archaeological investigations in some parts of the world, their potential in many areas is only starting to be realized. It is expected that we will see continued growth in the number of surveys being conducted, as well as in the sizes of areas encompassed and in the range of their archaeological application. -
To Flop Is Human: Inventing Better Scientific Approaches to Anticipating Failure - Boruch, Robert
Postmortems and autopsies, at the individual and hospital unit levels, are disciplined approaches to learning from medical failures. “Safety factors” that engineers use in designing structures and systems are based on past failures or trials and experiments to find points of failure.