Essays
-
Ambulatory Assessment: Methods for Studying Everyday Life - Conner, Tamlin S.
Ambulatory assessment is a class of methods that use mobile technology to understand people's biopsychosocial processes in natural settings, in real time, and on repeated occasions. In this essay, we discuss the rationale for ambulatory assessment, including the benefits of measuring people in the real world (greater ecological validity, better understanding of people in contexts), in real time (avoidance of memory bias, greater sensitivity for capturing change), and over time (capturing within‐person patterns and temporal trends). Then, we review the latest ambulatory assessment techniques for measuring experiences, behaviors, and physiology in daily life. Experiences such as emotions, physical pain, and daily stressors can be tracked using daily diaries and smartphone‐based experience sampling. Behaviors such as activity, movement, location, and natural language use can be tracked using accelerometers, portable actigraphs, global positioning system (GPS) coordinates, and the electronically activated recorder (EAR). Physiological processes such as heart rate, blood pressure, and electrodermal activity can be measured using an array of ambulatory biosensors. Ambulatory assessment will continue to be revolutionized by smartphones, which are becoming seamlessly integrated into people's lives. Emerging trends include social sensing applications that make inferences about users' psychological processes from multi‐channel information collected by smartphones, the emergence of “big data collection,” whereby ambulatory assessment data are gathered en masse from large populations, and the growing field of mobile health. These trends raise questions about the protection of participants' privacy and the synthesis of immense amounts of digital data. Ultimately, these developments will narrow the separation between science and everyday life as ambulatory assessment becomes an integrated part of people's mobile lives. -
Content Analysis - Stemler, Steven E.
In the era of “big data,” the methodological technique of content analysis can be the most powerful tool in the researcher's kit. Content analysis is versatile enough to apply to textual, visual, and audio data. Given the massive explosion in permanent, archived linguistic, photographic, video, and audio data arising from the proliferation of technology, the technique of content analysis appears to be on the verge of a renaissance. In this essay, I discuss cutting‐edge examples of how content analysis is being applied or might be applied to the study of areas as diverse as education, criminology, and social intelligence. -
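The dictionary‐based coding at the heart of many content analyses can be sketched in a few lines: a codebook maps categories to indicator terms, and documents are scored by counting term occurrences per category. The categories and terms below are purely illustrative assumptions, not drawn from the essay.

```python
from collections import Counter
import re

# Hypothetical codebook: categories and their indicator terms
# (illustrative only; a real study would validate these against coders).
CODEBOOK = {
    "education": {"school", "teacher", "student"},
    "crime": {"police", "offense", "sentence"},
}

def code_document(text):
    """Tally how often each category's indicator terms appear in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, terms in CODEBOOK.items():
            if token in terms:
                counts[category] += 1
    return counts

print(code_document("The teacher asked the student; police filed an offense report."))
```

The same tallying logic extends to transcribed audio or captioned images, which is what makes the technique so versatile across media.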
Data Mining - Murray, Gregg R.
This essay introduces data mining as an analytical technique for novice to professional social and behavioral scientists. It presents data mining, also known as, among other things, data analytics and predictive analytics, as an effective tool for researchers interested in the analysis of “big data” as well as small, unique data sets. It addresses foundational elements of data mining, such as how to avoid “data dredging” and the importance of theory as embodied in researcher domain expertise. It also briefly defines and describes classification analysis, association rules, and clustering, the major methodologies among the many that constitute data mining, and identifies the analytical problems and data for which these techniques are best suited. It goes on to highlight a number of cutting‐edge studies that relied on data mining techniques in disciplines such as criminal justice, education, health sciences, linguistics, political science, and sociology. The essay concludes with a review of key considerations for future research, including the burgeoning of new analytical techniques, new data sets and sources, the importance and protection of data‐source privacy, and the ethical obligation researchers have to make full use of the costly data on social and behavioral issues collected by scientists and society. -
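Of the methodologies the essay names, association rules are the easiest to sketch: their two standard measures, support and confidence, reduce to simple co-occurrence counts. The transaction data below are toy values invented for illustration.

```python
# Toy "market basket" transactions (illustrative, not from the essay).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent): how often the consequent
    appears among transactions that contain the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))       # support of {bread, milk} -> 0.5
print(confidence({"bread"}, {"milk"}))  # confidence of bread -> milk
```

Mining then amounts to searching for rules whose support and confidence clear chosen thresholds; the "data dredging" warning in the essay is exactly about accepting such rules without domain expertise to judge whether they are meaningful.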
Digital Methods for Web Research - Rogers, Richard
Digital methods are techniques for the study of societal change and cultural conditions using online data. They make use of available digital objects such as the hyperlink, tag, timestamp, like, share, and retweet, and seek to learn from how these objects are treated by the methods built into the dominant online devices, such as Google Web Search and Facebook's Graph Search. They endeavor to repurpose these online methods and services with a social research outlook. Ultimately, the question is where to locate the baseline, and whether the findings may be grounded in online data. Digital methods as a research practice are part of the computational turn in the humanities and social sciences, and as such may be situated alongside other recent approaches, such as cultural analytics, culturomics, and virtual methods, where distinctions may be made about the types of data employed (natively digital or digitized) as well as the method (written for the medium, or migrated to it). The limitations of digital methods are also treated. Digital methods recognize the problems with web data, such as the impermanence of web services and the instability of data streams, where, for example, APIs (application programming interfaces) are reconfigured or discontinued. They also grapple with the quality of web data and the challenges of longitudinal study, where, for instance, all of Twitter's tweets may be archived by the Library of Congress, yet new types of gaps emerge owing to changes over the years in the company's terms of service. -
Hierarchical Models for Causal Effects - Feller, Avi
Hierarchical models play three important roles in modeling causal effects: (i) accounting for data collection, such as in stratified and split‐plot experimental designs; (ii) adjusting for unmeasured covariates, such as in panel studies; and (iii) capturing treatment effect variation, such as in subgroup analyses. Across all three areas, hierarchical models, especially Bayesian hierarchical modeling, offer substantial benefits over classical, non‐hierarchical approaches. After discussing each of these topics, we explore some recent developments in the use of hierarchical models for causal inference and conclude with some thoughts on new directions for this research area. -
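The partial pooling that distinguishes hierarchical estimates from classical, non-hierarchical ones can be illustrated with a deliberately simplified shrinkage rule: each group's mean is pulled toward the grand mean, more strongly for small groups. The group names, values, and the `shrinkage` constant below are all assumptions for illustration, not the essay's model (a full treatment would estimate the shrinkage from the data, e.g. via a Bayesian hierarchical fit).

```python
import statistics

# Toy subgroup outcomes (illustrative). Each group has a few noisy observations.
groups = {
    "site_A": [2.0, 2.5, 3.0],
    "site_B": [5.0, 5.5],
    "site_C": [1.0, 1.5, 2.0, 2.5],
}

grand_mean = statistics.mean(x for xs in groups.values() for x in xs)

def partially_pooled_mean(xs, shrinkage=2.0):
    """Shrink a group's mean toward the grand mean.

    `shrinkage` acts as a prior 'pseudo-sample size': groups with few
    observations are pulled more strongly toward the overall average,
    which is the core behaviour of a hierarchical (multilevel) estimate.
    """
    n = len(xs)
    return (n * statistics.mean(xs) + shrinkage * grand_mean) / (n + shrinkage)

for name, xs in groups.items():
    print(name, statistics.mean(xs), round(partially_pooled_mean(xs), 3))
```

Note how the two-observation group moves furthest: hierarchical models trade a little bias for a large reduction in variance on sparse subgroups, which is why they help in subgroup analyses.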
Meta‐Analysis - Hedges, Larry V.
Meta‐analysis is the use of statistical methods to combine the results of independent research studies. The results of each study are summarized by one or more indices of effect size and a sampling uncertainty (variance) for each effect. Representing study results by effect sizes permits the use of statistical methods to synthesize those results across studies. This essay describes the most frequently used effect sizes and their properties. It describes how the two principal types of analytic methodology in meta‐analysis (fixed and random effects models) are used to estimate an average effect across studies. It also discusses how heterogeneity of effects across studies can be detected via a heterogeneity test and modeled as a function of study characteristics. In addition, this essay describes areas of current research in meta‐analysis. One area is the development of methods to handle dependencies that arise when the results of studies are described by several effect sizes computed from data on the same individuals. Another involves methods for detecting and correcting publication bias. A third is the development of methods to incorporate more complex study designs into meta‐analyses, including multilevel experiments and the single‐case designs used in behavioral psychology, special education, and some areas of medicine. -
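The fixed-effect machinery the essay describes fits in a few lines: an inverse-variance weighted average of the study effects, plus Cochran's Q, one standard heterogeneity statistic. The effect sizes and variances below are invented for illustration.

```python
import math

# Effect sizes and sampling variances from k = 4 independent studies
# (numbers are illustrative, not from any real synthesis).
effects = [0.30, 0.10, 0.45, 0.25]
variances = [0.02, 0.03, 0.05, 0.01]

# Fixed-effect model: weight each study by the inverse of its variance.
weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect

# Cochran's Q tests heterogeneity of effects across studies; under the
# null of a common effect it is chi-square with k - 1 degrees of freedom.
q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))

print(round(pooled, 3), round(se, 3), round(q, 3))
```

A random-effects analysis would start from the same quantities, using Q to estimate the between-study variance and adding it to each study's weight denominator.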
Network Research Experiments - Linton, Allen L.
This essay lays the foundation of modern social network research, with a push toward innovative ways to create data, or to access newly available data, in order to address meaningful political questions. We outline potential new data resources, discuss the emergent theoretical arguments involving political networks, and present some current empirical estimates of the magnitude of the effects of political networks. With the rise of social media and new technology, ordinary citizens socialize online with old friends from elementary school, siblings across the country, and local neighbors. While these relationships have long been part of the social fabric of ordinary life, the ability to observe these exchanges directly and on a daily basis is new, for both researchers and citizens. Records of our social interactions have the potential to transform our academic understanding of the relationship between communication among family, friends, and coworkers and how we become informed about politics and act politically. Whether a relationship occurs online or offline, its social element can be vital in understanding how individuals react to and interact with their political environments. Processing and understanding these interactions, however, can be difficult without knowing where to look for new information, what patterns to look for, and how to interpret data in the context of other findings on the effects of social and political networks. We conclude by considering the new and exciting directions this research may take in the future. -
Person‐Centered Analysis - Von Eye, Alexander
The majority of data analyses in the empirical sciences that are concerned with humans proceed at the level of variables. Typical results relate variables to each other, for example, in correlational or regression‐type statements. In these analyses, individuals are considered random data carriers, replaceable without damage by other individuals who are also random data carriers. This type of research is known as variable‐oriented. It has been shown that statements at the aggregate level, that is, variable‐oriented statements, are rarely applicable to the individual case. In contrast, person‐oriented research, also known as person‐centered research, proposes focusing on the individual. Analyses in person‐oriented research differ from the procedures that are customary in variable‐oriented research. In person‐oriented research, parameters are estimated first at the level of the individual. If generalization is the goal of analysis, aggregation takes place at the level of parameters instead of raw data. The implications of this strategy are major: data need to be collected differently than in variable‐oriented research, data analysis is different, and the resulting statements are different as well. This essay introduces readers to person‐oriented research and gives two examples of person‐oriented data analysis, namely, configural frequency analysis and item response modeling. -
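The two-step logic (estimate a parameter for each individual first, then aggregate at the level of parameters) can be sketched with per-person regression slopes. The people and repeated observations below are toy data; configural frequency analysis and item response modeling follow the same individual-first strategy with different parameters.

```python
import statistics

# Toy repeated measures: each person contributes their own (x, y) series
# (illustrative values only).
people = {
    "p1": [(1, 2.1), (2, 3.9), (3, 6.2)],
    "p2": [(1, 5.0), (2, 4.1), (3, 3.0)],
    "p3": [(1, 1.0), (2, 1.9), (3, 3.1)],
}

def ols_slope(pairs):
    """Least-squares slope fitted to one person's own data."""
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)

# Step 1: estimate a parameter separately for each individual.
slopes = {name: ols_slope(pairs) for name, pairs in people.items()}

# Step 2: if generalization is the goal, aggregate the parameters,
# not the raw data pooled across people.
print(slopes)
print("mean slope:", round(statistics.mean(slopes.values()), 3))
```

Note that p2's trajectory runs in the opposite direction from the others; pooling raw data in a single variable-oriented regression would average this away, which is exactly the loss person-oriented analysis avoids.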
Quasi‐Experiments - Reichardt, Charles S.
Quasi‐experiments are research designs used to estimate treatment effects when treatments are not assigned at random. Research in quasi‐experimentation will advance on four fronts. First, researchers will elaborate the complete array of quasi‐experimental comparisons. Second, researchers will refine statistical methods for taking account of initial selection differences. Third, researchers will both improve sensitivity analyses to take account of biases and create empirically based theories of the degree to which biases are removed. And fourth, researchers will assess how well quasi‐experiments address the full panoply of complications that arise in practice. -
Repeated Cross‐Sections in Survey Data - Brady, Henry E.
Examples of repeated cross‐sections (RCS) include daily tracking polls of political opinions during campaigns, monthly Current Population Surveys of unemployment, yearly national health interview surveys, and quadrennial election studies of presidential voting. Each iteration is a distinct sample, as opposed to panels in which the same people are interviewed two or more times. By asking the same questions of repeated survey samples from the same population, RCS studies allow us to track trends and to support causal inferences. One analytic challenge is to maintain both the representativeness and the comparability of samples as fieldwork methods or sources change. The longer the span covered by an RCS, the likelier it is that the universe will change; for an RCS spanning decades, populations can change in fundamental ways. The universe of content also changes, as issues of one period are redefined or even rendered irrelevant in another. Extracting trends from RCS data typically requires smoothing to separate signal from noise, especially where samples or subsamples are small, but this can lead to bias from excessive smoothing, or to mistaking sampling noise for signal when there is not enough smoothing. By deploying time, the RCS design enables certain kinds of causal inference, but many alternative micro‐processes are observationally equivalent, so the RCS benefits from being combined with the panel design. -
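The smoothing trade-off the abstract describes can be seen with the simplest smoother, a moving average: a wider window suppresses more sampling noise but also blurs genuine short-run change. The series below is a made-up daily tracking poll.

```python
# Toy tracking-poll series (illustrative): noisy daily support percentages.
series = [48, 52, 47, 51, 53, 49, 55, 54, 50, 56]

def moving_average(xs, window):
    """Simple moving average over consecutive windows of the series.

    Larger `window` values damp sampling variability (less risk of
    mistaking noise for signal) but over-smooth real movement (bias).
    """
    out = []
    for i in range(len(xs) - window + 1):
        out.append(sum(xs[i:i + window]) / window)
    return out

print(moving_average(series, 3))
```

With daily samples of a few hundred respondents, day-to-day swings of several points are within sampling error, which is why published tracking polls almost always report a multi-day rolling average rather than the raw series.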
The Use of Geophysical Survey in Archaeology - Horsley, Timothy J.
This essay aims to introduce readers to geophysical methods that are currently employed to help archaeologists study the past. Geophysical techniques exploit differences between the physical properties of buried remains and the natural soil to allow their detection and characterization without—or in advance of—digging. When successfully applied, they have the potential to dramatically enhance archaeological investigations by providing a map of buried remains that can (i) help to assess an area for its archaeological potential; (ii) guide subsequent excavation; or (iii) be used as a tool to define and test research questions in their own right. Given the relatively rapid and noninvasive nature of these methods, it is possible to examine entire sites and landscapes, in some instances detecting features as small as individual post holes. While these techniques are routinely integrated into archaeological investigations in some parts of the world, their potential in many areas is only starting to be realized. It is expected that we will see continued growth in the number of surveys being conducted, as well as in the sizes of areas encompassed and in the range of their archaeological application. -
To Flop Is Human: Inventing Better Scientific Approaches to Anticipating Failure - Boruch, Robert
Postmortems and autopsies, at the individual and hospital unit levels, are disciplined approaches to learning from medical failures. The “safety factors” that engineers use in designing structures and systems are likewise based on past failures, or on trials and experiments designed to find points of failure. -
Virtual Worlds as Laboratories - Ross, Travis L.
A virtual world is a persistent space where tens, hundreds, thousands, or even millions of users interact with each other and with a mediated environment, defined as a physical space, through rules created by designers and enforced by computer code. Researchers have argued that these characteristics make virtual worlds particularly well suited for conducting parallel experiments to test macro‐level social theory. The purpose of this essay is to provide an introduction to virtual worlds research. It is not an exhaustive resource chronicling the history of virtual worlds, but rather an introduction, broken into three sections, for those wishing to learn more about the past, present, and future directions of the topic. First, it explores what researchers have said about using virtual worlds for research and the fields in which virtual worlds have been used, focusing on work in video game studies and complex systems. Second, it examines cutting‐edge work in virtual worlds research, noting that both academia and the game industry will play a significant role in its success and direction. Third, it identifies six key issues that scholars using virtual worlds research will face as they move forward.