American Sociological Association

The search found 117 results.

Search results

  1. Causal Inference with Networked Treatment Diffusion

    Treatment interference (i.e., when one unit’s potential outcomes depend on other units’ treatments) is prevalent in social settings. Ignoring treatment interference can lead to biased estimates of treatment effects and incorrect statistical inferences. Some recent studies have started to incorporate treatment interference into causal inference, but interference is often assumed to follow a simple structure (e.g., that it exists only within groups) or is measured in a simplistic way (e.g., based only on the number of treated friends).
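    As a minimal illustration of the simplistic exposure measure the abstract mentions, the sketch below counts each unit’s treated friends on a toy network. The networkx graph, the random treatment assignment, and all parameters are assumptions made for this example; it is not the paper’s data or proposed estimator.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Toy friendship network and a random treatment assignment
# (illustrative only; not the paper's data or method).
G = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)
treated = {node: int(rng.binomial(1, 0.5)) for node in G.nodes}

# The "simplistic" exposure measure the abstract criticizes:
# exposure is just the count (or share) of treated friends.
exposure = {}
for node in G.nodes:
    friends = list(G.neighbors(node))
    n_treated = sum(treated[f] for f in friends)
    exposure[node] = {
        "n_treated_friends": n_treated,
        "share_treated_friends": n_treated / len(friends) if friends else 0.0,
    }
```

    Richer interference structures (e.g., diffusion along longer paths or over weighted ties) are exactly what such a simple count ignores.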
  2. Status Threat, Material Interests, and the 2016 Presidential Vote

    Diana Mutz’s April 2018 article, “Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote,” published in the Proceedings of the National Academy of Sciences, contradicts prior sociological research on the 2016 election. Mutz’s article received widespread media coverage because of the strength of its primary conclusion, declaimed in its title. The present article is a critical reanalysis of the models offered by Mutz, using the data files released along with her article.
  3. Limitations of Design-based Causal Inference and A/B Testing under Arbitrary and Network Interference

    Randomized experiments on a network often involve interference between connected units, namely, a situation in which an individual’s treatment can affect the response of another individual. Current approaches to deal with interference, in theory and in practice, often make restrictive assumptions on its structure—for instance, assuming that interference is local—even when using otherwise nonparametric inference strategies.
  4. The Problem of Underdetermination in Model Selection

    Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.
  5. We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness

    False positive findings are a growing problem in many research literatures. We argue that excessive false positives often stem from model uncertainty. There are many plausible ways of specifying a regression model, but researchers typically report only a few preferred estimates. This raises the concern that such research reveals only a small fraction of the possible results and may easily lead to nonrobust, false positive conclusions. It is often unclear how much the results are driven by model specification and how much the results would change if a different plausible model were used.
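    The general strategy of computational model robustness can be sketched by estimating the same focal coefficient under every plausible combination of control variables and examining how much the estimate moves. The example below is a minimal sketch under fabricated data and only 16 specifications; the variable names and the OLS setup are assumptions for illustration, not the authors’ actual 9-billion-model analysis.

```python
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Fabricated data: x is the focal predictor, z1..z4 are optional controls
# a researcher might plausibly include or exclude.
n = 500
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=["x", "z1", "z2", "z3", "z4"])
df["y"] = 0.2 * df["x"] + 0.5 * df["z1"] + rng.normal(size=n)

controls = ["z1", "z2", "z3", "z4"]
estimates = []
# One regression per subset of controls: 2**4 = 16 specifications here;
# with many more modeling choices the count explodes.
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        X = sm.add_constant(df[["x", *subset]])
        estimates.append(sm.OLS(df["y"], X).fit().params["x"])

print(f"{len(estimates)} models; coefficient on x ranges "
      f"from {min(estimates):.3f} to {max(estimates):.3f}")
```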
  6. Estimating Income Statistics from Grouped Data: Mean-constrained Integration over Brackets

    Researchers studying income inequality, economic segregation, and other subjects must often rely on grouped data—that is, data in which thousands or millions of observations have been reduced to counts of units by specified income brackets.
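    To make the idea of grouped data concrete, the sketch below computes a naive mean income from made-up bracket counts under the crude assumption that every unit sits at its bracket midpoint. It is not the mean-constrained integration method the title refers to; the bracket cutoffs, counts, and top-bracket guess are all invented for illustration.

```python
# Fabricated grouped data: counts of households per income bracket.
brackets = [
    (0, 25_000, 120),
    (25_000, 50_000, 180),
    (50_000, 100_000, 150),
    (100_000, 200_000, 60),
    (200_000, None, 15),  # open-ended top bracket
]

# Naive midpoint estimate, with an arbitrary guess for the open-ended top
# bracket. Methods designed for grouped data exist precisely because this
# kind of assumption is crude and can distort income statistics.
TOP_BRACKET_GUESS = 300_000
total = sum(count for _, _, count in brackets)
mean_income = sum(
    count * ((lo + hi) / 2 if hi is not None else TOP_BRACKET_GUESS)
    for lo, hi, count in brackets
) / total
print(f"Households: {total}; midpoint-based mean income: ${mean_income:,.0f}")
```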
  7. How to Think Scientifically about Scientists’ Proposals for Fixing Science

    Science is in crisis. Any doubt about this status has surely been dispelled by the loud assurances to the contrary by various authority figures who are deeply invested in the current system and have written things such as, “Psychology is not in crisis, contrary to popular rumor. . . . Crisis or no crisis, the field develops consensus about the most valuable insights. . . . National panels will convene and caution scientists, reviewers, and editors to uphold standards” (Fiske, Schacter, and Taylor 2016: v).
  8. Deciding on the Starting Number of Classes of a Latent Class Tree

    In recent studies, latent class tree (LCT) modeling has been proposed as a convenient alternative to standard latent class (LC) analysis. Instead of using an estimation method in which all classes are formed simultaneously given the specified number of classes, in LCT analysis a hierarchical structure of mutually linked classes is obtained by sequentially splitting classes into two subclasses. The resulting tree structure gives a clear insight into how the classes are formed and how solutions with different numbers of classes are substantively linked to one another.
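    The sequential splitting that produces the tree can be sketched schematically. In the example below, a two-component Gaussian mixture and a BIC comparison stand in for the latent class model and the substantive splitting criteria a real LCT analysis would use; the data, the stopping rules, and the stand-in model are all assumptions of this sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_recursively(X, depth=0, max_depth=3):
    """Schematic latent-class-tree-style splitting: try to divide the cases
    in a node into two subclasses, keeping the split only when a 2-class
    model fits better (lower BIC) than a 1-class model."""
    if depth >= max_depth or len(X) < 20:
        return
    one = GaussianMixture(n_components=1, random_state=0).fit(X)
    two = GaussianMixture(n_components=2, random_state=0).fit(X)
    if two.bic(X) >= one.bic(X):
        return  # no support for two subclasses; stop splitting this node
    labels = two.predict(X)
    print(f"{'  ' * depth}split node of {len(X)} cases into "
          f"{int(np.sum(labels == 0))} / {int(np.sum(labels == 1))}")
    for k in (0, 1):
        split_recursively(X[labels == k], depth + 1, max_depth)

# Fabricated continuous data with some cluster structure, for illustration only.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=m, size=(150, 2)) for m in (0, 3, 6)])
split_recursively(X)
```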
  9. Item Location, the Interviewer–Respondent Interaction, and Responses to Battery Questions in Telephone Surveys

    Survey researchers often ask a series of attitudinal questions with a common question stem and response options, known as battery questions. Interviewers have substantial latitude in deciding how to administer these items, including whether to reread the common question stem on items after the first one or to probe respondents’ answers. Despite the ubiquity of these items, there is virtually no research on whether respondent and interviewer behaviors on battery questions differ across items in a battery or whether these behaviors are associated with answers to the questions.
  10. Anticipatory Minority Stressors among Same-sex Couples: A Relationship Timeline Approach

    The authors build on previous stress theories by drawing attention to the concept of anticipatory couple-level minority stressors (i.e., stressors expected to occur in the future that emanate from the stigmatization of certain relationship forms). A focus on anticipatory couple-level minority stressors brings with it the potential for important insight into vulnerabilities and resiliencies of people in same-sex relationships, the focus of this study. The authors use relationship timelines to examine stressors among a diverse sample of same-sex couples (n = 120).