American Sociological Association


Search results

  1. Estimating the Relationship between Time-varying Covariates and Trajectories: The Sequence Analysis Multistate Model Procedure

    The relationship between processes and time-varying covariates is of central theoretical interest in addressing many social science research questions. On the one hand, event history analysis (EHA) has been the method of choice for studying such relationships when the outcomes can be meaningfully specified as simple instantaneous events or transitions.
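    A minimal sketch of the event history side of this comparison, assuming synthetic person-period data and a discrete-time logit hazard; every variable name and number below is illustrative, not taken from the paper:

    ```python
    # Discrete-time event history analysis (EHA) sketch: a logit fit to
    # person-period data with a time-varying covariate. Synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_people, n_periods = 500, 10

    rows = []
    for i in range(n_people):
        for t in range(n_periods):
            x = rng.normal()                          # time-varying covariate
            p = 1 / (1 + np.exp(-(-3.0 + 0.8 * x)))   # hazard of the event
            event = rng.random() < p
            rows.append((x, t, int(event)))
            if event:                                 # censor after first event
                break

    data = np.array(rows)
    X = sm.add_constant(data[:, :2])                  # constant, covariate, period
    y = data[:, 2]
    print(sm.Logit(y, X).fit(disp=0).summary())
    ```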
  2. Limitations of Design-based Causal Inference and A/B Testing under Arbitrary and Network Interference

    Randomized experiments on a network often involve interference between connected units, namely, a situation in which an individual’s treatment can affect the response of another individual. Current approaches to deal with interference, in theory and in practice, often make restrictive assumptions on its structure—for instance, assuming that interference is local—even when using otherwise nonparametric inference strategies.
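    A toy simulation, not the authors' estimator, of the kind of situation described: when outcomes depend on treated neighbors, the standard A/B contrast recovers only the direct effect and misses the spillover, so it misstates the treat-everyone versus treat-no-one effect.

    ```python
    # Naive difference in means under network interference (illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    neighbors = rng.integers(0, n, size=(n, 5))   # toy random "network"

    z = rng.random(n) < 0.5                       # Bernoulli(1/2) assignment
    direct, spill = 1.0, 0.5
    exposure = z[neighbors].mean(axis=1)          # share of treated neighbors
    y = direct * z + spill * exposure + rng.normal(0, 1, n)

    naive = y[z].mean() - y[~z].mean()
    print(f"naive A/B estimate: {naive:.2f}")
    print(f"treat-all vs. treat-none effect: {direct + spill:.2f}")
    ```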
  3. Rejoinder: On the Assumptions of Inferential Model Selection—A Response to Vassend and Weakliem

    I am grateful to Professors Vassend and Weakliem for their comments on my paper (this volume, pp. 52–87) and its admittedly unusual approach to model selection, and to the Sociological Methodology editors for the opportunity to respond. My goal here is not to defend the inferential information criterion (IIC) against every point raised by Vassend (this volume, pp. 91–97) and Weakliem (this volume, pp. 88–91). My paper aimed to (1) show how methodological assumptions interfere with inferences about theory and (2) develop a practical approach to minimize this interference.
  4. Comment: The Inferential Information Criterion from a Bayesian Point of View

    As Michael Schultz notes in his very interesting paper (this volume, pp. 52–87), standard model selection criteria, such as the Akaike information criterion (AIC; Akaike 1974), the Bayesian information criterion (BIC; Schwarz 1978), and the minimum description length principle (MDL; Rissanen 1978), are purely empirical criteria in the sense that the score a model receives does not depend on how well the model coheres with background theory. This is unsatisfying because we would like our models to be theoretically plausible, not just empirically successful.
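    A worked example of the purely empirical scoring the comment refers to, using the standard formulas AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L for two nested least-squares fits; the data and specifications are made up for illustration:

    ```python
    # Compute AIC and BIC for two candidate OLS models on synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

    def gaussian_loglik(resid):
        s2 = resid.var()              # MLE of the error variance
        return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

    for k, X in [(2, np.column_stack([np.ones(n), x])),
                 (3, np.column_stack([np.ones(n), x, x**2]))]:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        ll = gaussian_loglik(y - X @ beta)
        k_tot = k + 1                 # count the variance parameter too
        print(f"k={k}: AIC={2*k_tot - 2*ll:.1f}, "
              f"BIC={k_tot*np.log(n) - 2*ll:.1f}")
    ```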
  5. Comment: Evidence, Plausibility, and Model Selection

    In his article, Michael Schultz examines the practice of model selection in sociological research. Model selection is often carried out by means of classical hypothesis tests. A fundamental problem with this practice is that these tests do not give a measure of evidence. For example, if we test the null hypothesis β = 0 against the alternative hypothesis β ≠ 0, what is the largest p value that can be regarded as strong evidence against the null hypothesis? What is the largest p value that can be regarded as any kind of evidence against the null hypothesis?
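    The question can be made concrete with the textbook BIC approximation to the Bayes factor for H0: β = 0 versus H1: β ≠ 0 (for one extra parameter, BF01 ≈ exp((ln n − z²)/2)); this is a standard approximation used for illustration here, not the commenter's own method:

    ```python
    # A fixed p value does not correspond to a fixed amount of evidence:
    # hold p = 0.05 (z = 1.96) and watch the Bayes factor for the null
    # grow with the sample size.
    import math

    z = 1.96                          # test statistic exactly at p = 0.05
    for n in [50, 100, 1000, 100000]:
        bf01 = math.exp((math.log(n) - z**2) / 2)
        print(f"n={n:>6}: p=0.05 but BF in favor of the null = {bf01:.2f}")
    ```

    At n = 100 the same p = 0.05 already corresponds to a Bayes factor mildly favoring the null, and at n = 100,000 it strongly favors the null, which is one way of seeing why no single p-value threshold can serve as a measure of evidence.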
  6. The Problem of Underdetermination in Model Selection

    Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.
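    A small sketch of underdetermination in this sense, on synthetic data: two substantively different functional forms fit the same observations almost equally well, so accuracy alone cannot decide between the theories they represent.

    ```python
    # Two specifications, near-identical empirical fit, different theory.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(1, 5, 300)
    y = np.log(x) + rng.normal(0, 0.3, 300)   # "true" process is logarithmic

    for name, f in [("log(x)", np.log(x)), ("sqrt(x)", np.sqrt(x))]:
        X = np.column_stack([np.ones_like(x), f])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = ((y - X @ beta) ** 2).sum()
        print(f"{name}: residual sum of squares = {rss:.1f}")
    # Only assumptions from outside the data (theory, measurement)
    # can justify preferring one form over the other.
    ```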
  7. Rejoinder: Can We Weight Models by Their Probability of Being True?

    We thank the commenters for thoughtful, constructive engagement with our paper (this volume, pp. 1–33). Throughout this discussion, there is strong consensus that model robustness analysis is essential to sociological research methods in the twenty-first century. Indeed, both O’Brien (this volume, pp. 34–39) and Western (this volume, pp. 39–43) identify examples of sociological research that is plagued by uncertainty over modeling decisions and how those decisions can change the results and conclusions of the analyses.
  8. Comment: Bayes, Model Uncertainty, and Learning from Data

    The problem of model uncertainty is a fundamental applied challenge in quantitative sociology. The authors’ language of false positives is reminiscent of Bonferroni adjustments and the frequentist analysis of multiple independent comparisons, but the distinct problem of model uncertainty has been fully formalized from a Bayesian perspective.
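    A minimal sketch of the Bayesian formalization alluded to here: approximate posterior model probabilities from BIC (w ∝ exp(−BIC/2)) and average a coefficient across models. The numbers are hypothetical, not from the commenter's analysis:

    ```python
    # BIC-based posterior model weights and a model-averaged coefficient.
    import numpy as np

    bics  = np.array([1012.4, 1009.8, 1015.1])   # candidate models' BICs
    betas = np.array([0.42, 0.35, 0.51])         # coefficient of interest

    w = np.exp(-(bics - bics.min()) / 2)         # shift by min for stability
    w /= w.sum()
    print("posterior model weights:", np.round(w, 3))
    print("model-averaged beta:", np.round(w @ betas, 3))
    ```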
  9. Comment: Some Challenges When Estimating the Impact of Model Uncertainty on Coefficient Instability

    I once had a colleague who knew that inequality was related to an important dependent variable. This colleague knew many other things, but I focus on inequality as an example. It was difficult for my colleague to know just how to operationalize inequality. Should it be the percentage of income held by the top 10 percent, top 5 percent, or top 1 percent of the population? Should it be based on the ratio of median black income to median white income, or should it be the log of that ratio? Should it be based on the Gini index, or perhaps the Theil index would be better?
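    The operationalization problem can be made concrete by computing several of the measures named above from one and the same (synthetic) income distribution; the distribution and its parameters are invented for illustration:

    ```python
    # Several common operationalizations of inequality, same data.
    import numpy as np

    rng = np.random.default_rng(4)
    income = rng.lognormal(mean=10, sigma=0.8, size=10000)

    def top_share(x, q):              # share of income held by the top q
        cut = np.quantile(x, 1 - q)
        return x[x >= cut].sum() / x.sum()

    def gini(x):
        x = np.sort(x)
        n = len(x)
        return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

    def theil(x):
        r = x / x.mean()
        return (r * np.log(r)).mean()

    print("top 10% share:", round(top_share(income, 0.10), 3))
    print("top 1% share: ", round(top_share(income, 0.01), 3))
    print("Gini index:   ", round(gini(income), 3))
    print("Theil index:  ", round(theil(income), 3))
    ```

    Each measure is a defensible choice, yet they need not move together across samples, which is exactly the coefficient-instability problem the comment goes on to discuss.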
  10. We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness

    False positive findings are a growing problem in many research literatures. We argue that excessive false positives often stem from model uncertainty. There are many plausible ways of specifying a regression model, but researchers typically report only a few preferred estimates. This raises the concern that such research reveals only a small fraction of the possible results and may easily lead to nonrobust, false positive conclusions. It is often unclear how much the results are driven by model specification and how much the results would change if a different plausible model were used.
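    A small sketch of the computational approach the title describes: estimate the focal coefficient under every combination of optional controls and inspect the whole distribution of estimates rather than one preferred specification. The data are synthetic and the scale is tiny (64 models, not 9 billion):

    ```python
    # Enumerate all control-variable subsets and refit the regression.
    import itertools
    import numpy as np

    rng = np.random.default_rng(5)
    n = 500
    focal = rng.normal(size=n)
    controls = {f"c{j}": rng.normal(size=n) for j in range(6)}
    y = 0.3 * focal + 0.5 * controls["c0"] + rng.normal(0, 1, n)

    estimates = []
    names = list(controls)
    for r in range(len(names) + 1):
        for subset in itertools.combinations(names, r):
            X = np.column_stack([np.ones(n), focal]
                                + [controls[c] for c in subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            estimates.append(beta[1])        # coefficient on focal

    estimates = np.array(estimates)
    print(f"{len(estimates)} specifications; focal coefficient ranges "
          f"{estimates.min():.3f} to {estimates.max():.3f}")
    ```

    A conclusion that survives across the full set of plausible specifications is, in this framework, far less likely to be a nonrobust false positive.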