American Sociological Association

Search results

  1. Comment: Evidence, Plausibility, and Model Selection

    In his article, Michael Schultz examines the practice of model selection in sociological research. Model selection is often carried out by means of classical hypothesis tests. A fundamental problem with this practice is that these tests do not give a measure of evidence. For example, if we test the null hypothesis β = 0 against the alternative hypothesis β ≠ 0, what is the largest p value that can be regarded as strong evidence against the null hypothesis? What is the largest p value that can be regarded as any kind of evidence against the null hypothesis?
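
    The point about evidence can be made concrete with a standard calibration. The sketch below is my own illustration, not Schultz's or the commenter's: it computes the Sellke, Bayarri, and Berger (2001) lower bound on the Bayes factor for the null, -e * p * ln(p) for p < 1/e. Even p = 0.05 corresponds to odds of at most about 2.5 to 1 against the null, which is difficult to describe as strong evidence.

        # Minimal sketch: lower bound on the Bayes factor for H0 implied by a
        # p value (Sellke, Bayarri, and Berger 2001). Illustrative only.
        import math

        def min_bayes_factor(p):
            """Lower bound -e * p * ln(p) on the Bayes factor for H0, p < 1/e."""
            if not 0 < p < 1 / math.e:
                raise ValueError("bound applies only for 0 < p < 1/e")
            return -math.e * p * math.log(p)

        for p in (0.05, 0.01, 0.001):
            bf = min_bayes_factor(p)
            print(f"p = {p:<5}  min Bayes factor for H0 = {bf:.3f}  "
                  f"(odds against H0 at most {1 / bf:.1f}:1)")
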
  2. The Problem of Underdetermination in Model Selection

    Conventional model selection evaluates models on their ability to represent data accurately, ignoring their dependence on theoretical and methodological assumptions. Drawing on the concept of underdetermination from the philosophy of science, the author argues that uncritical use of methodological assumptions can pose a problem for effective inference. By ignoring the plausibility of assumptions, existing techniques select models that are poor representations of theory and are thus suboptimal for inference.
  3. Rejoinder: Can We Weight Models by Their Probability of Being True?

    We thank the commenters for thoughtful, constructive engagement with our paper (this volume, pp. 1–33). Throughout this discussion, there is strong consensus that model robustness analysis is essential to sociological research methods in the twenty-first century. Indeed, both O’Brien (this volume, pp. 34–39) and Western (this volume, pp. 39–43) identify examples of sociological research that is plagued by uncertainty over modeling decisions and how those decisions can change the results and conclusions of the analyses.
  4. Comment: Bayes, Model Uncertainty, and Learning from Data

    The problem of model uncertainty is a fundamental applied challenge in quantitative sociology. The authors’ language of false positives is reminiscent of Bonferroni adjustments and the frequentist analysis of multiple independent comparisons, but the distinct problem of model uncertainty has been fully formalized from a Bayesian perspective.
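
    As one concrete rendering of that Bayesian formalization, the sketch below, a toy of my own and not the commenter's analysis, approximates posterior model probabilities with BIC weights, P(M_k | D) proportional to exp(-BIC_k / 2), assuming equal prior probabilities across four candidate specifications.

        # Minimal sketch: BIC approximation to posterior model probabilities
        # (simulated data; constants in the BIC cancel out of the weights).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        x1, x2 = rng.normal(size=(2, n))
        y = 0.5 * x1 + rng.normal(size=n)      # x2 is irrelevant by construction

        def bic(X, y):
            """BIC of a Gaussian linear model fit by least squares."""
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            return n * np.log(rss / n) + X.shape[1] * np.log(n)

        ones = np.ones(n)
        models = {
            "y ~ 1":       np.column_stack([ones]),
            "y ~ x1":      np.column_stack([ones, x1]),
            "y ~ x2":      np.column_stack([ones, x2]),
            "y ~ x1 + x2": np.column_stack([ones, x1, x2]),
        }
        bics = np.array([bic(X, y) for X in models.values()])
        weights = np.exp(-0.5 * (bics - bics.min()))
        weights /= weights.sum()
        for name, w in zip(models, weights):
            print(f"{name:12s} posterior probability ~ {w:.3f}")
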
  5. Comment: Some Challenges When Estimating the Impact of Model Uncertainty on Coefficient Instability

    I once had a colleague who knew that inequality was related to an important dependent variable. This colleague knew many other things, but I focus on inequality as an example. It was difficult for my colleague to know just how to operationalize inequality. Should it be the percentage of income held by the top 10 percent, top 5 percent, or top 1 percent of the population? Should it be based on the ratio of median black income to median white income, or should it be the log of that ratio? Should it be based on the Gini index, or perhaps the Theil index would be better?
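
    The operationalization problem is easy to see in code. The sketch below is illustrative only, with simulated incomes standing in for real data; it computes several of the measures the colleague could have chosen, each a defensible "inequality" variable with a different value.

        # Minimal sketch: alternative operationalizations of inequality
        # applied to the same (simulated) income distribution.
        import numpy as np

        def top_share(income, q):
            """Share of total income held by the top fraction q of units."""
            x = np.sort(income)
            return x[int(np.ceil(len(x) * (1 - q))):].sum() / x.sum()

        def gini(income):
            """Gini index from the sorted-income formula."""
            x = np.sort(np.asarray(income, dtype=float))
            n = len(x)
            return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

        def theil(income):
            """Theil T index: mean of (x/mu) * ln(x/mu), positive incomes."""
            r = np.asarray(income, dtype=float)
            r = r / r.mean()
            return float(np.mean(r * np.log(r)))

        incomes = np.random.default_rng(1).lognormal(10, 0.8, size=10_000)
        print(f"top 10% share: {top_share(incomes, 0.10):.3f}")
        print(f"top 1% share:  {top_share(incomes, 0.01):.3f}")
        print(f"Gini index:    {gini(incomes):.3f}")
        print(f"Theil index:   {theil(incomes):.3f}")
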
  6. We Ran 9 Billion Regressions: Eliminating False Positives through Computational Model Robustness

    False positive findings are a growing problem in many research literatures. We argue that excessive false positives often stem from model uncertainty. There are many plausible ways of specifying a regression model, but researchers typically report only a few preferred estimates. This raises the concern that such research reveals only a small fraction of the possible results and may easily lead to nonrobust, false positive conclusions. It is often unclear how much the results are driven by model specification and how much the results would change if a different plausible model were used.
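
    A minimal version of this kind of computational robustness check, under simplifying assumptions of my own (one focal predictor, a small pool of optional controls, ordinary least squares), enumerates every specification and reports the distribution of the focal estimate rather than a single preferred one.

        # Minimal sketch: fit all models containing the focal variable x plus
        # every subset of plausible controls, then summarize the estimates.
        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000
        controls = rng.normal(size=(n, 4))                 # optional controls
        x = 0.4 * controls[:, 0] + rng.normal(size=n)      # focal predictor
        y = 0.3 * x + 0.5 * controls[:, 0] + rng.normal(size=n)

        estimates = []
        for k in range(controls.shape[1] + 1):
            for subset in combinations(range(controls.shape[1]), k):
                X = np.column_stack([np.ones(n), x, controls[:, list(subset)]])
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                estimates.append(beta[1])                  # coefficient on x

        estimates = np.array(estimates)
        print(f"{estimates.size} specifications; estimates range "
              f"[{estimates.min():.3f}, {estimates.max():.3f}]; "
              f"share positive: {(estimates > 0).mean():.2f}")
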
  7. Estimating Heterogeneous Treatment Effects with Observational Data

    Individuals differ not only in their background characteristics but also in how they respond to a particular treatment, intervention, or stimulation. In particular, treatment effects may vary systematically by the propensity for treatment. In this paper, we discuss a practical approach to studying heterogeneous treatment effects as a function of the treatment propensity, under the same assumption commonly underlying regression analysis: ignorability.
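
    The sketch below is a rough illustration of the stratification logic, not the paper's exact estimator: estimate each unit's propensity score from observed covariates (relying on ignorability), bin units into propensity strata, and check whether the stratum-specific effect estimates trend with the propensity for treatment.

        # Minimal sketch: heterogeneous effects by propensity-score stratum
        # (simulated data in which the true effect grows with the propensity).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        n = 5_000
        z = rng.normal(size=(n, 2))                    # observed confounders
        p_true = 1 / (1 + np.exp(-(z[:, 0] + 0.5 * z[:, 1])))
        d = rng.binomial(1, p_true)                    # treatment assignment
        y = (1 + 0.5 * z[:, 0]) * d + z.sum(axis=1) + rng.normal(size=n)

        # Step 1: estimated propensity scores from observed covariates.
        p_hat = LogisticRegression().fit(z, d).predict_proba(z)[:, 1]

        # Step 2: difference in mean outcomes within propensity quintiles.
        strata = np.digitize(p_hat, np.quantile(p_hat, [0.2, 0.4, 0.6, 0.8]))
        for s in range(5):
            m = strata == s
            effect = y[m & (d == 1)].mean() - y[m & (d == 0)].mean()
            print(f"propensity quintile {s + 1}: estimated effect = {effect:.2f}")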

  8. Comparing Regression Coefficients Between Same-sample Nested Models Using Logit and Probit: A New Method

    Logit and probit models are widely used in empirical sociological research. However, the common practice of comparing the coefficients of a given variable across differently specified models fitted to the same sample does not warrant the same interpretation in logits and probits as in linear regression. Unlike in linear models, the change in the coefficient of the variable of interest cannot be straightforwardly attributed to the inclusion of confounding variables. The reason is that the variance of the underlying latent variable is not identified and will differ between models.
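
    The identification problem is easy to demonstrate by simulation. In the sketch below, a toy of my own rather than the authors' method, the added covariate z is independent of x, so in a linear model the coefficient on x would be essentially unchanged; the logit coefficient still moves, because adding z shrinks the residual variance of the latent index and rescales all coefficients.

        # Minimal sketch: the x coefficient changes across nested logits even
        # though z is independent of x and therefore not a confounder.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 50_000
        x = rng.normal(size=n)
        z = rng.normal(size=n)                   # independent of x by design
        y = (x + z + rng.logistic(size=n) > 0).astype(int)  # latent index model

        reduced = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        full = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)
        print(f"coefficient on x, without z: {reduced.params[1]:.3f}")
        print(f"coefficient on x, with z:    {full.params[1]:.3f}")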

  9. Telephone Versus Face-to-Face Interviews: Mode Effect on Semistructured Interviews with Children

    Semistructured interviews are usually conducted face-to-face, and because personal contact is considered important in qualitative interviewing, telephone interviews are often discounted. The lack of visual communication can make a telephone conversation seem less personal and more anonymous, but it can also help prevent some distortions and put the power imbalance between the adult interviewer and the (child) respondent into perspective.

  10. Terror, Terrorism, Terrorists

    The terms terror, terrorism, and terrorist do not identify causally coherent and distinct social phenomena but rather strategies that recur across a wide variety of actors and political situations. Social scientists who reify the terms confuse themselves and do a disservice to public discussion. The U.S. government's own catalogs of terrorist events actually support both claims.