Clemente, E. G. (Elena Giulia)

Search Results

  • On the trajectory of discrimination: A meta-analysis and forecasting survey capturing 44 years of field experiments on gender and hiring decisions
    (Elsevier, 2023-11-10) Clark, C.J. (Cory J.); Lakens, D. (Daniël); Gender Audits Forecasting Collaboration; Pfeiffer, T. (Thomas); Dreber, A. (Anna); Nguyen, M.H.B. (My Hoang Bao); Schaerer, M. (Michael); Tiokhin, L. (Leo); Du-Plessis, C. (Christilene); Uhlmann, E. L. (Eric Luis); Aert, R.C.M. (Robbie C.M.) van; Clemente, E. G. (Elena Giulia); Johannesson, M. (Magnus)
    A preregistered meta-analysis, including 244 effect sizes from 85 field audits and 361,645 individual job applications, tested for gender bias in hiring practices in female-stereotypical, gender-balanced, and male-stereotypical jobs from 1976 to 2020. A “red team” of independent experts was recruited to increase the rigor and robustness of our meta-analytic approach. A forecasting survey further examined whether laypeople (n = 499 nationally representative adults) and scientists (n = 312) could predict the results. Forecasters correctly anticipated reductions in discrimination against female candidates over time. However, both scientists and laypeople overestimated the continuation of bias against female candidates. Instead, selection bias in favor of male over female candidates was eliminated and, if anything, slightly reversed in sign starting in 2009 for gender-balanced and male-stereotypical jobs in our sample. Forecasters further failed to anticipate that discrimination against male candidates for stereotypically female jobs would remain stable across the decades.
  • Creative destruction in science
    (Elsevier, 2020-09-29) Hardy-III, J.H. (Jay H.); Ebersole, C. R. (Charles R.); Viganola, D. (Domenico); Pfeiffer, T. (Thomas); Dreber, A. (Anna); Hiring Decisions Forecasting Collaboration; Leavitt, K. (Keith); Uhlmann, E. L. (Eric Luis); Tierney, W. (Warren); Clemente, E. G. (Elena Giulia); Gordon, M. (Michael); Johannesson, M. (Magnus)
    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions.
  • Examining the generalizability of research findings from archival data
    (National Academy of Sciences, 2022-07-19) Wang, Y. (Yong); Delios, A. (Andrew); Viganola, D. (Domenico); Pfeiffer, T. (Thomas); Dreber, A. (Anna); Chen, Z. (Zhaowei); Tan, H. (Hongbin); Uhlmann, E. L. (Eric Luis); Wu, T. (Tao); Clemente, E. G. (Elena Giulia); Gordon, M. (Michael); Generalizability Tests Forecasting Collaboration; Johannesson, M. (Magnus)
    This initiative systematically examined the extent to which a large set of archival research findings generalizes across contexts. We repeated the key analyses for 29 original strategic management effects in the same context (direct reproduction) as well as in 52 novel time periods and geographies; 45% of the direct reproductions returned results matching the original reports, as did 55% of the tests in different spans of years and 40% of the tests in novel geographies. Some original findings were associated with multiple new tests. Reproducibility was the best predictor of generalizability: of the findings that proved directly reproducible, 84% emerged in other available time periods and 57% emerged in other geographies. Overall, only limited empirical evidence emerged for context sensitivity. In a forecasting survey, independent scientists were able to anticipate which effects would find support in tests in new samples.