Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions.

*Significance statement:* It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true without replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.

*Scientific transparency statement:* The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.
The issues of calibrating and validating a theoretical model are considered when the parameters that best approximate the data must be selected from a finite number of alternatives. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, via a multivariate rate function, is suggested. A simple numerical procedure for computing the latter is outlined, and it is shown to yield results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant for bounded rationality and organizational research.
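To make the model-selection step concrete, the sketch below implements a heavily simplified Model Confidence Set-style elimination loop over a finite set of candidate models: starting from per-observation losses, it repeatedly tests whether the surviving models perform equally well and, if not, drops the worst performer. The i.i.d. bootstrap, the max-statistic test, and the synthetic loss matrix in the usage example are illustrative assumptions, not the procedure proposed in the article.

```python
# Simplified sketch of a Model Confidence Set (MCS)-style elimination procedure.
# The loss matrix, the i.i.d. bootstrap, and the max-statistic elimination rule
# are simplifying assumptions for illustration only.
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=2000, seed=0):
    """losses: (T, m) array of per-observation losses for m candidate models.
    Returns the indices of the models retained at confidence level 1 - alpha."""
    rng = np.random.default_rng(seed)
    T, m = losses.shape
    surviving = list(range(m))

    while len(surviving) > 1:
        L = losses[:, surviving]                  # (T, k) losses of surviving models
        d = L - L.mean(axis=1, keepdims=True)     # loss relative to the set average
        dbar = d.mean(axis=0)                     # (k,) average relative loss
        # i.i.d. bootstrap of the max statistic under the equal-performance null
        boot_max = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, T, size=T)
            db = d[idx]
            boot_max[b] = np.max(np.abs(db.mean(axis=0) - dbar)) * np.sqrt(T)
        t_obs = np.max(np.abs(dbar)) * np.sqrt(T)
        p_value = np.mean(boot_max >= t_obs)
        if p_value > alpha:                       # equal performance not rejected
            break
        worst = surviving[int(np.argmax(dbar))]   # drop the worst-performing model
        surviving = [j for j in surviving if j != worst]
    return surviving

# Usage with synthetic losses for three candidate models (assumed data):
rng = np.random.default_rng(1)
losses = rng.normal(loc=[1.0, 0.7, 1.0], scale=0.5, size=(500, 3)) ** 2
print("Retained models:", model_confidence_set(losses))
```

In this toy example the second model has a systematically lower loss, so the elimination loop will typically retain only that model; with more similar candidates, several models would remain in the set, which is what conveys the uncertainty around the preferred model.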
In Stochastic Programming, Statistics, or Econometrics, one often looks for the solution of optimization problems of the following form:
\begin{equation}
\inf_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}}\, g(\cdot,\theta)=\inf_{\theta\in\Theta} \int_{\mathbb{R}^{q}}g(y,\theta)\,\mathbb{P}(dy)
\end{equation}
where $\Theta$ is a Borel subset of $\mathbb{R}^{p}$ and $\mathbb{P}$ is a probability measure defined on $\mathbf{Y}=\mathbb{R}^{q}$ endowed with its Borel $\sigma$-field $\mathcal{B}(\mathbf{Y})$ (but more general spaces can be considered).
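As a minimal illustration of how such a problem is handled in practice, the sketch below replaces the population expectation with a sample average (the sample average approximation) and minimizes it numerically over $\theta$. The squared-error loss, the normal data-generating distribution, and the sample size are assumptions made purely for the example.

```python
# Minimal sketch of a sample average approximation (SAA) for the stochastic
# program inf_theta E_P[g(Y, theta)]. The loss g, the data-generating
# distribution, and the sample size are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=1_000)   # i.i.d. draws standing in for P

def g(y, theta):
    """Illustrative loss: squared error, so the population minimizer is E[Y]."""
    return (y - theta) ** 2

def empirical_risk(theta):
    """SAA objective: (1/n) sum_i g(y_i, theta) approximates E_P[g(Y, theta)]."""
    return g(y, theta).mean()

res = minimize(lambda t: empirical_risk(t[0]), x0=np.array([0.0]), method="BFGS")
print("SAA minimizer:", res.x[0])   # close to the sample mean of y
print("Sample mean:  ", y.mean())
```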
Measurement theory is “a field of study that examines the attribution of values to traits, characteristics, or constructs. Measurement theory focuses on assessing the true score of an attribute, such that an obtained value has a close correspondence with the actual quantity” (APA Dictionary of Psychology, 2nd ed.).