Under general conditions, the asymptotic distribution of degenerate second-order $U$- and $V$-statistics is an (infinite) weighted sum of $\chi^2$ random variables whose weights are the eigenvalues of an integral operator associated with the kernel of the statistic. The power behavior of the statistic can likewise be characterized through the eigenvalues and eigenfunctions of the same integral operator. No general algorithm seems to be available to compute these quantities starting from the kernel of the statistic. An algorithm is proposed to approximate, to any desired precision, the asymptotic distribution and the power of the test statistics, and to build several measures of performance for tests based on $U$- and $V$-statistics. The algorithm uses the Wielandt–Nyström method of approximating an integral operator through quadrature, and can be combined with several methods of numerical integration. An extensive numerical study shows that the Wielandt–Nyström method based on Clenshaw–Curtis quadrature performs very well for both the eigenvalues and the eigenfunctions.
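The quadrature step at the heart of such Nyström-type approximations admits a compact sketch. The snippet below is an illustrative assumption, not the paper's algorithm: it discretizes the eigenvalue problem $\int_a^b h(x,y)\,\varphi(y)\,dy = \lambda\,\varphi(x)$ at quadrature nodes, symmetrizes the resulting matrix, and reads off its eigenvalues. Gauss–Legendre quadrature is used here as a stand-in rule (the abstract notes the method works with several integration rules, with Clenshaw–Curtis performing best in their study). The test kernel $\min(x,y)$ is the Brownian-motion covariance on $[0,1]$, whose exact eigenvalues are $1/((k-\tfrac12)\pi)^2$.

```python
import numpy as np

def nystrom_eigs(kernel, a, b, n):
    """Approximate eigenvalues of (Tf)(x) = int_a^b kernel(x, y) f(y) dy
    by the Nystrom method with an n-point quadrature rule."""
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [a, b].
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    # Symmetrized Nystrom matrix A = D^{1/2} K D^{1/2}, D = diag(w):
    # its eigenvalues approximate those of the integral operator.
    K = kernel(x[:, None], x[None, :])
    s = np.sqrt(w)
    A = s[:, None] * K * s[None, :]
    return np.linalg.eigvalsh(A)[::-1]  # largest first

# Test case: Brownian-motion covariance min(x, y) on [0, 1];
# exact eigenvalues are 1 / ((k - 1/2) * pi)^2, k = 1, 2, ...
lam = nystrom_eigs(np.minimum, 0.0, 1.0, 64)
exact = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
print(np.max(np.abs(lam[:5] - exact)))
```

The eigenvectors of the same matrix, rescaled by the quadrature weights, give Nyström approximations of the eigenfunctions at the nodes.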
Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions. *Significance statement:* It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void—reducing confidence that the original theoretical prediction is true, but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
*Scientific transparency statement:* The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.
The issues of calibrating and validating a theoretical model are considered when one must select, among a finite number of alternatives, the parameters that best approximate the data. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, via a multivariate rate function, is suggested. A simple numerical procedure for computing the latter is outlined, and it is shown to yield results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant to bounded rationality and organizational research.