
Asymptotic Distributions of Covering and Separation Measures on the Hypersphere

We consider measures of covering and separation that are expressed through maxima and minima of distances between points on a hypersphere. We investigate the behavior of these measures when applied to a sample of independent and uniformly distributed points. In particular, we derive their asymptotic distributions when the number of points diverges. These results can be useful as a benchmark against which deterministic point sets can be evaluated. Whenever possible, we supplement the rigorous derivation of these limiting distributions with heuristic reasoning based on extreme value theory. As a by-product, we provide a proof of a conjecture on the hole radius associated with a facet of the convex hull of points distributed on the hypersphere.
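
As an informal illustration of the quantities involved (not code from the paper), the Python sketch below draws independent uniform points on the sphere and computes a separation measure (the smallest pairwise geodesic distance) together with a Monte Carlo proxy for the covering radius; the function names and the test-point approximation are purely illustrative.

```python
import numpy as np

def uniform_sphere(n, d=3, rng=None):
    """Draw n independent points uniformly distributed on the unit sphere S^{d-1}."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def separation(points):
    """Separation measure: the smallest geodesic distance between two distinct points."""
    gram = np.clip(points @ points.T, -1.0, 1.0)
    np.fill_diagonal(gram, -1.0)              # exclude self-distances
    return np.arccos(gram.max())              # largest cosine -> smallest angle

def covering_radius(points, n_test=20_000, rng=None):
    """Monte Carlo proxy for the covering radius: the largest geodesic distance
    from a uniformly drawn test point to its nearest sample point."""
    test = uniform_sphere(n_test, points.shape[1], rng)
    nearest_cos = np.clip(test @ points.T, -1.0, 1.0).max(axis=1)
    return np.arccos(nearest_cos.min())

pts = uniform_sphere(500, rng=0)
print(separation(pts), covering_radius(pts, rng=1))
```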

Examining the context sensitivity of research findings from archival data

This initiative systematically examined the extent to which a large set of archival research findings generalizes across contexts. We repeated the key analyses for 29 original strategic management effects in the same context (direct reproduction) as well as in 52 novel time periods and geographies; 45% of the direct reproductions returned results matching the original reports, as did 55% of tests in different spans of years and 40% of tests in novel geographies. Some original findings were associated with multiple new tests. Reproducibility was the best predictor of generalizability: among the findings that proved directly reproducible, 84% emerged in other available time periods and 57% emerged in other geographies. Overall, only limited empirical evidence emerged for context sensitivity. In a forecasting survey, independent scientists were able to anticipate which effects would find support in tests in new samples.

Computing the Asymptotic Distribution of Second-order $U$- and $V$-statistics

Under general conditions, the asymptotic distribution of degenerate second-order $U$- and $V$-statistics is an (infinite) weighted sum of $\chi^2$ random variables whose weights are the eigenvalues of an integral operator associated with the kernel of the statistic. The behavior of the statistic in terms of power can also be characterized through the eigenvalues and the eigenfunctions of the same integral operator. No general algorithm seems to be available to compute these quantities starting from the kernel of the statistic. An algorithm is proposed to approximate (as precisely as needed) the asymptotic distribution and the power of the test statistics, and to build several measures of performance for tests based on $U$- and $V$-statistics. The algorithm uses the Wielandt–Nyström method of approximation of an integral operator based on quadrature, and can be used with several methods of numerical integration. An extensive numerical study shows that the Wielandt–Nyström method based on Clenshaw–Curtis quadrature performs very well, both for the eigenvalues and for the eigenfunctions.
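
For intuition, here is a rough Python sketch of the Nyström step; it uses Gauss–Legendre quadrature as a stand-in for the Clenshaw–Curtis rule studied in the paper, and the Brownian bridge covariance on $[0,1]^2$ as an illustrative degenerate kernel whose eigenvalues $1/(k^2\pi^2)$ are known in closed form.

```python
import numpy as np

def h(x, y):
    """Illustrative degenerate kernel: the Brownian bridge covariance on [0,1]^2,
    chosen only because its eigenvalues 1/(k^2 pi^2) are known in closed form."""
    return np.minimum.outer(x, y) - np.outer(x, y)

def nystrom_eigenvalues(kernel, m=100):
    """Nystrom approximation of the eigenvalues of the integral operator with the
    given kernel, using Gauss-Legendre quadrature on [0, 1] (a stand-in for the
    Clenshaw-Curtis rule discussed in the paper)."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    nodes, weights = 0.5 * (nodes + 1.0), 0.5 * weights   # map [-1, 1] to [0, 1]
    s = np.sqrt(weights)
    a = s[:, None] * kernel(nodes, nodes) * s[None, :]    # symmetrized Nystrom matrix
    return np.linalg.eigvalsh(a)[::-1]                    # eigenvalues, largest first

def limit_quantile(eigs, prob=0.95, size=50_000, u_statistic=True, seed=0):
    """Simulate the weighted chi-square limit sum_j lambda_j (Z_j^2 - c), with c = 1
    for degenerate U-statistics and c = 0 for V-statistics, and return a quantile."""
    rng = np.random.default_rng(seed)
    z2 = rng.standard_normal((size, len(eigs))) ** 2
    draws = (z2 - (1.0 if u_statistic else 0.0)) @ eigs
    return np.quantile(draws, prob)

eigs = nystrom_eigenvalues(h)
print(eigs[:4] * np.pi**2 * np.arange(1, 5) ** 2)   # should all be close to 1
print(limit_quantile(eigs, u_statistic=False))      # approximate 95% critical value
```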

Asymptotic Properties of the Plug-in Estimator of the Discrete Entropy under Dependence

We consider the estimation of the entropy of a discretely supported time series through a plug-in estimator. We provide a correction of the bias and study the asymptotic properties of the estimator. We show that the widely used correction proposed by Roulston (1999) is incorrect, as it does not remove the $O\left(N^{-1}\right)$ part of the bias, while ours does. We provide the asymptotic distribution and show that it differs according to whether the values taken by the marginal distribution of the process are equiprobable (a situation that we call *degeneracy*) or not. We introduce estimators of the bias, the variance and the distribution under degeneracy, and we study the estimation error. Finally, we propose a goodness-of-fit test based on entropy and give two motivations for it. The theoretical results are supported by specific numerical examples.
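
As a point of reference only, the following Python sketch shows a plug-in estimator with the classical Miller–Madow $(m-1)/(2N)$ correction; this is not the correction derived in the paper, and it ignores the serial dependence of the time series.

```python
import numpy as np
from collections import Counter

def plug_in_entropy(x, correction=True):
    """Plug-in (maximum likelihood) estimator of the Shannon entropy of a discretely
    supported sample.  The optional correction is the classical Miller-Madow
    (m - 1)/(2N) term, shown only as an illustration of an O(1/N) bias correction;
    it is not the correction derived in the paper and it ignores dependence."""
    n = len(x)
    counts = np.array(list(Counter(x).values()), dtype=float)
    p = counts / n
    h = -np.sum(p * np.log(p))
    if correction:
        h += (len(counts) - 1) / (2 * n)      # m = number of observed symbols
    return h

rng = np.random.default_rng(1)
sample = rng.integers(0, 4, size=1000)        # i.i.d. uniform over 4 symbols, H = log 4
print(plug_in_entropy(sample), np.log(4))
```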

On the quest for defining organisational plasticity: a community modelling experiment

Purpose – This viewpoint article is concerned with an attempt to advance organisational plasticity (OP) modelling concepts by using a novel community modelling framework (PhiloLab) from the social simulation community to drive the process of idea generation. In addition, the authors want to feed back their experience with PhiloLab, as they believe that this way of generating ideas could also be of interest to the wider evidence-based human resource management (EBHRM) community.

Design/methodology/approach – The authors used workshop sessions to brainstorm new conceptual ideas in a structured and efficient way with a multidisciplinary group of 14 (mainly academic) participants using PhiloLab. This is a tool from the social simulation community, which stimulates and formally supports discussions about philosophical questions concerning future societal models by means of developing conceptual agent-based simulation models. This was followed by an analysis of the qualitative data gathered during the PhiloLab sessions, feeding into the definition of a set of primary axioms of a plastic organisation.

Findings – The PhiloLab experiment helped with defining a set of primary axioms of a plastic organisation, which are presented in this viewpoint article. The results indicated that the problem is rather complex, but they also showed good potential for an agent-based simulation model to tackle some of the key issues related to OP. The experiment also showed that PhiloLab was very useful in terms of knowledge and idea gathering.

Originality/value – Through information gathering and open debates on how to create an agent-based simulation model of a plastic organisation, the authors could identify some of the characteristics of OP and start structuring some of the parameters for a computational simulation. With the outcome of the PhiloLab experiment, the authors are paving the way towards future exploratory computational simulation studies of OP.

Creative destruction in science

Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions.

*Significance statement:* It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true without replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.

*Scientific transparency statement:* The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

Model Calibration and Validation via Confidence Sets

The issues of calibrating and validating a theoretical model are considered, when it is required to select, among a finite number of alternatives, the parameters that best approximate the data. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, via a multivariate rate function, is suggested. A simple numerical procedure is outlined for the computation of the latter, and it is shown that the procedure yields results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant for bounded rationality and organizational research.
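
A simplified sketch of how a Model Confidence Set can be computed from per-observation losses is given below; it follows the spirit of Hansen, Lunde and Nason (2011) with a plain i.i.d. bootstrap, and the statistic, elimination rule and all names are illustrative rather than those used in the paper.

```python
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=2000, seed=0):
    """Simplified Model Confidence Set: iteratively drop the candidate with the worst
    average loss until the hypothesis of equal expected loss among the survivors can
    no longer be rejected at level alpha.  `losses` has one row per observation and
    one column per candidate model.  Illustrative i.i.d. bootstrap only."""
    rng = np.random.default_rng(seed)
    n = losses.shape[0]
    alive = list(range(losses.shape[1]))
    while len(alive) > 1:
        sub = losses[:, alive]
        d = sub - sub.mean(axis=1, keepdims=True)            # loss relative to the set average
        t_obs = np.abs(d.mean(axis=0)) / (d.std(axis=0, ddof=1) / np.sqrt(n))
        centred = d - d.mean(axis=0)                         # recentre so the null holds
        boot = np.empty(n_boot)
        for b in range(n_boot):
            db = centred[rng.integers(0, n, n)]
            boot[b] = np.abs(db.mean(axis=0) / (db.std(axis=0, ddof=1) / np.sqrt(n))).max()
        if (boot >= t_obs.max()).mean() >= alpha:            # cannot reject equal losses
            break
        alive.pop(int(np.argmax(sub.mean(axis=0))))          # eliminate the worst model
    return alive

rng = np.random.default_rng(1)
losses = rng.normal(loc=[0.0, 0.0, 0.3], scale=1.0, size=(500, 3))  # third model clearly worse
print(model_confidence_set(losses))   # typically keeps the first two candidates
```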

Generic Consistency for Approximate Stochastic Programming and Statistical Problems

In stochastic programming, statistics, or econometrics, the aim is in general the optimization of a criterion function that depends on a decision variable $\theta$ and reads as an expectation with respect to a probability $\mathbb{P}$. When this function cannot be computed in closed form, it is customary to approximate it through an empirical mean function based on a random sample. On the other hand, several other methods have been proposed, such as quasi-Monte Carlo integration and numerical integration rules. In this paper, we propose a general approach for approximating such a function, in the sense of epigraphical convergence, using a sequence of functions of simpler type which can be expressed as expectations with respect to probability measures $\mathbb{P}_n$ that, in some sense, approximate $\mathbb{P}$. The main difference with respect to the existing results lies in the fact that our main theorem does not impose conditions directly on the approximating probabilities but only on some integrals with respect to them. In addition, the $\mathbb{P}_n$'s can be transition probabilities, i.e., they are allowed to depend on a further parameter, $\xi$, whose value results from deterministic or stochastic operations, depending on the underlying model. This framework allows us to deal with a large variety of approximation procedures, such as Monte Carlo, quasi-Monte Carlo, numerical integration, quantization, several variations on Monte Carlo sampling, and some density approximation algorithms. As by-products, we discuss convergence results for stochastic programming and statistical inference based on dependent data, for programming with estimated parameters, and for robust optimization; we also provide a general result about the consistency of the bootstrap for $M$-estimators.
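
To make the replacement of $\mathbb{P}$ by an approximating $\mathbb{P}_n$ concrete, the toy Python sketch below minimizes a quadratic criterion whose expectation is approximated once by plain Monte Carlo and once by quasi-Monte Carlo (scrambled Sobol points mapped through the inverse CDF); the criterion and the sample sizes are purely illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc
from scipy.optimize import minimize_scalar

# Toy criterion: minimize f(theta) = E[(theta - X)^2] with X ~ N(2, 1), whose exact
# minimizer is theta* = 2.  The expectation is replaced by an average over a sample
# from an approximating measure P_n: plain Monte Carlo or quasi-Monte Carlo.

def approximate_minimizer(x_sample):
    """Minimize the sample-average approximation of the criterion."""
    return minimize_scalar(lambda theta: np.mean((theta - x_sample) ** 2)).x

rng = np.random.default_rng(0)
n = 1024                                                     # a power of two for Sobol

x_mc = rng.normal(2.0, 1.0, n)                               # Monte Carlo sample
u = qmc.Sobol(d=1, scramble=True, seed=0).random(n)          # Sobol points in (0, 1)
x_qmc = norm.ppf(u.ravel(), loc=2.0, scale=1.0)              # quasi-Monte Carlo sample

print(approximate_minimizer(x_mc), approximate_minimizer(x_qmc))   # both close to 2.0
```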

Controlling for False Negatives in Agent-Based Models: A Review of Power Analysis in Organizational Research

This article is concerned with the study of statistical power in agent-based modeling (ABM). After an overview of classic statistical theory on how to interpret Type-II errors (whose occurrence is also referred to as a false negative) and power, the manuscript presents a study of ABM simulation articles published in management journals and other outlets likely to publish management and organizational research. Findings show that most studies are underpowered, while some are overpowered. After discussing the risks of under- and overpowering, we present two formulas to approximate the number of simulation runs needed to reach an appropriate level of power. The study concludes by stressing the importance for organizational behavior scholars of running their models with enough simulation runs to reach a power of 0.95 or higher at the 0.01 significance level.
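
As an illustration of this kind of calculation, the short Python sketch below uses the standard normal-approximation power formula for a two-sample comparison of simulated conditions; it is not necessarily either of the two formulas proposed in the article.

```python
import math
from scipy.stats import norm

def runs_per_condition(effect_size, alpha=0.01, power=0.95, two_sided=True):
    """Number of simulation runs per experimental condition needed to detect a
    standardized effect (Cohen's d) in a two-sample comparison, using the usual
    normal-approximation power formula (an illustration of the kind of calculation
    discussed in the article, not necessarily one of its two formulas)."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at the levels recommended in the article:
print(runs_per_condition(0.5, alpha=0.01, power=0.95))   # about 143 runs per condition
```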

Experienced Discrimination in Home Mortgage Lending: A Case of Hospital Employees in Northern Italy

This article proposes a framework for the analysis of experienced discrimination in home mortgage lending. It addresses the problem of home mortgage lending discrimination in one of the richest areas of northern Italy. Employees of a local hospital were interviewed to study their perception (or experience) of discriminatory behavior related to home financing. The analysis follows two steps. The first evaluates self-selection (the probability that individuals apply for a mortgage), and the second focuses on the likelihood that applications are accepted by the bank. Findings show that discrimination is likely to appear when the applicant's nationality is considered. Beyond its findings, the study (a) provides an original econometric model based on a two-step procedure for testing perceived discrimination and (b) suggests a method and approach that may serve as a point of reference for those wishing to study perceived discrimination.
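
A schematic version of such a two-step analysis, estimated on purely synthetic data with hypothetical variable names (the article's actual specification and covariates may differ), could look as follows in Python.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data with hypothetical variable names, for illustration only.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "foreign": rng.integers(0, 2, n),       # 1 = non-Italian nationality
    "income":  rng.normal(30.0, 8.0, n),    # thousands of euros per year
    "tenure":  rng.integers(0, 30, n),      # years of employment at the hospital
})

# Step 1: self-selection, i.e. the probability of applying for a mortgage.
apply_latent = -0.3 + 0.02 * df["income"] - 0.4 * df["foreign"] + rng.standard_normal(n)
df["applied"] = (apply_latent > 0).astype(int)
step1 = sm.Probit(df["applied"],
                  sm.add_constant(df[["income", "tenure", "foreign"]])).fit(disp=0)

# Step 2: acceptance by the bank, estimated on the subsample of applicants only.
applicants = df[df["applied"] == 1].copy()
accept_latent = (0.2 + 0.03 * applicants["income"] - 0.6 * applicants["foreign"]
                 + rng.standard_normal(len(applicants)))
applicants["accepted"] = (accept_latent > 0).astype(int)
step2 = sm.Probit(applicants["accepted"],
                  sm.add_constant(applicants[["income", "foreign"]])).fit(disp=0)

print(step1.params, step2.params, sep="\n")
```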