
On the quest for defining organisational plasticity: a community modelling experiment

Purpose – This viewpoint article is concerned with an attempt to advance organisational plasticity (OP) modelling concepts by using a novel community modelling framework (PhiloLab) from the social simulation community to drive the process of idea generation. In addition, the authors want to feed back their experience with PhiloLab, as they believe that this way of generating ideas could also be of interest to the wider evidence-based human resource management (EBHRM) community.

Design/methodology/approach – The authors used a series of workshop sessions to brainstorm new conceptual ideas in a structured and efficient way with a multidisciplinary group of 14 (mainly academic) participants using PhiloLab. This tool from the social simulation community stimulates and formally supports discussions about philosophical questions concerning future societal models by means of developing conceptual agent-based simulation models. The workshops were followed by an analysis of the qualitative data gathered during the PhiloLab sessions, which fed into the definition of a set of primary axioms of a plastic organisation.

Findings – The PhiloLab experiment helped define a set of primary axioms of a plastic organisation, which are presented in this viewpoint article. The results indicated that the problem is complex, but they also showed good potential for an agent-based simulation model to tackle some of the key issues related to OP. The experiment further showed that PhiloLab was very useful for gathering knowledge and ideas.

Originality/value – Through information gathering and open debates on how to create an agent-based simulation model of a plastic organisation, the authors could identify some of the characteristics of OP and start structuring some of the parameters for a computational simulation. With the outcome of the PhiloLab experiment, the authors are paving the way towards future exploratory computational simulation studies of OP.
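
As a purely illustrative sketch of the kind of exploratory agent-based simulation the experiment points towards, the following Python snippet sets up agents with a simple adaptation rule. All attribute names and the "plasticity" rule below are hypothetical assumptions made for illustration only; they are not the axioms derived in the PhiloLab sessions.

```python
# Minimal, hypothetical sketch of an exploratory agent-based model of
# organisational plasticity. Attribute names and the adaptation rule are
# illustrative assumptions, not the axioms derived in the article.
import random

class Employee:
    def __init__(self, competence, openness):
        self.competence = competence   # current skill level in [0, 1]
        self.openness = openness       # willingness to take on new tasks

    def adapt(self, task_demand, rate=0.1):
        # Agents move their competence towards the demanded level,
        # proportionally to their openness (a crude "plasticity" rule).
        self.competence += rate * self.openness * (task_demand - self.competence)

def simulate(n_agents=50, steps=100, seed=42):
    random.seed(seed)
    agents = [Employee(random.random(), random.random()) for _ in range(n_agents)]
    for _ in range(steps):
        demand = random.random()       # an exogenous, shifting task environment
        for agent in agents:
            agent.adapt(demand)
    return sum(agent.competence for agent in agents) / n_agents

if __name__ == "__main__":
    print("Mean competence after simulation:", round(simulate(), 3))
```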

Creative destruction in science

Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions.

*Significance statement:* It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: they reduce confidence that the original theoretical prediction is true, but do not replace it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.

*Scientific transparency statement:* The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

Model Calibration and Validation via Confidence Sets

The issues of calibrating and validating a theoretical model are considered when the parameters that best approximate the data have to be selected among a finite number of alternatives. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, via a multivariate rate function, is suggested. A simple numerical procedure for computing the latter is outlined, and it is shown that the procedure yields results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, which is relevant for bounded rationality and organizational research.
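
As a rough illustration of how a model confidence set can be computed in practice, the sketch below applies a simplified, iid-bootstrap elimination procedure (in the spirit of the Hansen-Lunde-Nason model confidence set) to a user-supplied loss matrix. It is a toy version under simplifying assumptions, not the exact calibration and validation procedure of the article.

```python
# Simplified sketch of a model-confidence-set style elimination procedure on a
# loss matrix. Illustrative iid-bootstrap variant; not the article's procedure.
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=1000, seed=0):
    """losses: (T, m) array of per-observation losses for m candidate models."""
    rng = np.random.default_rng(seed)
    T, m = losses.shape
    surviving = list(range(m))

    while len(surviving) > 1:
        L = losses[:, surviving]                    # (T, k) losses of surviving models
        rel = L - L.mean(axis=1, keepdims=True)     # loss relative to the surviving-set average
        d_bar = rel.mean(axis=0)                    # mean relative loss of each model

        # iid bootstrap of the mean relative losses
        idx = rng.integers(0, T, size=(n_boot, T))
        boot_means = rel[idx].mean(axis=1)          # (n_boot, k)
        se = boot_means.std(axis=0, ddof=1) + 1e-12
        t_stat = d_bar / se
        t_max = t_stat.max()
        boot_tmax = ((boot_means - d_bar) / se).max(axis=1)
        p_value = (boot_tmax >= t_max).mean()

        if p_value >= alpha:                        # equal predictive ability not rejected
            break
        surviving.remove(surviving[int(np.argmax(t_stat))])   # drop the worst model

    return surviving

# usage on synthetic losses: model 2 is clearly worse than models 0 and 1
losses = np.random.default_rng(1).normal(loc=[0.0, 0.05, 0.5], size=(500, 3)) ** 2
print("Models retained in the confidence set:", model_confidence_set(losses))
```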

Generic Consistency for Approximate Stochastic Programming and Statistical Problems

In stochastic programming, statistics, and econometrics, the aim is generally the optimization of a criterion function that depends on a decision variable $\theta$ and can be written as an expectation with respect to a probability $\mathbb{P}$. When this function cannot be computed in closed form, it is customary to approximate it through an empirical mean function based on a random sample. Several other methods have also been proposed, such as quasi-Monte Carlo integration and numerical integration rules. In this paper, we propose a general approach for approximating such a function, in the sense of epigraphical convergence, using a sequence of functions of simpler type that can be expressed as expectations with respect to probability measures $\mathbb{P}_n$ that, in some sense, approximate $\mathbb{P}$. The main difference from existing results is that our main theorem does not impose conditions directly on the approximating probabilities but only on some integrals with respect to them. In addition, the $\mathbb{P}_n$'s can be transition probabilities, i.e., they are allowed to depend on a further parameter, $\xi$, whose value results from deterministic or stochastic operations, depending on the underlying model. This framework allows us to deal with a large variety of approximation procedures, such as Monte Carlo, quasi-Monte Carlo, numerical integration, quantization, several variations on Monte Carlo sampling, and some density approximation algorithms. As by-products, we discuss convergence results for stochastic programming and statistical inference based on dependent data, for programming with estimated parameters, and for robust optimization; we also provide a general result about the consistency of the bootstrap for $M$-estimators.
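
The sketch below gives a minimal example of the kind of approximation the paper studies: a criterion function of the form $\theta \mapsto \mathbb{E}_{\mathbb{P}}[f(\theta, X)]$ is replaced by empirical means built from Monte Carlo and quasi-Monte Carlo (Sobol) draws and then minimised. The target function and distribution are arbitrary illustrative choices; the epigraphical-convergence theory itself is not reproduced here.

```python
# Sketch: approximating theta -> E_P[f(theta, X)] by empirical means built from
# Monte Carlo and quasi-Monte Carlo (Sobol) draws, then minimising the
# approximation (sample-average approximation). The target function and the
# distribution P = N(1, 1) are illustrative choices, not taken from the paper.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, qmc

def f(theta, x):
    return (theta - x) ** 2            # true minimiser: theta* = E[X] = 1

def approximate_minimiser(x_sample):
    criterion = lambda theta: np.mean(f(theta, x_sample))
    return minimize_scalar(criterion).x

rng = np.random.default_rng(0)
n = 2 ** 10

# Monte Carlo draws from P
x_mc = rng.normal(loc=1.0, scale=1.0, size=n)

# quasi-Monte Carlo: Sobol points mapped through the inverse CDF of P
u = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()
x_qmc = norm.ppf(u, loc=1.0, scale=1.0)

print("MC  estimate of theta*:", approximate_minimiser(x_mc))
print("QMC estimate of theta*:", approximate_minimiser(x_qmc))
```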

Controlling for False Negatives in Agent-Based Models: A Review of Power Analysis in Organizational Research

This article is concerned with the study of statistical power in agent-based modeling (ABM). After an overview of classic statistical theory on how to interpret Type-II error (whose occurrence is also referred to as a false negative) and power, the manuscript presents a study of ABM simulation articles published in management journals and other outlets likely to publish management and organizational research. Findings show that most studies are underpowered, while some are overpowered. After discussing the risks of under- and overpower, we present two formulas to approximate the number of simulation runs needed to reach an appropriate level of power. The study concludes by stressing the importance for organizational behavior scholars of running their models with enough simulation runs to reach a power of 0.95 or higher at the 0.01 significance level.
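
For orientation, the snippet below implements a textbook normal-approximation power calculation for the number of runs per condition needed to detect a standardized effect size when comparing two model configurations. It is a generic calculation under standard assumptions, not necessarily one of the two formulas proposed in the article.

```python
# Sketch: a standard normal-approximation formula for the number of simulation
# runs per condition needed to detect a standardised effect size d when
# comparing two model configurations with a two-sided test.
from math import ceil
from scipy.stats import norm

def runs_per_condition(effect_size, alpha=0.01, power=0.95):
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value of the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to the target power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

for d in (0.2, 0.5, 0.8):               # small, medium, large effects (Cohen's benchmarks)
    print(f"d = {d}: {runs_per_condition(d)} runs per condition")
```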

Experienced Discrimination in Home Mortgage Lending: A Case of Hospital Employees in Northern Italy

This article proposes a framework for the analysis of experienced discrimination in home mortgages. It addresses the problem of home mortgage lending discrimination in one of the richest areas of northern Italy. Employees of a local hospital were interviewed to study their perception (or experience) of discriminatory behavior related to home financing. The analysis follows two steps. The first evaluates self-selection (the probability that individuals apply), and the second focuses on the likelihood that applications are accepted by the bank. Findings show that discrimination is likely to appear when the applicant's nationality is considered. In addition to its findings, the study (a) provides an original econometric model based on a two-step procedure to test perceived discrimination and (b) suggests a method and approach that may constitute a point of reference for those wishing to study perceived discrimination.
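
A stylised sketch of such a two-step analysis on synthetic data is given below: a first probit models the decision to apply (self-selection) and a second probit models acceptance among applicants. Variable names, the data-generating process, and the simple two-probit setup are illustrative assumptions and do not reproduce the article's specification.

```python
# Stylised two-step analysis on synthetic data: step 1 models the decision to
# apply for a mortgage (self-selection), step 2 models acceptance among
# applicants. Everything below is an illustrative assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
foreign = rng.integers(0, 2, n)              # 1 = non-Italian national (hypothetical)
income = rng.normal(30, 10, n)               # income in thousands of euros (hypothetical)

# synthetic data-generating process
applies = (0.5 + 0.02 * income - 0.3 * foreign + rng.normal(0, 1, n)) > 0
accepted = (0.2 + 0.03 * income - 0.6 * foreign + rng.normal(0, 1, n)) > 0

# Step 1: probability of applying
X1 = sm.add_constant(np.column_stack([income, foreign]))
step1 = sm.Probit(applies.astype(int), X1).fit(disp=0)

# Step 2: probability of acceptance, conditional on having applied
mask = applies
X2 = sm.add_constant(np.column_stack([income[mask], foreign[mask]]))
step2 = sm.Probit(accepted[mask].astype(int), X2).fit(disp=0)

print("Step 1 (application) coefficients:", step1.params.round(3))
print("Step 2 (acceptance)  coefficients:", step2.params.round(3))
```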

Statistical Properties of $b$-adic Diaphonies

The aim of this paper is to derive the asymptotic statistical properties of a class of discrepancies on the unit hypercube called $b$-adic diaphonies. They have been introduced to evaluate the equidistribution of quasi-Monte Carlo sequences on the unit hypercube. We consider their properties when applied to a sample of independent and uniformly distributed random points. We show that the limiting distribution of the statistic is an infinite weighted sum of chi-squared random variables, whose weights can be explicitly characterized and computed. We also describe the rate of convergence of the finite-sample distribution to the asymptotic one and show that it is much faster than the rate implied by the classical Berry-Esséen bound. We then consider in detail the approximation of the asymptotic distribution through two truncations of the original infinite weighted sum, and we provide explicit and tight bounds on the truncation error. Numerical results illustrate the findings of the paper, and an empirical example shows the relevance of the results in applications.
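
The snippet below illustrates, on hypothetical weights, how truncations of an infinite weighted sum of independent chi-squared(1) variables approximate the full distribution. The weights $w_k = 1/k^2$ are an illustrative assumption and are not the weights characterised in the paper.

```python
# Sketch: approximating an infinite weighted sum of independent chi-squared(1)
# variables by truncations of increasing length. Weights are hypothetical.
import numpy as np

def truncated_sum_sample(n_terms, n_draws, rng):
    # one draw = sum_{k=1}^{n_terms} w_k * chi2_1, with hypothetical weights w_k = 1/k**2
    weights = 1.0 / np.arange(1, n_terms + 1) ** 2
    return rng.chisquare(df=1, size=(n_draws, n_terms)) @ weights

rng = np.random.default_rng(0)
n_draws = 10_000
grid = np.linspace(0.0, 8.0, 200)

# a long truncation serves as a stand-in for the (infinite-sum) limit distribution
reference = truncated_sum_sample(500, n_draws, rng)
ref_cdf = (reference[:, None] <= grid).mean(axis=0)

for k in (5, 20, 100):
    sample = truncated_sum_sample(k, n_draws, rng)
    cdf = (sample[:, None] <= grid).mean(axis=0)
    print(f"{k:>3} terms: max CDF gap to the long truncation = {np.abs(cdf - ref_cdf).max():.4f}")
```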

What Are We Estimating When We Fit Stevens' Power Law?

Estimates of Stevens' power law model are often based on averaging, over individuals, the results of experiments conducted at the individual level. In this paper we suppose that each individual generates responses to stimuli on the basis of a model proposed by Luce and Narens, sometimes called the separable representation model, featuring two distinct perturbations, the psychophysical function and the subjective weighting function, that may differ across individuals. Exploiting the form of the estimator of the exponent of Stevens' power law, we obtain an expression for this parameter as a function of the two original functions. The results presented in the paper help clarify several well-known paradoxes arising with Stevens' power laws, including the range effect, i.e. the fact that the estimated exponent seems to depend on the range of the stimuli; the location effect, i.e. the fact that it depends on the position of the standard within the range; and the averaging effect, i.e. the fact that power laws seem to fit data aggregated over individuals better. The theoretical results are illustrated using data from papers of R. Duncan Luce.
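
A stylised special case may help fix ideas. Assuming, purely for illustration, power-function forms for both perturbations, the separable representation reduces to a Stevens-type power law whose exponent mixes the two functions:

```latex
% Stylised special case: power-function forms for \psi and W are an
% illustrative assumption, not the general Luce-Narens representation.
% Suppose an individual's magnitude estimate p for a stimulus x judged against
% a standard t satisfies the separable representation W(p) = \psi(x)/\psi(t).
\[
  \psi(x) = x^{\alpha}, \quad W(p) = p^{\gamma}
  \;\Longrightarrow\;
  p^{\gamma} = \left(\tfrac{x}{t}\right)^{\alpha}
  \;\Longrightarrow\;
  p = \left(\tfrac{x}{t}\right)^{\alpha/\gamma},
\]
% so a log-log (Stevens-type) regression of p on x estimates the ratio
% \alpha/\gamma rather than the psychophysical exponent \alpha alone, and
% heterogeneity in \gamma across individuals contaminates the pooled estimate.
```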

A Non-Recursive Formula for the Higher Derivatives of the Hurwitz Zeta Function

In this paper, we provide a non-recursive asymptotic formula for the higher derivatives of the Hurwitz zeta function with respect to its first argument. As a by-product, we correct some formulas that have appeared in the literature.
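
For readers who want to inspect these quantities numerically, the sketch below evaluates higher derivatives of $\zeta(s, a)$ with respect to $s$ using mpmath and cross-checks them with a numerical derivative. It only illustrates the object of study; it does not implement the paper's asymptotic formula.

```python
# Sketch: higher derivatives of the Hurwitz zeta function zeta(s, a) with
# respect to s, via mpmath's built-in derivative and a numerical cross-check.
import mpmath as mp

mp.mp.dps = 30                                    # working precision in decimal digits
s, a, k = mp.mpf(3), mp.mpf("1.5"), 2             # evaluate the 2nd derivative at s = 3, a = 1.5

direct = mp.zeta(s, a, derivative=k)              # k-th derivative of zeta(s, a) w.r.t. s
numeric = mp.diff(lambda t: mp.zeta(t, a), s, k)  # numerical k-th derivative as a cross-check

print("mpmath derivative   :", direct)
print("numerical derivative:", numeric)
```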

A Tight Bound on the Distance Between a Noncentral Chi Square and a Normal Distribution

We provide a nonasymptotic bound on the distance between a noncentral chi-square distribution and a normal approximation. It improves on both the classical Berry-Esséen bound and previous bounds derived specifically for this situation. First, the bound is nonasymptotic and provides an upper limit for the true distance. Second, the bound has the correct rate of decrease and even the correct leading constant when either the number of degrees of freedom or the noncentrality parameter (or both) diverge to infinity. The bound is applied to some probabilities arising in energy detection and Rician fading.
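
The snippet below gives an empirical illustration of the quantity being bounded: the Kolmogorov distance between a noncentral chi-square distribution and its moment-matched normal approximation, evaluated on a grid for a few parameter values. It shows the distance shrinking as the degrees of freedom and noncentrality grow; it does not reproduce the bound itself.

```python
# Sketch: Kolmogorov distance between a noncentral chi-square distribution and
# its moment-matched normal approximation, for a few illustrative parameters.
import numpy as np
from scipy.stats import ncx2, norm

def kolmogorov_distance(df, nc, n_grid=20_000):
    # normal approximation matched to the mean and variance of the noncentral chi-square
    mean = df + nc
    std = np.sqrt(2 * (df + 2 * nc))
    x = np.linspace(mean - 8 * std, mean + 8 * std, n_grid)
    # ncx2.cdf returns 0 below the support, so negative grid points are handled correctly
    return np.max(np.abs(ncx2.cdf(x, df=df, nc=nc) - norm.cdf(x, loc=mean, scale=std)))

for df, nc in [(5, 1), (50, 10), (500, 100)]:
    print(f"df={df:>4}, nc={nc:>4}: Kolmogorov distance = {kolmogorov_distance(df, nc):.4f}")
```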