# Statistics

Seminar

## Nonparametric moment-based estimation of simulated models without optimization

International conference

## Creative destruction in science

Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions. *Significance statement:* It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void—reducing confidence that the original theoretical prediction is true, but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building.
*Scientific transparency statement:* The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

Seminar

## Model Calibration and Validation via Confidence Sets

The issues of calibrating and validating a theoretical model are considered when the parameters that best approximate the data must be selected from among a finite number of alternatives. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model via a multivariate rate function is suggested. A simple numerical procedure for computing the latter is outlined, and it is shown to yield results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant for bounded rationality and organizational research.
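The core mechanics of a Model Confidence Set can be conveyed in a few lines: starting from per-observation losses for each candidate model, models whose average loss is significantly worse than the rest are eliminated one at a time until the equal-predictive-ability hypothesis is no longer rejected. The sketch below is a deliberately simplified version (i.i.d. bootstrap, max-deviation statistic); the loss data and all function names are illustrative, not taken from the paper.

```python
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=500, seed=0):
    """Simplified MCS via iterative elimination.

    losses : (T, M) array of per-observation losses for M candidate models.
    Returns the indices of models retained at confidence level 1 - alpha.
    """
    rng = np.random.default_rng(seed)
    T, _ = losses.shape
    survivors = list(range(losses.shape[1]))
    while len(survivors) > 1:
        L = losses[:, survivors]
        dbar = L.mean(axis=0) - L.mean()   # deviation of each model's mean loss
        # Bootstrap the max absolute deviation under the null of equal loss.
        boot_stats = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, T, T)
            Lb = losses[idx][:, survivors]
            dev = (Lb.mean(axis=0) - Lb.mean()) - dbar
            boot_stats[b] = np.abs(dev).max()
        p_value = (boot_stats >= np.abs(dbar).max()).mean()
        if p_value >= alpha:               # equal predictive ability not rejected
            break
        survivors.pop(int(np.argmax(dbar)))  # drop the worst remaining model
    return survivors

# Illustrative data: model 2 has clearly higher expected loss than models 0 and 1.
rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, size=(300, 3)) + np.array([0.0, 0.02, 1.0])
kept = model_confidence_set(losses)
```

With a clearly inferior model in the candidate set, the elimination step removes it, while the two near-equivalent models typically both survive, which is exactly the "restrict the plausible alternatives, keep the uncertainty" behavior described above.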

## Approximation of Stochastic Programming Problems

In Stochastic Programming, Statistics, or Econometrics, one often looks for the solution of optimization problems of the following form: $$\inf_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}}\, g(\cdot,\theta)=\inf_{\theta\in\Theta} \int_{\mathbb{R}^{q}}g(y,\theta)\,\mathbb{P}(dy)$$ where $\Theta$ is a Borel subset of $\mathbb{R}^{p}$ and $\mathbb{P}$ is a probability measure defined on $\mathbf{Y}=\mathbb{R}^{q}$ endowed with its Borel $\sigma$-field $\mathcal{B}(\mathbf{Y})$ (but more general spaces can be considered).
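When $\mathbb{P}$ is only accessible through sampling, a standard way to attack this problem is the sample-average approximation: replace $\mathbb{E}_{\mathbb{P}}\, g(\cdot,\theta)$ with the empirical mean over draws $Y_1,\dots,Y_n$ from $\mathbb{P}$ and minimize that instead. The sketch below uses a hypothetical quadratic loss $g(y,\theta)=(y-\theta)^2$ and a finite grid for $\Theta$; both choices are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=1.5, scale=1.0, size=10_000)  # draws Y_1..Y_n from P

def g(y, theta):
    # Hypothetical loss: the population minimizer of E[(Y - theta)^2] is E[Y] = 1.5.
    return (y - theta) ** 2

theta_grid = np.linspace(-3.0, 3.0, 601)              # finite approximation of Theta
objective = [g(sample, t).mean() for t in theta_grid] # empirical analogue of E_P g(., theta)
theta_hat = theta_grid[int(np.argmin(objective))]
```

As $n$ grows, the empirical objective converges to the integral above (uniformly over $\Theta$ under suitable conditions), so `theta_hat` approaches the population minimizer.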

## Inference for Simulation Models

Still to be written