I am a professor of econometrics. My research is at the intersection of statistics, numerical analysis, operations research, psychology, economics and management.

- Statistics and econometrics
- Numerical analysis
- Operations research
- Mathematical psychology

PhD in Science, major in Mathematics, 2005

Université Paris Dauphine, Paris (France)

PhD in Management and Industrial Engineering, 1999

Politecnico di Milano, Milano (Italy)

DEA MASE, 1999

Université Paris Dauphine/ENSAE, Paris (France)

MSc in Management and Industrial Engineering, 1996

Politecnico di Milano, Milano (Italy)

Organized with Francesco Guala, Caterina Marchionni and Ivan Moscati, together with Lake Como School of Advanced Studies and INEM (International Network for Economic Method).

Organized with Fabrizio Adani, Garabed Antranikian, Diego Bosco, Marcella Bracale, Loredano Pollegioni, Andreas Pyka, Daniela Ubiali, Andrea Vezzulli.

Organized with Francesco Guala, Caterina Marchionni and Ivan Moscati, together with Lake Como School of Advanced Studies and INEM (International Network for Economic Method).

Organized with Francesco Guala and Ivan Moscati, together with Lake Como School of Advanced Studies and ESHET (European Society for the History of Economic Thought).

Organized within PRIN 2010J3LZEN.

Organized with EMPG (European Mathematical Psychology Group).

*Master Analisi e Valutazione delle Politiche Pubbliche (Master in Public Policy Analysis and Evaluation)*

- Individui, comportamenti ed istituzioni (Individuals, Behaviors, and Institutions) - 20 hrs (A.Y. 2018-19, 2017-18, 2016-17)
- Politiche per la legalità, di contrasto alla corruzione e all’evasione fiscale (Policies for Legality and for Countering Corruption and Tax Evasion) - 5 hrs (A.Y. 2016-17)

*Central-German Doctoral Program Economics*

- Econometrics of Competitive and Regulated Markets - 20 hrs (A.Y. 2015-16)

*PhD in Methods and Models for Economic Decision*

- Advanced Econometrics for Decision Making - 20 hrs (A.Y. 2019-20, 2018-19, 2017-18, 2016-17, 2015-16)

*PhD in Econometrics and Empirical Economics*

- Survival analysis - 8 hrs (A.Y. 2007-08)
- Extreme value theory - 8 hrs (A.Y. 2007-08, 2006-07)
- Alternative Factorizations of the Likelihood - 8 hrs (A.Y. 2005-06, 2004-05)

- Econometrics of Competitive and Regulated Markets - 80 hrs (A.Y. 2019-20, 2018-19, 2017-18, 2016-17, 2015-16, 2014-15, 2013-14)
- Econometria (Econometrics) - 40 hrs (A.Y. 2019-20, 2018-19, 2017-18, 2016-17, 2015-16, 2014-15, 2013-14, 2009-10, 2008-09, 2007-08, 2006-07)
- Econometria dell’Organizzazione Industriale (Econometrics of Industrial Organization) - 40 hrs (A.Y. 2012-13)
- Statistica per l’Impresa (Marketing Statistics) - 20 hrs (A.Y. 2011-12)
- Econometria delle durate (Advanced Econometrics) - 40 hrs (A.Y. 2009-10, 2008-09, 2007-08)
- Econometria dei mercati finanziari A (Financial Econometrics) - 40 hrs (A.Y. 2008-09, 2006-07)
- Extreme value theory - 6 hrs (A.Y. 2006-07)
- Microeconomia applicata/Microeconometria (Applied Microeconomics/Microeconometrics) - 36 hrs (A.Y. 2005-06, 2004-05)
- Econometria dei mercati finanziari B (Advanced Financial Econometrics) - assistant (A.Y. 2005-06, 2004-05, 2003-04)

*Dipartimento di Economia e Produzione (Department of Economics and Production)*

- Econometria (Econometrics) - assistant (A.Y. 1997-98)

*Master MEDEA (Management ed Economia dell’Energia e dell’Ambiente)*

- Metodi matematici per la gestione d’impresa (Mathematical Methods for Business Management) - assistant (A.Y. 2004-05, 2003-04, 1997-98, 1996-97)

#### Approximation of Stochastic Programming Problems

#### Bioeconomy

#### Inference for Simulation Models

#### Psychological Measurement

#### Statistical Testing

#### Uniformity of Points

In Stochastic Programming, Statistics or Econometrics, one often looks for the solution of optimization problems of the following form: \begin{equation} \inf_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}}\, g(\cdot,\theta)=\inf_{\theta\in\Theta} \int_{\mathbb{R}^{q}}g(y,\theta)\,\mathbb{P}(dy) \end{equation} where $\Theta$ is a Borel subset of $\mathbb{R}^{p}$ and $\mathbb{P}$ is a probability measure defined on $\mathbf{Y}=\mathbb{R}^{q}$ endowed with its Borel $\sigma$-field $\mathcal{B}(\mathbf{Y})$ (but more general spaces can be considered).
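
As a concrete illustration (a minimal sketch with a hypothetical loss $g$, not an example from the text), the expectation above can be replaced by an empirical mean over a random sample, the classical sample average approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loss: g(y, theta) = (y - theta)^2, so the population problem
# inf_theta E[g(Y, theta)] is solved by theta* = E[Y].
def g(y, theta):
    return (y - theta) ** 2

# Sample average approximation: replace P by the empirical measure of a sample
y = rng.normal(loc=1.5, scale=1.0, size=20_000)  # here P = N(1.5, 1)

thetas = np.linspace(-1.0, 4.0, 1001)            # discretized parameter set Theta
objective = np.array([g(y, t).mean() for t in thetas])
theta_hat = thetas[objective.argmin()]           # empirical minimizer
print(theta_hat)                                 # close to theta* = 1.5
```

As the sample size grows, the empirical minimizer approaches the population minimizer; whether and in what sense this convergence holds is exactly the approximation question at stake.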

The increasing importance of the biological sciences in creating value added across many economic sectors has contributed to the rise of the now-popular term “bioeconomy,” referring to “the set of economic activities relating to the invention, development, production and use of biological products and processes” (OECD, 2009). These activities are characterized by an emphasis on reducing environmental pollution and adopting sustainable practices.

Still to be written

Measurement theory is “a field of study that examines the attribution of values to traits, characteristics, or constructs. Measurement theory focuses on assessing the true score of an attribute, such that an obtained value has a close correspondence with the actual quantity” (APA Dictionary of Psychology, 2nd ed.).

Still to be written

Quantifying uniformity of a configuration of points on a space is a topic that is receiving growing attention in computer science, physics and mathematics. The problem has interesting connections with statistics, where several tests of uniformity have been introduced.
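
For instance (an illustrative sketch on the unit interval, not an example from the text), a classical statistical test of uniformity is the one-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# One-sample Kolmogorov-Smirnov test against the Uniform(0, 1) distribution
uniform_pts = rng.uniform(size=500)
clustered_pts = rng.beta(5, 5, size=500)  # points concentrated around 0.5

ks_uniform = stats.kstest(uniform_pts, "uniform")
ks_clustered = stats.kstest(clustered_pts, "uniform")

# A small p-value is evidence against uniformity of the configuration
print(f"uniform sample:   p = {ks_uniform.pvalue:.3f}")
print(f"clustered sample: p = {ks_clustered.pvalue:.2e}")
```

The test compares the empirical distribution function of the points with the uniform one; many alternative uniformity statistics (e.g. discrepancy-based ones) follow the same comparison idea.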

Purpose – This viewpoint article is concerned with an attempt to advance organisational plasticity (OP) modelling concepts by using a novel community modelling framework (PhiloLab) from the social simulation community to drive the process of idea generation. In addition, the authors want to feed back their experience with PhiloLab as they believe that this way of idea generation could also be of interest to the wider evidence-based human resource management (EBHRM) community. Design/methodology/approach – The authors used some workshop sessions to brainstorm new conceptual ideas in a structured and efficient way with a multidisciplinary group of 14 (mainly academic) participants using PhiloLab. This is a tool from the social simulation community, which stimulates and formally supports discussions about philosophical questions of future societal models by means of developing conceptual agent-based simulation models. This was followed by an analysis of the qualitative data gathered during the PhiloLab sessions, feeding into the definition of a set of primary axioms of a plastic organisation. Findings – The PhiloLab experiment helped with defining a set of primary axioms of a plastic organisation, which are presented in this viewpoint article. The results indicated that the problem was rather complex, but it also showed good potential for an agent-based simulation model to tackle some of the key issues related to OP. The experiment also showed that PhiloLab was very useful in terms of knowledge and idea gathering. Originality/value – Through information gathering and open debates on how to create an agent-based simulation model of a plastic organisation, the authors could identify some of the characteristics of OP and start structuring some of the parameters for a computational simulation. With the outcome of the PhiloLab experiment, the authors are paving the way towards future exploratory computational simulation studies of OP.

Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions. *Significance statement:* It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void—reducing confidence the original theoretical prediction is true, but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building. 
*Scientific transparency statement:* The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

The issues of calibrating and validating a theoretical model are considered when the parameters that best approximate the data must be selected from a finite number of alternatives. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, via a multivariate rate function, is suggested. A simple numerical procedure for computing the latter is outlined, and it is shown to yield results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant for bounded rationality and organizational research.
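
As a rough, simplified sketch of the underlying idea (hypothetical losses and a plain paired t-test, not the procedure developed in the paper), one can retain every candidate whose per-observation losses are statistically indistinguishable from those of the best-performing candidate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-observation losses for 3 candidate parameterizations
# (rows: observations, columns: candidates); candidate 2 is clearly worse.
n = 400
losses = np.column_stack([
    rng.normal(1.00, 0.2, n),   # candidate 0
    rng.normal(1.02, 0.2, n),   # candidate 1: hard to distinguish from 0
    rng.normal(1.50, 0.2, n),   # candidate 2: clearly inferior
])

best = losses.mean(axis=0).argmin()
confidence_set = [
    j for j in range(losses.shape[1])
    if j == best
    or stats.ttest_rel(losses[:, j], losses[:, best]).pvalue > 0.05
]
print(confidence_set)
```

The clearly inferior candidate is eliminated, while candidates with statistically equivalent losses survive, leaving a set of plausible alternatives rather than a single forced choice.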

In stochastic programming, statistics, or econometrics, the aim is generally to optimize a criterion function that depends on a decision variable $\theta$ and can be written as an expectation with respect to a probability measure $\mathbb{P}$. When this function cannot be computed in closed form, it is customary to approximate it through an empirical mean function based on a random sample; several other methods have also been proposed, such as quasi-Monte Carlo integration and numerical integration rules. In this paper, we propose a general approach for approximating such a function, in the sense of epigraphical convergence, using a sequence of functions of simpler type which can be expressed as expectations with respect to probability measures $\mathbb{P}_n$ that, in some sense, approximate $\mathbb{P}$. The main difference with the existing results lies in the fact that our main theorem does not impose conditions directly on the approximating probabilities but only on some integrals with respect to them. In addition, the $\mathbb{P}_n$’s can be transition probabilities, i.e., are allowed to depend on a further parameter, $\xi$, whose value results from deterministic or stochastic operations, depending on the underlying model. This framework allows us to deal with a large variety of approximation procedures such as Monte Carlo, quasi-Monte Carlo, numerical integration, quantization, several variations on Monte Carlo sampling, and some density approximation algorithms. As by-products, we discuss convergence results for stochastic programming and statistical inference based on dependent data, for programming with estimated parameters, and for robust optimization; we also provide a general result about the consistency of the bootstrap for $M$-estimators.
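
To make the contrast between plain Monte Carlo and quasi-Monte Carlo concrete (an illustrative sketch with a simple one-dimensional integrand, not an example from the paper), both can be used to approximate $\mathbb{E}[e^{U}] = e - 1$ for $U \sim \mathrm{Uniform}(0,1)$:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    pts = np.empty(n)
    for i in range(n):
        x, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        pts[i] = x
    return pts

# Approximate E[g(U)] = integral of exp(u) over [0, 1], i.e. e - 1
g = np.exp
true_value = np.e - 1

n = 2**10
mc_points = np.random.default_rng(0).uniform(size=n)  # plain Monte Carlo
qmc_points = van_der_corput(n)                        # quasi-Monte Carlo

mc_error = abs(g(mc_points).mean() - true_value)
qmc_error = abs(g(qmc_points).mean() - true_value)
print(mc_error, qmc_error)  # QMC error is typically much smaller here
```

Both approximations are expectations with respect to discrete measures $\mathbb{P}_n$ supported on the evaluation points; the framework described above covers them in a unified way.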

(2021).
Randomness, Emergence and Causation: A Historical Perspective of Simulation in the Social Sciences.
*Complexity and Emergence*, edited by S. Albeverio, E. Mastrogiacomo, E. Rosazza Gianin and S. Ugolini, Springer Proceedings in Mathematics & Statistics, Springer, pp. ??-??.

(2020).
On the quest for defining organisational plasticity: a community modelling experiment.
*Evidence-based HRM: a Global Forum for Empirical Scholarship, ??*, ??-??.

(2020).
Creative destruction in science.
*Organizational Behavior and Human Decision Processes, 161*, 291-309.

(2020).
Model Calibration and Validation via Confidence Sets.
*Econometrics and Statistics, ??*(??), ??-??.

(2019).
Generic Consistency for Approximate Stochastic Programming and Statistical Problems.
*SIAM Journal on Optimization, 29*(1), 290-317.