The transition from a linear to a circular production system involves transforming waste (such as the organic fraction of municipal solid waste, OFMSW) into valuable resources. Insect-mediated bioconversion, particularly using black soldier fly (BSF) larvae, offers a promising opportunity to convert OFMSW into protein-rich biomass. However, current regulatory restrictions limit the use of insect proteins in animal feed, prompting the exploration of other applications, such as the production of bioplastics. Here, we explored an innovative and integrated circular supply chain model that aims to valorise OFMSW through BSF larvae for the production of biobased materials with high technological value. BSF larvae reared on OFMSW showed excellent growth performance and substrate bioconversion rates. Well-suited extraction methods allowed the isolation of high-purity lipid, protein, and chitin fractions, suitable building blocks for producing biobased materials. In particular, the protein fraction was used to develop biodegradable plastic films that showed potential for replacing traditional petroleum-based materials, with the promise of being fully recycled back to amino acids, thus promoting a circular economy process. Socioeconomic analysis highlighted the value generated along the entire supply chain, and life cycle assessment pointed out that lipid extraction was the most challenging step. Implementation of more sustainable methods is thus needed to reduce the overall environmental impact of the proposed chain. In conclusion, this study represents a proof of concept gathering evidence to support the feasibility of an alternative supply chain that can promote a circular economy while valorising organic waste.
We consider measures of covering and separation that are expressed through maxima and minima of distances between points on a hypersphere. We investigate the behavior of these measures when applied to a sample of independent and uniformly distributed points. In particular, we derive their asymptotic distributions when the number of points diverges. These results can be useful as a benchmark against which deterministic point sets can be evaluated. Whenever possible, we supplement the rigorous derivation of these limiting distributions with heuristic reasoning based on extreme value theory. As a by-product, we provide a proof of a conjecture on the hole radius associated with a facet of the convex hull of points distributed on the hypersphere.
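The two measures in question can be sketched numerically. The following minimal Python illustration (not taken from the paper; the choice of S^2, the Euclidean metric and the grid approximation of the covering radius are assumptions made here) estimates the separation distance and the covering radius of a sample of independent uniform points on the sphere.

```python
# Illustrative sketch (not from the paper): Monte Carlo estimates of the
# separation distance and a grid-approximated covering radius for n i.i.d.
# uniform points on the unit sphere S^2 in R^3.
import numpy as np

rng = np.random.default_rng(0)

def uniform_sphere(n, d=3):
    # Normalized Gaussian vectors are uniformly distributed on the unit sphere.
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def separation(points):
    # Minimum pairwise Euclidean distance between sample points.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    return dist.min()

def covering_radius(points, n_test=5000):
    # Max over the sphere of the distance to the nearest sample point,
    # approximated on a dense uniform test grid.
    test = uniform_sphere(n_test)
    dist = np.linalg.norm(test[:, None, :] - points[None, :, :], axis=-1)
    return dist.min(axis=1).max()

pts = uniform_sphere(200)
print("separation:", separation(pts))
print("covering radius (approx.):", covering_radius(pts))
```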
This initiative systematically examined the extent to which a large set of archival research findings generalizes across contexts. We repeated the key analyses for 29 original strategic management effects in the same context (direct reproduction) as well as in 52 novel time periods and geographies; 45% of the direct reproductions returned results matching the original reports, as did 55% of the tests in different spans of years and 40% of the tests in novel geographies. Some original findings were associated with multiple new tests. Reproducibility was the best predictor of generalizability: for the findings that proved directly reproducible, 84% emerged in other available time periods and 57% emerged in other geographies. Overall, only limited empirical evidence emerged for context sensitivity. In a forecasting survey, independent scientists were able to anticipate which effects would find support in tests in new samples.
This chapter is a review of some simulation models, with special reference to the social sciences. Three critical aspects are identified, namely randomness, emergence and causation, that may help in understanding the evolution and the main characteristics of these simulation models. Several examples illustrate the concepts of the chapter.
Under general conditions, the asymptotic distribution of degenerate second-order
We consider the estimation of the entropy of a discretely supported time series through a plug-in estimator. We provide a correction of the bias and we study the asymptotic properties of the estimator. We show that the widely used correction proposed by Roulston (1999) is incorrect as it does not remove the
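For context, the plug-in estimator and a classical first-order bias correction can be sketched as follows; the Miller-Madow term (m - 1)/(2n) shown here is only a standard example of this kind of correction, not the correction proposed in the paper.

```python
# Illustrative sketch: plug-in entropy of a discretely supported sample and the
# classical Miller-Madow first-order bias correction (shown only as an example
# of this kind of correction; the paper's own correction differs).
import numpy as np
from collections import Counter

def plugin_entropy(x):
    n = len(x)
    counts = np.array(list(Counter(x).values()), dtype=float)
    p = counts / n
    return -np.sum(p * np.log(p))

def miller_madow_entropy(x):
    n = len(x)
    m = len(set(x))                      # number of observed symbols
    return plugin_entropy(x) + (m - 1) / (2 * n)

rng = np.random.default_rng(1)
probs = np.array([0.4, 0.3, 0.2, 0.1])
sample = rng.choice(4, size=500, p=probs)
true_entropy = -(probs * np.log(probs)).sum()
print(plugin_entropy(sample), miller_madow_entropy(sample), true_entropy)
```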
Purpose – This viewpoint article is concerned with an attempt to advance organisational plasticity (OP) modelling concepts by using a novel community modelling framework (PhiloLab) from the social simulation community to drive the process of idea generation. In addition, the authors want to feed back their experience with PhiloLab, as they believe that this way of generating ideas could also be of interest to the wider evidence-based human resource management (EBHRM) community. Design/methodology/approach – The authors used workshop sessions to brainstorm new conceptual ideas in a structured and efficient way with a multidisciplinary group of 14 (mainly academic) participants using PhiloLab. This is a tool from the social simulation community which stimulates and formally supports discussions about philosophical questions concerning future societal models by means of developing conceptual agent-based simulation models. This was followed by an analysis of the qualitative data gathered during the PhiloLab sessions, feeding into the definition of a set of primary axioms of a plastic organisation. Findings – The PhiloLab experiment helped define a set of primary axioms of a plastic organisation, which are presented in this viewpoint article. The results indicated that the problem was rather complex, but they also showed good potential for an agent-based simulation model to tackle some of the key issues related to OP. The experiment also showed that PhiloLab was very useful in terms of knowledge and idea gathering. Originality/value – Through information gathering and open debates on how to create an agent-based simulation model of a plastic organisation, the authors could identify some of the characteristics of OP and start structuring some of the parameters for a computational simulation. With the outcome of the PhiloLab experiment, the authors are paving the way towards future exploratory computational simulation studies of OP.
Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions. Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void, reducing confidence that the original theoretical prediction is true without replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building. Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.
The issues of calibrating and validating a theoretical model are considered, when it is required to select, among a finite number of alternatives, the parameters that best approximate the data. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, based on a multivariate rate function, is suggested. A simple numerical procedure is outlined for the computation of the latter, and it is shown that the procedure yields results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant for bounded rationality and organizational research.
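The elimination logic behind Model Confidence Sets can be sketched as follows; this is a simplified illustration with an i.i.d. bootstrap and an assumed loss matrix, not the paper's procedure or its rate-function approximation.

```python
# Simplified sketch of a Model Confidence Set elimination loop for a matrix of
# losses (rows = evaluation points, columns = candidate parameterizations).
# Illustration only, with an i.i.d. bootstrap; the paper's procedure and its
# rate-function approximation are not reproduced here.
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    models = list(range(losses.shape[1]))
    while len(models) > 1:
        L = losses[:, models]
        dbar = L.mean(axis=0) - L.mean()            # loss relative to the set average
        boots = np.empty((n_boot, len(models)))
        for b in range(n_boot):                     # bootstrap the relative losses
            Lb = L[rng.integers(0, L.shape[0], L.shape[0])]
            boots[b] = Lb.mean(axis=0) - Lb.mean()
        se = boots.std(axis=0, ddof=1)
        t = dbar / se
        t_boot = ((boots - dbar) / se).max(axis=1)  # null distribution of the max statistic
        p_value = np.mean(t_boot >= t.max())
        if p_value >= alpha:                        # cannot reject equal expected loss
            break
        models.pop(int(np.argmax(t)))               # eliminate the worst model
    return models

rng = np.random.default_rng(1)
losses = rng.normal(size=(300, 5)) + np.array([0.0, 0.0, 0.3, 0.5, 1.0])
print("models retained in the confidence set:", model_confidence_set(losses))
```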
In stochastic programming, statistics, or econometrics, the aim is in general the optimization of a criterion function that depends on a decision variable theta and reads as an expectation with respect to a probability
The aim of this paper is to derive the asymptotic statistical properties of a class of discrepancies on the unit hypercube called
This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are most affected by it, the chapter introduces statistical power analysis as a way to determine the appropriate number of runs. Two examples are then worked out using results from an agent-based model. The reader is then guided through the application of this statistical technique and exposed to its limits and potential.
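As a minimal illustration of the kind of calculation involved (not the chapter's own example), the standard normal-approximation formula for a two-sample comparison of mean outcomes gives the number of runs per experimental condition needed to reach a target power:

```python
# Minimal sketch (not the chapter's worked example): number of simulation runs
# per condition so that a two-sample comparison of mean outcomes reaches a
# target power, via the normal-approximation formula
# n ≈ 2 * ((z_{1-alpha/2} + z_{power}) / d)^2 for effect size d.
from scipy.stats import norm

def runs_per_condition(effect_size, alpha=0.05, power=0.95):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(2 * ((z_alpha + z_power) / effect_size) ** 2) + 1

for d in (0.2, 0.5, 0.8):          # small, medium, large effects (Cohen's d)
    print(d, runs_per_condition(d))
```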
This article proposes a framework for the analysis of experienced discrimination in home mortgages. It addresses the problem of home mortgage lending discrimination in one of the richest areas of northern Italy. Employees of a local hospital were interviewed to study their perception (or experience) of discriminatory behavior related to home financing. The analysis follows two steps. The first evaluates self-selection (the probability that individuals apply) and the second focuses on the likelihood that applications are accepted by the bank. Findings show that discrimination is likely to appear when the applicant's nationality is considered. In addition to its findings, the study (a) provides an original econometric model based on a two-step procedure to test perceived discrimination and (b) suggests a method and approach that may constitute a point of reference for those willing to study perceived discrimination.
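The two-step logic can be sketched, on simulated data with hypothetical variable names, as two probit regressions: one for the decision to apply and one for acceptance among applicants. This sketch is not the article's econometric specification.

```python
# Hedged sketch of the two-step logic on simulated data: step 1 models the
# probability of applying for a mortgage, step 2 the probability of acceptance
# among applicants. Variable names are hypothetical and the specification is
# not the article's econometric model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
foreign = rng.integers(0, 2, n)            # applicant nationality indicator
income = rng.normal(0, 1, n)

# Step 1: self-selection (decision to apply).
applies = (0.5 * income - 0.4 * foreign + rng.normal(0, 1, n) > 0).astype(int)
X1 = sm.add_constant(np.column_stack([income, foreign]))
probit_apply = sm.Probit(applies, X1).fit(disp=0)

# Step 2: acceptance, estimated on applicants only.
accepted = (0.8 * income - 0.6 * foreign + rng.normal(0, 1, n) > 0).astype(int)
mask = applies == 1
X2 = sm.add_constant(np.column_stack([income[mask], foreign[mask]]))
probit_accept = sm.Probit(accepted[mask], X2).fit(disp=0)

print(probit_apply.params, probit_accept.params, sep="\n")
```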
This article is concerned with the study of statistical power in agent-based modeling (ABM). After an overview of classic statistical theory on how to interpret Type II error (whose occurrence is also referred to as a false negative) and power, the manuscript presents a study of ABM simulation articles published in management journals and other outlets likely to publish management and organizational research. Findings show that most studies are underpowered, with some being overpowered. After discussing the risks of under- and overpower, we present two formulas to approximate the number of simulation runs needed to reach an appropriate level of power. The study concludes by stressing the importance for organizational behavior scholars of running their models with enough replications to reach a power of 0.95 or higher at the 0.01 significance level.
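The article's two approximation formulas are not reproduced here, but the same kind of calculation can be illustrated with statsmodels' power solver, targeting a power of 0.95 at the 0.01 significance level for a two-group comparison:

```python
# Short illustration (not the article's formulas): solving for the number of
# runs per condition needed to reach power 0.95 at the 0.01 significance level
# for a two-group comparison of means, using statsmodels' power solver.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8):
    n = solver.solve_power(effect_size=effect_size, alpha=0.01, power=0.95)
    print(f"effect size {effect_size}: about {int(n) + 1} runs per condition")
```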
Estimates of Stevens' power law model are often based on averaging over individuals of data from experiments conducted at the individual level. In this paper we suppose that each individual generates responses to stimuli on the basis of a model proposed by Luce and Narens, sometimes called the separable representation model, featuring two distinct perturbations, called the psychophysical function and the subjective weighting function, that may differ across individuals. Exploiting the form of the estimator of the exponent of Stevens' power law, we obtain an expression for this parameter as a function of the original two functions. The results presented in the paper help clarify several well-known paradoxes arising with Stevens' power law, including the range effect, i.e. the fact that the estimated exponent seems to depend on the range of the stimuli; the location effect, i.e. the fact that it depends on the position of the standard within the range; and the averaging effect, i.e. the fact that power laws seem to fit better when data are aggregated over individuals. Theoretical results are illustrated using data from papers by R. Duncan Luce.
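For reference, the estimator whose form is exploited in the paper is the usual log-log regression estimator of the Stevens exponent; a minimal sketch on simulated data (not the Luce datasets) is:

```python
# Minimal sketch of the standard estimator of the Stevens exponent: a log-log
# regression of magnitude estimates on stimulus intensities. The data below are
# simulated, not the Luce datasets used in the paper.
import numpy as np

rng = np.random.default_rng(4)
stimuli = np.array([10, 20, 40, 80, 160, 320], dtype=float)
true_exponent = 0.6
responses = 2.0 * stimuli ** true_exponent * np.exp(rng.normal(0, 0.1, stimuli.size))

slope, intercept = np.polyfit(np.log(stimuli), np.log(responses), 1)
print("estimated exponent:", slope)   # close to 0.6
```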
The aim of this article is to present an approach to the analysis of simple systems composed of a large number of units in interaction. Suppose we have a large number of agents belonging to a finite number of different groups: as the agents randomly interact with each other, they move from one group to another as a result of the interaction. The object of interest is the stochastic process describing the number of agents in each group. As this is generally intractable, several ways of approximating it have been proposed in the literature. We review these approximations and illustrate them with reference to a version of the epidemic model. The tools presented in the paper should be considered as a complement to, rather than a substitute for, the classical analysis of ABMs through simulation.
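A minimal sketch of the idea, for a simple SIS version of the epidemic model (an assumption made here for illustration): the exact stochastic process is simulated Gillespie-style and compared with its deterministic mean-field approximation.

```python
# Illustrative sketch: an SIS epidemic among N agents, simulated exactly
# (Gillespie-style) and approximated by its deterministic mean-field ODE.
# This is a simplified example of the kind of approximation reviewed above.
import numpy as np

beta, gamma, N, T = 0.3, 0.1, 1000, 100.0
rng = np.random.default_rng(5)

def gillespie_sis(i0=10):
    t, i = 0.0, i0
    while t < T and i > 0:
        rate_inf = beta * i * (N - i) / N        # infection rate
        rate_rec = gamma * i                     # recovery rate
        total = rate_inf + rate_rec
        t += rng.exponential(1 / total)          # time to next event
        i += 1 if rng.random() < rate_inf / total else -1
    return i                                     # number infected at time T

def mean_field(i0=10, dt=0.01):
    # dI/dt = beta * I * (N - I) / N - gamma * I  (Euler discretization)
    i = float(i0)
    for _ in range(int(T / dt)):
        i += dt * (beta * i * (N - i) / N - gamma * i)
    return i

print("stochastic simulation, infected at T :", gillespie_sis())
print("mean-field approximation, infected at T:", mean_field())
```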
In this paper, we compare the error in several approximation methods for the cumulative aggregate claim distribution customarily used in the collective model of insurance theory. In this model, it is usually supposed that a portfolio is at risk for a time period of length
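As a minimal illustration of the setting (not the paper's comparison), the following sketch contrasts the moment-matched normal approximation and a Monte Carlo estimate of the cumulative aggregate claim distribution for a compound Poisson portfolio with gamma severities (an assumed toy specification):

```python
# Minimal sketch (assumed toy portfolio, not the paper's comparison): the
# cumulative aggregate claim distribution of a compound Poisson portfolio with
# gamma severities, via the moment-matched normal approximation and Monte Carlo.
import numpy as np
from scipy.stats import norm

lam, shape, scale = 50, 2.0, 1.0                 # Poisson frequency, gamma severities
mean_x, m2_x = shape * scale, shape * (shape + 1) * scale**2
mean_s, var_s = lam * mean_x, lam * m2_x         # compound Poisson mean and variance

rng = np.random.default_rng(6)
totals = np.array([rng.gamma(shape, scale, rng.poisson(lam)).sum()
                   for _ in range(50_000)])      # Monte Carlo aggregate claims

for s in (100, 120, 140):
    mc = np.mean(totals <= s)
    approx = norm.cdf(s, loc=mean_s, scale=np.sqrt(var_s))
    print(f"P(S <= {s}): Monte Carlo {mc:.4f}   normal approximation {approx:.4f}")
```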
We provide a nonasymptotic bound on the distance between a noncentral chi-square distribution and a normal approximation. It improves on both the classical Berry-Esséen bound and previous distances derived specifically for this situation. First, the bound is nonasymptotic and provides an upper limit on the true distance. Second, the bound has the correct rate of decrease and even the correct leading constant when either the number of degrees of freedom or the noncentrality parameter (or both) diverge to infinity. The bound is applied to some probabilities arising in energy detection and Rician fading.
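The distance being bounded can be illustrated numerically (this is not the paper's bound): the Kolmogorov-type distance between the noncentral chi-square and the moment-matched normal, evaluated on a grid, shrinks as the number of degrees of freedom grows.

```python
# Numerical illustration (not the paper's bound): the Kolmogorov-type distance
# between a noncentral chi-square with k degrees of freedom and noncentrality
# lam, and the normal with the same mean k + lam and variance 2*(k + 2*lam).
import numpy as np
from scipy.stats import ncx2, norm

lam = 5.0
for k in (5, 20, 80, 320):
    mean, var = k + lam, 2 * (k + 2 * lam)
    x = np.linspace(mean - 6 * np.sqrt(var), mean + 6 * np.sqrt(var), 4001)
    dist = np.max(np.abs(ncx2.cdf(x, k, lam) - norm.cdf(x, mean, np.sqrt(var))))
    print(f"df = {k:4d}: sup |F_ncx2 - Phi| ≈ {dist:.4f}")
```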
In this paper, we provide an asymptotic formula for the higher derivatives of the Hurwitz zeta function with respect to its first argument that does not need recurrences. As a by-product, we correct some formulas that have appeared in the literature.
We study the effects of the tax burden on tax evasion both theoretically and experimentally. We develop a theoretical framework of tax evasion decisions that is based on two behavioral assumptions: (1) taxpayers are endowed with reference-dependent preferences that are subject to hedonic adaptation and (2) in making their choices, taxpayers are affected by ethical concerns. The model generates new predictions on how a change in the tax rate affects the decision to evade taxes. Contrary to classical expected utility theory, but in line with previous applications of reference-dependent preferences to taxpayers' decisions, an increase in the tax rate increases tax evasion. Moreover, as taxpayers adapt to the new legal tax rate, the decision to evade taxes becomes independent of the tax rate. We present results from a laboratory experiment that support the main predictions of the model.
We study the error of quadrature rules on a compact manifold. Our estimates are in the same spirit as the Koksma-Hlawka inequality, and they depend on a sort of discrepancy of the sampling points and a generalized variation of the function. In particular, we give sharp quantitative estimates for quadrature rules applied to functions in Sobolev classes.
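A small numerical illustration of quadrature error (an assumed toy example on S^2, not from the paper): an equal-weight rule with uniform random nodes applied to a smooth integrand whose exact spherical mean is known.

```python
# Toy illustration of quadrature error on the sphere (not from the paper): an
# equal-weight rule with uniform random nodes for f(x) = x_3^2, whose exact
# mean over S^2 is 1/3.
import numpy as np

rng = np.random.default_rng(13)

def uniform_sphere(n):
    x = rng.standard_normal((n, 3))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

for n in (100, 1000, 10000):
    nodes = uniform_sphere(n)
    estimate = np.mean(nodes[:, 2] ** 2)        # equal-weight quadrature rule
    print(f"n = {n:5d}: error = {abs(estimate - 1/3):.5f}")
```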
We examine the concept of essential intersection of a random set in the framework of robust optimization programs and ergodic theory. Using a recent extension of Birkhoff’s Ergodic Theorem developed by the present authors, it is shown that essential intersection can be represented as the countable intersection of random sets involving an asymptotically mean stationary transformation. This is applied to the approximation of a robust optimization program by a sequence of simpler programs with only a finite number of constraints. We also discuss some formulations of robust optimization programs that have appeared in the literature and we make them more precise, especially from the probabilistic point of view. We show that the essential intersection appears naturally in the correct formulation.
We study various methods of aggregating individual judgments and individual priorities in group decision making with the AHP. The focus is on the empirical properties of the various methods, mainly on the extent to which the various aggregation methods represent an accurate approximation of the priority vector of interest. We identify five main classes of aggregation procedures which provide identical or very similar empirical expressions for the vectors of interest. We also propose a method to decompose the distortions in the AHP response matrix into random errors and perturbations caused by the cognitive biases predicted by the mathematical psychology literature. We test the decomposition with experimental data and find that perturbations in group decision making caused by cognitive distortions are more important than those caused by random errors. We propose methods to correct these systematic distortions.
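Two of the standard aggregation procedures studied in this literature can be sketched as follows (this is an illustration of aggregation of individual judgments versus aggregation of individual priorities, not the decomposition method proposed in the paper):

```python
# Sketch of two standard AHP aggregation procedures: aggregation of individual
# judgments (AIJ, element-wise geometric mean of the pairwise comparison
# matrices) and aggregation of individual priorities (AIP, mean of individual
# priority vectors), with priorities extracted via the principal eigenvector.
import numpy as np

def priority_vector(A):
    eigvals, eigvecs = np.linalg.eig(A)
    w = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    return w / w.sum()

# Two individuals' pairwise comparison matrices (reciprocally symmetric).
A1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
A2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])

aij = priority_vector(np.exp((np.log(A1) + np.log(A2)) / 2))   # geometric mean of judgments
aip = (priority_vector(A1) + priority_vector(A2)) / 2          # mean of priorities

print("AIJ priorities:", aij)
print("AIP priorities:", aip)
```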
The objective is to develop a reliable method to build confidence sets for the Aumann mean of a random closed set as estimated through the Minkowski empirical mean. First, a general definition of the confidence set for the mean of a random set is provided. Then, a method using a characterization of the confidence set through the support function is proposed and a bootstrap algorithm is described, whose performance is investigated in Monte Carlo simulations.
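The support-function characterization lends itself to a simple sketch (an illustration with random disks in the plane, not the paper's algorithm): the support function of the Minkowski empirical mean is the average of the individual support functions, and a percentile bootstrap yields a uniform band for it over a grid of directions.

```python
# Hedged sketch of the support-function idea: random convex sets in the plane
# (here random disks), whose Minkowski empirical mean has support function
# equal to the average of the individual support functions; a percentile
# bootstrap gives a uniform band over a grid of directions.
import numpy as np

rng = np.random.default_rng(7)
n, n_dir, n_boot = 200, 64, 2000
angles = np.linspace(0, 2 * np.pi, n_dir, endpoint=False)
U = np.column_stack([np.cos(angles), np.sin(angles)])   # grid of directions

# Random disks: centers ~ N(0, I), radii ~ Uniform(0.5, 1.5);
# the support function of a disk is h(u) = <center, u> + radius.
centers = rng.normal(0, 1, (n, 2))
radii = rng.uniform(0.5, 1.5, n)
H = centers @ U.T + radii[:, None]        # support functions, one row per set

h_mean = H.mean(axis=0)                   # support function of the Minkowski mean
boot_max = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, n)
    boot_max[b] = np.max(np.abs(H[idx].mean(axis=0) - h_mean))
band = np.quantile(boot_max, 0.95)        # uniform 95% bootstrap band

print("mean support function (first 5 directions):", h_mean[:5])
print("uniform 95% band half-width:", band)
```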
We consider scenario approximation of problems given by the optimization of a function over a constraint that is too difficult to be handled directly but can be efficiently approximated by a finite collection of constraints corresponding to alternative scenarios. The covered programs include min-max games and semi-infinite, robust and chance-constrained programming problems. We prove convergence of the solutions of the approximated programs to the solutions of the given ones, using mainly epigraphical convergence, a kind of variational convergence that has proved to be a valuable tool in optimization problems.
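A minimal sketch of scenario approximation on an assumed toy problem (not from the paper): a semi-infinite linear constraint indexed by an uncertainty set is replaced by the finite set of constraints generated by sampled scenarios, and the resulting ordinary linear program is solved.

```python
# Toy sketch of scenario approximation (assumed problem, not from the paper):
# maximize x1 + x2 subject to (1 + 0.2*xi_1)*x1 + (1 + 0.2*xi_2)*x2 <= 1 for
# all xi in [-1, 1]^2, approximated by the constraints of N sampled scenarios.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)

def scenario_solution(n_scenarios):
    xi = rng.uniform(-1, 1, (n_scenarios, 2))
    A_ub = 1 + 0.2 * xi                     # one sampled constraint per row
    b_ub = np.ones(n_scenarios)
    res = linprog(c=[-1, -1], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.x

for N in (10, 100, 1000):
    x = scenario_solution(N)
    print(f"N = {N:4d}: x = {x}, objective = {x.sum():.4f}")
# The fully robust problem uses the worst-case coefficients (1.2, 1.2), so its
# optimal value is 1/1.2; the sampled programs approach it as N grows.
```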
Quantifying the uniformity of a configuration of points on the sphere is an interesting topic that is receiving growing attention in numerical analysis. An elegant solution has been provided by Cui and Freeden [J. Cui, W. Freeden, Equidistribution on the sphere, SIAM J. Sci. Comput. 18 (2) (1997) 595-609], where a class of discrepancies, called generalized discrepancies and originally associated with pseudodifferential operators on the unit sphere in R^3, has been introduced. The objective of this paper is to extend this class of discrepancies to the sphere of arbitrary dimension and to study their numerical properties. First we show that generalized discrepancies are diaphonies on the hypersphere. This allows us to completely characterize the sequences of points for which convergence to zero of these discrepancies takes place. Then we discuss the worst-case error of quadrature rules and we derive a result on the tractability of multivariate integration on the hypersphere. Finally, we provide several versions of Koksma-Hlawka-type inequalities for the integration of functions defined on the sphere.
In this paper, we derive the asymptotic statistical properties of a class of generalized discrepancies introduced by Cui and Freeden (SIAM J. Sci. Comput., 1997) to test equidistribution on the sphere. We show that they have highly desirable properties and encompass several statistics already proposed in the literature. In particular, it turns out that the limiting distribution is an (infinite) weighted sum of chi-squared random variables. Issues concerning the approximation of this distribution are considered in detail and explicit bounds for the approximation error are given. The statistics are then applied to assess the equidistribution of Hammersley low discrepancy sequences on the sphere and the uniformity of a dataset concerning magnetic orientations.
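In practice, quantiles of such a limiting law can be approximated by truncating the weighted sum and simulating; the weights below are illustrative only, since the paper derives the actual weights together with explicit bounds on the approximation error.

```python
# Sketch of handling the limiting law numerically: Monte Carlo approximation of
# the quantiles of a truncated weighted sum of independent chi-squared(1)
# variables. The weights are assumed here for illustration; the paper derives
# the actual weights and explicit truncation-error bounds.
import numpy as np

rng = np.random.default_rng(9)
weights = 1.0 / np.arange(1, 101) ** 2       # assumed, rapidly decaying weights
samples = (weights * rng.chisquare(1, size=(50_000, weights.size))).sum(axis=1)

for q in (0.90, 0.95, 0.99):
    print(f"{q:.2f} quantile ≈ {np.quantile(samples, q):.3f}")
```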
In some estimation problems, especially in applications dealing with information theory, signal processing and biology, theory provides us with additional information allowing us to restrict the parameter space to a finite number of points. In this case, we speak of discrete parameter models. Even though the problem is quite old and has interesting connections with testing and model selection, asymptotic theory for these models has hardly ever been studied. Therefore, we discuss consistency, asymptotic distribution theory, information inequalities and their relations with efficiency and superefficiency for a general class of
The Analytic Hierarchy Process (AHP) ratio-scaling approach is re-examined in view of recent developments in mathematical psychology based on the so-called separable representations. The study highlights the distortions in the estimates based on the maximum eigenvalue method used in the AHP, distinguishing the contributions due to random noise from the effects due to the nonlinearity of the subjective weighting function of separable representations. The analysis is based on the second-order expansion of the Perron eigenvector and Perron eigenvalue in reciprocally symmetric matrices with perturbations. The asymptotic distributions of the Perron eigenvector and Perron eigenvalue are derived and related to the eigenvalue-based index of cardinal consistency used in the AHP. The results show the limits of using the latter index as a rule to assess the quality of the estimates of a ratio scale. The AHP method of estimating ratio scales is compared with the classical ratio magnitude approach used in psychophysics.
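The objects entering the analysis can be illustrated as follows (a numerical sketch, not the paper's second-order expansion): a reciprocally symmetric matrix generated from a known ratio scale is perturbed multiplicatively, and its Perron eigenvalue, Perron eigenvector and eigenvalue-based consistency index are computed.

```python
# Numerical illustration (not the paper's expansion): a reciprocally symmetric
# matrix generated from a known ratio scale, perturbed multiplicatively, with
# its Perron eigenvalue, Perron eigenvector and the eigenvalue-based
# consistency index CI = (lambda_max - n) / (n - 1).
import numpy as np

rng = np.random.default_rng(10)
w = np.array([0.5, 0.3, 0.15, 0.05])        # "true" ratio scale
n = w.size
A = w[:, None] / w[None, :]                 # consistent matrix: a_ij = w_i / w_j

P = A.copy()                                # multiplicative perturbation of the upper triangle
for i in range(n):
    for j in range(i + 1, n):
        P[i, j] = A[i, j] * np.exp(rng.normal(0, 0.15))
        P[j, i] = 1 / P[i, j]               # keep reciprocal symmetry

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
perron = np.abs(eigvecs[:, k].real)
perron /= perron.sum()
CI = (lam_max - n) / (n - 1)

print("estimated scale:", perron)           # compare with w
print("lambda_max:", lam_max, " CI:", CI)
```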
The analytic hierarchy process (AHP) is a decision-making procedure widely used in management for establishing priorities in multicriteria decision problems. Underlying the AHP is the theory of ratio-scale measures developed in psychophysics since the middle of the last century. It is, however, well known that classical ratio-scaling approaches have several problems. We reconsider the AHP in the light of the modern theory of measurement based on the so-called separable representations recently axiomatized in mathematical psychology. We provide various theoretical and empirical results on the extent to which the AHP can be considered a reliable decision-making procedure in terms of the modern theory of subjective measurement.
We first establish a general version of the Birkhoff Ergodic Theorem for quasi-integrable extended real-valued random variables without assuming ergodicity. The key argument involves the Poincaré Recurrence Theorem. Our extension of the Birkhoff Ergodic Theorem is also shown to hold for asymptotic mean stationary sequences. This is formulated in terms of necessary and sufficient conditions. In particular, we examine the case where the probability space is endowed with a metric and we discuss the validity of the Birkhoff Ergodic Theorem for continuous random variables. The interest of our results is illustrated by an application to the convergence of statistical transforms, such as the moment generating function or the characteristic function, to their theoretical counterparts.
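The application mentioned above can be illustrated numerically on an assumed example (a stationary Gaussian AR(1) sequence, not taken from the paper): the empirical characteristic function computed along a single trajectory converges to the characteristic function of the stationary law.

```python
# Numerical illustration on an assumed example: the empirical characteristic
# function along one trajectory of a stationary Gaussian AR(1) process
# converges to the characteristic function of its N(0, sigma^2/(1-phi^2)) law.
import numpy as np

rng = np.random.default_rng(11)
phi, sigma, T = 0.7, 1.0, 200_000
x = np.empty(T)
x[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))   # start in stationarity
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, sigma)

var_stat = sigma**2 / (1 - phi**2)
for u in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * u * x))
    theoretical = np.exp(-u**2 * var_stat / 2)
    print(f"u = {u}: empirical {empirical.real:.4f}   theoretical {theoretical:.4f}")
```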
When testing that a sample of n points in the unit hypercube
Studying how individuals compare two given quantitative stimuli, say
In this paper we develop a dynamic discrete-time bivariate probit model in which the conditions for Granger non-causality can be represented and tested. The conditions for simultaneous independence are also worked out. The model is extended in order to allow for covariates, representing individual as well as time heterogeneity. The proposed model can be estimated by Maximum Likelihood. Granger non-causality and simultaneous independence can be tested by Likelihood Ratio or Wald tests. A specialized version of the model, aimed at testing Granger non-causality with bivariate discrete-time survival data, is also discussed. The proposed tests are illustrated in two empirical applications.
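A didactic sketch of the likelihood-ratio test for Granger non-causality in such a model is given below, on simulated data and without the covariates and heterogeneity terms discussed in the paper; the null of interest is that the lag of the second outcome does not enter the first equation.

```python
# Didactic sketch (not the paper's implementation): a likelihood-ratio test for
# Granger non-causality in a dynamic bivariate probit on simulated data. The
# null hypothesis is that y2_{t-1} does not enter the equation for y1_t.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, multivariate_normal

rng = np.random.default_rng(12)
T = 300
a_true = np.array([-0.2, 0.8, 0.5])      # y1 equation: const, y1_{t-1}, y2_{t-1}
b_true = np.array([0.1, 0.0, 0.7])       # y2 equation: const, y1_{t-1}, y2_{t-1}
rho = 0.3
y = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    e = rng.multivariate_normal([0.0, 0.0], [[1, rho], [rho, 1]])
    z = np.array([1.0, y[t - 1, 0], y[t - 1, 1]])
    y[t] = [z @ a_true + e[0] > 0, z @ b_true + e[1] > 0]

Z = np.column_stack([np.ones(T - 1), y[:-1, 0], y[:-1, 1]])   # lagged regressors
q = 2 * y[1:] - 1                                             # +/-1 signs of the outcomes

def negloglik(theta):
    a_par, b_par, r = theta[0:3], theta[3:6], np.tanh(theta[6])
    m = np.column_stack([Z @ a_par, Z @ b_par]) * q           # signed linear indices
    qq = q[:, 0] * q[:, 1]
    ll = 0.0
    for sign in (1, -1):                 # group observations by the sign of the correlation
        idx = qq == sign
        if idx.any():
            p = multivariate_normal.cdf(m[idx], mean=[0, 0],
                                        cov=[[1.0, sign * r], [sign * r, 1.0]])
            ll += np.sum(np.log(np.clip(p, 1e-300, 1.0)))
    return -ll

def negloglik_restricted(theta6):
    return negloglik(np.insert(theta6, 2, 0.0))               # impose the zero restriction

fit_u = minimize(negloglik, np.zeros(7), method="Nelder-Mead",
                 options={"maxiter": 2000})
fit_r = minimize(negloglik_restricted, np.zeros(6), method="Nelder-Mead",
                 options={"maxiter": 2000})
lr = 2 * (fit_r.fun - fit_u.fun)
print("LR statistic:", round(lr, 3), "  p-value:", chi2.sf(lr, df=1))
```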
In this paper, we prove a new version of the Birkhoff ergodic theorem (BET) for random variables depending on a parameter (alias integrands). This involves variational convergences, namely epigraphical, hypographical and uniform convergence and requires a suitable definition of the conditional expectation of integrands. We also have to establish the measurability of the epigraphical lower and upper limits with respect to the