In this paper, we compare the error in several approximation methods for the cumulative aggregate claim distribution customarily used in the collective model of insurance theory. In this model, a portfolio is supposed to be at risk for a time period of length $t$. The occurrences of the claims are governed by a Poisson process of intensity $\mu$, so that the number of claims $N$ in $[0,t]$ is a Poisson random variable with parameter $\lambda = \mu t$. Each claim is an independent replication of the random variable $X$ representing the claim severity. The aggregate claim, or total claim amount, in $[0,t]$ is the random sum of $N$ independent replications of $X$, whose cumulative distribution function (cdf) is the object of study. Because this cdf is difficult to compute exactly, several approximation methods have been proposed. We consider 15 approximations put forward in the literature that use only information on the lower-order moments of the involved distributions. For each approximation, we consider the difference between the true distribution and the approximating one, and we propose to use expansions of this difference, related to Edgeworth series, to measure its accuracy as $\lambda = \mu t$ diverges to infinity. Using these expansions, several statements concerning the quality of approximations for the distribution of the aggregate claim process find theoretical support, while others can be disproved on the same grounds. Finally, we investigate numerically the accuracy of the proposed formulas.
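The compound Poisson model described above can be sketched numerically. The snippet below (purely illustrative; the exponential severity, function names, and parameter values are our choices, not the paper's) draws Monte Carlo samples of the aggregate claim $S = X_1 + \dots + X_N$ with $N \sim \mathrm{Poisson}(\lambda)$ and evaluates its empirical cdf:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplicative method for a Poisson(lam) variate."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_aggregate_claims(lam, severity_sampler, n_sims=10_000, seed=0):
    """Monte Carlo samples of S = X_1 + ... + X_N with N ~ Poisson(lam)
    and i.i.d. severities drawn from severity_sampler."""
    rng = random.Random(seed)
    return [sum(severity_sampler(rng) for _ in range(poisson_draw(rng, lam)))
            for _ in range(n_sims)]

def empirical_cdf(samples, x):
    """Fraction of simulated aggregate claims not exceeding x."""
    return sum(s <= x for s in samples) / len(samples)

# Exponential severities with mean 1 and lambda = mu * t = 5, so E[S] = 5.
samples = simulate_aggregate_claims(5.0, lambda rng: rng.expovariate(1.0))
```

The empirical cdf produced this way is exactly the object the approximation methods target; the atom at zero, of mass $e^{-\lambda}$, is one of the features that makes simple approximations inaccurate for small $\lambda$.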
The objective is to develop a reliable method to build confidence sets for the Aumann mean of a random closed set as estimated through the Minkowski empirical mean. First, a general definition of the confidence set for the mean of a random set is provided. Then, a method using a characterization of the confidence set through the support function is proposed and a bootstrap algorithm is described, whose performance is investigated in Monte Carlo simulations.
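As a hedged illustration of the bootstrap idea, the sketch below uses the simplest random sets, intervals on the line, where the Minkowski empirical mean and the support function take elementary forms; all names and the toy data are ours, not the paper's:

```python
import random

def support(interval, direction):
    """Support function h_K(u) = sup_{x in K} u * x of K = [a, b], u in {-1, +1}."""
    a, b = interval
    return b if direction > 0 else -a

def minkowski_mean(intervals):
    """Minkowski (elementwise) average of a sample of intervals."""
    n = len(intervals)
    return (sum(a for a, _ in intervals) / n, sum(b for _, b in intervals) / n)

def bootstrap_sup_stat(intervals, n_boot=500, seed=0):
    """Bootstrap the sup (over directions) distance between the support
    function of a resampled mean and that of the empirical mean."""
    rng = random.Random(seed)
    base = minkowski_mean(intervals)
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(intervals) for _ in intervals]
        m = minkowski_mean(resample)
        stats.append(max(abs(support(m, u) - support(base, u)) for u in (-1.0, 1.0)))
    return sorted(stats)

def confidence_radius(intervals, level=0.95):
    """Bootstrap quantile: inflating the empirical mean by this radius in
    every direction gives a confidence set for the Aumann mean."""
    stats = bootstrap_sup_stat(intervals)
    return stats[int(level * len(stats))]

# Toy data: intervals [g - 1, g + 1] around Gaussian centers.
rng = random.Random(1)
data = [(g - 1.0, g + 1.0) for g in (rng.gauss(0.0, 1.0) for _ in range(200))]
radius = confidence_radius(data)
```

In higher dimensions the same recipe applies with the sup taken over a grid of directions on the unit sphere, which is where the characterization through the support function does its work.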
We study various methods of aggregating individual judgments and individual priorities in group decision making with the AHP. The focus is on the empirical properties of the various methods, mainly on the extent to which the various aggregation methods accurately approximate the priority vector of interest. We identify five main classes of aggregation procedures that provide identical or very similar empirical expressions for the vectors of interest. We also propose a method to decompose distortions in the AHP response matrix into random errors and perturbations caused by cognitive biases predicted by the mathematical psychology literature. We test the decomposition on experimental data and find that perturbations in group decision making caused by cognitive distortions are more important than those caused by random errors. Finally, we propose methods to correct these systematic distortions.
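One standard procedure of the kind studied here is the elementwise geometric mean of the individual pairwise-comparison matrices (aggregation of individual judgments), followed by the row-geometric-mean priority vector. A minimal sketch, our own illustration rather than the paper's code:

```python
import math

def aggregate_judgments(matrices):
    """Elementwise geometric mean of individual pairwise-comparison
    matrices (the standard aggregation-of-individual-judgments step)."""
    k, n = len(matrices), len(matrices[0])
    return [[math.prod(m[i][j] for m in matrices) ** (1.0 / k)
             for j in range(n)] for i in range(n)]

def priority_vector(matrix):
    """Row geometric means normalized to sum to one (the logarithmic
    least-squares priority vector)."""
    n = len(matrix)
    raw = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(raw)
    return [r / total for r in raw]
```

The geometric mean preserves the reciprocal property $a_{ji} = 1/a_{ij}$ of the aggregated matrix, which is one reason this class of procedures behaves so similarly across its variants.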
We examine the concept of essential intersection of a random set in the framework of robust optimization programs and ergodic theory. Using a recent extension of Birkhoff's Ergodic Theorem developed by the present authors, it is shown that essential intersection can be represented as the countable intersection of random sets involving an asymptotically mean stationary transformation. This is applied to the approximation of a robust optimization program by a sequence of simpler programs with only a finite number of constraints. We also discuss some formulations of robust optimization programs that have appeared in the literature and we make them more precise, especially from the probabilistic point of view. We show that the essential intersection appears naturally in the correct formulation.
We study the error in quadrature rules on a compact manifold. Our estimates are in the same spirit as the Koksma-Hlawka inequality: they depend on a notion of discrepancy of the sampling points and on a generalized variation of the function. In particular, we give sharp quantitative estimates for quadrature rules applied to functions in Sobolev classes.
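A toy illustration of the quantity being estimated: the error of an equal-weight quadrature rule on the sphere $S^2$, applied to a function whose spherical average is known exactly (the random point construction and the test function are our illustrative choices):

```python
import math
import random

def random_sphere_points(n, rng):
    """Uniform points on the unit sphere S^2 via normalized Gaussian vectors."""
    pts = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        pts.append([c / norm for c in v])
    return pts

def equal_weight_quadrature(points, f):
    """Equal-weight quadrature rule Q(f) = (1/N) * sum_i f(x_i)."""
    return sum(f(p) for p in points) / len(points)

# f(x, y, z) = z^2 has exact average 1/3 over the unit sphere.
pts = random_sphere_points(4000, random.Random(0))
quad_error = abs(equal_weight_quadrature(pts, lambda p: p[2] ** 2) - 1.0 / 3.0)
```

A Koksma-Hlawka type bound controls exactly this kind of error by the product of a discrepancy of the point set and a variation (or Sobolev norm) of $f$; low-discrepancy constructions beat the $O(N^{-1/2})$ rate that random points achieve here.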
We study the effects of the tax burden on tax evasion both theoretically and experimentally. We develop a theoretical framework of tax evasion decisions based on two behavioral assumptions: (1) taxpayers are endowed with reference-dependent preferences that are subject to hedonic adaptation, and (2) in making their choices, taxpayers are affected by ethical concerns. The model generates new predictions on how a change in the tax rate affects the decision to evade taxes. Contrary to classical expected utility theory, but in line with previous applications of reference-dependent preferences to taxpayers' decisions, an increase in the tax rate increases tax evasion. Moreover, as taxpayers adapt to the new legal tax rate, the decision to evade taxes becomes independent of the tax rate. We present results from a laboratory experiment that support the main predictions of the model.
In this paper, we derive the asymptotic statistical properties of a class of generalized discrepancies introduced by Cui and Freeden (*SIAM J. Sci. Comput.*, 1997) to test equidistribution on the sphere. We show that they have highly desirable properties and encompass several statistics already proposed in the literature. In particular, it turns out that the limiting distribution is an (infinite) weighted sum of chi-squared random variables. Issues concerning the approximation of this distribution are considered in detail, and explicit bounds for the approximation error are given. The statistics are then applied to assess the equidistribution of Hammersley low-discrepancy sequences on the sphere and the uniformity of a dataset of magnetic orientations.
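The limiting weighted sum of chi-squared variables has no simple closed-form cdf, but its tail can be approximated by truncating the weight sequence and simulating. The sketch below does exactly that; the weights and sample sizes are illustrative choices of ours, not taken from the paper:

```python
import random

def weighted_chisq_sample(weights, rng):
    """One draw of sum_k w_k * Z_k^2 with Z_k independent standard normals."""
    return sum(w * rng.gauss(0.0, 1.0) ** 2 for w in weights)

def tail_probability(weights, x, n_sims=50_000, seed=0):
    """Monte Carlo estimate of P(sum_k w_k * chi2_1 > x) for a truncated
    weight sequence, the limiting tail used to calibrate the test."""
    rng = random.Random(seed)
    return sum(weighted_chisq_sample(weights, rng) > x
               for _ in range(n_sims)) / n_sims

# Geometrically decaying weights, truncated at 20 terms.
weights = [0.5 ** k for k in range(1, 21)]
```

When the weights decay quickly, the truncation error is dominated by the discarded tail of the weight sequence, which is the kind of quantity the explicit error bounds in the paper control.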
Quantifying the uniformity of a configuration of points on the sphere is an interesting topic that is receiving growing attention in numerical analysis. An elegant solution has been provided by Cui and Freeden [J. Cui, W. Freeden, Equidistribution on the sphere, *SIAM J. Sci. Comput.* 18 (2) (1997) 595-609], where a class of discrepancies, called generalized discrepancies and originally associated with pseudodifferential operators on the unit sphere in $\mathbb{R}^3$, has been introduced. The objective of this paper is to extend this class of discrepancies to the sphere of arbitrary dimension and to study their numerical properties. First, we show that generalized discrepancies are diaphonies on the hypersphere. This allows us to completely characterize the sequences of points for which these discrepancies converge to zero. Then we discuss the worst-case error of quadrature rules and derive a result on the tractability of multivariate integration on the hypersphere. Finally, we provide several versions of Koksma-Hlawka type inequalities for the integration of functions defined on the sphere.
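As a concrete, elementary relative of the discrepancies discussed above (the spherical cap discrepancy, not the Cui-Freeden generalized discrepancy itself), the following sketch, entirely our own illustration, bounds a point set's cap discrepancy on $S^2$ from below by Monte Carlo over random caps:

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on S^2 via a normalized Gaussian vector."""
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cap_discrepancy_estimate(points, n_caps=2000, seed=0):
    """Monte Carlo lower bound on the spherical cap discrepancy: the largest
    gap, over sampled caps {x : <x, c> >= t}, between the fraction of points
    in the cap and the cap's normalized surface area (1 - t) / 2."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_caps):
        c = random_unit_vector(rng)
        t = rng.uniform(-1.0, 1.0)
        inside = sum(sum(p[i] * c[i] for i in range(3)) >= t for p in points)
        worst = max(worst, abs(inside / len(points) - (1.0 - t) / 2.0))
    return worst

rng = random.Random(1)
pts = [random_unit_vector(rng) for _ in range(500)]
```

A sequence is equidistributed exactly when such discrepancies converge to zero, which is the property the diaphony characterization in the paper pins down for the generalized discrepancies.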
We consider scenario approximation of problems given by the optimization of a function over a constraint set that is too difficult to handle directly but can be efficiently approximated by a finite collection of constraints corresponding to alternative scenarios. The programs covered include min-max games and semi-infinite, robust and chance-constrained programming problems. We prove convergence of the solutions of the approximated programs to those of the original ones, using mainly epigraphical convergence, a kind of variational convergence that has proved to be a valuable tool in optimization problems.
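A minimal one-dimensional sketch of the scenario idea (entirely illustrative, not from the paper): minimize $x$ subject to $x \ge \omega$ for all $\omega \in [0,1]$, whose robust solution is $x^* = 1$; replacing the uncountable constraint family with finitely many sampled scenarios gives a program whose solution is simply the sample maximum, converging to $x^*$ as the number of scenarios grows:

```python
import random

def scenario_solution(scenarios):
    """Solve min x s.t. x >= omega for every sampled scenario omega:
    the optimum of this scenario program is the sample maximum."""
    return max(scenarios)

def approximate(n_scenarios, seed=0):
    """Sample scenarios omega ~ Uniform(0, 1) and solve the scenario
    program; the underlying robust program has solution x* = 1."""
    rng = random.Random(seed)
    return scenario_solution([rng.random() for _ in range(n_scenarios)])
```

Epigraphical convergence is what licenses the passage from convergence of the approximated feasible sets to convergence of the optimal solutions themselves.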
In some estimation problems, especially in applications dealing with information theory, signal processing and biology, theory provides us with additional information allowing us to restrict the parameter space to a finite number of points. In this case, we speak of discrete parameter models. Even though the problem is quite old and has interesting connections with testing and model selection, asymptotic theory for these models has hardly ever been studied. Therefore, we discuss consistency, asymptotic distribution theory, information inequalities and their relations with efficiency and superefficiency for a general class of $m$-estimators.
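A minimal sketch of estimation in a discrete parameter model (the Gaussian likelihood and the three-point parameter space are our illustrative choices): the maximum-likelihood estimator, itself an $m$-estimator, simply maximizes the log-likelihood over the finite parameter space:

```python
import random

def log_likelihood(theta, data):
    """Gaussian log-likelihood (known unit variance, mean theta), up to a constant."""
    return -0.5 * sum((x - theta) ** 2 for x in data)

def discrete_mle(data, parameter_space):
    """Maximum-likelihood estimation restricted to a finite parameter space."""
    return max(parameter_space, key=lambda t: log_likelihood(t, data))

rng = random.Random(0)
data = [rng.gauss(1.0, 1.0) for _ in range(50)]
theta_hat = discrete_mle(data, [0.0, 1.0, 2.0])
```

Because the parameter space is finite, the estimator is eventually exactly correct with probability tending to one, rather than merely root-$n$ consistent; this exponentially fast convergence is what makes the asymptotics, and the superefficiency phenomena, of these models so different from the continuous-parameter case.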