Operations Research

Approximation of Stochastic Programming Problems

In stochastic programming, statistics, or econometrics, one often looks for the solution of optimization problems of the following form: \begin{equation} \inf_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}}\, g(\cdot,\theta)=\inf_{\theta\in\Theta} \int_{\mathbb{R}^{q}} g(y,\theta)\,\mathbb{P}(dy) \end{equation} where $\Theta$ is a Borel subset of $\mathbb{R}^{p}$ and $\mathbb{P}$ is a probability measure defined on $\mathbf{Y}=\mathbb{R}^{q}$ endowed with its Borel $\sigma$-field $\mathcal{B}(\mathbf{Y})$ (but more general spaces can be considered).
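
A minimal sketch of the most common approximation of such a problem, the sample-average (empirical-mean) approximation; the choices of $g$ and $\mathbb{P}$ below are illustrative assumptions, not taken from the papers:

```python
import numpy as np

# Sample-average approximation (SAA) sketch. Illustrative choices:
# g(y, theta) = (y - theta)^2 and P = N(0, 1), so the exact
# minimizer of E_P[g(., theta)] over Theta = [-1, 1] is theta* = 0.
rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)          # random sample from P

thetas = np.linspace(-1.0, 1.0, 201)     # grid over Theta
# Empirical objective: (1/n) * sum_i g(y_i, theta)
obj = np.array([np.mean((y - t) ** 2) for t in thetas])
theta_hat = thetas[np.argmin(obj)]
# theta_hat approximates the true minimizer theta* = 0
```

As the sample size grows, the empirical objective epi-converges to the true expectation and the minimizers converge accordingly.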

Generic Consistency for Approximate Stochastic Programming and Statistical Problems

In stochastic programming, statistics, or econometrics, the aim is generally to optimize a criterion function that depends on a decision variable $\theta$ and is expressed as an expectation with respect to a probability measure $\mathbb{P}$. When this function cannot be computed in closed form, it is customary to approximate it through an empirical mean based on a random sample; several other methods have also been proposed, such as quasi-Monte Carlo integration and numerical integration rules. In this paper, we propose a general approach for approximating such a function, in the sense of epigraphical convergence, by a sequence of simpler functions that can be expressed as expectations with respect to probability measures $\mathbb{P}_n$ that, in some sense, approximate $\mathbb{P}$. The main difference from existing results is that our main theorem does not impose conditions directly on the approximating probabilities but only on some integrals with respect to them. In addition, the $\mathbb{P}_n$'s can be transition probabilities, i.e., they are allowed to depend on a further parameter, $\xi$, whose value results from deterministic or stochastic operations, depending on the underlying model. This framework allows us to deal with a large variety of approximation procedures, such as Monte Carlo, quasi-Monte Carlo, numerical integration, quantization, several variations on Monte Carlo sampling, and some density approximation algorithms. As by-products, we discuss convergence results for stochastic programming and statistical inference based on dependent data, for programming with estimated parameters, and for robust optimization; we also provide a general result on the consistency of the bootstrap for $M$-estimators.

Empirical Properties of Group Preference Aggregation Methods Employed in AHP: Theory and Evidence

We study various methods of aggregating individual judgments and individual priorities in group decision making with the AHP. The focus is on the empirical properties of the various methods, mainly on the extent to which they provide an accurate approximation of the priority vector of interest. We identify five main classes of aggregation procedures which provide identical or very similar empirical expressions for the vectors of interest. We also propose a method to decompose the distortions in the AHP response matrix into those due to random errors and the perturbations caused by cognitive biases predicted by the mathematical psychology literature. We test the decomposition with experimental data and find that perturbations in group decision making caused by cognitive distortions are more important than those caused by random errors. We propose methods to correct the systematic distortions.
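
Two of the standard aggregation procedures compared in this line of work can be sketched as follows: aggregation of individual judgments (AIJ), which takes elementwise geometric means of the pairwise comparison matrices before extracting priorities, and aggregation of individual priorities (AIP), which extracts individual priority vectors first and then aggregates them. The two $3\times 3$ judgment matrices below are made-up illustrations:

```python
import numpy as np

def priority_vector(A, iters=100):
    """Perron eigenvector of a positive matrix via power iteration,
    normalized to sum to one (the usual AHP priority vector)."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# Two (consistent) individual pairwise comparison matrices.
A1 = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
A2 = np.array([[1, 3, 9], [1/3, 1, 3], [1/9, 1/3, 1]])

# AIJ: elementwise geometric mean of the judgments, then one
# priority vector from the aggregated matrix.
G = np.sqrt(A1 * A2)
w_aij = priority_vector(G)

# AIP: individual priority vectors first, then their geometric
# mean, renormalized.
W = np.vstack([priority_vector(A1), priority_vector(A2)])
w_aip = np.prod(W, axis=0) ** 0.5
w_aip /= w_aip.sum()
```

With consistent input matrices, as here, AIJ and AIP return the same group priority vector; the empirical question studied in the paper is how close the two remain under realistic, inconsistent judgments.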

Essential Intersection and Approximation Results for Robust Optimization

We examine the concept of essential intersection of a random set in the framework of robust optimization programs and ergodic theory. Using a recent extension of Birkhoff's Ergodic Theorem developed by the present authors, it is shown that essential intersection can be represented as the countable intersection of random sets involving an asymptotically mean stationary transformation. This is applied to the approximation of a robust optimization program by a sequence of simpler programs with only a finite number of constraints. We also discuss some formulations of robust optimization programs that have appeared in the literature and we make them more precise, especially from the probabilistic point of view. We show that the essential intersection appears naturally in the correct formulation.

Quadrature Rules and Distribution of Points on Manifolds

We study the error of quadrature rules on a compact manifold. Our estimates are in the same spirit as the Koksma-Hlawka inequality and depend on a sort of discrepancy of the sampling points and a generalized variation of the function. In particular, we give sharp quantitative estimates for quadrature rules applied to functions in Sobolev classes.
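
A toy one-dimensional stand-in for the phenomenon studied here: two equal-weight quadrature rules with the same number of nodes, one at random points and one at well-distributed (low-discrepancy) points; the integrand is chosen purely for illustration:

```python
import numpy as np

# Equal-weight quadrature on [0, 1] with n nodes: random (Monte
# Carlo) nodes versus equispaced midpoint nodes, which have much
# smaller discrepancy. Integrand (illustrative assumption):
f = lambda x: np.sin(2 * np.pi * x) ** 2   # exact integral = 1/2
n = 1000

rng = np.random.default_rng(1)
x_mc = rng.uniform(size=n)                 # Monte Carlo nodes
x_eq = (np.arange(n) + 0.5) / n            # midpoint-rule nodes

err_mc = abs(np.mean(f(x_mc)) - 0.5)       # O(n^{-1/2}) in probability
err_eq = abs(np.mean(f(x_eq)) - 0.5)       # far smaller for smooth f
```

This mirrors the Koksma-Hlawka picture: the quadrature error is controlled by the discrepancy of the nodes times a variation of the integrand, so better-distributed nodes give smaller error for smooth functions.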

Scenario Approximation of Robust and Chance-Constrained Programs

We consider scenario approximation of problems given by the optimization of a function over a constraint set that is too difficult to handle directly but can be efficiently approximated by a finite collection of constraints corresponding to alternative scenarios. The covered programs include min-max games and semi-infinite, robust, and chance-constrained programming problems. We prove convergence of the solutions of the approximating programs to those of the original ones, using mainly epigraphical convergence, a kind of variational convergence that has proven to be a valuable tool in optimization problems.
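
A minimal scenario-approximation sketch on a made-up toy program (not one from the paper): minimize $x^2$ subject to the robust constraint $x \ge \xi$ for every $\xi \in [0,1]$, whose solution is $x^* = 1$. Sampling $n$ scenarios replaces the infinite family of constraints by finitely many:

```python
import numpy as np

rng = np.random.default_rng(2)

def scenario_solution(n):
    """Solve the scenario program with n sampled constraints."""
    xi = rng.uniform(size=n)   # sampled scenarios in [0, 1]
    # With only the constraints x >= xi_i, the minimizer of x^2
    # is the largest sampled scenario.
    return xi.max()

x_few = scenario_solution(10)
x_many = scenario_solution(10_000)
# As n grows, the scenario solutions converge to the robust
# solution x* = 1 (with probability one).
```

The papers' results give conditions, via epigraphical convergence, under which this kind of convergence of scenario solutions to the solutions of the original program holds in far greater generality.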

Separable representations in mathematical psychology and decision making

International conference

Separable representations and group decision making in the AHP

International conference

A Re-Examination of the Algebraic Properties of the AHP as a Ratio-Scaling Technique

The Analytic Hierarchy Process (AHP) ratio-scaling approach is re-examined in view of the recent developments in mathematical psychology based on the so-called separable representations. The study highlights the distortions in the estimates based on the maximum-eigenvalue method used in the AHP, distinguishing the contributions due to random noise from the effects due to the nonlinearity of the subjective weighting function of separable representations. The analysis is based on the second-order expansion of the Perron eigenvector and Perron eigenvalue of perturbed reciprocally symmetric matrices. The asymptotic distributions of the Perron eigenvector and Perron eigenvalue are derived and related to the eigenvalue-based index of cardinal consistency used in the AHP. The results show the limits of using the latter index as a rule for assessing the quality of the estimates of a ratio scale. The AHP method for estimating ratio scales is compared with the classical ratio-magnitude approach used in psychophysics.
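
The eigenvalue-based index of cardinal consistency referred to above can be sketched as follows; the slightly perturbed $3\times 3$ reciprocal matrix is an illustrative assumption:

```python
import numpy as np

# For a reciprocally symmetric matrix A (a_ji = 1/a_ij), the Perron
# eigenvalue lambda_max satisfies lambda_max >= n, with equality iff
# A is cardinally consistent (a_ik = a_ij * a_jk for all i, j, k).
# The AHP consistency index is CI = (lambda_max - n) / (n - 1).
A = np.array([[1.0, 2.0, 5.0],
              [0.5, 1.0, 2.0],
              [0.2, 0.5, 1.0]])   # mildly inconsistent: 2 * 2 != 5

n = A.shape[0]
lam_max = np.linalg.eigvals(A).real.max()   # Perron eigenvalue
ci = (lam_max - n) / (n - 1)                # consistency index
```

The paper's point is precisely that a small value of this index need not certify good ratio-scale estimates, since systematic perturbations from the subjective weighting function can leave it nearly unchanged.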

The Analytic Hierarchy Process and the Theory of Measurement

The analytic hierarchy process (AHP) is a decision-making procedure widely used in management for establishing priorities in multicriteria decision problems. Underlying the AHP is the theory of ratio-scale measures developed in psychophysics since the middle of the last century. It is, however, well known that classical ratio-scaling approaches have several problems. We reconsider the AHP in the light of the modern theory of measurement based on the so-called separable representations recently axiomatized in mathematical psychology. We provide various theoretical and empirical results on the extent to which the AHP can be considered a reliable decision-making procedure in terms of the modern theory of subjective measurement.