This article proposes a framework for the analysis of experienced discrimination in home mortgages. It addresses the problem of home mortgage lending discrimination in one of the richest areas of northern Italy. Employees of a local hospital were interviewed to study their perception (or experience) of discriminatory behavior related to home financing. The analysis follows two steps: the first evaluates self-selection (the probability that individuals apply), and the second focuses on the likelihood that applications are accepted by the bank. Findings show that discrimination is likely to appear when the applicant's nationality is considered. Beyond its findings, the study (a) provides an original econometric model based on a two-step procedure to test perceived discrimination and (b) suggests a method and approach that may serve as a point of reference for those wishing to study perceived discrimination.
The aim of this paper is to derive the asymptotic statistical properties of a class of discrepancies on the unit hypercube called $b$-adic diaphonies. These were introduced to evaluate the equidistribution of quasi-Monte Carlo sequences on the unit hypercube. We consider their properties when applied to a sample of independent and uniformly distributed random points. We show that the limiting distribution of the statistic is an infinite weighted sum of chi-squared random variables, whose weights can be explicitly characterized and computed. We also describe the rate of convergence of the finite-sample distribution to the asymptotic one and show that it is much faster than the rate in the classical Berry-Esseen bound. We then consider in detail the approximation of the asymptotic distribution through two truncations of the original infinite weighted sum, and we provide explicit and tight bounds for the truncation error. Numerical results illustrate the findings of the paper, and an empirical example shows the relevance of the results in applications.
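A limiting law of this kind, an infinite weighted sum of chi-squared variables, can be approximated by simulating a truncated version of the sum. The following minimal Python sketch uses purely illustrative weights $w_j = 1/j^2$ (the actual diaphony weights are the ones characterized in the paper); it also illustrates why the truncation error is controllable: the mean of the truncated sum differs from the full mean by exactly the tail sum of the weights.

```python
import math
import random

def weighted_chi2_draw(weights, rng):
    # one draw of sum_j w_j * Z_j^2 with Z_j iid standard normal,
    # i.e. a truncated weighted sum of chi-squared(1) variables
    return sum(w * rng.gauss(0.0, 1.0) ** 2 for w in weights)

rng = random.Random(42)
# illustrative weights only; the true diaphony weights are given in the paper
weights = [1.0 / j ** 2 for j in range(1, 51)]

draws = [weighted_chi2_draw(weights, rng) for _ in range(5000)]
mean_est = sum(draws) / len(draws)

# E[sum_j w_j * chi2_1] = sum_j w_j, so truncating the series at J terms
# shifts the mean by exactly the tail sum of the omitted weights
expected_mean = sum(weights)
```

Quantiles of the truncated law can then be read off the sorted draws; refining the truncation point trades simulation cost against the tail-sum error.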
Estimates of Stevens' power law model are often based on averaging over individuals of experiments conducted at the individual level. In this paper we suppose that each individual generates responses to stimuli on the basis of a model proposed by Luce and Narens, sometimes called the separable representation model, featuring two distinct distortion functions, called the psychophysical function and the subjective weighting function, that may differ across individuals. Exploiting the form of the estimator of the exponent of Stevens' power law, we obtain an expression for this parameter as a function of the original two functions. The results presented in the paper help clarify several well-known paradoxes arising with Stevens' power laws, including the range effect, i.e. the fact that the estimated exponent seems to depend on the range of the stimuli; the location effect, i.e. the fact that it depends on the position of the standard within the range; and the averaging effect, i.e. the fact that power laws seem to fit data aggregated over individuals better. Theoretical results are illustrated using data from papers of R. Duncan Luce.
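The estimator of the Stevens exponent referred to above is, in its standard form, the slope of an ordinary least-squares regression of log-responses on log-stimuli. A minimal sketch (with hypothetical, noise-free data generated from an exact power law, so the slope recovers the exponent exactly):

```python
import math

def stevens_exponent(stimuli, responses):
    # OLS slope of log(response) on log(stimulus):
    # the usual estimator of the Stevens power-law exponent
    xs = [math.log(s) for s in stimuli]
    ys = [math.log(r) for r in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# hypothetical data: responses follow an exact power law r = 3 * s^0.6
stimuli = [1, 2, 5, 10, 20, 50, 100]
responses = [3.0 * s ** 0.6 for s in stimuli]
beta = stevens_exponent(stimuli, responses)  # recovers 0.6
```

When responses instead come from a separable representation, this same slope conflates the psychophysical and subjective weighting functions, which is the mechanism behind the range and location effects discussed in the paper.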
In this paper, we provide an asymptotic formula for the higher derivatives of the Hurwitz zeta function with respect to its first argument that does not involve recurrences. As a by-product, we correct some formulas that have appeared in the literature.
We provide a nonasymptotic bound on the distance between a noncentral chi-square distribution and a normal approximation. It improves on both the classical Berry-Esseen bound and previous distances derived specifically for this situation. First, the bound is nonasymptotic and provides an upper bound on the true distance. Second, the bound has the correct rate of decrease, and even the correct leading constant, when either the number of degrees of freedom or the noncentrality parameter (or both) diverge to infinity. The bound is applied to some probabilities arising in energy detection and Rician fading.
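The baseline normal approximation in question matches the first two moments of the noncentral chi-square law: mean $k + \lambda$ and variance $2(k + 2\lambda)$ for $k$ degrees of freedom and noncentrality $\lambda$. A minimal simulation sketch of this moment-matched approximation (not the paper's refined bound), using only the Python standard library:

```python
import math
import random

def noncentral_chi2_draw(k, lam, rng):
    # sum of k squared N(delta, 1) variables with k * delta^2 = lam
    # gives a noncentral chi-square with k df and noncentrality lam
    delta = math.sqrt(lam / k)
    return sum((rng.gauss(0.0, 1.0) + delta) ** 2 for _ in range(k))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = random.Random(0)
k, lam = 10, 20.0
draws = [noncentral_chi2_draw(k, lam, rng) for _ in range(20000)]

# moment-matched normal approximation: mean k + lam, variance 2(k + 2*lam)
x = k + lam
empirical = sum(d <= x for d in draws) / len(draws)
approx = normal_cdf(x, k + lam, math.sqrt(2.0 * (k + 2.0 * lam)))
```

The residual gap between `empirical` and `approx` (driven by the skewness of the noncentral chi-square) is exactly the kind of distance the nonasymptotic bound controls.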
In this paper, we compare the error in several approximation methods for the cumulative aggregate claim distribution customarily used in the collective model of insurance theory. In this model, it is usually supposed that a portfolio is at risk for a time period of length $t$. The occurrences of the claims are governed by a Poisson process of intensity $\mu$, so that the number of claims $N$ in $[0,t]$ is a Poisson random variable with parameter $\lambda = \mu t$. Each single claim is an independent replication of the random variable $X$, representing the claim severity. The aggregate claim or total claim amount process in $[0,t]$ is represented by the random sum of $N$ independent replications of $X$, whose cumulative distribution function (cdf) is the object of study. Due to its computational complexity, several approximation methods for this cdf have been proposed. In this paper, we consider 15 approximations put forward in the literature that only use information on the lower-order moments of the involved distributions. For each approximation, we consider the difference between the true distribution and the approximating one, and we propose to use expansions of this difference related to Edgeworth series to measure their accuracy as $\lambda = \mu t$ diverges to infinity. Using these expansions, several statements concerning the quality of approximations for the distribution of the aggregate claim process can find theoretical support. Other statements can be disproved on the same grounds. Finally, we investigate numerically the accuracy of the proposed formulas.
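The simplest member of the moment-based family is the normal approximation, which matches $E[S] = \lambda E[X]$ and $\mathrm{Var}[S] = \lambda E[X^2]$. A minimal sketch comparing it against a simulated compound Poisson sum, with hypothetical Exp(1) claim severities chosen purely for illustration:

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's multiplicative method (adequate for moderate lam)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def aggregate_claim_draw(lam, rng):
    # one draw of S = X_1 + ... + X_N with N ~ Poisson(lam)
    # and, for illustration, Exp(1)-distributed claim severities
    n = poisson_draw(lam, rng)
    return sum(-math.log(1.0 - rng.random()) for _ in range(n))

rng = random.Random(7)
lam = 50.0
draws = [aggregate_claim_draw(lam, rng) for _ in range(4000)]

# moment-based normal approximation: E[S] = lam*E[X], Var[S] = lam*E[X^2]
mean_s, var_s = lam * 1.0, lam * 2.0
x = mean_s
empirical = sum(s <= x for s in draws) / len(draws)
approx = 0.5 * (1.0 + math.erf((x - mean_s) / math.sqrt(2.0 * var_s)))
```

The gap between `empirical` and `approx` shrinks as $\lambda$ grows, at a rate that the Edgeworth-type expansions in the paper make precise; higher-moment approximations in the same family aim to reduce the leading term of this gap.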
The objective is to develop a reliable method to build confidence sets for the Aumann mean of a random closed set as estimated through the Minkowski empirical mean. First, a general definition of the confidence set for the mean of a random set is provided. Then, a method using a characterization of the confidence set through the support function is proposed and a bootstrap algorithm is described, whose performance is investigated in Monte Carlo simulations.
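In the one-dimensional special case of random compact intervals, the Minkowski empirical mean is just the interval of averaged endpoints, and the support function reduces to the two endpoints, so the sup distance between support functions is the maximum endpoint deviation. The following sketch illustrates the bootstrap idea in that special case (it is an illustration of the approach, not the paper's exact algorithm; the data-generating interval $[A, A+1]$ with $A$ uniform is hypothetical):

```python
import random

def minkowski_mean(intervals):
    # Minkowski empirical mean of random intervals: average the endpoints
    los = [a for a, b in intervals]
    his = [b for a, b in intervals]
    n = len(intervals)
    return (sum(los) / n, sum(his) / n)

def bootstrap_radius(intervals, n_boot, alpha, rng):
    # bootstrap (1 - alpha)-quantile of the sup distance between the
    # support functions of the empirical mean and a resampled mean
    base = minkowski_mean(intervals)
    dists = []
    for _ in range(n_boot):
        resample = [intervals[rng.randrange(len(intervals))]
                    for _ in intervals]
        m = minkowski_mean(resample)
        dists.append(max(abs(m[0] - base[0]), abs(m[1] - base[1])))
    dists.sort()
    return dists[int((1.0 - alpha) * n_boot)]

rng = random.Random(3)
intervals = [(a, a + 1.0) for a in (rng.random() for _ in range(200))]
base = minkowski_mean(intervals)          # estimates the Aumann mean
radius = bootstrap_radius(intervals, 500, 0.05, rng)
```

The confidence set is then the collection of compact sets whose support functions lie within `radius` of the empirical mean's support function.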
We study various methods of aggregating individual judgments and individual priorities in group decision making with the AHP. The focus is on the empirical properties of the various methods, mainly on the extent to which the various aggregation methods represent an accurate approximation of the priority vector of interest. We identify five main classes of aggregation procedures which provide identical or very similar empirical expressions for the vectors of interest. We also propose a method to decompose perturbations in the AHP response matrix into distortions due to random errors and perturbations caused by the cognitive biases predicted by the mathematical psychology literature. We test the decomposition with experimental data and find that perturbations in group decision making caused by cognitive distortions are more important than those caused by random errors. Finally, we propose methods to correct these systematic distortions.
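One standard representative of the aggregation procedures discussed above is aggregation of individual judgments (AIJ) by element-wise geometric mean, followed by the row-geometric-mean approximation to the principal-eigenvector priorities. A minimal sketch, using two hypothetical, fully consistent decision makers so that the true weight vector is recovered exactly:

```python
import math

def aggregate_judgments(matrices):
    # AIJ: element-wise geometric mean of individual pairwise comparison
    # matrices (this preserves reciprocity, a_ji = 1 / a_ij)
    k, n = len(matrices), len(matrices[0])
    return [[math.prod(m[i][j] for m in matrices) ** (1.0 / k)
             for j in range(n)] for i in range(n)]

def priority_vector(matrix):
    # normalized row geometric mean: a standard approximation to the
    # principal-eigenvector priorities of a pairwise comparison matrix
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# two hypothetical, perfectly consistent decision makers sharing weights w:
# the pairwise comparison entries are a_ij = w_i / w_j
w = [0.5, 0.3, 0.2]
m = [[wi / wj for wj in w] for wi in w]
group = aggregate_judgments([m, m])
pr = priority_vector(group)  # recovers w
```

With inconsistent or heterogeneous matrices the procedures in the five classes differ, and those differences (random error versus systematic cognitive distortion) are what the decomposition in the paper separates.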
We examine the concept of essential intersection of a random set in the framework of robust optimization programs and ergodic theory. Using a recent extension of Birkhoff's Ergodic Theorem developed by the present authors, it is shown that essential intersection can be represented as the countable intersection of random sets involving an asymptotically mean stationary transformation. This is applied to the approximation of a robust optimization program by a sequence of simpler programs with only a finite number of constraints. We also discuss some formulations of robust optimization programs that have appeared in the literature and we make them more precise, especially from the probabilistic point of view. We show that the essential intersection appears naturally in the correct formulation.
We study the error in quadrature rules on a compact manifold. Our estimates are in the same spirit as the Koksma-Hlawka inequality and depend on a sort of discrepancy of the sampling points and a generalized variation of the function. In particular, we give sharp quantitative estimates for quadrature rules of functions in Sobolev classes.
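The Koksma-Hlawka mechanism referred to above, quadrature error bounded by a discrepancy of the points times a variation of the integrand, can be illustrated in the simplest setting, the interval $[0,1]$: low-discrepancy points make the equal-weight quadrature error for a smooth function far smaller than typical random sampling. A minimal sketch with the base-2 van der Corput sequence (the manifold setting of the paper is not reproduced here):

```python
import math

def van_der_corput(n, base=2):
    # radical-inverse sequence: low-discrepancy points in [0, 1)
    seq = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, r = divmod(k, base)
            x += r / denom
        seq.append(x)
    return seq

f = lambda x: math.sin(math.pi * x)   # smooth test integrand
exact = 2.0 / math.pi                 # its exact integral over [0, 1]

n = 1024
pts = van_der_corput(n)
qmc_err = abs(sum(map(f, pts)) / n - exact)
```

For this smooth integrand the equal-weight rule at the van der Corput points achieves an error far below the $O(n^{-1/2})$ scale of plain Monte Carlo, consistent with a discrepancy-times-variation bound.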