The aim of this paper is to provide a statistical foundation for a nonparametric method of analysis suitable for economic and psychophysical experiments. Consider a quantity that depends on some primitive quantities through an unknown function $f$ (such as the certainty equivalent as a function of the probabilities and the payments in a lottery). The function $f$ can be studied in an experiment in which $J$ subjects are gathered, the same $I$ questions are posed to each individual, and each individual provides a set of $I$ responses. This raises the question of whether a large number of individuals or a large number of questions is preferable in the design of an economic experiment. The function $f$ is estimated nonparametrically by $f_P$, a linear combination of a set of $P$ basis functions (power series, regression splines, trigonometric functions, etc.); we suppose that, as $P$ diverges, the set of basis functions approximates $f$ arbitrarily well. The weights in $f_P$ can be estimated by linear regression under the assumption that answers are independent across individuals. The estimators are consistent as $J$ and $I$ diverge to infinity, even when the answers of a given individual are correlated and this correlation structure differs across individuals. The rate of convergence of the estimated $f_P$ to $f$ depends on $J$, $I$, the covariance structure of the errors across individuals, and how well $f$ can be approximated by the basis functions (i.e., the smoothness of $f$); a convergence rate is also obtained for derivatives. We consider in detail the case in which $f$ is estimated nonparametrically through power series or regression splines. We derive the optimal divergence rate of $P$ with both $I$ and $J$ and determine the optimal balance between $I$ and $J$; it turns out that, in general, a large value of $J$ is better than a large value of $I$. Conditions for the asymptotic normality of linear and nonlinear functionals of the estimated function of interest are derived.
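The estimation scheme described above can be illustrated with a minimal numerical sketch: responses from $J$ subjects to the same $I$ questions are stacked, a power-series basis of dimension $P$ is built, and the weights of $f_P$ are obtained by least squares. The data-generating function, the equicorrelated error structure, and all numerical settings below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

J, I, P = 200, 10, 4            # J subjects, I questions each, P basis functions
f = lambda x: np.sin(2.0 * x)   # unknown function to recover (illustrative choice)

# The same I question points are posed to every subject, as in the experiment.
x_questions = np.linspace(0.0, 1.0, I)
X = np.tile(x_questions, J)     # stacked regressors, length J*I

# Errors correlated within a subject (common shock + idiosyncratic part),
# independent across subjects -- consistency only needs the latter.
eps = np.concatenate([
    0.1 * rng.normal() + 0.1 * rng.normal(size=I)
    for _ in range(J)
])
y = f(X) + eps

# Power-series basis: columns 1, x, x^2, ..., x^{P-1}.
B = np.vander(X, N=P, increasing=True)

# Estimate the weights of f_P by ordinary least squares.
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
f_hat = lambda x: np.vander(np.atleast_1d(x), N=P, increasing=True) @ beta
```

With $J$ large relative to $I$, the common within-subject shocks average out across subjects, which is one intuition for why a large $J$ tends to dominate a large $I$.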
These results are used to derive the asymptotic distribution of Wald tests when the number of constraints under test is finite (a chi-squared distribution) and when it diverges to infinity (a normal distribution), as well as the distribution of LR tests of linear constraints. We provide bounds on the approximation error. Lastly, we investigate what happens when the average variance matrix appearing in these tests is replaced by an estimator, and we consider nonparametric estimation of the conditional variance.
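A Wald test with a finite number of constraints can be sketched as follows: a linear restriction $R\beta = r$ on the series coefficients is tested with the statistic $W = (R\hat\beta - r)'(R\hat V R')^{-1}(R\hat\beta - r)$, using a variance estimator clustered by subject to allow within-subject correlation. This is a generic cluster-robust construction under assumed settings, not the paper's specific estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
J, I, P = 150, 8, 3
x = np.tile(np.linspace(0.0, 1.0, I), J)
B = np.vander(x, N=P, increasing=True)

# True function is linear, so the quadratic coefficient is zero (null holds).
y = (1.0 + 0.5 * x
     + np.repeat(0.1 * rng.normal(size=J), I)   # common shock per subject
     + 0.1 * rng.normal(size=J * I))            # idiosyncratic error

beta, *_ = np.linalg.lstsq(B, y, rcond=None)
resid = y - B @ beta

# Cluster-robust "meat": sum over subjects of (B_j' e_j)(B_j' e_j)'.
BtB_inv = np.linalg.inv(B.T @ B)
meat = np.zeros((P, P))
for j in range(J):
    s = slice(j * I, (j + 1) * I)
    g = B[s].T @ resid[s]
    meat += np.outer(g, g)
V = BtB_inv @ meat @ BtB_inv        # clustered covariance of beta_hat

# Test the single constraint beta_2 = 0 (no quadratic term); df = 1 constraint.
R = np.array([[0.0, 0.0, 1.0]])
Rb = R @ beta
W = float(Rb @ np.linalg.inv(R @ V @ R.T) @ Rb)
p = float(stats.chi2.sf(W, df=R.shape[0]))
```

With a fixed number of constraints $W$ is compared to a chi-squared critical value; when the number of constraints diverges, the abstract's normal approximation applies instead after centering and scaling.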