# Distances Based on the Perimeter of the Risk Set of a Testing Problem

### Abstract

At the core of this paper are two objects: a simple geometric one, the risk set of a statistical testing problem, and the f-divergences introduced by Csiszár (1963). An f-divergence is a measure of the hardness of a testing problem that depends on a convex real-valued function f on the interval [0, ∞). The choice of this parameter f can be adjusted to the needs of specific applications.

One such adjustment of the parameter f is exemplified in Section 3 of this paper. There it is illustrated that the appropriate choice of f for the construction of least favourable distributions in robust statistics is the convex function f(u) = √(1 + u^2) − (1 + u)/√2, which yields the perimeter of the risk set of a testing problem.
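For discrete distributions P = (p_1, ..., p_n) and Q = (q_1, ..., q_n), the f-divergence I_f(P, Q) = Σ_i q_i f(p_i/q_i) built from this choice of f simplifies to Σ_i √(p_i^2 + q_i^2) − √2. The following sketch (our illustration, not code from the paper; the function name is hypothetical) computes this quantity for probability vectors of equal length:

```python
import math

def perimeter_divergence(p, q):
    """Perimeter-type f-divergence for discrete distributions.

    With f(u) = sqrt(1 + u^2) - (1 + u)/sqrt(2), each term
    q_i * f(p_i / q_i) equals sqrt(p_i^2 + q_i^2) - (p_i + q_i)/sqrt(2),
    so the sum collapses to sum_i sqrt(p_i^2 + q_i^2) - sqrt(2);
    this form also covers coordinates with q_i = 0.
    """
    return sum(math.hypot(pi, qi) for pi, qi in zip(p, q)) - math.sqrt(2)

# Identical distributions give 0 (up to rounding); distributions with
# disjoint supports attain the maximum value 2 - sqrt(2).
perimeter_divergence([0.5, 0.5], [0.5, 0.5])  # ≈ 0
perimeter_divergence([1.0, 0.0], [0.0, 1.0])  # ≈ 2 - √2
```

The simplified form makes the symmetry of this divergence in P and Q immediately visible.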

After presenting the definition and basic properties of a risk set and giving the integral-geometric representation of f-divergences, the paper focuses on the perimeter of the risk set.

All members of the class of f-divergences of perimeter-type introduced and investigated in Österreicher and Vajda (2003) and Vajda (2009) turn out to be metric divergences corresponding to a class of entropies introduced by Arimoto (1971).

Without essential loss of insight, we restrict ourselves to discrete probability distributions and note that the extension to the general case relies strongly on the Lebesgue-Radon-Nikodym Theorem.
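As a numerical illustration of the metric property mentioned above, the following sketch checks symmetry and the triangle inequality for d(P, Q) = I_f(P, Q)^{1/2} on a few discrete distributions. This is an informal sanity check, not a proof, and the square-root metrization is an assumption of this sketch drawn from the metric-divergence results cited above:

```python
import math

def perimeter_divergence(p, q):
    # Perimeter-type f-divergence: sum_i sqrt(p_i^2 + q_i^2) - sqrt(2).
    return sum(math.hypot(pi, qi) for pi, qi in zip(p, q)) - math.sqrt(2)

def d(p, q):
    # Candidate metric: square root of the divergence (assumed metrization).
    return math.sqrt(max(perimeter_divergence(p, q), 0.0))

dists = [[1.0, 0.0, 0.0], [0.5, 0.5, 0.0], [0.2, 0.3, 0.5], [0.0, 0.0, 1.0]]
for P in dists:
    for Q in dists:
        assert abs(d(P, Q) - d(Q, P)) < 1e-12            # symmetry
        for R in dists:
            assert d(P, R) <= d(P, Q) + d(Q, R) + 1e-12  # triangle inequality
```

Note that the divergence itself is not a metric: for P = (1, 0), R = (0, 1) and Q = (0.5, 0.5) the triangle inequality fails without the square root, which is why a power of the divergence is taken.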

### References

Ali, S. M., and Silvey, S. D. (1966). A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society, Series B, 28, 131-142.

Arimoto, S. (1971). Information-theoretical considerations on estimation problems. Information and Control, 19, 181-194.

Csiszár, I. (1963). Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten [An information-theoretic inequality and its application to the proof of the ergodicity of Markov chains]. Publications of the Mathematical Institute of the Hungarian Academy of Sciences, 8, 85-107.

Csiszár, I. (1974). Information measures: A critical survey. In J. Kozesnik (Ed.), Transactions of the Seventh Prague Conference on Information Theory, Statistical Decision Functions, Random Processes and of the 1974 European Meeting of Statisticians (Vol. A, p. 73-86). Prague: Academia.

Dalton, H. (1920). The measurement of the inequality of incomes. The Economic Journal, 30, 348-361.

Feldman, D., and Österreicher, F. (1981). Divergenzen von Wahrscheinlichkeitsverteilungen – integralgeometrisch betrachtet [Divergences of probability distributions – an integral-geometric view]. Acta Mathematica Hungarica, 37, 329-337.

Feldman, D., and Österreicher, F. (1989). A note on f-divergences. Studia Scientiarum Mathematicarum Hungarica, 24, 191-200.

Huber, P. J., and Strassen, V. (1973). Minimax tests and Neyman-Pearson lemma for capacities. The Annals of Statistics, 1, 251-263.

Kafka, P., Österreicher, F., and Vincze, I. (1991). On powers of f-divergences defining a distance. Studia Scientiarum Mathematicarum Hungarica, 26, 415-422.

Linhart, J., and Österreicher, F. (1985). Uniformity and distance – a vivid example from statistics. International Journal of Mathematical Education in Science and Technology, 16, 645-649.

Lorenz, M. O. (1905). Methods of measuring concentration of wealth. Journal of the American Statistical Association, 9, 209-219.

Österreicher, F. (1983). Least favourable distributions. In S. Kotz and N. L. Johnson (Eds.), Encyclopedia of Statistical Sciences, Volume 3 (p. 588-592). New York: John Wiley & Sons.

Österreicher, F. (1996). On a class of perimeter-type distances of probability distributions. Kybernetika, 32, 389-393.

Österreicher, F., and Vajda, I. (2003). A new class of metric divergences on probability spaces and its applicability in statistics. Annals of the Institute of Statistical Mathematics, 55, 639-653.

Puri, M. L., and Vincze, I. (1988). Information and mathematical statistics. In P. Mandl and M. Huskova (Eds.), Proceedings of the 4th Conference on Asymptotic Statistics. Prague: Charles University.

Reschenhofer, E., and Bomze, I. M. (1991). Length tests for goodness of fit. Biometrika, 78, 207-216.

Sanghvi, L. D. (1953). Comparison of genetic and morphological methods for a study of biological differences. American Journal of Physical Anthropology, 11, 385-404.

Vajda, I. (1972). On f-divergence and singularity of probability measures. Periodica Mathematica Hungarica, 2, 223-234.

Vajda, I. (1989). Theory of Statistical Inference and Information. Dordrecht-Boston-London: Kluwer Academic Publishers.

Vajda, I. (2009). On metric divergences of probability distributions. Kybernetika, 45, 885-900.

*Austrian Journal of Statistics*, *42*(1), 3-19. https://doi.org/10.17713/ajs.v42i1.162

The Austrian Journal of Statistics publishes open access articles under the terms of the Creative Commons Attribution (CC BY) License.
