Nancy Reid

Brief Bio

Nancy Reid is University Professor and Canada Research Chair in Statistical Methodology at the University of Toronto. Her research interests are in statistical theory, likelihood inference, and design of studies. Along with her colleagues she has developed higher-order asymptotic methods, both for use in applications and as a means to study theoretical aspects of the foundations of inference, including the interface between Bayesian and frequentist methods.

Professor Reid received her PhD from Stanford University, under the supervision of Rupert Miller. She taught at the University of British Columbia before moving to the University of Toronto, and has held visiting positions at the Harvard School of Public Health, University of Texas at Austin, École Polytechnique Fédérale de Lausanne, and University College London. She was the first female and first resident Canadian to win the Presidents’ Award of the Committee of Presidents of Statistical Societies, awarded to a statistician under the age of 41 in recognition of outstanding contributions to the profession of statistics. In 2009 she received the Gold Medal of the Statistical Society of Canada and the Emanuel and Carol Parzen Prize for statistical innovation. She is a Fellow of the Royal Society of Canada, the American Association for the Advancement of Science, the Institute of Mathematical Statistics and the American Statistical Association.

Current Research Field

Modern technology has simplified the collection of large and complex sets of data, which are being used to answer important research questions in many fields of science and social science. Statistical models and methods are an essential part of this research, and understanding these methods requires progress on the theory of statistical modelling and inference for complex data. My research program develops the theory of statistical inference in these complex settings, both to deepen our understanding of the intellectual development of the field of statistics and to provide a framework for developing new methods of analysis.

The use of the *likelihood function* for inference is central to modern approaches to statistics. A statistical model is typically a set of probability distributions indexed by a set of unknown parameters; the likelihood function is proportional to the corresponding density function, viewed as a function of the parameters with the random outcome, or response, held fixed at its observed value. Although this is a simple change of focus from probability modelling, it emphasizes the 'inverse problem' of statistics, by focussing on the route from observed data to information about an underlying process that may have generated the data, or at least serves as an adequate approximation to the generating mechanism.
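The change of focus described above can be written out in standard notation (a textbook sketch, not taken from this text): the same function f is read in two directions, as a probability density in y and as a likelihood in the parameter.

```latex
% For independent observations y_1, \ldots, y_n from a model with
% density f(y; \theta), the likelihood and log-likelihood are
L(\theta; y) \propto \prod_{i=1}^{n} f(y_i; \theta),
\qquad
\ell(\theta) = \log L(\theta; y).
% Example: for a Bernoulli(\theta) sample with s successes in n trials,
% L(\theta; y) \propto \theta^{s}(1-\theta)^{n-s},
% which is maximized at \hat\theta = s/n.
```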

My research program includes the continuing study of asymptotic techniques for likelihood-based inference, exploration of theories of inference and their overlap, and the study of a collection of likelihood-like modes of inference, with special emphasis on so-called *composite likelihood*.

Composite likelihood is a generic term for an inference function derived by simplifying the original probability model, either for reasons of computational complexity or because the model is specified incompletely, through a number of smaller components. Much of the literature in this area, in contrast, focusses on particular applications, or on properties of composite likelihood inference in applied settings. My focus has been on deepening understanding of the theoretical properties of this process, with a view to developing a more general theory for composite likelihood construction and inference. Together with colleagues and students, I have made progress in understanding, from a more general point of view, how the construction of a composite likelihood affects the inferences that can be made.
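A minimal sketch of the construction (my illustration, not from this text): for a d-dimensional equicorrelated Gaussian model, a *pairwise* composite log-likelihood sums bivariate log-densities over all pairs of coordinates, replacing the full d-dimensional density with simpler two-dimensional components. The model, seed, and grid search below are all illustrative choices.

```python
# Pairwise composite log-likelihood for an equicorrelated Gaussian model.
# Each coordinate is standard normal; every pair has correlation rho.
import numpy as np

def bivariate_logpdf(x, y, rho):
    """Log-density of a standard bivariate normal with correlation rho."""
    q = (x**2 - 2.0 * rho * x * y + y**2) / (1.0 - rho**2)
    return -np.log(2.0 * np.pi) - 0.5 * np.log(1.0 - rho**2) - 0.5 * q

def pairwise_loglik(rho, data):
    """Composite log-likelihood: sum bivariate terms over all pairs."""
    _, d = data.shape
    return sum(
        bivariate_logpdf(data[:, i], data[:, j], rho).sum()
        for i in range(d) for j in range(i + 1, d)
    )

# Simulate data and maximize the composite log-likelihood over a grid.
rng = np.random.default_rng(0)
d, n, true_rho = 5, 500, 0.4
cov = np.full((d, d), true_rho) + (1.0 - true_rho) * np.eye(d)
data = rng.multivariate_normal(np.zeros(d), cov, size=n)

grid = np.linspace(-0.15, 0.9, 106)
rho_hat = grid[np.argmax([pairwise_loglik(r, data) for r in grid])]
```

The maximizer of the pairwise log-likelihood is consistent for the true correlation here, though in general its variance differs from that of the full maximum likelihood estimator, which is one reason the theoretical properties of the construction need study.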

The asymptotic theory of likelihood inference, with special emphasis on so-called higher-order approximations, enables more precise calibration of inference by providing approximations to inferential quantities that are more accurate than those given by the limiting distribution (typically Gaussian). One focus of my recent work in this area has been simplifying both the presentation and the calculation of the relevant quantities, in order to enable their application in practice. An interesting theoretical aspect of this work is the comparison of Bayesian and frequentist methods of inference, with a view both to finding common ground and to understanding the limitations of each method.
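One standard construction in this area (a textbook sketch of the modified likelihood root, not a description of any particular result mentioned here) shows how a first-order quantity is adjusted to achieve higher-order accuracy:

```latex
% Signed likelihood root for a scalar parameter \theta:
r(\theta) = \operatorname{sign}(\hat\theta - \theta)
            \sqrt{2\{\ell(\hat\theta) - \ell(\theta)\}}.
% First-order inference treats r as standard normal, using \Phi\{r(\theta)\}.
% The higher-order approximation replaces r by the modified root
r^{*}(\theta) = r(\theta)
              + \frac{1}{r(\theta)}
                \log\!\left\{\frac{q(\theta)}{r(\theta)}\right\},
% where q(\theta) is a suitable adjustment quantity (for example, a
% Wald-type statistic based on the observed information);
% \Phi\{r^{*}(\theta)\} is then accurate to third order.
```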

The profession of statistics is struggling with its role in data science, with the enormous public interest in so-called 'Big Data', and with its history of debate within the community on various modes of inference. It is becoming clear that large amounts of data are not synonymous with large amounts of information, and basic ideas of statistical theory and practice, including an emphasis on careful design of investigations and careful attention to properties of inferential methods, are as important as ever in helping science and society to advance learning in the presence of uncertainty. It is also clear that these problems need to be tackled with teams of researchers combining expertise in mathematics, statistics and computer science.

Personal Homepage: http://www.utstat.utoronto.ca/reid/