Summary

Functional linear discriminant analysis provides a simple yet efficient method for classification, with the possibility of achieving perfect classification. Several methods have been proposed in the literature, mostly addressing the dimensionality of the problem. On the other hand, there is growing interest in the interpretability of the analysis, which favours a simple and sparse solution. In this paper we propose a new approach that incorporates a type of sparsity that identifies nonzero subdomains in the functional setting, yielding a solution that is easier to interpret without compromising performance. Given the need to embed additional constraints in the solution, we reformulate functional linear discriminant analysis as a regularization problem with an appropriate penalty. Inspired by the success of $\ell_1$-type regularization at inducing zero coefficients for scalar variables, we develop a new regularization method for functional linear discriminant analysis that incorporates an $L^1$-type penalty, $\int |f|$, to induce zero regions. We demonstrate that our formulation has a well-defined solution containing zero regions, achieving functional sparsity in the sense of domain selection. In addition, the misclassification probability of the regularized solution is shown to converge to the Bayes error if the data are Gaussian. Our method does not assume that the underlying function has zero regions in the domain, but it produces a sparse estimator that consistently estimates the true function whether or not the latter is sparse. Using both simulated and real data examples, we demonstrate this property of our method in finite samples through comparisons with existing methods.
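The mechanism behind the $L^1$-type penalty $\int |f|$ can be illustrated with a minimal numerical sketch. This is not the paper's estimator: here the penalty is discretized to an $\ell_1$ penalty on grid values of a noisy discriminant direction, and all function shapes, grid sizes and tuning values are illustrative assumptions. The point is only that such a penalty sets the estimate exactly to zero over subdomains, rather than merely shrinking it.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*|x|, applied elementwise: shrinks toward
    zero and maps values with |x| <= t exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Illustrative setup (assumed, not from the paper): a noisy discriminant
# direction observed on an equispaced grid over [0, 1].
grid = np.linspace(0.0, 1.0, 200)
dt = grid[1] - grid[0]
rng = np.random.default_rng(0)
truth = np.exp(-((grid - 0.45) / 0.05) ** 2)      # bump centred at t = 0.45
truth[(grid < 0.3) | (grid > 0.6)] = 0.0          # true zero subdomains
y = truth + 0.05 * rng.standard_normal(grid.size)

# Discretize lam * int |f| as the Riemann sum lam * dt * sum_j |f_j|.
# The minimizer of 0.5*||f - y||^2 + lam*dt*||f||_1 is soft-thresholding,
# so the estimate is exactly zero wherever |y_j| <= lam*dt.
lam = 40.0
f_hat = soft_threshold(y, lam * dt)

print(f"fraction of exact zeros: {np.mean(f_hat == 0.0):.2f}")
print(f"estimate at the bump's centre: {f_hat[90]:.2f}")
```

In this toy version the estimate vanishes identically outside a neighbourhood of the bump while remaining nonzero at its centre, which is the pointwise analogue of the domain selection described in the abstract; the paper's formulation works with the functional penalty directly rather than a grid discretization.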

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)