Familial inference: tests for hypotheses on a family of centres

Statistical hypotheses are translations of scientific hypotheses into statements about one or more distributions, often concerning their centre. Tests that assess statistical hypotheses of centre implicitly assume a specific centre, e.g., the mean or median. Yet, scientific hypotheses do not always specify a particular centre. This ambiguity leaves the possibility for a gap between scientific theory and statistical practice that can lead to rejection of a true null. In the face of replicability crises in many scientific disciplines, significant results of this kind are concerning. Rather than testing a single centre, this paper proposes testing a family of plausible centres, such as that induced by the Huber loss function (the Huber family). Each centre in the family generates a testing problem, and the resulting family of hypotheses constitutes a familial hypothesis. A Bayesian nonparametric procedure is devised to test familial hypotheses, enabled by a novel pathwise optimization routine to fit the Huber family. The favourable properties of the new test are demonstrated theoretically and experimentally. Two examples from psychology serve as real-world case studies.


Introduction
Hypothesis testing is one of statistics' most important contributions to the scientific method. Testing helps advance diverse lines of inquiry, from evaluating the efficacy of experimental drugs to assessing the validity of psychological theories. Researchers working on these problems often characterize their questions as competing statements about a centre µ of one or more distributions. In the simplest one-sample setting, these statements take the form
$$H_0 : \mu \in M_0 \quad \text{vs.} \quad H_1 : \mu \in M_1,$$
where M_0 and M_1 are a partition of the support M of µ. There are myriad classical tests for one- and two-sample hypotheses of centre. When µ is the mean, the most well-known of these is the t test (Student 1908) and its extension to independent samples from populations with differing variances (Welch 1947). When µ is the median, the sign test (Fisher 1925) is available, as is the median test for independent samples (Mood 1950). The signed-rank test, and the rank-sum test for independent samples, are also tests of medians under certain assumptions (Wilcoxon 1945; Mann and Whitney 1947).
The possibility to test different centres such as the mean and median raises the question of what qualifies as a centre. We posit that a centre of a random variable X should satisfy at least two criteria: (1) a reflection of X about the centre should preserve the centre, and (2) a shift in X by a constant should move the centre by that same constant. This definition is purposefully broad to accommodate the many notions of centre used throughout statistics. The mean and median trivially satisfy these criteria, as do other popular notions such as the mode, trimmed mean, and Winsorized mean. Quantiles other than the median, and by extension order statistics such as the minimum and maximum, are not centres under these criteria because they are not preserved by reflection in general. Still, the fact that there are many possibilities for centre can complicate hypothesis testing in science.
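Stated compactly, writing µ(X) for a candidate centre of the random variable X (notation we introduce here for illustration), the two criteria read:

```latex
% Criterion 1: reflecting X about its centre c preserves the centre.
\mu(X) = c \implies \mu(2c - X) = c.
% Criterion 2: a constant shift moves the centre by the same constant.
\mu(X + a) = \mu(X) + a \quad \text{for all } a \in \mathbb{R}.
```

The mode satisfies both displays, whereas the qth quantile for q ≠ 1/2 fails the first: reflection maps it to the (1 − q)th quantile.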
In certain applied areas, e.g., psychology and medicine, scientific hypotheses are often silent about a specific centre and instead tend to be statistically vague, e.g., treatment A is more efficacious than treatment B. This ambiguity makes translation to statistical hypotheses inherently subjective and can leave researchers questioning which centre to use. See Blakely and Kawachi (2001), Ben-Aharon et al. (2019), and Rousselet and Wilcox (2020) for discussions of this issue in epidemiology, medicine, and psychology. Moreover, ambiguity about the correct or best centre leaves the possibility for a gap between scientific theory and statistical practice that can lead to rejection of a true null, threatening the validity of findings. Sometimes H_0 can be rejected just by switching from one centre to another, say from the mean to the median. In the face of replicability crises in various disciplines (see, e.g., Ioannidis 2005; Open Science Collaboration 2015; Christensen and Miguel 2018), the possibility for significant results of this sort is concerning. The fact that statistical experts often have no input on the statistical aspects of scientific research only aggravates the issue (see, e.g., Strasak et al. 2007; Hardwicke and Goodman 2020). Transparent statistical tools are needed to instil confidence in scientific claims.
Motivated by the preceding discussion, this paper proposes a new approach to hypothesis testing: familial inference. Unlike existing inferential methods, which test hypotheses about a single centre, methods for familial inference test hypotheses about a family of plausible centres, with the ultimate goal of strengthening any claims of significance. More specifically, consider a family of centres {µ(λ) : λ ∈ Λ}, where λ indexes each member (centre). The familial testing problem is to decide which hypothesis concerning this family is correct:
$$H_0 : \mu(\lambda) \in M_0 \text{ for some } \lambda \in \Lambda \quad \text{vs.} \quad H_1 : \mu(\lambda) \in M_1 \text{ for all } \lambda \in \Lambda.$$
The familial null hypothesis states that at least one member (centre) of the family is contained in the null set M_0. The alternative hypothesis is that no member is in M_0.¹ This paper studies the family of centres induced by the Huber loss function (Huber 1964). The Huber function is a mixture of square and absolute loss, where λ controls the mixture. By sweeping λ between zero and infinity, one obtains a family of centres that includes the mean and median as limit points. All members of this Huber family satisfy our criteria for centre. While this more conservative testing approach could potentially overlook a useful treatment, it reduces the risk of promoting an ineffective (or harmful) treatment, wasting limited resources, and misdirecting future research. Given the well-reported scientific reproducibility crises, the benefits may far outweigh the cost.
Familial inference is more sophisticated than inference for a single centre and requires new tools developed in this paper. Our first methodological development is a Bayesian nonparametric procedure for one- and two-sample testing with continuous and discrete random variables. The procedure is based on the limit of a Dirichlet process prior (Ferguson 1973), sometimes referred to as the Bayesian bootstrap (Rubin 1981). Bayesian tests have several advantages over frequentist tests, including that they measure the probability of H_0. Unlike p-values, Bayesian probabilities directly quantify uncertainty, arguably providing a clearer perspective on evidence for practitioners. We refer the reader to Kruschke (2013) and Benavoli et al. (2017) for discussions on the merits of Bayesian testing. Besides the advantages of a Bayesian approach, the nonparametric nature of our test ameliorates concern about model misspecification. Though numerous existing works address Bayesian nonparametric testing (Ma and Wong 2011; Benavoli et al. 2014; Huang and Ghosh 2014; Benavoli et al. 2015; Holmes et al. 2015; Filippi and Holmes 2017; Gutiérrez et al. 2019; Pereira, Taylor-Rodríguez, and Gutiérrez 2020), these treat hypotheses about single statistical parameters or entire distributions, distinct from the familial hypotheses treated in this paper.
Our second methodological development is an algorithm for fitting the Huber family, necessary to implement the new test. The algorithm is a pathwise optimization routine that exploits piecewise linearity of the Huber solution path to fit the family, containing infinitely many centres, in a single pass over the data. It has low computational complexity and terminates in at most n − 1 steps, where n is the sample size. We elucidate the connection between our algorithm and least angle regression (Efron et al. 2004; Rosset and Zhu 2007), popularly used for fitting the lasso regularization path (Tibshirani 1996). The algorithms devised in this paper are made available in the open-source R package familial, designed with a standard interface similar to that of existing tests in the stats package. Methods for visualizing the posterior family via functional boxplots (Sun and Genton 2011) are provided. familial is publicly available on the R repository CRAN.
To illustrate our proposal, we consider data from a study of mammalian sleep patterns in Savage and West (2007). The data contain sleep times for n = 83 species of mammals. A histogram of the ratio of sleeping hours to waking hours is plotted in Figure 1. The data are heavily right-skewed, suggesting the mean and median are probably far separated. Suppose we ask whether mammals tend to spend as much time sleeping as they do awake, i.e., whether µ = 1. A t test that the mean is one yields a p-value of 0.698. A bootstrap test that the mean is one, conducted as a robustness check, produces only a marginally smaller p-value of 0.668. A sign test that the median is one gives a p-value of 0.028. At a conventional 0.05 significance level, these tests do not yield the same answer to our scientific question. This inconsistency raises the question of how exactly to proceed in the absence of a guiding scientific theory. Using our procedure, we estimate the posterior Huber family via 1,000 Bayesian bootstraps, summarized in Figure 2 by a functional boxplot. As the Huber parameter λ → ∞, the 50% central region of the posterior encloses the null value (recall the mean is attained in the limit). By querying the posterior, we find a probability of 0.633 that at least one centre in the family equals one. Under zero-one loss configured analogously to using a 0.05 frequentist significance level (detailed later), the familial test finds insufficient evidence to reject the null in favour of the alternative. Because no specific choice was made about the centre, the problem of choosing between conflicting tests does not arise. Most importantly, we do not arrive at a result that would hold only under a certain centre.

Before delving into the details of our approach, we pause to ask whether a familial test of centres might be less useful than a more general test of distributions. Earlier Bayesian nonparametric tests of distributions, such as those of Holmes et al. (2015) and Pereira, Taylor-Rodríguez, and Gutiérrez (2020), consider a larger class of alternatives than ours. The usefulness of these tests depends, however, upon the scientific question. A change in scale, e.g., can cause them to reject even when the centres are equal. If such changes are not scientifically pertinent, these tests are unsuitable. In contrast, our familial test directly identifies discrepancies in centres, something which alternative tests overlook or conflate with other distributional attributes.
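The classical analyses in this example take only a few lines of R. The sketch below is ours and assumes the data are the msleep dataset shipped with ggplot2, which derives from the same Savage and West (2007) study; the sign test is run as an exact binomial test on the signs.

```r
# Classical tests of whether mammals sleep as long as they are awake.
# Assumes ggplot2::msleep (83 species, from Savage and West 2007).
library(ggplot2)

ratio <- msleep$sleep_total / msleep$awake   # hours asleep per hour awake

t.test(ratio, mu = 1)                        # t test that the mean is one
binom.test(sum(ratio > 1), sum(ratio != 1))  # sign test that the median is one
```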

Inference problem
Let X_1, . . ., X_n be an iid sample from a distribution P_0. Our goal is to carry out inference on the set {µ_0(λ) : λ ∈ Λ}, where
$$\mu_0(\lambda) := \operatorname*{arg\,min}_{\mu \in \mathbb{R}} \mathbb{E}_{X \sim P_0}\left[\ell_\lambda\!\left(\frac{X - \mu}{\sigma}\right)\right].$$
Here, ℓ_λ : ℝ → ℝ_+ is a loss function controlled by the parameter λ. The constant σ > 0 is necessary in certain loss functions to make λ invariant to the spread of X. Invariance of λ to spread is relevant for testing independent samples, addressed later. The population centre µ_0(λ) minimizes the expectation of the loss configured by λ under P_0. To maintain generality throughout this section, we do not specify a particular loss function. However, to give a concrete example that will be the focus of subsequent sections, one may consider the Huber function
$$\ell_\lambda(z) = \begin{cases} z^2/2, & |z| \le \lambda, \\ \lambda\left(|z| - \lambda/2\right), & |z| > \lambda. \end{cases}$$
The support of λ is Λ = (0, ∞). The mean of P_0 is the limiting solution as λ → ∞. The median is the limiting solution in the other direction. The continuum of centres therebetween comprises the Huber family. The approach we propose can accommodate the restriction to any subset of the family given by Λ = [a, b] for 0 < a < b < ∞.
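For concreteness, a single member of the family can be fitted with a generic one-dimensional optimizer. The following R sketch is our illustration (with σ fixed at one), not the implementation developed later in the paper; it also confirms the two limit points numerically.

```r
# Huber loss and the induced centre for a fixed lambda (sigma fixed at 1).
huber <- function(z, lambda) {
  ifelse(abs(z) <= lambda, z^2 / 2, lambda * (abs(z) - lambda / 2))
}

huber_center <- function(x, w, lambda) {
  w <- w / sum(w)
  optimize(function(m) sum(w * huber(x - m, lambda)), range(x))$minimum
}

x <- rexp(100)            # a right-skewed sample
w <- rep(1, 100)          # equal weights
huber_center(x, w, 1e6)   # large lambda: approximately mean(x)
huber_center(x, w, 1e-6)  # small lambda: approximately median(x)
```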
If the true generative model P_0 were known, we would immediately have access to the family {µ_0(λ) : λ ∈ Λ}. Of course, this is not the case in practice: P_0 is unknown. The traditional parametric Bayesian approach to this problem proceeds by means of a prior on parameters for a class of models for P. A valid criticism of this approach is the implicit assumption that P_0 is contained in the model class. Misspecified models can lead to false conclusions, which is troubling in the context of hypothesis testing. To this end, the Bayesian nonparametric approach is an appealing alternative. Rather than placing a prior on the parameters governing a distribution for P, one places a prior directly on the distribution itself. The Dirichlet process, a probability distribution on the space of probability distributions, is a natural candidate for this task. Since Dirichlet processes have support on a large class of distributions, they are a popular prior in Bayesian nonparametrics. The reader is referred to MacEachern (2016) for a recent and accessible overview of their properties.

Bayesian bootstrap
We denote by DP(cP_π) a Dirichlet process with base distribution P_π and concentration parameter c > 0. The concentration parameter is used to impart confidence in P_π. With a Dirichlet process as a prior on P, our Bayesian model is
$$X_1, \ldots, X_n \mid P \overset{\text{iid}}{\sim} P, \qquad P \sim \mathrm{DP}(cP_\pi).$$
Ferguson (1973) shows the posterior corresponding to this model is also a Dirichlet process:
$$P \mid x_1, \ldots, x_n \sim \mathrm{DP}\!\left(cP_\pi + \sum_{i=1}^{n} \delta_{x_i}\right),$$
where δ_{x_i} is the Dirac measure at x_i. The Dirichlet process is a conjugate prior for iid sampling under P, and the posterior base measure is cP_π with point masses added at the sample realizations x_1, . . ., x_n. A base distribution P_π must be chosen to operationalize this model. If one wishes to minimize the impact of the choice of P_π, it is sensible to consider the limiting case where the concentration parameter c → 0, which leads to the posterior
$$P \mid x_1, \ldots, x_n \sim \mathrm{DP}\!\left(\sum_{i=1}^{n} \delta_{x_i}\right).$$
Gasparini (1995) shows that this posterior exactly matches the Bayesian bootstrap, proposed by Rubin (1981) as the Bayesian analog of the frequentist bootstrap (Efron 1979). MacEachern (1993) also establishes a unique connection of this posterior to the empirical distribution of the data. The Bayesian bootstrap places support only on the observed data and is equivalent to
$$P = \sum_{i=1}^{n} w_i \delta_{x_i}, \qquad (w_1, \ldots, w_n) \sim \mathrm{Dirichlet}(1, \ldots, 1),$$
where Dirichlet(1, . . ., 1) is the n-dimensional Dirichlet distribution with all concentration parameters equal to one. Sometimes this distribution is referred to as flat or uniform. The first- and second-order asymptotic properties of the Bayesian bootstrap are described in Lo (1987) and Weng (1989). As well as being theoretically well-understood, the Bayesian bootstrap admits scalable sampling algorithms that are trivially parallelizable, making posterior exploration highly tractable. See Fong, Lyddon, and Holmes (2019), Lyddon, Holmes, and Walker (2019), and Barrientos and Peña (2020) for recent applications of the Bayesian bootstrap to complex models and data. As with those applications, tractability is key here.
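Sampling from this limiting posterior is elementary: a flat Dirichlet vector can be drawn by normalizing independent standard exponentials. The short sketch below illustrates the mechanics on the mean functional; it is an illustration of ours, not package code.

```r
# One Bayesian bootstrap draw: flat Dirichlet weights via normalized
# standard exponentials, applied to a functional of P = sum_i w_i delta_{x_i}.
rdirichlet_flat <- function(n) {
  w <- rexp(n)
  w / sum(w)
}

x <- rnorm(50)
post_mean <- replicate(1000, sum(rdirichlet_flat(length(x)) * x))
quantile(post_mean, c(0.025, 0.975))  # 95% posterior credible interval
```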
We now have a posterior for P, and consequently also a posterior on any summaries of P (see, e.g., Lee and MacEachern 2014), including those of interest: families of centres. To estimate the posterior for a given family we propose Algorithm 1.

Algorithm 1: Bayesian bootstrap for familial inference. For b = 1, . . ., B: (1) draw weights w_n^(b) = (w_1^(b), . . ., w_n^(b)) ∼ Dirichlet(1, . . ., 1); (2) compute the family µ^(b)(λ) = arg min_µ Σ_{i=1}^n w_i^(b) ℓ_λ((x_i − µ)/σ^(b)) for all λ ∈ Λ. Return the B bootstrapped families.

The complexity of solving the minimization problem in step two for all λ ∈ Λ depends on the loss function. In the next section, we present a numerical routine that addresses the case where the loss function is the Huber function. Since λ in the Huber function is sensitive to changes in spread, we configure σ^(b) to be the median absolute deviation of the bootstrap sample, i.e., the weighted median absolute deviation with weights w_n^(b). The standard deviation of the bootstrap sample could also be used. In our experience, switching between standard deviation and median absolute deviation usually does not lead to materially different outcomes, and the choice is only relevant for independent-samples testing (see the two-sample problem below).
From the output of Algorithm 1, the posterior probabilities p_{H_0} := Pr(H_0 | x_1, . . ., x_n) and p_{H_1} := Pr(H_1 | x_1, . . ., x_n) are estimable as
$$\hat{p}_{H_0} := \frac{1}{B} \sum_{b=1}^{B} \mathbb{1}\left[\mu^{(b)}(\lambda) \in M_0 \text{ for some } \lambda \in \Lambda\right]$$
and
$$\hat{p}_{H_1} := \frac{1}{B} \sum_{b=1}^{B} \mathbb{1}\left[\mu^{(b)}(\lambda) \in M_1 \text{ for all } \lambda \in \Lambda\right].$$
Since H_0 and H_1 are mutually exclusive and collectively exhaustive, p_{H_0} + p_{H_1} = 1 and, for any B, p̂_{H_0} + p̂_{H_1} = 1. Though our focus is the Bayesian bootstrap (i.e., the Dirichlet process prior with concentration parameter c → 0), one can extend Algorithm 1 to handle the prior with c > 0 via the stick-breaking construction of Sethuraman (1994). The interested reader may refer to Chapter 2 of Müller et al. (2015) for details of this approach.
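For illustration, p̂_{H_0} with a point null M_0 = {m_0} can be approximated over a finite grid of λ values, reusing huber_center() and rdirichlet_flat() from the earlier sketches; the exact pathwise fit is developed in the next section. The grid and its range below are arbitrary choices of ours.

```r
# Grid-based caricature of Algorithm 1 and the estimator of Pr(H0 | data)
# for the point null M0 = {m0}; reuses huber_center() and rdirichlet_flat().
weighted_median <- function(x, w) {
  o <- order(x)
  x[o][which(cumsum(w[o]) >= 0.5)[1]]
}

p_H0_hat <- function(x, m0, B = 1000, grid = exp(seq(-5, 5, by = 0.25))) {
  mean(replicate(B, {
    w <- rdirichlet_flat(length(x))
    s <- weighted_median(abs(x - weighted_median(x, w)), w)  # weighted MAD
    # threshold lambda * s is equivalent to the loss l_lambda((x - mu) / s)
    fam <- sapply(grid, function(l) huber_center(x, w, l * s))
    min(fam) <= m0 && m0 <= max(fam)  # does the family touch the null value?
  }))
}
```

By continuity of the family in λ, checking that m_0 lies between the smallest and largest fitted centres is equivalent to checking that some centre equals m_0.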

Decision rule
To map the estimated posterior probabilities p̂_{H_0} and p̂_{H_1} to a decision, we assign a loss to each possible decision. Specifically, given the posterior probability vector p̂ = (p̂_{H_0}, p̂_{H_1})⊤, we make the decision giving lowest posterior expected loss Lp̂, where L is a loss matrix with rows corresponding to the decision to accept H_0, accept H_1, or accept neither (an indeterminate decision). We use
$$L = \begin{pmatrix} l_{H_0|H_0} & l_{H_0|H_1} \\ l_{H_1|H_0} & l_{H_1|H_1} \\ l_{I|H_0} & l_{I|H_1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ 0.05 & 0.05 \end{pmatrix},$$
where l_{H_j|H_k} denotes the loss incurred in accepting H_j when H_k is true for j, k = 0, 1, and where l_{I|H_k} denotes the loss from an indeterminate decision for k = 0, 1. Under the above configuration of L, either H_0 or H_1 is accepted depending on whether p̂_{H_0} or p̂_{H_1} is greater than 0.95, analogous to a 0.05-level frequentist test. When both probabilities are less than 0.95, the decision is indeterminate.
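Given p̂_{H_0}, the decision is a one-line computation; the 0, 1, and 0.05 entries below follow the configuration just described.

```r
# Choose the action with the lowest posterior expected loss L %*% p.
familial_decision <- function(p_H0, indeterminate_loss = 0.05) {
  p <- c(p_H0, 1 - p_H0)
  L <- rbind(H0            = c(0, 1),
             H1            = c(1, 0),
             indeterminate = rep(indeterminate_loss, 2))
  rownames(L)[which.min(L %*% p)]
}

familial_decision(0.633)  # the sleep example: "indeterminate"
```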

Two-sample problem
The discussion up to now has focused on the one-sample setting. Consider now the two-sample setting with samples X_1, . . ., X_{n_1} and Y_1, . . ., Y_{n_2}. If X_i and Y_i are meaningfully coupled together, e.g., measurements on the same subject before and after treatment, the two samples are paired and n_1 = n_2. Define the random variable Z_i as the difference X_i − Y_i. Then the familial hypotheses are
$$H_0 : \mu_Z(\lambda) \in M_0 \text{ for some } \lambda \in \Lambda \quad \text{vs.} \quad H_1 : \mu_Z(\lambda) \in M_1 \text{ for all } \lambda \in \Lambda,$$
where µ_Z(λ) is a centre of Z. Algorithm 1 applies directly to the sample Z_1, . . ., Z_n with n = n_1 = n_2.
When X_i and Y_i are not coupled together, the samples are independent. The familial hypotheses are then
$$H_0 : \mu_X(\lambda) - \mu_Y(\lambda) \in M_0 \text{ for some } \lambda \in \Lambda \quad \text{vs.} \quad H_1 : \mu_X(\lambda) - \mu_Y(\lambda) \in M_1 \text{ for all } \lambda \in \Lambda.$$
Here, the same centre of X is compared with the same centre of Y, i.e., the mean is compared with the mean, the median with the median, and so on. Testing these hypotheses requires bootstrapping the families of X and Y with independently drawn weights. When independent weights are drawn at a bootstrap iteration, each centre of Y is subtracted from the same centre of X. The posterior probability of H_0 is estimated by the proportion of times across bootstrap iterations that the set of differences intersects the null set M_0.
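A grid-based sketch of the independent-samples procedure, in the same illustrative spirit as the one-sample sketch above (and reusing its helpers), follows.

```r
# Independent-samples estimate of Pr(H0 | data) with M0 = {0}: independent
# flat Dirichlet weights for each sample, differencing centres at matched
# lambda. Reuses rdirichlet_flat(), huber_center(), and weighted_median().
weighted_mad <- function(x, w) {
  weighted_median(abs(x - weighted_median(x, w)), w)
}

p_H0_hat_2s <- function(x, y, B = 1000, grid = exp(seq(-5, 5, by = 0.25))) {
  mean(replicate(B, {
    wx <- rdirichlet_flat(length(x))
    wy <- rdirichlet_flat(length(y))
    d <- sapply(grid, function(l)
      huber_center(x, wx, l * weighted_mad(x, wx)) -
        huber_center(y, wy, l * weighted_mad(y, wy)))
    min(d) <= 0 && 0 <= max(d)  # does the difference family cross zero?
  }))
}
```

Scaling each sample's λ by its own weighted median absolute deviation is what makes the member indexed by λ comparable across the two samples.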
Huber family

Optimization problem
To implement the testing procedure of the preceding section, we require a method for fitting the family of centres to each distribution drawn from the posterior, i.e., for solving the optimization problems in step two of Algorithm 1 given fixed bootstrap weights w_n. For simplicity of exposition, we drop the bootstrap iteration superscript (b) and fix σ = 1 without loss of generality. The Huber function as a function of the residual x − µ can then be expressed as
$$\ell_\lambda(x - \mu) = \begin{cases} (x - \mu)^2/2, & |x - \mu| \le \lambda, \\ \lambda\left(|x - \mu| - \lambda/2\right), & |x - \mu| > \lambda. \end{cases}$$
We denote the loss over the weighted (bootstrap) sample by
$$L(\mu) := \sum_{i=1}^{n} w_i \ell_\lambda(x_i - \mu).$$
Our goal is to devise an algorithm for computing the set {µ(λ) : λ ∈ Λ}, where
$$\mu(\lambda) := \operatorname*{arg\,min}_{\mu \in \mathbb{R}} L(\mu) \tag{3.1}$$
and Λ = (0, ∞). For an equally weighted sample, (3.1) includes as limiting cases the sample mean and sample median. When the weights are unequal, the limit points become the weighted mean and weighted median, interpretable as the mean and median of the bootstrap sample. The weighted mean is defined by
$$\bar{\mu} := \sum_{i=1}^{n} w_i x_i$$
(the weights sum to one) and the weighted median by
$$\tilde{\mu} := \operatorname*{arg\,min}_{\mu \in \mathbb{R}} \sum_{i=1}^{n} w_i |x_i - \mu|.$$
There is no analytical solution for the weighted median. In fact, the weighted mean is the only Huber centre that admits an analytical solution in general. For general λ, the existence of a solution to the minimization (3.1) requires only that the realized sample x_1, . . ., x_n and weights w_1, . . ., w_n be finite. Hence, successful computation of a solution does not hinge upon the existence of a solution in the population, e.g., if x_1, . . ., x_n are realizations of a Cauchy random variable.
If Λ were a finite set, it would be possible to solve the optimization problem (3.1) for each of its elements. For given λ, the one-dimensional problem (3.1) is convex, and although it does not admit an analytical solution, it is amenable to simple numerical routines (Huber and Ronchetti 2009). Even if Λ is not finite, one might try approximating it using a fine grid and then proceed to solve each minimization individually. Recall, though, that each set of minimization problems needs to be solved B times in the Bayesian bootstrap, where B might be 1,000, 10,000, or larger. Thus, even with an efficient algorithm, total cumulative runtime can be prohibitive. Notwithstanding runtime considerations, such an approach still only yields an approximation. Instead of an approximation, we propose a fast and exact pathwise algorithm that optimizes (3.1) for all values of λ.

Pathwise optimization routine
Our approach exploits piecewise linearity of the solution path µ(λ) for λ ∈ (0, ∞), a property we now demonstrate. The gradient of the Huber function with respect to µ is
$$\frac{\partial}{\partial \mu} \ell_\lambda(x - \mu) = \begin{cases} -(x - \mu), & |x - \mu| \le \lambda, \\ -\lambda \operatorname{sign}(x - \mu), & |x - \mu| > \lambda. \end{cases}$$
Hence, the gradient of the loss over the weighted sample is
$$-\sum_{i : |x_i - \mu| \le \lambda} w_i (x_i - \mu) - \lambda \sum_{i : |x_i - \mu| > \lambda} w_i \operatorname{sign}(x_i - \mu).$$
We denote the above gradient by L′(µ), suppressing the explicit dependency on λ. The first-order condition for optimality of µ(λ) requires that L′(µ(λ)) = 0. The implicit function theorem then gives
$$\frac{\partial \mu(\lambda)}{\partial \lambda} = -\left(\frac{\partial L'(\mu)}{\partial \mu}\right)^{-1} \frac{\partial L'(\mu)}{\partial \lambda} \bigg|_{\mu = \mu(\lambda)},$$
which, after evaluating the gradients on the right-hand side, yields
$$\frac{\partial \mu(\lambda)}{\partial \lambda} = \frac{\sum_{i : |x_i - \mu(\lambda)| > \lambda} w_i \operatorname{sign}(x_i - \mu(\lambda))}{\sum_{i : |x_i - \mu(\lambda)| \le \lambda} w_i}. \tag{3.2}$$
Observe that the gradient of the solution path ∂µ(λ)/∂λ is piecewise constant as a function of λ, implying that µ(λ) is piecewise linear. It follows that µ(λ) is also piecewise continuous with left and right limits. It can be verified that the left and right limits at any knot λ⋆ equal µ(λ⋆), and hence that µ(λ) is continuous.
Since the solution path is piecewise linear, it is composed of a sequence of knots, i.e., certain values of λ at which |x_i − µ(λ)| = λ for one or more sample points. These knots correspond to crossing events, where sample points transition between the square and absolute pieces of the Huber function. Lemma 1 characterizes a useful property in relation to these crossing events.

Lemma 1. Let x_0 be a sample point and suppose |x_0 − µ(λ_0)| ≥ λ_0 for some λ_0 > 0. Then |x_0 − µ(λ)| ≥ λ for all λ ∈ (0, λ_0].

Lemma 1 implies that, for a decreasing sequence of λ, once a sample point has crossed to the absolute piece of the Huber function, it remains there. This property guarantees the existence of at most n knots along the solution path. The lemma is proven in Appendix A.
To trace out the solution path, we need only fit µ at each λ in the sequence of knots, since any solution between knots is linearly interpolable. A method to efficiently determine the location and solution at each knot is required. Suppose we are at an arbitrary point (λ, µ) along the solution path. Then, thanks to piecewise linearity, the closest knot point (λ⁺, µ⁺) to the left of (λ, µ) is computable by taking a step γ > 0, of a certain size, as follows:
$$\lambda^+ = \lambda - \gamma, \qquad \mu^+ = \mu - \gamma \, \frac{\partial \mu(\lambda)}{\partial \lambda}.$$
Equation (3.2) provides an analytical expression for the gradient ∂µ(λ)/∂λ. An expression for the required step size γ is still needed. To this end, we present Proposition 1.
Proposition 1. Let (λ, µ) be any point along the solution path such that |x_i − µ| < λ for at least one i = 1, . . ., n. Then the largest positive step size before the solution path reaches a knot point (λ⁺, µ⁺) to the left of (λ, µ) is
$$\gamma = \min_{i : |x_i - \mu| < \lambda} \frac{\lambda - s_i (x_i - \mu)}{1 + s_i \, \partial \mu(\lambda)/\partial \lambda},$$
where s_i = sign(x_i − µ̃) and µ̃ is the weighted median.
The requirement that |x_i − µ| < λ for at least one i = 1, . . ., n guarantees the existence of at least one more unexplored knot along the solution path. Beyond the first and last knots, the solution path is flat. The proposition is proven in Appendix A.
Putting together the above ingredients and letting λ = λ^(m), λ⁺ = λ^(m+1), µ = µ^(m), and µ⁺ = µ^(m+1), we arrive at Algorithm 2. Starting at the rightmost knot point (λ^(1), µ^(1)), which corresponds to the weighted mean, the algorithm forges a path step-by-step to the leftmost knot point, which corresponds to the weighted median. Figure 3 illustrates this process on n = 30 iid draws from a standard normal distribution. The algorithm begins at a value of λ large enough to induce the weighted mean as the centre and then iteratively decreases λ. The final λ in this sequence of iterates is sufficiently small to induce the weighted median as the centre. Observe that the path is piecewise linear and continuous.
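A compact R rendering of Algorithm 2, as derived above, is given below. It is a simplified sketch of ours rather than the package's production code: it assumes no exact ties at the knots and computes the weighted median crudely.

```r
# Sketch of the pathwise routine: walk the knots of mu(lambda) from the
# weighted mean (largest lambda) down to the weighted median.
huber_path <- function(x, w = rep(1, length(x))) {
  w <- w / sum(w)
  o <- order(x)
  wmed <- x[o][which(cumsum(w[o]) >= 0.5)[1]]  # a weighted median
  s <- sign(x - wmed)                 # side on which each point exits
  mu <- sum(w * x)                    # weighted mean
  lambda <- max(abs(x - mu))          # rightmost knot: all points square
  knots <- data.frame(lambda = lambda, mu = mu)
  repeat {
    inside <- abs(x - mu) < lambda - 1e-12  # points on the square piece
    i <- inside & s != 0                    # crossings still to come
    if (!any(i)) break                      # path is flat hereafter
    g <- sum(w[!inside] * s[!inside]) / sum(w[inside])  # (3.2): d mu / d lambda
    gamma <- min((lambda - s[i] * (x[i] - mu)) / (1 + s[i] * g))  # Prop. 1
    lambda <- lambda - gamma
    mu <- mu - gamma * g
    knots <- rbind(knots, data.frame(lambda = lambda, mu = mu))
  }
  knots  # knot points; the path is linear between consecutive knots
}

huber_path(rnorm(30))  # cf. Figure 3 (equal weights instead of Dirichlet)
```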

Relation to least angle regression
Algorithm 2 bears similarity to least angle regression (Efron et al. 2004; Rosset and Zhu 2007), a pathwise optimization routine that traces the solution path of lasso regression coefficients.
To clarify this similarity, first recall the Moreau envelope f_λ(z) of a real-valued function f(z), which is the infimal convolution of f(z) and g(z) = z²/(2λ) (see, e.g., Polson, Scott, and Willard 2015):
$$f_\lambda(z) := \inf_{u \in \mathbb{R}} \left\{ f(u) + \frac{1}{2\lambda}(z - u)^2 \right\}.$$
When f(z) = |z|, there is a precise relation between the Huber function ℓ_λ(z) and the Moreau envelope:
$$f_\lambda(z) = \inf_{u \in \mathbb{R}} \left\{ |u| + \frac{1}{2\lambda}(z - u)^2 \right\} = \begin{cases} z^2/(2\lambda), & |z| \le \lambda, \\ |z| - \lambda/2, & |z| > \lambda. \end{cases}$$
The right-hand side is equal to ℓ_λ(z)/λ. In words, multiplying the Moreau envelope of the absolute value function by λ yields the Huber function, a known result from convex analysis (Beck 2017). Hence, we have the chain of equalities
$$\begin{aligned} \mu(\lambda) &= \operatorname*{arg\,min}_{\mu \in \mathbb{R}} \sum_{i=1}^{n} w_i \ell_\lambda(x_i - \mu) \\ &= \operatorname*{arg\,min}_{\mu \in \mathbb{R}} \sum_{i=1}^{n} w_i \lambda \inf_{u_i \in \mathbb{R}} \left\{ |u_i| + \frac{1}{2\lambda}(x_i - \mu - u_i)^2 \right\} \\ &= \operatorname*{arg\,min}_{\mu \in \mathbb{R}} \min_{u \in \mathbb{R}^n} \sum_{i=1}^{n} w_i \left\{ \frac{1}{2}(x_i - \mu - u_i)^2 + \lambda |u_i| \right\}. \end{aligned}$$
The infimum can be written as a minimum since the absolute value function is closed convex. The final line is a weighted lasso regression of x_1, . . ., x_n on an n × n identity design matrix with unpenalized intercept µ, showing that the Huber problem (3.1) can be recast as a weighted lasso problem. Thus, applying least angle regression, configured with weights, to an identity design matrix yields a path identical to that produced by Algorithm 2. Despite this equivalence, the development of Algorithm 2 remains essential. Least angle regression is designed for general design matrices and, as such, does not exploit the structure of regression with an identity design, i.e., the Huber problem. Algorithm 2, on the other hand, takes full advantage of this structure. In numerical experimentation, we observed that Algorithm 2 is typically an order of magnitude faster than least angle regression. Without this speedup, the Bayesian bootstrap would remain computationally burdensome.
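As a quick numerical cross-check of the reformulation (our illustration, not part of the paper's algorithmic development), exact blockwise minimization of the final line alternates a soft-thresholding update for u with an intercept update for µ, and converges to the same Huber centre as the direct minimization:

```r
# Huber centre via the weighted-lasso form: alternate exact minimization
# over u (elementwise soft-thresholding) and mu (weighted mean of x - u).
soft <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

huber_center_lasso <- function(x, w, lambda, iters = 500) {
  w <- w / sum(w)
  mu <- sum(w * x)              # initialize at the weighted mean
  for (k in seq_len(iters)) {
    u  <- soft(x - mu, lambda)  # lasso step for the identity design
    mu <- sum(w * (x - u))      # unpenalized intercept update
  }
  mu                            # agrees with huber_center(x, w, lambda)
}
```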

Consistency
The asymptotic properties of the familial test are now established. We focus on a real-valued null m_0 in the one-sample (and paired-samples) setting, which serves as a fundamental scenario, though our result is adaptable to a set-valued null M_0 and independent samples. We consider two cases: (1) the null hypothesis H_0 : µ_0(λ) = m_0 for some λ ∈ Λ is true, and (2) the alternative hypothesis H_1 : µ_0(λ) ≠ m_0 for all λ ∈ Λ is true. Theorem 1 states the familial test is asymptotically consistent under both H_0 and H_1.
Theorem 1.

1. If H_0 : µ_0(λ) = m_0 for some λ ∈ Λ is true with µ_0(λ) strictly crossing through m_0, then p̂_{H_0} → 1 in probability as n → ∞.

2. If H_1 : µ_0(λ) ≠ m_0 for all λ ∈ Λ is true with µ_0(λ) bounded away from m_0, then p̂_{H_1} → 1 in probability as n → ∞.

A proof is available in Appendix B. The theorem requires only a single Bayesian bootstrap distribution, i.e., B = 1, for consistency of the test. Having B > 1 presents no technical complication, but it is unnecessary for consistency as n → ∞.
At a high level, the proof of Theorem 1 involves showing that the estimator µ̂(λ) (actually, its gradient) under a Bayesian bootstrap distribution can be made arbitrarily close to that under the true distribution P_0, uniformly over all λ ∈ Λ. This result allows us to argue that if H_0 is true and µ_0(λ) strictly crosses through m_0, then so does µ̂(λ) in the limit. Alternatively, if H_0 is false and µ_0(λ) is bounded above or below away from m_0, then so is µ̂(λ) in the limit.
The case where the null is true and µ_0(λ) does not strictly cross through m_0 is not covered by Theorem 1. This case would occur, say, when P_0 is a normal distribution with mean m_0. However, symmetric distributions like the normal are uncommon in practice and not the focus of our test.

Familial package
This section reports experiments on real and synthetic data. To enable these exercises, the test and algorithms described in the preceding sections are implemented in the R package familial. For a sample of size n = 200, familial takes about half a second to perform 1,000 bootstraps for a single sample on one core of a modern processor. Parallelism is also supported. Run time scales linearly with the sample size, the number of bootstraps, and, if parallelised, the number of processor cores.
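A usage sketch follows. The call signature shown is an assumption made for illustration, not a verbatim transcription of the package's interface; consult the documentation on CRAN.

```r
# Hypothetical usage of the familial package -- the function name and
# arguments below are assumptions for illustration only.
# install.packages("familial")
library(familial)

set.seed(2023)
x <- rexp(200)                 # skewed sample
fit <- center.test(x, mu = 1)  # assumed: one-sample familial test of mu = 1
print(fit)                     # assumed: reports posterior probabilities
```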

Body posture study
Rosenbaum, Mama, and Algom (2017) conducted an experiment to ascertain the effect of body posture on selective attention (refer to Experiment 3 in that paper). The experiment employed the Stroop test, where subjects are asked to announce the colours of a sequence of words and not the words themselves, e.g., announce blue when the word red is printed in blue. The difference in response times between congruent word-colour pairs and incongruent pairs is the Stroop effect. Experimental subjects took the test once while sitting and once while standing. The study found standing lowered the Stroop effect compared with sitting, indicating improved selective attention while standing.

The dataset (Mama 2018) contains paired observations on response times of n = 50 subjects. Figure 4 presents a histogram of differences in response time alongside a functional boxplot of the posterior Huber family. The response times do not deviate markedly from a normal distribution, though they are slightly left-skewed. The posterior concentrates well below zero, suggesting standing might reduce the Stroop effect.
The study reported a p-value of 0.004 from an F test of the interaction between congruency and posture in a repeated-measures ANOVA, equivalent to a Student t test that the mean difference in response times is zero. The Fisher sign test and Wilcoxon signed-rank test produce p-values of 0.007 and 0.006, respectively. The Huber familial test finds that the probability of the null is 0.005. All of these tests reject the null that body posture does not affect the Stroop effect. This result confirms that the original finding is not sensitive to the centre tested. As additional benchmarks, we ran the maximum mean discrepancy (MMD) test of Gretton et al. (2012) and the Bayesian Pólya tree test of Holmes et al. (2015), both tests for equality of distributions. The resulting p-value and null probability are 0.499 and 0.223, respectively. These tests assume the samples are independent, so they are likely underpowered here.

Multi-task perception study
Srna, Schrift, and Zauberman (2018a) ran an experiment to investigate whether human performance at certain activities is affected by whether the activity is perceived as multi-tasking (refer to Study 1a in that paper). Experimental subjects were required to watch a video and transcribe the audio. This activity was framed as multi-tasking to a treatment group and single-tasking to a control group. Assignment to either group was random. The study found that subjects in the treatment group transcribed more words than those in the control group and that the accuracy of their transcriptions was higher, suggesting that perceiving an activity as multi-tasking improves performance at that activity. We focus on the number of words transcribed. The dataset (Srna, Schrift, and Zauberman 2018b) contains n_1 = 82 subjects in the treatment group and n_2 = 80 in the control group; see Figure 5. The groups are dissimilar in distribution, with the multi-task group being unimodal and the single-task group being multimodal. For small values of λ, the 50% central region of the posterior includes zero, indicating the null might be plausible.
The study reported a p-value of 0.033 from an F test of the multi-task condition in a one-way ANOVA, identical to a two-sample Student t test with equal variance that the mean number of words transcribed is equal between groups. The Mood median test yields a p-value of 0.271. The p-value from a Wilcoxon rank-sum test is 0.072. The MMD and Pólya tree tests produce a p-value and null probability of 0.590 and 0.978, respectively. The Huber familial test returns 0.170 as the probability of the null. In contrast to the t test, the familial, median, rank-sum, MMD, and Pólya tree tests do not find the multi-task condition to affect performance. In particular, the familial test fails to find sufficient support for either hypothesis and returns an indeterminate result. Whether this is a meaningful discrepancy remains for subject-matter experts to decide.

Simulations
Appendix D contains extensive results on synthetic datasets that verify the finite-sample properties of the familial test in one- and two-sample settings. The results cover a variety of distributions, both continuous and discrete, and validate that the familial test is well-behaved. The test has good frequentist size and, for a sufficiently large departure from the null or a reasonably large sample size, high power. Indeed, its power remains highly competitive with existing frequentist and Bayesian tests in regions where the familial null fails to hold.

Concluding remarks
It has become standard practice to translate scientific hypotheses into statistical hypotheses about a specific centre of the underlying distribution(s). Despite the ubiquity of this approach, there can be a lack of consensus about which centre best reflects the original scientific hypotheses. When there is ambiguity, we argue one should adopt familial inference, which formulates hypotheses via a family of plausible centres. Our package familial implements the tools developed in this paper and is publicly available on CRAN.
A natural next step in this line of work is to develop familial inference for other statistical parameters and models. For instance, we found in ongoing work that our tools extend gracefully to linear models. The Huber family of linear models constitutes a continuum of models for conditional centres, from the conditional mean to the conditional median. The pathwise algorithm extends to this setting because the solution path remains piecewise linear under the Huber loss.
Another intriguing direction is familial inference for multivariate random variables. The univariate Huber function ℓ_λ(z) studied in this paper has a lesser-known multivariate counterpart (Hampel et al. 1986), which composes the ℓ_2-norm with the univariate Huber function, i.e., ℓ_λ(∥z∥). It remains to be determined whether the multivariate problem remains computationally tractable.
It is also interesting to consider other loss functions for the univariate, multivariate, and linear model problems. For instance, the solution path under the univariate trimmed square loss (which induces the trimmed mean) is also piecewise linear and readily admits a pathwise algorithm. For losses without such structure, however, fundamentally different algorithms are required.
A frequentist version of the familial test is likewise an appealing avenue for future research. A major challenge is deriving the finite-sample, or asymptotic, distribution of the Huber family under the null hypothesis. A bootstrap test is also possible, but it remains to determine the appropriate way of imposing the null hypothesis on the observed sample to attain the correct size.

A.1 Proof of Lemma 1
Proof. A sufficient condition for the result of the lemma is that λ − |x_0 − µ(λ)| be increasing as a function of λ. To establish that the function is increasing, consider its gradient:
$$\frac{\partial}{\partial \lambda}\left[\lambda - |x_0 - \mu(\lambda)|\right] = 1 + \operatorname{sign}(x_0 - \mu(\lambda)) \, \frac{\partial \mu(\lambda)}{\partial \lambda}. \tag{A.1}$$
A sufficient condition for the gradient to be positive, and hence for λ − |x_0 − µ(λ)| to be increasing, is that |∂µ(λ)/∂λ| < 1. The first-order condition for optimality of µ(λ), combined with (3.2), gives
$$\frac{\partial \mu(\lambda)}{\partial \lambda} = \frac{\sum_{i : |x_i - \mu(\lambda)| > \lambda} w_i \operatorname{sign}(x_i - \mu(\lambda))}{\sum_{i : |x_i - \mu(\lambda)| \le \lambda} w_i} = -\frac{\sum_{i : |x_i - \mu(\lambda)| \le \lambda} w_i (x_i - \mu(\lambda)) / \lambda}{\sum_{i : |x_i - \mu(\lambda)| \le \lambda} w_i}. \tag{A.2}$$
Using the bound |x_i − µ(λ)| < λ in the numerator of the final ratio in (A.2) yields
$$\left|\frac{\partial \mu(\lambda)}{\partial \lambda}\right| < \frac{\sum_{i : |x_i - \mu(\lambda)| \le \lambda} w_i}{\sum_{i : |x_i - \mu(\lambda)| \le \lambda} w_i} = 1.$$
Together with (A.2), the above bound shows |∂µ(λ)/∂λ| < 1, and hence the gradient (A.1) is positive. We conclude λ − |x_0 − µ(λ)| is increasing, thereby establishing the result of the lemma.

B.1 Proof of Theorem 1
We begin by introducing some notation. Denote the empirical distribution function by
$$P_n(x) := \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(X_i \le x)$$
and the random Bayesian bootstrap distribution function by
$$G_n(x) := \sum_{i=1}^{n} w_i \mathbb{1}(X_i \le x).$$
Denote the integral of the gradient of the Huber function under some distribution P by
$$A(P, \lambda, \mu) := \int \ell'_\lambda(x - \mu) \, \mathrm{d}P(x).$$
The gradient ℓ′_λ(x − µ) is Lipschitz continuous in λ, x, and µ with Lipschitz constant 1. Our work focuses on the gradient of the Huber function, since its roots define the Huber family. The proof of the theorem requires several technical lemmas, which we now state and prove before arriving at the main argument.
We begin with Lemma 3, which provides a probabilistic finite-sample bound for the distance between A(G_n, λ, µ), the integral under a Bayesian bootstrap distribution, and A(P_n, λ, µ), the integral under the empirical distribution. This bound holds for fixed λ ∈ Λ.

Lemma 3. Let X_1, . . ., X_n be iid random variables with distribution P_0 and, independently, let (w_1, . . ., w_n) ∼ Dirichlet(1, . . ., 1). Let ϵ > 0 and λ ≥ 0. Then, for any µ ∈ ℝ, it holds that
$$\Pr\left(|A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)| \ge \epsilon\right) \le \frac{\lambda^2}{(n+1)\epsilon^2}.$$

Proof. Since (w_1, . . ., w_n) are Dirichlet(1, . . ., 1), we have E[w_i] = 1/n. For the variance, we have
$$\mathrm{Var}\big(A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)\big) = \mathrm{Var}\big(\mathbb{E}[A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu) \mid X_1, \ldots, X_n]\big) + \mathbb{E}\big[\mathrm{Var}\big(A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu) \mid X_1, \ldots, X_n\big)\big]. \tag{B.1}$$
The first term on the right-hand side of (B.1) is zero, since
$$\mathbb{E}[A(G_n, \lambda, \mu) \mid X_1, \ldots, X_n] = \sum_{i=1}^{n} \mathbb{E}[w_i] \, \ell'_\lambda(X_i - \mu) = A(P_n, \lambda, \mu).$$
The second term on the right-hand side of (B.1) satisfies
$$\mathbb{E}\big[\mathrm{Var}\big(A(G_n, \lambda, \mu) \mid X_1, \ldots, X_n\big)\big] \le \frac{\lambda^2}{n+1},$$
where the inequality follows from the fact that, for all µ ∈ ℝ, the gradient ℓ′_λ(X_i − µ) is bounded between −λ and λ. The result of the lemma now follows directly from Chebyshev's inequality.
Lemma 3 holds for fixed λ ∈ Λ. We now state and prove Lemma 4, which extends the bound in Lemma 3 to hold uniformly over all λ ∈ Λ.

Lemma 4. Let X_1, . . ., X_n be iid random variables with distribution P_0 and, independently, let (w_1, . . ., w_n) ∼ Dirichlet(1, . . ., 1). Let ϵ > 0 and Λ = [0, λ̄] with 0 < λ̄ < ∞. Then, for any µ ∈ ℝ, it holds that
$$\Pr\left(\sup_{\lambda \in \Lambda} |A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)| \ge \epsilon\right) \le \left(\left\lceil \frac{4\bar{\lambda}}{\epsilon} \right\rceil + 1\right) \frac{4\bar{\lambda}^2}{(n+1)\epsilon^2}.$$

Proof. We proceed using an epsilon-net argument. First, we require a useful inequality. For any λ ≠ λ′ and any distribution function F, we have
$$|A(F, \lambda, \mu) - A(F, \lambda', \mu)| \le \int |\ell'_\lambda(x - \mu) - \ell'_{\lambda'}(x - \mu)| \, \mathrm{d}F(x) \le |\lambda - \lambda'|.$$
The last inequality follows from the fact that ℓ′_λ(X − µ) is Lipschitz in λ with constant 1 (also, at λ = 0 the left-hand side is zero). Now, let E be a δ-net of Λ. For any λ ∈ Λ, there exists a λ′ ∈ E such that |λ − λ′| ≤ δ. This result in combination with the previous bound gives
$$|A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)| \le |A(G_n, \lambda', \mu) - A(P_n, \lambda', \mu)| + 2\delta.$$
Taking the maximum of the right-hand side and the supremum of the left-hand side yields
$$\sup_{\lambda \in \Lambda} |A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)| \le \max_{\lambda' \in E} |A(G_n, \lambda', \mu) - A(P_n, \lambda', \mu)| + 2\delta.$$
Next, setting δ = ϵ/4 and applying a union bound on the right-hand side gives
$$\Pr\left(\sup_{\lambda \in \Lambda} |A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)| \ge \epsilon\right) \le \sum_{\lambda' \in E} \Pr\left(|A(G_n, \lambda', \mu) - A(P_n, \lambda', \mu)| \ge \frac{\epsilon}{2}\right).$$
We can bound the elements of the sum by Lemma 3 as
$$\Pr\left(|A(G_n, \lambda', \mu) - A(P_n, \lambda', \mu)| \ge \frac{\epsilon}{2}\right) \le \frac{4\bar{\lambda}^2}{(n+1)\epsilon^2}.$$
The set E has cardinality at most ⌈λ̄/δ⌉ + 1 = ⌈4λ̄/ϵ⌉ + 1, so the statement of the lemma now follows.
We now turn to Lemma 5. Recall the Lévy distance between two distributions G and F is defined as
$$d_L(G, F) := \inf\{\epsilon > 0 : F(x - \epsilon) - \epsilon \le G(x) \le F(x + \epsilon) + \epsilon \text{ for all } x \in \mathbb{R}\}.$$
The following lemma provides a non-probabilistic bound on the distance between A(G, λ, µ) and A(F, λ, µ) when G and F are in an ϵ-neighbourhood under the Lévy metric.

Lemma 5. Let G and F be distribution functions with d_L(G, F) < ϵ/2. Then, for any µ ∈ ℝ, it holds that sup_{λ ∈ Λ} |A(G, λ, µ) − A(F, λ, µ)| ≤ ϵ.

Proof. We begin with a fixed λ ∈ Λ and then extend to a uniform bound for all λ ∈ Λ. Let δ = ϵ/2. Then, for any λ, a chain of inequalities yields
$$|A(G, \lambda, \mu) - A(F, \lambda, \mu)| \le 2\delta. \tag{B.2}$$
The first inequality in the chain is due to d_L(G, F) < δ and the third is due to ℓ′_λ(x − µ) being Lipschitz in x with constant 1. We now extend this result to hold over all λ ∈ Λ by taking the supremum of the left-hand side of (B.2), yielding sup_{λ∈Λ} |A(G, λ, µ) − A(F, λ, µ)| ≤ 2δ.
The statement of the lemma now follows from the fact that δ = ϵ/2.

Lemma 6 is the last piece required before establishing the result of the theorem. It draws on the preceding lemmas to prove that A(G_n, λ, µ) converges in probability to A(P_0, λ, µ) uniformly over all λ ∈ Λ.

Lemma 6. Let X_1, . . ., X_n be iid random variables with distribution P_0 and, independently, let (w_1, . . ., w_n) ∼ Dirichlet(1, . . ., 1). Let ϵ > 0 and Λ = [0, λ̄] with 0 < λ̄ < ∞. Then, for any µ ∈ ℝ, it holds that
$$\sup_{\lambda \in \Lambda} |A(G_n, \lambda, \mu) - A(P_0, \lambda, \mu)| \to 0 \text{ in probability as } n \to \infty.$$

Proof. By the triangle inequality,
$$\sup_{\lambda \in \Lambda} |A(G_n, \lambda, \mu) - A(P_0, \lambda, \mu)| \le \sup_{\lambda \in \Lambda} |A(G_n, \lambda, \mu) - A(P_n, \lambda, \mu)| + \sup_{\lambda \in \Lambda} |A(P_n, \lambda, \mu) - A(P_0, \lambda, \mu)|. \tag{B.3}$$
The first term on the right-hand side of (B.3) vanishes in probability by Lemma 4. Consider the second term on the right-hand side of (B.3). We have that P_n converges almost surely to P_0 by the Glivenko-Cantelli theorem and hence to a δ-neighbourhood of P_0 under the Lévy metric. Set δ = ϵ/2. This result and Lemma 5 imply there exists an N_2 such that, for all n ≥ N_2, it holds that
$$\sup_{\lambda \in \Lambda} |A(P_n, \lambda, \mu) - A(P_0, \lambda, \mu)| \le \epsilon.$$
The result of the lemma now follows.
We now have everything necessary to prove the two claims of Theorem 1.
Proof. We consider case 1 (H_0 is true) and case 2 (H_1 is true) in turn. To simplify the proof's presentation, we assume the hypothesized null value is m_0 = 0 without loss of generality.
¹ If the hypotheses under consideration do not conform to this format, the analogous null hypothesis would impose that all familial centres lie in the null set and the alternative that at least one centre does not. For the Huber family, these hypotheses would correspond to a test of symmetry when M_0 is a singleton. Though tests of symmetry can address some distributional questions, they do not capture the nuances of differing centres, potentially leaving out scientifically relevant insights.

Figure 1: Histogram of the mammalian sleep data.

Figure 2: Functional boxplot of the posterior Huber family for the mammalian sleep data. Shading indicates different central regions of the posterior.

Figure 3: Algorithm 2 applied with x_1, . . ., x_n drawn from a standard normal distribution and w_1, . . ., w_n drawn from a flat Dirichlet distribution with n = 30. The solid points are iterates (knots) from the algorithm. Centres between iterates are linearly interpolated.

Figure 4: Body posture data. The left plot is a histogram of the data. The right plot is a functional boxplot of the posterior Huber family. Shading indicates different central regions of the posterior.

Figure 5: Multi-task perception data. The left plot is a histogram of the data by control and treatment. The right plot is a functional boxplot of the posterior difference in Huber families. Shading indicates different central regions of the posterior.