Ancestor regression in linear structural equation models

We present a new method for causal discovery in linear structural equation models. We propose a simple ``trick'' based on statistical testing in linear models that can distinguish between ancestors and non-ancestors of any given variable. Naturally, this can then be extended to estimating the causal order among all variables. We provide explicit error control for false causal discovery, at least asymptotically. This holds true even under Gaussianity, where other methods fail due to non-identifiable structures. These type I error guarantees come at the cost of reduced empirical power. Additionally, we provide an asymptotically valid goodness of fit p-value to assess whether multivariate data stems from a linear structural equation model.


1 Introduction
We propose a very simple yet effective method to infer the ancestor variables in a linear structural equation model from observational data.
Consider a response variable of interest Y and covariates X in a linear structural equation model. The procedure is as follows. For a nonlinear function f(·), for example f(Y) = Y^3, run a least squares regression of f(Y) versus Y and all covariates X: the p-value corresponding to the k-th covariate X_k measures the significance of X_k being an ancestor variable of Y, and it provides type I error control.
We refer to this method as ancestor regression. Its power (i.e., its type II error behavior) depends on the nature of the underlying data-generating probability distribution. Obviously, the proposed method is extremely simple and easy to use; yet, it deals with the difficult problem of finding the causal order among random variables. In particular, the proposed method does not need any new software and is computationally very efficient.
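To make the procedure concrete, the following is a minimal sketch in Python. All names and the toy data-generating process are ours, and we use textbook OLS standard errors for simplicity; the variance estimate studied later in the paper differs.

```python
import numpy as np
from scipy import stats

def ancestor_pvalues(x, j, f=lambda v: v ** 3):
    """OLS of f(x_j) on all columns of x (including x_j itself);
    returns two-sided p-values per covariate. Illustrative sketch
    with classical OLS standard errors."""
    n, p = x.shape
    y = f(x[:, j])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    resid = y - x @ beta
    sigma2 = resid @ resid / (n - p)                  # residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(x.T @ x)))
    z = beta / se
    return 2 * stats.norm.sf(np.abs(z))               # 2{1 - Phi(|z|)}

# Toy model: x1 -> x2 with non-Gaussian (uniform) errors, x3 independent
rng = np.random.default_rng(0)
n = 20_000
psi = rng.uniform(-1, 1, size=(n, 3))
x1 = psi[:, 0]
x2 = 0.8 * x1 + psi[:, 1]
x = np.column_stack([x1, x2, psi[:, 2]])

p = ancestor_pvalues(x, j=1)   # which variables are ancestors of x2?
# p[0] is essentially zero (x1 is an ancestor of x2); p[2] carries no signal
```

Note that the regression includes x_j itself as a predictor, exactly as required by the method; the p-value for the j-th coefficient itself is not interpreted.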
Structure search methods based on observational data for the graphical structure in linear structural equation models have been developed extensively for various settings: for the Markov equivalence class in linear Gaussian structural equation models (Spirtes et al., 2001, Chapter 5.4; Chickering, 2002), for the single identifiable directed acyclic graph in non-Gaussian linear structural equation models (Shimizu et al., 2006; Gnecco et al., 2021), or for models with equal error variances (Peters and Bühlmann, 2014). None of these methods comes with p-values and type I error control. In addition, for the identifiable cases, the corresponding algorithms require certain assumptions such as non-Gaussian errors. In particular, the method from Shimizu et al. (2006) and extensions thereof are not consistent when there are at least two normally distributed additive error terms involved, so that false causal claims cannot be avoided even in the large sample limit. If the errors are just slightly non-Gaussian, the method requires a very large sample size to perform well. In contrast, our procedure does not rely on any condition apart from linearity, but automatically exploits whether the structure is identifiable or not. In the latter case, we miss out on some causal relationships, but our type I error control retains the same asymptotic guarantees. The price to pay for these guarantees is a reduced empirical power compared to competing methods, sometimes substantially so.
Regarding notation, we use upper case letters to denote a random variable, e.g., X or Y. We use lower case letters to denote i.i.d. copies of a random variable, e.g., x. If X ∈ R^p, then x ∈ R^{n×p}. With a slight abuse of notation, x can either denote the copies or realizations thereof. We write x_j to denote the j-th column of matrix x and x_{i,j} to denote the element in row i and column j. With ←, we emphasize that an equality between random variables is induced by a causal mechanism. All proofs are given in Section A in the supplementary material.

2 Ancestor regression

2.1 Model and method
Let X ∈ R^p be given by the following linear structural equation model

X_j ← ∑_{k ∈ PA(j)} θ_{j,k} X_k + Ψ_j,  j = 1, . . . , p,  (1)

where the Ψ_1, . . . , Ψ_p are independent and centered random variables. We assume that 0 < var(Ψ_j) = σ²_j < ∞ ∀j such that the covariance matrix of X exists and has full rank. We use the notation PA(j), CH(j), AN(j) and DE(j) for j's parents, children, ancestors, and descendants. Further, assume that there exists a directed acyclic graph (DAG) representing this structure.
Let X_j with j ∈ {1, . . . , p} be a variable of interest; it was denoted as the response Y in Section 1. Consider a nonlinear function f(·). The following result describes the population property of ancestor regression with a general function f(·).
Theorem 1. Assume that the data X follows the model (1). Consider the ordinary least squares regression of f(X_j) versus X, denote the corresponding OLS parameter by β^{f,j} := E(XX^⊤)^{-1} E{X f(X_j)} and assume that it exists. Then, β^{f,j}_k = 0 for every k ∉ AN(j) ∪ {j}.

Importantly, X_j itself must also be included in the set of predictors. The beauty of Theorem 1 lies in the fact that no assumptions on the distribution of the Ψ_l or on the size of the θ_{l,k}, apart from the existence of moments, must be made for any l and k ∈ {1, . . . , p}. This allows one to control against false discovery of ancestor variables.
Typically, β^{f,j}_k ≠ 0 holds for ancestors, since a nonlinear function of an ancestor cannot be completely regressed out by the other regressors using only linear terms. For ancestors that are much further upstream, this effect might become vanishingly small. However, this is not a major issue: when fitting a linear model using the detected ancestors, those indirect ancestors are assigned a direct causal effect of 0 anyway. Based on Theorem 1, we suggest testing for β^{f,j}_k = 0 in order to detect some or even all ancestors of X_j. Doing so for all k requires nothing more than fitting a multiple linear model and using its corresponding z-tests for individual covariates.
Let x ∈ R^{n×p} be n i.i.d. copies from the model (1). Define the estimate

β̂^{f,j} := (x^⊤x)^{-1} x^⊤ f(x_j),  (2)

where f(·) is applied elementwise in f(x_j), and let v̂ar(β̂^{f,j}_k) denote the corresponding variance estimate.
Theorem 2. Assume that the data X follows the model (1), E{f(X_j)²} < ∞, E(X_k⁴) < ∞ ∀k, and that β^{f,j} exists. Let x be n i.i.d. copies thereof. Using the definitions from (2), the z-statistic

z^j_k := β̂^{f,j}_k / {v̂ar(β̂^{f,j}_k)}^{1/2}

converges in distribution to a standard normal for every k ∉ AN(j) ∪ {j}.

Due to this limiting distribution, we suggest testing the null hypothesis H_{0,k→j}: k ∉ AN(j) with the p-value

p^j_k = 2{1 − Φ(|z^j_k|)},  (3)

where Φ(·) denotes the cumulative distribution function of the standard normal distribution. For ancestors for which β^{f,j}_k ≠ 0, the absolute z-statistic increases as √n. In typical setups, one can thus detect all ancestors. Having found all ancestors, one could infer the parents with an ordinary least squares regression of X_j versus X_{AN(j)}, using the t-test for assessing the significance of being a parental variable. Such a procedure might have poor error control for low sample sizes as it requires full power in the first step to detect all ancestors; we provide error control only for the estimated ancestral set.
The choice of f(·) has an impact on the constant in the growth of z^j_k for ancestors. If the Ψ_l are symmetric, any even function yields β^{f,j}_k = 0 ∀k. Therefore, odd functions should be used. In our simulations and the real data analysis, we use f(X_j) = X_j^3 as it is the simplest odd nonlinear function and only invokes slightly higher moments than linear functions. This choice leads to empirically competitive performance relative to other candidates in our simulations.

2.2 Adversarial setups
There are cases where β^{f,j}_k ≠ 0 fails to hold for some ancestors, leading to reduced power of the method. We provide necessary and sufficient conditions for this and present examples. Define first the j-restricted Markov boundary of k to be

MA_{→j}(k) := MA(k) ∩ {AN(j) ∪ {j}},

where MA(k) denotes the Markov boundary of k. It contains all the variables in the Markov boundary of k which are ancestors of j or j itself. E.g., if k ∈ AN(j), all of its parents are in the restricted Markov boundary, but not necessarily all of its children.
Theorem 3. Let k ∈ AN(j). Then,

β^{f,j}_k = 0 if and only if E[f(X_j){X_k − (γ^{j,k})^⊤ X_{MA_{→j}(k)}}] = 0,

where γ^{j,k} is the least squares parameter for regressing X_k versus X_{MA_{→j}(k)}. In particular, β^{f,j}_k = 0 for every f(·) if and only if E{X_k | X_{MA_{→j}(k)}} = (γ^{j,k})^⊤ X_{MA_{→j}(k)}.

Intuitively speaking, if the conditional expectation of X_k given the j-restricted Markov boundary is linear, k could also be a child of all these variables. Thus, it is not detectable as an ancestor of j.
In the following, we present two examples that fulfil the conditions of Theorem 3. These are the only examples we know of.
Gaussian Ψ. It is well known in causal discovery for linear structural equation models that Gaussian error terms lead to non-identifiability. Define

CH_{→j}(k) := CH(k) ∩ {AN(j) ∪ {j}},

i.e., the children of k through which a directed path from k to j begins.
Proposition 1. Assume that the data X follows the model (1). Let k ∈ AN(j) be such that Ψ_k and Ψ_l follow Gaussian distributions for all l ∈ CH_{→j}(k). Then, β^{f,j}_k = 0 for every f(·). Under the additional assumptions of Theorem 2, z^j_k converges in distribution to a standard normal.

Thus, if every directed path from k to j starts with an edge for which the nodes on both ends have Gaussian noise terms, we have no power to detect this ancestral relationship. However, neither do we detect the opposite direction, as guaranteed by Theorem 1, and thus control against false positives is retained.
Special constellation of distributions and coefficients. A pathological case occurs if the error term of a child, say l, has the same distribution as the inherited contribution from the error term of its parent, say k. Then, k is not detectable as l's ancestor. Likewise, it is not detected as an ancestor of any of l's descendants j for which all directed paths from k to j start with the edge k → l.
Proposition 2. Assume that the data X follows the model (1). Let k ∈ AN(j) and CH_{→j}(k) = {l}. If Ψ_l has the same distribution as θ_{l,k} Ψ_k, then β^{f,j}_k = 0 for every f(·).

For the variables discussed here, the limiting Gaussian distribution as stated in Theorem 2 is not guaranteed even though β^{f,j}_k = 0; see also the proof in the supplementary material.

2.3 Simulation example
We study ancestor regression in a small simulation example. We generate data from a linear structural equation model with 6 variables. The causal order is fixed to be X_1 to X_6. Otherwise, the structure is randomized and changes per simulation run: X_k is a parent of X_l for k < l with probability 0.4 such that there is an average of 6 parental relationships. The edge weights are sampled uniformly and the Ψ_k are assigned by permuting a fixed set of 6 error distributions. The full data generating process can be found in Section C of the supplementary material. We aim to find the ancestors of X_4, which can be any subset of {X_1, X_2, X_3}. We create 1000 different setups and test each on sample sizes varying from 10^2 to 10^6. As a nonlinear function, we use f(X_j) = X_j^3. By z-statistic, we mean z^4_k as defined in Theorem 2. We calculate p-values according to (3) and apply a Bonferroni-Holm correction (without cutting off at 1 for the sake of visualization) to them.
In Fig. 1, we see the desired √n-growth of the absolute z-statistic for the ancestors, while for the non-ancestors the sample averages are close to the theoretical mean under the asymptotic null distribution. Indirect ancestors are harder to detect than parents. Although the null distribution is only achieved asymptotically, the type I family-wise error rate is controlled for every sample size, supporting our method's main benefit, i.e., robustness against false causal discovery.

Figure 1: The results are based on 1000 simulation runs. On the left: average absolute z-statistic for all ancestors (circles, black), parents (triangles, red), non-parental ancestors (pluses, green), and non-ancestors (crosses, blue) for different sample sizes. The dashed diagonals correspond to √n-growth fitted to match perfectly at n = 10^5. The horizontal line corresponds to (2/π)^{1/2}, i.e., the first absolute moment of the asymptotic null distribution, a standard Gaussian. On the right: fraction of simulation runs with at least one false causal detection versus fraction of detected ancestors for the different sample sizes 10^2 (solid, black), 10^3 (dashed, red), 10^4 (dotted, green), 10^5 (dot-dashed, blue), and 10^6 (long-dashed, pink). The curves use the level α of the test as implicit curve parameter. The pluses correspond to nominal α = 5%. The vertical line is at actual 5%.
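The √n-growth for a true ancestor can be illustrated in a few lines. The following is a self-contained toy sketch, not the paper's simulation code: a two-variable model x1 → x2 with uniform errors, f(v) = v³, and classical OLS standard errors.

```python
import numpy as np

def abs_z(x, j, k, f=lambda v: v ** 3):
    """Absolute z-statistic for covariate k in the OLS regression of
    f(x_j) on all columns of x (illustrative sketch)."""
    n, p = x.shape
    y = f(x[:, j])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    r = y - x @ beta
    se = np.sqrt((r @ r / (n - p)) * np.linalg.inv(x.T @ x)[k, k])
    return abs(beta[k]) / se

rng = np.random.default_rng(1)
zs = []
for n in (4_000, 16_000, 64_000):
    psi = rng.uniform(-1, 1, size=(n, 2))             # non-Gaussian errors
    x = np.column_stack([psi[:, 0], 0.8 * psi[:, 0] + psi[:, 1]])
    zs.append(abs_z(x, j=1, k=0))                     # x1 is an ancestor of x2
# quadrupling n roughly doubles the absolute z-statistic
```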
3 Ancestor detection in networks: nodewise and recursive

3.1 Algorithm and goodness of fit test
In the previous section, we assumed that there is a (response) variable X j that is of special interest. This is not always the case. Instead, one might be interested in inferring the full set of causal connections between the variables. Naturally, our ancestor detection technique can be extended to that problem by applying it nodewise. We suggest the procedure sketched below. The detailed algorithm can be found in Section B of the supplementary material. Notably, the algorithm is invariant to the ordering of the variables.
First, the set of ancestors is defined based on the significant p-values, after multiplicity correction over all p(p − 1) z-tests, of ancestor regression. Any correction controlling the type I family-wise error rate is applicable, and we use here Bonferroni-Holm. Next, further ancestral relationships are constructed recursively by adding the estimated ancestors of every estimated ancestor. This recursive construction facilitates the detection of all ancestors. This procedure cannot increase the type I family-wise error rate compared to just using the significant p-values because a false causal discovery can only be propagated if it existed in the first place.
Since there is no guarantee that the recursive construction does not create directed cycles, i.e., variables being claimed to be their own ancestors, we need to address this. If such cycles are found, the significance level is gradually reduced until no more directed cycles are output. This means that the output becomes somewhat independent of the significance level: e.g., in a case with two variables and p^2_1 = 10^{-6} and p^1_2 = 10^{-3} as in (3), we would never claim X_2 → X_1 no matter how large α is chosen. We denote the estimated set of ancestors for X_j by ÂN(j). Notably, the algorithm determines a causal order between the variables but does not always lead to a unique parental graph. For instance, if ÂN(3) = {1, 2} and ÂN(2) = {1}, X_1 might be a causal parent of X_3 but its effect could also be fully mediated by X_2.
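The cycle-handling step can be written compactly. The following hypothetical Python sketch (function names are ours) reduces the level until the recursively closed ancestor sets contain no directed cycle; P holds the multiplicity-corrected p-values, with row j, column k being the test of k being an ancestor of j, and P[j, j] = 1.

```python
import numpy as np

def transitive_closure(A):
    """Boolean transitive closure; a diagonal entry turning True
    signals a directed cycle."""
    C = A.copy()
    while True:
        new = C | ((C.astype(int) @ C.astype(int)) > 0)
        if (new == C).all():
            return C
        C = new

def find_structure(P, alpha):
    """Return the recursively built ancestor matrix and the adjusted
    level alpha_hat (illustrative sketch of the procedure above)."""
    # candidate levels: alpha itself, then the included p-values, decreasing
    levels = [alpha] + sorted(P[P < alpha], reverse=True)
    for level in levels:
        A = transitive_closure(P < level)
        if not np.diag(A).any():          # no variable is its own ancestor
            return A, level
    return np.zeros_like(P, dtype=bool), 0.0

# Two-variable example from the text: p^2_1 = 1e-6, p^1_2 = 1e-3
P = np.array([[1.0, 1e-3],
              [1e-6, 1.0]])
A, alpha_hat = find_structure(P, alpha=0.05)
# X1 -> X2 is claimed; X2 -> X1 never is, however large alpha is chosen
```

The strict inequality at the returned level excludes exactly the p-value whose inclusion would close a cycle, so the returned level plays the role of the adjusted significance level.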
One can consider the largest significance level such that no loops are created as a p-value for the null hypothesis that the modeling assumption (1) holds true. We denote this level, which is a further output of our algorithm, by α̂. Thus, we provide a goodness of fit test for our modelling assumption with an asymptotically valid p-value: a small realized α̂ provides evidence against the linear structural equation model in (1). If such evidence exists, it is advisable to take the outcome of ancestor regression or other causal discovery methods relying on linear structural equation models with a grain of salt. We make use of this p-value in the data analysis in Section 4. We summarize the properties of our algorithm.

Corollary 1. Assume that the conditions of Theorem 2 hold for every pair j, k ≠ j. Then,

lim_{n→∞} pr{∃ j, k ≠ j : k ∈ ÂN(j) and k ∉ AN(j)} ≤ α.

Further, α̂ is an asymptotically valid p-value for the null hypothesis that the model (1) holds.

3.2 Simulation example
We extend the simulation from Section 2.3 to estimating the ancestors of each variable using the algorithm described in Section 3.1. We compare our method to LiNGAM (Shimizu et al., 2006) using the implementation provided in the R-package pcalg (Kalisch et al., 2012). For every simulation run, we use two slightly different data generating processes. In the first, only one of the Ψ_k follows a Gaussian distribution; in the second, there are two error terms with a normal distribution, and an edge between the two respective nodes is always present. As LiNGAM provides an estimated set of parents, we additionally apply our recursive algorithm to its output to get an estimated set of ancestors, which enables comparison with our method.
The results are shown in Fig. 2. For the model with only one Gaussian error variable, we can reliably detect almost all ancestors without any false causal claims for large enough sample sizes. The few exceptions can be explained by setups that are very close to the non-identifiable case discussed in Proposition 2. Not all curves reach a power of 1 even when letting the significance level become arbitrarily large. This can be explained by the possible insensitivity to the significance level, as sketched in Section 3.1.
We are able to control the family-wise error rate even for low sample sizes using a nominal size of α = 5%, supporting our theoretical results. This is not the case for LiNGAM. LiNGAM is designed such that it must always determine a causal order based on the underlying independent component analysis (Hyvarinen, 1999), even when sufficient information is not available. Therefore, no type I error guarantees can be provided. The power of LiNGAM approaches 1 much faster than that of ancestor regression, and if one allows for a somewhat more liberal type I error, LiNGAM appears preferable in the model with one Gaussian noise term. The picture changes when looking at slight violations of the LiNGAM assumption, i.e., another Gaussian error term. LiNGAM is still more powerful but does not control the error at all. No matter the sample size, a wrong causal claim is made in around 40% of the setups. Ancestor regression is more robust to this deviation as the type I error guarantees do not require non-Gaussian error terms. For the unidentifiable edges, it avoids making any decision and can control the error rate at any desired level at the price of some power reduction. In this simulation, Proposition 1 applies to around 14% of the ancestral connections.

Figure 2: The other symbols correspond to the performance of the LiNGAM algorithm. We consider the different sample sizes 10^2 (solid / square, black), 10^3 (dashed / circle, red), 10^4 (dotted / triangle pointing upward, green), 10^5 (dot-dashed / diamond, blue), and 10^6 (long-dashed / triangle pointing downward, pink). On the left: exactly 1 error term follows a Gaussian distribution. On the right: exactly 2 error terms follow a Gaussian distribution.
We provide additional simulation results for settings varying between non-Gaussian and Gaussian scenarios in Section D in the supplementary material. Settings close to the fully Gaussian case, despite satisfying the LiNGAM assumption (Shimizu et al., 2006) at the population level, clearly worsen the finite-sample performance of LiNGAM.

4 Real data example
We analyze the flow cytometry dataset presented by Sachs et al. (2005). It contains cytometry measurements of 11 phosphorylated proteins and phospholipids. Data is available from various experimental conditions, some of which are interventional environments. The authors provide a "ground truth" on how these quantities affect each other, the so-called consensus network. The dataset has been further analyzed in various follow-up papers, see, e.g., Mooij and Heskes (2013) and Taeb et al. (2022). Following these works, we consider data from 8 different environments, 7 of which are interventional. The sample size per environment ranges from 707 to 913.
For each environment individually, we estimate the ancestral relationships using our recursive algorithm sketched in Section 3.1 with nonlinear function f(X_j) = X_j^3 and α = 0.05. The goodness of fit p-value α̂ per environment, corrected for the number of environments, ranges from 3 × 10^{-12} to 0.14. All but one of these p-values are lower than 0.04, indicating for those environments that the data does not follow the model (1). The deviation can be in terms of hidden variables, nonlinear effects, or noise that is not additive. While the aforementioned and other published findings usually result in one graph harmonized over different environments, our highly varying results across environments suggest questioning the standard "autonomy assumption" in causality (Aldrich, 1989) that an intervention does not change the underlying graph (except for edges that point into the intervened nodes).

Table 1: Detected ancestral relationships; cf. the consensus network in Sachs et al. (2005). The second column reports the raw p-value from ancestor regression, p^j_k, associated with this edge and the third column the raw p-value from the subsequent linear model fit. The rows are ordered by the p-value from ancestor regression from low to high. We present the conclusions of the consensus network in Sachs et al. (2005) as well as of Mooij and Heskes (2013).

Subsequently, we focus on the environment with the highest α̂, which seems to be most conformable with a linear structural equation model. This dataset contains 723 observations. For each node, we fit a linear model using the claimed set of ancestors as predictors to see which ancestors might be direct parents. We summarize our findings in Table 1. Most ancestors show indication of being direct parents. However, as laid out in Section 2.1, we do not have type I error guarantees for finding parents in case some ancestors are missing.
For comparison, we show what conclusions the consensus network as well as Mooij and Heskes (2013) draw for these edges. Our method is in agreement with at least one of these works except for the two edges coming from JNK. One of the indirect paths in Mooij and Heskes (2013) corresponds to the highest p-value in the linear model fit, which is a further agreement. Our estimated ancestral graph, see Figure 3, consists of 4 disconnected components. When considering these components individually, we note that the part containing JNK, where we obtain somewhat unexpected findings, has the strongest indication of violating the model assumptions in terms of the goodness of fit p-value α̂.

A Proofs
A.1 Additional notation

We introduce additional notation that is used in these proofs.
The subindex −k, e.g., x_{−k}, denotes a matrix with all columns but the k-th. I_n is the n-dimensional identity matrix. P_{−k} denotes the orthogonal projection onto the column space of x_{−k} and P^⊥_{−k} = I_n − P_{−k} denotes the orthogonal projection onto its orthogonal complement. P_x is the orthogonal projection onto the column space of x.
For some random vector X, we have the moment matrix Σ_X := E(XX^⊤). This equals the covariance matrix for centered X. We assume this matrix to be invertible. Then, the principal submatrix Σ_{X,−j,−j} := E(X_{−j}X_{−j}^⊤) is also invertible. We denote statistical independence by ⊥.

A.2 Previous work
We adapt some definitions from, and results proven in, Schultheiss et al. (2023).
Using these definitions, we obtain a representation of β^{f,j}. We cite a lemma that is fundamental to our results.
Lemma 1. Assume that the data follows the model (1) without hidden variables. Then, X_k = ∑_{l=1}^p δ_{k,l} Ψ_l, k = 1, . . . , p, for an appropriate set of parameters. Further, the support of γ^j (cf. (4)) is restricted to j's Markov boundary.
A.3 Proof of Theorem 1

The third equality follows from the independence of the Ψ_l. Naturally, for all l ∉ AN(j) ∪ {j} we have Ψ_l ⊥ X_j such that E{Ψ_l f(X_j)} = 0. Further, (ω^{−1})_{l,k} = 0 if l ∉ DE(k) ∪ {k}. To see this, note that ω would be lower triangular if 1, . . . , p denoted a causal order. Then, its inverse would be lower triangular as well. Naturally, the same principle applies for every other permutation. Thus, β^{f,j}_k = 0 if k is not an ancestor of j. Alternatively, we could invoke Lemma 1 to see that Z_k ⊥ X_j for a non-ancestor k, which yields the same conclusion.

A.4 Proof of Theorem 2

Since we assume the covariance matrix to be bounded, the empirical moment matrices converge, where we use invertibility and the continuous mapping theorem. This then implies convergence of the corresponding estimates, and analogously for the remaining terms. We get a better rate for |z_k^⊤ x_l| than for |x_l^⊤ w_k| since we assume existence of the fourth moments. The second line is restricted to covariates for which this variance exists, which includes all non-ancestors as Z_k ⊥ W_k. Using Slutsky's theorem, we obtain the limiting distribution of β̂^{f,j}, which proves the first part of the theorem. It remains to consider the variance estimate. Similar to above, the variance estimate is consistent. Combined, this proves the second part of the theorem. For non-ancestors, β^{f,j}_k = 0 such that W_k = E. Then, the last statement of Theorem 2 follows again from Slutsky's theorem and (6).

A.5 Proof of Theorem 3
We generally have the following identity. Consider first the simpler case where the j-restricted Markov boundary is the full Markov boundary, i.e., all children of k are ancestors of j or j itself. Let Ω := E(XX^⊤)^{-1} and d_k := Ω_{kk}.
Then, the off-diagonal elements can be expressed through the residual regression, which is a standard fact from least squares regression. Thus, the claimed identity follows. This quantity is 0 for all possible f(·) iff the conditional expectation is the constant 0-function. The if-statement is trivial. For the only-if, note that one could choose f(·) accordingly, leading to a nonzero expectation unless f(X_j) ≡ 0. Using this, the last part of the theorem follows directly. For the general case, note that for l in the difference between the Markov boundary and the j-restricted Markov boundary, β^{f,j}_l = 0 ∀f(·) follows from Theorem 1. Thus, the least squares parameter for k and r ∈ MA_{→j}(k) is the same as if these variables did not exist. Therefore, the result for the general case follows directly from the simpler case discussed above.

A.6 Proof of Proposition 1
Consider the least squares solution when only the variables from the restricted Markov boundary are the predictors. From Lemma 1, we get that the residuum, say Z̃_k, is a linear combination of Ψ_k and the Ψ_l for l ∈ CH_{→j}(k). For every r ∈ MA_{→j}(k), dependence with Z̃_k could only be induced by these error terms. By the least squares property, X_r and Z̃_k are uncorrelated. By the Gaussianity of Ψ_k and the Ψ_l, this implies independence. Thus, Z̃_k is independent of all X_r such that the linear least squares fit is also the conditional expectation. Thus, the sufficient condition from Theorem 3 for β^{f,j}_k = 0 holds. Due to Gaussianity and Lemma 1, Z_k is independent of CH_{→j}(k), their descendants, and all of its non-descendants. Therefore, it is also independent of f(X_j) such that the variance as in (5) is consistently estimated.

A.7 Proof of Proposition 2
Decompose the conditional expectation term by term. Recall the definition CH_{→j}(k) = {l}. The last equality follows since Ψ_l =_d θ_{l,k} Ψ_k (equality in distribution) and both random variables depend on the conditioning set in the same way. Therefore, all the terms in E{X_k | X_{MA_{→j}(k)}} are linear combinations, such that the sufficient condition from Theorem 3 holds.
However, Z_k is not independent of X_l in general, such that var(Z_k W_k) = E(Z_k²) E(W_k²) does not hold in general. Then, the limiting distribution of the estimator is not the same as for non-ancestors; see also (5).

A.8 Proof of Corollary 1
Let j, k be such that k ∈ ÂN(j). This means that there is at least one sequence M = {m_0 = k, m_1, . . . , m_{t−1}, m_t = j} such that P^{m_s}_{m_{s−1}} < α ∀s ∈ {1, . . . , t}, where t ≥ 1. If k ∉ AN(j), at least one of these relations must correspond to a false causal discovery, i.e., there is an s such that P^{m_s}_{m_{s−1}} < α but m_{s−1} ∉ AN(m_s). We conclude

{∃ j, k ≠ j : k ∈ ÂN(j) and k ∉ AN(j)} ⟹ {∃ j, k ≠ j : k ∉ AN(j) and P^j_k < α}.

Let r = ∑_{j, k≠j} 1{H_{0,k→j} is true} denote the number of true null hypotheses. By the construction of Bonferroni-Holm,

{∃ j, k ≠ j : k ∉ AN(j) and P^j_k < α} ⟹ {∃ j, k ≠ j : k ∉ AN(j) and p^j_k < α/r}.

Let z^j_k = β̂^{f,j}_k / {v̂ar(β̂^{f,j}_k)}^{1/2} as used in Theorem 2. We find

lim_{n→∞} pr{∃ j, k ≠ j : k ∈ ÂN(j) and k ∉ AN(j)}
≤ lim_{n→∞} pr{∃ j, k ≠ j : k ∉ AN(j) and P^j_k < α}
≤ lim_{n→∞} pr{∃ j, k ≠ j : k ∉ AN(j) and p^j_k < α/r}
= lim_{n→∞} pr{min_{j, k≠j : k ∉ AN(j)} p^j_k < α/r}
≤ α,

which proves the first part of the corollary. The last inequality uses Theorem 2 and the continuous mapping theorem.
As the model (1) excludes the possibility of directed cycles, any output of BuildRecursive that contains cycles must include at least one false causal detection. If α̂ < α, it corresponds to the maximal p-value such that including the corresponding ancestral relationship creates cycles. Therefore, under the model (1) it must hold that lim_{n→∞} pr(α̂ ≤ α) ≤ α, using similar arguments as above.

B Algorithm
Algorithm 1 Nodewise and recursive ancestor detection
Input: data x ∈ R^{n×p}, significance level α ∈ (0, 1) and nonlinear function f(·)
Output: estimated set of ancestors ÂN(j) ∀j ∈ {1, . . . , p}, adjusted significance level α̂

for j = 1 to p do
    Calculate p^j_k ∀k ≠ j using (2) and (3)  # Calculate the p-values of ancestor regression
Apply a multiplicity correction to the list of p^j_k ∀j, k ≠ j; denote the corrected p-values by P^j_k
# Store p-values in a matrix, descendants as rows, ancestors as columns
Define P ∈ R^{p×p} such that P_{j,k} = P^j_k and P_{j,j} = 1
(A, α̂) ← FindStructure(P, α)
for j = 1 to p do
    ÂN(j) ← {k : A_{j,k} = TRUE}  # Read out matrix format

function FindStructure(P, α)
    Define A ∈ {TRUE, FALSE}^{p×p} such that A_{j,k} = TRUE if P_{j,k} < α and else A_{j,k} = FALSE
    A ← BuildRecursive(A)
    while A_{j,j} = TRUE for some j do  # Directed cycles found
        α ← max{P_{j,k} : P_{j,k} < α}  # Reduce the significance level
        Redefine A_{j,k} = TRUE if P_{j,k} < α and else A_{j,k} = FALSE
        A ← BuildRecursive(A)
    return (A, α)

function BuildRecursive(A)
    repeat
        ÂN(j) ← {k : A_{j,k} = TRUE} ∀j
        ÂN(j) ← ÂN(j) ∪ ÂN(k) ∀j, ∀k ∈ ÂN(j)  # Add ancestors of ancestors
        A_{j, ÂN(j)} ← TRUE ∀j  # Store to matrix format
    until A does not change
    return A

C Details on the simulation setup
We use the following distributions for the Ψ_j: two t_7 distributions, a centered Laplace distribution with scale 1, a centered uniform distribution, and a standard normal distribution. Depending on the scenario, the last error distribution is either uniform or Gaussian. The results in Section 2.3 are from the former case. All distributions are normalized to have unit variance. For each simulation run, we randomly permute the distributions to assign them to Ψ_1 to Ψ_6. We create an edge between the two variables with (potentially) Gaussian error terms. The remaining 14 edges X_k → X_j with k < j are present with probability 5/14 each such that an average of 6 parental connections exists.
We assign preliminary edge weights uniformly in [0.5, 1]. These are further scaled such that for every X_j which is not a source node, the standard deviation of ∑_{k∈PA(j)} θ_{j,k} X_k is uniformly chosen from [√0.5, √2]. Thus, the signal-to-noise ratio is between 1/2 and 2. To initialize the graph and the weights, we use the function randomDAG from the R-package pcalg (Kalisch et al., 2012) before applying our changes to enforce the constraints.
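A condensed sketch of such a data-generating process follows. It is deliberately simplified (uniform errors throughout, no forced edge between the Gaussian nodes) and all names are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
# adjacency: adj[j, k] = True means X_k -> X_j for k < j
adj = np.triu(rng.random((p, p)) < 5 / 14, k=1).T
theta = np.where(adj, rng.uniform(0.5, 1.0, size=(p, p)), 0.0)

# normalized unit-variance errors (uniform here for simplicity)
n = 10_000
psi = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, p))

x = np.zeros((n, p))
for j in range(p):
    contrib = x[:, :j] @ theta[j, :j]        # parental contribution
    if adj[j].any():
        # rescale so that its sd lies in [sqrt(0.5), sqrt(2)]
        target = np.sqrt(rng.uniform(0.5, 2.0))
        contrib *= target / contrib.std()
    x[:, j] = contrib + psi[:, j]
```

Rescaling the parental contribution per node is equivalent to rescaling the preliminary edge weights, which keeps the signal-to-noise ratio between 1/2 and 2 as described above.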

D Additional simulation results
To analyse the effect of close-to-Gaussian error distributions, we consider a further variation of the first scenario in Section C. Call the normalized error terms from before Ψ̃_j. These are mixed with a standard Gaussian component Ψ̄_j such that Ψ_j = √(1 − γ) Ψ̃_j + √γ Ψ̄_j ∀j.
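The mixing step can be sketched as follows (illustrative names; a uniform sample stands in for the normalized Ψ̃_j):

```python
import numpy as np

def mix_errors(psi_tilde, gamma, rng):
    """Blend unit-variance errors with an independent standard Gaussian
    so that the Gaussian part contributes a fraction gamma of the variance."""
    psi_bar = rng.standard_normal(psi_tilde.shape)
    return np.sqrt(1 - gamma) * psi_tilde + np.sqrt(gamma) * psi_bar

rng = np.random.default_rng(3)
psi_tilde = rng.uniform(-np.sqrt(3), np.sqrt(3), size=100_000)  # unit variance
psi = mix_errors(psi_tilde, gamma=0.5, rng=rng)
# psi still has (approximately) unit variance, but is closer to Gaussian
```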
Thus, the Gaussian term causes a fraction γ of the variance. We vary γ from 0, which is the setup from before, to 1 in steps of 0.25. We consider the same performance metrics as in Figure 2. For the sake of overview, we restrict ourselves to n = 10^3 and n = 10^4 in Figure 4. For both LiNGAM and ancestor regression, increasing the amount of Gaussianity leads to a performance drop. Thus, not only fully Gaussian error terms harm these methods. For γ = 0.75, 10^4 samples are not sufficient to keep the type I error of LiNGAM low. For ancestor regression, a nearly Gaussian error distribution leads to a substantial drop in power. However, the type I error remains under control, supporting Corollary 1. While power considerations are clearly in favor of LiNGAM, especially in easy scenarios, our method leads to fewer but more trustworthy findings in close-to-unidentifiable scenarios within the class of linear structural equation models.

Figure 4: The other symbols correspond to the performance of the LiNGAM algorithm. We consider the different values of γ: 0 (solid / square, black), 0.25 (dashed / circle, red), 0.5 (dotted / triangle pointing upward, green), 0.75 (dot-dashed / diamond, blue), and 1 (long-dashed / triangle pointing downward, pink). The sample size is 10^3 on the left and 10^4 on the right.