Deep redshift topological lensing: strategies for the T^3 candidate

The 3-torus (T^3) FLRW model better fits the nearly zero large-scale auto-correlation of the WMAP CMB sky maps than the infinite flat model. The T^3 model's WMAP parameters imply approximately equal-redshift topological lensing at z ~ 6. We investigate observational strategies for rejecting the T^3 solution or providing candidate topologically lensed galaxy pairs. T^3 holonomies are applied to (i) existing z ~ 6 observations and (ii) simulated observations, creating multiply connected catalogues. Corresponding simply connected catalogues are generated. Each catalogue is analysed using a successive filter method and by collecting matched quadruples. Quadruple statistics are compared between the multiply and simply connected catalogues. The expected rejection of the hypothesis, or detection of candidate topologically lensed galaxies, is possible at a significance of 5% for a pair of T^3 axis-centred northern and southern surveys if the photometric redshift accuracy is σ(z_phot) < 0.01 for a pair of nearly complete 100 deg^2 surveys with a total of > 500 galaxies over 4.3 < z < 6.6, or σ(z_phot) < 0.02 for a pair of 196 deg^2 surveys with > 400 galaxies over 4 < z < 7. Dropping the maximum time interval in a pair from Δt = 1 Gyr/h to Δt = 0.1 Gyr/h tightens the requirement to σ(z_phot) < 0.005 or σ(z_phot) < 0.01, respectively. Millions of z ~ 6 galaxies will be observed over fields of these sizes during the coming decades, implying much stronger constraints. The question is not if the hypothesis will be rejected or confirmed, but when.


INTRODUCTION
The predictions implied by cosmic topology interpretations of the lack of large-scale (hereafter, r ≳ 10 h⁻¹ Gpc) structure in the cosmic microwave background (CMB) maps are relevant for the design of deep redshift observational strategies. The large-scale structure in the CMB as observed by the COsmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP) (Bennett et al. 2003), i.e., in particular, the second moment of the temperature fluctuation distribution, can be statistically analysed using either (i) spherical harmonics analysis of the temperature fluctuations, which at large angular scales yields estimates that depend strongly on statistical assumptions if observations near the galactic plane are contaminated (Copi et al. 2009, and references therein), or (ii) the angular or spatial two-point
auto-correlation function of the temperature fluctuations. [In principle, all orders of the correlation functions are taken into account in integrated form by using Minkowski functionals (Mecke et al. 1994); for iso-temperature excursion sets in relation to CMB analysis see Schmalzing & Buchert (1997); Schmalzing & Gorski (1998); Ducout et al. (2013).] Interpreting the (second moment) large-scale harmonics to be generated by statistically independent Gaussian distributions requires an unlikely "conspiracy" between the harmonics at low l values (Copi et al. 2007, 2009, 2010; Sarkar et al. 2011) in order to match the nearly zero value of the (two-point) auto-correlation function at large (r ≳ r_SLS ≈ 10.0 h⁻¹ Gpc) pair separations (Spergel et al. 2003, Sect. 7, Fig. 16; Roukema et al. 2008b, also Fig. 1), where r_SLS is the comoving distance to the surface of last scattering (SLS). Thus, applying Occam's razor, our interpretation is that an approximately zero large-scale auto-correlation function provides a simpler statistical model for the largest scales than the harmonic analysis.
While there is a lack of consensus in fitting models to the WMAP data, the T^3 model fits have a particular advantage from an observational point of view: they imply sub-SLS topological lensing at redshifts that are becoming observationally realistic. Many cosmic topology CMB analyses consider ensembles of possible sizes and orientations of the fundamental domain of a given model. A more empirical approach requires that the fundamental domain of the real Universe must have a specific size and orientation in astronomical coordinates. We live in a specific realisation of the physical processes that led to our Universe, not an ensemble of realisations. Aurich (2008) applied the optimal cross-correlation method (Roukema et al. 2008b,a) to find a specific T^3 solution. This solution implies equal-redshift topological lensing at redshifts z ≳ 6, the redshift range at which many detections are expected with the upcoming generation of new instruments and telescopes. Thus, this candidate 3-manifold is, in principle, empirically testable independently of the CMB. Roukema & Kazimierczak (2011) presented a corollary of the matched circles principle (Cornish et al. 1996, 1998): the matched discs principle. In general, topologically lensed pairs of images of a given physical object occur at different redshifts. This complicates observational tests: the lifetimes of quasars are short compared to typical differences in lookback times, and at high redshifts, a galaxy seen in one direction at a redshift of z_1 may be absent at an expected position with a higher redshift z_2 > z_1 because the initial starburst occurs after z_2 and before z_1.

[Table 1: the Aurich (2008) T^3 solution in galactic coordinates, with a directional uncertainty of ∼2° (great circle degrees). Notes: the typographical error (sign of b for direction 6) in Table 1 of Aurich (2008) has been corrected (cf. Figs 6, 7 of Aurich 2008); the directions 4, 5, and 6 are exactly antipodal to the directions 1, 2, and 3, respectively, and are listed for convenience only; mean and standard error adopted from our analysis (Sect. 2.1).]

Matched discs minimise this problem, by selecting the set of spatial positions (discs) for which a topologically lensed pair of any given galaxy occurs at identical redshifts in the two matched discs (Fig. 1, Roukema & Kazimierczak 2011). The redshift is lowest at the centre of a disc and increases radially outwards. The Poincaré dodecahedral space matched discs occur with z_min = 106 ± 18, so detection of gravitationally collapsed objects (galaxies) would require the existence and detection of rare high overdensity peaks of the density fluctuation distribution that collapsed very early. In contrast, the Aurich (2008) T^3 candidate has z_min ∼ 5 (Fig. 10, Sect. 3, Aurich 2008). While relatively small numbers of quasars have been detected with z ≳ 5, many Lyman break galaxies (LBGs) and Lyman alpha emitters (LAEs) in this redshift range have now been observed [with spectroscopic confirmations up to z = 8.6 (Lehnert et al. 2010)] and many more are likely to be detected with the new instruments and telescopes of the coming decade. LBGs and LAEs have a topological lensing advantage over quasars: the emission from a given object is expected to be much more isotropic than the beamed jets of a quasar, and the lifetime is likely to be much longer, because of long galaxy dynamical time scales and stellar lifetimes.
Thus, the Aurich (2008) solution is going to be testable over matched discs of many square degrees using observations to be made over the next few years (e.g., VISTA/VIKING, Subaru/CISCO, EUCLID, VLT/X-Shooter, VLT/MUSE, JWST; see Sect. 2.2.2 for details). Although surveys that are approximately limited to a narrow band in redshift correspond to thin shells rather than thin discs, surveys over solid angles ≪ 4π (e.g. many square degrees) can strongly overlap with matched discs if made over the predicted redshift ranges. Since observational and theoretical uncertainties in the candidate 3-manifold cannot be avoided, a successive filter method for finding sets of likely topologically lensed pairs (Roukema 1996; Uzan et al. 1999; Marecki et al. 2005; Fujii & Yoshii 2011, 2013) is preferable to the pair separation histogram (PSH) method (Lehoucq et al. 1996), as the former can reveal very weak topological lensing signals (Fujii & Yoshii 2011). Here, a successive filter method, motivated by the matched discs principle, is applied to existing observational data and to planned or possible surveys, using numerical simulations, in order to investigate observational strategies that can reject the T^3 solution, on the assumption that the FLRW metric is close enough to physical reality.
In Sect. 2.1, we briefly present the Aurich (2008) T^3 solution, matched discs, and not-quite-matched beams. Existing and planned or possible observations that can be used to predict sky positions and redshifts of topologically lensed copies are presented in Sect. 2.2.

[Fig. 1: the mean cross-correlation ξ_C, Eq. (1), in µK² as a function of Universe size L_{T^3} in units of c/H_0, for the WMAP 9-year ILC map using the KQ85 galactic mask, for r = 0.2, 0.4, 1.0 h⁻¹ Gpc from top to bottom (red, green, blue, respectively, online), for the orientation given in Table 1.]

For existing observations, we apply the successive filter method to two partly simulated catalogues. The signal is represented by analysing the union of the observational catalogue with its topologically lensed images on (approximately) the opposite side(s) of the sky. Noise is represented by analysing the union of the observational catalogue with a simulated (non-lensed) data set in the region of comoving space where lensed objects would be found. Thus, signal is compared to noise. For planned or possible observations, the data are fully simulated, comparing a multiply connected T^3 simulation with a simply connected R^3 simulation. The simulation methods are described in Sect. 2.3, and the successive filter method is presented in Sect. 2.4. Statistical tests, either to find a low probability of falsely excluding the T^3 hypothesis, or to find a low probability of falsely detecting evidence in favour of the hypothesis, are presented in Sect. 2.5.
Results are presented in Sect. 3 and conclusions are given in Sect. 4. All distances are FLRW comoving distances except if stated otherwise, and Ω m and Ω Λ are the dimensionless matter density and dark energy density, respectively. The Hubble constant is written H 0 = 100h km/s/Mpc and c is the conversion factor from time units to space units (e.g. Taylor & Wheeler 1992).

The Aurich (2008) T 3 solution, matched discs, and not-quite-matched beams
The coordinates of the Aurich (2008) T^3 solution are given in Table 1.

[Fig. 2: as in Roukema & Kazimierczak (2011), but for the Aurich (2008) T^3 solution, with the addition of matched pencil beam observations. Spheres corresponding to an example redshift z = 6.2 are shown intersecting themselves (in the covering space). The redshift at the centres of the matched discs is less than this, and within a disc, the redshift increases radially out to the surface of last scattering (SLS). An observer (at the centre) pointing a telescope towards the right-hand matched disc can observe the z ≲ 6.2 portion of the right-hand copy of the pencil beam at roughly the same redshifts as those of the corresponding portion of the left-hand copy of the pencil beam.]

[Fig. 3: relation between a topological image pair with respect to the observer and the matched disc centres, as in Fig. 2. The object is at comoving radial distances r and r′, respectively, and at angles θ and θ′, respectively, from the matched disc centres. The relation between these is given in Eq. (2).]

We analyse the WMAP nine-year ILC map, using the corresponding KQ85 galactic contamination mask (Bennett et al. 2013), and show the dependence of the sub-gigaparsec-scale mean cross-correlation in Fig. 1.

[Table 2: redshifts at angle β from the disc centre for the Aurich (2008) T^3 solution, but with L_{T^3} = 3.8 motivated by the WMAP7 ILC map (Fig. 1), and the approximate fraction of the full sky covered by the three pairs of discs, ω/(4π), depending on the assumed FLRW metric parameter Ω_m, where Ω_Λ = 1 − Ω_m.]

The cross-correlation ξ_C is a two-point correlation function that at small scales correlates pairs of points that are observed to be separated by large distances but according to the hypothesised 3-manifold are also separated by a small distance. The cross-correlation should be low if the hypothesis is wrong (Fig. 2, Roukema et al. 2008b) and high if the hypothesis is correct (Fig. 3, Roukema et al. 2008b). The strongest cross-correlation in Fig. 1 occurs for L_{T^3} ≈ 3.80 ± 0.05 (in units of c/H_0), where the uncertainty is the half-width of the maximum at its base. The Aurich (2008) uncertainties are estimated from the pseudo-probability function used in the Monte Carlo Markov Chain search for an optimal solution. Since the pseudo-probability estimator is not a true probability, other methods are needed to estimate a realistic uncertainty. Figure 9 of Aurich (2008), showing the estimates in different foreground-subtracted single waveband WMAP maps, suggests σ(L_{T^3}) ≈ 0.02, combining random and some systematic sources of error (ILC versus single band maps). The shift by 0.05 (c/H_0) between Aurich (2008)'s estimate of L_{T^3} for WMAP5 data and that shown in Fig. 1 suggests that inclusion of systematic error would give σ(L_{T^3}) ≳ 0.05, i.e. 150 h⁻¹ Mpc, rather than σ(L_{T^3}) = 0.02. The angular uncertainty given by Aurich (2008), 2°, is similar to that of the Poincaré dodecahedral space solution found earlier with the optimal cross-correlation method (Roukema et al. 2008b,a), i.e. a few hundred comoving h⁻¹ Mpc in a tangential direction at a radial distance of 5 to 10 h⁻¹ Gpc.
Leaving aside these uncertainties for the moment, the observationally most dramatic element of this T^3 solution is revealed by considering the redshift of an object topologically lensed in opposite directions along a fundamental axis. Aurich (2008) shows this in his Fig. 10 and briefly discusses it. Here, we use the principle of matched discs (Roukema & Kazimierczak 2011), shown with some extra detail in Fig. 2, and Table 2, listing the redshifts at the centres of a pair of matched discs and at circles increasing radially outwards to form the pair of matched discs. The fractional sky coverage ω/(4π) = 12π(1 − cos β)/(4π) = 3(1 − cos β), for discs of angular radius β, is also shown.
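The fractional sky coverage quoted above follows directly from the solid angle of a spherical cap. A minimal sketch (the function name is ours, and overlap between discs is ignored):

```python
import math

def disc_pair_sky_fraction(beta_deg, n_pairs=3):
    """Fraction of the full sky, omega/(4*pi), covered by n_pairs pairs
    of discs of angular radius beta_deg, each disc being a spherical cap
    of solid angle 2*pi*(1 - cos(beta)); overlap between discs is ignored."""
    beta = math.radians(beta_deg)
    omega = 2 * n_pairs * 2 * math.pi * (1.0 - math.cos(beta))
    return omega / (4.0 * math.pi)   # = 3*(1 - cos(beta)) for three pairs
```

For β = 10°, this gives ω/(4π) ≈ 0.046, i.e. roughly 5% of the sky for the three disc pairs.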
Given the rapidly increasing numbers of observed objects in the 5 ≲ z ≲ 6 range, the beginning of detections around 6 ≲ z ≲ 8, and the large fractional covering of the sky by the matched discs, it is clear that, independently of the few hundred h⁻¹ Mpc uncertainties in the T^3 solution, the observational data to reject or confirm the solution to very high statistical significance are going to become available over the next decade or two. The absence of topologically lensed images of quasars can be interpreted as a problem of their short lifetimes and highly anisotropic nature (beamed jets), but the absence of topologically lensed images of early forming galaxies would be much more difficult to explain.
Let us return to the uncertainty in the T^3 solution parameters. Pencil beam observations ("deep fields") that probe to z ∼ 6 have typical widths of at most a few arcminutes, i.e. a few h⁻¹ Mpc in comoving thickness at these redshifts. Thus, an accuracy of a few hundred h⁻¹ Mpc suggests that deep fields of a few degrees in size, pointed in the appropriate directions, will most likely be needed. The fourth row of entries in Table 2 shows the redshifts at the β = 10° radius circle for L_{T^3} = 4.0, i.e. 4σ greater than the WMAP9-ILC estimate shown in Fig. 1 if σ is defined as the absolute difference between the Aurich (2008) WMAP5 estimate and the WMAP9-ILC estimate made here. Thus, for surveys limited to, e.g. z < 6.2, observations closer to the face centres would better test the solution. Figure 2 also shows a pair of not-quite-matched beams. These are not truly matched, since they exactly match only where they intersect the matched discs. The topologically matched parts of the two beams to the left (in Fig. 2) of their respective discs occur in the left-hand copy of the beam at slightly higher redshifts and in the right-hand copy at slightly lower redshifts, with respect to the redshift of the intersections with the matched discs. Since realistic surveys usually have wide redshift distributions, there is a fair chance of covering the topologically lensed region. However, this requires that the survey of the right-hand copy (in this figure) of the beam be wide enough in solid angle to cover the projection of the "beam" onto the sky, since the right-hand copy is not a beam from the observer's point of view. For a small angular distance of the left-hand copy of the beam from the matched disc centre, the projected solid angle of a small portion of the right-hand copy will not be too large. Thus, for a small angular offset from the matched disc centres (axes of the fundamental domain), "not-quite-matched beams" can potentially test topological lensing hypotheses.
The relation between an object near a matched disc centre and its topological image is illustrated (for the T^3 case) in Fig. 3, giving the relation in Eq. (2). While matched discs and not-quite-matched beams indicate the parts of the sky that should be observed, detecting a significant statistical signal and generating a list of candidate pairs of topologically lensed objects requires a sensitive statistical method. This is presented below in Sect. 2.4. However, first we need to consider existing, planned and possible surveys in the three-dimensional regions of interest.

Existing observations

Table 3 shows that almost all of the z ∼ 6 objects that are so far known near the T^3 axes are those near the high galactic latitude, northern fundamental direction 1, although a few are also known near the corresponding southern fundamental direction 4. Most of the northern objects are from the Subaru Deep Field (Maihara et al. 2001; Shimasaku et al. 2006), listed along with some of the other well-known deep fields in Table 4. The galactic coordinates and redshifts of the northern objects are listed as part of Table 5.

[Table 5: known objects near the northern galactic matched disc at sky position (l_1, b_1) and redshift z_1, and their predicted most likely southern galactic positions (l_2, b_2) and redshifts z_2, and redshift and cosmological time differences ∆z := z_1 − z_2 and ∆t := t_2 − t_1 (in h⁻¹ Myr), respectively, assuming metric parameters Ω_m = 0.28, Ω_r = 1.65 × 10⁻⁴ Ω_m, Ω_Λ = 1 − Ω_m − Ω_r and the T^3 fundamental axis (l = 6°, b = +77°) of length L = 11.4 h⁻¹ Gpc.]

Planned and possible observations
First let us consider high-redshift quasars. The VISTA/VIKING 4-m class telescope project should discover quasars at z ∼ 7 over about 1500 deg² centred on the northern and southern Galactic poles (Findlay et al. 2011).

[Table 4: some well-known deep fields and their angular distance θ_i from the nearest T^3 axis (listed if θ ≤ 20°).]

Although Table 2 here and the expected completeness levels shown in Fig. 13 of Findlay et al. (2011) indicate that coverage of the desired regions is possible (depending on the value of Ω_m), only about 8 quasars are expected in the whole survey (Sect. 4.1, Findlay et al. 2011), clearly too low a number to obtain significant evidence either for or against the T^3 candidate. Polsterer et al. (2012) find 22,992 photometric quasar candidates with 5.5 < z < 6.2 and an estimated error of σ(∆z) = 0.087, expecting about half the candidates to be true quasars. However, the solid angle from which the quasars are selected is about 2π, giving a candidate surface density of about 1/deg² and an expected spectroscopic number density of about 0.5/deg² if a complete spectroscopic followup were to be performed. This is about 100 times higher than for the VISTA/VIKING survey (as presently designed), but still somewhat low. While quasars dominated high-redshift records for the decades when z > 1 was considered a high redshift for a spectroscopically confirmed extragalactic object, this seems less likely at z ∼ 6. For a given apparent magnitude limit, LBG and LAE catalogues reach much higher number densities at z ∼ 6.
Plans for LBG and LAE searches typically focus on the existing "deep fields", with the aim of achieving wide coverage across the electromagnetic spectrum. Several of these fields and their angular distances θ_i from the closest T^3 axes are given in Table 4, indicating which fields would be the most useful for testing the T^3 candidate via LBG and LAE searches. Comparison with Table 2 indicates that for 5 ≲ z ≲ 7 searches, the Subaru Deep Field and its corresponding northern region is optimal, while for 5.5 ≲ z ≲ 8 searches, the Virmos Very Deep Survey fields 0226−04 and 1400+05 and their northern and southern counterparts, respectively, would be worth observing.
The EUCLID mission, planned for launch in 2019, has a deep survey (Sect. 1.3, Refregier et al. 2010), the EUCLID optical and NIR Deep Imaging Survey, planned over 40 deg². This should obtain "thousands" of likely LBGs and LAEs with z > 6 based on photometric redshifts (Sect. 14.2, Refregier et al. 2010), i.e., ≳ 50/deg². If a few objects had quadruple multiplicities that made them very likely topological lensing candidates, then optical/near-infrared spectroscopic followup with an instrument such as X-Shooter (300 nm ≲ λ ≲ 2400 nm, Vernet et al. 2011) on the Very Large Telescope (VLT) for the southern objects, and corresponding spectra of the northern objects with the Keck or Subaru telescopes, should obtain enough rest-frame spectral energy distribution information to check whether the would-be topologically lensed pairs of objects resemble each other more than would be expected for arbitrary pairs of objects at similar redshifts. The Multi Unit Spectroscopic Explorer (MUSE) on the VLT, which should be well-tested by the time that the EUCLID data are available, would be useful for secondary followup to study the spatial environments of the candidate lensed objects, especially when the cosmological time difference between the members of a pair is small compared to a typical galaxy dynamical time scale.
Thus, we should simulate several square degrees of observations with parameters consistent with those estimated for EUCLID. In particular, it would be interesting to see if a statistical signal could be obtained from photometric redshifts alone, prior to spending large amounts of exposure time on highly sought-after telescope/instrument combinations.
The Ultra-Deep Survey of the James Webb Space Telescope (JWST) should detect galaxies at z ≳ 6, but over only about 10 arcmin² (Sect. 2, Table II, Gardner et al. 2006). The JWST's planned Deep Wide Survey should find galaxies over 1 ≲ z ≲ 6 over a larger solid angle, 100 arcmin² (Sect. 3.6, Table III, Gardner et al. 2006). For an initial uncertainty of about two degrees in the T^3 axes, these surveys are clearly too narrow.

Simulations
Simulating searches for matched quadruples requires comparing observations in a T^3 model to those in an R^3 model. The existing Subaru Deep Field observations are used to provide an observationally based simulation of both sorts: the southern field is generated for the T^3 model by applying the Aurich (2008) T^3 holonomy (Table 1) to the Subaru galaxies, and is simulated independently of real observations in the case of the R^3 model. For a hypothetical observational programme aimed at the expected matched disc centres at both ends of the 1–4 axis, fully simulated data are needed in both cases.
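In T^3, every holonomy is a translation in the covering space R^3, so applying it to a catalogue is a vector shift along a fundamental axis. A minimal sketch, assuming comoving Cartesian coordinates in h⁻¹ Mpc and the axis direction and length quoted for Table 5 (function names are ours):

```python
import math

def axis_unit_vector(l_deg, b_deg):
    """Unit vector for galactic longitude l and latitude b, in degrees."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return (math.cos(b) * math.cos(l), math.cos(b) * math.sin(l), math.sin(b))

def apply_t3_holonomy(xyz, axis, length, n=1):
    """Image of a comoving position under n applications of a T^3 holonomy:
    a translation by n*length along the axis in the covering space R^3."""
    return tuple(x + n * length * a for x, a in zip(xyz, axis))

# Axis direction (l = 6 deg, b = +77 deg) and L = 11.4 h^-1 Gpc, as quoted
# for Table 5; positions in comoving h^-1 Mpc.
axis_1 = axis_unit_vector(6.0, 77.0)
image = apply_t3_holonomy((0.0, 0.0, 5000.0), axis_1, 11400.0)
```

Since the translation is an isometry, all pair separations within a region are preserved in its topological copy.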
The spatial two-point auto-correlation function ξ(r) could, in principle, lead to excess non-topological chance isometries if the tolerances for requiring isometry are not tight enough. For the low number densities of objects expected, the effect is unlikely to be strong. Half of a large number of points are first generated uniformly within a redshift shell defined by the required redshift limits. Each further point is chosen by randomly selecting an existing point, and placing the new point in a 3-Euclidean direction chosen uniformly from 4π ster at a comoving distance r chosen with probability P(r) dr ∝ [1 + ξ(r)] dr, where ξ(r) = (r/r_0)^{−γ} for a correlation length r_0 and power-law index γ, and numerical cutoffs r_min ≤ r ≤ r_max. Points that do not fall in the redshift shell are ignored, and new points are generated in the same way until the required number fill the shell. Numerical measurement of the resulting simulated distributions shows that in practice, this simulates a stronger correlation than that required. Thus, the chance of non-topological isometries is overestimated, i.e. our results are conservative: real observations are less likely to give a false positive detection. Moreover, in order to err on the side of possibly underestimating the numbers of matching quadruples in the multiply connected case, the correlation function is not used when simulating this case; uniform distributions are drawn from instead. Again, this is conservative: we slightly underestimate the statistical significance of the method (Sect. 2.5).
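The hierarchical generation step described above can be sketched as follows, assuming the power-law form ξ(r) = (r/r_0)^(−γ) and, for simplicity, a shell bounded in comoving radius rather than redshift; all names are ours, with the defaults matching the values quoted for the Fig. 4 caption:

```python
import math
import random

def xi(r, r0=10.0, gamma=1.8):
    """Assumed power-law two-point correlation function (r/r0)**(-gamma)."""
    return (r0 / r) ** gamma

def draw_separation(r0=10.0, gamma=1.8, rmin=1.0, rmax=100.0):
    """Draw r in [rmin, rmax] with density proportional to 1 + xi(r),
    by rejection sampling against the maximum at r = rmin."""
    pmax = 1.0 + xi(rmin, r0, gamma)
    while True:
        r = random.uniform(rmin, rmax)
        if random.uniform(0.0, pmax) <= 1.0 + xi(r, r0, gamma):
            return r

def correlated_shell_catalogue(n, shell=(7000.0, 7500.0), **kwargs):
    """n points in a comoving shell (inner, outer radii in h^-1 Mpc):
    the first half uniform, the rest placed at a correlated separation
    from a randomly chosen existing point; out-of-shell points are retried."""
    origin = (0.0, 0.0, 0.0)
    pts = []
    while len(pts) < n // 2:                        # uniform half
        p = tuple(random.uniform(-shell[1], shell[1]) for _ in range(3))
        if shell[0] <= math.dist(p, origin) <= shell[1]:
            pts.append(p)
    while len(pts) < n:                             # clustered half
        q = random.choice(pts)
        r = draw_separation(**kwargs)
        u = [random.gauss(0.0, 1.0) for _ in range(3)]  # isotropic direction
        norm = math.sqrt(sum(x * x for x in u)) or 1.0
        p = tuple(qi + r * ui / norm for qi, ui in zip(q, u))
        if shell[0] <= math.dist(p, origin) <= shell[1]:
            pts.append(p)
    return pts
```

As in the text, this one-parent attachment scheme overshoots the target clustering strength, which is the conservative direction for false-positive estimates.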
Peculiar velocities and the uncertainties in redshift estimation need to be simulated (Roukema 1996). Photometric redshift errors are those of most interest here: are they small enough to significantly discriminate between signal and noise? These errors are all simulated radially using a peculiar rapidity φ_pec selected from a Gaussian distribution of mean zero and standard deviation atanh(σ(β_pec)) for a single input parameter, the peculiar velocity standard deviation σ(β_pec), and using Eq. (A4), where β_pec = tanh φ_pec and z_pec = [(1 + β_pec)/(1 − β_pec)]^{1/2} − 1. The use of rapidities rather than velocities avoids the unphysical case of |β_pec| ≥ 1 when simulating photometric redshifts with high σ(β_pec).
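The rapidity-based draw can be sketched directly from the formulas above (the combination with the cosmological redshift via Eq. (A4) is not reproduced here; the function name is ours):

```python
import math
import random

def simulated_peculiar_redshift(sigma_beta):
    """Draw z_pec from a Gaussian in rapidity:
    phi_pec ~ N(0, atanh(sigma_beta)), beta_pec = tanh(phi_pec),
    so |beta_pec| < 1 is guaranteed even for large scatter, and
    z_pec = sqrt((1 + beta_pec) / (1 - beta_pec)) - 1."""
    phi_pec = random.gauss(0.0, math.atanh(sigma_beta))
    beta_pec = math.tanh(phi_pec)
    return math.sqrt((1.0 + beta_pec) / (1.0 - beta_pec)) - 1.0
```

Because tanh maps the whole real line into (−1, 1), even a large input scatter σ(β_pec) close to 1 never produces a superluminal peculiar velocity.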
Before applying the holonomies in a given simulation, random Gaussian errors are added to the T^3 axis parameters given in Table 1. "Observations" of the simulated catalogues are carried out in galactic coordinate limited regions at the expected position(s). Lehoucq et al. (1999) introduced the terminology of type II pairs and type I pairs (or n-tuples) for concepts that had been introduced earlier by Lehoucq et al. (1996) and Roukema (1996), respectively. Type I pairs or n-tuples can be thought of as the matching between a local region of objects and its distant copy. Since a holonomy of an FLRW multiply connected model is an isometry, the distances among the members within the "original" region should correspond to the distances among the members of the copy of that region. A type II pair, in a space such as T^3, corresponds to an object and its copy; in T^3, this separation is a vector in the covering space R^3. Difficulties in finding statistically significant numbers of either type of pair given realistic observational parameters and noise led to a new way of collecting type I pairs (Uzan et al. 1999) and a successive filter method (Marecki et al. 2005). Fujii & Yoshii (2011) presented a variation on the successive filter method, along with an analysis step that is roughly equivalent to collecting together n-tuples of mixed type for all n ≥ 4, or to the Marecki et al. (2005) bunches of pairs filter (Sect. 3.2.5, Fujii & Yoshii 2011). The latter step, made possible by the successive filters, significantly reduces the combinatorial problem presented in Roukema (1996). A catalogue containing two distant regions, each with N objects, might contain just one pair of matched n-tuples across the two regions, but searching all (N C n)² combinations of objects with N = 100 and n = 7 would require ∼10²⁰ comparisons of 7-tuples.
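The quoted combinatorial cost can be checked in one line:

```python
from math import comb

# Brute-force matching of all 7-tuples between two regions of N = 100
# objects each would need comb(100, 7)**2 comparisons of 7-tuples.
n_comparisons = comb(100, 7) ** 2   # about 2.6e20, i.e. the quoted ~10^20
```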
Fujii & Yoshii (2011) use toy simulations to show that a very small number of matched n-tuples can be detected by the full method of successive filtering and collecting.

The successive filter method for type I pairs
The successive filter method implemented here tests four simulated (and/or real) galaxies at (x, y, z)(i), (x, y, z)(j), (x, y, z)(k), and (x, y, z)(l) via the following filters, applied in order, where (i, j, k, l) is an ordered choice of unequal indices in the list of galaxies. A crossed quadruple parameter χ(q) ∈ ℤ₂ is defined for a given ordered quadruple q = (i, j, k, l) in order to relate the lifetime and bunches of pairs (BoP) filters (Marecki et al. 2005). This algorithm allows the pair separations d, where d(·, ·) is the comoving distance between two arbitrary points in the covering space R³, and the (signed) x, y, z component separations of each pair, to be calculated first, and the loop for finding quadruples to be performed over pairs of pairs. Arithmetically, the change from subtraction to addition of pair separations in the BoP filter is equivalent to swapping an ((i, j), (k, l)) pair of ordered pairs to ((i, k), (j, l)). The filters are applied successively: (1) the type I pair filter; (2) the lifetime filter; (3) the BoP filter. Substituting a difference test for the BoP filter, i.e. |d(i, j) − d(k, l)| < ε_BoP and |d(i, k) − d(j, l)| < ε_BoP for the uncrossed and crossed pairs of pairs, respectively, would allow non-planar quadrilaterals (fold a rectangular sheet of paper along its diagonal to see this), whereas only a parallelogram can represent a T³ topological quadruple. A list of quadruples (pairs of pairs, each associated with a crossed quadruple parameter χ) that satisfy all three successive filters is obtained. An arbitrary galaxy i is a member of s_i ≥ 0 quadruples. The frequency of s in a given simulated set of galaxies is f(s), i.e. f(s) := |{i : s_i = s}|.
If a topological lensing signal is present, there should be a high number of galaxies that are each members of many quadruples, i.e. f(s) should be high for high s. This is the critical step in the successive filter method, introduced in Sect. 3.2.5 of Fujii & Yoshii (2011): for s ≫ 1, the statistic f(s) should be significantly higher in a catalogue containing topological lensing than in a simply connected catalogue.
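The pairs-of-pairs loop and the f(s) statistic can be sketched as follows. This is only an illustration: the exact filter formulas are given by equations not reproduced above, so the sketch implements just the parallelogram (BoP) condition on the signed component separations, a simple lifetime cut, and the quadruple-membership frequencies; the type I pair pre-filter and the crossed-quadruple bookkeeping χ are omitted.

```python
import itertools
from collections import Counter

def matched_quadruples(points, times, eps_bop, delta_t):
    """Collect quadruples as pairs of pairs ((i, j), (k, l)) of four distinct
    galaxies whose (signed) separation vectors agree component-wise within
    eps_bop (a parallelogram in the covering space R^3, as required for a
    T^3 topological quadruple), and whose cosmic-time differences pass a
    simple lifetime cut |t_i - t_j| < delta_t for both pairs."""
    pairs = list(itertools.combinations(range(len(points)), 2))
    sep = {p: tuple(points[p[1]][c] - points[p[0]][c] for c in range(3))
           for p in pairs}          # precompute component separations
    quads = []
    for p, q in itertools.combinations(pairs, 2):
        if set(p) & set(q):
            continue                # four distinct galaxies required
        if abs(times[p[0]] - times[p[1]]) >= delta_t:
            continue
        if abs(times[q[0]] - times[q[1]]) >= delta_t:
            continue
        if all(abs(a - b) < eps_bop for a, b in zip(sep[p], sep[q])):
            quads.append((p, q))
    return quads

def membership_frequencies(quads):
    """f(s) := number of galaxies that are members of exactly s quadruples."""
    s_i = Counter(i for quad in quads for pair in quad for i in pair)
    return Counter(s_i.values())
```

Precomputing the pair separations and looping over pairs of pairs, rather than over raw 4-tuples of galaxies, is what makes the collection step tractable.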

Statistical significance
For an arbitrary realisation, let us define F := Σ_{s ≥ s*} f(s), the cumulative number of galaxies that are each members of many quadruples, for some quadruple membership threshold s* > 0, which formalises the word "many" in "many quadruples". Thus, for a fixed value s*, F is a random variable which we model numerically. To estimate what observational strategy is required to have a low chance of a false negative inference from the data, i.e. to find β, the expected probability of falsely excluding the T^3 hypothesis, suppose that the observational result gives F = i for some non-negative integer i. Then the probability of falsely excluding the Aurich (2008) T^3 hypothesis is the cumulative probability P(F ≤ i | T^3), which can be estimated numerically by finding the fraction of T^3 simulations for which F ≤ i. Since we do not yet know the results of the observations, we have to weight this over p(F = i | R^3), the probability density function of i for the simply connected case. Thus, the expectation value of the probability of falsely rejecting the hypothesis, P(F ≤ i | T^3) (which itself is a random variable), is

E[β] = Σ_i p(F = i | R^3) P(F ≤ i | T^3).

Similarly, to estimate what observational strategy is required to have a low chance of a false positive inference from the data, the expected probability of falsely supporting the T^3 hypothesis can be written

E[α] = Σ_i p(F = i | T^3) P(F ≥ i | R^3),

i.e. the two expectation values are equal. For a given simulation parameter set, a set of simulations is analysed for each s* ∈ {1, . . . , 10}, a range that is found below (Sect. 3) by inspection of N(s) versus s histograms for typical realisations. The minimum over s* determines the optimal statistical test for the simulation parameter set, and the expectation value of the probability of falsely excluding the T^3 hypothesis or of falsely detecting evidence in favour of it.

[Fig. 4: quadruple frequencies N(s) obtained by the successive filter method (Sect. 2.4) applied to sets of objects in the multiply connected (dark curve, red online) and simply connected (pale curve, green online) cases, using the known, northern galactic objects in the SDF (second, third and sixth columns of Table 5). The southern object set is calculated using the Aurich (2008) T^3 model in the multiply connected case, and simulated with an input auto-correlation function r_0 = 10.0 h⁻¹ Mpc, γ = 1.8, r_min = 1.0 h⁻¹ Mpc, r_max = 100.0 h⁻¹ Mpc in the simply connected case. The southern field is "observed" over 1 deg² at the expected position in both cases. Top: successive filter parameters (Sect. 2.4) are ε = 1.0 h⁻¹ Mpc, ∆t = 0.01 h⁻¹ Gyr, ε_BoP = 1.0 h⁻¹ Mpc. Bottom: same, except that ∆t = 1 h⁻¹ Gyr.]

RESULTS

Existing observations

We first assume the T^3 solution to be correct, with zero uncertainty in the T^3 axis parameters given in Table 1. Figure 4 shows the application of the successive filter method under the assumption that a survey of one square degree in the redshift range 5 < z < 6.2 is 100% complete in comparison to the surveys that found the high galactic latitude sample. The southern survey field is centred on the expected median celestial position of the implied objects (the fourth and fifth columns of Table 5). The topological signal is clearly very strong in Fig. 4, both for low ∆t = 0.01 h⁻¹ Gyr (top), in which case it is unlikely that stellar evolution would cause the earlier image of a galaxy to dim too much to be seen, and for high ∆t = 1 h⁻¹ Gyr, in which case dimming could weaken the test. The northern part of each catalogue analysed consists of the 44 known objects, and there are 41 topologically implied objects within the southern field.

[Fig. 5: as in Fig. 4, but with a realisation of Gaussian uncertainties in the T^3 parameters applied, destroying the topological signal, since none of the implied topological images fall in the southern field in this realisation.]
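The expectation value defined in Sect. 2.5 can be estimated directly from two sets of simulated F values; a minimal Monte Carlo sketch (the function name is ours):

```python
from bisect import bisect_right

def expected_false_exclusion(F_t3, F_r3):
    """Estimate sum_i p(F = i | R^3) P(F <= i | T^3) from simulations:
    for each simply connected realisation's F value i, take the fraction
    of T^3 realisations with F <= i, then average over R^3 realisations."""
    F_t3 = sorted(F_t3)
    frac = lambda i: bisect_right(F_t3, i) / len(F_t3)  # P(F <= i | T^3)
    return sum(frac(i) for i in F_r3) / len(F_r3)
```

Repeating the estimate for each s* in {1, ..., 10} and taking the minimum selects the optimal statistical test for a given simulation parameter set, as described in Sect. 2.5.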
The uncertainty in the T^3 axis coordinates and fundamental length also needs to be taken into account. An error of 2° in the vector (in the flat comoving covering space) from the observer to an object at a matched disc centre implies an error of about twice that size in the observing angle of the implied image, because the diameter of the shell containing objects at the distance of the matched disc centres is twice the radius of the shell. Thus, as Fig. 5 shows, a typical realisation of a 1 deg^2 southern survey is very unlikely to detect any topological quadruples (satisfying the filter criteria). The total disappearance of the topological signal in this figure (s = 0 for all objects, since no topological images lie within the southern field at all), which makes the N(s) distribution even weaker than that of the simply connected data set, is an artefact of the method: we did not add any "background" simulated data around the northern galactic, observed SDF field, so the implied southern objects constitute a "survey" containing fewer objects (zero) than the simulated southern survey (16 objects in Fig. 5). Thus, let us consider fully simulated surveys.

Planned or possible observations
Given the uncertainties in the T^3 solution discussed in Sect. 2.1 and listed in Table 1, let us consider a pair of northern and southern surveys centred on the matched disc centres for the same T^3 axis, each of 64 deg^2. As explained above, for a given known object, a two-degree T^3 axis uncertainty yields an approximately four-degree uncertainty for an observer placed halfway between the opposing images and searching for a topological image at the expected position. However, since we simulate both fields, other pairs of objects in the matched discs can be found, so 2° should approximately correspond to a 68% chance (for one angle offset drawn from a Gaussian distribution of width 2°) of finding matching pairs. Thus, a 64 deg^2 field, i.e. 8° × 8°, or ±2σ in each angular direction (where σ is one standard deviation), should approximately correspond to a 2σ chance of finding topological pairs.
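This solid-angle bookkeeping can be checked with a short sketch, assuming (as an illustration only) a zero-mean Gaussian offset of width 2° in each of two independent angular directions:

```python
import math

def containment(half_width_deg, sigma_deg):
    """Probability that a zero-mean Gaussian angular offset of width
    sigma_deg lies within +/- half_width_deg."""
    return math.erf(half_width_deg / (sigma_deg * math.sqrt(2.0)))

p_1sigma = containment(2.0, 2.0)   # ~0.68: the quoted 68% chance for one angle
p_2sigma = containment(4.0, 2.0)   # ~0.95 per angular direction (+/- 2 sigma)
p_field = p_2sigma ** 2            # ~0.91 for an 8 x 8 deg^2 (64 deg^2) field
```

The squared factor makes explicit that "a 2σ chance" for a 64 deg^2 field is only approximate once both angular offsets are counted.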
The uncertainty in L_{T^3} gives a 1σ uncertainty of about 150 h^-1 Mpc, i.e. a 2σ uncertainty of 300 h^-1 Mpc. For a survey going from the matched disc centre to β ≈ 4° away, Table 2 (for Ω_m = 0.28) gives a typical redshift of z ≈ 5.38, i.e. 5.706 h^-1 Gpc in radial comoving distance from the observer. Inverting this, 5706 ± 300 h^-1 Mpc gives 4.60 ≲ z ≲ 6.33. We extend this a little to be conservative, and increase the filter criterion ε to 2 h^-1 Mpc. A typical realisation is shown in the top panel of Fig. 6. Clearly, a pair of surveys that are wide enough on the sky and in redshift coverage, finding 300 objects above an idealised, z-independent completeness limit, has a good chance of strongly rejecting or supporting the T^3 hypothesis.

Figure 6. As Fig. 4, for fully simulated, multiply connected (dark, red online) and simply connected (pale, green online) catalogues, over northern and southern fields each of ≈ 64 deg^2 centred at antipodal matched disc centres, with 4.3 < z < 6.6, successive filter parameters ε = 2.0 h^-1 Mpc, ∆t = 1.0 h^-1 Gyr, ε_BoP = 1 h^-1 Mpc, and (in the simply connected case) an input auto-correlation function r_0 = 10.0 h^-1 Mpc, γ = 1.8, r_min = 1.0 h^-1 Mpc, r_max = 100.0 h^-1 Mpc [Eq. (3)]. A total of 300 simulated objects (north plus south) are present in each catalogue. Top: spectroscopic redshifts. Bottom: photometric redshifts with σ(β_pec) = 0.005, i.e. σ(∆z) ≈ 0.03.
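The conversion between the ±300 h^-1 Mpc comoving uncertainty and the redshift interval can be reproduced with a short numerical sketch for a flat FLRW model with Ω_m = 0.28 (function names are illustrative):

```python
import math

C_H0 = 2997.92  # Hubble distance c/H_0 in h^-1 Mpc

def E(z, om=0.28):
    """Dimensionless Hubble parameter for a flat FLRW model."""
    return math.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)

def comoving_distance(z, om=0.28, n=4000):
    """Radial comoving distance in h^-1 Mpc (trapezoidal rule)."""
    h = z / n
    s = 0.5 * (1.0 / E(0.0, om) + 1.0 / E(z, om))
    s += sum(1.0 / E(i * h, om) for i in range(1, n))
    return C_H0 * h * s

def z_at_distance(d, om=0.28):
    """Invert comoving_distance by bisection."""
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if comoving_distance(mid, om) < d:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = comoving_distance(5.38)       # ~5.7 h^-1 Gpc, as quoted in the text
z_lo = z_at_distance(d - 300.0)   # ~4.6
z_hi = z_at_distance(d + 300.0)   # ~6.3
```

This recovers the quoted interval 4.60 ≲ z ≲ 6.33 from the ±300 h^-1 Mpc (2σ) fundamental-length uncertainty.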
Could photometric redshift estimates be sufficient? The lower panel of Fig. 6 shows the quadruple signal for a realisation with Gaussian peculiar velocity uncertainties simulated with σ(β_pec) = 0.005, i.e. a redshift uncertainty of σ(∆z) ≈ 0.03. The signal is clearly much weaker, but (in this realisation) is still distinguishable from the N(s) curve of the simply connected simulation. Figure 7 presents confidence levels, i.e. p_1 as defined in Eq. (11) in Sect. 2.5, from ensembles of realisations.

Figure 7. Confidence levels p_1 as a function of the photometric redshift uncertainty σ(z_phot) [≈ (1 + (z_1 + z_2)/2) σ(β_pec), using σ(β_pec) ≪ 1 and Eq. (A4), where z_1 < z < z_2 is the redshift range], and the number of galaxies N_gal per combined (north plus south) catalogue. The lowest probabilities p_1 (highest significance) are obtained for low σ(z_phot) and high N_gal, at top-left in each panel. Top: 4.3 < z < 6.6, solid angle per survey direction (south or north) ω ≈ 100 deg^2, ∆t = 1 h^-1 Gyr; the other parameters are identical to those for Fig. 6. Bottom: same, except that ∆t = 0.1 h^-1 Gyr.

Unsurprisingly, low σ(∆z_phot) and high N_gal generally give the statistically most significant results. The dependence on these two parameters is not fully monotonic. This is understandable because, for a fixed σ(∆z_phot), higher N_gal not only increases the number of topological quadruples, it also increases the number density, so that the number of non-topological quadruples also increases, yielding some complexity in the dependence. There is some statistical noise in the contours. The total computing time for Fig. 7 on a single, recent 4-core processor would be from a few weeks to a few months (Appendix B); the actual calculations were performed using parallel computing resources on several different machines.

Figure 7 also shows that a typical photometric redshift error of σ(∆z_phot) ≲ 0.01 would enable rejection at p_1 ≲ 0.05 (i.e. a 2σ rejection according to intuition for a Gaussian distribution), provided that N_gal ≳ 500 in northern and southern surveys each over ω ≈ 100 deg^2. This is only a moderate confidence level for rejecting the hypothesis, but since p_1 also represents the expectation value of falsely accepting the hypothesis, it would be sufficient to provide a strong motivation for studying candidate topologically lensed galaxies further. A plot roughly similar to the lower panel of Fig. 6 would be obtained, enabling the particular galaxies most likely to be topologically lensed to be identified. The particular realisation in the lower panel of Fig. 6 shows about 10 galaxies that are each members of s > 1 quadruples. This is a small enough number for spectroscopic followup to be relatively easy to obtain. Comparison of the top and bottom panels in Fig. 7 illustrates the potential role of evolutionary effects. The lower panel limits type II pairs to those with ∆t = 0.1 h^-1 Gyr instead of ∆t = 1 h^-1 Gyr (upper panel). To attain moderate confidence, i.e.
p_1 ≲ 0.05, photometric redshift errors would need to be tightened to about σ(∆z_phot) ≲ 0.005, for slightly higher numbers of galaxies. For initial starburst durations not much shorter than a typical galaxy dynamical time of ∼ 1 h^-1 Gyr, ∆t = 0.1 h^-1 Gyr should give a high probability that a galaxy is included in a survey at both its topological images. Figure 8 shows that increasing the survey area and redshift depth still further, to ω ≈ 196 deg^2 and 4 < z < 7, would allow statistically similar results with weaker photometric redshift accuracy and lower numbers of galaxies. An accuracy of σ(z_phot) ≲ 0.02 or σ(z_phot) ≲ 0.01, for ∆t = 1 h^-1 Gyr (upper panel) or ∆t = 0.1 h^-1 Gyr (lower panel), respectively, would give p_1 ≲ 0.05. Figure 8 also shows that for a sufficiently high number of galaxies, N_gal ≳ 900, a photometric redshift accuracy of σ(∆z_phot) ≲ 0.01 would be sufficient for p_1 ≲ 0.01, i.e. an expectation value of false rejection of about 1%. Most of the present interest in photometric redshift techniques is at lower redshifts, but high statistical accuracy in z ∼ 6 photometric redshifts would clearly be useful for deep redshift topological lensing.
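The conversion between σ(β_pec) and σ(∆z) used throughout Figs 6-8 is just the first-order form of Eq. (A4); a one-line sketch:

```python
def sigma_dz(sigma_beta_pec, z1, z2):
    """Linearised redshift error (1 + z) * sigma(beta_pec), evaluated at
    the midpoint of the redshift range z1 < z < z2 [Eq. (A4) to first
    order in beta_pec]."""
    return (1.0 + 0.5 * (z1 + z2)) * sigma_beta_pec

sigma_dz(0.005, 4.3, 6.6)   # ~0.03, the value quoted for Fig. 6
```

At z ∼ 6 the (1 + z) factor amplifies a peculiar-velocity-like error by a factor of ∼ 6-7, which is why the σ(z_phot) requirements above are so much tighter than σ(β_pec).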

CONCLUSION
Over the next few decades, wide-angle surveys in the z ∼ 6 redshift range will inevitably be performed. The results above indicate that with appropriate targeting and choices of observational parameter limits, the speed with which the T^3 candidate for the topology of the Universe can be rejected or detected can be optimised. For more specific observational strategies for particular telescope/instrument combinations, more detailed analyses could be carried out using detailed galaxy formation models, such as the hybrid N-body/semi-analytic simulations (Roukema et al. 1993, 1997, 2001) that have been extensively developed to simulate detailed galaxy properties as a function of space and time (e.g., Hatton et al. 2003, and references therein), including a specific focus on LAEs (e.g., Garel et al. 2012). More sophisticated combinations of existing observations, including the VVDS 1400+05 field within 16° of the same southern axis and the VVDS 0226−04 field within 20° of the corresponding northern axis (Table 4), along with simulated future observations, should also yield several alternative strategies for topological lensing detection or rejection.
The comoving number density of L* galaxies at 4 ≤ z ≤ 7, where L* is the characteristic luminosity of a Schechter function (Schechter 1976), is estimated to be φ* ∼ 3 × 10^-3 h^3 Mpc^-3 (e.g. Table 5, Bouwens et al. 2011). This gives about 3-8 million L* galaxies for the pairs of 100 deg^2 and 196 deg^2 fields simulated above, over 4.3 < z < 6.6 and 4 < z < 7, respectively. For the minimum N_gal ≳ 500 or N_gal ≳ 400 needed to achieve 5% expected confidence levels in the two field sizes, respectively (Sect. 3.2), surveys complete to L ≳ 8.8 L* or L ≳ 9.9 L*, respectively, would find the required numbers of galaxies. Because the computing time for the above calculations scales roughly as N_gal^4 for sufficiently high N_gal (since the heaviest computation is checking the list of possible quadruples; see Appendix B), simulations for surveys complete to L ≳ L*, i.e. for N_gal ≳ 10^6, are clearly not practical without further filters added early in the successive filter algorithm.
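The order of magnitude of these counts can be checked by multiplying φ* by the comoving volume of the paired survey shells. A sketch, assuming a flat FLRW model with Ω_m = 0.28 and an idealised, z-independent selection (function names are illustrative):

```python
import math

C_H0 = 2997.92  # c/H_0 in h^-1 Mpc

def comoving_distance(z, om=0.28, n=4000):
    """Radial comoving distance in h^-1 Mpc (flat FLRW, trapezoidal rule)."""
    h = z / n
    f = lambda zz: 1.0 / math.sqrt(om * (1.0 + zz) ** 3 + 1.0 - om)
    return C_H0 * h * (0.5 * (f(0.0) + f(z)) + sum(f(i * h) for i in range(1, n)))

def n_galaxies(omega_deg2_total, z1, z2, phi_star=3e-3):
    """phi* times the comoving volume of the shell z1 < z < z2 subtended
    by a total solid angle omega_deg2_total (north plus south fields)."""
    omega_sr = omega_deg2_total * (math.pi / 180.0) ** 2
    r1, r2 = comoving_distance(z1), comoving_distance(z2)
    return phi_star * omega_sr / 3.0 * (r2 ** 3 - r1 ** 3)

n_galaxies(2 * 100.0, 4.3, 6.6)   # a few million, consistent with the text
```

This is only φ* times a volume, not an integral over a Schechter function, so it is an upper-end, order-of-magnitude check rather than a luminosity-limited count.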
Nevertheless, the most difficult aspect of predicting constraints on the model is the difference between precise cosmology and accurate cosmology. The dark energy parameter Ω_Λ is suspected by several cosmologists to be an artefact of using the FLRW metric (a homogeneous solution of the Einstein equation) rather than the physical average metric of the actually inhomogeneous Universe (e.g. Buchert & Carfora 2003; Buchert 2008; Célérier et al. 2010; Wiegand & Buchert 2010; Kolb 2011; Boehm & Räsänen 2013; Wiltshire et al. 2013; Buchert et al. 2013; Roukema et al. 2013). A significant confirmation of the T^3 hypothesis would provide a constraint on the inhomogeneous approach to cosmology, but this is unlikely to occur using an FLRW interpretation of the observational data if the time dependence of the FLRW metric parameters is too far from the relativistically consistent formulae in the relevant redshift range (Larena et al. 2009). Some of the familiar FLRW relations may be algebraically valid in fully relativistic models, with an effective rather than a local physical interpretation (Buchert & Räsänen 2012), so detection of topological lensing might still be possible under the assumption of an FLRW metric.

ACKNOWLEDGMENTS
Thank you to Roland Bacon for contributing several key ideas to this project, and to the referee Andrew Jaffe for several constructive comments. Some of this work was carried out within the framework of the European Associated Laboratory "Astrophysics Poland-France". Part of this work consists of research conducted in the scope of the HECOLS International Associated Laboratory. BFR thanks the Centre de Recherche Astrophysique de Lyon for a warm welcome and scientifically productive hospitality. A part of this project has made use of Program Obliczeń Wielkich Wyzwań nauki i techniki (POWIEW) computational resources (grant 87) at the Poznań Supercomputing and Networking Center (PCSS). A part of this work was conducted within the "Lyon Institute of Origins" under grant

APPENDIX A: COMBINATION OF COSMOLOGICAL AND PECULIAR VELOCITIES IN THE FLRW MODEL
As derived in Synge (1960) and presented in Narlikar (1994) (see also Roukema 2010, and references therein), the expansion redshift can be derived by parallel-transporting the four-velocity of the world line of a distant galaxy along a null geodesic (the path of a photon) joining it to the observer. A fundamental observer (at rest with respect to the comoving spatial coordinate system) has zero peculiar velocity and a redshift of z_cosm. The latter can be interpreted as a special-relativistic radial speed β_cosm in natural units, where

1 + z_cosm = [(1 + β_cosm)/(1 − β_cosm)]^{1/2} = 1/[γ_cosm (1 − β_cosm)] = γ_cosm (1 + β_cosm),    (A1)

where φ_cosm is the rapidity defined by β_cosm =: tanh φ_cosm, γ_cosm := (1 − β_cosm^2)^{−1/2}, and γ_pec := (1 − β_pec^2)^{−1/2}. For a radial peculiar velocity of β_pec in natural units, the overall redshift z can be calculated using Minkowski spacetime addition of the rapidities φ_cosm and φ_pec, where β_pec =: tanh φ_pec, i.e. the overall rapidity is

φ = φ_cosm + φ_pec,

since addition of four-velocity vectors at the same spacetime location is meaningful. Thus, similarly to Eq. (A1), the overall redshift is given by

1 + z = [(1 + β)/(1 − β)]^{1/2}, where β = tanh φ,

i.e.

z = z_cosm + z_pec + z_cosm z_pec.    (A4)
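The rapidity-addition rule can be verified numerically. A short sketch, using the fact (which follows from Eq. A1) that 1 + z = e^φ for each contribution, so the composition is multiplicative:

```python
import math

def z_from_beta(beta):
    """Special-relativistic radial Doppler redshift:
    1 + z = [(1 + beta)/(1 - beta)]^(1/2)."""
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

def compose(z_cosm, beta_pec):
    """Combine an expansion redshift and a radial peculiar velocity by
    adding rapidities: phi = phi_cosm + phi_pec, with 1 + z = exp(phi)."""
    phi = math.log(1.0 + z_cosm) + math.atanh(beta_pec)
    return math.exp(phi) - 1.0

z_c, b = 5.0, 0.005
z_tot = compose(z_c, b)
z_p = z_from_beta(b)
# z_tot agrees with z = z_cosm + z_pec + z_cosm * z_pec  [Eq. (A4)]
```

At z_cosm = 5, a peculiar velocity of β_pec = 0.005 shifts the observed redshift by ≈ (1 + z_cosm) β_pec ≈ 0.03, the value used for the photometric redshift simulations above.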
When max(z_cosm, |z_pec|) ≪ 1, Eq. (A4) reduces to z = z_cosm + z_pec to first order in both redshifts. For high-redshift astronomy, i.e. z_cosm > 1, the third term on the right-hand side of Eq. (A4) is more significant than the second, contrary to the popular belief that sets the third term to zero. Nevertheless, we ignore gravitational redshift here, since observations close to the Schwarzschild radius are unlikely in the case of interest. Parallel transport of the four-velocity (see Narlikar 1994) implies relations analogous to those above.

Figure A1. Confidence level calculation time in seconds for quadruple searches of (the equivalent of) 30 realisations for a single parameter set, as a function of the total number of galaxies N_gal, for the simulations shown in Figs 7 and 8, for three different processor sets, as labelled. The times for the first processor set are halved to give the equivalent of four cores. Power-law fits to N_gal ≥ 400 are shown.

Figure A1 shows that for sufficiently high N_gal, the successive filter quadruple searches scale roughly as the fourth power of the number of simulated galaxies. From top to bottom as labelled, the power-law best fits are proportional to N_gal^4.2, N_gal^4.1, and N_gal^3.9, respectively. Parallelisation is via OpenMP.
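The quoted exponents come from log-log power-law fits to the timing data. A minimal sketch of such a fit, with synthetic timings standing in for the measured ones:

```python
import math

def loglog_slope(ns, ts):
    """Least-squares slope of log(t) against log(N), i.e. the power-law
    exponent alpha in t ~ N^alpha."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in ts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic timings scaling exactly as N^4 (stand-in for the Fig. A1 data):
ns = [400, 600, 800, 1200]
ts = [1e-9 * n ** 4 for n in ns]
loglog_slope(ns, ts)   # -> 4.0
```

Applied to the measured times for N_gal ≥ 400, fits of this form give the exponents 4.2, 4.1 and 3.9 quoted above.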