Paving the Way for Euclid and JWST via Optimal Selection of High-z Quasars

We introduce a probabilistic approach to select 6<z<8 quasar candidates for spectroscopic follow-up, based on density estimation in the high-dimensional space inhabited by the optical and near-infrared photometry. Density distributions are modeled as Gaussian mixtures with a principled accounting of errors using the extreme deconvolution (XD) technique, generalizing an approach successfully used to select lower redshift (z<3) quasars. We train the probability density of contaminants on 733,694 7-d flux measurements from the 1076 square degree area of overlap between the DECaLS (z), VIKING (YJHK), and unWISE (W1W2) imaging surveys, after requiring that they drop out of DECaLS g and r, whereas the distribution of high-z quasars is trained on synthetic model photometry. Extensive simulations based on these density distributions and current estimates of the quasar luminosity function indicate that this method achieves a completeness of >75% and an efficiency of >15% for selecting quasars at 6<z<8 with J<21.5. Among the classified sources are 8 known 6<z<7 quasars, of which 2/8 are selected, suggesting a completeness of ~25%; classifying the 6 known (J<21.5) quasars at z>7 from the entire sky, we select 5/6, or a completeness of ~80%. The failure to select the majority of 6<z<7 quasars arises because our model of quasar SEDs underestimates the scatter in the distribution of fluxes. This new optimal approach to quasar selection paves the way for efficient spectroscopic follow-up of Euclid quasar candidates with ground-based telescopes and JWST.


INTRODUCTION
Luminous high-redshift quasars (QSOs) are among the best probes of the primordial Universe at the end of the dark ages. Their spectra provide important information regarding the properties of the intergalactic medium (IGM) during the epoch of reionization (EoR). In fact, deep spectroscopy of z > 6 QSOs showed that the IGM is significantly neutral at z ≥ 7 (e.g., Bañados et al. 2018; Davies et al. 2018; Wang et al. 2020; Yang et al. 2020a), but highly ionized at z ≤ 6 (e.g., McGreer et al. 2011, 2015; Yang et al. 2020b).
In addition, the engines of the most distant QSOs, the supermassive black holes (SMBHs), are crucial for understanding the formation mechanisms of the first generation of black hole seeds (see Inayoshi et al. 2020, for a recent review). Their existence up to z = 7.6 (e.g., Wang et al. 2021), and hence formation in less than 1 Gyr, poses the most stringent constraints on the masses of black hole seeds. In fact, making the standard assumptions about Eddington-limited accretion, current BH masses in the highest-z quasars appear to rule out the expected ∼100 M☉ seeds from Pop III remnants, and instead require more massive seeds (e.g., Volonteri & Begelman 2010; Volonteri 2012). Thus, the following theoretical scenarios have been proposed: growth from massive black hole seeds (10^4−10^6 M☉) formed through the direct collapse of a primordial cloud (e.g., Habouzit et al. 2016; Schauer et al. 2017; Dayal et al. 2019), lower-mass seeds (10^2−10^3 M☉, the remnants of Pop III stars) with Eddington-limited or even super-Eddington accretion and very rapid growth (e.g., Madau & Rees 2001; Tanaka & Haiman 2009; Inayoshi et al. 2016), or the presence of radiatively inefficient accretion modes (e.g., Trakhtenbrot et al. 2017; Davies et al. 2019).
★ E-mail: riccardonanni@ucsb.edu
At the highest redshifts, there are only eight quasars known at z ≥ 7 (Mortlock et al. 2011; Bañados et al. 2018; Wang et al. 2018; Yang et al. 2019, 2020b; Matsuoka et al. 2019a,b; Wang et al. 2021), with two of them at z = 7.5 (Bañados et al. 2018; Yang et al. 2020a), and the most distant one at z = 7.6 (Wang et al. 2021). This sample of z ≥ 7 QSOs is still very limited: owing to the opacity of the intervening high-z IGM, distant quasars are brightest redward of their Lyα emission line, which is redshifted to NIR wavelengths at z ≥ 7, making both imaging and spectroscopic observations more challenging. Furthermore, the expected number density of z ≥ 7 quasars is low (10^−3 deg^−2 at J = 21; Wang et al. 2019), while the contaminants, mostly Galactic cool dwarfs and early-type galaxies, are far more numerous (≈ 20 deg^−2 at J = 21). As a result, quasar target selection in this redshift range is largely inefficient (efficiency ∼ 1%; Bañados et al. 2018; Wang et al. 2021), and thus requires large amounts of telescope time for spectroscopic confirmation, making it extremely challenging to find more bright z > 7 quasars with existing datasets.
On the other hand, the advent of the next generation of photometric and spectroscopic telescopes, such as Euclid or the James Webb Space Telescope (JWST), should prove to be a watershed moment in high-redshift quasar studies (Euclid Collaboration et al. 2019). In fact, Euclid's wide-field IR imaging should enable the discovery of ∼ 100 quasars at 7.0 < z < 7.5, and ∼ 25 beyond the current record of z = 7.6, including ∼ 8 beyond z = 8.0 (Euclid Collaboration et al. 2019), and JWST will deliver exquisite spectra of them. Ground-based telescopes will play an essential role in discovering the brighter Euclid quasars, whereas fainter (AB > 21.5) ones will likely require JWST. Although current selection methods based on simple color-cuts were able to discover most of the known z > 7 QSOs (Bañados et al. 2018; Yang et al. 2020a; Wang et al. 2021), their ∼ 1% efficiency is far too low to make confirmation of the on-average fainter Euclid QSOs feasible, as this would require excessive amounts of ground-based and JWST observations. It is thus clear that more efficient selection methods are required.
So far, two different statistical methods for selecting high-z QSOs have been proposed. The first is based on the Bayesian model comparison (BMC) technique laid out by Mortlock et al. (2012), while the second uses a simpler minimum-χ² fit to the quasars' spectral energy distribution (SED; Reed et al. 2017). These methods are based on improved population models for the key contaminants, MLT dwarfs and compact early-type galaxies, and they both require model colours for each population. The BMC method additionally requires a model for the surface density of each source population as a function of apparent magnitude. Although these methods have been successfully used in the past to select high-z QSOs (Mortlock et al. 2011; Reed et al. 2017), including in the VIKING survey (Barnett et al. 2021), they mostly rely on constructing a contaminant model of the entire sky in the color range in question down to very faint magnitudes. The efficacy and feasibility of this approach has not yet been demonstrated, and it appears extremely challenging given our currently poor knowledge of the different kinds of contaminants. Another selection method that has been employed uses the random forest machine learning algorithm in conjunction with color-cuts for quasar selection and photometric redshift estimation (Schindler et al. 2017; Wenzl et al. 2021). While this method has been demonstrated to successfully select quasars at lower z, its primary drawback is that it cannot properly account for photometric uncertainties.
In this paper, we describe our probabilistic high-z quasar selection technique, which uses density estimation in flux space to compute the probability of being a high-z quasar for each candidate. For density estimation, we use the extreme deconvolution method (XD; Bovy et al. 2011a,b), which generalizes the familiar machine learning approach of describing a probability density with a mixture of Gaussians to the case of heteroscedastic noise. XD enables one to deconvolve the errors of noisy training data to construct the true underlying noiseless probability density, and then reconvolve with the associated noise to evaluate the probability at arbitrary new test locations. In the context of high-z quasar selection, the main merits of this approach are: 1) it is fully Bayesian and thus optimal, that is, density estimation constitutes the optimal approach to estimating a classification probability; 2) the contaminant model is fully empirical and requires no assumptions; 3) it fully accounts for errors in a principled fashion, i.e., noiseless distributions are inferred via deconvolution and then reconvolved with the given target uncertainties. In the end, the target selection/classification problem becomes the task of training good number-density models for both the target population and the contaminant population to maximize the efficiency and completeness of the survey. We applied our target selection technique (hereafter XDHZQSO) to a set of possible high-z candidates selected with the use of optical, NIR, and MIR surveys, and constructed our XDHZQSO quasar targeting catalog. This catalog will be used for future spectroscopic follow-up to confirm new high-z QSOs in the NIR ground-based survey area, while the technique provides a better method for classifying and prioritizing high-z QSO candidates in the near future, especially with the advent of Euclid in 2022.
This paper is structured as follows. We present the XDHZQSO method in §2. In §3 we discuss the data used to train our probabilistic classifier, and in §4 we describe the construction of the XDHZQSO models from the training data and their application to classify our candidates. In §5 we provide a detailed description of the analysis of source completeness and efficiency. In §6 we show the results of our code in classifying both the known high-z QSOs in the VIKING survey area and the known z > 7 QSOs on the entire sky. In §7 we discuss the limitations of our selection technique, compare it to other methods, and describe various extensions to the basic method described in this paper. We conclude in §8. Throughout the paper, we adopt a flat cosmological model with H0 = 68.5 km s^−1 Mpc^−1 (Betoule et al. 2014), ΩM = 0.3, and ΩΛ = 0.7. All magnitudes are given in the AB system, and the uncertainties of our reported measurements are at the 1σ confidence level.

PROBABILISTIC CLASSIFICATION METHOD
The use of probabilistic methods for target selection is essentially a classification problem in which objects are classified into one of a discrete set of classes, based on their measured physical attributes. These classes can be modeled using a set of objects with available class assignments, on which we can train the classification algorithm. Although this is a classical problem in data analysis/machine learning, the physical attributes of astronomical targets are rarely measured without substantial and heteroscedastic measurement uncertainties, and often there is also the problem of missing data. Classification algorithms for astronomical target selection therefore have to deal with these complications by naturally degrading the probability of an object being in a certain class if the measurement uncertainties imply that the object overlaps several classes.
Consider an object with "true" noiseless attributes {f} that we wish to classify into class A or class B. In our specific case, we would like to classify an object into the classes "high-z QSO" or "contaminant" based on its noisy measured attributes {f̃} (e.g., fluxes, magnitudes, colors, or relative fluxes) and the associated uncertainties {σ}. This can be expressed using Bayes' theorem to relate the probability that the object belongs to class A to the density in attribute space:

P(· ∈ A | {f̃}) = ρ({f̃} | · ∈ A) P(· ∈ A) / ρ({f̃}),   (1)

where

ρ({f̃}) = ρ({f̃} | · ∈ A) P(· ∈ A) + ρ({f̃} | · ∈ B) P(· ∈ B),   (2)

since A ∪ B contains all of the possibilities. In Eq. 1 we distinguish between discrete probabilities P and continuous probability densities ρ. The ρ({f̃} | · ∈ A) factor in the numerator of the right-hand side of Eq. 1 is the density in attribute space evaluated at the target's attributes {f̃}, while P(· ∈ A) is the prior probability, set by the total number of objects in class A. The denominator ρ({f̃}) is a normalization factor, and expresses the total probability that the object belongs to either class A or class B. It is easy to see that this probability is a true probability, since it always lies between zero and one, and the sum of the probabilities for the two classes is equal to one.
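Once the class densities and priors are in hand, the posterior of Eq. 1 is a one-line computation. A minimal sketch (the function and variable names are ours, not the paper's):

```python
def posterior_prob(dens_qso, dens_cont, n_qso, n_cont):
    """Posterior probability that a source is a quasar (Eq. 1).

    dens_qso, dens_cont : flux-space densities rho({f}| . in C) evaluated
        at the source's measured fluxes, for the two classes.
    n_qso, n_cont : prior number counts (e.g., per deg^2) of each class
        at the source's apparent magnitude.
    """
    num = dens_qso * n_qso
    den = dens_qso * n_qso + dens_cont * n_cont
    return num / den

# Toy example: fluxes 10x more likely under the quasar density, but
# quasars 1000x rarer than contaminants -> probability stays small.
p = posterior_prob(dens_qso=10.0, dens_cont=1.0, n_qso=1e-3, n_cont=1.0)
```

The toy numbers illustrate why the priors matter: even a strongly quasar-like flux vector yields a small posterior when the class is rare.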
Measurement uncertainties are handled in this framework through marginalization over the "true" properties {f}, given the observed ones {f̃} and the measurement-uncertainty distribution ρ({f̃} | {f}):

ρ({f̃} | · ∈ A) = ∫ ρ({f̃} | {f}) ρ({f} | · ∈ A) d{f}.   (3)

We take ρ({f̃} | {f}) to be Gaussian, which is an extremely good approximation for flux measurements. As a result, XD provides a simple mechanism to 1) infer the true underlying "noise-deconvolved" distribution ρ({f} | · ∈ A), as well as 2) perform the convolution integral in Eq. 3. Since the model is a mixture of Gaussians and the errors are Gaussian, the normally complex operations of deconvolution/convolution reduce to trivial algebraic operations. Compared to other probabilistic selection methods, the great advantage of our approach is that the poorly understood contaminants are modeled fully empirically, rather than relying on physical models (e.g., Mortlock et al. 2012; Barnett et al. 2021), and the contaminant classes are all grouped into a single all-inclusive contaminant class. In this way, the density model for the contaminant class can simply be trained using real data from the entire sky. This method was already applied in the past to select SDSS QSOs (Bovy et al. 2011b; Bovy et al. 2012), and was shown to be effective even in the challenging redshift range 2.5 ≤ z ≤ 3, where the stellar contamination is significant.
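For a Gaussian mixture and Gaussian errors, the convolution integral in Eq. 3 is analytic: each component's covariance is simply inflated by the object's error covariance before evaluation. A numpy-only sketch (helper names are ours):

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate normal density (numpy-only helper)."""
    d = len(mean)
    diff = np.asarray(x, float) - np.asarray(mean, float)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def convolved_mixture_density(x, weights, means, covs, data_cov):
    """Eq. 3 for a Gaussian mixture with Gaussian measurement errors:
    the convolution amounts to adding the object's error covariance
    data_cov to each component covariance before evaluating."""
    return sum(w * gaussian_pdf(x, m, c + data_cov)
               for w, m, c in zip(weights, means, covs))

# One standard-normal component convolved with unit measurement errors
# gives a N(0, 2I) density.
val = convolved_mixture_density(np.zeros(2), [1.0],
                                [np.zeros(2)], [np.eye(2)], np.eye(2))
```

This per-object covariance inflation is exactly the operation performed when scoring candidates against the trained mixtures.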

TRAINING DATA
To construct the probability density models, we trained on either real or simulated photometry, depending on whether we are considering "contaminants" or "quasars". The contaminant models were trained on 1076 deg^2 of overlapping imaging from VIKING (YJHK), DECaLS (z), and unWISE (W1W2). In Table 1 we summarize the properties of the three surveys used for our selection. The quasar models were trained on synthetic photometry from the McGreer et al. (2013) "simqso" simulator. This section describes the data used to train these density classification models. (To compute the area covered by the sources in our sample we used the healpy Python package, based on the Hierarchical Equal Area isoLatitude Pixelization, HEALPix.)
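The area computation described above sums the areas of equal-area sky pixels occupied by at least one source. The paper uses healpy; the same idea can be sketched without that dependency using an equal-area (RA, sin Dec) grid (the grid resolution here is our illustrative choice, not the paper's):

```python
import numpy as np

def occupied_area_deg2(ra, dec, n_ra=360, n_sindec=180):
    """Estimate the sky area covered by a catalogue by counting occupied
    cells of an equal-area (RA, sin Dec) grid and summing their areas --
    the same idea as the healpy/HEALPix approach in the text."""
    full_sky = 4 * np.pi * (180 / np.pi) ** 2  # ~41,253 deg^2
    cell_area = full_sky / (n_ra * n_sindec)
    i = np.clip((np.asarray(ra) / 360 * n_ra).astype(int), 0, n_ra - 1)
    j = np.clip(((np.sin(np.radians(dec)) + 1) / 2 * n_sindec).astype(int),
                0, n_sindec - 1)
    return len(set(zip(i.tolist(), j.tolist()))) * cell_area

# Two sources in the same cell count that cell's area once.
area = occupied_area_deg2([150.0, 150.05], [2.0, 2.0])
```

In practice healpy's HEALPix pixelization does the same job with a hierarchical, properly equal-area tessellation of the sphere.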

Contaminant Data
The contaminant training set is generated using photometry from deep optical and near- and mid-IR imaging surveys. At NIR wavelengths, we used the Y, J, H, and Ks bands from VIKING DR4. The VIKING data were obtained from the VISTA Science Archive. For the optical bands, we mainly used data from the DESI Legacy Imaging Surveys (DELS), which combine three different imaging surveys: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS; e.g., Zou et al. 2019), and the Mayall z-band Legacy Survey (MzLS). These three surveys jointly image ∼14,000 deg^2 of the extragalactic sky visible from the northern hemisphere in three optical bands (g, r, and z). The sky coverage is approximately bounded by −18° < δ < +84° in celestial coordinates and |b| > 18° in Galactic coordinates, and it overlaps with most (≈ 80%) of the VIKING survey footprint. An overview of the DELS surveys can be found in Dey et al. (2019). When available, we also included Pan-STARRS (PS1) photometric data in our selection, which provides 3π sky coverage (≈ 70% overlap with the VIKING footprint) in five filters: gP1, rP1, iP1, zP1, and yP1. As described below, these data were used to further refine our training catalog. In the MIR, we used the W1 and W2 bands from the unWISE release (Schlafly et al. 2019), which comes from the coaddition of all publicly available 3−5 μm WISE imaging (Wright et al. 2010), including that from the ongoing NEOWISE (Mainzer et al. 2011) post-cryogenic phase of the mission. The steps used to construct our catalog are illustrated schematically in Fig. 1, and we describe them in detail in the following.
To construct our contaminant training sample, we cross-matched the VIKING catalog with the DELS, PS1, and unWISE ones, using a radius of 2″. As we are interested in finding 6 ≤ z ≤ 8 QSOs, we used the J band as the "detection band". In fact, at the very highest redshifts (z > 7) the Lyα break falls in the Y band, preventing the detection of very high-z QSOs there, while the VIKING J band reaches a depth of 22.1 (at 5σ). We then selected all the sources with a J-band signal-to-noise ratio SNR(J) ≥ 5. Since z ≥ 6 QSOs drop out of the bluest optical filters, we further required our objects to have SNR(g, r) < 3 and, when available, SNR(gPS1, rPS1) < 3. We also removed objects with SNR(PS1) ≥ 5 and colour < 2, when these data were available. (Using healpy, we subdivided the spherical surface into 200 equal-area pixels, each covering the same surface area as every other, and summed the areas of the pixels that include one or more sources from the VIKING survey area.)

Figure 1. General steps (red ellipses) performed to construct the contaminant training sample. The blue boxes represent the conditions that the sources must satisfy to make it to the next step, while yellow boxes provide more information about some specific steps. After the first step (the match with other surveys), sources are divided into two sub-catalogs depending on their DELS counterpart: sources with a DELS-detected counterpart (DELS detected), and sources with no detected counterpart but with DELS observations (DELS undetected). Sources with neither a DELS counterpart nor observations are simply removed. † At this step we also removed sources with SNR(gPS1, rPS1) ≥ 3, or SNR(PS1) ≥ 5 and colour < 2, when these data are available.
For sources covered by the DELS footprint but with no counterpart detected in the survey within 2″, we performed forced photometry on the DECaLS and unWISE images, with aperture radii of 1.5″ and 7″, respectively, and removed all sources with SNR(g, r) ≥ 3. For the surviving sources, we also performed forced photometry on the VIKING images (YJHKs filters), using an aperture radius of 1.5″. Sources with missing data in at least one of the required filters (VIKING YJHKs, DECaLS z, unWISE W1W2) were removed from our catalog at this stage. Finally, we visually inspected the bright sources (J < 17) and found they were often artifacts or bright stars, so we decided to exclude them. The resulting final "contaminant" training catalog contains 733,694 sources, while the number of sources that survived each filtering step is presented in Table 2. Among the final sources, we identified eight known 6 ≤ z ≤ 7 QSOs, indicating that the contamination of the contaminant training set by high-z quasars is small. Therefore, we did not remove these known QSOs from the training set.
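The sequence of SNR cuts above reduces to vectorized boolean masks. A toy sketch of the first dropout step (illustrative arrays; the PS1, colour, and forced-photometry steps are omitted):

```python
import numpy as np

def dropout_mask(snr_J, snr_g, snr_r):
    """Initial candidate cuts from the text: detected in J (SNR >= 5)
    and undetected in the bluest optical bands (SNR < 3)."""
    return (snr_J >= 5) & (snr_g < 3) & (snr_r < 3)

snr_J = np.array([6.0, 12.0, 4.0])
snr_g = np.array([1.0, 8.0, 0.5])
snr_r = np.array([2.0, 9.0, 0.5])
mask = dropout_mask(snr_J, snr_g, snr_r)  # only the first source survives
```

Each subsequent box in Fig. 1 is a further mask of this kind applied to the surviving rows.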

Quasar Data
We used a sample of 440,000 6 ≤ z ≤ 8 QSOs simulated with the "simqso" code of McGreer et al. (2013), using the updated version described in Yang et al. (2016). The simqso code was used to generate a grid with a uniform distribution in redshift over the range 6 ≤ z ≤ 8, and in magnitude over the range 17 ≤ J ≤ 22.5. Assuming that the QSO spectral energy distributions (SEDs) do not evolve with redshift (Kuhn et al. 2001; Yip et al. 2004; Jiang et al. 2006; Bañados et al. 2018), the quasar spectrum is modeled as a power-law continuum with a break at 1200 Å. For redder wavelength coverage, we added four breaks at 2850, 3645, 6800, and 30000 Å. The slope α from 1200 to 2850 Å follows a Gaussian distribution with mean ⟨α1200⟩ = −0.5 and dispersion σ(α1200) = 0.3; the range from 2850 to 3645 Å has a slope drawn from a Gaussian distribution with ⟨α2850⟩ = −0.6 and σ(α2850) = 0.3; from 3645 to 6800 Å we adopted a Gaussian with ⟨α3645⟩ = 0.0 and σ(α3645) = 0.3; finally, from 6800 to 30000 Å, we used ⟨α6800⟩ = 0.3 and σ(α6800) = 0.3. The parameters of the emission lines are derived from the composite quasar spectrum of Glikman et al. (2006), and the lines are added to the continuum as Gaussian profiles, where the Gaussian parameters (wavelength, equivalent width, and full width at half maximum) are drawn from Gaussian distributions. These distributions recover trends in the mean and scatter of the line parameters as a function of continuum luminosity, e.g., the Baldwin effect (Baldwin 1977) and blueshifted lines (Gaskell 1982; Richards et al. 2011). The simulator also models absorption from neutral hydrogen in the Lyα forest based on the work of Worseck & Prochaska (2011). The final noiseless photometry of the simulated QSOs is derived from the model spectra by integrating them against the respective filter curves.
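The continuum construction above can be sketched as a continuous piecewise power law with Gaussian-drawn segment slopes. For simplicity this sketch treats the slope as d ln f / d ln λ within each segment (simqso itself works in the f_ν ∝ ν^α convention), so it is illustrative only, not a reimplementation of simqso:

```python
import numpy as np

rng = np.random.default_rng(42)

# Break wavelengths (Angstrom) and the (mean, sigma) of the Gaussian
# from which each segment's slope is drawn, as quoted in the text.
BREAKS = [1200.0, 2850.0, 3645.0, 6800.0, 30000.0]
SLOPES = [(-0.5, 0.3), (-0.6, 0.3), (0.0, 0.3), (0.3, 0.3)]

def broken_powerlaw_continuum(wave):
    """Piecewise power-law continuum with randomly drawn slopes;
    segments are renormalized to be continuous at each break."""
    wave = np.asarray(wave, float)
    flux = np.ones_like(wave)
    norm = 1.0
    for (lo, hi), (mu, sig) in zip(zip(BREAKS[:-1], BREAKS[1:]), SLOPES):
        alpha = rng.normal(mu, sig)
        seg = (wave >= lo) & (wave < hi)
        flux[seg] = norm * (wave[seg] / lo) ** alpha
        norm *= (hi / lo) ** alpha  # carry normalization to the next break
    return flux

wave = np.array([1200.0, 2849.9, 2850.0, 10000.0])
flux = broken_powerlaw_continuum(wave)
```

Emission lines and Lyα-forest absorption would then be applied on top of this continuum before synthesizing photometry.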

XDHZQSO DENSITY MODEL
To estimate the density of contaminants and quasars in flux space (the ρ({f} | · ∈ C) factor from Eq. 1), we used the XDGMM implementation of extreme deconvolution from Holoien et al. (2017). XDGMM is a Python package that utilizes the scikit-learn API (Pedregosa et al. 2011; Buitinck et al. 2013) for Gaussian mixture modeling. It performs density estimation of noisy, heterogeneous, and incomplete data, and uses the XD algorithm (Bovy et al. 2011b) for fitting, sampling, and determining the probability density at new locations. As described by Bovy et al. (2011b), XD models the underlying, deconvolved distribution as a sum of K Gaussian distributions, where K is a model complexity parameter that needs to be set using an external objective. It assumes that the flux uncertainties are known, as is the case here, and consists of a fast and robust algorithm to estimate the best-fit parameters of the Gaussian mixture. In §4.2 we follow the approach used by Bovy et al. (2011b) to construct the flux density models of the two classes.
Finally, to compute the probability of an object belonging to a certain class, we need to estimate the number counts of both quasars and contaminants (the P(· ∈ C) factor from Eq. 1): these are the prior factors of our Bayesian approach. For the contaminants, we compute this factor empirically from the number counts (the J-band magnitude distribution of contaminants), while for the quasars we derive them from the high-z QSO luminosity function. However, to derive the effective number counts for the QSOs, which include the survey incompleteness at the faint end, we used the empirical data to compute the incompleteness of the VIKING survey, and applied it to the QSO number counts. In §4.3 we provide details about the computation of these prior factors.

The binning approach
The full model consists of fitting the probability density (the ρ({f} | · ∈ C) factor from Eq. 1) in a number of bins in J-band magnitude for the two classes of objects. We opted to bin in J-band magnitude because the probability density of quasars has a dominant power-law shape corresponding to the number counts as a function of apparent magnitude, whereas the color distribution is much flatter. While the latter can be represented well by mixtures of Gaussian distributions, the power-law behavior cannot, without using large numbers of Gaussians. The slow variation of the color distributions with magnitude is nevertheless captured by our model, since we use narrow bins in J-band magnitude.
The full contaminant model consists of 50 overlapping bins whose right edges are uniformly distributed in the range J = 20−22.5 with a step of 0.05 mag, while the width is given by a broken sigmoid function, where Jbre is the J-band bin right edge, wmin = 0.1 mag is the minimum bin width, w1 = 5 and w2 = 1 are the maximum bin widths in the two J-band ranges, h1 = 21, h2 = 22, and Δ = 0.1. The broken sigmoid for the contaminants is shown in Fig. 2. The use of a variable bin width is driven by the need for a model that is as continuous as possible, as the XD fits can jump between local maxima. In fact, this procedure guarantees that many (> 20%) of the objects in adjacent bins overlap, and thus the model varies smoothly. Furthermore, the use of a broken sigmoid guarantees that at both the bright and faint ends, where fewer objects are present, the bins are large enough to contain a sufficient number of sources. In fact, we have > 2000 training objects in each bin used to build the contaminant models.
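Since the exact functional form of the broken sigmoid is not reproduced here, the sketch below is one plausible form consistent with the quoted parameters (wide bins at the bright end, minimum width in the middle, moderately wide bins at the faint end), not the paper's exact expression:

```python
import numpy as np

# Parameters quoted in the text: minimum width w_min = 0.1 mag, maximum
# widths w1 = 5 (bright side) and w2 = 1 (faint side), transition
# centres h1 = 21 and h2 = 22, and sharpness Delta = 0.1.
W_MIN, W1, W2, H1, H2, DELTA = 0.1, 5.0, 1.0, 21.0, 22.0, 0.1

def bin_width(j_right_edge):
    """A plausible broken sigmoid: two logistic transitions, one falling
    from W1 toward W_MIN at H1, one rising toward W2 beyond H2."""
    j = np.asarray(j_right_edge, float)
    bright = (W1 - W_MIN) / (1.0 + np.exp((j - H1) / DELTA))
    faint = (W2 - W_MIN) / (1.0 + np.exp(-(j - H2) / DELTA))
    return W_MIN + bright + faint
```

By construction the width tends to w1 for J ≪ h1, stays near wmin between h1 and h2, and tends to w2 for J ≫ h2, matching the behaviour described in the text.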
As for the quasar model, we used 11 uniformly spaced bins with a width of 0.5 mag in the range J = 17−22.5, and we further divided the quasar class into three subclasses corresponding to "low-redshift" (6 ≤ z ≤ 6.5), "medium-redshift" (6.5 ≤ z ≤ 7), and "high-redshift" (7 ≤ z ≤ 8) quasars, constructing a QSO model for each bin. We opted to divide the QSOs into these three redshift bins, instead of working with a single broad 6 ≤ z ≤ 8 bin, for the following reasons: (i) As shown in §5.1, the efficiency and completeness of our selection method strongly depend on the z-bin in question, owing to the changing overlap between quasars and contaminants.
(ii) While the 6 ≤ z ≤ 7 range has been largely investigated in the past, few objects have been found at 7 ≤ z ≤ 8, making it the highest-priority range that we are interested in investigating.
(iii) Spectroscopic wavelength coverage differs between instruments, with the dividing line between optical and near-IR spectrographs typically occurring around λ ≈ 10,000 Å (z = 7.2). Thus, not all 6 ≤ z ≤ 8 QSOs can be confirmed with a single instrument, and multiple instruments could be required to confirm candidates over such a broad redshift range. Hence, the redshift bins we adopted also facilitate efficient follow-up observations.
However, in the future we plan to introduce the redshift as one of the modelled quantities, as done by Bovy et al. (2012), so that one no longer needs to construct models in different redshift bins; this approach also provides photometric redshifts, which can be used to select candidates over any desired redshift range.

Construction of the model
The XD fits for all the J-band magnitude bins of a given class are initialized using the best-fit parameters of the previous bin, so as to guarantee the continuity of the model. The starting bin (the one that is not initialized this way) is the one closest to J = 21, where we know we always have a quite large sample of objects (> 10^4) for training. Hereafter, we first describe the model in a single bin for a single example class, using the contaminant class as the example.
In a single bin in J-band magnitude, we separate the absolute flux from the fluxes relative to the J band in the likelihood of Eq. 1 as follows:

ρ({f} | · ∈ "cont.") = ρ({f/fJ} | J, · ∈ "cont.") ρ(J | · ∈ "cont."),   (5)

where {f} are the z, Y, H, Ks, W1, W2 fluxes, {f/fJ} are the fluxes relative to the J band, fJ is the J-band flux, and J is the corresponding apparent magnitude. We model the two factors on the right-hand side of Eq. 5 separately.
We modeled the ρ({f/fJ} | J, · ∈ "cont.") factor using XD in narrow bins of J-band magnitude. We use relative fluxes rather than colors, since the observational uncertainties are closer to Gaussian for relative fluxes than they are for colors. Also, for sources where a flux measurement can be negative, the magnitudes are badly behaved, while relative fluxes remain well behaved. To evaluate the XD probabilities during training, we always convolved the underlying model with the object's relative-flux uncertainties, assuming that they are Gaussian distributed, such that the convolution of the Gaussian mixture with the Gaussian uncertainty results in another Gaussian mixture. Although the ratio of two noisy Gaussian deviates is not itself Gaussian distributed, Gaussianity is a good approximation provided that the J-band flux errors are small. The validity of this approximation is discussed further in Appendix A. Note also that since all other fluxes are divided by the J-band flux, the resulting uncertainties are covariant, and we provide the functional form of this covariance matrix in Appendix A. For the QSO models, since the simulated quasar fluxes are noiseless, we simply need to fit their flux densities without deconvolution to derive the underlying quasar model. However, to avoid singular inverse variances for the effectively noiseless model data, we added a tiny Gaussian error (0.01) to the simulated noiseless relative fluxes, and for consistency used this small value as the input error on the photometry in the XD code. In Fig. 3 we show the relative-flux vs. relative-flux diagrams of our training data: the contaminants are displayed using black contours, while a sub-sample (5000) of simulated 6 ≤ z ≤ 8 QSOs is shown as coloured points. For display purposes, we added to the displayed quasars realistic errors drawn from a noise model based on our contaminant catalog, which is described in Appendix B.
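The covariance induced by dividing all fluxes by the shared J-band flux has a standard first-order (delta-method) form; the paper's exact expression is in its Appendix A, which is not reproduced here, so treat this as the generic result under the same small-error assumption:

```python
import numpy as np

def relative_flux_cov(f, sig, fJ, sigJ):
    """First-order covariance of the relative fluxes r_i = f_i / f_J,
    which are correlated through the shared J-band flux:
        Cov_ij = delta_ij * sig_i^2 / fJ^2 + f_i f_j sigJ^2 / fJ^4."""
    f, sig = np.asarray(f, float), np.asarray(sig, float)
    return np.diag(sig ** 2) / fJ ** 2 + np.outer(f, f) * sigJ ** 2 / fJ ** 4

# Two bands relative to a J-band flux of 2 with uncertainty 0.1.
C = relative_flux_cov(f=[1.0, 2.0], sig=[0.1, 0.2], fJ=2.0, sigJ=0.1)
```

The off-diagonal terms vanish only when the J-band error does, which is why the full covariance (not just per-band variances) must be passed to the XD evaluation.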
We modeled the six-dimensional relative fluxes {f/fJ} using 20 Gaussian components. The number 20 was chosen after performing XD fits with 10, 15, 20, and 25 components: while fits with fewer than 20 components overly smoothed the observed distribution, models with more than 20 components used the extra components to fit extremely low-significance features of the observed distribution. The same number of components was adopted by Bovy et al. (2011b). Similarly, we also used 20 Gaussian components for the quasar models.
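The over/under-smoothing comparison used to pick the component count can also be made quantitative with an information criterion. A toy sketch using scikit-learn's GaussianMixture and the BIC on synthetic 2-D data (the data and candidate counts are ours, not the paper's):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy 2-D "relative flux" data: two well-separated clusters.
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
               rng.normal(5.0, 1.0, (500, 2))])

# Score candidate component counts; the BIC penalizes the extra
# parameters that only fit low-significance features.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in (1, 2, 4)}
best_k = min(bics, key=bics.get)
```

For the real six-dimensional, error-deconvolved fits the same comparison would be run with XDGMM, but the model-complexity trade-off is identical.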
To provide a visual example of the model generated by the XDHZQSO code, we display in Fig. 4 the 20.67 < J < 21.2 deconvolved contaminant model (black contours) compared to the 20.5 < J < 21.0 QSO models in the three redshift bins: 6 ≤ z ≤ 6.5 (blue), 6.5 ≤ z ≤ 7 (green), and 7 ≤ z ≤ 8 (red). To generate the displayed samples, we drew 50,000 sources from the deconvolved contaminant model, and 50,000 objects from the deconvolved QSO models of the three redshift bins.

(Caption: For display purposes, we added to the simulated noiseless quasars the real errors coming from our contaminant catalog, as explained in Appendix B, while the black line and colored filled circles represent the color-redshift relation predicted using our simulated QSOs. Although we do not know the real nature of our contaminants, we expect that most of them are cool brown dwarfs and early-type galaxies.)

It is apparent that the large overlap between the contaminant contours and the 6.5 ≤ z ≤ 7 and 7 ≤ z ≤ 8 QSO contours will greatly lower the efficiency in selecting QSO candidates in these two redshift ranges, as explained further in §5.1. To assess the quality of our deconvolved contaminant models, we sampled the deconvolved model in each J-band bin, re-added the errors to the deconvolved fluxes following our noise modeling procedure described in Appendix B, and compared the relative-flux distribution of the reconvolved sample with the original real noisy data. In Fig. 5 we compare a simulated set of samples (red contours) from the deconvolved 20.67 < J < 21.2 contaminant model with the real data distribution (black), while in Fig. 6 we compare the same simulated sample after adding the errors following Appendix B (red) with the real contaminant distribution (black). It is apparent that, after re-adding the errors to the deconvolved quantities, we obtain a distribution that is essentially identical to the 20.67 < J < 21.2 real data.

Computation of the priors
The second factor of Eq. 5, ρ(J | · ∈ "cont."), represents the number counts of contaminants (or quasars) as a function of apparent magnitude, and is always expressed in units of deg^−2. For the contaminant class, we modeled the number counts directly from the training data, by fitting the histogram of J-band number counts per square degree. We used a 40th-order polynomial to perform a robust fit over the range J ≤ 21.4, while at J > 21.4 we used a cubic spline to interpolate the histogram, namely to capture the drop-off due to catalog incompleteness. In order to model the effect of the incompleteness on the real data distribution, we fit a power law over the range 20.7 ≤ J ≤ 21.4 (with best-fit parameters −95.3 and 73.0), and extrapolated this power-law fit to J > 21.4. The ratio between the value given by the power law and the cubic-spline-interpolated number counts gives us the incompleteness correction term to apply to our QSO number counts at J > 21.4. We show the ρ(J | · ∈ "cont.") factor in Fig. 7 (top right panel).
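The incompleteness correction above amounts to fitting a power law (linear in log counts) over the complete magnitude range, extrapolating it faintward, and taking the ratio to the observed counts. A generic sketch (the synthetic counts and function name are ours):

```python
import numpy as np

def incompleteness_correction(j_mag, counts, fit_range=(20.7, 21.4)):
    """Fit log10(counts) linearly in J over fit_range (a power law in
    flux), extrapolate faintward, and return the ratio
    extrapolated / observed: the factor by which the observed counts
    must be scaled to correct for catalogue incompleteness."""
    j_mag, counts = np.asarray(j_mag, float), np.asarray(counts, float)
    sel = (j_mag >= fit_range[0]) & (j_mag <= fit_range[1])
    slope, intercept = np.polyfit(j_mag[sel], np.log10(counts[sel]), 1)
    predicted = 10.0 ** (intercept + slope * j_mag)
    return predicted / counts

# Synthetic counts complete to J = 21.4, then 50% incomplete at J = 21.8.
j = np.array([20.8, 21.0, 21.2, 21.4, 21.8])
counts = 10.0 ** (0.3 * j)
counts[-1] *= 0.5
corr = incompleteness_correction(j, counts)  # last bin needs a factor ~2
```

In the paper the observed faint-end counts come from the cubic-spline interpolation of the histogram rather than raw bins, but the ratio construction is the same.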

HIGH-z QSO SELECTION
In this section we present the XDHZQSO source classification for all the objects selected by our initial cuts described in §3.1. This catalog was also used to train the contaminant model as described in §4, since we argued that the fraction of high-z QSOs contained in this catalog is negligible. Using the models of quasar and contaminant deconvolved relative fluxes, we computed the probability that every object is a high-z QSO or a contaminant using Eq. 5. Specifically, we used the models from the previous section as follows. For an object with a given J-band magnitude, we first found the bin whose midpoint is closest to this magnitude. Then, we used this bin to evaluate the relative-flux density for this object's relative fluxes by convolving the underlying 20-component Gaussian mixture model with the object's uncertainties. This uncertainty convolution simply adds the object's uncertainty covariance to the intrinsic model covariance of each component.
Finally, we evaluated the number density as a function of the object's apparent J-band magnitude, using the interpolated relations described in §4.2. We did this for each of the classes (the contaminant class and the three quasar classes) and computed the probabilities using Eq. 1.
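The classification step described above can be sketched as follows: for each class, evaluate the error-convolved relative-flux density times the magnitude-dependent number-counts prior, then normalize across classes. All function and variable names, and the toy one-component models, are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def convolved_density(x, x_cov, weights, means, covs):
    """GMM density at x, adding the object's covariance to each component."""
    return sum(
        w * multivariate_normal.pdf(x, mean=mu, cov=C + x_cov)
        for w, mu, C in zip(weights, means, covs)
    )

def class_probabilities(x, x_cov, models, number_counts):
    """P(class | x) ∝ density(x | class) * prior number counts per deg^2."""
    evidence = {
        name: convolved_density(x, x_cov, *models[name]) * number_counts[name]
        for name in models
    }
    total = sum(evidence.values())
    return {name: v / total for name, v in evidence.items()}

# toy 2-d models: contaminants near the origin, QSOs offset from it
models = {
    "cont": (np.array([1.0]), np.array([[0.0, 0.0]]), np.array([np.eye(2) * 0.2])),
    "qso":  (np.array([1.0]), np.array([[2.0, 2.0]]), np.array([np.eye(2) * 0.2])),
}
number_counts = {"cont": 1000.0, "qso": 0.05}   # contaminants vastly outnumber QSOs

probs = class_probabilities(np.array([2.0, 2.0]), np.eye(2) * 0.05,
                            models, number_counts)
print(probs)
```

Note how the number-counts prior penalizes the rare QSO class: an object must sit well inside the QSO locus before its posterior probability overcomes the abundance of contaminants.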
In Fig. 8, we show the distribution of XDHZQSO quasar probabilities for the sources we classified in the VIKING survey area. Since the catalog is expected to contain mostly contaminant sources, the probability distribution is peaked at zero in each redshift bin, with a few exceptions at higher probabilities that represent our best candidate quasars for future spectroscopic confirmation. It is also apparent that the number of the best candidates for spectroscopic follow-up (i.e. those with P_QSO > 0.1) decreases as the redshift increases. This results from the combination of two factors: 1) the number density of QSOs decreases as redshift increases, and 2) the overlap in relative-flux space between the higher-z QSOs and the contaminants is larger, in particular in the 6.5 ≤ z ≤ 7 range (see Fig. 4).

Completeness and efficiency computation
To select high-z QSO candidates for spectroscopic follow-up confirmation, we defined a probability threshold (P_th) that strikes a balance between contamination and completeness: this threshold should be small enough to avoid missing many high-z QSOs, so that the sample completeness is high, and large enough to keep the number of contaminants low, increasing the efficiency of the selection method. From a practical perspective, the completeness can be seen as a proxy for the expected fraction of recovered high-z QSOs above the probability threshold in a given sky area, while the efficiency is a proxy for the expected spectroscopic confirmation rate of the candidates at the telescope.
The completeness (C) is defined as (Eq. 9) C = N_Q(P ≥ P_th) / N_Q,tot, where N_Q(P ≥ P_th) is the number of high-z QSOs per square degree with probability P ≥ P_th, and N_Q,tot is the total number of QSOs per square degree. The efficiency (E) is defined as (Eq. 10) E = N_Q(P ≥ P_th) / [N_Q(P ≥ P_th) + N_C(P ≥ P_th)], where N_C(P ≥ P_th) is the number of contaminants per square degree with probability P ≥ P_th. In the limit where the classification of all the sources in our survey is known, we could compute both C and E directly from the VIKING survey area. However, as we do not know the true classification of most of the sources in our sample, we used simulations to compute the completeness and the efficiency of our selection method, as we now describe.
In order to reduce statistical fluctuations, we simulated a large number of both high-z QSOs and contaminants. High-z QSOs were simulated by sampling the z ≥ 6 LF of Eq. 7 (Wang et al. 2019) using a Markov Chain Monte Carlo (MCMC) approach: this equation can be interpreted as the 2-D probability distribution of quasars as a function of redshift and magnitude, and MCMC is a convenient method to generate samples from it. Again, we expressed the LF as a function of redshift and apparent J-band magnitude, by converting M_1450 to J-band magnitude using the K-correction from Eq. 8, and multiplied it by the incompleteness found in §4.2 for the VIKING J-band magnitude distribution. We then used the MCMC method to sample the redshift and J-band magnitude distributions of 300,000 QSOs with 6 ≤ z ≤ 8 and 17 ≤ J ≤ 22. Given the redshift and J-band magnitude of each source, we used our deconvolved quasar models to sample the noiseless fluxes for the 300,000 simulated QSOs, and added representative photometric errors according to our noise model in Appendix B. The simulated QSOs were then divided into the three redshift bins adopted previously, and we computed their probability of being quasars using Eq. 1, to derive the N_Q(P ≥ P_th) term needed for Eq. 9 and Eq. 10.
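The MCMC sampling step can be illustrated with a toy Metropolis-Hastings sampler that treats a luminosity function Phi(z, J) as an unnormalized 2-D probability density over redshift and apparent magnitude. The functional form and its parameters below are placeholders, not the Wang et al. (2019) values.

```python
import numpy as np

def log_phi(z, J):
    """Unnormalized toy log-LF: declining with z, rising toward faint J."""
    if not (6.0 <= z <= 8.0 and 17.0 <= J <= 22.0):
        return -np.inf
    return -0.7 * (z - 6.0) + 0.6 * (J - 17.0) * np.log(10.0)

rng = np.random.default_rng(3)
chain, state = [], np.array([6.5, 20.0])
logp = log_phi(*state)
for _ in range(50_000):
    prop = state + rng.normal(0.0, [0.1, 0.2])       # Gaussian proposal
    logp_prop = log_phi(*prop)
    if np.log(rng.uniform()) < logp_prop - logp:     # Metropolis acceptance
        state, logp = prop, logp_prop
    chain.append(state.copy())

chain = np.array(chain[10_000:])                     # discard burn-in
print(chain.mean(axis=0))                            # mean (z, J) of the draws
```

Each accepted (z, J) pair would then be fed through the deconvolved quasar models to produce noiseless fluxes, exactly as in the procedure above.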
To simulate the contaminants, we drew 100 million 17 ≤ J ≤ 22 sources from the J-band magnitude distribution of the contaminant training catalog (upper-left panel of Fig. 7). We again sampled the deconvolved contaminant models to generate the noiseless fluxes for our simulated sources, and added the errors as explained in Appendix B. Then, we evaluated the probability that these synthetic "sky" objects are quasars using Eq. 1, which is needed to determine the N_C(P ≥ P_th) term in Eq. 10. Finally, we rescaled the numbers of simulated contaminants and high-z QSOs to reflect the prior number-count distributions shown in Fig. 7, and we used Eq. 9 and Eq. 10 to compute the completeness and efficiency down to a J-band magnitude of 21.5. This magnitude limit was introduced because it is representative of what can realistically be confirmed with a near-IR instrument on an 8m-class telescope in a reasonable exposure time, and is also close to the 5σ limit of the VIKING data we use. Fainter objects would require longer exposure times and excellent observing conditions, making them much more challenging to confirm spectroscopically.
In Fig. 9 we display the number count distribution of quasar probabilities, dN/(dΩ dP), for simulated QSOs and contaminants. This quantity is defined such that its integral over probability yields the number of objects per square degree. Fig. 10 shows the efficiency (black) and the completeness (red) of our selection method as a function of the probability threshold (P_th), in the three redshift bins: 6 ≤ z ≤ 6.5 (top), 6.5 ≤ z ≤ 7 (central), and 7 ≤ z ≤ 8 (bottom). It is apparent that lowering the threshold will always increase the completeness, but this comes at the cost of a lower efficiency, thus increasing the number of contaminants that are spectroscopically followed up. It is also evident that the completeness and efficiency are generally higher in the 6 ≤ z ≤ 6.5 range, where the overlap between the QSO and contaminant relative-flux distributions is smaller than in the 6.5 ≤ z ≤ 7 and 7 ≤ z ≤ 8 cases (i.e. the red and green contours overlap the black contours in Fig. 4 more than the blue contours do).
Since the expected number density of high-z QSOs is very low, the choice of P_th is mostly determined by the need for a high completeness, to avoid missing the coveted highest-redshift sources. In fact, by integrating the LF in Eq. 7 down to J = 21.5, we expect to find ≈ 15, ≈ 5, and ≈ 2 QSOs in the ranges 6 ≤ z ≤ 6.5, 6.5 ≤ z ≤ 7, and 7 ≤ z ≤ 8, respectively, in the 1076 deg2 VIKING survey area. Recovering this small number of expected sources requires a relatively high completeness (ideally ≈ 90%). As such, we chose to use the completeness as the main criterion for setting the probability threshold P_th, whereas the efficiency plays a pivotal role in setting P_th when a high completeness corresponds to an efficiency E < 10%. To visualize the tradeoff between completeness and efficiency (both of which are parameterized by P_th), we plot in Fig. 11 the efficiency as a function of the completeness for the three redshift bins.
In the 6 ≤ z ≤ 6.5 range, the 90% completeness requirement corresponds to P_th = 0.1 and E = 88% (Fig. 11, top panel). In the 6.5 ≤ z ≤ 7 range, the high completeness requirement (C = 90%) cannot be achieved without lowering the efficiency to an unacceptable value (E ≈ 10−3%; see Fig. 11, central panel), while a 75% completeness (achievable with P_th = 0.07) corresponds to E ≈ 15%, a more reasonable efficiency to work with. For the 7 ≤ z ≤ 8 range, we want a completeness of 90% to avoid missing the ≈ 2 expected z > 7 QSOs. This requirement corresponds to ≈ 15% efficiency, and can be achieved with P_th = 0.1. The low efficiency values in the two highest redshift ranges are caused by the large overlap between the 6.5 ≤ z ≤ 8 QSO and contaminant models, as is apparent in Fig. 4 (see the larger overlap of the green and red with the black contours). Consequently, the number of QSO candidates with probability above the threshold in these redshift ranges is also lower compared to the 6 ≤ z ≤ 6.5 range.
To summarize, we report in Table 3 the three probability thresholds derived from our completeness and efficiency analysis, and the corresponding completeness, efficiency, and number of candidates with P_QSO ≥ P_th that are selected for future spectroscopic follow-up. For the 7 ≤ z ≤ 8 range, we obtain an efficiency of ≈ 15%, whereas quasar selections based on color cuts work at percent-level efficiency in this redshift range (Bañados et al. 2018; Wang et al. 2021). The higher efficiency that we derive results from the combination of two primary factors: 1) our probabilistic density estimation takes advantage of the full feature space (all flux ratios) at once without strict boundaries, making it more effective and inclusive than simple color cuts, and 2) our effort to compile as much panchromatic photometry as possible improves the efficiency, relative to previous efforts to select z > 7 quasars using color cuts (Mortlock et al. 2011; Bañados et al. 2018; Yang et al. 2020a; Wang et al. 2021). On the other hand, an efficiency of 15% for the 6.5 ≤ z ≤ 7 range is lower than that of some color-cut selections performed in the past (e.g., Bañados et al. 2016). This likely results from the fact that our study does not include the PS1 filters, which greatly improve the selection of 6.5 ≤ z ≤ 7 QSOs: in this redshift range the Lyα line enters the PS1 y-filter and drops out of the PS1 z-filter, while the broader DECaLS z-filter covers both of the aforementioned PS1 filters. In a future study we plan to include the PS1 filters to improve our selection efficiency in this particular redshift range.

CLASSIFICATION OF KNOWN HIGH-z QUASARS
By integrating the z = 6.7 LF from Wang et al. (2017) over the range 17 ≤ J ≤ 21.5, we expect to find ≈ 21 (≈ 28) QSOs at 6 ≤ z ≤ 6.5, ≈ 7 (≈ 9) QSOs at 6.5 ≤ z ≤ 7, and ≈ 3 (≈ 4) QSOs at 7 ≤ z ≤ 8, depending on whether (or not) we consider the effect of the J-band photometric incompleteness of the VIKING survey. Thus, after performing the spectroscopic follow-up of the targets with P ≥ P_th, we expect to discover high-z QSOs among our candidates in numbers consistent with these estimates.
Past works have already searched the VIKING area for z ≥ 6 QSOs (e.g., Venemans et al. 2013, 2015; Barnett et al. 2021). For example, both Venemans et al. (2013) and Barnett et al. (2021) used the filters of the VIKING survey to find z > 6.5 QSOs: Venemans et al. (2013) applied color cuts and found three new QSOs, while Barnett et al. (2021) selected four known QSOs and 17 QSO candidates using the BMC method, but no new QSOs were found. Other QSOs in the VIKING footprint were found by past works that searched for high-z QSOs in surveys that partially overlap with the VIKING area: i.e., the CFHTLS (Willott et al. 2009), Pan-STARRS1 (Bañados et al. 2016), the VST-ATLAS (Carnall et al. 2015), the DELS (Wang et al. 2017), and the HSC (Matsuoka et al. 2016, 2018a,b,c, 2019a,b). We therefore expect to have some known high-z QSOs in our VIKING dataset, and to recover them among our candidates. In §6.1, we provide a summary of the known QSOs that are covered within our search area but are not in our VIKING dataset due to our selection criteria. Then, we describe the performance of XDHZQSO in recovering and classifying both the known high-z QSOs in the VIKING survey area (§6.2), and the known z > 7 QSOs over the entire sky (§6.3).

Missed high-z QSOs
From past works (Willott et al. 2009; Venemans et al. 2013; Bañados et al. 2016; Matsuoka et al. 2016, 2018a,b,c, 2019a,b), we identified 32 known z ≥ 6 QSOs in the DECaLS+VIKING area. However, the imposition of our selection criteria reduced this number in our final VIKING-area dataset: 20 QSOs are lost because they do not satisfy SNR(J) > 5, and another four are not selected because they do not have data in all the bands considered in our study. That leaves eight known z > 6 quasars in the VIKING-area dataset, whose probabilistic classification is described in the following section.

Classification of known high-z QSOs in the VIKING Survey Area
Among the classified sources there are eight known high-z QSOs that were found in the VIKING survey area by past works; their main properties are listed in Table 4, and their probabilities of being high-z QSOs are pinpointed with red arrows in Fig. 8.
In the range 6 ≤ z ≤ 6.5, our models correctly classify one known QSO out of three, J0142-3327 (P_QSO ≈ 99.9%), but our selection threshold in this redshift range (P_th = 10%) does not allow us to recover DELS J1217+0131 (P_QSO ≈ 0.06%) and HSC J1137+0045 (P_QSO ∼ 10−7%). The probabilities of these three quasars are also reported in Table 4 and shown in Fig. 8 (upper panel). The low probability of the latter is not surprising, considering that HSC J1137+0045 is a very faint QSO (J = 21.51 and SNR(J) = 5.4), selected from the Hyper Suprime-Cam (HSC) Subaru Strategic Program (SSP) survey (Aihara et al. 2018), that apparently lacks strong Lyα emission (Matsuoka et al. 2019b). However, to better understand the low probability values obtained for these two quasars, we compared their photometric properties with those sampled from our XD deconvolved models. For each of the three known 6 ≤ z ≤ 6.5 QSOs, we simulated 10,000 contaminants and 10,000 6 ≤ z ≤ 6.5 QSOs, using the XDHZQSO models in the magnitude bins that include the J-band magnitudes of the three QSOs. To visualize the probability of selecting a known quasar, we drew samples from the "deconvolved" (i.e., noise-free) XDHZQSO contaminant and quasar models, and overplotted the relative-flux measurements of the real quasars, with ellipses indicating their (covariant) 1σ errors. This is shown in Fig. 12, where we plot the deconvolved relative-flux contours for the simulated contaminants (black) and 6 ≤ z ≤ 6.5 QSOs (blue), compared to the properties of the known 6 ≤ z ≤ 6.5 QSOs from the VIKING survey area. The reason we created 10,000 copies of contaminants and 10,000 of QSOs for each known high-z QSO is that the contaminant and quasar models are magnitude dependent; formally, we would thus need a separate plot for each object, comparing its properties with those of the sampled contaminants and QSOs.
However, given that these magnitude dependencies are subtle, we chose to simply simulate 10,000 copies of sources at each magnitude and aggregate them onto a single plot. It is apparent that in some sub-plots of Fig. 12 the relative fluxes of both HSC J1137+0045 and DELS J1217+0131 are not consistent with the simulated 6 ≤ z ≤ 6.5 QSO relative-flux distributions (blue contours), which lowers the classification probabilities of these two objects. Considering that HSC J1137+0045 apparently lacks strong Lyα emission (Matsuoka et al. 2019a), while DELS J1217+0131 exhibits a strong Lyα emission line (Wang et al. 2017), we conclude that the properties of the "simqso" simulated high-z QSOs, which were used to train our XDHZQSO quasar models, are too rigid to include these two sources.
Among the known 6.5 ≤ z ≤ 7 QSOs, J0921+0007 (P_QSO ∼ 10−4%) is also an HSC-selected QSO (J = 20.9) with optical colors similar to those of Galactic brown dwarfs (Matsuoka et al. 2018b). Adopting the same procedure as described above to generate 10,000 contaminants and 10,000 6.5 ≤ z ≤ 7 QSOs for each known QSO, we show in Fig. 13 the deconvolved relative-flux contours for the simulated contaminants (black) and high-z QSOs (blue), compared to the properties of the known 6.5 ≤ z ≤ 7 QSOs from the VIKING survey area. Also in this case, it is apparent that the relative fluxes of the four QSOs with P_QSO ≤ 10−2% are inconsistent with the deconvolved QSO model properties (blue contours in Fig. 13) in some sub-plots: 1) J0148-2826 is inconsistent in the panels involving W1 and W2, 2) HSC J0921+0007 is inconsistent in the panels involving W1 and W2, 3) DELS J1048-0109 is inconsistent in the panels involving W2, and 4) HSC J1205-0000 is not consistent with the QSO distribution in any panel. We provide a more detailed discussion of these discrepancies between real and simulated QSO properties in §7.1.

Classification of the z ≥ 7 QSOs
While in §6.2 we tested the ability of our models to recover the known 6 ≤ z ≤ 7 QSOs in the VIKING survey area, testing our classification models in the highest redshift range was not possible, as there are no known z > 7 QSOs in the VIKING footprint. Therefore, we applied our method to the z > 7 QSOs that have been discovered so far over the entire sky, using published photometric measurements. There are, at the time of writing, a total of eight known z > 7 QSOs (e.g., Wang et al. 2021). To classify them, we first collected the photometric data in the seven bands of interest (DECaLS z, VIKING YJHKs, and unWISE W1W2) from the literature, when available. Since some of these sources have public NIR data coming from the Wide Field Infrared Camera (WFCAM) on the UK Infrared Telescope (UKIRT), we used the transformation equations between VISTA and WFCAM derived by González-Fernández et al. (2018) to convert the UKIRT magnitudes into VIKING ones. For the missing flux measurements, we performed forced photometry. Since J0313-1806 has no photometric measurements in two of our bands, we used synthetic photometry computed by integrating the observed spectrum of this source from Wang et al. (2021) against the respective filter curves. However, we excluded from our classification list both J2356+0017 and J1243+0100, as they are too faint (SNR(J) < 5) to make it into our catalog. Finally, we used our XDHZQSO models to classify the remaining six sources following the same procedure described in §5. In Table 5 we summarize the properties and results of our classification of these six z ≥ 7 QSOs. Based on our defined probability threshold for the z ≥ 7 range (P_th = 10%), we are able to recover five QSOs: J0252-0503 (P_QSO = 17.1%), J1120+0641 (P_QSO = 29.4%), J1007+2115 (P_QSO = 77.2%), J1342+0928 (P_QSO = 33.3%), and J0313-1806 (P_QSO = 43.7%). However, we fail to select J0038-1527 (P_QSO = 0.3%). J0038-1527 exhibits strong broad absorption line (BAL) features (Wang et al.
2018), which can alter its colors, making it different from our 7 ≤ z ≤ 8 QSO models, which do not attempt to model BAL absorption. As in §6.2, we simulated a large number of contaminants and 7 ≤ z ≤ 8 QSOs, and compare their relative fluxes with those of the real z > 7 QSOs in Fig. 14. It is evident that J0038-1527 deviates from the blue contours (the deconvolved 7 ≤ z ≤ 8 QSO models) in some sub-plots, as the BAL absorption suppresses the flux in the affected bands. Again, we discuss the deviations of the real QSO properties from the simulated expectations in §7.1.

DISCUSSION
In §6 we showed that XDHZQSO is able to recover two of the eight 6 ≤ z ≤ 7 QSOs that passed our selection criteria (i.e., SNR(J) ≥ 5, SNR(g, r) < 3, and no missing data) and made it into our catalog: we select one QSO at 6 ≤ z ≤ 6.5 (ATLAS J025.6821-33.462 at z = 6.31) and one QSO at 6.5 ≤ z ≤ 7 (VIK 0305-3400 at z = 6.61). The application of XDHZQSO to the 7 ≤ z ≤ 8 QSOs found over the entire sky that meet our selection criteria allows us to recover five out of six 7 ≤ z ≤ 8 QSOs (J0252-0503 at z = 7.02, J1120+0641 at z = 7.085, J1007+2115 at z = 7.515, J1342+0928 at z = 7.541, and J0313-1806 at z = 7.642). In §7.1 we discuss the limitations of our selection technique that could explain our failure to select some of the known high-z QSOs in the VIKING area, while in §7.2 we provide a comparison between our code and other probabilistic classification methods.

Limitations of the XDHZQSO selection method
In §6.2 and §6.3 we showed that our method is only able to recover some of the known high-z QSOs. There are several reasons that can lead to the failure to select a source, and all of them involve the source properties and corresponding errors being more consistent with the XDHZQSO contaminant models than with the high-z QSO ones. Here we discuss the possible causes of the non-selection of some of the known z > 6 sources:
• Noisy data or photometric variability. In the case of a source with large photometric errors, our method naturally degrades its probability of belonging to the high-z QSO class if the data uncertainties imply that the object overlaps with the contaminant class. Other selection methods are not affected by this limitation: a color-selection technique that does not use photometric errors could select a noisy object, whereas XD spreads that probability out, meaning the object might be classified as a contaminant if, given the errors, it significantly overlaps the contaminant locus. However, we stress that taking errors into account is a feature, not a flaw, of our method (i.e., ignoring errors will generally result in a lower overall efficiency than accounting for them, which is the more optimal approach). Furthermore, since the various surveys were performed at different epochs, intrinsic variability of the sources could also play a role in lowering the computed probabilities (see Ross & Cross 2020 for a study of the variability of 5 < z < 7 quasars). However, since the variability of these objects is expected to be small (at most ∼10%, given their low structure functions; e.g., Vanden Berk et al. 2004; Kelly et al. 2009; Schmidt et al. 2010), we argue that this is probably not the main issue we are facing.
• Inaccurate models. Since our method is a classification technique, its validity strongly depends on correct modelling of the considered classes. If the XDHZQSO models are not a good representation of the underlying deconvolved flux distributions of one or more classes, then the computed probabilities are not reliable. Although this seems not to be the case for our contaminant class, whose models are trained on the real data coming from our survey, it can be an issue for our high-z QSO classes. In fact, our quasar models are trained on synthetic photometry determined from simulated QSO spectra whose properties are consistent with the mean spectrum of low-luminosity QSOs (McGreer et al. 2013). These simulated quasar spectra may not represent well the intrinsic relative-flux scatter of all luminous QSOs, or the properties of peculiar sources such as Broad Absorption Line QSOs (BALQSOs). For example, J0038-1527 is a BALQSO (Wang et al. 2018), and its relative flux in the BAL-affected band is lower than expected for objects of similar redshift and luminosity (see Fig. 14). Furthermore, in the sub-panels of Figs. 12, 13, and 14 involving the W1 and W2 bands it is apparent that our XDHZQSO quasar models are too rigid, as the simulated deconvolved QSO density distributions (blue contours) show too little scatter, compared to the real QSOs, to be a good representation of the intrinsic QSO scatter. For the W1 and W2 bands, there could also be source confusion/deblending errors in the photometry, since we performed simple aperture photometry without accounting for the large unWISE (≈ 6″) point spread function. A model that better reproduces the full distribution of the relative fluxes of luminous QSOs at low-z would provide a better classification of our sources. Our conclusion is therefore that the "simqso" simulator was designed for color cuts, but is not up to the demands of a density estimation method.
Our current simulated quasar sample fails to capture the full spectral diversity of the observed quasar population, which is important for a density estimation method. Hence, to improve our quasar selection, we have to move beyond modeling average quasar properties, for which "simqso" was originally designed, and instead capture the relative-flux distribution of the full population. In the future, we plan to mitigate these limitations by carefully modelling the relative fluxes of QSOs using empirical data from the SDSS and BOSS surveys, which would capture the full distribution of quasar SEDs and hence of relative fluxes.

Comparison with other probabilistic classification methods
Compared to other probabilistic classification methods, our approach has two main advantages: (i) Our method accounts for the photometric errors by convolving the underlying density distribution with the object's uncertainties, assuming that the relative-flux uncertainties are Gaussian. While this approach is required to correctly estimate the probability that a noisy object is a member of a given class, standard random forest methods ignore the photometric errors (e.g., Schindler et al. 2017; Wenzl et al. 2021), thus not utilizing all the information contained in the data. For bright sources this is not very problematic, given their small uncertainties. However, at high-z we have to take into account that: 1) QSOs drop out of the optical bands, so we need to accurately treat low signal-to-noise dropout fluxes, and 2) QSOs are rare at high-z and the LFs rise with decreasing flux, so to build up statistics the majority of targets will always lie near the flux limits of our data, and the inclusion of the photometric errors in the analysis of fainter sources prevents the overly optimistic identification of contaminants as high-z QSO candidates.
(ii) The BMC method (Mortlock et al. 2012) is also Bayesian and is directly analogous to what we are doing, with the caveat that it assumes perfect knowledge of the contaminant populations, based on SED templates and priors (number counts), which are unlikely to be correct in detail. Instead, our model for the contaminant class is purely empirical and does not need SED models for the mean properties of each possible contaminant. This approach is very powerful, as it captures the underlying deconvolved distribution of the contaminants using real data, and it includes all kinds of possible contaminants without the need to model them explicitly. In contrast, the BMC method requires exact knowledge of both the properties and the types of contaminants, whose feasibility has not yet been demonstrated. For example, even if brown dwarfs and early-type galaxies make up the majority of the contaminants, Type-2 QSOs, reddened low-z QSOs, and FeLoBAL QSOs could also contaminate the high-z selection, and constructing models for the number density and colors of all these sources would be a daunting task.

CONCLUSION
In this paper we described the application of the XDHZQSO method to select high-z (6 ≤ z ≤ 8) QSOs. Our approach is based on density estimation in the high-dimensional space inhabited by the optical-IR photometry. The main idea is that quasars and the far more abundant contaminants (cool dwarf stars, red galaxies, and lower-z reddened or absorbed QSOs) inhabit different regions of this space. Thus, probability density ratios yield the probability that an object is a quasar, which is used to select and prioritize candidates for spectroscopic follow-up, resulting in a fully optimal method. Density distributions are modeled as Gaussian mixtures, with principled accounting of errors using the XD algorithm. Compared to other probabilistic selection methods, the great advantage of our approach is that the poorly understood contaminants are modeled fully empirically.
High-z quasar models were trained on synthetic photometry in three redshift bins (6 ≤ z ≤ 6.5, 6.5 ≤ z ≤ 7, 7 ≤ z ≤ 8), whereas the contaminant model was trained on the VIKING (YJHK) imaging survey combined with deep DECaLS z-band and unWISE (W1W2) imaging, where all sources were required to be g and r dropouts. The combination of depth (J < 22) and wide area (1076 deg2) makes this the best panchromatic imaging for training quasar selection until Euclid arrives.
From extensive simulations we determined the thresholds (P > P_th) required to obtain a completeness ≥ 75% in each redshift bin, which results in selection efficiencies ≳ 15%. These high efficiencies indicate that the ≈ 1% efficiencies of recent color-cut based surveys are not necessary. The required thresholds P_th and resulting efficiencies depend on the redshift bin in question, owing to the changing overlap between quasars and contaminants, with the higher redshift bins having lower efficiencies. With the adopted P_th = 0.1, 0.07, and 0.1, we selected 14, 27, and 23 quasar candidates in the ranges 6 ≤ z ≤ 6.5, 6.5 ≤ z ≤ 7, and 7 ≤ z ≤ 8 in the VIKING footprint, respectively. These targets have been scheduled for optical and NIR spectroscopic follow-up, and the results will be published in future work (Nanni et al. in prep.).
In the VIKING footprint there are eight known 6 ≤ z ≤ 7 QSOs that meet our catalog criteria, of which two are selected. Since there are no known z > 7 QSOs in the VIKING footprint, we applied our method to six of the eight known z > 7 QSOs over the entire sky (we excluded two z > 7 QSOs because they do not meet our catalog criteria), and recover five of them. We argued that XDHZQSO misses some of these quasars for two reasons: 1) the existing quasar fluxes are noisy, so that our model correctly assigns them a low probability, and 2) there are inaccuracies in our modeling of quasars, namely that the synthetic quasar spectra we used do not capture the scatter in the distribution of relative fluxes. We argued that the first limitation is a feature rather than a flaw of our approach, since we deliver reliable probabilities by treating the noise, which overall results in a higher selection efficiency. As for the second, an empirical model of luminous quasar spectra will improve our classification, which we will pursue in future work. From the integration of the z = 6.7 LF down to J = 21.5, we expect to find ≈ 15, ≈ 5, and ≈ 2 QSOs at 6 ≤ z ≤ 6.5, 6.5 ≤ z ≤ 7, and 7 ≤ z ≤ 8, respectively, in the VIKING survey area.
Considering the completeness we derived in the three redshift ranges, and the fact that three and four J ≤ 21.5 QSOs have already been discovered in the VIKING footprint at 6 ≤ z ≤ 6.5 and 6.5 ≤ z ≤ 7, respectively, we expect to discover ≈ 10, ≈ 1, and ≈ 2 new QSOs at 6 ≤ z ≤ 6.5, 6.5 ≤ z ≤ 7, and 7 ≤ z ≤ 8, respectively, with future spectroscopic follow-up of our candidates.
Future applications of this methodology will focus on three datasets: UKIDSS, UHS, and Euclid. UKIDSS covers an area of ≈ 4000 deg² with multi-filter coverage similar to VIKING's, making it the best ground on which to apply XDHZQSO after VIKING. UHS instead covers a larger area (≈ 12,700 deg²) but with only three filters. To apply our method to UHS sources, which lack measurements in some of the bands used here, we will simply re-score by setting the errors in the bands with no measurements to a large number.
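The missing-band trick described above can be sketched in a few lines. This is an assumed implementation, not the paper's code: the idea is to keep the data vector's dimensionality but inflate the variance of any unobserved band so that it carries essentially no weight in the Gaussian likelihood evaluated by XD.

```python
import numpy as np

BIG_VARIANCE = 1e8  # effectively infinite compared to real flux errors

def pad_missing_bands(flux, cov, missing):
    """Return (flux, cov) with missing bands zeroed and their variance inflated.

    flux    : (n_bands,) relative-flux vector
    cov     : (n_bands, n_bands) covariance of the relative fluxes
    missing : indices of bands with no measurement
    """
    flux = flux.copy()
    cov = cov.copy()
    for i in missing:
        flux[i] = 0.0      # arbitrary placeholder; it is down-weighted anyway
        cov[i, :] = 0.0    # remove spurious correlations with observed bands
        cov[:, i] = 0.0
        cov[i, i] = BIG_VARIANCE
    return flux, cov
```

With the variance inflated this way, the Gaussian likelihood effectively marginalizes over the missing band, so the same trained model can score sources from surveys with fewer filters.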
Finally, the advent of Euclid in 2022 will provide a wealth of optical/IR data with a better separation between the properties of high-z QSOs and contaminants, as its six-year wide survey will cover 15,000 deg² of extragalactic sky in four bands: a broad optical band, VIS (5500−9000 Å), and three NIR bands, Y (9650−11920 Å), J (11920−15440 Å), and H (15440−20000 Å), to a depth of 24 mag at 5σ (Laureijs et al. 2011). Euclid's wide-field IR imaging should enable the discovery of ∼ 100 QSOs at z > 7, and ∼ 25 beyond the current record of z = 7.6, including ∼ 8 beyond z = 8.0 (Euclid Collaboration et al. 2019). Since no data have yet been delivered by Euclid, we will need to re-train XDHZQSO on the Euclid photometry to obtain the contaminant model. Finally, the high efficiencies in finding z > 7 QSOs reached by XDHZQSO suggest that we can perform much more efficient spectroscopic follow-up, and we have a framework to address the problem of low-efficiency selection with JWST.

[Figure caption: Deconvolved relative-flux vs. relative-flux contours for the simulated contaminants (black) and 6 ≤ z ≤ 6.5 QSOs (blue), compared to the properties of the known 6 ≤ z ≤ 6.5 QSOs from the VIKING survey area. The probability threshold to select these sources with our method is P_th = 0.1. Both J1217 and J1137 lie "off" the QSO contours in some sub-plots, lowering their probabilities of being classified as high-z QSOs.]

APPENDIX A: COVARIANCE COMPUTATION AND APPLICATION
To construct the contaminant models during the training step, we deconvolved the noisy relative fluxes of our contaminant sources, assuming that the relative-flux uncertainties are Gaussian and providing the covariance matrix of the uncertainties of each object. While the flux measurements in each filter are independent of one another, i.e. their noise is uncorrelated, the relative-flux errors are correlated, since each relative flux is the ratio of the flux in a given band to the J-band flux. Thus, for a source with fluxes F = {F_1, F_2, ..., F_n} and errors σ = {σ_1, σ_2, ..., σ_n} from n filters that include the J-band, the covariance of the relative fluxes f_i = F_i / F_J follows from standard first-order error propagation:

Cov(f_i, f_j) = (F_i F_j / F_J^4) σ_J^2,  i ≠ j,    (A2)

Var(f_i) = σ_i^2 / F_J^2 + (F_i^2 / F_J^4) σ_J^2.    (A3)

[Figure caption: Figure 14. Deconvolved relative-flux vs. relative-flux contours for the simulated contaminants (black) and 7 ≤ z ≤ 9 QSOs (blue), compared to the properties of the known z ≥ 7 QSOs to date. The probability threshold to select these sources with our method is P_th = 0.1. J0038−1527 deviates from the quasar locus indicated by the blue contours in one sub-plot, which lowers its QSO classification probability.]
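A minimal sketch of this covariance computation, assuming independent per-band flux errors propagated to first order into the ratios f_i = F_i / F_J (function name and array layout are our own):

```python
import numpy as np

def relative_flux_covariance(F, sigma, j):
    """Covariance matrix of the relative fluxes f_i = F[i] / F[j], i != j.

    F     : (n,) fluxes, one per filter, including the reference band
    sigma : (n,) independent 1-sigma flux errors
    j     : index of the reference (J-band) flux
    """
    keep = np.arange(len(F)) != j
    Fi, si = F[keep], sigma[keep]
    # Correlated term: the shared J-band noise couples every pair of ratios.
    cov = np.outer(Fi, Fi) * sigma[j] ** 2 / F[j] ** 4
    # Independent term: each band's own noise contributes on the diagonal only.
    cov[np.diag_indices_from(cov)] += si ** 2 / F[j] ** 2
    return cov
```

The off-diagonal elements are non-zero purely because every ratio shares the same noisy denominator, which is exactly why the relative-flux errors are correlated even though the raw flux errors are not.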
We provide the XD code with the noisy relative fluxes and covariance matrices computed using Eqs. A2 and A3. However, we noticed that for bins whose J-band median point is J_mp > 21 (i.e., SNR(J_mp) < 10), the XD code is not able to correctly deconvolve the contaminant properties. This is apparent in Fig. A1, where we compare the real data (black contours) with a noise-added sample from the deconvolved model (red contours) generated by the XD code in a faint bin (22.0 < J < 22.3, SNR(J_mp) = 5): we clearly do not obtain a noisy relative-flux distribution consistent with the real one. This deconvolved model was generated after providing a covariance matrix in the form of Eq. A2 plus A3, and the errors were added to the deconvolved sample as described in Appendix B. The failure of the XD code to correctly deconvolve the relative fluxes in the limit of faint J-band bins (SNR(J_mp) < 10) arises from the violation of our assumption that the relative-flux uncertainties are Gaussian in this regime. Indeed, the ratio of noisy quantities is in general not Gaussian distributed, as we assumed in order to use XD. This is a good approximation when the J-band flux has small errors relative to its value, whereas as the J-band flux becomes noisier, one generates progressively stronger tails in the ratios.

[Figure caption: Figure A1. Relative-flux vs. relative-flux contours comparing the real data (black) and a noise-added sample from the deconvolved model (red) generated by the XD code in the 22.0 < J < 22.3 bin. Errors have been added as explained in Appendix B, and the model was generated providing a covariance matrix in the form of Eq. A2 plus A3. The labelled quantities are relative fluxes (i.e., fluxes in different bands divided by the J-band flux). We do not obtain a noisy relative-flux distribution consistent with the real one.]

To remedy this problem, we
decided to construct our faint (J_mp > 21) deconvolved contaminant models by providing a diagonal covariance, with only the diagonal elements computed by Eq. A3 and zeros elsewhere. Although this is not formally the correct approach for non-independent quantities, in practice it provides good results during the training step.
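The diagonal fallback used for the faint bins can be sketched as follows (an assumed implementation under our own naming): each band keeps its full per-ratio variance, but the cross-band terms induced by the shared J-band noise are set to zero.

```python
import numpy as np

def diagonal_relative_flux_covariance(F, sigma, j):
    """Diagonal-only covariance for f_i = F[i] / F[j], i != j.

    Keeps each ratio's full variance (own-band noise plus the J-band
    contribution) but discards all off-diagonal correlations.
    """
    keep = np.arange(len(F)) != j
    Fi, si = F[keep], sigma[keep]
    var = si ** 2 / F[j] ** 2 + Fi ** 2 * sigma[j] ** 2 / F[j] ** 4
    return np.diag(var)
```

Compared with the full matrix, only the off-diagonal entries change, so the per-band error budget fed to the deconvolution is unchanged.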
In Fig. A2 we compare the real data (black contours) with a noise-added sample from the deconvolved model (red contours) generated by the XD code with a diagonal covariance. The J-band bin and the real data are the same as those displayed in Fig. A1. In this case, after re-adding the errors, the noisy simulated distributions are far more consistent with the real ones.
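The Gaussianity argument above is easy to check numerically. The following Monte-Carlo sketch (our own illustration, with arbitrary flux values) draws a ratio of two noisy Gaussian fluxes and measures its excess kurtosis, which is zero for a Gaussian: as the denominator's SNR drops, the ratio develops the heavy tails that break the XD assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
num = rng.normal(1.0, 0.1, n)  # numerator band at SNR = 10

excess = {}
for snr in (20, 5):
    den = rng.normal(1.0, 1.0 / snr, n)  # "J-band" flux at the given SNR
    ratio = num / den
    # Excess kurtosis: 0 for a Gaussian; heavy tails push it well above 0.
    r = ratio - ratio.mean()
    excess[snr] = (r ** 4).mean() / (r ** 2).mean() ** 2 - 3.0
    print(f"denominator SNR={snr:2d}: excess kurtosis = {excess[snr]:.2f}")
```

At SNR = 20 the ratio is very nearly Gaussian, while at SNR = 5 the kurtosis is strongly positive, consistent with the deconvolution failures seen in the faint J-band bins.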

APPENDIX B: NOISE MODEL
As described in several parts of this paper, we often sampled a large number of simulated high-z QSOs and contaminants from our XDHZQSO deconvolved models, and then computed their probabilities of being high-z QSOs based on their simulated properties. However, the sampling of deconvolved models produces noiseless relative