eBASCS: Disentangling Overlapping Astronomical Sources II, using Spatial, Spectral, and Temporal Information

The analysis of individual X-ray sources that appear in a crowded field can easily be compromised by the misallocation of recorded events to their originating sources. Even with a small number of sources, if their point spread functions overlap, the allocation of events to sources is a complex task that is subject to uncertainty. We develop a Bayesian method designed to sift high-energy photon events from multiple sources with overlapping point spread functions, leveraging the differences in their spatial, spectral, and temporal signatures. The method probabilistically assigns each event to a given source. Such a disentanglement allows more detailed spectral or temporal analysis to focus on each individual component in isolation, free of contamination from other sources or the background. We are also able to compute source parameters of interest, such as their locations, relative brightness, and background contamination, while accounting for the uncertainty in event assignments. Simulation studies that include event arrival time information demonstrate that the temporal component improves event disambiguation beyond using only spatial and spectral information. The proposed methods correctly allocate up to 65% more events than the corresponding algorithms that ignore event arrival time information. We apply our methods to two stellar X-ray binaries, UV Cet and HBC515 A, observed with Chandra. We demonstrate that our methods are capable of removing the contamination due to a strong flare on UV Cet B from its companion, which was approximately 40 times weaker during that event, and that evidence for spectral variability on timescales of a few ks can be detected in HBC515 Aa and HBC515 Ab.


INTRODUCTION
Analysis of X-ray data relies on the identification of the emitting sources and the allocation of the recorded events to the separate sources. When observing clusters of multiple contiguous sources, the likely overlap of the sources' point-spread functions (PSFs) casts uncertainty on the true origin of each recorded event, as well as on the physical and spectral properties of the components of the observed system. These uncertainties are often amplified by issues such as low-count data and background contamination. One possible solution involves fitting multiple PSF components to binned images (Primini & Kashyap 2014). However, X-ray data are originally acquired as event lists that allow high spatial resolution and also contain energy and arrival time information that is lost when binned into a 2D image. A standard alternative is to define a separate extraction region around each source; the purpose of the separate extraction regions is to allow further processing and individual analysis of the separate sources. This is sub-optimal in the event of substantial overlap in the sources' PSFs, since events from each source are then highly likely to contaminate the cores of the other sources, which is expected to result in misclassification. For a given PSF size, this misclassification rate increases with both the size of the extraction region and the proximity of the sources. Moreover, events outside of the extraction regions are discarded from the analysis, which further inflates the uncertainties in parameter estimation caused by the misclassification.
The problem of identifying sources in close pairs or in crowded fields is often encountered when running standard detection algorithms (e.g., wavdetect; Freeman et al. 2002), leading to source confusion and misclassification of extended sources. Source catalogs often flag problematic sources, use optical catalogs as a reference, or modify source and background extraction regions to exclude the overlapping parts (Watson et al. 2009; Evans et al. 2010; Principe et al. 2017).
The limitations of the manual extraction approach have motivated the development of alternative algorithmic methodologies to tackle the problem of overlapping point sources in high-energy photon image analysis. Jones et al. (2015) developed a statistical approach to probabilistically allocate events to sources using spatial and spectral information. This method is known as Bayesian Separation of Close Sources (BASCS). BASCS models the spatial distribution using a known PSF and leverages the differences in the energy spectra of the components to estimate the locations of the point sources and their relative intensities. BASCS simultaneously provides the posterior distribution of the number of sources, which is particularly useful for detecting sources. This final feature of BASCS has added to the effort of probabilistic cataloging in dense fields pursued by others (such as Portillo et al. 2017, whose method applies to multiband optical data), but is not the focus of this article. More recently, Sottosanti et al. (2017) used a Bayesian mixture model to disentangle sources using spatial information, with the added feature of allowing for a diffuse non-isotropic background, which is more suitable for γ-ray data. Picquenot et al. (2019) developed a method to separate extended sources from the background by using spectral information (unlike BASCS, this method uses binned images and assumes that the source components all have similar spectra). Foord et al. (2019) developed BayMAX, a statistical tool that uses spatial and spectral information to distinguish between single and dual active galactic nuclei (AGNs) via a Bayes Factor evaluation.
Here, we tackle the separate problem of time-variable sources. In many cases, temporal information carries significant discriminatory power, since astronomical objects display brightness and spectral variability. Their spectral energy distributions are also expected to change with source intensity. Therefore, we expect a method that leverages the temporal signatures of the observed sources to outperform existing algorithms in the task of allocating recorded events to temporally variable sources. The principles behind the methodology developed here follow those of Jones et al. (2015), but the variability of source intensity across time is incorporated into the model to aid in the task of source separation.
The paper is organized as follows. Section 2 provides an overview of the existing methodology and introduces the proposed methods. Section 3 presents the statistical analysis framework and computational procedures. Simulation studies are carried out in Section 4 to evaluate the performance of the methods. In Sections 5 and 6 two datasets from observations of the UV Cet and HBC 515 A systems (respectively) are analyzed. Finally, potential extensions and limitations are discussed in Section 7 and detailed results from our numerical studies appear in a number of appendices.

Structure of the Data
High-energy detectors, such as Charge-Coupled Device (CCD) imaging spectrometers, record detector spatial coordinates, $(x_i, y_i)$, arrival time, $t_i$, and energy, $E_i$. The full event list for $n$ recorded events is denoted $\mathbf{x} = \{\mathbf{x}_i\}_{i=1}^{n} = \{(x_i, y_i, t_i, E_i)\}_{i=1}^{n}$. The observed spatial and spectral data are subject to the effects of the Point Spread Function (PSF) and the Redistribution Matrix Function (RMF). While the spatial dispersion of events by the PSF is explicitly accounted for in the model, for this study the RMF-induced energy dispersion is ignored and the observed spectra are modelled, rather than the source spectra.
Each recorded event is assumed to originate either from one of the sources located in the field of view or from the background. In this article, only point sources are considered. Our proposed methods, unlike BASCS (Jones et al. 2015), assume that the number of observed sources in the image is known. The locations, intensities, spectral distributions, and light curves are unknown. Background events are assumed to be spatially uniform across the image, including under the sources. Their spectrum is assumed to be known up to a normalization factor, and can be either uniform or one of the models described in Section 2.4. We expect the modeling to be robust to mild deviations from the assumption of spatial uniformity of the background, as photons that are spatially distant from the sources are highly likely to be attributed to the background, irrespective of the background model.

Finite Mixture Model
A natural way to describe data assumed to originate from a collection of sub-populations (in this case, the sources and background) is with the class of statistical models known as finite mixture distributions. In such models, each event $\mathbf{x}_i$, $i = 1, \ldots, n$, is assumed to arise from one of $K + 1$ ($K$ sources and the background) component distributions $\{h_j(\mathbf{x}_i \mid \Theta_j)\}_{j=0}^{K}$, each parametrized by a parameter vector $\Theta_j$. It is further assumed that it is not known which mixture component generated each observation. The background contamination is defined as corresponding to component $j = 0$ of the finite mixture. Further denote by $w_j$ the proportion of the population originating from mixture component $j$, so that $\sum_{j=0}^{K} w_j = 1$. The likelihood of data $\mathbf{x}$ is thus
$$p(\mathbf{x} \mid \Theta, \mathbf{w}) = \prod_{i=1}^{n} \sum_{j=0}^{K} w_j\, h_j(\mathbf{x}_i \mid \Theta_j).$$
With the aim of learning which mixture component underlies each observed event, we introduce the latent indicator variables, $s_i$, into the model: $s_i = j$ if $\mathbf{x}_i$ is drawn from mixture distribution $h_j$, i.e., if event $i$ originated from source $j$. Since $\mathbf{w} = (w_0, \ldots, w_K)$ gives the proportion of the population of events belonging to each mixture component, and $\mathbf{s} = (s_1, \ldots, s_n)$ gives the mixture assignments of the events, it is sensible to model the component counts $n_j = \sum_{i=1}^{n} \mathbb{1}(s_i = j)$ with a Multinomial distribution:
$$(n_0, \ldots, n_K) \mid \mathbf{w} \sim \text{Multinomial}(n; (w_0, \ldots, w_K)). \qquad (2)$$
The joint distribution of the data, $\mathbf{x}$, and latent variables, $\mathbf{s}$, is then given by
$$p(\mathbf{x}, \mathbf{s} \mid \Theta, \mathbf{w}) = \prod_{i=1}^{n} w_{s_i}\, h_{s_i}(\mathbf{x}_i \mid \Theta_{s_i}).$$
For the rest of this article, we refer to the mixture weights, $\mathbf{w}$, as the relative intensities of the sources. Now that the general framework for statistical modelling of overlapping sources is established, models for the individual mixture distributions $\{h_j(\mathbf{x}_i \mid \Theta_j)\}_{j=0}^{K}$ can be constructed. In particular, each component is a combination of sub-models for the spatial, spectral and temporal data. BASCS (Jones et al. 2015) only included $\{(x_i, y_i, E_i)\}_{i=1}^{n}$, i.e., only the spatial and spectral information. Our contribution to the existing framework is the formulation of a model for the temporal data $\{t_i\}_{i=1}^{n}$ and its integration into the current methodology.
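To make the mixture structure concrete, the likelihood and the per-event component membership probabilities can be sketched in a few lines. This is a minimal Python illustration with toy densities; the function names are ours, not part of the eBASCS code, which is written in R and STAN.

```python
import math

def mixture_loglik(events, weights, densities):
    """Log-likelihood of a finite mixture: sum_i log sum_j w_j h_j(x_i).

    `densities` is a list of K+1 callables h_j(event) -> density value;
    by convention component 0 plays the role of the background."""
    total = 0.0
    for x in events:
        total += math.log(sum(w * h(x) for w, h in zip(weights, densities)))
    return total

def assignment_probs(x, weights, densities):
    """Posterior probability that event x came from each component:
    P(s_i = j | x_i) = w_j h_j(x_i) / sum_k w_k h_k(x_i)."""
    vals = [w * h(x) for w, h in zip(weights, densities)]
    norm = sum(vals)
    return [v / norm for v in vals]
```

With a uniform "background" density and a compactly supported "source" density, `assignment_probs` returns the vector that the Gibbs-type update of the latent indicators samples from.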

Spatial Model
(Footnote 1: Let $n_j = \sum_{i=1}^{n} \mathbb{1}(s_i = j)$ for $j = 0, \ldots, K$. The probability mass function of the Multinomial$(n; (w_0, \ldots, w_K))$ distribution is then $\binom{n}{n_0, \ldots, n_K} \prod_{j=0}^{K} w_j^{n_j}$.)
The detector spatial coordinates of an event, $(x_i, y_i)$, are a deviation from the actual (unknown) position of the event's originating source. This deviation is due to the PSF, the response of the imaging system to the incident events, which redistributes them on the surface of the detector according to a telescope- and detector-specific distribution centered at the source position $\mu$. In our numerical studies, we assume that the PSF is well approximated by a King profile (see Appendix A and Jones et al. 2015). The observed event locations from source $j$ are, therefore, distributed according to the PSF centered at the unknown source location $\mu_j$, i.e.,
$$(x_i, y_i) \mid s_i = j \sim f(\mu_j, \cdot),$$
for $i = 1, \ldots, n$, where $f(\mu, (x, y))$ denotes the PSF centered at $\mu$ evaluated at $(x, y)$.
As for the background, the assumption of spatial uniformity across the image means that $(x_i, y_i) \mid s_i = 0$ is uniformly distributed over the image, i.e., the background spatial density is constant and equal to $1/A$, where $A$ is the area of the image.
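The two spatial densities can be sketched as follows, assuming the commonly used King-profile form $f(r) \propto (1 + r^2/r_0^2)^{-\eta}$; the core radius $r_0$ and slope $\eta$ used here are placeholder values for illustration, not Chandra calibration values.

```python
import math

def king_psf(x, y, mu_x, mu_y, r0=0.6, eta=1.5):
    """2D King-profile PSF density centred at (mu_x, mu_y):

        f(r) = (eta - 1) / (pi * r0^2) * (1 + r^2 / r0^2)^(-eta),  eta > 1,

    which integrates to 1 over the plane. r0 and eta are placeholders."""
    r2 = (x - mu_x) ** 2 + (y - mu_y) ** 2
    return (eta - 1.0) / (math.pi * r0 ** 2) * (1.0 + r2 / r0 ** 2) ** (-eta)

def background_density(area):
    """Spatially uniform background: constant density 1/A over the image."""
    return 1.0 / area
```

Note the heavy power-law tail of the King profile: even well-separated sources exchange a non-negligible number of events, which is precisely why probabilistic allocation helps.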

Spectral Model
In addition to spatial information, Jones et al. (2015) discuss the benefits of including spectral information when allocating events to sources. Generally, approximating the observed spectral energy distribution with a rough shape is a simple way to exploit spectral differences among sources, with relatively little additional computation. (Since we use approximate spectral shapes to distinguish sources, we ignore the effects of the Redistribution Matrix Function (RMF) and Auxiliary Response File (ARF) and instead model the distribution of the observed spectrum as recorded by the detector. Although differences in the observed spectra are just as powerful as differences in source spectra for source separation, this precludes us from interpreting the parameters of the observed spectral model in terms of the physical properties of the sources.) A simple sensible photon energy model is a Gamma distribution, because it only allows positive energy values and its mode is around the mid-to-low energy region. The density of the Gamma$(\alpha, \alpha/\beta)$ distribution is
$$p(E \mid \alpha, \beta) = \frac{(\alpha/\beta)^{\alpha}}{\Gamma(\alpha)}\, E^{\alpha - 1} e^{-\alpha E/\beta}, \quad E > 0,$$
so that $\alpha$ is the shape and $\beta$ the mean. This yields
$$E_i \mid s_i = j \sim \text{Gamma}(\alpha_j, \alpha_j/\beta_j),$$
for $i = 1, \ldots, n$, where $\alpha_j$ and $\beta_j$ are respectively the unknown shape and mean parameters of the Gamma distribution. This modeling strategy makes no attempt at describing the energy distributions in a detailed science-based manner, since for instance emission lines are ignored. Nevertheless, Jones et al. (2015) showed that when sources are relatively close spatially, modelling their rough spectral shapes improves source detection and separation and increases the precision of source parameter estimates. In practice, however, a single Gamma distribution may not capture the shape of the spectra sufficiently well, and the mis-specification of the spectral model can bias the parameter estimates. Jones et al. (2015) discuss a more general spectral model, defined by a mixture of two Gamma distributions, that allows for a more flexible approximation of the observed energy distribution.
This energy model can be written as
$$E_i \mid s_i = j \sim \pi_j\, \text{Gamma}(\alpha_{j1}, \alpha_{j1}/\beta_{j1}) + (1 - \pi_j)\, \text{Gamma}(\alpha_{j2}, \alpha_{j2}/\beta_{j2}),$$
for $i = 1, \ldots, n$, where $\pi_j$ is the mixture weight. We denote the modelled spectral distribution by $g$, with $\xi_j$ representing its parameters. The observed spectral energy for source $j$ is therefore distributed as $g$ with unknown parameter $\xi_j$, i.e.,
$$E_i \mid s_i = j \sim g(\,\cdot \mid \xi_j).$$
The reader is referred to Jones et al. (2015) for further discussion of the specification of the spectral model and the costs and benefits of more complex spectral energy distributions. The background contamination is again assumed to have a uniform spectral distribution,
$$E_i \mid s_i = 0 \sim \text{Uniform}(E_{\min}, E_{\max}),$$
where $E_{\min}$ and $E_{\max}$ are the detector-specific minimum and maximum observable photon energies.
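In the shape/mean parametrisation used here, the Gamma density and its two-component mixture can be sketched as follows (a minimal Python illustration of the densities above; function names are ours).

```python
import math

def gamma_pdf(e, shape, mean):
    """Gamma density parametrised by shape alpha and mean beta,
    i.e. Gamma(alpha, alpha/beta) with rate alpha/beta."""
    rate = shape / mean
    return (rate ** shape) / math.gamma(shape) * e ** (shape - 1.0) * math.exp(-rate * e)

def spectral_density(e, pi_mix, a1, b1, a2, b2):
    """Two-component Gamma mixture for the observed spectrum of one source."""
    return pi_mix * gamma_pdf(e, a1, b1) + (1.0 - pi_mix) * gamma_pdf(e, a2, b2)
```

A quick numerical check confirms that with shape 3.18 and mean 1832 (the values used later in the simulation studies) the density integrates to one and has the stated mean.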

Temporal Model and Time-varying Energy Distributions
Our proposed extension of BASCS (Jones et al. 2015) incorporates event arrival times into the model. The assumption of BASCS that both the relative brightness of the sources and their spectral energy distributions remain invariant during the observation period is thus relaxed. In general, astronomical objects display brightness and spectral variability. Furthermore, spectral energy distributions are also expected to change with source intensity. In many cases temporal information therefore carries significant discriminative power, and a model that includes it is expected to outperform BASCS in the allocation of recorded events to sources.
We model the light curve of each source as a piece-wise constant function defined on a pre-selected collection of time bins, the union of which equals the entire observation period. Assuming that the light curves are uniform over each of the time intervals yields a simple yet flexible temporal model, well suited to capturing temporal variability.
We formulate the proposed extension as follows: for each source $j$, let $T_j = \{\tau_{jb}\}_{b=0}^{B}$, with $0 = \tau_{j0} < \tau_{j1} < \ldots < \tau_{jB} = T$, be a collection of breakpoints segmenting the full observation interval into $B$ bins. Let $u_i$ indicate the bin in which event $i$ is detected, i.e., $u_i = b$ if $t_i \in (\tau_{b-1}, \tau_b]$. The labels $\{u_i\}$ are an observed, discretized version of the original arrival-time data. These play a role in measuring the time-varying intensity within each of the sources. To measure the within-source relative intensity over time, first let $m_{jb}$ represent the number of events from source $j$ that are observed in time bin $b$. For each source $j$, let $\lambda_j = (\lambda_{j1}, \lambda_{j2}, \ldots, \lambda_{jB})$ be the relative intensities of source $j$ across the time bins, with $\sum_{b=1}^{B} \lambda_{jb} = 1$. For source $j$, the numbers of events observed across the time bins are modeled with a Multinomial distribution,
$$(m_{j1}, \ldots, m_{jB}) \mid \lambda_j \sim \text{Multinomial}(n_j; \lambda_j),$$
where $\sum_{b=1}^{B} m_{jb} = n_j$. The collection of time intensities across the background and the sources is denoted $\Lambda = (\lambda_0, \lambda_1, \ldots, \lambda_K)$.
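The discretization of arrival times into bin labels $u_i$, and the per-source bin counts $m_{jb}$, reduce to a few lines of bookkeeping. The sketch below is our own illustration of those definitions; bin $b$ covers $(\tau_{b-1}, \tau_b]$.

```python
def bin_labels(arrival_times, breakpoints):
    """Map each arrival time t_i to its bin label u_i in {1, ..., B},
    where bin b covers (breakpoints[b-1], breakpoints[b]]."""
    labels = []
    for t in arrival_times:
        b = 1
        while b < len(breakpoints) - 1 and t > breakpoints[b]:
            b += 1
        labels.append(b)
    return labels

def bin_counts(labels, assignments, source, n_bins):
    """m_{jb}: number of events assigned to `source` falling in each time bin."""
    counts = [0] * n_bins
    for u, s in zip(labels, assignments):
        if s == source:
            counts[u - 1] += 1
    return counts
```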
The temporal model is not intended to capture nuanced variability in the observed light curves. Instead, in conjunction with coarse energy distributions, it captures general trends in time that may have discriminatory power.
The source-level energy distributions may also be time variable, i.e., they may change at each of the breakpoints $\tau_{jb}$. The energy distribution for events arriving in time bin $b$ from source $j$ is modeled with a distribution $g$ and source- and time-specific parameters $\xi_{jb}$, i.e.,
$$E_i \mid s_i = j,\, u_i = b \sim g(\,\cdot \mid \xi_{jb}),$$
where $g$ denotes the mixture of two Gamma distributions defined in Section 2.4. In this case the models for the spectral energy distribution are independent for each source and within each time bin. Because the number of parameters in this energy model grows rapidly with the number of sources and the number of time bins, it is difficult to fit without a sufficient number of events. A more sophisticated model might have fewer time bins for dimmer sources. In our numerical studies and examples, we assume the source spectra are invariant over the whole observation period.

Selection of time bins
In order to account for temporal changes, we split the data into possibly irregular bins that capture the variations in the observed light curve of the system. There is a trade-off here between the fineness with which one wants the variability to be modeled on the one hand, and, on the other, the amount of data needed for useful uncertainty intervals on the fitted parameters, the total number of parameters, and the running time. The dimension of the parameter space scales linearly with the number of time segments; for the sake of parsimony, as well as to avoid unnecessarily complex calculations, a small number of segments is preferred. The temporal segmentation model is not designed to capture subtle variations directly, but rather aims to improve disambiguation between overlapping photon events.
We thus allow the number of breakpoints to be set on a case-by-case basis, and choose as the breakpoints those times which show the largest changes in counts between adjacent bins. Given the user-specified number of breakpoints, we develop a simple data-driven procedure to identify the locations that best isolate transient variations and flares in the light curves. The procedure first divides the event-arrival times into a high-resolution histogram and then identifies the adjacent bins with the largest differences in intensity. The breakpoints are set to separate these bins; see Figure 1 for an illustration of this method in the case of the UV Cet light curve with the number of breakpoints preset to five. Finally, the algorithm merges all of the histogram bins between the selected breakpoints to obtain the target number of time intervals. Full details appear in the form of pseudo-code in Appendix B; computer code is included in the eBASCS software package available on the CHASC GitHub software library.
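The procedure can be sketched as follows. This is a simplified Python rendering of the steps described above; the authoritative pseudo-code is in Appendix B, and the histogram resolution here is an illustrative choice.

```python
def select_breakpoints(times, t_start, t_end, n_hist, n_breaks):
    """Data-driven breakpoint selection (sketch):

    1. Bin arrival times into a fine histogram of n_hist bins.
    2. Rank the boundaries between adjacent histogram bins by the absolute
       difference in counts across them.
    3. Keep the n_breaks boundaries with the largest differences; together
       with t_start and t_end they delimit the final time bins (i.e. the
       histogram bins between chosen breakpoints are merged)."""
    width = (t_end - t_start) / n_hist
    counts = [0] * n_hist
    for t in times:
        k = min(int((t - t_start) / width), n_hist - 1)
        counts[k] += 1
    # boundary i sits between histogram bins i-1 and i
    diffs = [(abs(counts[i] - counts[i - 1]), i) for i in range(1, n_hist)]
    diffs.sort(reverse=True)
    chosen = sorted(i for _, i in diffs[:n_breaks])
    return [t_start] + [t_start + i * width for i in chosen] + [t_end]
```

For a light curve with a single sharp jump in intensity, the procedure places the single requested breakpoint at the jump.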

Statistical Models and Likelihoods
Here we combine the spatial, spectral, and temporal models into the overall models that we use and compare in this article. We refer to the model that only incorporates spatial data, $\{(x_i, y_i)\}_{i=1}^{n}$, as the baseline or spatial model. Its parameters are denoted $\Theta = \{\mu_j\}_{j=1}^{K}$, and, with $A$ the image area, the corresponding likelihood is
$$L_{\text{spatial}}(\Theta, \mathbf{w}) = \prod_{i=1}^{n} \Big[ \frac{w_0}{A} + \sum_{j=1}^{K} w_j\, f(\mu_j, (x_i, y_i)) \Big].$$
BASCS (Jones et al. 2015) models the observed event spatial and energy data, $\{(x_i, y_i, E_i)\}_{i=1}^{n}$. The parameters for this model are denoted $\Theta = \{\mu_j, \xi_j\}_{j=1}^{K}$, and the corresponding likelihood is
$$L_{\text{BASCS}}(\Theta, \mathbf{w}) = \prod_{i=1}^{n} \Big[ \frac{w_0}{A\,(E_{\max} - E_{\min})} + \sum_{j=1}^{K} w_j\, f(\mu_j, (x_i, y_i))\, g(E_i \mid \xi_j) \Big].$$
The proposed method, eBASCS, models the observed event spatial, spectral and temporal data, $\{(x_i, y_i, E_i, t_i)\}_{i=1}^{n}$. The parameters for this model are denoted $\Theta = \{\mu_j, \xi_j, \lambda_j\}_{j=1}^{K}$, and the corresponding likelihood is
$$L_{\text{eBASCS}}(\Theta, \mathbf{w}, \Lambda) = \prod_{i=1}^{n} \Big[ \frac{w_0\, \lambda_{0 u_i}}{A\,(E_{\max} - E_{\min})} + \sum_{j=1}^{K} w_j\, f(\mu_j, (x_i, y_i))\, g(E_i \mid \xi_j)\, \lambda_{j u_i} \Big].$$
Some X-ray detectors do not record the energy of the observed events with useful accuracy, e.g., the Chandra/HRC-S detector used to record the UV Cet observation analysed in Section 5. In such cases, it is useful to consider a spatio-temporal model that only includes the spatial and temporal components. We refer to this model as the space+time model. The parameters for this model are $\Theta = \{\mu_j, \lambda_j\}_{j=1}^{K}$, which yields the likelihood
$$L_{\text{space+time}}(\Theta, \mathbf{w}, \Lambda) = \prod_{i=1}^{n} \Big[ \frac{w_0\, \lambda_{0 u_i}}{A} + \sum_{j=1}^{K} w_j\, f(\mu_j, (x_i, y_i))\, \lambda_{j u_i} \Big].$$
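Whichever variant is used, each factor of the likelihood has the same product-of-submodels form. A minimal sketch of a single eBASCS-style factor, assuming the spatial, spectral, and temporal sub-model values have already been evaluated per component (the function name and calling convention are ours):

```python
import math

def event_loglik(spatial, spectral, temporal, weights):
    """Log of one factor of an eBASCS-style likelihood:

        log sum_j w_j * f_j(x_i, y_i) * g_j(E_i) * lambda_{j, u_i}.

    `spatial`, `spectral`, `temporal` are lists giving, for each component j
    (background is j = 0), the PSF or uniform density at the event location,
    the spectral density at its energy, and the relative intensity of its
    time bin."""
    return math.log(sum(w * f * g * lam
                        for w, f, g, lam in zip(weights, spatial, spectral, temporal)))
```

Dropping the `spectral` (or `temporal`) factor, i.e. setting it to 1 for every component, recovers the space+time (or BASCS) factor.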

Inferential approach
We adopt a Bayesian statistical approach. This allows us to quantify uncertainty in the model parameters via their joint (posterior) distribution given the observed data. From this we can compute point estimates and error bars, if required. The Bayesian paradigm also allows us to incorporate existing scientific knowledge about the likely values of the parameters into the model via their prior distributions.
Once the prior distributions are specified and the data are observed (and integrated into the model via the likelihood), Bayes' Theorem gives the expression for the posterior distribution of the parameters, i.e., the updated beliefs about the model parameters after observing the data:
$$p(\Theta, \mathbf{w} \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \Theta, \mathbf{w})\, p(\Theta, \mathbf{w})}{p(\mathbf{x})}, \qquad (16)$$
where $p(\mathbf{x} \mid \Theta, \mathbf{w}) \equiv L(\Theta, \mathbf{w})$ is the likelihood of the data under the chosen model, $p(\Theta, \mathbf{w})$ is the prior distribution of the parameters, and $p(\mathbf{x})$ is the marginal distribution of the data. While $p(\mathbf{x})$ is sometimes used for model selection, for the purposes of Equation 16 it can be viewed as a normalizing constant and need not be computed. Our choices of prior distributions are discussed in Section 3.2.

Specifying prior distributions
Our general approach is to specify uninformative and computationally practical prior distributions. In our applications, we usually observe a large enough data set to mitigate the effect of the prior distributions on posterior inference. However, any information about likely parameter values can and should be encoded into the prior distributions.
Following Jones et al. (2015), we specify uniform (across the image) priors on the source locations,
$$\mu_j \sim \text{Uniform(image)}, \quad j = 1, \ldots, K,$$
but one could also specify a distribution centered at a likely value $\mu_j^0$ for $\mu_j$. The prior distributions for the other parameters are those given in Jones et al. (2015). The parameters that are probability vectors, $\mathbf{w}$ and $\lambda_j$, and govern the Multinomial splitting of events among sources and time bins, respectively, are given Dirichlet prior distributions. Picking conjugate prior distributions (like Dirichlet priors for Multinomial likelihoods) is computationally convenient and allows for more transparent interpretations of posterior inferences (Gelman et al. 2013). Hence,
$$\mathbf{w} \sim \text{Dirichlet}(\gamma, \ldots, \gamma), \qquad \lambda_j \sim \text{Dirichlet}(\delta, \ldots, \delta), \quad j = 0, \ldots, K.$$
Following Jones et al. (2015), the above hyperparameters are set at $\gamma = \delta = 1$, so that the prior distributions correspond to as much information as a single event added to each source (or each time bin for $\lambda_j$).
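Dirichlet-Multinomial conjugacy makes the interpretation transparent: conditional on the event allocations, the posterior of the proportions is again Dirichlet, with each count incremented by the prior hyperparameter. A one-function sketch (our own illustration):

```python
def dirichlet_posterior_mean(counts, gamma=1.0):
    """Posterior mean of Multinomial proportions under a symmetric
    Dirichlet(gamma, ..., gamma) prior: (gamma + n_j) / (J*gamma + n),
    where J is the number of categories and n the total count."""
    total = sum(counts) + gamma * len(counts)
    return [(gamma + c) / total for c in counts]
```

With `gamma=1`, each category behaves as if it had received one extra event, which is exactly the "single added event" reading given above.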
For a spectral model defined as a single Gamma distribution, we use the prior distributions $\alpha_j \sim \text{Gamma}(2, 0.5)$ and $\beta_j \sim \text{Uniform}(E_{\min}, E_{\max})$. If a mixture of two Gamma distributions is used for the spectral model, a reasonable choice of prior for the mixture weight parameter $\pi_j$ is a Beta(2, 2) distribution (assuming a 2-component mixture), again following Jones et al. (2015).

Statistical Computation
In order to derive useful summaries of the posterior distribution of the model parameters, we generate a Monte Carlo sample of parameter values from the posterior. Indeed, for an unknown model parameter $\theta$ with marginal posterior distribution $p(\theta \mid \mathbf{x})$, a sample $\{\theta^{(1)}, \ldots, \theta^{(M)}\}$ of draws from the posterior allows us to compute point estimates and error bars for $\theta$, e.g., via the mean and standard deviation of the sample if the posterior is roughly Gaussian (e.g., Stenning & van Dyk 2021), or more generally via quantiles of the sample.
We use Markov Chain Monte Carlo (MCMC) to obtain a sample of $(\Theta, \mathbf{w})$ from its joint posterior distribution, $p(\Theta, \mathbf{w} \mid \mathbf{x})$. A Markov chain is a set of sequentially sampled random variables, $(\Theta, \mathbf{w})^{(t)}$, for $t = 1, 2, \ldots$, such that each $(\Theta, \mathbf{w})^{(t)}$ depends on the history of the chain only through the most recent iterate, $(\Theta, \mathbf{w})^{(t-1)}$. MCMC methods are a class of iterative algorithms that produce a Markov chain with stationary distribution equal to the target Bayesian posterior distribution. If run for a sufficient number of iterations, the marginal distribution of each of the (correlated) iterates of the chain approaches the target posterior distribution. Thus, if the sample comprised of the MCMC iterations before the chain reaches approximate convergence (i.e., the burn-in iterations) is discarded, the remaining sample (i.e., the main run) can be treated as a correlated sample from the target posterior distribution. In our simulations, we typically discard the first half of the chain as burn-in.
Positively correlated samples carry less information than independent samples of the same size, and thus yield estimates of posterior quantities with higher Monte Carlo error. The Effective Sample Size (ESS) approximates how large an independent sample would need to be to carry the same amount of information as our correlated MCMC sample (e.g., Stenning & van Dyk 2021). In our numerical studies and examples, we choose the number of main-run iterations to obtain an ESS between 1000 and 2000. Generally, this requires a total run of about 40,000 iterations. (We discard the first half of each chain as burn-in and thin the remainder, saving only every tenth iterate, to reduce memory requirements.) We also monitor convergence of the chains through visual diagnostics such as trace plots and auto-correlation plots. For a more detailed discussion of practical considerations involved with MCMC, the reader is referred to Gelman et al. (2013) or, for a more astronomy-oriented account, to Stenning & van Dyk (2021). We develop both R code for MCMC methods based on the Metropolis algorithm 6 (Metropolis et al. 1953) and STAN 7 code which implements Hamiltonian Monte Carlo 8 (Duane et al. 1987; Neal 2011).
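The post-processing just described (burn-in removal, thinning, and a crude autocorrelation-based ESS) can be sketched as follows. This is our own Python illustration; the eBASCS implementations are in R and STAN, and production ESS estimators are more careful about autocorrelation truncation.

```python
def thin_chain(chain, burn_frac=0.5, thin=10):
    """Discard the first burn_frac of the chain, keep every `thin`-th draw."""
    start = int(len(chain) * burn_frac)
    return chain[start::thin]

def effective_sample_size(draws, max_lag=100):
    """Crude ESS estimate: M / (1 + 2 * sum of positive lag autocorrelations),
    truncating the sum once the autocorrelations die out."""
    m = len(draws)
    mean = sum(draws) / m
    var = sum((d - mean) ** 2 for d in draws) / m
    if var == 0.0:
        return 0.0
    acsum = 0.0
    for k in range(1, min(max_lag, m - 1)):
        rho = sum((draws[i] - mean) * (draws[i + k] - mean)
                  for i in range(m - k)) / (m * var)
        if rho < 0.05:
            break
        acsum += rho
    return m / (1.0 + 2.0 * acsum)
```

A chain that repeats each value many times (strong positive correlation) yields a small ESS, while a chain with no positive autocorrelation yields an ESS near its nominal length.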
Naturally, our methodology has certain limitations. Here we detail two and propose solutions. Finite mixture models, such as the one we use with eBASCS, produce multi-modal posterior distributions, which are notoriously difficult to explore via MCMC and thus require special care. The most obvious and important issue is the possibility that the algorithm becomes "stuck" in one mode, and thus fails to explore the full posterior distribution. To give a more detailed example, consider a binary star system with unknown location parameters $\mu_1$ and $\mu_2$. If the sources are well separated, it might happen that the MCMC samples of both $\mu_1$ and $\mu_2$ converge to the location of source 1; see Figure 2. This worry can be addressed by choosing starting values for $\mu_1$ and $\mu_2$ at random locations along the edges of the image close to the respective sources. Another option, which we recommend, is to initialize the location parameters at the approximate locations of the modes of the spatial image of observed event locations, e.g., found by applying Kernel Density Estimation 9 (KDE) to the raw images. We have not found it necessary to implement sophisticated multi-modal sampling techniques (e.g., parallel tempering 10 (Geyer 1991) or evolutionary Monte Carlo 11 (Liang & Wong 2000)) since the failure mode has been obvious and the remedies easily implemented.
6 The Metropolis algorithm is a simple MCMC sampler that produces a Markov chain by sampling the next iterate from a proposal distribution centred at the current iterate (i.e., a symmetric proposal distribution), using a rejection rule designed to ensure that the stationary distribution of the Markov chain equals the target posterior distribution. See Speagle (2019) for a recent, and accessible, description of MCMC concepts.
7 STAN is a probabilistic programming language for Bayesian inference with gradient-based MCMC techniques (Carpenter et al. 2017).
8 Hamiltonian Monte Carlo is an MCMC method that exploits the differential structure of the target posterior distribution to generate a Markov chain that efficiently represents the posterior distribution. See Betancourt (2017) for an introduction to Hamiltonian Monte Carlo.
9 Kernel Density Estimation is a non-parametric procedure designed to estimate probability densities from observed data (Davis et al. 2011).
10 Parallel tempering is a multiple-chain MCMC method, where each chain targets one of a sequence of tempered versions of the target distribution. Swapping draws among the chains enables improved exploration of multimodal posterior distributions.
11 Evolutionary Monte Carlo incorporates features from simulated annealing and genetic algorithms to improve on the efficiency of standard MCMC methods (Liang & Wong 2000).
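The mode-based initialisation recommended above can be sketched with a crude grid-evaluated kernel density in place of a full KDE; the grid size and bandwidth below are illustrative choices, and the function name is ours.

```python
import math

def initial_locations(xs, ys, n_src, grid=20, bw=1.0):
    """Pick starting values for the source locations at the modes of a
    smoothed spatial density (a stand-in for KDE-based initialisation).

    Evaluates a Gaussian-kernel density on a grid and greedily keeps the
    n_src highest grid points that are at least 2*bw apart."""
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    cells = []
    for gx in range(grid):
        for gy in range(grid):
            cx = x0 + (x1 - x0) * (gx + 0.5) / grid
            cy = y0 + (y1 - y0) * (gy + 0.5) / grid
            dens = sum(math.exp(-((cx - x) ** 2 + (cy - y) ** 2) / (2.0 * bw ** 2))
                       for x, y in zip(xs, ys))
            cells.append((dens, cx, cy))
    cells.sort(reverse=True)
    chosen = []
    for _, cx, cy in cells:
        if all((cx - a) ** 2 + (cy - b) ** 2 >= (2.0 * bw) ** 2 for a, b in chosen):
            chosen.append((cx, cy))
        if len(chosen) == n_src:
            break
    return chosen
```

Starting both location parameters near their respective density peaks avoids the label-collapse failure mode illustrated in Figure 2.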
Another possible concern is that constraints on the supports of some parameters can result in computational inefficiency. For instance, the source locations $\mu_1$ and $\mu_2$ can only take values within the boundaries of the image, and the spectral parameters must be positive. Using a standard Metropolis proposal distribution for these parameters, such as a normal distribution centered on the current value, leads to severe inefficiency in the algorithm, since proposed values outside of a parameter's support must be rejected, which reduces the Metropolis acceptance rate below its optimum. To remedy this, we log-transform the parameters to eliminate the constraints on their support. This allows us to efficiently implement a normal proposal distribution (on the logarithmic scale).
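The log-scale random-walk update can be sketched as follows for a single positive parameter. Note the Jacobian term: sampling $\varphi = \log\theta$ with a symmetric proposal requires targeting $\log p(\varphi) = \log p(e^{\varphi}) + \varphi$. This is our own minimal illustration, not the eBASCS sampler.

```python
import math
import random

def metropolis_positive(logpost, x0, step=0.5, n_iter=5000, seed=0):
    """Random-walk Metropolis for a positive-valued parameter, run on
    phi = log(theta) so that no proposal ever falls outside the support.

    `logpost` is the log target density of theta (up to a constant); the
    log target of phi includes the Jacobian: logpost(exp(phi)) + phi."""
    rng = random.Random(seed)
    phi = math.log(x0)
    lp = logpost(math.exp(phi)) + phi
    draws = []
    for _ in range(n_iter):
        prop = phi + rng.gauss(0.0, step)
        lp_prop = logpost(math.exp(prop)) + prop
        if math.log(rng.random()) < lp_prop - lp:
            phi, lp = prop, lp_prop
        draws.append(math.exp(phi))
    return draws
```

Run against an Exponential(1) target (`logpost = lambda th: -th`), every draw is strictly positive and the sample mean settles near the target mean of 1.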

SIMULATION STUDIES
In this section, we evaluate the benefit of incorporating temporal data, by comparing the performance of eBASCS with the BASCS and spatial algorithms on simulated data sets. Our simulations are parametrized in terms of a range of background intensities, spatial source separations, and relative source intensities. They are organized into two simulation studies that differ in terms of the strength of the background. Simulation I, described in Section 4.2, constructs a challenging scenario where one source is weak compared to the background contamination. Simulation II, described in Section 4.3, considers a more realistic situation where the background is weaker than the sources, in order to mimic the level of background encountered with modern high-resolution X-ray telescopes like Chandra. Section 4.1 sets up the general simulation design, which applies to both Simulations I and II.

Simulation Design
Our simulation design involves two astronomical objects, one brighter than the other, emitting photons in an image of 10 by 10 spatial units. The first simulation setting is the distance between the two sources, denoted $d$, and we consider the set of values $d \in \{0.5, 1, 1.5, 2\}$. In each simulation, the numbers of events originating from the background and sources are drawn from Poisson distributions with respective means $\Lambda_0$, $\Lambda_1$, and $\Lambda_2$. The brighter source has $\Lambda_1 = 2000$, while the fainter source's intensity is defined as $\Lambda_2 = \Lambda_1 / r$, where $r$ denotes the relative intensity of the two sources and is the second simulation setting. We consider the set of values $r \in \{1, 2, 5, 10, 50\}$. The strength of the simulated background $\Lambda_0$ differs between Simulations I and II; see Sections 4.2 and 4.3 for details.
[Figure caption: Average proportion of events originating from the faint source that were correctly classified by the eBASCS (green), BASCS (red), and spatial (blue) algorithms, for simulation settings $d = 1.5$, $r \in \{1, 2, 5, 10, 50\}$, averaged over the replicate data sets. This plot is replicated for the other simulation studies in Appendix C.]
We also generate spectral data for the source and background events. Since the focus of Simulations I and II is to investigate how much we can improve the fitted parameters and event allocations by incorporating temporal data, we use a common spectrum for both sources (rather than different spectra between the sources or between the time bins). Specifically, the two sources have a common spectrum generated from a Gamma distribution with shape parameter $\alpha = 3.18$ and mean $\beta = 1832$; these particular parameter values follow Jones et al. (2015). The background spectrum is generated from a Uniform distribution.
To simulate the temporal data (in the form of arrival times for the events), a 60 ks observation period is defined and split into four equally sized time bins. Events are allocated to each time bin according to a Multinomial distribution with parameters $(\lambda_1, \lambda_2, \lambda_3, \lambda_4) = (0.05, 0.15, 0.3, 0.5)$ for the bright source, and $(\lambda_1, \lambda_2, \lambda_3, \lambda_4) = (0.5, 0.3, 0.15, 0.05)$ for the faint source. This means that the bright source brightens over time, while the faint source dims over the observation period (see right panel of Figure 3). Background events are spread uniformly among the time bins. The number of parameters to be fitted by eBASCS grows linearly with the number of time bins. Simulating data and fitting the model with four time bins is a reasonable compromise that generates enough discriminatory temporal information while moderating the required computational complexity.
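The count-generation step of this design can be sketched as follows. This is our own illustration, using a normal approximation to the Poisson for brevity and an assumed background mean; it mirrors the structure of the design rather than reproducing the paper's exact simulation code.

```python
import math
import random

def simulate_dataset(seed=0, mean_bright=2000, ratio=5, mean_bkg=100,
                     probs_bright=(0.05, 0.15, 0.3, 0.5),
                     probs_faint=(0.5, 0.3, 0.15, 0.05)):
    """Simulate per-source, per-time-bin event counts: Poisson totals
    (normal approximation) split across four bins by a Multinomial."""
    rng = random.Random(seed)

    def poisson_approx(lam):
        # adequate for the moderately large means used here
        return max(0, round(rng.gauss(lam, math.sqrt(lam))))

    def multinomial(n, probs):
        counts = [0] * len(probs)
        for _ in range(n):
            u, acc = rng.random(), 0.0
            for b, p in enumerate(probs):
                acc += p
                if u <= acc or b == len(probs) - 1:  # last bin catches rounding
                    counts[b] += 1
                    break
        return counts

    n1 = poisson_approx(mean_bright)
    n2 = poisson_approx(mean_bright / ratio)
    n0 = poisson_approx(mean_bkg)
    return {"bright": multinomial(n1, probs_bright),
            "faint": multinomial(n2, probs_faint),
            "background": multinomial(n0, (0.25, 0.25, 0.25, 0.25))}
```

The opposed temporal profiles are what give eBASCS its leverage: an event arriving late in the observation is a priori far more likely to belong to the bright source.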
In Simulation I, replicate data sets were generated under 20 different settings, crossing r ∈ {1, 2, 5, 10, 50} and d ∈ {0.5, 1, 1.5, 2}. In Simulation II, 5 different settings were considered, crossing r ∈ {1, 2, 5, 10, 50} with d = 1. Evaluating the performance of eBASCS in this way checks its consistency with the expected behaviour of a disentangling model in the physical environments represented by the data sets. In particular, the two sources are expected to become increasingly distinguishable as their spatial separation, d, grows. Similarly, as r gets large the model is expected to easily detect the brighter source, while detection of the dimmer source may remain difficult. Fifty replicate data sets were generated for each of the 20 settings in Simulation I, and each of eBASCS, BASCS and spatial was run on each replicate. For Simulation II, fifty replicate data sets were generated for each of the 5 settings, and each of eBASCS, space+time, BASCS and spatial was run on each replicate.

Simulation I: High Background
In Simulation I, the strength of the simulated background λ0 is defined as a function of the faint source region and intensity. Specifically, we set λ0 so that the expected numbers of background and faint source counts are equal in the faint source region (defined as the area where the PSF is greater than 3% of its maximum). Mathematically, we let p be the probability that an event from the faint source falls within this source region and choose λ0 so that the expected background count in the region equals p λ2. This results in a very intense background that strongly overwhelms the fainter source, allowing us to investigate to what extent eBASCS outperforms BASCS and spatial in an extremely noisy environment.
At each MCMC iteration, updated allocations of the events to sources or background are drawn from a Multinomial distribution with probabilities given by the posterior distribution of the latent variable s, computed with the latest sampled parameter values. From these allocations, two metrics are computed to measure the classification performance of eBASCS. Specifically, we report:
Allocation Recovery: the proportion of events originating from a source that were indeed allocated to that source.
Allocation Accuracy: the proportion of events allocated to a source that actually originate from that source.
All proportions are averaged over both MCMC iterations and replicate data sets. Averaging over the replicate data sets reduces sampling variability, which can be substantial in low-count simulation settings (between 2000 and 4000 counts when r ∈ {50, 10, 5}). Relying on a single replicate data set might not accurately reflect the relative performance of the methods. Tables 2 and 3 report the two metrics, respectively, averaged over the 20 simulation settings. Appendices D1 and D2 report complete results for each simulation setting separately.
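The two metrics can be computed directly from the true and sampled allocations; the label names and the toy vectors below are illustrative:

```python
import numpy as np

def allocation_recovery(true_labels, alloc_labels, component):
    """Proportion of events truly from `component` that were allocated to it."""
    true_labels = np.asarray(true_labels)
    alloc_labels = np.asarray(alloc_labels)
    mask = true_labels == component
    return np.mean(alloc_labels[mask] == component) if mask.any() else np.nan

def allocation_accuracy(true_labels, alloc_labels, component):
    """Proportion of events allocated to `component` that truly came from it."""
    true_labels = np.asarray(true_labels)
    alloc_labels = np.asarray(alloc_labels)
    mask = alloc_labels == component
    return np.mean(true_labels[mask] == component) if mask.any() else np.nan

# Toy example: five events with known origins and one sampled allocation.
true_ = ["bright", "bright", "faint", "bg", "faint"]
alloc = ["bright", "faint", "faint", "bg", "bright"]
```

In practice these would be averaged over MCMC iterations and replicate data sets, as described above.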
As expected, eBASCS on average performs substantially better at disentangling overlapping sources than either BASCS or spatial. This provides strong evidence of the ability of the proposed method to leverage the temporal information to extract discriminatory features. Tables 2 and 3 show that all models are less able to properly allocate events to the faint source than to the bright source. This is a direct consequence of the simulation design. Under the simulation settings where r is large (r ≥ 5), the faint source is completely overwhelmed by the strong background. For example, with a relative intensity of r = 5, the expected numbers of faint source and background events are λ2 = 400 and λ0 ≈ 1920, respectively, while the expected number of bright source events is λ1 = 2000. Figure 4 shows that classification performance for the faint source improves as its relative intensity grows (i.e., as r decreases). In the simulation setting with r = 1 and d = 0.5, eBASCS had an allocation recovery of 57.1% of the bright source events and 56.4% of the faint source events (see Table D1 in Appendix D1). That is, in a setting where the sources are extremely close, have the same spectrum and relative intensity, and are immersed in an exceedingly strong background, eBASCS still correctly classifies more than half of the events from the sources. This is a substantial improvement over BASCS, which had an allocation recovery of only 42.8% of bright source events and 42.2% of faint source events in the same simulation setting (see Tables D2 and D7 in Appendix D1).
The biggest improvement of eBASCS over the other algorithms is its ability to distinguish faint sources from background, particularly when the relative intensities of the bright and faint sources are extreme. In such cases, BASCS and spatial sometimes mistake the faint source for a random cluster of events from either the brighter source or the background. In the middle panel of Figure C1 in Appendix C, for the simulation setting r = 10 the allocation recovery of faint source events by BASCS and spatial is close to 0, which indicates that these algorithms cannot separate the faint source from the bright source or the background. eBASCS, however, better distinguishes the faint source and has an allocation recovery of around 20%. Incorporating the temporal data allows eBASCS to partially solve this problem, and to provide more precise and confident event allocations.
In addition to probabilistically attributing events to sources, eBASCS provides estimates and error bars for the model parameters. Figures E1 and E2 (in Appendix E) illustrate the statistical properties of the estimates by plotting the posterior means of the two source locations for all 50 replicate data sets under each of the 20 simulation settings. Crosses indicate the true source locations. The variance among replicate data sets of the posterior mean of the bright source's location increases as r decreases, i.e., as the faint source becomes relatively brighter and the background intensifies compared to both sources. (Recall that the background intensity is tied to the faint source intensity.) The location of the faint source, however, is estimated more accurately as it grows brighter (i.e., as r decreases), particularly once it is bright enough for the algorithms to detect an energy signature distinguishable from the background spectrum.
The eBASCS algorithm yields fits that are at least as accurate as BASCS. Even when the sources are closely located (i.e., d = 0.5), the eBASCS-fitted posterior means of the source locations concentrate more closely around the true locations than those provided by BASCS. With d = 0.5, the separation between sources and background relies heavily on the energy model, and the spatial algorithm does not perform as well as either BASCS or eBASCS.
The specific simulation setting with r = 10 and d = 1 is illustrated in Figure 5 and shows that eBASCS is not only able to locate the sources more accurately, as its posterior means cluster more closely around the true locations, but also more confidently, since the eBASCS standard deviations (circling the posterior means in Figure 5) are much smaller than those of BASCS and spatial. This also holds for the other model parameters; see Appendix F for a full comparison of the parameter estimates.
To investigate the number of counts eBASCS requires to produce meaningful results, we repeated the simulation setting illustrated in Figure 5 (r = 10, d = 1) but with λ1 ∈ {800, 600, 400}; recall that λ2 = λ1/r.

[Figure 5. The points and ellipses represent the posterior means and standard deviations for each of the ten replicates; those for the bright source are plotted in blue and those for the faint source in red. Crosses indicate the true locations of the sources. eBASCS is able to locate the faint source much more consistently (the posterior means of the location are closer to the true value) and confidently (the posterior standard deviations are much smaller) than the other methods.]

[Figure 6. Sky pixel locations of photon events disputed by the spatial and space+time algorithms for UV Cet. Events coloured in black were allocated to the same source by both algorithms. Events coloured in orange were allocated to UV Cet B by the spatial algorithm and to UV Cet A by the space+time algorithm. Events coloured in green are the opposite, i.e., allocated to UV Cet A by spatial but to UV Cet B by space+time. Disputed background events are marked as magenta symbols ("+" denoting those reallocated to a source, and "*" those reallocated to the background by space+time).]

Simulation II: Low Background
The background intensity in Simulation I is by design tied to the intensity of the faint source. The background in modern high-resolution X-ray telescopes such as Chandra, however, is typically weaker than this. Simulation II assesses how eBASCS performs in a more realistic noise environment: it considers scenarios where the total background count is drawn from a Poisson distribution with mean λ0 = 100.
The allocation recovery and allocation accuracy for each algorithm and setting in Simulation II are reported in Appendix G. Tables 4 and 5 average these metrics across relative source intensities, r, to summarize the overall performance of the algorithms in Simulation II. The fact that the space+time algorithm outperforms BASCS for both sources is not unexpected. Because both sources are simulated with the same spectrum, the spectral data do not help distinguish between the two sources (though they do help to separate out background events). Temporal data, however, do help to distinguish the two sources. Since background counts are low in Simulation II, the added benefit of spectral data is small compared to that of temporal data, as illustrated by the almost imperceptible improvement of BASCS over the spatial algorithm.

Data and Models
UV Ceti (Gliese 65) is an M dwarf hierarchical binary system. Both main components of the binary, UV Cet A and UV Cet B, are flare stars that undergo unpredictable and dramatic changes in their brightness over short timescales. The UV Cet system was observed with Chandra on 2001 Nov 26 (ObsID 1880) with the LETGS+HRC-S configuration. The spatial resolution of Chandra is sufficient to visually distinguish the two components, which are separated by a distance of 1.4 arcsec (UV Cet B is itself a binary (Benz et al. 1998), but with a separation of ∼1 mas it is not resolvable by Chandra). These data have been previously analyzed to separate the two components (see Audard et al. 2003) using small extraction radii to limit the contamination of one source by the other. However, UV Cet B undergoes a large flare during the observation, and its effect is also visible in the light curve of UV Cet A (Audard et al. 2003, see Fig. 2). This source is thus a natural test case for the application of the eBASCS algorithm. Since the HRC-S has no spectral discrimination, applying BASCS to the 0th-order data to separate the events from the two sources will not yield any improvement over the spatial algorithm. We demonstrate below that the space+time algorithm does lead to a significantly better allocation of events. We selected the time bins for UV Cet using the procedure detailed in Section 2.6. We chose 6 time bins, which renders a temporal model flexible enough to capture the source flare at ≈54 ks after the start of the observation (see Figure 1) while maintaining a reasonable number of model parameters to be fit with the space+time algorithm (25 parameters, i.e., 4 location, 3 relative intensity and 18 time intensity parameters).

Results
To measure the improvement of the space+time model over the spatial model, we carry out a disputed event analysis. This consists of comparing the allocations of the events under the two algorithms and studying the characteristics of disputed events to highlight the benefits of incorporating temporal information. For each event i, the disentangling models output the posterior distribution of s_i, the latent variable that encodes its origin. For each model, we allocate event i to the source it has the highest probability of originating from, i.e., event i is allocated to the bright source if P(s_i = bright) = max{P(s_i = bright), P(s_i = faint), P(s_i = background)}. Figure 6 shows that, as expected, most disputed events are located in the zone where the source wings overlap, or in other words where events are roughly equidistant from both cores. In this case, the space+time algorithm performs a more careful analysis of events in the overlap between the sources, i.e., in locations where it is most difficult to separate the sources based on spatial data alone.
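The maximum-probability allocation rule and the disputed-event comparison can be sketched as follows; the posterior probabilities below are illustrative, not fitted values:

```python
import numpy as np

# Posterior membership probabilities P(s_i = component) for a few events
# (columns: bright, faint, background); the numbers are made up for
# illustration.
COMPONENTS = ["bright", "faint", "background"]
post = np.array([
    [0.70, 0.20, 0.10],
    [0.30, 0.60, 0.10],
    [0.25, 0.25, 0.50],
])

def map_allocation(posterior, components=COMPONENTS):
    """Assign each event to the component it most probably originated from."""
    return [components[k] for k in np.argmax(posterior, axis=1)]

def disputed(alloc_a, alloc_b):
    """Indices of events on which two algorithms disagree."""
    return [i for i, (a, b) in enumerate(zip(alloc_a, alloc_b)) if a != b]

alloc = map_allocation(post)
```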
The recorded locations of the disputed events allocated to UV Cet A by spatial, circling the source's core, indicate that these events were assigned to UV Cet B by space+time on the basis of temporal information. Indeed, as illustrated in the right panel of Figure 7, these events were all detected at a time coincident with an observed flare attributed to UV Cet B. The right panel shows that this contamination of UV Cet A by the flare of UV Cet B is completely removed by eBASCS, as the light curve of UV Cet A (as allocated by eBASCS) does not exhibit a spike in intensity at the time of the flare. Figure 7 also shows that the contamination of UV Cet B by UV Cet A at early times (between 0 ks and approximately 15 ks), i.e., when UV Cet A has higher intensity, is removed by eBASCS. Table 6 reports the disagreement matrix (i.e., the number of events allocated to each source by both algorithms). Adding temporal data changed the allocation of 331 (out of 12,660) events, 149 of which correspond to the contamination of UV Cet A by UV Cet B's flare, and 107 of which correspond to the contamination of UV Cet B by UV Cet A in the early stages of the observation period. Removing these contaminations would not have been possible without modelling the temporal information, and clearly shows the improvement of space+time over the spatial algorithm.

Data and Models
HBC 515 A is a component of the well-separated weak-lined T Tauri multi-component system HBC 515 (Reipurth et al. 2010). HBC 515 A is itself a binary, composed of two variable stars, HBC 515 Aa and HBC 515 Ab, separated by a distance of approximately 0.5 arcsec. The system was observed with Chandra/ACIS-S on 2011 January 8 (ObsID 12383). The overlap of the PSFs of the two components is significantly larger than for UV Cet, and separating them through non-overlapping extraction regions would lead to a significant loss in the number of counts available for follow-up analyses. Such an analysis was performed by Principe et al. (2017), who used a complex set of regions covering the cores and the diametrically-opposed crescent-shaped wings to extract the events and conduct spectral and temporal analyses. They found that the spectra accumulated over the duration of the observation were similar, and that there was no evidence for temporal variability. In the following, we demonstrate that even in this challenging dataset, eBASCS is able to recover small temporal changes in both the intensities and spectra of the two components. Not only do the data show that the spectra of both stars vary stochastically over timescales of a few ks, but also that at different times different components are observed to be spectrally harder.
The HBC 515 A dataset comprises 14,601 events recorded over 28.76 ks. We first extract the events over a range of sky pixels large enough to include both components of the binary, but excluding other components of the system (see Figure 8). Since the ACIS detectors do allow for event energy discrimination, we apply the eBASCS algorithm to the HBC 515 A data, allowing us to characterize the time variability of both HBC 515 Aa and HBC 515 Ab. We use eBASCS with the number of sources fixed at two, and for simplicity adopt the King profile density (Appendix A) as the PSF model. We use six equally-spaced time bins to model temporal variations, as this choice presents a practical trade-off between preserving flexibility in the modelling and limiting the size of the parameter space. We model the counts spectra in three different ways: first, using the single-Gamma distribution (Equation 6) for each source; second, using the two-component mixture of Gamma distributions (Equation 7) for each source; and third, using a two-component mixture at each time bin for each source (Equation 11). As discussed by Jones et al. (2015), a crude spectral model that roughly tracks the shape of the astrophysical model spectrum weighted by the effective area has sufficient power to distinguish spectral variations between the sources. We use a uniform spectral model for the background.
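The shape/mean parameterisation of the spectral model can be sketched as follows; this is a generic two-component Gamma mixture in the spirit of Equation 7, not the paper's exact likelihood code:

```python
import math

def gamma_pdf(e, shape, mean):
    """Gamma density parameterised by shape and mean (so scale = mean / shape),
    matching the shape/mean convention used in the text."""
    scale = mean / shape
    return (e ** (shape - 1.0) * math.exp(-e / scale)
            / (math.gamma(shape) * scale ** shape))

def gamma_mixture_pdf(e, weight, shape1, mean1, shape2, mean2):
    """Two-component Gamma mixture for a source's counts spectrum."""
    return (weight * gamma_pdf(e, shape1, mean1)
            + (1.0 - weight) * gamma_pdf(e, shape2, mean2))
```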

Results
The two-component mixture of Gamma distributions (Equation 7) fits the observed counts spectra better than the single-Gamma distribution (see Figure 9). Unfortunately, the very few counts per source in each time bin lead to large uncertainties in the parameter estimates under the spectral model that incorporates a two-component mixture at each time bin.

[Figure 7 (caption fragment): … There is moderate contamination from UV Cet B's flare (at around 53 ks) on the background. This contamination is less severe than with the spatial algorithm; space+time allocated 84 events to the background at the time of the UV Cet B flare, while spatial allocated 106 events to the background. (Such contamination is also a consequence of the approximate nature of our PSF model.) Right Panel: Arrival times of events disputed by the spatial and space+time algorithms. The orange bars indicate events that are moved from UV Cet B to UV Cet A, and green bars indicate events that are moved from UV Cet A to UV Cet B by the space+time model. Events allocated to the same source by both algorithms are not included in the plot. Notice that space+time is successful in identifying the contamination in UV Cet A due to the large flare of UV Cet B and allocating those events to UV Cet B.]

[Figure 8 (caption fragment): … Ab computed with BASCS plotted as red and blue crosses, respectively. Middle Panel: Histogram of spectral data with source spectra superimposed (using the posterior mean of the spectral parameters fitted with BASCS). Right Panel: Light curves for the two separated components of HBC 515 A obtained from allocations made by eBASCS, superimposed with the eBASCS-fitted light curves (grey). The red and blue curves denote the average of the allocated light curves over 500 iterations of eBASCS, the dark grey horizontal lines denote the posterior mean of the temporal parameters fitted by eBASCS, and the light grey regions denote the intervals between the 16% and 84% posterior quantiles (see Table 7).]
Moreover, the spectra of the sources do not appear to be variable enough to justify the complexity of estimating different spectral parameters for each source at each time bin. Thus, we adopt the two-component mixtures for each source, but do not allow them to vary between time bins. Table H1 in Appendix H shows that parameter values fitted by BASCS are recovered by eBASCS with similar precision. This demonstrates the consistency and well-definedness of eBASCS. Both models return a posterior mean of the relative intensity of the background of ≈0; this is a consequence of the cropping of the image described in Section 6.1.
Our estimates of the two-component Gamma mixture model parameters indicate that HBC 515 Aa and HBC 515 Ab have very similar spectra (see middle panel of Figure 8), as Principe et al. (2017) suggested.

[Table 7. Temporal parameters of HBC 515 A fitted with eBASCS. w_{j,k} denotes the relative intensity of source j in time bin k. Here j = 1 corresponds to HBC 515 Aa, j = 2 to HBC 515 Ab and j = 3 to the background. The first column gives the posterior mean of the corresponding parameters, and columns "q16" and "q84" respectively denote their 16% and 84% posterior quantiles.]
eBASCS is able to recover temporal changes in the intensities of the two components (see Table 7 and the right panel of Figure 8). The estimated temporal parameters shown in Table 7 reveal statistical evidence for a difference in the light curves of HBC 515 Aa and HBC 515 Ab. Indeed, the error bars (computed using the 16% and 84% posterior quantiles) of the eBASCS-fitted temporal parameters w_{1,k} and w_{2,k} do not significantly overlap in five of the six time bins. (Only the error bars for w_{1,4} and w_{2,4} show a substantial overlap.) The right panel of Figure 8 illustrates the difference in the separated light curves; HBC 515 Aa is stable for the first 15 ks and then starts dimming, whereas HBC 515 Ab has a U-shaped light curve.
To further investigate spectral differences between the sources, we analyse variations in their hardness ratios, shown in Figure 10. First, we sampled 500 allocations of the recorded events to HBC 515 Aa and HBC 515 Ab from the posterior distribution of s (i.e., the latent variables encoding the origins of the observed events, see Section 2.2) under eBASCS. Then, for each allocation, we computed the spectral hardness of the separated sources in the Soft (S: 0.3-0.9 keV), Medium (M: 0.9-2 keV), and Hard (H: 2-8 keV) bands, in each of 30 time intervals of length 1 ks. This yields, for each separated source at each time interval, the posterior distribution of spectral hardness shown in the top (log M/S) and bottom (log H/M) panels of Figure 10. Figure 10 shows that both sources exhibit variations in their spectra over the observation period. eBASCS is able to identify time scales over which HBC 515 Aa and HBC 515 Ab exhibit differences in their hardness ratios; see the caption of Figure 10 for details.
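A simple counts-based hardness-ratio computation over the quoted bands might look as follows (the exact estimator used for Figure 10 may differ):

```python
import numpy as np

def hardness_ratios(energies):
    """Return (log10(M/S), log10(H/M)) for a set of event energies in keV,
    using the band definitions from the text: Soft 0.3-0.9, Medium 0.9-2,
    Hard 2-8 keV.  Returns NaNs when a band is empty."""
    e = np.asarray(energies, dtype=float)
    S = np.sum((e >= 0.3) & (e < 0.9))
    M = np.sum((e >= 0.9) & (e < 2.0))
    H = np.sum((e >= 2.0) & (e < 8.0))
    if min(S, M, H) == 0:
        return (np.nan, np.nan)
    return (np.log10(M / S), np.log10(H / M))
```

Applying this to each of the 500 sampled allocations, in each 1 ks interval, yields the posterior distributions of hardness plotted in Figure 10.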

SUMMARY
We have presented eBASCS, an extension to the BASCS method developed by Jones et al. (2015) to leverage temporal variability signatures in high-energy astronomical sources with overlapping point spread functions to perform a better separation of the photon events. The method integrates the temporal information into the disentangling algorithm via a flexible model which allows us to extract discriminatory features from the observed data. The assumption of independence of the brightness across time bins allows the model to flexibly capture temporal variability.
Several enhancements to eBASCS are in progress. We plan to: enhance the scalability of the method, while maintaining its current flexibility, by modelling the temporal information with simple continuous-time processes; incorporate instrument sensitivity and model the spectra using physically meaningful models for the source spectra; explore extensions of our spectral modelling to grating data (e.g., to separate photons in overlapping lines in the Chandra LETGS+HRC-S UV Cet observation); apply our methodology to astronomical systems that exhibit higher contrast in the relative intensities of their components (e.g., weak jets of X-ray bright quasars); explore observations from instruments with lower spatial resolution (such as NuSTAR) to investigate whether eBASCS is able to separate spatially unresolved sources on the basis of their spectral and temporal variations; and, finally, extend the method to allow the number of sources in the model to be estimated, by carrying out model comparisons for different assumed numbers of sources (e.g., using AIC (Akaike 1974) or BIC (Schwarz 1978)) as well as by using a more sophisticated Reversible Jump MCMC method (Green 1995; Jones et al. 2015).
Simulation studies show that eBASCS achieves more accurate separation of photons from overlapping sources than either BASCS or the baseline spatial method. In particular, the proposed method further removes the contamination at the sources' cores and produces a better disambiguation of the event allocation. eBASCS retains the advantages of performing inference under the Bayesian paradigm (i.e., uncertainty quantification, joint parameter inference, and probabilistic assignment of events) from its predecessor.

[Figure 10 (caption fragment): … Table I2) are shown as grey bands. Overall the panels illustrate several instances of spectral changes, e.g., HBC 515 Aa is spectrally harder at the beginning of the observation, and is spectrally softer near the 25 ks mark; HBC 515 Ab shows increasing spectral hardening as it recovers from its minimum brightness.]
The probabilistic allocation of events to sources can be incorporated in detailed follow-up spectral and temporal analyses. In particular, the allocation uncertainty can be accounted for by repeatedly sampling event allocations from the posterior distribution of s, conducting the follow-up analysis according to each sampled allocation, and finally combining the results from the individual analyses.
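This multiple-sampling scheme can be sketched as follows; the follow-up statistic and the posterior matrix are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_allocation(posterior, rng=rng):
    """Draw one allocation per event from its posterior membership
    probabilities (rows of `posterior` sum to one)."""
    n, k = posterior.shape
    return np.array([rng.choice(k, p=posterior[i]) for i in range(n)])

def followup_over_allocations(posterior, statistic, n_draws=100, rng=rng):
    """Repeat a follow-up analysis over sampled allocations and pool the
    results, propagating the allocation uncertainty into the estimate."""
    values = [statistic(sample_allocation(posterior, rng)) for _ in range(n_draws)]
    return np.mean(values), np.std(values)

# Illustrative posterior over two components for four events.
post = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.5, 0.5]])
# Example follow-up statistic: number of events attributed to component 0.
mean_n0, sd_n0 = followup_over_allocations(post, lambda a: np.sum(a == 0))
```

The spread of `sd_n0` reflects the allocation uncertainty that a single hard assignment would hide.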
Our application of eBASCS to the datasets UV Cet and HBC 515 A shows that our proposed model performs a more careful separation of the observed sources than other methods. The space+time model almost eliminates the contamination of the flare of UV Cet B on UV Cet A. Applying eBASCS to the HBC 515 A data allows us to recover temporal changes in the intensities of HBC 515 Aa and HBC 515 Ab and to identify time intervals where the hardness ratios of the two components appear to differ.
Based on the simulation results reported in Section 4 and the general statistical principle that including more data or information in a model yields more reliable estimates, we expect the overall performance of eBASCS to exceed that of the competing algorithms. The improvement is expected to be even more significant for systems with more substantial spatial overlap and distinct light curves among the sources.

APPENDIX A: KING PROFILE DENSITY
The Point Spread Function used to generate the data in the simulation studies and to fit the models (both in the simulations and in the dataset applications) is the 2D King profile, as in Jones et al. (2015), whose functional form is given by

k(r) = C [1 + (r/r0)^2]^(-s),

where r denotes the (ellipticity-corrected) distance from the source centre. The constant C is determined numerically. The parameters are chosen to be: off-axis angle θ = 0 arcmin, core radius r0 = 0.6 arcsec, power-law slope s = 1.5, and ellipticity e = 0.00574.
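Assuming the circular King form above (the tiny ellipticity, e = 0.00574, is ignored here for simplicity), the normalising constant can be determined numerically as stated, e.g. by radial quadrature over the plane:

```python
import math

# King profile parameters quoted in the text.
R0 = 0.6   # core radius (arcsec)
S = 1.5    # power-law slope

def king_unnorm(r, r0=R0, s=S):
    """Unnormalised 2D King profile as a function of radius."""
    return (1.0 + (r / r0) ** 2) ** (-s)

def king_norm_const(r0=R0, s=S, r_max=50.0, n=100000):
    """Numerically determine the constant C such that
    C * 2*pi * integral_0^r_max r * k(r) dr = 1 (truncated at r_max)."""
    dr = r_max / n
    integral = sum(2.0 * math.pi * (i * dr) * king_unnorm(i * dr, r0, s) * dr
                   for i in range(1, n + 1))
    return 1.0 / integral
```

For s = 1.5 the untruncated integral has the closed form 2*pi*r0^2, which provides a convenient sanity check on the quadrature.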

APPENDIX B: TIME-BIN SELECTION ALGORITHM
The following time-bin selection algorithm isolates transient variations and flares in the observed system light curve. The algorithm first bins the temporal data evenly into m ≈ 50 thin bins, then selects breakpoints between which the largest variations in brightness occur, and finally applies a small deviation to the selected breakpoints to avoid, for instance, splitting the data too close to a flare. The parameters (to be chosen by the user) for this algorithm are:
• m, the number of thin bins in the initial binning of the temporal data,
• b, the number of breakpoints to select,
• δ, the deviation applied to the selected breakpoints.
The algorithm is designed as follows:
• Thinly bin the observation period [0, T] into m bins.
• Count the number of observations in each time bin, and denote by n = (n1, n2, ..., nm) the vector containing the counts in each bin.
• If two breakpoints occur at consecutive bins, only keep the one corresponding to the higher value of Δn. If three breakpoints occur at consecutive bins, delete the middle one. This might cause the final number of breakpoints to be fewer than b, hence running the algorithm with different values of b is recommended to produce a satisfactory selection.
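The steps above can be sketched as follows; the tie-breaking rule (keep the first of a consecutive run rather than the larger Δn) and the uniform shift by `delta` thin bins are simplifying assumptions where the text leaves details open:

```python
import numpy as np

def select_time_bins(times, t_end, m=50, b=5, delta=1):
    """Sketch of the time-bin selection heuristic: thinly bin the light
    curve, pick the b interior edges with the largest count changes |dn|,
    thin out consecutive breakpoints, then shift by `delta` thin bins."""
    edges = np.linspace(0.0, t_end, m + 1)
    counts, _ = np.histogram(times, bins=edges)   # n = (n_1, ..., n_m)
    jumps = np.abs(np.diff(counts))               # |dn| across interior edges
    # Interior edge indices with the b largest brightness changes.
    idx = np.sort(np.argsort(jumps)[-b:]) + 1
    # Drop breakpoints at consecutive bins (simplified: keep the first).
    kept = [idx[0]]
    for i in idx[1:]:
        if i - kept[-1] > 1:
            kept.append(i)
    # Apply the small deviation and convert back to times.
    shifted = np.clip(np.array(kept) + delta, 1, m - 1)
    return np.concatenate([[0.0], edges[shifted], [t_end]])
```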

APPENDIX C: SIMULATION I: GRAPHICAL COMPARISON OF PROPORTION OF CORRECTLY ALLOCATED EVENTS ACCORDING TO SIMULATION SETTINGS.
Figures C1, C2, C3 and C4 present a graphical summary of the results given in Sections D1 and D2.

[Figure C4. As in Figure C1, for d = 2.]

[Table D8. As in Table D1, for the faint source by spatial.]

D1 Simulation I Allocation Recovery (fraction of events from a component that are correctly allocated to the same component)
Tables D1, D3 and D5 give the allocation recovery of eBASCS for the bright source, faint source and background, respectively. Tables D2, D4 and D6 give the allocation recovery of BASCS for the bright source, faint source and background, respectively. Tables D7, D8 and D9 give the allocation recovery of spatial for the bright source, faint source and background, respectively.

[Table D17. As in Table D10, for the faint source by spatial.]

D2 Simulation I Allocation Accuracy (fraction of events correctly allocated to the component)
Tables D10, D12 and D14 give the allocation accuracy of eBASCS for the bright source, faint source and background, respectively. Tables D11, D13 and D15 give the allocation accuracy of BASCS for the bright source, faint source and background, respectively. Tables D16, D17 and D18 give the allocation accuracy of spatial for the bright source, faint source and background, respectively.

Figures E1 and E2 show the true locations of the sources and their mean posterior locations under eBASCS (top part) and BASCS (bottom part) for all data set replicates under the simulation parameter setting indicated in the top left corner of each plot (for Simulation I). Each dot represents the mean posterior location of a source (blue for the bright source, red for the faint source) for one data set replicate, and the large "X"s of corresponding colour indicate the true locations.

[Table F1. eBASCS (left), BASCS (middle) and spatial (right) parameter estimates, resulting from an application of the models to a simulation dataset with settings d = 0.5, r = 1. Parameters x_j and y_j denote the spatial coordinates of the sources and w_j their relative intensities (j = 1 corresponds to the bright source, j = 2 to the faint source and j = 3 to the background); the mean and shape parameters of the Gamma distributions are also reported, and w_{j,k} denotes the relative time intensity of source j in time bin k. The "truth" column gives the parameter values used to generate the data. The "mean" column gives the posterior mean of the corresponding parameters, and the "(q16,q84)" column reports the 16% and 84% posterior quantiles.]

Table F1 shows the true and estimated parameter values for a single dataset replicate under the simulation settings d = 0.5, r = 1 in Simulation I. The posterior means of the fitted location and intensity parameters are closer to their respective true values when inferred by eBASCS than by BASCS or spatial. The posterior quantile intervals are also narrower for eBASCS.
This shows that eBASCS is able to estimate the model parameters it shares with BASCS and spatial more accurately and more confidently.

[Table G4. As in Table G1, for space+time.]

G1 Simulation II: Allocation Recovery
Tables G1, G2, G3 and G4 give the allocation recoveries of eBASCS, BASCS, spatial and space+time (respectively) in Simulation II.

[Table G8. As in Table G5, for space+time.]

G2 Simulation II: Allocation Accuracy
Tables G5, G6, G7 and G8 give the allocation accuracies of eBASCS, BASCS, spatial and space+time (respectively) in Simulation II.

[Table I1. UV Cet fitted parameters under the spatial model. x_j and y_j denote the spatial coordinates of source j, and w_j denotes its relative intensity. j = 1 corresponds to UV Cet B, j = 2 to UV Cet A and j = 3 to the background. The "mean" column gives the posterior means of the corresponding parameters, and the "(q16,q84)" column gives the 16% and 84% posterior quantiles.]

[Table I2. UV Cet fitted parameters under the space+time model. x_j and y_j denote the spatial coordinates of source j, and w_j denotes its relative intensity; w_{j,k} denotes the relative intensity of source j in time bin k. j = 1 corresponds to UV Cet B, j = 2 to UV Cet A and j = 3 to the background. The "mean" column gives the posterior means of the corresponding parameters, and the "(q16,q84)" column gives the 16% and 84% posterior quantiles.]

Table I1 gives estimates for the spatial model parameters when fitted to the UV Cet data. Table I2 gives estimates for the space+time model parameters.