Understanding and predicting cadence effects in the characterization of exoplanet transits

We investigate the effect of observing cadence on the precision of radius ratio values obtained from transit light curves by performing uniform Markov Chain Monte Carlo fits of 46 exoplanets observed by the Transiting Exoplanet Survey Satellite (TESS) in multiple cadences. We find median improvements of almost 50% when comparing fits of 20s and 120s cadence light curves to 1800s cadence light curves, and of 37% when comparing 600s cadence to 1800s cadence. Such improvements in radius precision are important, for example, to precisely constrain the properties of the radius valley or to characterize exoplanet atmospheres. We also implement a numerical Information Analysis to predict the precision of parameter estimates for different observing cadences. We test this analysis on our sample and find that it reliably predicts the effect of shortening the observing cadence, with errors in the predicted % precision of <0.5% for most cases. We apply this method to 157 TESS objects of interest that have only been observed with 1800s cadence to predict the precision improvement that could be obtained by reobservation with shorter cadences, and provide the full table of expected improvements. We report the 10 planet candidates that would benefit the most from reobservation at short cadence. Our implementation of the Information Analysis for the prediction of the precision of exoplanet parameters, Prediction of Exoplanet Precisions using Information in Transit Analysis (PEPITA), is made publicly available.


INTRODUCTION
The transit technique has proven its success, with more than 3500 exoplanets discovered using it. Missions like Kepler (Borucki et al. 2007, 2010; Borucki 2016) and the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015) have provided the exoplanet community not only with a large number of newly discovered exoplanets, but also with the ability to characterize them with an ever-increasing level of detail as we better understand how to extract the information contained in their transits.
The transit of an exoplanet allows us to measure the radius ratio between the planet and the star, R_p/R_* (see e.g. Seager & Mallén-Ornelas 2003), from which, knowing the radius of the star, one can determine the radius of the planet. There is a particular interest in obtaining precise measurements of this quantity, since knowledge about an individual exoplanet or an exoplanet population can be derived from such measurements. For example, using data from the Kepler mission, a drop in the number of exoplanets with radii in between Earth's and Jupiter's was discovered (Fulton et al. 2017). This feature is known as the "radius valley" and was already predicted before its discovery by several groups (see e.g. Owen & Wu 2013). However, the availability of precise planetary radius measurements will be essential in the correct characterization (position and depth) of the valley (see e.g. Ho & Van Eylen 2023; Van Eylen et al. 2018; Huber et al. 2022). Planetary radii are also essential in obtaining estimates of planetary densities, which are indicative of the composition of an exoplanet (see e.g. Zeng & Jacobsen 2017). Moreover, extremely precise measurements of transit depths (which are directly related to planetary radii), with precisions of 0.5% in different bands, can be used to obtain transmission spectra of atmospheres and thus infer the atmospheric composition of exoplanets (see e.g. Yang et al. 2022).
★ E-mail: julio.camero.21@ucl.ac.uk
In the past years, concerns have been raised about the possible influence that the choice of cadence in the observation of exoplanet transit light curves may have on the precision of parameters derived from these events (see e.g. Dawson & Johnson 2012; Petigura 2020; Huber et al. 2022; Alexoudi 2022). Even before this, Kipping (2010) described how the use of longer cadences introduces distortions in the morphology of a transit light curve, affecting mostly the ingress and egress of the transit, as shown in Figure 1. However, these deformations of the light curve can be modelled by numerically integrating a light curve to the required cadence (Kipping 2010). Presently, the concern lies in how the use of longer cadences (i.e. integrating the light curves to longer times) represents a loss of information that cannot be recovered even though our models accurately predict the shape of binned light curves (see e.g. Petigura 2020). This loss of information translates into light curves arising from different parameter sets becoming more alike, and thus reduces the precision with which we may extract information about different parameters from

Figure 1. The binning of a light curve produces deformations in its morphology. These are most evident in the shift of the contact points and the lengthening of the ingress and egress, here highlighted with arrows in the zoomed plot. These deformations are understood and can be predicted by models, as evidenced by the light curves shown in this figure, which were generated using PyTransit.
these light curves. Figure 2 illustrates this idea with the transits of two planets with different sets of parameters, both with 20s and 1800s cadence. The bottom plots of the figure show the difference between the light curves produced by each of the planets in each cadence and illustrate how, for the shorter cadence, there is a greater difference between the transits, while for the longer cadence the differences are smaller, which is what is meant by saying that information is lost.
This idea of the information contained in a light curve has been explored in previous works such as Carter et al. (2008) and Price & Rogers (2014), which implement an analytical Fisher Information Analysis (Information Analysis henceforth) for approximate, non-limb-darkened forms of transit light curves. The analysis of Price & Rogers (2014) allows them to predict, for each cadence and particular parameter set, the best precision that can be obtained by fitting each of the model parameters.
In this work, we approach the issue of cadence from two sides. First, we aim to provide an in-depth analysis of its effects by performing fits to a number of TESS confirmed planets that were observed using more than one cadence and then comparing the resulting precisions in the radius ratio to understand what the impact of cadence is. Second, we extend previous analytical implementations of the Information Analysis by developing a numerical implementation that can be applied to the non-approximate and limb-darkened forms of transit light curves and that can be adapted to any fitting model. We compare the predictions of this analysis with the results of our fits to understand whether the Information Analysis may be used as a reliable tool for the prediction of parameter precisions, and compare our method with previous analytical methods.
In Section 2 we lay out the methodology that we follow in order to perform the homogeneous fits of a large number of light curves, as well as the process of candidate selection. We also present our implementation of the Information Analysis for exoplanet transits. Then, in Section 3 we summarize the results obtained by our work, which are discussed in Section 4, and we conclude in Section 5.

Candidate selection
The selection of candidates starts with the full list of TESS confirmed planets obtained from the NASA Exoplanet Archive (NASA Exoplanet Science Institute 2020, downloaded June 2022), henceforth NEA, out of which systems consisting of a single planet orbiting a single star are selected, reducing the original number of 231 systems down to 105. We make this choice for computational simplicity and to reduce the number of light curves that need to be fitted. We do not expect the presence of other planets in the system, or possibly of more than one star, to invalidate the results presented here, although these cases would require an extension of the prediction algorithm presented below to simultaneously model multiple planets and account for third light contamination. However, this is something that should be investigated independently. For each of these systems, we obtain the available light curves using the Lightkurve python package (Lightkurve Collaboration et al. 2018) to search for light curves in the MAST data archive. In order to ensure homogeneity in the treatment applied to the light curves, only those authored by the TESS Science Processing Operations Center (SPOC), which is in charge of receiving raw data, extracting photometry and astrometry for each target, and identifying and removing systematic errors, among other tasks, are used. For a detailed description of the TESS SPOC pipeline, see Jenkins et al. (2016) and the data release notes. Once a list of all the available light curves along with their cadences has been obtained for all the systems, only those for which more than a single cadence is available are selected in this step, reducing the list down to 83 systems. This is done because light curves released with a shorter cadence do not necessarily have longer cadence light curves released by SPOC.
Figure 2. Transits of planets with two different sets of parameters in both short (20s) and long (1800s) cadence. The bottom plots show the differences between the light curves and highlight how the longer cadence light curves have a smaller difference (are more alike) and how, in this way, information that was contained in the shorter cadence light curve has been lost. The differences in radius ratio here would correspond to an ambiguity between the transit produced by a planet with a radius of ∼2.18 R⊕ and a planet with a radius of ∼2.64 R⊕ for a Sun-like star, an increase in radius of 21%. Light curves are generated using PyTransit and the plot is based on the ideas presented by Petigura (2020).

When a system has been observed in multiple cadences, since some regions of the sky were observed in more TESS sectors than others, different cadences for a given system may have a different number of available sectors. That is, out of all the cadences available so far for TESS light curves (i.e. 20s, 120s, 600s and 1800s), a target may have been observed with 1800s in five sectors, with 600s in four and with 120s in just one. Therefore, we fit sectors of a given exposure one at a time in order to allow for a more homogeneous comparison of different cadences, so that any changes in the retrieved parameters can be assumed, at least to the extent that the precautions taken here allow, to arise from cadence effects. Whether this decision may have an influence on our results is explored later on by fitting a small number of systems with the same number of sectors available for several cadences.
However, the decision to fit individual sectors introduces a further constraint in our selection of systems due to the relatively short duration (∼27 days) of a sector. In order to ensure a good fitting of the transits, we require that at least five transits be present in the light curve captured in each sector. This effectively results in a restriction on planetary orbital periods, which need to be shorter than five days. With this final filtering of the list, we are left with a selection of 46 TESS confirmed planets in systems with a single star and a single planet, with more than one cadence available and with periods of less than five days. These 46 systems translate to a total of 556 single-sector light curves to be fitted. A list of all the selected systems, as well as the cadences available for each system, is provided in the GitHub repository.

Light curve processing and fitting
Pre-processing of the light curve starts by applying a Savitzky-Golay filter (Savitzky & Golay 2002) to the light curve in order to remove any trends present in the data (vibration of the instrument or stellar variability, among others). In order to prevent any deformation of the transits by the application of the filter, two precautions are taken: (i) A transit mask is created using the transit duration, period, and transit time values from the NEA table. This mask is used to prevent the filter from incorporating the transits into the calculation of trends and protects the transits from deformations.
(ii) The window length of the filter is chosen to be five times the number of data points in a single transit to further discourage the filtering of trends associated with the transits.
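As a sketch of this masked detrending step (the helper name and synthetic numbers below are ours, not the paper's actual pipeline), the filter can be made to ignore the transits by interpolating over them before the trend is estimated:

```python
import numpy as np
from scipy.signal import savgol_filter

def detrend_with_mask(time, flux, transit_mask, window_length, polyorder=3):
    """Detrend a light curve with a Savitzky-Golay filter while protecting
    the transits: in-transit points are replaced by an interpolation over
    the out-of-transit flux before the trend is estimated."""
    oot = ~transit_mask
    # The filter never sees the transits, so it cannot absorb them as "trend"
    flux_filled = np.interp(time, time[oot], flux[oot])
    if window_length % 2 == 0:  # savgol_filter requires an odd window length
        window_length += 1
    trend = savgol_filter(flux_filled, window_length, polyorder)
    return flux / trend, trend

# Illustrative synthetic light curve: slow sinusoidal trend plus box transits
time = np.linspace(0.0, 10.0, 1000)
trend_true = 1.0 + 0.01 * np.sin(time)
mask = (time % 2.5) < 0.1                      # crude in-transit mask
flux = trend_true * np.where(mask, 0.99, 1.0)  # 1% deep transits
detrended, trend = detrend_with_mask(time, flux, mask, 101)
```

Without the mask, a short-window filter would partially "fit out" each transit and bias the depth; with it, the 1% depth survives detrending.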
Additionally, periodograms are calculated for the filtered light curves using a box least squares procedure, from which values for the period, transit time and depth of the transit (bls_depth) are obtained. The square root of the transit depth is used as a starting guess for the planet radius ratio, but the period and transit time obtained from the box least squares are only used wherever no values for these quantities are available in the NEA table.
The final pre-processing step consists of modelling any remaining distortions in the data by using a Gaussian process, which finds functions that predict trends in the data (for more details about Gaussian processes and their application to exoplanet transits, see Barros et al. 2020), followed by outlier identification after the distortion has been removed. To do this, a Bayesian model is set up using the exoplanet package (Kumar et al. 2019; Foreman-Mackey et al. 2021) and PyMC3 (Salvatier et al. 2016), while the Gaussian process (GP) is implemented using celerite2 (Foreman-Mackey et al. 2017; Foreman-Mackey 2018). The transit is modelled using the parameters {P, t0, log R_p/R_*, b, u1, u2, F, M_*, R_*}, where P is the orbital period of the planet, t0 is a reference time for the transit (the mid-transit time of a reference transit), R_p/R_* is the radius ratio between the planet and the star, b is the impact parameter, {u1, u2} are the parameters of a quadratic limb-darkening model, F is the mean value of the flux in the light curve, M_* is the mass of the host star and R_* is the radius of the host star. Meanwhile, for the GP a SHOTerm is used as the kernel and its two log-parametrized hyperparameters (see the celerite2 documentation for their meaning) are fitted simultaneously with the transit to model any trends in the residuals. The procedure here follows the basic structure described in the TESS case study presented in the exoplanet package documentation.
Tight priors are only placed on the period, transit time and radius of the star, with the rest of the variables given either loose Gaussian priors or uniform priors. This is to allow the information contained within each light curve to determine the shape of the posteriors, rather than letting them be dominated by a tight prior, so that any difference between cadences becomes apparent. As a note of caution, we chose not to fit for eccentricity, for computational efficiency given the large sample of light curves to be fitted. Any signatures left behind in the light curve should be absorbed into the posterior of the stellar density (see e.g. Van Eylen et al. 2019; Dawson & Johnson 2012), so we expect any effect on the radius ratio determination to be negligible. Additionally, a uniform prior between 0 and 1 is placed on the impact parameter, meaning that grazing transits (those for which b > 1) cannot be modelled. This should not be an issue, as there are no planets with grazing transits in the selected sample of systems according to the b values from the NEA table. For the GP, the parameters have tight priors to prevent overfitting. In Table 1 all the transit variables and their priors are summarized.
With the model set up, we obtain the first maximum likelihood or best-fitting set of parameters. Using these parameters, we subtract the best-fitting light curve and the GP from the data and then remove any point whose residual is larger than 5 times the root-mean-square value of all residuals. With the outliers removed, we obtain a second set of best-fitting parameters, which are then used as a starting point for the Markov Chain Monte Carlo sampling.
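The 5σ residual clipping can be sketched as follows (the helper name and synthetic numbers are ours):

```python
import numpy as np

def outlier_mask(flux, model, n_sigma=5.0):
    """Flag points whose residual from the best-fit model (transit + GP)
    exceeds n_sigma times the RMS of all residuals."""
    resid = flux - model
    rms = np.sqrt(np.mean(resid ** 2))
    return np.abs(resid) > n_sigma * rms

# Illustrative check: one strong outlier in otherwise well-modelled data
rng = np.random.default_rng(1)
model = np.ones(500)
flux = model + rng.normal(0.0, 1e-3, 500)
flux[42] += 0.05                      # inject a single bad point
bad = outlier_mask(flux, model)
```

Only points far from the model relative to the global scatter are dropped, so valid in-transit points survive as long as the preliminary fit is sensible.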
Finally, we sample two chains with 8000 tuning samples and 8000 normal samples, with the starting point given by the best-fitting parameters of the light curve without outliers. The target_accept parameter defines the target acceptance ratio of our MCMC, defined as the ratio between the number of accepted samples and the total number of samples. We choose a value of 0.97 through experimentation, as we found it produced good fits. Seeds for each chain are generated randomly. Convergence of the chains is checked through the "rhat" statistic, which must be close to 1 for convergence (Vehtari et al. 2021).
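The basic form of this convergence diagnostic can be sketched as below; note this is the classic Gelman-Rubin statistic, whereas the rank-normalized split-R̂ of Vehtari et al. (2021), which PyMC3 actually reports, adds further refinements:

```python
import numpy as np

def gelman_rubin(chains):
    """Basic R-hat for an array of shape (n_chains, n_samples): compares
    between-chain and within-chain variance; values near 1 indicate that
    the chains have mixed."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    between = n * chain_means.var(ddof=1)          # between-chain variance
    within = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_est = (n - 1) / n * within + between / n   # pooled variance estimate
    return np.sqrt(var_est / within)

# Illustrative check: well-mixed chains vs. one chain stuck elsewhere
rng = np.random.default_rng(2)
good = rng.normal(0.0, 1.0, (4, 4000))
bad = good.copy()
bad[0] += 2.0
```

Well-mixed chains give R̂ ≈ 1, while a displaced chain inflates the between-chain variance and pushes R̂ well above 1.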

Numerical Information Analysis
The Information matrix technique (Information Analysis henceforth) is a mathematical formalism that allows one, under some conditions that the data must meet, to predict the best precision one can expect to obtain in the model parameters after conducting an experiment, but without having to perform the experiment or having to simulate it in detail (Wittman 2016). It does this by "measuring" how much information is contained in the combination of the data and the model. Returning to Figure 2, we can understand that the 20s cadence model is more sensitive to changes in the input parameters, while the 1800s model has lost some of that sensitivity. That is, small changes in the input parameters produce larger changes in the resulting light curve with 20s cadence than with 1800s cadence. We could then expect that 20s data should produce more precise values, since small deviations from the true parameters will quickly make the predicted light curve deviate from the data, while 1800s data will produce worse precisions. The Information Analysis formalizes this intuitive idea by introducing the concept of information, and it takes into account the errors in our measurements. More than that, the technique also allows one to predict covariances between parameters, potentially warning of the need for reparametrization for a more efficient fitting of the parameters. For a formal description of the Information matrix formalism, see Kagan & Landsman (1999).
The condition that our data must fulfil in order for this analysis to be valid is that errors in the data points must be Gaussian with a mean of 0 and uncorrelated.Here, we will assume that this condition is satisfied by our data.
To perform the analysis itself, we define f(t_i; {θ_j}) as the transit flux model, which is evaluated at points in time t_i and which depends on a set of parameters {θ_j}. The standard deviation of the point at time t_i is taken to be σ_i, and we can then calculate the entries of the zero-mean Gaussian-noise Information matrix using the expression

I_jk = Σ_i σ_i⁻² (∂f(t_i)/∂θ_j) (∂f(t_i)/∂θ_k), (1)

where the derivatives are with respect to the model parameters and we sum over all times. Here, B is the inverse of the covariance matrix of the measurements, which in this case is just a diagonal matrix with σ_i⁻² on the diagonal, that is, B_il = δ_il σ_i⁻². Then, with the conditions described above met, we can obtain the elements of the covariance matrix by simply inverting this matrix:

Cov(θ_j, θ_k) = (I⁻¹)_jk. (2)

The diagonal of the matrix gives us the smallest possible variance we can expect on each of the parameters, while off-diagonal elements measure the covariance between the parameters. Of course, this is a prediction of the best precision we can obtain in the parameters, and many factors can result in our experiment obtaining larger variances.
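The construction and inversion of the Information matrix can be sketched numerically. The following toy uses a Gaussian dip as a stand-in for the transit model; the function names, the dip model and all values are ours for illustration, not the PEPITA implementation:

```python
import numpy as np

def fisher_covariance(model, theta, t, sigma, rel_step=1e-6):
    """Numerical Information Analysis for a model f(t; theta) with
    independent zero-mean Gaussian errors sigma (scalar or array):
    build the Information matrix from finite-difference derivatives
    and invert it to get the predicted parameter covariance matrix."""
    theta = np.asarray(theta, dtype=float)
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), t.shape)
    J = np.empty((t.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = rel_step * max(abs(theta[j]), 1.0)
        # Central finite-difference derivative w.r.t. parameter j
        J[:, j] = (model(t, theta + step) - model(t, theta - step)) / (2 * step[j])
    info = (J / sigma[:, None] ** 2).T @ J   # I_jk = sum_i df/dθ_j df/dθ_k / σ_i²
    return np.linalg.inv(info)               # predicted covariance matrix

def dip(t, theta):
    """Toy transit-like model: Gaussian dip with depth d, centre t0, width w."""
    d, t0, w = theta
    return 1.0 - d * np.exp(-0.5 * ((t - t0) / w) ** 2)

t = np.linspace(-0.2, 0.2, 2000)
cov = fisher_covariance(dip, [0.004, 0.0, 0.05], t, 5e-4)
depth_precision = np.sqrt(cov[0, 0])         # best achievable precision on d
```

Because the information scales as σ⁻², halving the per-point noise exactly halves the predicted parameter uncertainties, which the test below exploits.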
Thus, all the difficulty of the analysis lies in calculating the derivatives of the flux model, which are not guaranteed to be analytically tractable. Calculating analytical derivatives of limb-darkened flux models is not possible, as binned light curves have to be calculated numerically, and calculating numerical derivatives can be computationally expensive. Previous works that implemented this technique in the context of exoplanet transits (Carter et al. 2008; Price & Rogers 2014) used simplified, linear trapezoidal transit models with no limb-darkening in order to be able to derive analytical expressions in the interest of computational efficiency. Their Information Analysis predictions of the variances agree well with MCMCs run on artificially generated data based on the linear trapezoidal model, showing the validity of the analysis. However, the introduction of limb-darkening makes the predictions less accurate, and the limitations go beyond the lack of limb-darkening effects in the model. The parameters for which they derive variances are related to the morphology of the light curve (e.g. duration of the ingress, depth of the transit or duration of the full transit, among others), which are related in complex ways to the more physical parameters normally used to describe a transit (e.g. radius ratio, stellar density or impact parameter, among others), and the data points (the times at which observations were made) have to be assumed to be perfectly uniformly sampled, which is not necessarily the case, as interruptions are frequent in observations.
In order to address the limitations of an analytical approach to the Information Analysis technique, we propose a fast numerical implementation such that the exact model that is going to be fitted is used directly in the calculation of the matrix, allowing the prediction of the precision of more physical variables such as the impact parameter and the density of the host star. Moreover, a numerical analysis allows for the prediction of the variances associated with the exact distribution in time of the available data points, including interruptions or any other deviation from uniform sampling. We believe that our approach increases the appeal of the technique, as it moves it closer to the hands-on work of observations and away from the theoretical realm, making it more suitable for the efficient planning of observations of exoplanets whose parameters are to be refined.

Implementation
In order to efficiently implement the numerical Information Analysis of transit light curves, we again make use of the exoplanet package. The package's integration with theano (Theano Development Team 2016) allows the numerical derivatives required for the computation of the matrix to be performed in a fast and efficient manner (Dan Foreman-Mackey, private communication).
Thus, Prediction of Exoplanet Precisions using Information in Transit Analysis (PEPITA) was developed by implementing the transit model described before and including methods for the calculation of the derivatives, the Information matrix and the covariance matrix. The implementation is designed in such a way that modifications of the particular transit model are easy to implement. All that is needed for the calculation of the covariance matrix is an array of timestamps of when data points were collected, along with the errors in the measurement of each of the data points (in our case taken directly as reported in the SPOC light curve) or a mean error that is assumed to be equal for all points, as well as a set of parameters defining the fiducial model (the model used to evaluate the derivatives, usually the best-fit set of parameters). In principle, the derived covariance matrix should make a good prediction regardless of the chosen fiducial model, as long as the real parameters are not too far from it.
In Figure 3 we show the numerical derivatives calculated for HD 2685 with 20s (black), 600s (orange) and 1800s (magenta) cadences. For clarity, we did not include the 120s derivatives, since they are very similar to the 20s derivatives. Just by visual inspection of these derivatives, one can gain an understanding of why longer cadences are, in principle, worse at constraining transit parameters. The 1800s derivatives are smaller than the 20s derivatives, which indicates that observations made with 1800s cadence will be less sensitive to small changes in the model parameters, and the precision derived from them will be worse than what can be obtained with shorter cadences. Of course, the exact difference between cadences will depend on the set of parameters used to calculate the derivatives. For example, the 600s derivatives have almost the same shape as the 20s derivatives for this particular set of parameters, but this will not necessarily happen for other planets. One should also remember that the Information matrix consists of not only the derivatives, but also the precision (standard deviation) of the data. The magnitude of the error bars scales as ∝ 1/√Δt, with Δt the cadence of the observations, and so while shorter cadence models will be more sensitive, they will also suffer from larger error bars in the data. Therefore, it is not possible to say that one cadence will be better than another without performing the analysis.
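The 1/√Δt scaling of the per-point scatter can be checked with a quick white-noise experiment (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins, bin_factor = 4000, 90                    # 90 x 20 s = 1800 s
flux_20s = rng.normal(1.0, 1e-3, n_bins * bin_factor)
# Bin consecutive groups of 90 short-cadence points into one long-cadence point
flux_1800s = flux_20s.reshape(n_bins, bin_factor).mean(axis=1)
ratio = np.std(flux_20s) / np.std(flux_1800s)
# ratio should be close to sqrt(90) ~ 9.5: binning to 1800 s shrinks the
# per-point error bars, partly offsetting the loss of time resolution
```

This is why smaller derivatives at long cadence do not automatically imply worse precision: the two effects trade off and must be combined through the Information matrix.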
Another important takeaway from Figure 3 is that some parameters produce derivatives that are an order of magnitude or more larger than others, or that are non-zero over wider ranges of time (compare, for example, the radius ratio and the impact parameter), which also explains why it is harder to constrain some parameters compared to others.
Priors are an important part of any Bayesian model that can be used to extract information from data, and these too can be incorporated into the Information Analysis as another source of information that is independent of how much information is contained in the model. To do so, one must simply put these priors (the variance of whatever prior distribution is being used) in their corresponding position on the diagonal of a new "priors" matrix P, which is then inverted so that the elements of the inverted matrix are given by

(P⁻¹)_jj = σ_p,j⁻²,

with σ_p,j the standard deviation of the prior distribution of parameter j. Note that having no prior would be equivalent to an infinitely large standard deviation, and so for that particular parameter the value of its entry in the inverted "priors" matrix will just be 0. That is, having no priors is the same as having no additional prior information.

Figure 3. Numerical derivatives calculated for HD 2685 with 20s (black), 600s (orange) and 1800s (magenta) cadences, computed using the integration of the exoplanet package with theano. Just from the derivatives, it is intuitive to see that a 1800s light curve is less sensitive to changes in the parameters and will do worse in constraining them, while 600s and 20s are almost equally sensitive and should produce better precisions.
All that is left is to add this matrix to the original Information matrix, and one has a matrix describing both the information contained in the data and any information conveyed by our prior knowledge. The inverse of this matrix provides us, just as before, with the covariance matrix, whose diagonal elements are the variances we expect to obtain by performing the MCMC fit.
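Folding the priors into the analysis then amounts to a one-line matrix update; a minimal sketch (the function name is ours, with np.inf encoding "no prior"):

```python
import numpy as np

def covariance_with_priors(info, prior_sigmas):
    """Combine an Information matrix with independent Gaussian priors:
    add a diagonal matrix with 1/sigma_prior^2 entries (0 for no prior,
    i.e. an infinitely wide prior) and invert the total."""
    prior_info = np.diag(1.0 / np.asarray(prior_sigmas, dtype=float) ** 2)
    return np.linalg.inv(info + prior_info)

# Illustrative 2-parameter case: prior only on the second parameter
info = np.diag([4.0, 1.0])
cov = covariance_with_priors(info, [np.inf, 1.0])
```

With no prior the predicted variance is 1/1.0 = 1.0 for the second parameter; adding a unit-width prior tightens it to 0.5, while the first parameter is unchanged.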
For the purpose of this work, we use the median values of the posteriors obtained from the MCMC fit for the fiducial model.
For the priors, we use the standard deviations described in Table 1 for the Gaussian priors. For the impact parameter, we use 1/√12, the standard deviation of a uniform distribution between 0 and 1. For the limb-darkening variables, although the posterior distributions that we obtain from the fitting are for u1 and u2, behind the scenes the exoplanet package actually samples the reparametrizations described in Kipping (2013). Thus, even if the MCMC fit is unable to constrain the values of these reparametrizations at all, it will still produce non-uniform distributions for u1 and u2. These distributions correspond to a lack of extra knowledge derived from the data and so are just the prior on u1 and u2 that results from the reparametrization of these variables. Hence, we use the standard deviation of those distributions as the priors on u1 and u2.

MCMC fit results
Upon successful completion of each of the MCMC fits, a series of manual inspections was performed on each of the fitted light curves before the results were approved. Light curves were checked before and after the application of the Savitzky-Golay filter to ensure the transit was not eliminated or distorted by the filter. Points marked as outliers and removed were also checked for each of the light curves to ensure that the pre-processing best-fit model and GP had not failed and caused valid points in the transit to be flagged as outliers. Additionally, a plot is made where the folded light curve is shown before and after the application of the median GP obtained after sampling is finished. This is done to ensure that there has not been an overfitting of the transit by the GP. For the last two checks, we generate a folded light curve, where the model light curves obtained by choosing 10000 random samples of the posteriors are plotted with the median shown as a line and the range between the 16th and 84th percentiles shown with a colour-filled region (see Figure 4), as well as a corner plot to visually inspect the posteriors (see Figure 5; the original corner plots contain all variables, while Figure 5 is a scaled-down version for clarity). In particular, Figures 4 and 5 show how, for this planet, good fits are obtained for all cadences and fits to the shorter cadence produce a higher precision in the radius ratio and impact parameter.
Given that, as mentioned before, there may be several sectors for a particular system-cadence combination, there are several choices available when deciding how to compare the precision obtained with the different cadences of the same system. Ideally, we would compare the performance of the same sector fitted with different cadences. However, this choice would drastically reduce the number of comparisons possible, sometimes even making comparisons between certain cadences impossible. As such, we choose to compare all possible combinations between a particular system-cadence-sector combination and all other system-cadence-sector combinations with a different cadence. That way, the number of comparisons becomes large (more than 6000 comparisons in total), and although some sectors of a particular system-cadence combination may have performed slightly better or worse than others, the deviations should average out as we compare everything with everything.
In Figure 6 we show the distribution of precision improvements observed for all possible cadence combinations out of the {20s, 120s, 600s, 1800s} cadences available for TESS observations. We calculate the precision improvements as follows. Given σ_s, the standard deviation of the posterior fit for the radius ratio of the shorter cadence (y-axis label in the plot), and σ_l, that of the longer cadence (x-axis label in the plot), the improvement is given by

I = 100 (σ_l − σ_s) / σ_l,

so that a positive value of 50 means that the shorter cadence performed 50% better than the longer cadence.
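This improvement metric reduces to a one-line helper (the function name is ours):

```python
def precision_improvement(sigma_short, sigma_long):
    """Percentage improvement of the shorter-cadence fit over the longer one:
    100 * (sigma_long - sigma_short) / sigma_long. Positive values mean the
    shorter cadence constrained the radius ratio more tightly."""
    return 100.0 * (sigma_long - sigma_short) / sigma_long
```

For example, a shorter-cadence posterior width of 0.5 against a longer-cadence width of 1.0 gives an improvement of 50%.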

Information analysis results
In order to display the results of the Information Analysis predictions, we compare the precision obtained from the MCMC fit to that predicted by the Information Analysis. We show in Figure 7 the predicted % precisions against the MCMC fit precisions, as well as the residuals obtained by subtracting the fit values from the predicted values.

Multisector results
Since fits for published parameter values are usually performed on more than one sector at a time, we perform multisector fits for a small number of systems from our selection. In Figure 8, we show the MCMC fit precisions obtained for a fit to a single sector as well as for a fit to 6 sectors simultaneously for 4 systems of our selection. We also plot the Information Analysis predicted precisions for the 6-sector fits.
We find that it is hard to tell exactly how a multisector fit will perform based on the single-sector precisions, beyond the expectation that precisions should improve with the use of more sectors. Nevertheless, we found no hints that the inclusion of several sectors at the same time in the Information Analysis affects its performance.

Predictions for TESS Objects of Interest (TOIs)
Given the encouraging results of the Information analysis, we provide, as a proof of concept, the predicted radius ratio improvements to be obtained by the reobservation of TOIs with different cadences.
To perform this analysis, we start with the TOIs table from the NASA Exoplanet Archive and select only objects of interest which are identified as planet candidates. Then, only objects with values for log g, R_* and T_eff (stellar effective temperature), along with their errors, available in the table are chosen. From these, we select only those planet candidates for which only 1800s observations are available. The values we extract from the table for each of the TOIs are the transit duration (T), the transit depth (δ), the period, the transit time, and the stellar radius, surface gravity and effective temperature.
The analysis is performed in much the same way as described above. However, in this case, we choose to set up the Information Analysis by modelling the transit using the log of the stellar density, log ρ_*, instead of the stellar mass (with no prior on the stellar density). Because values for the stellar density, the impact parameter and the stellar limb-darkening parameters are needed in order to construct the fiducial model, we obtain these values by combining the available parameters as follows.
The limb-darkening parameters are obtained using the Python Limb Darkening Toolkit, or PyLDTk (Parviainen & Aigrain 2015; Husser et al. 2013). Since no value for the stellar metallicity is available, we use a metallicity of 0.25 with an error of 0.125 for all the TOIs. Although the package can also provide errors on the calculated limb-darkening values, we still choose to use the same priors on the limb-darkening parameters as we used in the Information Analysis of the MCMC fits. We choose to do this because the calculations of these limb-darkening parameters are very crude given the available stellar parameters.
Meanwhile, the impact parameter of the fiducial model is calculated by assuming a circular orbit. Under this assumption, the transit duration (T) can be combined with the period (P), stellar surface gravity (g_*) and stellar radius (R_*) to obtain the impact parameter as

b = sqrt(1 − (a/R_*)² sin²(π T / P)),

where the semi-major axis a follows from Kepler's third law with M_* = g_* R_*² / G, i.e. a = (g_* R_*² P² / 4π²)^(1/3). Meanwhile, for the fiducial model value of ρ_*, we obtain an expression combining g_* and R_*. With this, ρ_* is given by

ρ_* = 3 g_* / (4 π G R_*).

Finally, for the radius ratio, once the limb-darkening parameters and the impact parameter are obtained, we can get an approximate value for the ratio by using the transit depth in combination with the limb-darkening parameters and the impact parameter. This functionality is already implemented in the exoplanet package, and we make use of it.
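The circular-orbit relations above can be sketched numerically as follows (SI units throughout; the function names are ours, and the relations neglect the planet's size and assume an edge-on, circular orbit):

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def semi_major_axis(P, g_star, R_star):
    """Kepler's third law with M_* = g_* R_*^2 / G (all SI units)."""
    return (g_star * R_star**2 * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

def impact_parameter(T, P, g_star, R_star):
    """Circular-orbit impact parameter from the total transit duration T,
    neglecting the planet's size: b^2 = 1 - (a/R_*)^2 sin^2(pi T / P)."""
    a = semi_major_axis(P, g_star, R_star)
    x = (a / R_star) * math.sin(math.pi * T / P)
    return math.sqrt(max(0.0, 1.0 - x * x))

def stellar_density(g_star, R_star):
    """rho_* = 3 g_* / (4 pi G R_*), combining g_* and R_*."""
    return 3.0 * g_star / (4.0 * math.pi * G * R_star)
```

As a sanity check, Sun-like inputs (g_* ≈ 274 m/s², R_* ≈ 6.957e8 m) recover the mean solar density of about 1.4 g/cm³, and a one-year period recovers a semi-major axis of about 1 au.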
With the fiducial model parameters all determined, we download the available 1800s cadence light curves for the TOIs. The array of timestamps is taken to be an array of times evenly spaced by each of the cadences, between the minimum and maximum times of the downloaded 1800s light curve. That is, if the first point of the 1800s cadence is at time t_0, we create a uniform array of points separated by, for example, 20s for the analysis of the 20s cadence, starting at t_0 and extending up to the last point of the 1800s cadence. This ensures homogeneity between cadences. For the errors in the observations, we take the mean error of the 1800s data points of each planet candidate, σ_1800s, and then for each of the other cadences of t seconds the error σ_t is assumed to be the same for all points and equal to

σ_t = σ_1800s sqrt(1800 / t),

as expected for uncorrelated white noise. This need to approximate the errors should be more carefully examined if this analysis is repeated in a more rigorous manner, as it can directly affect the predictions of the analysis. Finally, we perform the Information Analysis for cadences of 20s, 120s, 600s and 1800s, and we compare the precisions in the radius ratio for each of the cadences.
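The construction of the homogeneous time grid and the white-noise error scaling can be sketched as follows (the function name is ours; times are in days and cadences in seconds):

```python
import numpy as np

def cadence_grid_and_errors(t_1800, mean_sigma_1800, cadence_s):
    """Build a uniform time array spanning the 1800 s light curve at the
    requested cadence, and assign every point the white-noise-scaled
    error sigma_t = sigma_1800 * sqrt(1800 / t)."""
    dt = cadence_s / 86400.0                       # cadence in days
    t = np.arange(t_1800.min(), t_1800.max() + dt / 2, dt)
    sigma = mean_sigma_1800 * np.sqrt(1800.0 / cadence_s)
    return t, np.full(t.size, sigma)
```

For instance, a 450 s cadence point carries twice the per-point error of an 1800 s point, since sqrt(1800/450) = 2, while there are four times as many points over the same baseline.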
Table 2 shows the expected improvements from reobservations with either 20s or 120s cadence for the 10 TOIs with the largest improvements. A full list is available in the GitHub repository, along with a Jupyter notebook showing the code used to make these predictions. Our Information Analysis implementation shown there can be extended to the analysis of other planet candidates and cadences.

DISCUSSION
The results presented in the previous section confirm our expectation that the precision of the radius ratio can be improved with the use of shorter cadences. We have observed an almost doubling of the precision (Improv. > 50%) in half of the comparisons of 20s and 120s cadences to 1800s cadence.
As expected, the median improvement observed is largest when comparing the shortest cadence to the longest cadence (i.e. 20s to 1800s). The trends seen in the rows and columns of Figure 6 are consistent with the previous discussion about information being "lost" when a longer cadence is compared to a shorter one, and should encourage the use of shorter cadences wherever possible. It is not as clear from Figure 6, for example, whether 120s cadence observations are preferable to 600s, with a median improvement of 4%. However, it is worth emphasizing that the plots show the distribution of all comparisons between 120s and 600s cadence observations. The actual improvement will depend on the particular system being considered, as can be seen in Table 2, where we present predicted improvements for 10 different TOIs and show that these improvements depend on the particular parameters of the system. For example, while TOI 3786.01 shows a very similar improvement in the radius ratio precision when reobserved with either 20s, 120s or 600s cadence (66.84%, 66.79% and 65.89%, respectively), TOI 1701.01 shows a clear difference between the improvement expected from reobservations with either 20s or 120s cadence, of around 65%, and that expected from reobservations with 600s cadence, of around 40%.
This dependence of cadence effects on the particular system highlights the need for a fast and easily adaptable prediction method, such as the one we present here and make publicly available. Figure 7 shows overall median errors in our predictions of about 1%. We observe that, as the fit precision gets worse, the predictions start to deviate more and to predict better precisions than the ones actually obtained in the fit. This can be understood by considering that, as the fit precisions get worse, it is possible that the fit is doing poorly for some reason our method did not account for. In those cases, since the Information Analysis predicts the best possible precision one should be able to obtain, our predictions are expected to be better than those obtained with the fits. When fit precisions get worse than 10%, we find that median errors for 20s cadence predictions can reach values of around 10%. Therefore, predictions should be treated as an approximate lower boundary on the precision to be obtained with a fit. Whether this precision is achieved or not will depend on the fitting methodology. Where good fits are obtained, with precisions of a few %, we expect predictions to be accurate to within 0.5%.
When comparing our predictions (numerical method henceforth) to those obtained using the analytical methodology described in Price & Rogers (2014) (analytical method henceforth), by obtaining predictions for the radius ratio standard deviation from their analysis and dividing it by the median value of the radius ratio, we find that our numerical method produces better predictions of the radius ratio precision, especially for larger values of % precision. Moreover, while our methodology rarely predicts a precision worse than the precision obtained in the fit, the analytical results suffer from this kind of error in most cases. This is relevant since, as discussed above, the Information Analysis should predict the best attainable precision and thus be prone to predicting a precision better than the fit precision. That the analytical results suffer from such errors is an indication that some of the information contained in the light curve failed to be captured by the analysis. In Figure 9, we show the absolute residuals from our predictions and from the predictions obtained using the analytical approach, compared to the fit precisions of the radius ratio. Coloured crosses represent the moving median of the residuals, for a bin of size 6% around the central value, for our predictions (white) and the analytical predictions (red). While the residuals from both prediction methods increase as the fit precision increases, those from the analytical predictions do so in a more pronounced manner.
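A moving median of the kind plotted as crosses in Figure 9 can be computed with a simple windowed sweep; a minimal sketch (the function name is ours):

```python
import numpy as np

def moving_median(x, y, half_width=3.0):
    """Moving median of y: at each x value, take the median of all y whose
    x lies within +/- half_width of it (a bin of total size 6, in percent
    of fit precision, for half_width = 3)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.array([np.median(y[np.abs(x - xc) <= half_width]) for xc in x])
```

Each point's window always contains at least the point itself, so the result is defined everywhere, at the cost of noisier medians where the data are sparse.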
The code used to generate the predictions using the methodology of Price & Rogers (2014) (see equations A15 and A16 of their work) is available in the GitHub repository.

CONCLUSION
We have performed the uniform processing and fitting of the transit light curves of 46 TESS confirmed planets, amounting to a total of 556 single-sector light curves, to investigate the effects of cadence on the retrieved parameters. We model transits with the set of parameters {P, t_0, log R_p/R_*, b, u_1, u_2, F, M_*, R_*}, whose posterior distributions are obtained using a Markov Chain Monte Carlo (MCMC) procedure.
We also developed an implementation of a numerical Information Analysis technique for exoplanet transits by using the exoplanet package. PEPITA is highly adaptable to the exact set of parameters to be fitted and to the exact data points and precisions of the available data. Thanks to the integration of exoplanet with theano, the required numerical derivatives can be obtained, making the numerical analysis possible and fast. Our numerical implementation differs from past analytical implementations of this technique in that it does not require approximations of the light-curve shape and can be adapted to the exact model to be fitted, while still producing fast results. This technique is applied to the light curves that were fitted with the MCMC.
Median improvements in the radius ratio precision of almost 50% are observed when comparing fits to 20s or 120s cadence light curves to 1800s cadence light curves. Smaller, but still relevant, median improvements of around 35% are also observed when comparing fits to 600s light curves to fits to 1800s light curves. However, it is important to highlight that these are median improvements only and that the actual improvement should be considered on a case-by-case basis. When we consider fits to multiple sectors simultaneously, we find no significant changes in the very limited sample considered.
With this in mind, we check the performance of our numerical Information Analysis by producing predictions of the radius ratio precision obtained by the fits for each of the light curves. While we find that our predictions perform worse for fits where the precision of the radius ratio is worse (that is, for large values of % precision), we speculate that this is likely caused by the poor performance of those fits, due to our fitting model not accounting for certain factors which can affect the light curve. For example, removing stellar variability is done here with a general GP, but a more individualized analysis of each light curve could result in better removal of any variability. Another possible factor unaccounted for in our model is stellar spots, which can also affect the light curve. Nevertheless, where the fit radius ratio precision is of a few %, our predictions have errors below 0.5%. Even when larger errors of a few % are observed, these correspond to predicting precisions better than observed, in line with how the Information Analysis should perform. When we compare our method to previous analytical methods, we find not only that our errors are significantly smaller, but also that the analytical predictions tend to be of worse precision than observed, indicating that the analysis has missed some of the information contained in the light curve.
Given the satisfactory performance of our prediction method, we apply it to a number of TESS Objects of Interest with observations available only at 1800s cadence, to demonstrate how our implementation of the Information Analysis can aid in deciding which targets should be prioritized for short cadence observations. We present the top 10 TOIs which would benefit most from reobservations with short cadences and highlight how, for some of them, there is an added benefit in using 20s or 120s cadence instead of 600s cadence, while for others the difference is considerably smaller. Additionally, in Appendix A we include the full list of all planet candidate predictions and make the script used to produce them available in the GitHub repository.
Our study has shown that shorter cadences offer better precision in the radius ratio obtained from transits. Thus, whenever high precision in this parameter is needed (constraining the radius valley or characterizing exoplanet atmospheres, among others), special attention should be given to the choice of cadence, as a shorter cadence is more likely to provide the required precision. We have not focused here on any of the particular areas that could benefit from the increased precision, but we expect our results to be important in the context of investigations of the radius valley, transmission spectroscopy and the determination of exoplanet densities, among others. Instead, our aim has been to highlight that there are indeed benefits to be gained by considering the use of shorter cadences. Moreover, we have demonstrated that PEPITA is able to accurately predict the precision in the radius ratio that can be obtained by fits to any particular light curve. Our implementation can serve as a tool in the planning of future missions by providing information about which targets should be prioritized for observations with short cadences. In order to allow for such use of PEPITA, we have made it publicly available on GitHub. We encourage studies focused on one of those areas that require high precision to consider the issue of cadence in depth, and perhaps to recommend reobservations of exoplanets of interest whenever the Information Analysis predicts that the required precision will be obtained.

Figure 3 .
Figure 3. Derivatives of the transit light curve of HD 2685 with respect to the model parameters. The derivatives are computed for cadences of 20s (black), 600s (orange) and 1800s (magenta) using the integration of the exoplanet package with theano. Just from the derivatives, it is intuitive to see that a 1800s light curve is less sensitive to changes in the parameters and will do worse at constraining them, while the 600s and 20s light curves are almost equally sensitive and should produce better precisions.

Figure 4. Final fits for LHS 3844 with median values and standard deviations of the posterior distributions of the radius ratio (R_p/R_*) and impact parameter (b) for each fit. To avoid cluttering by the data points, we fold the light curves and then bin the data to the original cadence. That is, a 20s light curve will be folded, and then the folded data will be binned into 20s bins so that there is a single data point every 20s. From the posteriors, 10000 random samples are drawn for the transit parameters and light curves generated. The median of all samples is shown with a red line, while the range between the 16th and 84th percentiles is shown with a yellow coloured region. Individual plots like these were generated for each of the fitted light curves and manually inspected to identify any issues with the final fit before approving any results.

Figure 6 .
Figure 6. Density plot of the distribution of improvements (see Equation 5) observed for every combination of cadences. Values above 0 indicate an improvement of precision with the use of a shorter cadence, while values below 0 indicate a worsening of the precision. Vertical grey lines indicate neither an improvement nor a worsening, at a value of 0%, while red lines indicate the median improvement observed for a particular cadence comparison.

Figure 7 .
Figure 8 .
Figure 7. Predicted % precisions obtained using the Information analysis against precisions obtained from the MCMC fits. Points on the grey diagonal line indicate a perfect agreement between the predicted and real values. Meanwhile, values above the line indicate an underprediction of the precision, while values below indicate an overprediction. Residual plots are included, with the difference between the predicted precisions and the fit precisions plotted against the fit precisions.

Figure 9 .
Figure 9. Residuals from our predictions (grey) and those obtained using the analytical methodology presented in Price & Rogers (2014) (red). Coloured crosses represent a moving median of the residuals, with a bin size of 6% around the central value.

Table 1 .
Transit and GP variables fitted with the Bayesian model and their priors. N(μ, σ²) represents a normal distribution with mean μ and standard deviation σ, |N|(μ, σ²) represents a normal distribution restricted to positive values with mean μ and standard deviation σ, and U(a, b) represents a uniform distribution between a and b. Values of the form v̄ indicate that the value of v comes from the NEA table. Similarly, a standard deviation of the form σ_v indicates that the standard deviation is taken as the error reported for that value in the NEA table. Fallback values for the period and transit time standard deviations, in case no value is available in the NEA table, are 10^-3.

Table 2 .
Expected improvements from reobservation with either 20s or 120s cadence of TOIs that have only 1800s cadence observations. The 10 TOIs with the highest improvements are shown.