Systematic Reanalysis of KMTNet microlensing events, Paper I: Updates of the Photometry Pipeline and a New Planet Candidate

In this work, we update and develop algorithms for KMTNet tender-loving care (TLC) photometry in order to create a new, mostly automated, TLC pipeline. We then start a project to systematically apply the new TLC pipeline to the historic KMTNet microlensing events and search for buried planetary signals. We report the discovery of such a planet candidate in the microlensing event MOA-2019-BLG-421/KMT-2019-BLG-2991. The anomalous signal can be explained by either a planet around the lens star or the orbital motion of the source star. For the planetary interpretation, despite many degenerate solutions, the planet is most likely to be a Jovian planet orbiting an M or K dwarf, which is a typical microlensing planet. The discovery proves that the project can indeed increase the sensitivity of historic events and find previously undiscovered signals.


INTRODUCTION
Gravitational microlensing has been proven to be a powerful tool to detect extrasolar planets (Mao & Paczyński 1991; Gould & Loeb 1992). To date, there are more than 190 confirmed microlensing planet detections. With the increasing number of detections, several statistical works that focus on the planet-host mass-ratio function have been presented in this field (e.g., Gould et al. 2010; Suzuki et al. 2016, 2018; Poleski et al. 2021). However, the uncertainties of the current statistical works are still large. For example, Suzuki et al. (2016) suggest that the mass-ratio function has a break at q_br = 1.7 × 10⁻⁴, but Jung et al. (2019) later argued that the break should be located at a smaller mass ratio, q_br ∼ 0.55 × 10⁻⁴. Since then, however, many q < 10⁻⁴ planets have been discovered, which may conflict with a "break" in the mass-ratio function.
The large uncertainty in the statistical results mainly derives from the small number of detections. The reason that the planet numbers in these statistical samples are few is, on the one hand, due to the intrinsically low probability of the microlensing planet perturbation and the difficulty of obtaining sufficiently dense light-curve coverage to capture and characterize such perturbations, and on the other hand, potentially due to a failure to find planetary signals in the existing microlensing events. The Korea Microlensing Telescope Network (KMTNet, Kim et al. 2016) has largely solved the former problem by stationing large field-of-view cameras at three sites around the globe, enabling continuous or near-continuous high-cadence observations of 100 deg² toward the Galactic bulge. The latter problem can be greatly reduced by applying a semi-automatic search algorithm to the light curves, such as the AnomalyFinder system (Zang et al. 2021b, 2022) of the KMTNet.
However, current searches of KMTNet events are all based on the preliminary data reduced by a real-time or a post-season pipeline (Kim et al. 2018a,b) using the difference image software pySIS (Albrow et al. 2009). This software was adapted into a pipeline that is applied to all KMTNet events to produce real-time photometry posted online (these data are referred to as "preliminary" or "online pipeline" photometry). While the online pipeline includes some adjustments for KMTNet-specific conditions, the pySIS software was meant to be tuned by hand for the conditions of each individual field. For example, certain sections of the KMTNet camera may have bad pixel columns, or some fields may have bright variable stars that need to be masked. Hence, because the online pipeline reduction is not tuned for the conditions in a specific field, by-hand TLC reductions often yield improved results. Therefore, after potential signals are found, the photometry is re-extracted by hand using a tender-loving care (TLC) approach. This by-hand TLC approach also uses the pySIS software but introduces fine-tuned parameters, special operations, and visual verification of various steps by human operators. The resulting high-quality data are then used for detailed modeling and publication. Therefore, it is possible for the AnomalyFinder and other searches (e.g., by-eye searches) based on the preliminary dataset to miss planetary signals that were too subtle but can be enhanced or revealed by the TLC data.
A systematic search therefore requires efficient, high-quality photometry for the KMTNet microlensing events. In principle, all KMTNet events could be re-reduced using the by-hand TLC procedure, but in practice, these reductions are operator-dependent and time-intensive. The time cost in particular has made it prohibitively expensive to apply to large numbers of KMTNet events, and so has precluded a systematic search based on TLC data and also presented challenges in the new era of systematic candidate searches using the AnomalyFinder system (Zang et al. 2021b, 2022). So, in this project, we start by developing and updating algorithms for KMTNet TLC photometry to enable mostly automated, high-quality reductions that reproduce the methods of the most highly skilled operators. These updates form a new TLC pipeline. We plan to apply the new pipeline to the archival events to find possible new planetary signals. Such a systematic reanalysis could both increase the number of planet detections and the survey sensitivity of KMTNet, which will eventually allow us to obtain a more complete and accurate statistical result.
In the first of this series of papers, we describe the development of the new KMTNet TLC pySIS photometry pipeline. The original pySIS algorithm was designed to work on datasets with tens to hundreds of observations for a few events per year, so new challenges were encountered once it began to be applied to the hundreds of KMTNet events requiring re-reduction, each with thousands of images. We report the underlying principles of the new and updated algorithms, the specific application to KMTNet, and how the changes allow for increased automation, robustness, and accuracy of the pySIS photometry.
As a demonstration, we first apply it to the KMTNet 2019 season prime-field high-magnification events. We report the discovery of a previously undiscovered candidate planetary signal in the event MOA-2019-BLG-421 (KMT-2019-BLG-2991), which is revealed by the KMTNet data reduced by the new TLC pipeline. Although there are unresolved degenerate non-planet interpretations, the discovery proves that the procedure can indeed find previously missed anomalies and increase the survey sensitivity.

PHOTOMETRY PIPELINE
The new TLC pipeline is developed from pySIS (Albrow et al. 2009), which is based on the difference image analysis method (Tomaney & Crotts 1996; Alard & Lupton 1998a; Bramich 2008). The input images are the calibrated images. The process of reducing the photometry with pySIS mainly consists of seven steps: (a) preprocessing, (b) astrometric alignment, (c) construction of the master subtraction reference image, (d) image subtraction, (e) source position refinement, (f) PSF model construction, and (g) flux extraction. We briefly describe the procedures as follows. The purpose of (a) preprocessing is to evaluate the quality of each image, i.e., the full width at half-maximum (FWHM) and the ellipticity of the PSF, the sky background, and the signal-to-noise ratio (SNR). This information is used to define a "good" image. Generally, a "good" image should have small FWHM, small ellipticity, low sky background, and high SNR. One "good" image is chosen to be the astrometric reference image, and all the other images are then aligned to it (step b). For (c), a series of "good" images are stacked to obtain a very high SNR image to use as the master subtraction reference image R. Then in step (d), a spatially variable numerical kernel K is solved and used to convolve the reference image to match each target image T, and the two are differenced to obtain the difference image D. In step (e), variable sources show up in the difference images, and therefore the difference images are used to refine the position of the microlensing source (for more details about the algorithm, see Appendix A in Albrow et al. 2009). Then in step (f), the same kernels computed in the subtractions are used to convolve the reference PSF to the target PSF at the final position, to obtain the pixelized PSF models for each target image. Finally, in step (g), a linear fit between each PSF model P and each difference image D is performed at the refined source position to extract the flux. Residual images E are produced after extracting the flux.
The by-hand TLC pySIS data reduction procedure of KMTNet was not well optimized for efficiency. Therefore, we carefully reviewed the procedures and made some updates. Many of the challenges in achieving high photometric accuracy come from steps (c), (e), and (g). In particular, (c) and (e) required significant human input, making them inefficient and the quality of the resulting photometry highly user-dependent; i.e., only certain highly skilled human operators were able to identify the reasons for bad photometry and correct them. As will be discussed in Sections 2.3 and 2.4, these steps can be automated with improved metrics that are optimized for KMTNet data. We note that it is very difficult to make the pipeline both high-quality and fully automated. However, with experience, some common issues can be identified or even prevented by adjusting the algorithm.
The overall procedures have not changed in the new pipeline. However, many detailed operations have been updated.

New Image Quality Metrics
Before we start describing the details of all the updates, we introduce two metrics to evaluate the quality of the photometry. The first is σ_sub, which describes the standard deviation of the subtracted image. For each difference image D, σ_sub is defined as

σ_sub = STD( { D_i / √T_i : i ∈ Ω_valid } ),

where D_i and T_i are the i-th pixel values of the difference image and the original target image, respectively, √T_i corresponds to the Poisson noise of the target image, and Ω_valid is the region of all unmasked pixels. Here "STD( )" means the standard deviation of all elements. The quality of the subtraction is described by σ_sub. If an image is well subtracted, the subtraction residuals D_i/√T_i should approximately follow the standard normal distribution, and therefore σ_sub ∼ 1; a larger value represents a worse subtraction.
The second metric is σ_res, defined as

σ_res = √( (1/N_phot) Σ_{i ∈ Ω_phot} E_i² / σ_i² ),

where E_i is the i-th pixel value of the residual image, σ_i is the pixel noise defined in the flux-extraction step below, and P_i, the value of the normalized PSF model on the corresponding pixel, defines the photometry region Ω_phot; the number of pixels in Ω_phot is N_phot. The quality of the photometry in Ω_phot is described by σ_res; larger σ_res values represent worse photometry. Because stars have different brightnesses, backgrounds, and blend levels, the expected σ_res value varies, but generally, σ_res ≳ 2 indicates poor photometry. These two metrics help us to quantify the goodness of image subtraction and flux extraction. All the updates below are based on these two measures, that is, each update makes at least one of σ_sub or σ_res smaller. The two metrics also help us to identify problematic data points caused by poor original images or errors in the photometry process.
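As an illustration of the first metric, a minimal computation of σ_sub from a difference image and its target image might look as follows (the function name, mask convention, and the guard against non-positive pixels are our own illustrative choices, not part of the pipeline):

```python
import numpy as np

def sigma_sub(diff, target, mask=None):
    """Subtraction-quality metric: the standard deviation of the
    difference image normalized by the Poisson noise of the target.
    A well-subtracted image gives ~1; larger values are worse."""
    valid = np.ones(diff.shape, dtype=bool) if mask is None else ~mask
    valid &= target > 0          # guard against non-positive pixels
    return float(np.std(diff[valid] / np.sqrt(target[valid])))
```

For a difference image that contains pure Poisson noise of the target, this statistic is close to unity by construction.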

PSF Extraction
Because it affects multiple steps, we start by describing changes to the algorithm used to extract the PSF. The PSF extraction is important in two respects. First, accurate PSF estimation helps us assess the quality of each original image, and thus helps to find better reference image(s). Second, the PSF of the reference image is used to create the PSF models of all target images; a poor PSF estimate will cause poor flux extraction on all images. Therefore, we updated the PSF extraction script to compute the PSF more accurately.
The original pySIS used the Bphot script in ISIS3 (Alard & Lupton 1998b; Alard 2000) to compute a pixelated PSF for a given image. First, the script detects a number (N_star) of the brightest stars that are not saturated. Then, it median-combines them to obtain a clean PSF that eliminates crowded neighboring stars. When N_star is too small, the background pixels are noisy because of small-number statistics. When N_star is too large, the SNR of the central pixels is lower because they are dominated by faint stars. We find that for 2 × 2 image stamps of the KMTNet bulge field (the PSF does not change much on this scale), N_star = 50 is a better default choice than the previous default of N_star = 5. In the rare cases when 50 is not optimal, the value is usually in the range N_star = 30-200. See Fig. 1 for an example.
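The median-combination step can be sketched as follows (an illustrative stand-in for the Bphot procedure, assuming star detection and saturation screening have already been done; the function and parameter names are hypothetical):

```python
import numpy as np

def extract_psf(image, star_xy, n_star=50, half=10):
    """Estimate the PSF by median-combining cutouts around the n_star
    brightest unsaturated stars (positions assumed pre-screened).
    The median rejects crowded neighbours that affect only a few stamps."""
    stamps = []
    for x, y in star_xy[:n_star]:
        stamp = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        if stamp.shape != (2 * half + 1, 2 * half + 1):
            continue                      # star too close to the edge
        stamp -= np.median(stamp)         # remove the local sky level
        flux = stamp.sum()
        if flux > 0:
            stamps.append(stamp / flux)   # normalize before combining
    psf = np.median(stamps, axis=0)
    return psf / psf.sum()                # renormalize the median stack
```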

Preprocessing of Images
The primary purpose of the preprocessing is to evaluate the quality of all original images and provide information for the reference image selection. The original pySIS uses a simple algorithm to estimate the FWHM, ellipticity, and sky background. For the FWHM and ellipticity, the PSF is estimated by linearly fitting the two-point correlation function to a Gaussian function along the image axes X and Y, denoted FWHM_X and FWHM_Y, respectively, and the ellipticity is estimated by FWHM_X/FWHM_Y. This algorithm is fast and efficient, but it also has a number of limitations. It can incorrectly estimate the ellipticity and FWHM, particularly when the ellipse's axis ratio is large and its major axis is angled close to ±45°. In addition, the simple algorithm uses all the pixels in the image and generally overestimates the FWHM by ∼1 pixel, owing to saturated stars and bleeding spikes that are very common in KMTNet images. Moreover, the sky background is simply estimated by the 85th percentile of all pixel values.
Therefore, we introduce DoPhot (Schechter et al. 1993) to accurately estimate the FWHM, ellipticity, and the sky background.
We also add an image quality indicator of the PSF irregularity as a supplement to the ellipticity. We use the modified version of Bphot from Section 2.2 to extract the PSF image, and then measure the irregularity S_Irr by summing the standard deviations over every "annulus" of the PSF:

S_Irr = Σ_{r=0}^{r_max} STD( { P_i : i ∈ annulus r } ),

where STD is the standard deviation of the given pixels and r_max is the radius of the PSF image, by default 2.5 FWHM. A schematic can be seen in Fig. 2. If the PSF is completely symmetric, S_Irr will be zero because all the pixels at the same radius will be equal; thus, larger S_Irr values represent larger irregularity. This process takes significantly longer to run than the previous algorithm (a few minutes rather than a few seconds). However, as we will see in the next section, these quantities, mainly the FWHM, sky background, S_Irr, and the standard deviation of each image, enable us to automatically select high-quality reference images without human decisions.
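A sketch of the annulus-based irregularity measurement (the one-pixel-wide annulus binning and the function name are our illustrative assumptions):

```python
import numpy as np

def psf_irregularity(psf, fwhm, r_max_factor=2.5):
    """Sum the standard deviations of PSF values over concentric
    one-pixel-wide annuli around the PSF center: near zero for a
    circularly symmetric PSF, larger for distorted ones."""
    ny, nx = psf.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    yy, xx = np.indices(psf.shape)
    r = np.hypot(yy - cy, xx - cx)       # radius of every pixel
    irr = 0.0
    for r_in in np.arange(0.0, r_max_factor * fwhm, 1.0):
        ring = (r >= r_in) & (r < r_in + 1.0)
        if ring.sum() > 1:               # need >1 pixel for a spread
            irr += float(np.std(psf[ring]))
    return irr
```

An elongated PSF scores higher than a circular one of comparable width, which is the behavior the metric is designed to capture.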

Selection of Reference Images
In the original by-hand TLC procedure, a human reviewer was required to check the quality of the reference images. This was especially important for a dataset the size of KMTNet's, because relatively rare edge cases (false "good" images) dominated the "best" images selected using the metrics calculated according to the original method. The result was that reference images often had to be selected manually, with only limited information to assess why one image (or set of images) was better than another. This sometimes required repeating the entire process multiple times with different combinations of reference images to test whether there was any improvement in the resulting photometry. With the more accurate and expanded metrics described in the previous section, we can both improve and better automate the reference image selection.
The image alignment and the overall photometry are not sensitive to the selection of the astrometric reference image. Therefore, here we focus only on the selection of the subtraction reference image(s). Hereafter, unless specified, "reference images" refers to the images that are stacked into the final, single "master reference image" R used in the subtraction.
Because the master reference image is used to convolve and subtract all the other images, the reference image(s) must have:
• high SNR. Otherwise the convolution kernels cannot be accurately determined, which would affect the reliability of the subsequent subtractions and flux measurements.
• small FWHM. A larger-FWHM image is harder to convolve to match a smaller-FWHM one, so the reference images should be roughly the best-seeing images.
• a symmetric PSF. Similar to the above requirement, an asymmetric PSF cannot be convolved to match symmetric PSFs of similar FWHM.
In addition, we want the target source to have approximately the same flux in the set of reference images because stacking variable sources can introduce extra systematic errors.
Therefore, based on the information we obtain from Section 2.3, we select the smallest-FWHM, highest-SNR, and smallest-S_Irr images. In detail, we start by setting thresholds of SNR > 15%, sky background < 80%, S_Irr < 0.03, and a cut on the time interval |t − t_0|, where t is the observation time of each image and t_0 is the peak time of the microlensing event, so that the source flux is approximately constant across the reference set. The thresholds, especially the time interval, can be changed for different events; here we only present the typical numbers. The thresholds typically rule out ∼40% of the images. The remaining images are then sorted by FWHM, and the best ∼40 are selected as the initial reference images.
Then, among these initial reference images, we start an iterative process. First, the 20 best-FWHM images are convolved to match the initial reference image and then averaged into an initial master reference image, which is then subtracted from the best 40 images. After the subtraction, the 20 images with the best σ_sub values are selected as the reference images for the next iteration. Usually, after 3-5 iterations, the process converges to 20 images, which are taken as the final reference images. They are stacked into the final master reference image, R. Note that after this process, the master reference image is not necessarily constructed from the best-seeing or lowest-ellipticity images, but the algorithm is robust at rejecting bad images from the construction of the master reference image. With this new reference image selection algorithm, it becomes possible to automate this step without human interaction.
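The structure of the iteration can be sketched as follows (a simplified skeleton: the initial FWHM sorting, the image stacking, and the subtraction are abstracted into a caller-supplied `subtract` scorer that returns the σ_sub of an image against a stacked reference; all names are illustrative):

```python
def select_reference_images(candidates, subtract, n_stack=20, max_iter=10):
    """Iteratively choose reference images: score every candidate
    against a stack built from the current best set, keep the n_stack
    best-scoring ones, and stop once the membership is unchanged.
    `candidates` maps image id -> image data; `subtract(stack_ids, i)`
    returns the sigma_sub of image i against the stacked reference."""
    ids = sorted(candidates)              # stand-in for FWHM ordering
    best = ids[:n_stack]
    for _ in range(max_iter):
        scores = {i: subtract(best, i) for i in ids}
        new_best = sorted(ids, key=lambda i: scores[i])[:n_stack]
        if set(new_best) == set(best):
            break                         # converged
        best = new_best
    return best
```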

Image Subtraction
The image subtraction algorithm follows Albrow et al. (2009) and is not updated. We only modify the script to allow for adding extra masks and to account for images with a FWHM smaller than that of the master reference image PSF.
The kernel can only describe the difference between the reference PSF and target PSF if all of the stars on the images are constant.
Therefore, we need to mask the non-constant stars when calculating the kernel. (After the kernel is solved, all pixels are convolved and subtracted.) However, the default pySIS mask only contains saturated pixels and zero-value pixels; it does not detect and mask variable stars or bad pixels. Therefore, bad pixels and variable stars can produce deviations in the kernel K and consequently in the difference image D; see Figure 3 for examples. We have thus updated the script to allow a global mask for all images and individual masks for each image.
Global masks are usually used for masking variable stars. The fluxes of these stars vary considerably from image to image, and if they are not masked, the kernel, and thus the subtraction, will be unreliable. Because the variable stars are at the same pixel coordinates after the image alignment, a global mask for all images can handle them. On the other hand, individual masks are usually used for bad CCD columns and other CCD defects. Those pixels are not at fixed coordinates on the aligned images but have the same position on the CCD, and therefore they need to be masked individually in each image.
The variable stars can be automatically detected during the iteration in Section 2.4 by averaging the absolute values of the difference images in the reference set. For constant stars, the averages are dominated by the Poisson error and are thus small, but for variable stars, the variable fluxes from different phases add up. After the averaging, any pixels above 1000 counts (a default value that is valid in most cases in practice and can be changed manually) are masked. Although it is possible that some variable stars occasionally have similar fluxes over the reference set, this procedure can find most of them automatically. Global masks of these variable stars are then applied to all the subsequent subtractions.
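The averaging-and-threshold step can be sketched as follows (illustrative; the 1000-count default follows the text):

```python
import numpy as np

def variable_star_mask(diff_images, threshold=1000.0):
    """Flag variable stars: average the absolute difference images of
    the reference set; constant stars average down to the noise level,
    while variable stars accumulate flux from their different phases.
    Pixels whose mean |D| exceeds the threshold are masked globally."""
    mean_abs = np.mean(np.abs(np.asarray(diff_images, dtype=float)), axis=0)
    return mean_abs > threshold
```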
In addition to the masks, although we have optimized the reference image selection, unavoidably there remain some images that cannot be well subtracted: the images with a smaller FWHM (along both axes or only the minor axis of the PSF) than the reference image. For these images, we first convolve (blur) them into an intermediate image T′ with a normalized Gaussian kernel K_g,

T′ = K_g ⊗ T,

so that the optimization of the kernel K becomes the match between the convolved reference K ⊗ R and T′ rather than the original target T. The Gaussian kernel width is determined by

σ_g = √( FWHM_ref,major² − FWHM_minor² ) / ( 2√(2 ln 2) ),

where FWHM_minor is the minor-axis FWHM of the original target image T, and FWHM_ref,major is the major-axis FWHM of the reference image R. The constant 2√(2 ln 2) ≈ 2.355 comes from the FWHM of a standard Gaussian function.
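The kernel-width computation follows directly from the FWHM relation above (the quadrature combination of the two FWHM values is our assumption, consistent with convolving Gaussians; names are illustrative):

```python
import math

def preblur_sigma(fwhm_target_minor, fwhm_ref_major):
    """Width of the Gaussian kernel used to pre-blur a target image
    that is sharper than the master reference, so that the blurred
    target FWHM roughly matches the reference (quadrature combination).
    FWHM = 2*sqrt(2 ln 2) * sigma ~= 2.355 * sigma."""
    if fwhm_target_minor >= fwhm_ref_major:
        return 0.0                        # no pre-blur needed
    fwhm_kernel = math.sqrt(fwhm_ref_major ** 2 - fwhm_target_minor ** 2)
    return fwhm_kernel / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```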

Source Position Refinement
The accuracy of the source position can directly affect the quality of the photometry (for example, see Fig. 1 in Albrow et al. 2009). We use the algorithm of Albrow et al. (2009) to calculate the source position and its error for all individual (subtracted) images, but we update the algorithm for determining the source position from the multiple measurements.
We denote the measured source position on the i-th image as r_i = (x_i, y_i), and its error as (σ_x,i, σ_y,i). The weight of the i-th measurement is then

w_i = 1 / ( σ_x,i² + σ_y,i² ).

We first exclude σ_sub > 2.0 images; these badly subtracted images sometimes provide nominally precise but incorrect position measurements. For the remaining images, we start an iteration. In each iteration, we compute the weighted centroid r̄ = (x̄, ȳ), where

x̄ = Σ_i w_i x_i / Σ_i w_i ,  ȳ = Σ_i w_i y_i / Σ_i w_i ,

and its covariance C. Then all points more than 3σ from the centroid are excluded, and the remaining images are used in the next iteration.
The iteration continues until all remaining images are within 3σ.
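A minimal sketch of the clipped, weighted-centroid iteration (simplified: each axis is treated independently and the covariance C is omitted; names are illustrative):

```python
import numpy as np

def refine_position(xy, err, clip=3.0, max_iter=20):
    """Inverse-variance weighted centroid of per-image position
    measurements with iterative 3-sigma clipping. `xy` and `err`
    are (N, 2) arrays of positions and per-axis 1-sigma errors."""
    xy, err = np.asarray(xy, float), np.asarray(err, float)
    keep = np.ones(len(xy), dtype=bool)
    for _ in range(max_iter):
        w = 1.0 / err[keep] ** 2
        center = (xy[keep] * w).sum(axis=0) / w.sum(axis=0)
        # distance from the centroid in units of each point's error
        nsig = np.abs(xy - center) / err
        new_keep = keep & (nsig < clip).all(axis=1)
        if np.array_equal(new_keep, keep):
            break                         # no further exclusions
        keep = new_keep
    return center, keep
```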

Photometry Flux Extraction
In pySIS, the reference PSF is convolved to each target image and interpolated to the refined source position. For each original image, we now have the subtracted difference image D and the normalized pixelated PSF model P. The flux is then the slope of the linear fit between D and P.
For KMTNet photometry, we introduce an additional parameter, b, such that

D_i = f · P_i + b,

where f is the source flux and b is a free parameter that describes the "background" of the difference image. In most cases, the background of D is close to zero, and b is consistent with zero. In a few cases, mainly for large-FWHM difference images, the backgrounds or local backgrounds around the source significantly deviate from zero, and b can reproduce the deviations.
In addition, pySIS does not use all pixels of D and P in the linear fit. Pixels far from the center do not contain any information about the star, only noise, so we use only pixels in a circle centered at the source position. The default pySIS used a fixed radius of 6 pixels for this circle. After tests with KMTNet images, in our new TLC pipeline we adopt a FWHM-related radius with minimum and maximum values of (6, 20) pixels. We denote this region as Ω_phot.
For the linear fit with the addition of the b parameter, we minimize

χ² = Σ_{i ∈ Ω_phot} ( D_i − f P_i − b )² / σ_i²,

where i represents the i-th pixel in the region Ω_phot and σ_i² is the noise of pixel i,

σ_i² = T_i + σ_RON² + σ_other²,

where T is the original target image and T_i is the corresponding i-th pixel value (i.e., the Poisson variance of the pixel value), σ_RON² is the read-out noise, and σ_other² accounts for other, unrecognized noise and is initialized at zero. The Poisson noise of the reference image is ignored since it is negligible compared to T_i.
So, in our case, the result of the linear fit can be written analytically (for more on linear fitting, see also Gould 2003):

f = ( ⟨DP⟩ − ⟨D⟩⟨P⟩ ) / ( ⟨P²⟩ − ⟨P⟩² ),  b = ⟨D⟩ − f ⟨P⟩,

where we denote the inverse-variance weighted average of a quantity q as

⟨q⟩ = ( Σ_i q_i / σ_i² ) / ( Σ_i 1 / σ_i² ).

The variances of the parameters are

σ_f² = ( Σ_i 1/σ_i² )⁻¹ / ( ⟨P²⟩ − ⟨P⟩² ),  σ_b² = σ_f² ⟨P²⟩.

This is a change from the default pySIS, which fits without the b term, i.e., f = ⟨DP⟩/⟨P²⟩ with σ_f² = ( Σ_i P_i²/σ_i² )⁻¹. To eliminate the impact of cosmic rays, bad pixels, and other noise sources, pySIS excludes pixels with |D_i − f P_i − b| / σ_i > 2.5 from the fitting (with a slight modification to include our b parameter).
Considering that there might be unrecognized extra noise sources, we calculate the σ_other² term to make the reduced chi-square equal to unity,

χ² / ( N_phot − 2 ) = 1,

where N_phot is the number of pixels used in the linear fit, and thus N_phot − 2 is the number of degrees of freedom of the linear fit. We require σ_other² ≥ 0. We then iterate the whole fitting process until σ_other² and the set of valid pixels converge.
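Putting the flux-extraction pieces together, a simplified fit with the b parameter, the noise model, pixel clipping, and σ_other² inflation might look like this (a sketch under the stated noise model, not the pipeline implementation; the additive inflation step size is an illustrative choice):

```python
import numpy as np

def fit_flux(D, P, T, ron2=25.0, clip=2.5, max_iter=20):
    """Weighted linear fit D_i = f*P_i + b over the photometry region,
    with per-pixel noise sigma_i^2 = T_i + ron2 + sig_other2. The
    extra-noise term is inflated until the reduced chi-square drops
    to <= 1, and pixels deviating by more than `clip` sigma are
    excluded; both loops iterate until convergence."""
    D, P, T = (np.asarray(a, float).ravel() for a in (D, P, T))
    keep = np.ones(D.size, dtype=bool)
    sig_other2 = 0.0
    f = b = 0.0
    for _ in range(max_iter):
        var = T[keep] + ron2 + sig_other2
        w = 1.0 / var
        A = np.vstack([P[keep], np.ones(keep.sum())]).T
        ATA = A.T @ (A * w[:, None])      # weighted normal equations
        ATd = A.T @ (w * D[keep])
        f, b = np.linalg.solve(ATA, ATd)
        resid = D - f * P - b
        dof = keep.sum() - 2
        chi2 = float(np.sum(resid[keep] ** 2 / var))
        if chi2 / dof > 1.0:              # inflate the extra noise term
            sig_other2 += (chi2 / dof - 1.0) * float(var.mean())
            continue
        new_keep = np.abs(resid) / np.sqrt(T + ron2 + sig_other2) < clip
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return f, b, sig_other2
```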

Summary and Application
The updates described in Sections 2.1-2.7 fall into four major categories. First, we now compute the FWHM, ellipticity, and sky background using DoPhot, and we have added additional metrics (S_Irr, σ_sub, σ_res) to assess the image quality and the quality of the image subtraction and photometry. Second, with those metrics, we can automate the reference image selection, and we have also automated the process for masking pixels and identifying the source position. Third, we have found that for KMTNet data, using N_star = 50 works better for computing the PSF. Finally, we have made a few modifications to the pySIS algorithm: we find that it is better to let the background of the difference image, b, be a free parameter and to use a radius proportional to the FWHM when extracting the flux; we also convolve small-FWHM images to match the master reference image and change the calculation of the flux uncertainty.
In summary, most of these modifications are aimed at allowing the photometry extraction to be automated and at removing the human-dependent factors from the photometry. However, the changes in the last category can significantly reduce seeing correlations in the data, which can result in improved photometry over the original pySIS for some subsets of datasets. A more detailed discussion of the accuracy and efficiency of the updates can be found in Section 7.
The new TLC pipeline has been applied to more than 100 events, including > 50 for the final analysis of known anomalous events and ∼ 50 for the systematic search. Among them, 21 have been published. The published events are listed in Table 1.
In this paper, we report a newly discovered candidate planetary signal in the microlensing event MOA-2019-BLG-421. The light curve, together with the point-source point-lens (PSPL; Paczyński 1986) model, is shown in Figure 4. The left panels show the original online pySIS light curve, and the right panels show the re-reduced light curve produced with the updated algorithms. From the comparison, it is clear that the data from the new TLC pipeline have significantly lower scatter than the online pipeline data: the light curve becomes smoother. Moreover, the new TLC pipeline can flag problematic data points by σ_sub > 2.5 or σ_res > 2.0; these are the gray "x" points in the right panels.
With the new data, we find a subtle asymmetric signal relative to the peak of the light curve. The most obvious anomalous regions, 1 and 4, are marked in the lowest panels of Figure 4. In the new light curve (lower right panel), the data in region 1 are significantly above the PSPL model and the data in region 4 are below it. The online pySIS light curve also indicates anomalies over regions 1 and 4. However, those signals are at the same level as, or even weaker than, other features that are in fact due to systematic errors, e.g., regions 2 and 3, even if the scattered KMTA03 data are removed. This is why the AnomalyFinder algorithms (Zang et al. 2021b, 2022) identified it as a noisy event and failed to find the real signal in the online data. The analysis of this signal is presented in Section 4.
The images from the MOA survey were mainly taken in the MOA-red wide band, which is approximately the sum of the standard Cousins R and I bands, and a fraction of the images were taken in the V band. The majority of the images from the KMTNet survey were taken in the I band, and about 9% were taken in the V band for color measurements.
The data used in the light-curve analysis were reduced using various difference image analysis pipelines. The MOA data were reduced with the method of Bond et al. (2001). The KMTNet I-band data were first reduced by the original KMT pySIS pipeline to produce the preliminary, online photometry (Albrow et al. 2009), and then re-reduced by the new TLC pipeline described in Section 2. In addition, we conduct pyDIA photometry to measure the source color from the KMTC03 I- and V-band images. The pySIS flux is then calibrated to the pyDIA flux.
The anomaly of the event is well characterized by the KMTNet TLC data, and it was found using only the KMTNet data. The MOA data alone could not independently reveal the anomaly because of their sparse coverage; however, they support the discovery from KMTNet.

LIGHT-CURVE ANALYSIS OF MOA-2019-BLG-421
The light curve shown in Figure 4 was first fitted by a standard single-lens single-source (1L1S) model. The 1L1S model consists of at least four parameters (t_0, u_0, t_E, ρ), where t_0 is the time when the lens and the source are closest, u_0 is the impact parameter in units of the angular Einstein radius θ_E of the total lens mass, t_E is the Einstein radius crossing time or microlensing timescale, and ρ is the radius of the source star in units of θ_E. In addition, two flux parameters (f_S,i, f_B,i) are needed for each data set i, representing the flux of the source and the blend. The fitting parameters and their uncertainties for the 1L1S model are shown in Table 2. As might be anticipated, there is only an upper limit on ρ. From the lower right panel of Figure 4, one can discern a weak residual from the standard 1L1S model. The left wing of the peak (HJD ∼ 8741-8742) is slightly brighter than the 1L1S model, and the right wing (HJD ∼ 8744-8748) is slightly fainter than the model. The signals are subtle; however, due to the relatively long duration of the anomaly and the good coverage from all three KMTNet observatories, those data points contribute Δχ² ∼ 180, which is significant enough for a reliable detection.
Such a signal can potentially be reproduced by many models apart from the standard 1L1S model. Below we separately discuss the possible models, including 1L1S with higher-order effects, binary-source (1L2S) models, and binary-lens (2L1S) models.

1L1S with Microlensing Parallax
The microlensing parallax effect caused by the orbital motion of Earth (Gould 1992, 2000, 2004) can create an asymmetry in the light curve. The microlens parallax vector is

π_E = ( π_rel / θ_E ) ( μ_rel / μ_rel ),  π_rel = AU ( 1/D_L − 1/D_S ),

where (π_rel, μ_rel) are the lens-source relative parallax and proper motion, and (D_L, D_S) are the distances from Earth to the lens and the source, respectively.
We add two parallax parameters, π_E,E and π_E,N (the east and north components of π_E), to the model. The ecliptic degeneracy (Smith et al. 2003; Jiang et al. 2004; Skowron et al. 2011) is considered by fitting the (u_0 > 0, u_0 < 0) scenarios separately. The results are shown in Table 2. The χ² is improved by 73.5 for both the u_0 > 0 and u_0 < 0 cases.
However, the measurement of π_E for such a short-timescale (t_E ∼ 15 d) event is uncommon, so we investigate the solutions carefully. We find the parallax signal is not self-consistent as a function of time, i.e., the signals before and after the peak are in conflict. Taking u_0 > 0 as an example, the pre-peak fit is better by Δχ² = χ²_parallax − χ²_static ∼ −105, but the post-peak fit is worse by Δχ² ∼ 35. Moreover, from Section 4.4, we note that even the best 1L1S + parallax model is disfavored by Δχ² > 70 relative to the xallarap solution and all the 2L1S solutions described below. Therefore, we exclude the 1L1S + parallax scenario.

Static 1L2S
A second source that is fainter or has a larger impact parameter could also produce an asymmetric feature near the peak. To model the standard 1L2S light curve, three 1L1S parameters for the second source plus one flux-ratio parameter in each band are needed (Hwang et al. 2013). We use (t_0,1, u_0,1, ρ_1) as the 1L1S parameters of the primary source, and (t_0,2, u_0,2, ρ_2) as the impact time, impact parameter, and size of the second source. The two sources share the same timescale t_E. For each observational band i, we use q_F,i as the flux ratio between the second source and the primary source.
The results are shown in Table 2, and the model and its residuals are shown in Figure 5. The 1L2S model mainly improves the fit before the peak, but still leaves residuals over the peak (HJD′ ∼ 8743.1−8743.8) and after the peak (HJD′ ∼ 8744.8−8748.5). From Section 4.4, we find the 1L2S model is disfavored by Δχ² ≳ 50 with respect to the 2L1S solutions. Therefore, the 1L2S model cannot describe the data well, and we exclude this scenario. We also checked for parallax effects; however, for reasons similar to those in Section 4.1, the signal is inconsistent over time, and we exclude the 1L2S + parallax scenario as well.

1L1S with Microlensing Xallarap
The microlensing "xallarap" effect is caused by the orbital motion of the source star (Griest & Hu 1992; Han & Gould 1997), which we later find to be consistent with the data. Such motion accelerates the primary source and thus can produce an asymmetric peak in the light curve.
We consider a circular orbit for the source. The xallarap model introduces five more parameters (Miyazaki et al. 2021): the orbital period P, the orbital inclination i, the orbital phase λ, and the two components of the xallarap vector ξ⃗_E. The phase λ is the phase of the source at t = t_0, with λ = 0 defined as the source being at the ascending/descending node. The amplitude ξ_E is the semi-major axis of the source orbit normalized by θ̂_E, the angular Einstein radius projected onto the source plane. The direction of ξ⃗_E is defined by the angle between the lens trajectory and the orbital ascending/descending node, in the range [0, π).
For events with u_0 ≪ 1, like MOA-2019-BLG-421, the xallarap model has a pair of degenerate solutions, denoted below by subscripts "1" and "2". This degeneracy arises from the geometric symmetry of the trajectories as u_0 → 0. We start by searching for the first solution and then use Eq. 21 to find its degenerate counterpart.
We search for local χ² minima in the xallarap period space. We select a series of P values from 4 to 500 days and hold each fixed while optimizing; all the other parameters, including the PSPL and xallarap parameters, are left free. For each P, we search for both degenerate solutions. The search results are shown in Figure 6. The lower panel of Figure 6 shows that χ² as a function of P is smooth, and the only local minimum within Δχ² < 3 of the best fit is at P = 12.6 d. This results in only one pair of solutions.
Table 3 shows the final optimized parameters of the xallarap models. We denote the two degenerate solutions "+" and "−" by the sign of ξ_E,N. The xallarap models describe the light curve significantly better than the static 1L1S, 1L1S + parallax, and static 1L2S models, by Δχ² ∼ (193, 119, 88), respectively. The light curve and model residuals can be found in Figure 5. However, we notice that Δχ² ∼ 20 is contributed by data taken around full-moon nights (HJD′ ∼ 8734−8737), which could be caused by systematic errors. Therefore, we also report χ²_peak for the high signal-to-noise t_0 ± 4.5 d peak region in Table 3; this will be used to compare with the 2L1S models below.

Now we check whether the system is physically reasonable. Because the parameters related to the physical properties (P, ξ_E) are consistent within 1σ for the two degenerate solutions, we take the "−" solution as an example. The results show that the source is in a binary system with a period of ∼14 d. Assuming that the mass ratio between the companion and the source is q_S, we can relate the total mass of the system to the observables using Kepler's third law, where the relative semi-major axis is a = a_S(1 + q_S)/q_S. We define a dimensionless parameter by combining Eqs. 22 and 23. For a typical microlensing source with D_S = 8.3 kpc and M_S = 1 M_⊙, plugging the model parameters of the "−" solution into Eq. 24 gives ≈ 0.60 (μ_rel/6 mas yr⁻¹)³ (Eq. 25). By assuming the mass ratio q_S = 0.5, we estimate μ_rel = 2.7 mas/yr, which is a common value for Galactic bulge stars. Therefore, the system is reasonable given Galactic dynamics.
In addition, we did not consider the flux contributed by the companion in the modeling. We now check whether this assumption is self-consistent. The model and the assumed mass ratio give a_tot/D_S = 0.18 θ_E. Assuming the source and the companion are both normal main-sequence stars, the companion should be both fainter (F_C/F_S ∼ q_S⁴ ∼ 0.06) and farther from the magnification center ((a_tot/D_S)/θ_E ∼ 7 u_0). As a result, the companion contributes < 1% of the magnified flux at the peak. Thus, neglecting the companion flux is self-consistent.
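As a one-line numerical restatement of this check (assuming the rough main-sequence luminosity scaling L ∝ M⁴ and the q_S = 0.5 mass ratio adopted above):

```python
# Flux ratio of the companion to the source under the assumed main-sequence
# luminosity scaling L ∝ M^4 (an approximation) and mass ratio q_S = 0.5.
q_s = 0.5
flux_ratio = q_s ** 4  # F_C/F_S ~ 0.06, i.e., the companion is ~6% as bright
```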
Therefore, we conclude that the xallarap solutions are physically reasonable.We keep these models as one of the final interpretations.

2L1S
Binary-lens microlensing (Mao & Paczyński 1991; Gould & Loeb 1992), because of its non-linearity, can produce diverse light curves, including asymmetric ones. In this event, the anomaly is centered at the peak, which indicates that it can be caused by a perturbation of the central caustic or cusp (e.g., Chung et al. 2005).
We use three parameters in addition to the 1L1S scenario to model the 2L1S light curves: (s, q, α), where s is the projected separation of the two lenses in units of θ_E, q is the mass ratio of the two lenses, and α is the angle between the source trajectory and the binary-lens axis in the lens plane.
Because the 2L1S parameter space is large and can be highly nonlinear, we start by searching for local minima throughout the full parameter space. Because the anomaly is relatively weak and smooth, we use a hot Markov chain Monte Carlo (MCMC), as implemented in emcee (Foreman-Mackey et al. 2013), to do the search. The initial parameters are as follows: (t_0, u_0, t_E, ρ) from the 1L1S fit with 3σ random scatter; log s is randomly drawn from −1 < log s < 1; α is randomly drawn from 0° ≤ α < 360°; and log q is drawn given the absence of strong central-caustic crossing or central-cusp perturbation features in the event (see Eqs. 11 and 26). We reduce the log-likelihood by a factor of 9 to heat the MCMC chain so that it can cover essentially the whole parameter space. We adopt 200 random walkers and run for 3000 steps after 10000 burn-in steps. After the local minima are returned from the hot chain, we perform a normal-temperature MCMC on each distinct local minimum to obtain the parameters and their uncertainties. Figure 7 shows the hot MCMC results in (log s, log q, α, log ρ) space together with the refined normal MCMC over each local minimum. We finally find 8 local minima in total, which are summarized in Table 4.
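To illustrate the heating trick (not the actual 2L1S code), here is a minimal tempered random-walk Metropolis sampler on a toy bimodal χ² surface; the surface, step size, and number of steps are all hypothetical stand-ins:

```python
import numpy as np

def chi2(theta):
    # Toy bimodal chi^2 surface standing in for the 2L1S parameter space:
    # two local minima separated in the first coordinate.
    x, y = theta
    return 50.0 * min((x - 1.0) ** 2 + y ** 2, (x + 1.0) ** 2 + y ** 2)

def tempered_mh(chi2_fn, theta0, n_steps, temperature=9.0, step=0.3, seed=0):
    """Random-walk Metropolis with the log-likelihood divided by `temperature`.

    Heating flattens the posterior so a single chain can hop between local
    minima, mirroring the "hot MCMC" search stage described in the text.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logl = -0.5 * chi2_fn(theta) / temperature
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + rng.normal(scale=step, size=theta.size)
        logl_prop = -0.5 * chi2_fn(proposal) / temperature
        if np.log(rng.random()) < logl_prop - logl:  # Metropolis acceptance
            theta, logl = proposal, logl_prop
        chain[i] = theta
    return chain

chain = tempered_mh(chi2, [1.0, 0.0], n_steps=20000)
# At temperature 9 the chain should visit both minima (x ~ +1 and x ~ -1).
visited_right = bool((chain[:, 0] > 0.5).any())
visited_left = bool((chain[:, 0] < -0.5).any())
```

The same tempering idea applies to an emcee ensemble: dividing the log-likelihood by a constant before sampling, then refining each recovered minimum at normal temperature.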
Except for the two q ∼ 0.1 solutions that can be easily seen in Figure 7, we zoom in on the (log s, log q) plane in Figure 8 to show the cluster of the other local minima. The topology of the source trajectories and binary-lens caustics is shown in Figure 9. The solutions can be organized into several groups. The first group includes C1 and W1, whose central caustics and trajectories are almost identical. Their mass ratios are similar, and their separations are related approximately by s ↔ s⁻¹. This is the well-known "close-wide" (or the unified "inner-outer") degeneracy (Griest & Safizadeh 1998; Dominik 1999; An 2005). The source size ρ of both solutions is precisely measured because the trajectories cross the central caustic. The second group is C2, C3, and C4. They all have s < 1, but with slightly different s and α. However, their ρ measurements are quite different: C3 and C4 have non-zero ρ, while C2 is consistent with a point source. In addition, the α of C3 differs from those of C2 and C4, corresponding to the "upper-lower" or "inner-outer" degeneracy (Gaudi & Gould 1997). We tried changing C2's α to the corresponding "upper" case to search for a corresponding non-ρ model; however, after many MCMC iterations, the solution converged to C3. The next group is W2 and W3. They both have s > 1 separations and are consistent with point-source scenarios; they differ only in α. Both have almost horizontal trajectories but do not strongly interact with the planetary caustics, because there are no obvious planetary-caustic-crossing features in the light curve. The last group is C5. It has two local minima, C5 and C5b, corresponding to near-resonant and resonant caustics. The two local minima merge into each other because the χ² barrier between them is shallow. This solution also has a ρ measurement.
From Table 4, the best-fit 2L1S model is C3, and it is Δχ² ∼ 19 worse than the xallarap model. The other 2L1S models (C5, W2, C4, C2, W3, C1, W1) are disfavored by Δχ² = (0.2, 3.0, 10.7, 15.1, 16.9, 22.8, 24.8), respectively. However, we find that much of the χ² difference between these models comes from overfitting of low-SNR features in the light curve. In Figure 10, we plot and highlight all the 2L1S and xallarap model light curves together with the cumulative Δχ² to illustrate the overfitting. For example, Model W2 has a spike at HJD′ ∼ 8732, but there are no data points over the peak of the spike. The data points on the wings of the spike, which contribute most of the Δχ², were taken close to full moon and thus have low SNR. Similar arguments can be made for Models W3, C2, C3, and C4. The Δχ² of these models is mainly contributed by low-SNR data.
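The cumulative-Δχ² diagnostic used here can be sketched as follows; the data and the two model callables are toy stand-ins, not the actual light-curve models:

```python
import numpy as np

# Cumulative Delta chi^2 between two models over time; a steadily rising
# curve localizes where one model gains its advantage over the other.
def cumulative_delta_chi2(t, flux, err, model_a, model_b):
    chi2_a = ((flux - model_a(t)) / err) ** 2
    chi2_b = ((flux - model_b(t)) / err) ** 2
    return np.cumsum(chi2_a - chi2_b)

# Toy example: the data follow model_b exactly, so model_a loses steadily.
t = np.linspace(0.0, 10.0, 100)
flux = np.ones_like(t)
err = np.full_like(t, 0.01)
c = cumulative_delta_chi2(t, flux, err,
                          model_a=lambda t: 1.1 + 0.0 * t,
                          model_b=lambda t: 1.0 + 0.0 * t)
```

Plotting such a curve against time reveals whether a model's advantage accumulates in high-SNR regions or in suspect, low-SNR stretches of the light curve.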
To separate out the influence of the low-SNR data, as mentioned in Section 4.3, in the lowest panel of Figure 10 we plot the cumulative Δχ² in a time window of t_0 ± 4.5 d, corresponding to u < 0.3 or I < 18.7. This region is where the asymmetric feature occurs, and it contains most of the χ² improvement relative to the 1L1S models. The highly magnified source flux makes the photometry more accurate in this region. We list the total χ² of this region, χ²_peak, in Table 4. Looking at the peak region only, the Δχ² values between models are reduced, and the ordering changes as well. For the peak region, the best-fit model is now C5, and (C4, C3, C2, W2, W3, C1, W1) are disfavored by Δχ² = (0.5, 0.8, 4.5, 8.1, 14.0, 16.0, 20.2), respectively. The χ² differences between the 2L1S models and the xallarap models are reduced as well.
We also tested parallax effects in the 2L1S models, but neither significant improvements nor useful constraints were obtained. Therefore, we do not include parallax in the final 2L1S models.

Summary of Light-Curve Analysis
After this exploration, we exclude the 1L1S, 1L1S + parallax, and 1L2S scenarios for MOA-2019-BLG-421. The remaining models are the 1L1S with xallarap and the 2L1S models. For the 2L1S interpretation, there are many degenerate models that can explain the light curve almost equally well. However, as will be shown in Sections 5 and 6, the solutions with finite-source measurements are very unlikely to be correct.

SOURCE PROPERTIES OF MOA-2019-BLG-421
The source star's color can be used to measure its angular radius, θ_*. With the source radius, the angular Einstein radius θ_E and the relative proper motion μ_rel can be obtained, which are directly related to the physical parameters of the lens.
To measure the color of the source star, we first construct a color-magnitude diagram (CMD) from stars within a 2′ × 2′ square centered on the source position using KMTC03 images. The magnitudes and colors are calibrated to the OGLE-III catalog (Szymański et al. 2011). The CMD is shown in Figure 11.
Then, we place the microlensing source on the CMD. We determine the source I-band magnitude from the models (Tables 3 and 4) and the color (V − I)_S = 1.569 ± 0.073 from a linear regression of the V-band and I-band source fluxes during the event. The color and magnitude are also calibrated to OGLE-III.
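A minimal sketch of this regression-based color measurement, with simulated fluxes standing in for the real KMTNet photometry (the blend level and noise are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the color regression: during the event, the V- and I-band
# fluxes vary together as F_V = a * F_I + b, where the slope a is the source
# flux ratio, so the source color is (V - I)_S = -2.5 * log10(a) (+ zero point).
true_ratio = 10 ** (-0.4 * 1.569)           # slope for (V - I)_S = 1.569
F_I = rng.uniform(1.0, 10.0, size=200)      # simulated magnified I-band flux
F_V = true_ratio * F_I + 0.7 + rng.normal(scale=0.01, size=F_I.size)

a, b = np.polyfit(F_I, F_V, 1)              # linear regression
color = -2.5 * np.log10(a)                  # recovers ~1.569
```

Because the regression uses only the *varying* (magnified) flux, any constant blended light is absorbed into the intercept b and does not bias the color.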
Next, we measure the centroid of the red clump giants (following the method of Nataf et al. 2013) to be (V − I)_RC = 1.787 ± 0.016 and I_RC = 15.313 ± 0.054, and measure the offset of the source relative to this centroid (Yoo et al. 2004). By comparing the instrumental [(V − I), I]_RC with the intrinsic centroid of the red clump, [(V − I), I]_RC,0 = [1.06, 14.339], from Bensby et al. (2013) and Nataf et al. (2013), we can find the intrinsic, de-reddened color and magnitude of the source.
Finally, based on the de-reddened color and magnitude, we estimate the source angular radius θ_* using the relations of Adams et al. (2018). The de-reddened source colors and magnitudes, together with the derived (θ_*, θ_E, μ_rel) of all 2L1S solutions and the xallarap solutions, are listed in Table 5.
We immediately see that, for the solutions with finite-source measurements, the derived θ_E and μ_rel are unusual. For example, if we consider the case in which the source and the lens are both in the Galactic bulge, we would expect μ_rel > (0.93, 2.44) mas/yr at the (3σ, 2σ) levels, respectively. These limits are not very sensitive to the Galactic components, because the probability for a lens-source pair with very small μ_rel to create a microlensing event is small. Therefore, Solutions (C1, W1, C3, C4, C5) are unlikely to be real. However, if we consider the detection bias introduced by detected planetary events, the expected μ_rel distribution shifts systematically toward small values (Gould 2022), because longer planetary perturbations are easier to detect. The μ_rel limits then become > (0.32, 1.30) mas/yr for (3σ, 2σ). Solutions (C1, W1, C3, C4, C5) are still disfavored, but with less confidence.

LENS PROPERTIES OF MOA-2019-BLG-421
To uniquely obtain the physical parameters of the lens system, M_L and D_L, one needs at least two of θ_E, π_E, and the absolute brightness of the lens (see also, e.g., the Introduction of Zang et al. 2020). However, for MOA-2019-BLG-421, we only have θ_E or its lower limit. Therefore, we perform a Bayesian analysis to obtain the posterior distributions of the physical parameters of the lens. The prior of the Bayesian analysis is the Galactic model, including the stellar density profile, the mass function, and the stellar velocity distribution; we adopt "Model C" of Yang et al. (2021).
We generate 10⁸ microlensing events based on the Galactic model, that is, generating the source and lens distances from the line-of-sight stellar density profiles, the lens mass from the mass function, and the source and lens motions from the stellar velocity distribution. The underlying assumption is that the probability of a star hosting a planet is independent of its mass and Galactic environment. We weight each simulated event i by

ω_i = Γ_i × L_i(t_E) × L_i(θ_E),  with  Γ_i ∝ θ_E,i μ_rel,i,

where Γ_i is the microlensing event rate, and L_i(t_E) and L_i(θ_E) are the likelihood functions of the t_E and θ_E measured for a specific solution in Section 4. The sum of the weights is proportional to the total event rate Γ_tot for a specific solution. We use a Gaussian likelihood function for t_E, and a Gaussian likelihood for θ_E if it is measured, or a flat distribution with a hard cut at the 3σ lower limit if it is not. To account for the μ_rel detection bias proposed by Gould (2022), we also report the total event rate Γ_tot using the weight ω′_i = ω_i/μ_rel,i. The final results for the physical parameters of all models are shown in Table 6. For the xallarap models, the lens is an M dwarf located in the Galactic bulge. For the 2L1S models, if Model (C1, W1) is correct, the lens system is likely a brown dwarf orbited by a super-Jupiter, with a projected separation of ∼ (0.05, 1.06) AU. In the case of Model (C3, C4, C5), the lens system is a super-Earth or Neptune orbiting a low-mass M dwarf at a projected separation of ∼ (0.15, 0.30, 0.16) AU. For Model (C2, W2, W3), the lens system is an M dwarf or low-mass K dwarf with a sub-Jupiter-mass planet; the projected separations are ∼ (1.4, 2.6, 2.6) AU, respectively.
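The weighting scheme can be sketched with a toy Monte Carlo; the distributions below are hypothetical stand-ins for a Galactic-model simulation, and the "measured" t_E and its uncertainty are illustrative, not the values for this event:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical draws standing in for simulated events from a Galactic model.
n = 100_000
mu_rel = rng.lognormal(mean=np.log(6.0), sigma=0.5, size=n)   # mas/yr
theta_E = rng.lognormal(mean=np.log(0.3), sigma=0.6, size=n)  # mas
t_E = theta_E / mu_rel * 365.25                               # days

# Gaussian likelihood for the measured timescale (illustrative values).
t_E_obs, t_E_err = 15.0, 1.0
L_tE = np.exp(-0.5 * ((t_E - t_E_obs) / t_E_err) ** 2)

# Event-rate weight: Gamma_i ∝ theta_E,i * mu_rel,i, times the likelihood
# (no theta_E measurement in this toy, so its likelihood term is flat).
w = theta_E * mu_rel * L_tE
Gamma_tot = w.sum()

# Weighted posterior mean of theta_E for this "solution".
theta_E_post = (w * theta_E).sum() / Gamma_tot
```

Replacing `w` with `w / mu_rel` implements the Gould (2022) detection-bias variant described above.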
We list the relative event rates of all models in Table 6. It is much harder to produce events like Models (C1, W1, C3, C4, C5) in the Galaxy than events like Models (C2, W2, W3). Therefore, the former models are strongly disfavored under the Galactic prior, a result consistent with our preliminary discussion in Section 5. We convert the relative event rate into a χ² penalty using Δχ²_Gal+μrel = −2 ln Γ_tot, and then combine χ²_peak from Table 4 with Δχ²_Gal+μrel to obtain the results listed in Table 6. Finally, we conclude that Models C2 and W2 are preferred among all the 2L1S interpretations.
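In code, the conversion from relative event rates to a χ² penalty is a one-liner; the rates below are hypothetical placeholders, not the values in Table 6:

```python
import math

# Converting relative event rates into a chi^2-like penalty via
# Delta chi^2 = -2 ln(Gamma_tot / Gamma_tot,max).
gamma_rel = {"C2": 1.00, "W2": 0.85, "C1": 1e-4}  # hypothetical relative rates
best = max(gamma_rel.values())
delta_chi2 = {name: -2.0 * math.log(g / best) for name, g in gamma_rel.items()}
# A model produced 10^4 times less often picks up a ~18.4 chi^2 penalty.
```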
This preference can also be tested by future observations measuring the relative proper motion μ_rel and/or the lens light. The different μ_rel predictions of the different models are already shown in Tables 5 and 6. For the lens light, if the lens is a main-sequence star located in the Galactic bulge, its brightness would be (I_L, K_L) = (22.9^{+3.4}_{−2.4}, 20.7^{+2.0}_{−1.4}) for Models (C2, W2, W3), or (I_L, K_L) = (30.7^{+3.1}_{−3.5}, 25.5^{+1.8}_{−2.0}) for Models (C1, W1, C3, C4, C5). If the currently preferred models (C2, W2, W3) are correct, then assuming μ_rel ∼ 7 ± 3 mas/yr from Table 6, the lens and the source will be separated by Δθ ∼ 70 mas in 2030. Given the similar brightness of the lens and source (I_S ∼ 20.2, K_S ∼ 18.2), a measurement of the lens light would be achievable with the current largest telescopes (e.g., Keck, VLT). However, if the other models (C1, W1, C3, C4, C5) are correct, measuring μ_rel will be challenging even with next-generation telescopes, given the small μ_rel and large contrast ratios.
In addition, if the xallarap model is correct, we expect a large radial velocity for the source star (RV amplitude ∼ 30 km s⁻¹, assuming M_S = 1 M_⊙, q_S = 0.5, and 60° inclination). However, because of the source's faintness (I_S ∼ 20), a radial-velocity measurement would be very challenging. Therefore, in practice it will be hard to resolve the degeneracy between the best 2L1S models and the xallarap models.

IMPROVEMENTS
We updated the KMTNet TLC photometry procedures to increase automation and reduce the need for highly skilled operators, as well as making a few modifications to increase the photometric accuracy. With our new pipeline, there are a total of three different versions of pySIS for reducing KMTNet data: the preliminary pySIS pipeline (also called the online or end-of-year pipeline), the by-hand TLC procedure, and the new TLC pipeline from this work. Here we compare these versions and discuss the improvements.

Efficiency Improvements
For efficiency, here we compare the by-hand TLC and the new TLC pipeline, because we aim to reduce the time cost of TLC so that it can be used to search for new anomaly signals.
The new TLC pipeline is more parallelized, which makes it perform better on multi-core machines.The original pySIS only parallelized image subtractions because image subtraction is the most computationally expensive step.This step can be sped up by increasing the number of CPU cores, at which point the other steps that were not parallelized, such as alignment and photometry, become the bottlenecks.Therefore, we parallelized all steps to improve multi-core performance.
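As a sketch of the idea (not the pySIS implementation), independent per-image steps can be dispatched to a worker pool; `process_image` here is a hypothetical stand-in for one image's alignment or photometry:

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Dispatch independent per-image reduction steps across a worker pool.
def process_image(image_id: int) -> float:
    return math.sqrt(image_id)  # placeholder for real per-image work

image_ids = list(range(100))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_image, image_ids))
```

For CPU-bound steps such as image subtraction, a process pool (or a compiled, multi-threaded kernel) would be the appropriate choice; the structure is the same.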
The major improvement is that the new TLC pipeline is more automated. Without that automation, human operators had to wait for preprocessing to finish and then select reference images by eye. After that, the pipeline would align all the images and create the master reference image, and the operator was then required to enter the target position. For KMTNet data, this setup process took ∼1 hour; although it did not require active participation the entire time, it did require the operator to check its status frequently. The remainder of the pipeline would then run without operator input, a process that took ∼1 to 2 hours for KMTNet data on a single core. If the results were not ideal, the operator would have to start from the beginning.
The human components of the by-hand TLC procedure made the results highly operator-dependent.Because the original pySIS had limited quantitative parameters to describe the qualities of the reference images and other results, robust reference image selection required either extensive experience, external algorithms, luck, or a combination of all three.For some operators, the best approach was to do several iterations with different sets of reference images to test what combinations worked best.
The automations in the new TLC pipeline remove intermediate human interactions, significantly reducing the human workload in executing the reduction.In addition, because new metrics help automatically and robustly select reference images, in most cases, the photometry is good quality on the first try, so the reduction procedure usually does not need to be repeated multiple times.We also added functionality to the code to allow operators to re-start the process from any intermediate step if further adjustments are needed.
For the by-hand TLC reductions, the typical time cost for a prime-field event (∼ 6 × 3000 images) is 6-8 hours on a 50-core machine, of which 1-3 hr requires some level of human attention. Therefore, if we were to systematically run the by-hand TLC procedure for all prime-field events in one season (∼ 1000), a total of ∼ 300 days would be needed, including significant amounts of human review and potentially additional iterations. After the updates, the typical computational time cost for a prime-field event is reduced to ∼ 1 hr, plus 10-30 min for a final manual check. Therefore, a month-timescale operation can cover the re-reduction of ∼ 1000 events. These automations and additional parallelizations make systematic reanalysis possible.

Accuracy Improvements
We performed a series of tests to better understand which aspects of the new TLC pipeline most affect the photometry for MOA-2019-BLG-421. We reduced the data using the old by-hand TLC pySIS procedure without any special optimizations; e.g., the reference images were selected by eye from a list sorted by FWHM. Then, we individually fit the various datasets to a PSPL model using the raw error bars.
For these fits, we calculated the mean, median, and standard deviation of the χ² contribution from each data point as metrics for assessing the quality of each dataset. We also calculated the mean, median, and standard deviation of the absolute values of the residuals after scaling the fluxes to the same system; see Table 7. From the residuals, we can see that data from the new TLC pipeline have much less scatter than either the online or by-hand TLC reductions. For the χ² per point, we expect a value of 1 for Gaussian statistics. So, we see that the error-bar estimation of the by-hand TLC data is better than that of the online data, and that the new TLC pipeline may slightly over-estimate the error bars in this case.
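These per-point diagnostics can be sketched as follows; the residuals and error bars are simulated, not the actual values behind Table 7:

```python
import numpy as np

# Per-point chi^2 diagnostics: for residuals r_i with error bars sigma_i,
# chi2_i = (r_i / sigma_i)^2. A mean near 1 indicates well-estimated error
# bars; a mean well below 1 suggests over-estimated error bars.
def chi2_metrics(residuals, errors):
    chi2_per_point = (np.asarray(residuals) / np.asarray(errors)) ** 2
    return {
        "mean": float(np.mean(chi2_per_point)),
        "median": float(np.median(chi2_per_point)),
        "std": float(np.std(chi2_per_point)),
    }

rng = np.random.default_rng(4)
errors = np.full(500, 0.02)
residuals = rng.normal(scale=0.02, size=500)  # scatter matches the error bars
m = chi2_metrics(residuals, errors)           # mean should be close to 1
```

Note that for Gaussian residuals the median χ² per point sits well below the mean (the χ²₁ distribution is strongly skewed), which is why both statistics are reported.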
We also tried repeating the by-hand TLC procedure to test some of the changes in the new TLC pipeline. For example, we used the same reference images and lens position in the by-hand TLC reduction as in the new TLC pipeline. These changes did not have a significant effect on the quality of the photometry. Hence, we conclude that, in this case, the photometry improvement from the new TLC pipeline is due to some other optimization.
Of course, the pySIS algorithms were never intended to produce optimal photometry in all cases without any optimization, and our comparison to the un-optimized by-hand TLC reduction does not clearly distinguish between improvements due to better photometry algorithms and those due to better choices of default parameters (i.e., tuned for KMTNet datasets). So, it is still possible that an expert operator could produce photometry of similar quality to the new TLC pipeline; however, this requires deep, specialized knowledge of the underlying algorithms and photometry parameters. On the other hand, for this particular dataset, most of the significantly magnified points are concentrated in a few nights around the peak, which happen to have seeing in the 10th percentile, i.e., often better than that of the reference images. So, the improvements that reduce seeing correlations in the data may also be significant in this case. Regardless, this test demonstrates that the new TLC pipeline can produce much better photometry in this case without much effort on the part of the operator.

Comparison to Online Data
In addition, the key that powers the new anomaly search is the improvement from the online data to the new TLC data.
To quantify this improvement, we calculate the planetary sensitivity of MOA-2019-BLG-421 for the online data and the new TLC data separately, following the procedure of Rhie et al. (2000) as described in Yang et al. (2022). In short, we generate a series of artificial 2L1S light curves using the actual noise from the real light curves. For each artificial light curve, we measure its deviation from the 1L1S model, so the χ² difference between the 2L1S and 1L1S models represents the significance of the artificial signal. Figure 12 shows the Δχ² distribution on the (s, α) plane for an injected log q = −2.8 planet. Each point represents an artificial light curve generated with the given (s, q, α), and the color represents the Δχ². We define Δχ² > 100 as a detection. The rightmost panel of Figure 12 shows the "detection" region enhanced by the new TLC data.
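A toy stand-in for this injection-recovery test is sketched below; the bump shape, amplitude, and noise level are hypothetical, in place of real 2L1S models and KMTNet noise:

```python
import numpy as np

rng = np.random.default_rng(2)

# Inject an anomaly into a light curve with per-point noise, then compare
# the chi^2 of the with-anomaly and no-anomaly descriptions of the data.
t = np.linspace(-10.0, 10.0, 1000)
sigma = 0.01                                  # flux uncertainty per point

def anomaly(t, t0=0.0, width=0.5, amp=0.05):  # hypothetical bump shape
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

flux = 1.0 + anomaly(t) + rng.normal(scale=sigma, size=t.size)

chi2_with = np.sum(((flux - 1.0 - anomaly(t)) / sigma) ** 2)
chi2_without = np.sum(((flux - flux.mean()) / sigma) ** 2)
delta_chi2 = chi2_without - chi2_with

detected = delta_chi2 > 100                   # threshold used in the text
```

Repeating this over a grid of injected signal parameters and coloring each grid point by Δχ² produces sensitivity maps of the kind shown in Figure 12.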
We marginalize the Δχ² > 100 probability over α to obtain the sensitivity on the (s, q) plane. The results are shown in Figure 13. For the actual planetary signals detected in MOA-2019-BLG-421, the sensitivity rises from < 20% in the online data to > 80% with the new TLC photometry. Over the full (−0.3 ≤ log s ≤ 0.3, −4.0 ≤ log q ≤ −2.3) region of parameter space, the sensitivity improves from ∼ 24% to ∼ 53%.
In conclusion, the new TLC data can significantly improve the detection sensitivity to planets (or planet-like anomalies). In the specific case of MOA-2019-BLG-421, the new TLC data increase the Δχ² of the anomaly above the detection threshold, which is why the new anomaly could be found.
However, the new TLC data would not significantly change our knowledge of the planets that have already been published. In order for a planet to be published in the first place, its signal needs to be clear, so any improvements would simply tend to improve the clarity of known signals. Nevertheless, the accuracy improvements of the new TLC pipeline relative to the online data do allow us to find previously undiscovered planets or to clarify previously unpublishable signals.

DISCUSSION
In this work, we updated the KMTNet TLC photometry algorithms to improve their automation and photometric accuracy. By applying the new TLC pipeline to historic events in the KMTNet database, we found a new anomaly signal in MOA-2019-BLG-421 that was buried in the noise of the preliminary data. The signal can be explained by either the orbital motion of the source star or a planet in the lens system. In the planetary interpretation, the lens system is most likely a Jovian planet orbiting an M dwarf in the Galactic bulge, a typical microlensing planetary system. The discovery shows that there are indeed signals missed by the current planet-search procedures, and that the updated photometric data can increase the sensitivity of the KMTNet survey.
Apart from the accuracy improvements, the new TLC pipeline automates the human interactions in the middle of the TLC reduction procedure, both decreasing the human workload and making the results more robust. Together with some minor improvements in parallelization, the typical time cost for reducing a KMTNet ∼15,000-image prime-field event is reduced to ∼1 hr on a 64-CPU-core machine, and the remaining human work (checking the final results) is reduced to the order of minutes. The overall time cost is now ∼10% of that of the previous, by-hand TLC procedure. With such improvements in efficiency and automation, a systematic search of hundreds of events becomes possible.
Although the optimized data can potentially enable better statistical results, a statistical sample must be defined carefully. Currently, although the new TLC pipeline removes much of the human effort, a human reviewer is still needed to verify the final results and handle special cases, so applying it to all KMTNet events of the past 7 seasons (∼ 18,000 events in total) is difficult. There are two feasible options. The first is to run the pipeline without any human review and exclude bad data through the quality indicators. The second is to define a sub-sample on the order of hundreds of events and apply the full pipeline, including human review. As discussed in Yee et al. (2021) and Zang et al. (2021a), one approach is to compile a sample of high-magnification events, because they are intrinsically more sensitive to planets; a small number of such events could contribute a large fraction of the total survey sensitivity. Another approach is to compile a sample with a specific type of source star, e.g., giant sources. These sources are also intrinsically more sensitive to planetary perturbations than average, and their higher luminosity could significantly increase the photometric accuracy, especially for data reduced with the new TLC pipeline. We will implement these approaches in the near future for this systematic reanalysis project.
(a) Preprocess and select an astrometric reference image. (b) Align images. (c) Select subtraction reference images and create a master reference image. (d) Subtract target images from the master reference image. (e) Refine the microlensing source position. (f) Estimate the point spread function (PSF). (g) Extract the flux from the subtracted images.

Figure 1.
Figure 1. PSF images of an example image, extracted using different numbers of combined stars, N_star = (5, 50, 500). The PSFs are normalized and presented in log scale. The (blue, white, yellow, magenta) lines represent the (−0.001, 0, 0.001, 0.01, 0.1) contours. The dashed black line is the SNR = 5 contour. When N_star is too small (left), the background pixels are noisy because of small-number statistics. When N_star is too large (right), the SNR of the central pixels is lower because they are dominated by faint stars. We find that for 2′ × 2′ image stamps of the KMTNet bulge field, N_star = 30-200 is best (middle).

Figure 2.
Figure 2. (a) Schematic of the Irr definition. The number in each pixel represents its index k, where pixels with identical k have the same distance to the central k = 0 pixel. The colors are arbitrary, but indicate pixels with the same k. The final irregularity, as defined in Eq. 3, is Irr = STD(P_{k=0}) + STD(P_{k=1}) + STD(P_{k=2}) + ⋯. (b) and (c): Examples of irregular and symmetric PSF images, respectively. To illustrate, the k = 4 pixels are highlighted with green boxes. The values of the k = 4 pixels are essentially identical in (c) but significantly different in (b). The final Irr values are labeled at the top of each panel.

Figure 3.
Figure 3. Example images showing the influence of variable stars (upper panels) and bad columns (lower panels), with and without masking. The left, middle, and right columns show the original images, difference images without masking, and difference images with the extra mask, respectively. The difference images show the subtraction residuals (divided by the Poisson noise). In all panels, the cross marks the location of the microlensing event. The dashed circle in the top panels indicates the variable star. The region between the vertical black lines in the lower panels is the location of the CCD bad columns, and the dashed regions are the bad columns on the reference image.
NOTE. The median values and ±1σ ranges of the posterior distributions of the parameters are presented. D_LS = D_S − D_L. The relative Γ_tot is the event rate corresponding to each model. The relative Δχ² introduced by the Galactic model and the μ_rel bias is defined as Δχ²_Gal+μrel = −2 ln Γ_tot. The χ²_peak is adopted from Tables 3 and 4. The final preferred models are highlighted in boldface.

Figures 4, 5, and 6.
Figure 4. Light curves of MOA-2019-BLG-421. (left) The preliminary data and the best PSPL fit to them. (right) The TLC data from the updated algorithm and the best PSPL fit to them. The error bars are the native error bars from the photometry pipeline. The gray "x" points in the left panels are excluded by the AnomalyFinder algorithm (Zang et al. 2021b). The gray "x" points in the right panels are the bad data automatically recognized by the updated algorithm (sub > 2.5 or res > 2.0). The preliminary data have more scatter, and their errors are significantly underestimated. In the lower panels, the shaded regions

Figure 7. The hot MCMC results in the (log s, log q, α, log ρ) space; the colors are coded by Δχ². The refined normal MCMCs over each local minimum are overlaid on the hot chains. The two distinct solutions, C1 and W1, are marked. A zoom-in plot of the small local minima can be seen in Figure 8.

Figure 11. Color-magnitude diagram (CMD) for the 2′ × 2′ square field centered on MOA-2019-BLG-421. The black points are the field stars measured from KMTNet images, calibrated to the OGLE-III color and magnitude (Szymański et al. 2011). The green points are from the CMD obtained by Holtzman et al. (1998) from HST observations of Baade's Window, which we have aligned to the KMTNet CMD using the centroid of the red clump. The positions of the red clump centroid (RC) and the microlens source for the different interpretations are marked on the figure.

Figure 12. Planetary sensitivity of event MOA-2019-BLG-421 for a log q = −2.8 planet on the (log s, α) plane. In the left two panels, each point represents an artificial light curve generated by the given (s, q, α), and the color represents the significance of the injected signal, i.e., Δχ² = χ²_1L1S − χ²_2L1S. The rightmost panel shows the detection (Δχ² > 100) rate enhanced by the new TLC data. The two crosses mark the two preferred 2L1S solutions (C2, W2).
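The injection-recovery logic behind this sensitivity map can be illustrated with a toy light curve: inject a short anomaly on top of a point-lens magnification curve and compare the χ² of the anomaly-free model against the model including the anomaly, declaring a detection when Δχ² > 100. Everything below (the toy impact-parameter trajectory, the Gaussian-bump "anomaly", and the noise level) is a schematic stand-in for the real 2L1S light-curve injections, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(flux, model, err):
    """Standard chi-square of a model against data."""
    return np.sum(((flux - model) / err) ** 2)

# Toy point-lens ("1L1S") magnification curve.
t = np.linspace(-10.0, 10.0, 2001)
u = np.hypot(t / 5.0, 0.1)                         # impact parameter, t_E = 5 d, u_0 = 0.1
a_1l1s = (u**2 + 2) / (u * np.sqrt(u**2 + 4))      # point-lens magnification A(u)

# Injected short-lived bump standing in for a planetary perturbation.
anomaly = 0.3 * np.exp(-0.5 * ((t - 2.0) / 0.2) ** 2)
err = np.full_like(t, 0.05)
flux = a_1l1s + anomaly + rng.normal(0.0, 0.05, t.size)

# Δχ² between the anomaly-free model and the model including the bump.
delta_chi2 = chi2(flux, a_1l1s, err) - chi2(flux, a_1l1s + anomaly, err)
detected = delta_chi2 > 100          # detection criterion used in Figure 12
```

For each grid point of (s, q, α) the real analysis refits full 1L1S and 2L1S models to the injected light curve; the toy above only shows how the Δχ² > 100 threshold separates detectable from undetectable signals.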

Table 1. Published events using the updated photometry data

Table 2. Parameters of the static 1L1S, 1L1S + parallax, and 1L2S solutions for MOA-2019-BLG-421 NOTE. The parameters and their 1σ uncertainties are presented. For the parameters that are not detected, the 3σ upper limits are provided. No useful π_E is measured in these models.

NOTE. The parameters and their 1σ uncertainties are presented. For the parameters that are not detected, the 3σ upper limits are provided. The peak region is defined by t_0 ± 4.5 d. The final preferred models (in Section 6) are highlighted in boldface.

Table 5. Source properties for MOA-2019-BLG-421 NOTE. The parameters and their 1σ uncertainties are presented. For the models without finite-source effect detections, the 3σ lower limits of θ_E and μ_rel are provided. "XLRP" represents "xallarap". The final preferred models (in Section 6) are highlighted in boldface.

Table 6. Physical parameters of the lens (system) from the Bayesian analysis for MOA-2019-BLG-421

Table 7. Photometry Quality Metrics for KMTS03