Probing nuclear physics with supernova gravitational waves and machine learning

Core-collapse supernovae are sources of powerful gravitational waves (GWs). We assess the possibility of extracting information about the equation of state (EOS) of high-density matter from the GW signal. We use the bounce and early post-bounce signals of rapidly rotating supernovae. A large set of GW signals is generated using general relativistic hydrodynamics simulations for various EOS models. The uncertainty in the electron capture rate is parametrized by generating signals for six different models. To classify EOSs based on the GW data, we train a convolutional neural network (CNN) model. Even with the uncertainty in the electron capture rates, we find that the CNN models can classify the EOSs with an average accuracy of about 87 percent for a set of four distinct EOS models.


INTRODUCTION
Core-collapse supernovae (CCSNe) are the powerful explosions that take place at the end of the lives of massive stars. A fraction of the gravitational binding energy released in stellar core collapse is transferred to the ejection of the stellar envelope. Despite decades of effort, the exact details of how this happens remain unknown (e.g., Janka et al. 2016; Müller 2020; Burrows & Vartanyan 2021, for recent reviews). Supernovae produce powerful bursts of photons, neutrinos, and gravitational waves (GWs) (e.g., Nakamura et al. 2016). Future multi-messenger observations of CCSNe will provide unprecedented insight into these phenomena (e.g., Warren et al. 2020).
As massive stars evolve, they go through all stages of nuclear burning, synthesizing heavier and heavier elements. At the end of the process, an iron core forms (Woosley et al. 2002). The core is supported by the pressure of degenerate electrons. Upon reaching its effective Chandrasekhar mass, the iron core becomes unstable and starts collapsing. When nuclear densities are reached, the strong nuclear force abruptly halts the collapse. The inner core rebounds and collides with the still-infalling outer parts, launching a shock wave. The dissociation of heavy nuclei and neutrino cooling quickly drain the kinetic energy of the shock, stalling it at ∼ 150 km. To produce a supernova explosion and leave behind a stable protoneutron star (PNS), the shock must revive and expel the stellar envelope within a second (e.g., Ertl et al. 2016; da Silva Schneider et al. 2020).
The vast majority of stars are found to be slow rotators (e.g., Heger et al. 2005; Mosser et al. 2012; Popov & Turolla 2012; Deheuvels et al. 2014). These stars are expected to explode via the neutrino mechanism, and rotation is unlikely to have a significant impact on their explosion dynamics. However, a small fraction of massive stars may possess rapid rotation (Fryer & Heger 2005; Woosley & Heger 2006; Yoon et al. 2006; de Mink et al. 2013). In these stars, the PNSs are born with immense rotational kinetic energy (≲ 10^52 erg). Magnetic fields transfer a part of this energy to the shock front via the so-called magneto-rotational mechanism (Burrows et al. 2007; Winteler et al. 2012; Mösta et al. 2014; Obergaulinger & Aloy 2020; Kuroda et al. 2020). This mechanism is thought to be responsible for the extremely energetic hypernova explosions (e.g., Woosley & Bloom 2006). Note that magnetic fields may play a significant role in slowly or non-rotating models too (e.g., Endeve et al. 2012; Müller & Varma 2020; Varma et al. 2023).
In addition, if a quark deconfinement phase transition takes place inside the PNS, the PNS may undergo a "mini collapse", launching a second shock wave. This helps the first shock expel the stellar envelope (Sagert et al. 2009; Zha et al. 2021). However, whether this happens is unclear, as it relies on uncertain assumptions about the properties of high-density matter.
One of the most promising ways of learning more about CCSNe is by detecting gravitational waves (GWs) from these events. While GWs from mergers of black holes and neutron stars are now observed routinely, we are still awaiting the first detection of GWs from a CCSN (Abbott et al. 2020; López et al. 2021; Antelis et al. 2022; Szczepańczyk et al. 2023). The GWs from a Galactic CCSN should be detectable with current observatories (e.g., Gossan et al. 2016). CCSNe are expected to occur once or twice per century in our Galaxy (e.g., Adams et al. 2013). Future detectors will be sensitive to events at greater distances (e.g., Srivastava et al. 2019), which should increase the detection rate of CCSNe.
In this work, we explore whether it is possible to extract information about the EOS of high-density nuclear matter from the GW signal. As we explain below, we focus on the so-called bounce signal from rotating models. The dynamics, and thus the GW signal, depend on the EOS. Using numerical simulations, we generate a large number of GW signals for different EOS models. For each EOS, we produce up to hundreds of signals that correspond to different rotational configurations and electron capture rates during collapse. We train machine learning (ML) models to classify these signals based on their EOSs. We then estimate how well the ML model can infer the EOS information from a GW signal alone. The aim of this exercise is to answer the following question: when a real CCSN GW signal is detected, will such models be able to tell which EOS model most closely represents the detected signal? If the answer is positive, ML models should be able to put constraints on the properties of high-density nuclear matter.
The reason we focus on the bounce signal from rapidly rotating models is the following. In these models, the centrifugal force causes a strong non-radial deformation of the bouncing core. The PNS is thus born strongly perturbed. This drives ring-down oscillations in the post-bounce phase (Ott et al. 2012). These oscillations decay within ∼ 10 ms due to hydrodynamic damping (Fuller et al. 2015).
The bouncing core generates a spike in the GW strain (the bounce GW signal), while the post-bounce PNS oscillations generate GWs at the frequency of these pulsations (e.g., Abdikamalov et al. 2022). This signal can be modeled relatively easily with general relativistic simulations using a simple deleptonization scheme (Liebendörfer 2005), which allows us to create a large set of GW waveforms at moderate computational cost. Previously, Edwards (2021) and Chao et al. (2022) performed machine learning classification of a large set of GW signals corresponding to 18 different EOSs generated by Richers et al. (2017). We extend these studies by further analyzing the impact of the uncertainty in the electron capture rate during collapse. To parametrize this uncertainty, we consider six different electron capture models and generate additional waveforms for a subset of four EOSs that produce signals distinct from each other. Even with the uncertainty in the electron capture rates, we find that the machine learning model can classify these EOSs with an average accuracy of about 87 percent.
This paper is organized as follows. In Section 2, we describe our methodology. In Section 3, we present our results. Finally, in Section 4, we summarize our results and provide conclusions.
We use the accuracy and F1 score as the evaluation metrics. The accuracy metric is the ratio of the number of correct predictions to the total number of predictions made:

Accuracy = Correct Predictions / Total Number of Predictions. (1)

This metric is useful in scenarios with balanced classes and roughly equal costs for false positives and false negatives (Chen et al. 2020). The F1 score is the harmonic mean of precision and recall (Davide & Giuseppe 2020). It is calculated by taking twice the product of precision and recall and dividing it by their sum:

F1 = 2 × (Precision × Recall) / (Precision + Recall), (2)

where precision and recall are defined as

Precision = True Positives / (True Positives + False Positives), (3)

Recall = True Positives / (True Positives + False Negatives). (4)

Figure 1 shows the evolution of the loss functions for the training and validation sets over epochs for the group 1 dataset (described later in this section). These functions decrease rapidly during the first ∼ 3 epochs. After that, we see mild fluctuations until epoch ≈ 20. During this period, the training loss decreases by ∼ 1, while the validation loss does not change much (apart from fluctuations). After epoch 20, the validation loss increases gradually from ≈ 0.65 at epoch 20 to ≈ 1.1 at epoch 100, while the training loss decreases from ≈ 0.3 to ≈ 5 × 10^−7. This is a sign of overfitting. Similar overfitting was also observed by Edwards (2021). For this reason, we use 20 epochs for training our models in the rest of the paper.
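The metrics of Eqs. (1)-(4) can be sketched in a few lines. This is an illustrative implementation, not the actual evaluation code used in this work; in the multi-class EOS setting, precision and recall are computed per class, and here we show the binary building blocks for a single (hypothetical) positive class.

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions, Eq. (1)."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_score(y_true, y_pred, positive):
    """Harmonic mean of precision and recall, Eqs. (2)-(4),
    for one designated positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```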
We obtain the GWs from general relativistic hydrodynamics simulations using the CoCoNuT code (Dimmelmeier et al. 2005). We model neutrino effects using the Y_e(ρ) parametrization (Liebendörfer 2005) in the collapse and bounce phases, after which we switch to a leakage scheme. The Y_e(ρ) parametrization assumes that the electron fraction during the collapse phase depends only on density (Müller 2009). Since the stellar core is expected to remain rotationally symmetric during the collapse and early post-bounce phases (Ott et al. 2007), the simulations are performed in axial symmetry. We do not include magnetic fields, as they have little impact on the dynamics of the core during the collapse and early post-bounce phases (e.g., Obergaulinger et al. 2006).
We consider two sets of models. In the first set, we take the GW data from the simulations of Richers et al. (2017) for 18 different EOSs. A summary of the EOS parameters is provided in Table 1 of Richers et al. (2017). Rotation is imposed on the initial stellar core according to

Ω(ϖ) = Ω_0 [1 + (ϖ/A)²]^(−1), (5)

where ϖ is the cylindrical radius, Ω_0 is the central angular velocity, and A is the degree of differential rotation. By varying the latter two parameters, we obtain up to 98 different rotational configurations, ranging from slow to fast rotation, for each EOS and Y_e(ρ) model. Some of the models with extremely rapid rotation do not collapse due to the excessive centrifugal force. The list of these models is given in Table 3 of Richers et al. (2017). These models do not emit significant bounce GW signals, so we exclude them from our analysis.
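The rotation law of Eq. (5) can be sketched as follows; the parameter values in the test are illustrative, not the actual grid of Richers et al. (2017).

```python
def omega(varpi_km, omega0_rad_s, A_km):
    """Angular velocity at cylindrical radius varpi, Eq. (5):
    Omega(varpi) = Omega_0 / (1 + (varpi/A)^2)."""
    return omega0_rad_s / (1.0 + (varpi_km / A_km) ** 2)
```

On the rotation axis (ϖ = 0) the angular velocity equals Ω_0, and it drops to Ω_0/2 at ϖ = A, so A sets the length scale over which differential rotation becomes important.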
A similar approach was adopted by Chao et al. (2022). This group contains 1704 waveforms in total. We refer to this dataset as group 0 hereafter. Note that for a given angular momentum distribution with respect to the enclosed mass coordinate in the core, different progenitor stars produce similar bounce GW signals (Ott et al. 2012; Mitra et al. 2023). For this reason, we focus on one progenitor star, model s12 of Woosley & Heger (2007).
In the second set, we take four of the 18 EOSs from Richers et al. (2017) and perform simulations using additional electron fraction Y_e(ρ) profiles. This is done to parametrize the uncertainty in the electron capture rate, which affects the values of Y_e and thus the dynamics of stellar collapse (e.g., Hix et al. 2003). See Langanke et al. (2021) for a recent review of electron capture rates in supernovae. For each combination of EOS and Y_e(ρ) profile, we obtain 80 different rotational configurations by excluding those of the 98 models that do not collapse.
We consider three groups of Y_e(ρ) profiles. In group 1, which consists of 320 waveforms, we take the Y_e(ρ) profiles from Richers et al. (2017). We refer to these as fiducial profiles. In group 2, we add two Y_e(ρ) profiles, which are obtained by adjusting the fiducial Y_e(ρ) profiles above density ρ_1 = 10^12 g/cm³ by a factor of

f(ρ) = 1 − δ min[1, log(ρ/ρ_1)/log(ρ_2/ρ_1)], (6)

where ρ_2 = 10^14 g/cm³. This relation is motivated by the fitting formula (1) of Liebendörfer (2005). We consider two different values of δ, 0.05 and 0.1. This means that the Y_e value in the stellar core is 5% and 10% smaller than in the fiducial model. In total, group 2 has 960 waveforms. Finally, for group 3, we add three more Y_e(ρ) profiles obtained from GR1D simulations (O'Connor 2015) using the electron capture rates of Sullivan et al. (2016), scaled by factors of 0.1, 1, and 10, as was done by Richers et al. (2017) for the SFHo EOS. The corresponding Y_e(ρ) profiles are shown in Fig. 2. In total, we have 1200 waveforms in group 3. Note that group 1 is a subset of group 2, which, in turn, is a subset of group 3.
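A minimal sketch of the adjustment factor applied to the fiducial Y_e(ρ) profiles, under our reading that the factor decreases linearly in log ρ from 1 at ρ_1 to 1 − δ at and above ρ_2; the exact functional form used in the simulations may differ.

```python
import math

RHO1 = 1.0e12  # g/cm^3, density above which the profile is adjusted
RHO2 = 1.0e14  # g/cm^3, density at which the full reduction delta applies

def ye_factor(rho, delta):
    """Multiplicative factor applied to the fiducial Y_e(rho) profile
    (our assumed reading of the adjustment described in the text)."""
    if rho <= RHO1:
        return 1.0
    x = math.log10(rho / RHO1) / math.log10(RHO2 / RHO1)
    return 1.0 - delta * min(1.0, x)
```

With δ = 0.05 and 0.1, the core Y_e is reduced by 5% and 10%, as quoted in the text.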
For an additional test, we create group 3b by randomly removing 240 waveforms corresponding to the SFHo EOS from group 3, resulting in a total of 960 waveforms, the same number as in group 2. This will help us assess how the number of waveforms and the variations of Y_e(ρ) affect the classification accuracy.
For each of the groups, we randomly shuffle the waveforms such that 80% are used as a training set and 20% as a test set. At the same time, we ensure a balanced representation of classes in both the training and test sets. In the case of group 3, where there are more SFHo samples than others, we proportionally increase the number of SFHo samples in both the training and test sets to match the overall distribution. This procedure is repeated 10 times, and the EOS classification results that we report below are averaged over these 10 realizations. The errors are expressed in terms of the corresponding standard deviations.
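The evaluation protocol described above can be sketched as follows; `train_and_score` is a hypothetical stand-in for the CNN training and test-set evaluation step, and the function names are ours, not those of the actual analysis code.

```python
import random
import statistics
from collections import defaultdict

def stratified_split(waveforms, labels, test_frac=0.2, rng=None):
    """Class-balanced random split into (train, test) lists of (x, y) pairs."""
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for x, y in zip(waveforms, labels):
        by_class[y].append(x)
    train, test = [], []
    for y, items in by_class.items():
        items = items[:]
        rng.shuffle(items)
        n_test = int(round(test_frac * len(items)))
        test += [(x, y) for x in items[:n_test]]
        train += [(x, y) for x in items[n_test:]]
    return train, test

def evaluate(waveforms, labels, train_and_score, n_repeats=10, seed=0):
    """Repeat the split n_repeats times; report mean and std of the score."""
    rng = random.Random(seed)
    scores = [train_and_score(*stratified_split(waveforms, labels, rng=rng))
              for _ in range(n_repeats)]
    return statistics.mean(scores), statistics.stdev(scores)
```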
Each GW signal is labeled with the corresponding EOS that we aim to classify. To quantify the result, we use the accuracy metric, defined as the fraction of correct EOS classifications. The results presented below are computed using the time series data in real space. A complementary analysis in Fourier space is provided in Appendix A.
Before the ML analysis, we follow Edwards (2021) and apply a Tukey window with α = 0.1 and a Butterworth filter with order 10 and attenuation 0.25 to all data (Blackman & Tukey 1958; Smith & Gossett 1984). We adjust the time axis so that t = 0 ms corresponds to the time of bounce. The latter is defined as the time when the entropy along the equator exceeds 3 k_B baryon^−1, which is the result of the heating by the shock formed at bounce. Our GW data is sampled at an interval of 0.01 ms. We performed a comparison with sampling intervals of 0.2 and 0.1 ms and find no statistically significant differences between these sampling rates, as we show in Appendix B. This is an expected outcome, as most of the physical (and EOS-dependent) signal is contained below ∼ 1 kHz (e.g., Dimmelmeier et al. 2008).
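The pre-processing step can be sketched with standard SciPy tools. The interpretation of the quoted attenuation of 0.25 as the Butterworth critical frequency in units of the Nyquist frequency is our assumption, and the sampling rate follows from the 0.01 ms sampling interval.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.signal.windows import tukey

def preprocess(strain, cutoff_frac=0.25):
    """Apply a Tukey window (alpha = 0.1) and an order-10 low-pass
    Butterworth filter; cutoff_frac is in units of the Nyquist frequency
    (our reading of the 'attenuation 0.25' setting)."""
    windowed = strain * tukey(len(strain), alpha=0.1)
    sos = butter(10, cutoff_frac, btype="low", output="sos")
    # Zero-phase filtering so the bounce time is not shifted.
    return sosfiltfilt(sos, windowed)
```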

RESULTS
We first look at the main qualitative features of the bounce and ring-down GW signals. For a given progenitor model and rotational configuration, the dynamics depends on the EOS and the electron fraction profile. Approximately, the bounce GW amplitude can be estimated as (Richers et al. 2017)

h ≈ (G²/c⁴) (M_ic² / R_ic) (T/|W|) / D, (7)

where D is the distance to the source. The dependence on the EOS and the Y_e(ρ) profile enters this equation via the ratio M_ic²/R_ic, where M_ic and R_ic are the mass and radius of the inner core at bounce. The inner core mass scales as ∼ Y_e² (Yahil 1983). This is caused by the contribution of the degenerate electrons to the pressure before nuclear densities are reached. The leading-order effect of rotation is contained in the ratio T/|W| of the rotational kinetic energy T to the potential binding energy W. The linear dependence on T/|W| remains valid for T/|W| ≲ 0.09. For larger T/|W|, the centrifugal support slows the dynamics, leading to a weaker dependence of the GW amplitude on T/|W| (e.g., Dimmelmeier et al. 2008).
Figure 3 shows the GW strain as a function of time for four selected EOSs (upper panel) and for different Y_e(ρ) profiles for the SFHo EOS (lower panel), for models with T/|W| ≈ 0.06. As we can see, different EOSs and Y_e(ρ) profiles produce GWs with amplitudes that differ by ≲ 20% around the time of bounce. In the post-bounce phase, the differences are more subtle. For a more detailed analysis of the correlation between GW features and the EOS parameters, see Richers et al. (2017).
In the following, we explore whether the machine learning model can exploit these differences and classify the EOSs based on the GW signal. We divide our discussion into four parts, in which we separately explore the dependence on the signal range, the number of EOSs in the dataset, the impact of the Y_e(ρ) profiles, and the rotation rate.

Dependence on signal range
We first perform an analysis of the group 0 dataset, which includes GW signals for 18 EOSs. When we perform the classification using the GW signal in the range from 10 ms before bounce to 49 ms after bounce, we obtain an overall classification accuracy of ∼ 0.72 ± 0.07. This is in agreement with the findings of Edwards (2021) on the same dataset.
However, classification analysis using the GW signal in the range [−10, 49] ms has limitations. First, beyond ∼ 6 ms after bounce, the signal contains contributions from prompt convection (e.g., Dimmelmeier et al. 2008). Since this is a stochastic process, it is hard to capture all possible manifestations of convection with just one simulation per EOS, Y_e(ρ) profile, and progenitor model. Moreover, due to the axial symmetry and approximate neutrino treatment used in our simulations, prompt convection is not modeled accurately. Therefore, the signal after ∼ 6 ms contains inaccurate features. Second, the GW signal before −2 ms has little energy (e.g., Dimmelmeier et al. 2008). Moreover, before −2 ms, the inner core density remains below nuclear density, above which the various EOSs differ from each other. For these reasons, in the following, we focus on the signal in the [−2, 6] ms range. In this range, the signal is dominated by the core bounce and the early ring-down oscillations of the PNS, which are modeled well with the approximations used in our simulations (see Section 2).
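Restricting the waveforms to the analysis window amounts to a simple mask over the time axis; a minimal sketch (the function name is ours):

```python
import numpy as np

def crop(times_ms, strain, t_min=-2.0, t_max=6.0):
    """Keep only samples inside the [t_min, t_max] ms window
    (t = 0 at bounce)."""
    mask = (times_ms >= t_min) & (times_ms <= t_max)
    return times_ms[mask], strain[mask]
```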
For the GW signal in the [−2, 6] ms range, we find that the accuracy of classification of the 18 EOSs drops to ∼ 0.48 ± 0.03, which is significantly lower than that for the [−10, 49] ms range. The reason for this drop is simple: there is less information available in the [−2, 6] ms range than in the [−10, 49] ms range. To support this claim, we calculate the accuracy in the [6, 49] ms range, which we find to be 0.82 ± 0.06. Within the statistical errors, this is comparable to the accuracy in the whole [−10, 49] ms range. This means that there is significant information available in the [6, 49] ms range that the ML model can exploit. However, as mentioned above, since our simulations cannot guarantee accuracy in the time frame after 6 ms, we do not include it in our analysis.
This finding suggests that it is hard to achieve high classification accuracy for a dataset of 18 EOSs based on the bounce GW signal alone.Next, we explore how the classification accuracy depends on the number of EOSs in the dataset.All our results presented hereafter are based on the analysis of the signal in the [−2, 6] ms range.

Dependence on the number of EOSs
Figure 4 shows the average classification accuracy as a function of the number of EOSs N, ranging from 1 to 18. When N is smaller than 18, the results are averaged over 10 random permutations of the 18 EOSs. The blue points show the average accuracy values, while the error bars show the corresponding standard deviations.
As expected, the accuracy decreases with increasing number of EOSs. In the region from N = 1 to N ∼ 11, the accuracy decreases approximately linearly with N, reaching ∼ 0.50 ± 0.07 at N = 11. For larger N, the decrease is slower, dropping to ∼ 0.48 ± 0.03 for N = 18.
The orange dots in Fig. 4 show the difference between the average classification obtained by the CNN and the accuracy of purely random selection as a function of the number of EOSs.This quantity measures the advantage that CNN classification provides compared to a random selection.As we can see, this quantity reaches a peak value of ∼ 0.55 at  = 4 − 7.At  ≳ 7, it decreases with , gradually transitioning to its quasi-asymptotic value of ∼ 0.4.This result, in combination with the fact that the classification accuracy decreases with increasing , suggests that at  = 4 the CNN classification offers the biggest advantage compared to a random selection.For four EOSs, the average CNN accuracy is 0.78 ± 0.05.Moreover,  = 4 is a small enough number of EOSs where we can perform a more in-depth analysis with moderate computational costs, which we do below.At the same time, we emphasize that we are not aware of any other argument in favor using  = 4 EOSs.We do not claim that all uncertainties in high-density matter properties can be "grouped" into four distinct EOSs.
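The advantage diagnostic of Fig. 4 is simply the CNN accuracy minus the 1/N accuracy of a uniform random guess among N equally likely EOSs; a minimal sketch:

```python
def advantage(cnn_accuracy, n_eos):
    """CNN accuracy minus the 1/N accuracy of uniform random guessing."""
    return cnn_accuracy - 1.0 / n_eos
```

For N = 4 and the quoted accuracy of 0.78, the advantage is 0.78 − 0.25 = 0.53, consistent with the ∼ 0.55 peak within the quoted errors.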
Hereafter we focus on the classification of four different EOSs. We select LS220, GShenFSU2.1, HSDD2, and SFHo. As mentioned above (cf. Section 2), these four EOSs represent relatively realistic EOSs that exhibit distinct peak signal frequencies, as can be seen in Fig. 10 of Richers et al. (2017). A similar analysis was performed by Chao et al. (2022). Instead of classifying all 18 EOSs, they grouped the datasets into families of EOSs. In our work, we go beyond Chao et al. (2022) by including an analysis of the impact of the uncertainties in the electron fraction and of rotation, as we discuss below.

Dependence on electron fraction
In this section, we study the performance of the classification algorithm when we add signals that are produced using different electron fraction profiles Y_e(ρ). As mentioned in Section 2, we consider three sets of data. Group 1 contains signals generated using the fiducial Y_e(ρ) profiles, while group 2 contains two extra Y_e(ρ) profiles obtained according to formula (6). Finally, groups 3 and 3b include three more electron fraction profiles (see Section 2 for details).
Figure 5 shows the average classification accuracies for groups 1, 2, 3, and 3b. The accuracies for groups 1, 2, and 3 are 0.78 ± 0.08, 0.85 ± 0.04, and 0.87 ± 0.03, respectively. These accuracy values can be understood as the result of two opposing factors: the number and the complexity of the waveforms contained in each dataset. The former is beneficial to the training of the CNN model, while the latter adversely affects the accuracy.
The lowest accuracy of 0.78 ± 0.08, exhibited by group 1, is caused by its small sample size of 320 waveforms, which makes it harder to train the ML model. Group 2 has three times more waveforms, but it also has two more Y_e(ρ) profiles. Nevertheless, group 2 exhibits a higher accuracy of 0.85 ± 0.04. Group 3 has an even larger number of 1200 waveforms, and its classification accuracy is accordingly the highest. This suggests that the number of waveforms in the dataset is more important to the classification accuracy than the uncertainty in the Y_e(ρ) profiles, at least within the limits considered in this work.
It is interesting to compare the classification accuracies of groups 3 and 3b. The latter has the same number of Y_e(ρ) profiles, but it contains 240 fewer waveforms. As a result, group 3b exhibits a lower accuracy of 0.80 ± 0.06. This value is also lower than the corresponding accuracy for group 2. This is not surprising: group 2 contains the same number of waveforms as group 3b, but it has fewer variations of Y_e(ρ) profiles than 3b.
Fig. 6 shows confusion matrices from one classification run for groups 1, 2, 3, and 3b. Close inspection does not reveal any trends in terms of which classes are misclassified. If we look at the off-diagonal elements, we see that there are no pairs of EOSs that are misclassified by the model systematically across all groups. This suggests that the evaluation is not systematically biased.
For example, in group 1, 25% of GShenFSU2.1 waveforms were misclassified as HSDD2, while in group 2, only 4% of GShenFSU2.1 signals were misclassified as HSDD2, and 13% were misclassified as LS220. In groups 1 and 2, 10% and 12% of SFHo signals, respectively, were misclassified as HSDD2, but in groups 3 and 3b significantly fewer such misclassifications occurred. The other EOSs likewise show no misclassification pattern across the experiments, which suggests that the misclassification is random.
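A row-normalized confusion matrix of the kind shown in Fig. 6 can be sketched as follows; the toy labels in the test are illustrative, not our classification output.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, classes):
    """Row-normalized confusion matrix: rows are true EOS labels,
    columns are predicted EOS labels; entries are fractions per row."""
    counts = Counter(zip(y_true, y_pred))
    matrix = {}
    for t in classes:
        row_total = sum(counts[(t, p)] for p in classes)
        matrix[t] = {p: counts[(t, p)] / row_total if row_total else 0.0
                     for p in classes}
    return matrix
```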
For additional insight, we examine the F1 score. The F1 score complements the accuracy metric by providing a measure of the robustness of model performance, especially for imbalanced datasets. Fig. 5 shows the F1 score alongside the corresponding accuracy score for each group. As we can see, the F1 score shows values similar to the accuracy score as well as a similar trend across data groups. However, it is important to note that the differences in the F1 scores between groups 2, 3, and 3b are within the standard deviation, so the trends observed in the F1 score (and accuracy) among these groups may not be statistically significant.
To complement the time-series analysis performed so far, we have repeated this analysis in Fourier space. The corresponding accuracies are shown in Fig. 5. We obtain a hierarchy of accuracy values for groups 1, 2, 3, and 3b similar to that obtained from the time-series calculation. However, the accuracies are on average ∼ 3 percent lower in Fourier space. A similar drop was observed by Edwards (2021). See Appendix A for a more detailed discussion.

Dependence on rotation
In this section, we study the dependence of the EOS classification accuracy on rotation. We measure rotation in terms of the parameter T/|W|. We use the group 2 dataset, since it has an equal number of signals for each EOS and Y_e(ρ) profile and a larger size than group 1.
We group the dataset into five T/|W| bins using quantile cuts, as shown in the upper panel of Fig. 7. For each bin, we compute the classification accuracy. The corresponding accuracies with error bars for each bin are shown in the lower panel of Fig. 7. The accuracy is low for small and large values of T/|W|. For the T/|W| < 0.0148 bin, the average accuracy is ∼ 0.61 ± 0.04, while for the T/|W| > 0.1365 bin, it is ∼ 0.76 ± 0.09. This behavior is expected for two reasons. At low T/|W|, the signal is weak (cf. Eq. 7) and is dominated by numerical noise (e.g., Dimmelmeier et al. 2008). At high T/|W|, the centrifugal support prevents the PNS from reaching the high densities where the differences between EOSs are most pronounced. For example, for the LS220 EOS, we find that the central density at bounce varies in the range ∼ (4.4 − 8.5) × 10^14 g/cm³ depending on T/|W| (cf. Fig. 5 of Abdikamalov et al. 2014). For these reasons, we obtain relatively high classification accuracy for 0.0148 < T/|W| < 0.1365. In this region, rotation is sufficiently strong to induce a significant quadrupolar deformation of the inner core. At the same time, the centrifugal support is not strong enough to prevent the PNS from reaching high central densities. This is supported by Figs. 6, 8, and 10 of Richers et al. (2017), which reveal significant differences in the GWs corresponding to distinct EOSs even at high T/|W|. Table 3 summarizes all the quantitative findings of this section.
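The quantile binning of T/|W| used above can be sketched as follows; the implementation (equal-count bins by rank) is illustrative, not the actual analysis code, and the T/|W| values in the test are synthetic.

```python
def quantile_bins(values, n_bins=5):
    """Assign each value a quantile-bin index in [0, n_bins), such that
    the bins contain (nearly) equal numbers of models."""
    order = sorted(range(len(values)), key=values.__getitem__)
    bins = [0] * len(values)
    for rank, idx in enumerate(order):
        bins[idx] = min(n_bins - 1, rank * n_bins // len(values))
    return bins
```

The per-bin accuracy is then obtained by restricting the test set to the waveforms falling into each bin.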

CONCLUSION
We performed a machine learning classification analysis of the nuclear equation of state (EOS) using supernova gravitational wave (GW) signals. We use the bounce and early post-bounce GW signals from rotating core-collapse supernovae. We parametrize the uncertainty in the electron capture rates by generating waveforms corresponding to six different electron fraction profiles. The GW signals are obtained from general relativistic hydrodynamics simulations using the CoCoNuT code (Dimmelmeier et al. 2005).
We first explore the dependence of the EOS classification accuracy on the GW signal range included in the training and testing of the CNN model. For this, we used the 18-EOS dataset of Richers et al. (2017). The classification accuracy is ∼ 0.72 ± 0.07 for the signal range from −10 to 49 ms, where the origin of the time axis corresponds to the time of bounce. The accuracy decreases gradually with the narrowing of the signal range, reaching ∼ 0.48 ± 0.03 for the signal in the range from −2 to 6 ms. This range includes only the bounce and early ring-down oscillation signal (see Section 3.1 for details).
We then study how the accuracy depends on the number of EOSs N included in the data. The accuracy decreases gradually with N. However, the difference between the CNN classification accuracy and the accuracy of a random selection exhibits a peak in the region 4 ≲ N ≲ 7. This means that for these values of N, the CNN classification offers the greatest advantage over random selection (see Section 3.2 for details).
Based on this, we then focus on the classification analysis of a set of four EOSs: LS220, GShenFSU2.1, HSDD2, and SFHo. These EOSs represent relatively realistic EOSs from the set of Richers et al. (2017). At the same time, these EOSs yield relatively distinct peak signal frequencies. This dataset contains 320 waveforms. In this case, the classification accuracy is 0.78 ± 0.08 (see Section 3.1 for details).
Next, we incorporate additional Y_e(ρ) profiles into our dataset. We first add the two Y_e(ρ) profiles given by Eq. (6). The dataset size becomes 960 waveforms, and the accuracy increases to 0.85 ± 0.04. We then add three more Y_e(ρ) profiles and augment the dataset to 1200 waveforms. In this case, the classification accuracy becomes 0.87 ± 0.03. These results suggest that the classification accuracy increases with the dataset size, even if the dataset contains waveforms obtained using different Y_e(ρ) profiles (see Section 3.3 for details).
The classification accuracies depend weakly on rotation. Models with moderately rapid rotation (0.015 ≲ T/|W| ≲ 0.14) exhibit accuracies of ∼ 0.9. Models with slow (T/|W| ≲ 0.015) and extremely rapid rotation (T/|W| ≳ 0.14) have accuracies below ∼ 0.8. This can be explained by the fact that slow models emit weak GWs, while rapidly rotating models, due to centrifugal support, do not reach the high densities where EOSs differ from each other (see Section 3.4 for details).
Overall, our work demonstrates the potential of ML models for inferring the EOS from the GW signal alone, at least for the bounce signal in rapidly rotating stars. This is especially true if the selection takes place among a group of four EOSs. However, whether all uncertainties in the parameters of high-density matter can be "grouped" into a family of four EOSs is not clear. For this reason, it is premature to draw definitive conclusions regarding the likelihood of constraining the parameters of nuclear physics using our ML model. Ultimately, instead of a classification analysis of EOSs (which is still an insightful exercise), one has to perform a regression analysis of the nuclear parameters from the GW signal. This will be the subject of future work.
Our work can be further improved in several other directions. While the assumption of axial symmetry imposed in our simulations is sufficient for the bounce and early post-bounce phase, the subsequent phase requires full 3D modeling (e.g., Müller 2020, for a recent review). In addition, the simple deleptonization method that we employ cannot capture the complex neutrino processes that take place in the post-bounce phase (e.g., Lentz et al. 2012; Kotake et al. 2018). Moreover, in our analysis of GWs, we do not include the detector noise that will be present when a real detection takes place (e.g., Edwards 2017; Bruel et al. 2023). Also, there is room to explore alternative ML algorithms as well as even larger GW datasets. These limitations will be addressed in future works.

Figure 1. Loss function as a function of epoch for the group 1 dataset. In our analysis, we use models trained for 20 epochs.

Figure 2. Electron fraction profiles for the SFHo EOS. The red curve represents the fiducial Y_e(ρ) profile, while the blue and orange curves represent the profiles adjusted using formula (6) for δ = 0.05 and 0.1, respectively. The dashed, dash-dotted, and dotted lines represent the Y_e(ρ) curves obtained from GR1D simulations using the electron capture rates of Sullivan et al. (2016), scaled by factors of 0.1, 1, and 10, respectively.

Figure 3. GW signals with the same rotation profile (A = 634 km, Ω_0 = 5.0 rad s^−1) and with T/|W| ≈ 0.06. The top panel displays the GW signal for four EOSs with the fiducial Y_e(ρ) profile, while the bottom panel shows the GW signal for the SFHo EOS with the various Y_e(ρ) profiles shown in Fig. 2.

Figure 4. EOS classification accuracy as a function of the number of EOSs in the dataset. The blue points show the mean accuracy. The error bars represent 1 standard deviation. The orange dots show the difference between the CNN classification accuracy and the accuracy of random selection. This quantity measures the advantage that CNN classification offers over random selection.

Figure 5. EOS classification accuracy and F1 score for groups 1, 2, 3, and 3b. The blue dots correspond to the accuracies for the time-series data in real space, the orange dots represent the accuracies in Fourier space, and the green dots represent the F1 scores in real space.

Figure B2. Sigma deviation levels for accuracy values between the 0.01 ms sampling dataset and the 0.2 and 0.1 ms sampling datasets for each group. All sigma deviation values are less than 1, which suggests that there is no statistically significant dependence of the EOS classification results on the sampling frequency.

Table 1. Parameters of the 1D CNN model architecture used in this work.

Table 2. Summary of the hyperparameters of our CNN model.

Table 3. Summary of the EOS classification test in five different bins of the rotation parameter T/|W|.