Deciphering and integrating invariants for neural operator learning with various physical mechanisms

ABSTRACT Neural operators have been explored as surrogate models for simulating physical systems to overcome the limitations of traditional partial differential equation (PDE) solvers. However, most existing operator learning methods assume that the data originate from a single physical mechanism, limiting their applicability and performance in more realistic scenarios. To this end, we propose the physical invariant attention neural operator (PIANO) to decipher and integrate the physical invariants for operator learning from the PDE series with various physical mechanisms. PIANO employs self-supervised learning to extract physical knowledge and attention mechanisms to integrate them into dynamic convolutional layers. Compared to existing techniques, PIANO can reduce the relative error by 13.6%–82.2% on PDE forecasting tasks across varying coefficients, forces or boundary conditions. Additionally, varied downstream tasks reveal that the PI embeddings deciphered by PIANO align well with the underlying invariants in the PDE systems, verifying the physical significance of PIANO.


INTRODUCTION
Partial differential equations (PDEs) provide a fundamental mathematical framework to describe a wide range of natural phenomena and physical processes, such as fluid dynamics [1], life science [2] and quantum mechanics [3], among others. Accurate and efficient solutions of PDEs are essential for understanding and predicting the behavior of these physical systems. However, due to the inherent complexity of PDEs, analytical solutions are often unattainable, necessitating the development of numerical methods for their approximation [4]. Over the years, numerous numerical techniques have been proposed for solving PDEs, such as the finite difference method, finite element method and spectral method [5]. These methods have been widely used in practice, providing valuable insights into the behavior of complex systems governed by PDEs [6,7]. Despite the success of classical numerical methods in solving a wide range of PDEs, there are several limitations associated with these techniques, such as the restriction on step size, difficulties in handling complex geometries and the curse of dimensionality for high-dimensional PDEs [8-10].
In recent years, machine learning (ML) methods have evolved as a disruptive alternative to classical numerical methods for solving scientific computing problems involving PDEs. By leveraging the power of data-driven techniques or the expressive ability of neural networks, ML-based methods have the potential to overcome some of the shortcomings of traditional numerical approaches [8,11-15]. In particular, by using a deep neural network to represent the solution of the PDE, ML methods can efficiently handle complex geometries and solve high-dimensional PDEs [16]. Representative works include the DeepBSDE method, which can solve parabolic PDEs in 100 dimensions [8]; the random feature model, which can easily handle complex geometries and achieve spectral accuracy [17]; and ML-based reduced-order modeling, which can improve the accuracy and efficiency of traditional reduced-order modeling for nonlinear problems [18-20]. However, these methods are applied to a fixed initial field (or external force field), and they require the retraining of neural networks when solving PDEs with changing high-dimensional initial fields.
In addition to these developments, neural operators have emerged as a more promising approach to simulate physical systems with deep learning, using neural networks as surrogate models to learn the PDE operator between functional spaces from data [9,21,22], which can significantly accelerate the simulation process. Most studies along this line focus on network architecture design to ensure both simulation accuracy and inference efficiency. For example, DeepONet [21] and its variants [23-25], Fourier neural operators [9,26,27] and transformer-based operators [28,29] have been proposed to respectively deal with continuous input and output spaces, different frequency components and complex geometries. Compared to traditional methods, neural operators break the restriction on spatiotemporal discretization and enjoy a speed-up of thousands of times, demonstrating enormous potential in areas such as inverse design and physical simulations, among others [9,30]. However, these methods only consider PDEs generated from a single formula by default, limiting the applicability of neural operators to multi-physical scenarios, e.g. datasets of PDE systems sampled under different conditions (boundary conditions, parameters, etc.).
To address this issue, message-passing neural networks (MPNNs) incorporate the indicator of the scenario (i.e. the PDE parameters) into the inputs to improve the generalization capabilities of the model [10]. DyAd learns the physical information through a weakly supervised encoder and automatically adapts to different scenarios [31]. Although incorporating physical knowledge can enhance the performance of the neural operator, these methods still require access to high-level PDE information in the training or test stage [10,31]. However, in many real-world applications, collecting the high-level physical information that governs the behavior of PDE systems can be infeasible or prohibitively expensive. For example, in fluid dynamics or ocean engineering, scientists can gather numerous flow field data controlled by varying and unknown Reynolds numbers, and calculating them would require numerous calls to PDE solvers [12,32].
To this end, we propose the physical invariant attention neural operator (PIANO), a novel operator learning framework for deciphering and integrating physical knowledge from PDE series with various PIs, such as varying coefficients and boundary conditions. PIANO has two branches: a PI encoder that extracts physical invariants and a personalized operator that predicts the complementary field representation of each PDE system (Fig. 1(a)). As illustrated in Fig. 1, PIANO employs two key designs: a contrastive learning stage for learning the PI encoder and an attention mechanism to incorporate this knowledge into neural operators through dynamic convolutional (DyConv) layers [33]. On the one hand, contrastive learning extracts the PI representation through a similarity loss defined on augmented spatiotemporal patches cropped from the dataset (Fig. 1(b)). To enhance consistency with physical priors, we propose three physics-aware cropping techniques to adapt to different PI properties of different PDE systems, such as spatiotemporal invariants, boundary invariants, etc. (Fig. 1(b)(iii)). This physics-aware contrastive learning technique extracts the PI representation without the need for labels of the PDE conditions, thus providing the corresponding PI information for each PDE series (Fig. 1(b)). On the other hand, after the PI encoder is trained by contrastive learning, we compute attention (i.e. a^i_k in Fig. 1(c)) from the PI representation extracted by the PI encoder and reweight the convolutional kernel in the DyConv layer to obtain a personalized operator (Fig. 1(c)). This personalized operator, incorporated with the PI information as an indicator of the PDE condition, can predict the evolution of each PDE field in a mixed dataset with guaranteed generalization performance.
We demonstrate our method's effectiveness and physical meaning on several benchmark problems, including Burgers' equation, the convection-diffusion equation (CDE) and the Navier-Stokes equation (NSE). Our results show that PIANO achieves superior accuracy and generalization compared to existing methods for solving PDEs with various physical mechanisms. According to the results of four experiments, PIANO can reduce the relative error rate by 13.6%-82.2% by deciphering and integrating the PIs of PDE systems. Furthermore, we conduct experiments to evaluate the quality of the PI embedding through several downstream tasks, such as unsupervised dimensionality reduction and supervised classification (regression). These results indicate that the manifold structures of the PI embeddings align well with the underlying PIs hidden in the PDE series (e.g. Reynolds numbers in the NSE and external forces in Burgers' equation), thereby enjoying physical significance.

THE FRAMEWORK OF PIANO
In this section, we introduce the framework of PIANO, including how PIANO deciphers PIs from unlabeled multi-physical datasets and the procedure to incorporate them into the neural operator.

Description of the PDE system
Consider the time-dependent PDE system, which can be expressed as

∂u/∂t (x, t) = R[u](x, t), (x, t) ∈ Ω × [0, T],
u(x, 0) = u_0(x), x ∈ Ω, (1)

where R is the differential operator with parameter θ_R ∈ Θ_R, Ω is a bounded domain and u_0 represents the initial conditions. Let B[u] = 0 be the boundary condition governed by the parameter θ_B ∈ Θ_B. Let Θ := Θ_R × Θ_B be the product space between Θ_R and Θ_B, and let θ := (θ_R, θ_B) ∈ Θ be the global parameters of the PDE system. We utilize u_{k,t}[Ω] to denote the t-frame (t ∈ N_+) PDE series defined in Ω × [k, k + t].
In this paper, we consider the scenario where θ ∈ Θ is a time-invariant parameter. In other words, the parameters θ that govern the PDE system in Equation (1) do not change over time, which includes the following three scenarios.

• Spatiotemporal invariant: u_{k_1,t}[Ω_1] and u_{k_2,t}[Ω_2] share the same θ_R for all k_1, k_2 ∈ [0, T] and Ω_1, Ω_2 ⊂ Ω.
• Temporal invariant: given a fixed spatial region Ω' ⊂ Ω, u_{k_1,t}[Ω'] and u_{k_2,t}[Ω'] share the same spatially varying parameter θ_R for all k_1, k_2 ∈ [0, T].
• Boundary invariant: the PDE series shares the same boundary parameter θ_B over the whole time horizon [0, T].

In Table 1, we give some examples of one-dimensional (1D) heat equations to illustrate the above three types of PI.

The learning regime
Given the t-frame PDE series u_{k,t}[Ω] governed by Equation (1), an auto-regressive neural operator G acts as a surrogate model, which produces the next t-frame PDE solution as follows:

u_{k+t,t}[Ω] = G(u_{k,t}[Ω]). (2)

We assume that the neural operator G is trained under the supervision of the dataset D_train = {u^i_{0,Mt}[Ω]}^N_{i=1}, where u^i_{0,Mt}[Ω] is the i-th PDE series defined in Ω × [0, Mt] and governed by the parameter θ^i ∈ Θ. Existing methods typically assume that all u^i in D_train share the same θ [9,21] or have known different parameters θ^i [10,31]. However, we consider a more challenging scenario where data are generated from various physical systems (with varying but unknown θ^i in D_train and D_test) and no additional knowledge of θ^i is provided during the training and test stages.
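The auto-regressive use of G described above can be sketched as follows; the operator here is a toy stand-in (a spatial shift), not a trained network, and all names are illustrative.

```python
import numpy as np

def rollout(G, u0, n_steps):
    """Auto-regressively extend a PDE series: each call of G maps the
    current t-frame block u_{k,t} to the next block u_{k+t,t}."""
    blocks, current = [], u0
    for _ in range(n_steps):
        current = G(current)        # predict the next t frames
        blocks.append(current)
    return np.concatenate(blocks, axis=0)

# Toy stand-in operator: shift a 1D profile one cell to the right per block.
G = lambda u: np.roll(u, 1, axis=-1)
u0 = np.zeros((2, 8))               # t = 2 frames on an 8-point grid
u0[:, 0] = 1.0
pred = rollout(G, u0, n_steps=3)    # shape (6, 8): three predicted blocks
```

At test time, PIANO runs the same loop with the personalized operator G^i in place of G.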

Forecasting stage of PIANO
As shown in Fig. 1(a), given the initial PDE fields u^i_{0,t}[Ω], the forecasting stage of PIANO includes three steps: (1) infer the PI embedding h^i via the PI encoder P; (2) integrate h^i into the neural operator G to obtain a personalized operator G^i for u^i; (3) predict the subsequent PDE fields with the personalized operator G^i. Accordingly, two key technical problems arise when carrying out this plan. On the one hand, we need to decipher the PI information behind the PDE system without the supervision of known labels. To this end, we utilize contrastive learning to pre-train the PI encoder in a self-supervised manner and propose the physics-aware cropping strategy to constrain the learned representation to align with the physical prior. On the other hand, we need to integrate the PI embedding into the neural operator to obtain the personalized operator. In this paper, we borrow the DyConv technique [33] and propose the split-merge trick to make full use of the PI embedding.

Contrastive training stage of the PI encoder
In this section we introduce how to train an encoder P for extracting the PI information from the training set D_train. Ideally, a mapping M that directly outputs θ^i would solve this problem; however, such a mapping is not available due to the absence of labels for θ. To decipher the information implied by θ^i, we adopt the technique from SimCLR [34] to train P in a self-supervised manner. In each mini-batch we sample training data {u^i_{0,Mt}[Ω]}_{i∈A} from D_train with index set A and randomly intercept two patches from each PDE sample, i.e. u^i_{k_1,t}[Ω_1] and u^i_{k_2,t}[Ω_2].
The PI encoder P maps each patch to a representation vector, and we then employ a two-layer MLP g as a projection head to obtain the projected embedding z = g(P(u)). Considering the PDE patches cropped from the same/different PDE series as positive/negative samples, the SimCLR loss for a positive pair (z_i, z_j) can be expressed as

ℓ(i, j) = −log [ exp(sim(z_i, z_j)/τ) / Σ_{k≠i} exp(sim(z_i, z_k)/τ) ],

where sim(u, v) := u^T v / (‖u‖‖v‖) denotes the cosine similarity between u and v, and τ > 0 denotes a temperature parameter. As shown in Fig. 1(b)(ii), the SimCLR loss brings the representations governed by the same physical parameters closer, while pushing apart those with different parameters. After the training stage of contrastive learning, we throw away the projector g and only utilize the encoder P to extract PI information from PDE fields, which is in line with the SimCLR method [34]. See the Method section for more details on the architecture of the PI encoder and the physics-aware cropping strategy.
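The loss above is the standard NT-Xent objective from SimCLR; a minimal numpy sketch is given below, where the batch construction (two patch views per series) follows the text and the array shapes are illustrative.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) loss for a batch of positive pairs (z1[i], z2[i]).

    z1[i] and z2[i] are projected embeddings of two patches cropped from the
    same PDE series; every other embedding in the batch acts as a negative."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 16))
# Nearly identical views (same PI) give a lower loss than unrelated views.
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.standard_normal((8, 16)))
loss_random = nt_xent_loss(z1, rng.standard_normal((8, 16)))
```

In PIANO the gradients of this loss flow through both the projector g and the PI encoder P.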

Integrate the PI representation
In this section we introduce how PIANO integrates the pre-trained PI representation into the neural operator. Given the pre-trained PI encoder P and an initial PDE field u^i_{0,t}[Ω], we first obtain the PI embedding h^i via a split-merge trick (see the Method section for more details), and then we adopt the DyConv [33] technique to incorporate the PI information into the neural operator G. In the first layer of G there are K convolutional matrices of the same size, denoted {W_{1,k}}^K_{k=1}. In detail, we transform the first Fourier or convolutional layer into a DyConv layer in the Fourier-based or convolutional-based neural operators, respectively. All other layers maintain the same structure as the original neural operators. When predicting the PDE fields for a specific instance u^i, we use an MLP to transform its PI representation h^i into K non-negative scales {a^i_k}^K_{k=1} with Σ_k a^i_k = 1, where the normalization is implemented by a softmax layer. We use {a^i_k}^K_{k=1} as the attention to reweight the K convolution matrices, i.e. W^i_1 = Σ_k a^i_k W_{1,k}. We replace the first layer of G with W^i_1 and denote this new operator as G^i, which can be considered as the personalized operator for u^i (Fig. 1(c)). It is worth mentioning that the parameters W^i_1 in G^i are obtained by a weighted summation, whose computational cost is almost negligible compared with the convolutional operation. Therefore, when aligning the parameters of PIANO and other neural operators, PIANO enjoys a comparable or faster inference speed, even considering the calculation of the PI representation h^i.
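The attention-based reweighting of the K candidate kernels can be sketched as follows; the linear "MLP" and all shapes are toy stand-ins for the trained modules.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def personalize_kernel(h, W_bank, mlp):
    """Compute attention a = softmax(mlp(h)) over K candidate kernels and
    return the personalized kernel W = sum_k a_k * W_k (a weighted sum,
    far cheaper than any convolution it parameterizes)."""
    a = softmax(mlp(h))                         # a_k >= 0, sum_k a_k = 1
    return np.tensordot(a, W_bank, axes=1), a   # contract over the K axis

rng = np.random.default_rng(0)
K, kshape, d = 4, (3, 3), 5
W_bank = rng.standard_normal((K, *kshape))      # the K kernel candidates
M = rng.standard_normal((K, d))                 # toy linear stand-in for the MLP
mlp = lambda h: M @ h
W, a = personalize_kernel(rng.standard_normal(d), W_bank, mlp)
```

The resulting W replaces the first-layer kernel of G, yielding the personalized operator G^i.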

EXPERIMENTS
In this section, we conduct a series of numerical experiments to assess the performance of our proposed PIANO method and other baseline techniques in simulating PDE systems governed by diverse PIs.

Experimental setup
We divide the temporal intervals into 200 frames for training and validation. The input and output frames are set to 20 for the neural operator and PI encoders in the experiments. In order to assess the out-of-distribution generalization capabilities of the trained operator, we set the test temporal intervals at 240, with the last 40 frames occurring exclusively in the test set. We refer to the temporal interval in the training set as the training domain, and the temporal interval that only occurs in the test set as the future domain. The spatial domain is discretized into 64 grid points for the 1D case and 64 × 64 grid points for the 2D case. The training, test and validation set sizes for all tasks are 1000, 200 and 200, respectively. All experiments are carried out using the PyTorch package [35] on an NVIDIA A100 GPU. We repeat each experiment with three random seeds from the set {0, 1, 2} and report the mean value and variance. The performance of the model is evaluated using the average relative ℓ_2 error (E_2) and the ℓ_∞ error (E_∞) over all frames in the training domain and the future domain, respectively.
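A minimal sketch of the two evaluation metrics, assuming the common definitions of frame-wise relative ℓ_2 and ℓ_∞ errors (the paper's exact averaging conventions may differ):

```python
import numpy as np

def relative_l2(pred, true):
    """Frame-wise relative l2 error ||pred_k - true_k|| / ||true_k||,
    averaged over frames."""
    diff = np.linalg.norm((pred - true).reshape(len(pred), -1), axis=1)
    ref = np.linalg.norm(true.reshape(len(true), -1), axis=1)
    return float((diff / ref).mean())

def linf_error(pred, true):
    """Frame-wise maximum pointwise error, averaged over frames."""
    return float(np.abs(pred - true).reshape(len(pred), -1).max(axis=1).mean())

# Toy check on a stack of 5 frames of a 4x4 field.
true = np.ones((5, 4, 4))
```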

Dataset
In this section, we introduce the PDE dataset utilized in this paper, including two kinds of Burgers' equation, the 1D CDE and three kinds of 2D NSEs.
Experiment E1: Burgers' equation with varying external forces f. We simulate the 1D Burgers' equation with varying external forces f, defined as

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x² + f(x),

where f(x) is a smooth function representing the external force. In this experiment, we select 14 different f to evaluate the performance of PIANO and other baseline methods under varying external forces. These forces are uniformly sampled from the set {0, 1, cos(x), cos(2x), cos(3x), sin(x), sin(2x), sin(3x), ±tanh(x), ±tanh(2x), ±tanh(3x)}. The ground-truth data are generated using the Python package 'py-pde' [36] with a fixed step size of 10^−4.
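For illustration, a minimal explicit finite-difference integrator for a forced viscous Burgers' equation on a periodic grid (a sketch under standard assumptions, not the 'py-pde' solver actually used to generate the data):

```python
import numpy as np

def burgers_step(u, f, nu, dx, dt):
    """One forward-Euler step of u_t + u*u_x = nu*u_xx + f(x), using
    central differences in space on a periodic grid."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-u * ux + nu * uxx + f)

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)                                   # toy initial condition
for _ in range(100):                            # small dt keeps the scheme stable
    u = burgers_step(u, f=np.cos(x), nu=0.1, dx=x[1] - x[0], dt=1e-3)
```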
The final time T is set to 5 for the training set and 6 for the test set.

Experiment E2: Burgers' equation with varying diffusivities D. We simulate the 1D Burgers' equation with spatially varying diffusivities D(x).

Experiment E4: NSE with varying viscosity terms ν. We simulate the vorticity fields for 2D flows within a periodic domain Ω = [0, 1] × [0, 1], governed by the NSEs:

∂ω(x, t)/∂t + u(x, t) · ∇ω(x, t) = ν Δω(x, t) + f(x),
∇ · u(x, t) = 0,

where f(x) and ν ∈ R_+ represent the forcing function and viscosity term, respectively. The viscosity is a crucial component in NSEs that determines the turbulence of flows [37,38]. We generate NSE data with varying viscosity coefficients to simulate heterogeneity, ranging from 10^−2 to 10^−5. The generation process employs the pseudo-spectral method with a time step of 10^−4 and a 256 × 256 grid size. The data are then downsampled to a grid size of 64 × 64, which aligns with the settings in [9].
The final time T is 20 and 24 for the training and test sets, respectively.

Experiment E5: NSE with varying viscosity terms ν and external forces f. In this experiment, we aim to simulate the 2D NSE as shown in Equation (8), with varying viscosity terms ν and external forces f. The viscosity coefficients ν range from 10^−2 to 10^−5. The forcing function f contains a coefficient a that is uniformly sampled from [0, 0.2]. All other experimental settings are consistent with those described in experiment E4.
Experiment E6: Kolmogorov flow with varying viscosity terms ν. We simulate the vorticity fields for 2D NSEs within a periodic domain Ω = [0, 1] × [0, 1] driven by Kolmogorov forcing [39]. The fluid fields in Equation (9) result in much more complex trajectories due to the involvement of the Kolmogorov forcing. We generate NSE data with varying viscosity coefficients to simulate heterogeneity, ranging from 10^−2 to 10^−4. All other experimental settings are consistent with those described in experiment E4.

Baselines
We consider several representative baselines from operator learning models, including the following.

• Fourier neural operator (FNO) [9]: a classical neural operator that uses the Fourier transform to handle PDE information in the frequency domain.
• Unet [40,41]: a classic architecture for semantic segmentation in biomedical imaging, recently utilized as a surrogate model for PDE solvers.
• Low-rank decomposition network (LordNet) [42]: a convolutional-based neural PDE solver that learns a low-rank decomposition layer to extract dominant patterns.
• Multiwavelet-based (MWT) model [43]: a neural operator that compresses the kernel of the corresponding operator using a fine-grained wavelet transform.
• Factorized Fourier neural operator (FFNO) [27]: an FNO variant that improves performance using a separable spectral layer and enhanced residual connections.
For PIANO we conduct experiments on PIANO + X, where X represents the backbone models.For the neural operator X and PIANO + X, we align the critical parameters of X and adjust the widths of the networks to match the number of parameters between X and PIANO + X, thereby ensuring a fair comparison.

Results
Table 2 presents the performance of various models for the PDE simulation on the experiments (E1-E6), as well as their computational costs. PIANO achieves the best prediction results across most metrics and experiments. When compared with the backbone models X (FNO, Unet and FFNO), the three variants of PIANO + X consistently outperform their backbone models on all tasks for both E_2 and E_∞ errors, demonstrating that the PI embedding can enhance the robustness and accuracy of neural operators' prediction capabilities. Specifically, PIANO + FNO, compared to FNO, reduces the error rate E_2 by 26.5%-63.1% in the training domain and by 35.7%-51.7% in the future domain over four experiments. PIANO + Unet, compared to Unet, reduces the error rate E_2 by 32.9%-76.8% in the training domain and by 36.7%-82.2% in the future domain over four experiments. PIANO provides a more significant enhancement to Unet than to FNO in most tasks. One potential explanation is that the Fourier layer within the PI encoder introduces additional frequency domain information to the convolution-based Unet, whereas FNO is already based on a Fourier layer network. We compare the vorticity fields (in E4 and E6) predicted by FNO and PIANO + FNO from T = 4 to T = 24 in Fig. 2. Within the training domain, PIANO demonstrates a superior ability to capture the intricate details of fluid dynamics compared to FNO. As for the future domain, where supervised data are lacking, both PIANO and FNO struggle to provide exact predictions in E4. However, PIANO still forecasts the corresponding trends of the fluids more accurately than FNO.
Regarding computational costs, it is worth mentioning that the PI encoder is a significantly lighter network (0.053 and 0.184 million parameters for the Burgers and NSE cases) compared to the neural operator. As a result, the inference time added by the PI encoder is generally negligible: 0.002 and 0.004 s for the Burgers and NSE data, respectively. Furthermore, in situations where the computational cost of the convolutional layers in the backbone is substantial, PIANO can considerably enhance the computation speed with the help of dynamic convolutions. More detailed discussions on computational costs are given in the online supplementary material.

Physical explanation of the PI encoder
In this section, we describe experiments to investigate the physical significance of the PI encoder on the Burgers (E1) and NSE (E4) data; specifically, whether the learned representation can reflect the PI information hidden within the PDE system.We consider two kinds of downstream task, unsupervised dimensionality reduction and supervised classification (regression), to analyze the properties of PI embeddings for PIANO.Furthermore, we compare several corresponding baselines to study the effects of each component in PIANO as follows.
• PIANO-CL: in this model, we jointly train the PI encoder and neural operator without the contrastive pre-training, which can be regarded as an FNO version of the DyConv technique. We train this model to reveal the impact of contrastive learning in PIANO.
• PIANO-SM: in PIANO, we utilize the split-merge trick to divide the PDE fields into several patches {Ω_v}^V_{v=1} and then input them into the PI encoder during the training and testing phases (Fig. 1(b) and (c)). In PIANO-SM, we directly feed the entire PDE fields into the PI encoder.
• PIANO-PC: we assert that cropping strategies should align with the physical prior of the PDE system and propose physics-aware cropping methods for contrastive learning (Fig. 1(b)). In PIANO-PC, we discard the physics-aware cropping technique and swap the two corresponding augmentation methods for the Burgers and NSE data, respectively.
For dimensionality reduction tasks, we utilize UMAP [44] to project the PI embedding into a 2D and 1D manifold for the Burgers and NSE data, respectively (Fig. 3). For the Burgers data, PIANO-CL fails to obtain a meaningful representation, highlighting the importance of contrastive learning. PIANO-SM and PIANO-PC can distinguish half of the external force types, but struggle to separate some similar functions, such as −tanh(kx) for k ∈ {1, 2, 3}. Only PIANO achieves remarkable clustering results (Fig. 3(a)). We also calculate four clustering metrics to quantitatively evaluate the performance of clustering (Fig. 3(b)), where the clustering results are obtained via K-means [45] with the PI representation. These four metrics include the silhouette coefficient, the adjusted Rand index, normalized mutual information and the Fowlkes-Mallows index, which assess the clustering quality by measuring intra-cluster similarity, agreement between partitions, shared information between partitions and the similarity of pairs within clusters, respectively. The larger their values, the better the clustering quality. As shown in Fig. 3(b), PIANO is the only method that achieves a silhouette coefficient greater than 0.65, with the other three metrics achieving values larger than 0.90; thus, PIANO significantly outperforms the other methods. For the NSE data, PIANO is the only method where the first component of the PI embeddings exhibits a strong correlation with the logarithmic viscosity term (with correlation coefficients greater than 98%). At the same time, the other three PIANO variants fail to distinguish viscosity terms ranging from 10^−3 to 10^−5 (Fig. 3(c)).
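As an illustration of these metrics, here is a numpy-only sketch of the adjusted Rand index (in practice, libraries such as scikit-learn provide all four metrics directly):

```python
import numpy as np

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index between two partitions: agreement of pairwise
    co-assignments, corrected for chance (1 = identical partitions)."""
    comb2 = lambda m: m * (m - 1) / 2.0
    _, y = np.unique(labels_true, return_inverse=True)
    _, c = np.unique(labels_pred, return_inverse=True)
    table = np.zeros((y.max() + 1, c.max() + 1))
    np.add.at(table, (y, c), 1)                 # contingency table
    index = comb2(table).sum()
    a = comb2(table.sum(axis=1)).sum()          # pairs within true classes
    b = comb2(table.sum(axis=0)).sum()          # pairs within clusters
    expected = a * b / comb2(len(y))
    return (index - expected) / ((a + b) / 2.0 - expected)
```

The index is invariant to label renaming, which is why K-means cluster IDs can be compared directly against the ground-truth force types.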
For supervised tasks, we train a linear predictor T that maps the learned representation h^i to the corresponding PDE parameters θ^i under the supervision of ground-truth labels (Table 3). For the dataset of Burgers' equation, which involves 14 types of external forces, the training of T naturally becomes a softmax regression problem. In the case of the NSE, where the viscosity term changes continuously, we treat the training of T as a ridge regression problem. According to the supervised downstream tasks, the PI encoder trained in PIANO exhibits the best ability to predict the PIs in Burgers' equation and the NSE compared to the other baseline methods, which aligns with the experimental result in the unsupervised part.
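The ridge-regression probe for the NSE case can be sketched in closed form; the synthetic data below merely simulate a linear embedding-to-parameter relation and are not the paper's embeddings.

```python
import numpy as np

def ridge_probe(H, theta, lam=1e-3):
    """Closed-form ridge regression from PI embeddings H (rows) to scalar
    PDE parameters theta; returns a linear predictor with a bias term."""
    H1 = np.hstack([H, np.ones((len(H), 1))])   # append a bias column
    w = np.linalg.solve(H1.T @ H1 + lam * np.eye(H1.shape[1]), H1.T @ theta)
    return lambda X: np.hstack([X, np.ones((len(X), 1))]) @ w

rng = np.random.default_rng(1)
H = rng.standard_normal((200, 8))               # toy embeddings
theta = H @ rng.standard_normal(8) + 0.3        # toy linear parameter map
predict = ridge_probe(H, theta)
err = np.abs(predict(H) - theta).max()
```

A well-trained PI encoder makes this linear probe accurate; a poor encoder leaves a large residual regardless of the probe.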
The results of the downstream tasks indicate that PIANO can represent the physical knowledge via a low-dimensional manifold and predict the corresponding PDE parameters, thus demonstrating the physical meaning of PIANO.

CONCLUSION
In this paper, we introduce PIANO, an innovative operator learning framework designed to decipher and integrate physical invariants from PDE series with various physical mechanisms. We propose the following future works to further enhance the capabilities and applications of PIANO.
• Expanding PIANO to PDE types with varying geometries. In this study, we primarily focused on 1D equations when simulating PDEs with varying boundary conditions. However, it would be valuable to explore the extension of PIANO to more complex PDEs, such as PDEs with 2D and 3D complex geometries.
• Addressing large-scale challenges using PIANO. In large-scale real-world problems, such as weather forecasting, PIANO can potentially extract meaningful PI representations, such as geographical information of various regions. This capability could enhance the accuracy and reliability of forecasting tasks and other large-scale applications.
• Integrating additional physical priors into PIANO. Our current study assumes that the underlying PI in the PDE system is time invariant. However, real-world systems often exhibit other physical properties, such as periodicity and spatial invariance. By incorporating these additional physical priors into the contrastive learning stage, PIANO could be applied to a broader range of problems.

Architecture of the PI encoder
In this paper, the architecture of P consists of six layers: two Fourier layers [9], two convolutional layers and two fully connected layers, in succession. The Fourier layers extract the PDE information in the frequency space, and the other layers downsample the feature map to a low-dimensional vector. We employ the 'GeLU' function as the activation function. It is important to note that we only feed a sub-patch of the PDE field to P and that the output of P is a low-dimensional vector. Furthermore, the amount of information required to infer PIs is significantly less than that needed to forecast the physical fields in a PDE system. Consequently, compared with the main branch of the neural operator, this component is a lightweight network that extracts PIs and enjoys fast inference speed.
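A minimal numpy sketch of the kind of 1D Fourier (spectral) layer used at the front of the encoder: keep the lowest frequency modes, reweight them, and transform back. Real FNO-style layers use learned complex weight tensors per channel; the single real weight vector here is a simplification.

```python
import numpy as np

def fourier_layer(u, weights, modes):
    """Keep the lowest `modes` Fourier coefficients of u, multiply them by
    per-mode weights, and transform back to physical space."""
    coeffs = np.fft.rfft(u)
    out = np.zeros_like(coeffs)
    out[:modes] = coeffs[:modes] * weights   # truncate + reweight the spectrum
    return np.fft.irfft(out, n=len(u))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
smooth = np.sin(x) + 0.5 * np.cos(2 * x)     # lives entirely in modes < 4
low = fourier_layer(smooth, weights=np.ones(4), modes=4)
```

With identity weights the layer acts as a low-pass filter, which is why low-frequency content such as a smooth signal passes through unchanged.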

Physics-aware cropping strategy
The cropping of the PDE series can be interpreted as data augmentation in contrastive learning. Unlike previous augmentation methods in vision tasks [46-49], those for PDE representation should comply with the corresponding physical prior. We have previously discussed cases where the PI represents a spatiotemporal invariant. When the PI is only a temporal invariant and exhibits spatial variation, such as an external force, it is necessary to align the spatial positions when implementing the crop operator. As a result, we extract two patches from the same spatial location for each PDE sample, i.e. {u^i_{k_1,t}[Ω_i]}_{i∈A} and {u^i_{k_2,t}[Ω_i]}_{i∈A}. For boundary invariants, we need to crop the PDE patches near the boundary to encode the boundary conditions. We illustrate all three cropping methods in Fig. 1(b)(iii). Note that we also illustrate another cropping approach, called the global cropping technique, which directly selects the PDE patch across the entire spatial field as augmentation samples, i.e. {u^i_{k_1,t}[Ω]}_{i∈A} and {u^i_{k_2,t}[Ω]}_{i∈A}. This global cropping strategy considers the time-invariant property of PIs, while ignoring the more detailed physical priors of different types of PI.
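The temporal-invariant cropping rule (same spatial window, different start frames) can be sketched as follows; the series shape and patch sizes are illustrative.

```python
import numpy as np

def crop_temporal_invariant(series, t, patch, rng):
    """Two views for a temporal invariant (e.g. an external force f(x)).

    Both patches share the SAME spatial window but have independent start
    frames, so the only reliably shared signal between the views is the
    spatially varying, time-invariant PI."""
    T, X = series.shape
    x0 = rng.integers(0, X - patch + 1)             # shared spatial location
    k1, k2 = rng.integers(0, T - t + 1, size=2)     # independent start frames
    return (series[k1:k1 + t, x0:x0 + patch],
            series[k2:k2 + t, x0:x0 + patch])

rng = np.random.default_rng(0)
series = rng.standard_normal((200, 64))             # toy [time, space] series
v1, v2 = crop_temporal_invariant(series, t=20, patch=16, rng=rng)
```

Spatiotemporal invariants relax the shared-window constraint, while boundary invariants instead pin the spatial window to the domain boundary.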

Split-merge trick
We split the PDE fields according to the physical prior in the contrastive training stage. Compared to global cropping, such a splitting strategy can encode the physical knowledge into P in a more accurate way. In the forecasting stage, we split the initial PDE fields u^i_{0,t}[Ω] into V uniform and disjoint patches {u^i_{0,t}[Ω_v]}^V_{v=1}, which are aligned with the patch size in the pre-training stage and satisfy ∪_v Ω_v = Ω. We feed all patches into P to obtain the corresponding representations h^i_v = P(u^i_{0,t}[Ω_v]), and merge them together as the PI vector of u^i, i.e. h^i := [h^i_1, ..., h^i_V] (Fig. 1(c)). This merge operation can make full use of the PDE information. In practice, we fix the parameters of the pre-trained PI encoder P and only optimize the neural operator G in the training stage.
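A sketch of the split-merge trick with a toy stand-in encoder (the real P is the trained network):

```python
import numpy as np

def split_merge_embedding(u0, encoder, V):
    """Split the initial field into V disjoint spatial patches that cover
    the domain, encode each patch, and concatenate into one PI vector."""
    patches = np.split(u0, V, axis=-1)   # uniform, disjoint, union = domain
    return np.concatenate([encoder(p) for p in patches])

# Toy encoder: mean and std of a patch (a stand-in for the trained P).
encoder = lambda p: np.array([p.mean(), p.std()])
u0 = np.arange(64.0).reshape(1, 64)      # one frame on a 64-point grid
h = split_merge_embedding(u0, encoder, V=4)   # length 4 * 2 = 8
```

Because every patch is embedded, no part of the field is discarded, unlike a single random crop.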

Figure 1 .
Figure 1. Illustration of PIANO. (a) The overall framework for PIANO when forecasting the PDE series. Given the i-th PDE initial fields u^i_{0,t}[Ω], PIANO first infers the PI embedding h^i via the PI encoder P, and then integrates h^i into the neural operator G to obtain a personalized operator G^i for u^i. After that, PIANO predicts the subsequent PDE fields with this personalized operator. (b) Training stage of the PI encoder. (i) Illustration of contrastive learning. We crop two patches from each PDE series in a mini-batch according to the physical priors. The PI encoder and the projector are trained to maximize the similarity of two homologous patches. (ii) The effect of the SimCLR loss, which brings closer (pushes apart) the representations governed by the same (different) physical parameters. (iii) Physics-aware cropping strategy of contrastive learning in PIANO. The cropping strategy should align with the physical prior of the PDE system. We illustrate the cropping strategies for spatiotemporal, temporal and boundary invariants. We also represent the global cropping strategy for comparison, which does not consider the more detailed physical priors and feeds the entire spatial fields directly. (c) Integration of the PI embedding into the neural operator. We use a split-merge trick to obtain the PI embedding h^i for the PDE field u^i, and feed h^i into a multi-layer perceptron (MLP) to obtain K non-negative scales {a^i_k}^K_{k=1} with Σ_k a^i_k = 1. We use a^i_k as the attention to reweight the DyConv layer in the neural operator and thus obtain a personalized operator for u^i, which is incorporated with the physical knowledge in h^i.
B[u](x, t) = 0 represents the boundary conditions. In this experiment, we select four types of B to evaluate the generalizability of PIANO and other baseline methods under varying boundary conditions. The four types of boundary conditions in this dataset are the Dirichlet condition (u = 0.2), the Neumann condition (∂_n u = 0.2), the curvature condition (∂²_n u = 0.2) and the Robin condition (∂_n u + u = 0.2). The data generation scheme and the final time T align with experiment E1.
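To make the four boundary-condition types concrete, here is a sketch of how each could be enforced at the right endpoint of a 1D grid using first-order one-sided finite differences (with outward normal +x there). The function names and update forms are our own illustration, not the paper's solver.

```python
import numpy as np

def apply_bc(u, kind, dx, c=0.2):
    """Enforce one boundary condition at the right endpoint of a 1D grid."""
    u = u.copy()
    if kind == "dirichlet":      # u = c
        u[-1] = c
    elif kind == "neumann":      # du/dn = c, via (u[-1]-u[-2])/dx = c
        u[-1] = u[-2] + c * dx
    elif kind == "curvature":    # d2u/dn2 = c, via second difference
        u[-1] = 2 * u[-2] - u[-3] + c * dx**2
    elif kind == "robin":        # du/dn + u = c, solved for u[-1]
        u[-1] = (c * dx + u[-2]) / (1.0 + dx)
    return u

dx = 0.01
u = np.linspace(0.0, 1.0, 101)
ud = apply_bc(u, "dirichlet", dx)
un = apply_bc(u, "neumann", dx)
ur = apply_bc(u, "robin", dx)
assert ud[-1] == 0.2
assert np.isclose((un[-1] - un[-2]) / dx, 0.2)
assert np.isclose((ur[-1] - ur[-2]) / dx + ur[-1], 0.2)
```

Each condition pins the boundary value differently, which is precisely why an operator trained on one B generalizes poorly to another unless the boundary invariant is inferred, as PIANO does.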

Figure 2. Comparison of the vorticity fields in E4 (a) and E6 (b) between FNO and PIANO+FNO from T = 4 to 24 in the periodic domain [0, 1]² for a 2D turbulent flow. Note that T = {4, 8, 12, 16, 20} lie in the training domain, while T = 24 lies in the future domain. The vorticity fields in the bounding boxes show that PIANO captures more details than FNO.

Figure 3. Performance of the learned representations on unsupervised dimensionality reduction tasks. CL, SM and PC denote contrastive learning, the split-merge trick and the physics-aware cropping strategy, respectively. (a) Dimensionality reduction of the PI embeddings via UMAP for the Burgers' data. The horizontal and vertical axes represent the two main UMAP components, and each color represents a different external force in the dataset. Colors numbered 0 to 13 correspond to the 14 types of external forces: 0, 1, cos(x), sin(x), −tanh(x), tanh(x), cos(2x), sin(2x), tanh(2x), −tanh(2x), cos(3x), sin(3x), tanh(3x) and −tanh(3x). (b) Four metrics evaluating the quality of the clustering induced by the representation vectors from different methods: the silhouette coefficient, the adjusted Rand index, normalized mutual information and the Fowlkes-Mallows index. For all four metrics, larger values indicate better clustering performance. (c) Dimensionality reduction of the PI embeddings via UMAP for the NSE data. The horizontal and vertical axes represent the first UMAP component and the logarithmic viscosity lg(ν) in the dataset. We also compute the Spearman and Pearson correlation coefficients between the first component and lg(ν), which measure the rank-order and linear relationships between the two variables, respectively.
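The evaluation protocol behind panels (b) and (c) can be reproduced in a few lines with standard library routines; the synthetic embeddings and labels below are stand-ins for the paper's data, used only to show how the metrics are computed.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import (adjusted_rand_score, fowlkes_mallows_score,
                             normalized_mutual_info_score, silhouette_score)

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 20)                   # 3 "forces", 20 samples each
emb = rng.normal(size=(60, 2)) + 10 * labels[:, None]  # well-separated clusters
pred = labels                                          # assume a perfect clustering

# Panel (b): clustering-quality metrics (larger is better for all four).
assert silhouette_score(emb, labels) > 0.5
assert adjusted_rand_score(labels, pred) == 1.0
assert normalized_mutual_info_score(labels, pred) == 1.0
assert fowlkes_mallows_score(labels, pred) == 1.0

# Panel (c): correlation between an embedding component and lg(nu).
log_nu = np.linspace(-5, -3, 60)
component = 2.0 * log_nu + rng.normal(scale=1e-3, size=60)  # near-linear relation
r, _ = pearsonr(component, log_nu)
rho, _ = spearmanr(component, log_nu)
assert r > 0.99 and rho > 0.99
```

A high Spearman coefficient alone indicates a monotone relation between the embedding and the invariant; the Pearson coefficient additionally tests whether that relation is linear.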

Table 1. Examples of three types of PIs on 1D heat equations.

D(x) is a smooth and non-negative function representing the spatially varying diffusivity. In this experiment, we select 10 different diffusivities to evaluate the performance of PIANO and other baseline methods under varying spatial fields. The 10 diffusivities are uniformly sampled from the set {1, 2, 1 ± cos(x), 1 ± sin(x), 1 ± cos(2x), 1 ± sin(2x)}. The data generation scheme and the final time T are aligned with experiment E1. Experiment E3: CDE with varying boundary conditions B. We simulate the 1D CDEs with varying boundary conditions. The vorticity fields become more complicated as ν decreases because the nonlinear term −(u · ∇)ω gradually governs the motion of the fluids.
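For intuition on the varying-diffusivity setting, here is an illustrative explicit finite-difference solver (not the paper's data-generation code) for the 1D heat equation u_t = (D(x) u_x)_x with one of the sampled diffusivities, D(x) = 1 + cos(x).

```python
import numpy as np

def heat_step(u, D_half, dt, dx):
    """One explicit conservative step; D_half holds the diffusivity
    evaluated at the cell interfaces."""
    flux = D_half * np.diff(u) / dx     # D u_x at the N-1 interfaces
    u = u.copy()
    u[1:-1] += dt / dx * np.diff(flux)  # divergence of the flux
    return u                            # endpoints held fixed (Dirichlet)

N = 201
x = np.linspace(0, 2 * np.pi, N)
dx = x[1] - x[0]
D = 1.0 + np.cos(x)                    # spatially varying, non-negative
D_half = 0.5 * (D[:-1] + D[1:])
dt = 0.4 * dx**2 / D.max()             # explicit stability restriction

u = np.sin(x) ** 2
for _ in range(100):
    u = heat_step(u, D_half, dt, dx)
assert np.isfinite(u).all() and u.max() <= 1.0 + 1e-8  # discrete max principle
```

The time-step bound dt ≲ dx²/(2 max D) is exactly the step-size restriction of classical explicit schemes mentioned in the introduction, which motivates learned surrogates in the first place.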

Table 2. Results of the PDE simulation for experiments E1, E2, E3, E4, E5 and E6: relative errors (%) and computational costs for baseline methods and PIANO. The computational cost and number of parameters reported for PIANO include both the PI encoder and the neural operator. The best results in each task are highlighted in bold.

Table 3. Performance of the learned representations on the supervised tasks: accuracy (relative ℓ2 error) of the PI encoder in PIANO and other baselines under linear evaluation on Burgers' equation (NSE). CL, SM and PC denote contrastive learning, the split-merge trick and the physics-aware cropping strategy, respectively. The best results in each task are highlighted in bold.