JEDI: circular RNA prediction based on junction encoders and deep interaction among splice sites

Abstract Motivation: Circular RNA (circRNA) is a novel class of long non-coding RNAs that have been broadly discovered in the eukaryotic transcriptome. The circular structure arises from a non-canonical splicing process in which a donor site is backspliced to an upstream acceptor site. These circRNA sequences are conserved across species. More importantly, growing evidence suggests their vital roles in gene regulation and their association with diseases. As a fundamental effort toward elucidating their functions and mechanisms, several computational methods have been proposed to predict the circular structure from the primary sequence. Recently, advanced computational methods have leveraged deep learning to capture relevant patterns from RNA sequences and model their interactions to facilitate prediction. However, these methods fail to fully explore the positional information of splice junctions and their deep interaction. Results: We present a robust end-to-end framework, Junction Encoder with Deep Interaction (JEDI), for circRNA prediction using only nucleotide sequences. JEDI first leverages the attention mechanism to encode each junction site based on deep bidirectional recurrent neural networks and then presents a novel cross-attention layer to model deep interaction among these sites for backsplicing. Finally, JEDI can not only predict circRNAs but also interpret the relationships among splice sites to discover backsplicing hotspots within a gene region. Experiments demonstrate that JEDI significantly outperforms state-of-the-art approaches in circRNA prediction at both the isoform and gene levels. Moreover, JEDI shows promising results on zero-shot backsplicing discovery, which none of the existing approaches can achieve. Availability and implementation: The implementation of our framework is available at https://github.com/hallogameboy/JEDI. Supplementary information: Supplementary data are available at Bioinformatics online.


Introduction
The ENCODE project has revealed the vital role of different forms of non-protein-coding RNAs. Among these types of RNAs, much attention has been placed on cataloging and studying the long noncoding RNAs (lncRNAs), due to their high relevance to gene regulation and diseases (Derrien et al., 2012; Ponting et al., 2009). lncRNAs are typically 200 bp to >100 kb in length (Wang et al., 2020). As a particular type of lncRNA, endogenous circular RNA (circRNA) has recently received a tremendous amount of research attention, not only because of its circularity but also because of its implications in a myriad of human diseases, such as cancer and Alzheimer's disease (Dube et al., 2019; Qu et al., 2018). circRNA arises during the process of alternative splicing of protein-coding genes, where the 5′ end of an exon is covalently ligated to the 3′ end of the same exon or a downstream exon, forming a closed continuous loop structure. This mechanism is also known as 'backsplicing'. The circular structure provides several beneficial properties over linear RNAs. More specifically, it can serve as a template for rolling circle amplification of RNAs (Boss and Arenz, 2020), rearrange the order of genetic information (Lasda and Parker, 2014), resist exonuclease-mediated degradation (Jeck et al., 2013) and create a constraint on RNA folding (Lasda and Parker, 2014). Although the consensus on biological functions, mechanisms and biogenesis remains unclear for most circRNAs (Barrett and Salzman, 2016; Yu and Kuo, 2019), there is emerging evidence suggesting their roles in acting as sponges for microRNAs (Hansen et al., 2013; Memczak et al., 2013), RNA-binding protein competition (Ashwal-Fluss et al., 2014) and inducing host gene transcription (Li et al., 2015). Evidently, as a fundamental step toward facilitating the exploration of circRNA, it is essential to have a high-throughput approach to identify circRNAs.
Multiple factors can contribute to the formation of circRNAs. These factors include complementary sequences in flanking introns (Ivanov et al., 2015), the presence of inverted repeats (Dubin et al., 1995), the number of Arthrobacter luteus (ALU) and tandem repeats (Jeck et al., 2013) and single nucleotide polymorphism (SNP) density (Thomas and Saetrom, 2014). These factors, together with the evolutionary conservation and secondary structure of RNA molecules, have been considered as discriminative features for circRNA identification. Several research efforts (Chen et al., 2018; Pan and Xiong, 2015; Wang and Wang, 2017) have leveraged these features to train a conventional statistical learning model to distinguish circRNAs from other lncRNAs. These statistical learning algorithms include support vector machines (SVM), random forests (RF) and multi-kernel learning. However, methods along this line often require an extensive domain-specific feature engineering process. Moreover, the selected features may not provide sufficient coverage to characterize the backsplicing event.
Bioinformatics, 37, 2021, i289-i298, ISMB/ECCB 2021, doi: 10.1093
Recently, deep learning architectures have been widely adopted as alternative learning algorithms that can alleviate the inadequacy of conventional statistical learning methods. Specifically, these deep learning algorithms provide powerful functionality to process large-scale data and automatically extract useful features for target tasks (LeCun et al., 2015). In the domain of circRNA prediction, the convolutional neural network (CNN) is the architecture that has been widely explored to automatically learn the critical features for prediction, either from the primary sequence (Chaabane et al., 2020; Wang and Wang, 2019) or from the secondary structure (Fiannaca et al., 2017). Although CNNs are capable of capturing relevant local patterns in gene sequences, the positional information of the splice junctions and the global context of each splice site cannot be recognized. One of these approaches (Chaabane et al., 2020) attempts to address this issue by applying recurrent neural networks (RNNs) to learn sequential and contextual information; however, essential knowledge, such as splice sites and junctions, is still ignored.
Understanding the properties of splice sites and their relationships is one of the keys to mastering RNA splicing and the formation of circRNAs, because the splicing event can be considered as interaction among those splice sites. To fathom the relations between splice sites, circDeep (Chaabane et al., 2020) explicitly analyzes the nucleotide sequences of two splice sites to predict circRNAs. DeepCirCode (Wang and Wang, 2019) utilizes CNNs to model the flanking regions around two splice sites to identify whether there is a backsplice event. However, all of the existing methods fail to model deep interaction among splice sites for circRNA prediction. For example, circDeep only measures shallow interaction among splice sites at the nucleotide level, while DeepCirCode is limited to examining only a single pair of splice sites and lacks the capacity to model more complex relations among splice sites on multi-isoform genes. Hence, there is an immense gap in comprehensively understanding the relationships between splice sites and their interaction regarding the formation of circRNAs.
In this article, we propose the framework of Junction Encoder with Deep Interaction (JEDI) to address the limitations in circRNA prediction. More precisely, we focus on predicting the existence of circRNAs from either the reference gene/isoform sequences or assembled transcript sequences by modeling splice sites and their deep interaction with deep learning techniques. First, the attentive junction encoders are presented to derive continuous embedding vectors for acceptor and donor splice sites based on their flanking regions around junctions. Based on the acceptor and donor embeddings, we propose the novel cross-attention layer to model deep interaction between acceptor and donor sites, thereby inferring cross-attentive embedding vectors. Finally, the attention mechanism is applied to determine acceptors and donors that are more important than other ones to predict if there is a circRNA. It is also important to note that the interpretability of the attention mechanism and the cross-attention layer enables JEDI to automatically discover backsplicing without training on any annotated backspliced sites.
Our contributions are 3-fold. First, to the best of our knowledge, this work is the first study to model the deep interaction among splice sites for circRNA prediction. The more profound understandings of the relationships among splice sites can intuitively benefit circRNA prediction in implying backsplicing. Second, we propose a robust and effective end-to-end framework, JEDI, to deal with both isoform-level and gene-level circRNA prediction based on the attention mechanism and the innovative cross-attention layer. More specifically, JEDI is capable of not only deriving appropriate representations from junction encoders but also routing the importance of forming circRNAs on different levels. Third, JEDI creates a new opportunity of transferring the knowledge from circRNA prediction to backsplicing discovery based on its extensive usage of attention mechanisms. Moreover, our approach can be utilized as a general and user-friendly detection tool to provide a robust estimated ranking for further validation. Extensive experiments on human circRNAs have demonstrated that JEDI significantly outperforms eight competitive baseline methods on both isoform level and gene level. The independent study on mouse circRNAs also indicates that JEDI is robust to transfer knowledge learned from human sequence to mouse for circRNA prediction. This phenomenon is supported by the observation of highly conserved circRNA across species (Barrett and Salzman, 2016;Jeck et al., 2013;Suenkel et al., 2020). In addition, we conduct the experiments to demonstrate that JEDI can automatically discover backspliced site pairs without any further annotations. Finally, an in-depth analysis of model hyperparameters and run-time presents the robustness and efficiency of JEDI.

Related work
Current works to discover circRNA can be divided into two categories: one relies on detecting backspliced junction reads from RNA-Seq data and the other examines features directly from transcript sequences.
The first category aims at detecting circRNA from expression data, specifically from RNA-Seq reads. It is mainly achieved by searching for chimeric reads that join the 3′ end to the upstream 5′ end with respect to a transcript sequence (Barrett and Salzman, 2016). Existing algorithms include MapSplice (Wang et al., 2010), CIRCexplorer (Zhang et al., 2014), KNIFE (Szabo et al., 2015), find-circ (Memczak et al., 2013) and CIRI (Gao et al., 2015, 2018). These algorithms can be quite sensitive to expression abundance, as circRNAs are often lowly expressed and fail to be captured with low sequencing coverage (Barrett and Salzman, 2016). In the comparison conducted by Hansen et al. (2016), the findings suggest dramatic differences among these algorithms in terms of sensitivity and specificity. Other caveats include long run-times, high RAM usage and/or complicated pipelines.
The second category focuses on predicting circRNA based on transcript sequences. Methods in this category leverage different features and learning algorithms to distinguish circRNA from other lncRNAs. PredcircRNA (Pan and Xiong, 2015) and H-ELM (Chen et al., 2018) develop different strategies to extract discriminative features and employ conventional statistical learning algorithms, i.e. multiple kernel learning for PredcircRNA and a hierarchical extreme learning machine for H-ELM, to build a classifier. Statistical learning approaches require explicit feature engineering and selection. However, the extracted features are dedicated to specific facets of the sequence properties and offer limited coverage of the interaction information between the donor and acceptor sites. circDeep (Chaabane et al., 2020) and DeepCirCode (Wang and Wang, 2019) are two pioneering methods that employ deep learning architectures to automatically learn complex patterns from the raw sequence without extensive feature engineering. circDeep uses CNNs with a bidirectional long short-term memory network (LSTM) to encode the entire sequence, whereas DeepCirCode uses CNNs with max-pooling to capture only the flanking sequences of the backsplicing sites. Although circDeep claims to be an end-to-end framework, it requires external resources and strategies to capture the reverse complement matching (RCM) features in the flanking sequences and the conservation level of the sequence. In addition, the RCM features only measure the match scores between sites at the nucleotide level and neglect the complicated interaction between two sites. CNNs with max-pooling aim at preserving important local patterns within the flanking sequences. As a result, DeepCirCode fails to retain the positional information of nucleotides and their corresponding convoluted results.
Besides sequence information, a few conventional lncRNA prediction methods also show potential for discovering circRNA through the secondary structure. nRC (Fiannaca et al., 2017) extracts features from the secondary structures of non-coding RNAs and adopts a CNN framework to classify different types of non-coding RNA. lncFinder (Han et al., 2019) integrates both the sequence composition and structural information as features and employs RFs. The learning process can be further optimized to predict different types of lncRNA. Nevertheless, none of these methods factors in the information specific to the formation of circRNAs, particularly the interaction information between splice sites.

Materials and methods
In this section, we first formally define the objective of this article, and then present our proposed deep learning framework, JEDI, to predict circRNAs.

Preliminary and problem statement
The vocabulary of the four nucleotides is denoted as V = {A, C, G, T}. For a gene sequence S, S[i…j] ∈ V^(j−i+1) indicates the subsequence from the ith to the jth nucleotide of S. For a gene or an RNA isoform with the sequence S, E(S) = {(a_i, d_i)} represents the given exons in the gene or the isoform, where a_i and d_i are the indices of the acceptor and donor junctions of the ith exon in S. Using only sequence information, the two goals of this work are as follows:

1. Isoform-level circRNA prediction: Given a gene sequence S and the splicing information of an isoform E(S), the goal is to identify whether this RNA isoform is a circRNA.
2. Gene-level circRNA prediction: Given a gene sequence S and all of its exon-intron boundaries E(S), this task aims at predicting whether any of the junction pairs can backsplice to form a circRNA.

Framework overview
Figure 1 illustrates the general schema of JEDI for predicting circRNAs. Each acceptor a_i and donor d_i in the gene sequence is first represented by the flanking regions A_i and D_i around its exon-intron junction. Two attentive junction encoders then derive embedding vectors for acceptors and donors, respectively. Based on the embedding vectors, we apply the cross-attention mechanism to model deep interactions between acceptors and donors, thereby obtaining donor-aware acceptor embeddings and acceptor-aware donor embeddings. Finally, the attention mechanism is applied again to aggregate the acceptor and donor representations so that the prediction can be inferred by a fully connected layer based on these representations.

Attentive junction encoders
To represent the properties of acceptors and donors in the gene sequence S, we utilize the flanking regions around junctions to derive informative embedding vectors. Specifically, as shown in Figure 2, we propose attentive junction encoders using RNNs and the attention mechanism based on acceptor and donor flanking regions.

Flanking regions as inputs
For each exon (a_i, d_i) ∈ E(S), the length-L acceptor and donor flanking regions A_i and D_i are extracted around the corresponding exon-intron junctions, where A_i[j] and D_i[j] denote the jth positions on S of the flanking regions for the acceptor a_i and the donor d_i; the region length L is a tunable hyperparameter. For simplicity, suppose we are encoding an acceptor a and a donor d with flanking regions A and D in the gene sequence S.
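The flanking-region construction can be pictured with a small sketch. The exact index arithmetic is not spelled out in this section, so the window below (L nucleotides centered on each junction) is an illustrative assumption rather than JEDI's published definition; `flanking_region` and the toy sequence are hypothetical.

```python
# Hypothetical sketch of flanking-region extraction. Assumption: each
# region is a window of L nucleotides centered on the junction index;
# JEDI's exact indexing may differ.

def flanking_region(seq, junction, L):
    """Return the length-L window of `seq` centered on `junction` (0-based)."""
    half = L // 2
    start = max(0, junction - half)
    return seq[start:start + L]

S = "ACGTACGTACGTACGT"
exons = [(4, 9), (11, 14)]          # (acceptor index, donor index) pairs

acceptor_regions = [flanking_region(S, a, 4) for a, d in exons]
donor_regions = [flanking_region(S, d, 4) for a, d in exons]
print(acceptor_regions)
print(donor_regions)
```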

k-mer embedding
To represent different positions in the sequence, we use k-mers as representations because k-mers are capable of preserving more complicated local contexts (Ju et al., 2017). Each unique k-mer is then mapped to a continuous embedding vector, as in various deep learning approaches in bioinformatics (Chaabane et al., 2020; Min et al., 2017). Formally, for each position A[j] and D[j], the corresponding k-mer embedding vectors x_j^a and x_j^d are derived by applying the embedding function to the length-K k-mers at those positions, where F(·) : V^K → R^l maps a length-K k-mer to an l-dimensional continuous representation; the embedding dimension l and the k-mer length K are two model hyperparameters. Subsequently, A and D are represented by their corresponding k-mer embedding sequences.
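The k-mer embedding step amounts to a lookup table from k-mer strings to l-dimensional vectors. A minimal sketch, with randomly initialized (untrained) vectors standing in for the learned embedding function F:

```python
import random

# Sketch of the k-mer embedding function F : V^K -> R^l, realized as a
# lookup table from each k-mer string to a vector. Values here are random
# placeholders; in JEDI they would be learned during training.
K, l = 3, 4
random.seed(0)

embedding_table = {}

def embed(kmer):
    """Map a length-K k-mer to its l-dimensional embedding, creating it lazily."""
    assert len(kmer) == K
    if kmer not in embedding_table:
        embedding_table[kmer] = [random.gauss(0, 0.1) for _ in range(l)]
    return embedding_table[kmer]

region = "ACGTAC"                        # a toy flanking region
kmers = [region[j:j + K] for j in range(len(region) - K + 1)]
x = [embed(km) for km in kmers]          # k-mer embedding sequence
print(kmers)
print(len(x), len(x[0]))
```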

Bidirectional RNNs
Based on the k-mer embedding vectors, we apply bidirectional RNNs (BiRNNs) to learn the sequential properties of genes. The k-mer embedding sequences are scanned twice, in forward and backward passes. During the forward pass, BiRNNs compute the forward hidden states h_j^{a,→} = GRU_a(h_{j−1}^{a,→}, x_j^a) and h_j^{d,→} = GRU_d(h_{j−1}^{d,→}, x_j^d), where GRU_a and GRU_d are gated recurrent units (GRUs) (Cho et al., 2014) with different parameters for acceptors and donors, respectively. Note that we adopt GRUs instead of other RNN cells such as LSTM (Hochreiter and Schmidhuber, 1997) because GRUs require fewer parameters (Jozefowicz et al., 2015). Similarly, the backward pass reads the sequences in the opposite order, thereby calculating the backward hidden states h_j^{a,←} and h_j^{d,←}. To model k-mers with context information, we concatenate the forward and backward hidden states as the hidden representations of the k-mers in A and D: h_j^a = [h_j^{a,→}; h_j^{a,←}] and h_j^d = [h_j^{d,→}; h_j^{d,←}].
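The bidirectional scan can be sketched as follows. A real GRU cell has gating parameters; the toy elementwise recurrence below is only a stand-in to show how forward and backward hidden states are computed and then concatenated per position:

```python
# Illustration of the bidirectional scan. `toy_cell` is an assumed
# stand-in for a GRU cell (a real GRU has learned gates); the point is
# the forward/backward traversal and per-position concatenation.

def toy_cell(h_prev, x):
    # h_t = 0.5 * h_{t-1} + 0.5 * x_t, elementwise
    return [0.5 * h + 0.5 * xi for h, xi in zip(h_prev, x)]

def birnn(xs, dim):
    fwd, bwd = [], []
    h = [0.0] * dim
    for x in xs:                        # forward pass
        h = toy_cell(h, x)
        fwd.append(h)
    h = [0.0] * dim
    for x in reversed(xs):              # backward pass
        h = toy_cell(h, x)
        bwd.append(h)
    bwd.reverse()
    # concatenate forward and backward states at each position
    return [f + b for f, b in zip(fwd, bwd)]

xs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy k-mer embeddings
H = birnn(xs, 2)
print(len(H), len(H[0]))    # 3 positions, each with a 4-dim hidden state
```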

k-mer attention
Since different k-mers can have unequal importance for representing the properties of splice sites, we introduce the attention mechanism (Bahdanau et al., 2015) to identify and aggregate the hidden representations of k-mers that are more important than others. The motivation of the attention mechanism is to learn a computational function that automatically estimates the importance score of each item so that the ultimate representation can focus on the more significant items. More precisely, the importance scores α_j^a and α_k^d of the representations h_j^a and h_k^d are estimated by the k-mer attention vectors t_s^a and t_s^d as α_j^a = softmax_j(tanh(F_t^a(h_j^a))^T t_s^a) and α_k^d = softmax_k(tanh(F_t^d(h_k^d))^T t_s^d), where F_t^a(·) and F_t^d(·) are fully connected layers and tanh(·) is the activation function for the convenience of similarity computation. The importance scores are first measured by inner products with the k-mer attention vectors and then normalized by a softmax function over the scores of all k-mers. Note that the k-mer attention vectors t_s^a and t_s^d are learnable and updated during optimization as model parameters. Finally, the acceptor embedding w^a of A and the donor embedding w^d of D are derived by aggregating the hidden representations of k-mers weighted by their learned importance scores: w^a = Σ_j α_j^a h_j^a and w^d = Σ_k α_k^d h_k^d.
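The attention pooling described above can be sketched in a few lines. For brevity, the fully connected layer and tanh are folded away, and the attention vector is fixed rather than learned:

```python
import math

# Sketch of attention pooling over k-mer hidden states: each state is
# scored by a dot product with an attention vector, the scores are
# softmax-normalized, and the states are aggregated by a weighted sum.
# The attention vector here is fixed for illustration; in JEDI it is a
# learned parameter, and the score also passes through a FC layer + tanh.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden, attn_vec):
    scores = [sum(h_i * t_i for h_i, t_i in zip(h, attn_vec)) for h in hidden]
    weights = softmax(scores)
    dim = len(hidden[0])
    # weighted sum of hidden states -> single embedding
    return [sum(w * h[d] for w, h in zip(weights, hidden)) for d in range(dim)]

hidden = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy k-mer hidden states
t = [1.0, 0.0]                                  # toy attention vector
w_a = attention_pool(hidden, t)
print(w_a)
```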

Cross-attention for modeling deep interaction
Modeling interactions among splice sites is essential for circRNA prediction because backsplices occur when the donors prefer the upstream acceptors over the downstream ones. Inspired by recent successes in natural language processing (Hao et al., 2017) and computer vision (Lee et al., 2018), we propose the cross-attention layer to learn deep interaction between acceptors and donors.

Cross-attention layer
For acceptors, the cross-attention layer aims at deriving cross-attentive acceptor embeddings that not only represent the acceptor sites and their flanking regions but also preserve the knowledge of relevant donors from the donor embeddings. Similarly, cross-attentive donor embeddings are simultaneously obtained for donors. To directly model the relations between embeddings, we adopt the dot-product attention mechanism (Vaswani et al., 2017) for the cross-attention layer. For each acceptor embedding w_i^a, the relevance of a donor embedding w_j^d is computed by the dot-product (w_i^a)^T w_j^d, so the attention weights b_{i,j}^a can be calculated with a softmax function over all donors. Likewise, the attention weights b_{j,i}^d for each donor embedding w_j^d are measured by dot-products with the acceptor embeddings. Stated formally, we have b_{i,j}^a = softmax_j((w_i^a)^T w_j^d) and b_{j,i}^d = softmax_i((w_j^d)^T w_i^a). The cross-attentive embeddings of acceptors and donors can then be derived by aggregations based on the attention weights as ŵ_i^a = Σ_j b_{i,j}^a w_j^d and ŵ_j^d = Σ_i b_{j,i}^d w_i^a. Note that we do not utilize the multi-head attention mechanism (Vaswani et al., 2017) because it requires much more training data to learn the multiple projection matrices. As shown in Section 4, the vanilla dot-product attention is sufficient to obtain satisfactory predictions with significant improvements over the baselines.
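A minimal sketch of the dot-product cross-attention, with toy two-dimensional embeddings standing in for the learned acceptor and donor embeddings:

```python
import math

# Sketch of the dot-product cross-attention layer: each acceptor embedding
# attends over all donor embeddings (and vice versa), and the
# cross-attentive embedding is the attention-weighted aggregation of the
# other side. Embeddings here are toy values, not learned vectors.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    t = sum(exps)
    return [e / t for e in exps]

def cross_attend(queries, keys):
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        b = softmax(scores)             # attention weights over `keys`
        out.append([sum(w * k[d] for w, k in zip(b, keys))
                    for d in range(len(keys[0]))])
    return out

acceptors = [[1.0, 0.0], [0.0, 1.0]]
donors = [[1.0, 0.0], [0.5, 0.5]]
cross_acc = cross_attend(acceptors, donors)   # donor-aware acceptor embeddings
cross_don = cross_attend(donors, acceptors)   # acceptor-aware donor embeddings
print(cross_acc)
print(cross_don)
```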

circRNA prediction
To predict circRNAs, we apply the attention mechanism (Bahdanau et al., 2015) again to aggregate cross-attentive acceptor and donor embeddings into an acceptor representation and a donor representation as ultimate features to predict circRNAs.

Acceptor and donor attention
Although the cross-attention layer provides cross-attentive embeddings for all acceptors and donors, most of the splice sites can be irrelevant to backsplicing. To tackle this issue, we present the acceptor and donor attention to identify the splice sites that are more important than others. Similar to the k-mer attention, the importance scores γ_i^a and γ_j^d of the cross-attentive embeddings ŵ_i^a and ŵ_j^d are computed with learnable attention vectors and normalized by a softmax function, and the acceptor and donor representations r^a and r^d are derived based on the attention weights of the cross-attentive embeddings as r^a = Σ_i γ_i^a ŵ_i^a and r^d = Σ_j γ_j^d ŵ_j^d.

Prediction as binary classification
Here, we treat circRNA prediction as a binary classification task. More specifically, we estimate a probabilistic score ŷ to approximate the probability that a circRNA exists. The ultimate feature vector r is obtained by concatenating the acceptor and donor representations as r = [r^a; r^d]. Finally, the probabilistic score ŷ is computed by a sigmoid function on top of fully connected layers as ŷ = σ(F_p(ReLU(F_r(r)))), where F_p(·) and F_r(·) are fully connected layers, ReLU(·) is the activation function for the hidden layer (Glorot et al., 2011) and σ(·) is the logistic sigmoid function (Han and Moraga, 1995). The binary prediction is then generated by the binary indicator function 1(ŷ > 0.5).
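The prediction head follows directly from the formula above. A sketch with illustrative placeholder weights, not trained parameters:

```python
import math

# Sketch of the prediction head: concatenate acceptor and donor
# representations, pass through a hidden layer with ReLU, then a sigmoid
# output unit. All weights below are illustrative placeholders.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, W, b):
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

r_a, r_d = [0.2, 0.8], [0.5, 0.1]
r = r_a + r_d                                   # r = [r_a ; r_d]

W_r = [[0.5, -0.5, 0.25, 0.25], [0.1, 0.2, 0.3, 0.4]]   # F_r
b_r = [0.0, 0.0]
W_p, b_p = [[1.0, 1.0]], [0.0]                           # F_p

hidden = relu(linear(r, W_r, b_r))
y_hat = sigmoid(linear(hidden, W_p, b_p)[0])
prediction = int(y_hat > 0.5)                   # binary indicator 1(y_hat > 0.5)
print(round(y_hat, 4), prediction)
```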

Learning and optimization
To solve circRNA prediction as a binary classification problem, JEDI is optimized with a binary cross-entropy loss (Hinton and Salakhutdinov, 2006). Formally, the loss function for optimization can be written as L = −(1/N) Σ_{i=1}^{N} [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)] + λ‖θ‖², where N is the number of training gene sequences; y_i is a binary indicator of whether the ith training sequence contains a circRNA; ŷ_i is the estimated probabilistic score for the ith training gene sequence; and λ is the L2-regularization weight for the set of model parameters θ.
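The objective can be computed as follows; the labels, scores and parameter vector are toy values for illustration:

```python
import math

# Sketch of the training objective: mean binary cross-entropy over the
# training sequences plus an L2 penalty on the model parameters.

def bce_with_l2(y_true, y_pred, params, lam):
    n = len(y_true)
    bce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
               for y, p in zip(y_true, y_pred)) / n
    l2 = lam * sum(w * w for w in params)       # lam * ||theta||^2
    return bce + l2

y_true = [1, 0, 1]            # toy labels: does sequence i contain a circRNA?
y_pred = [0.9, 0.2, 0.7]      # toy predicted probabilistic scores
params = [0.5, -0.3]          # toy flattened model parameters
loss = bce_with_l2(y_true, y_pred, params, lam=1e-3)
print(round(loss, 4))
```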

Remarks on the interpretability of JEDI
The usage of attention mechanisms is one of the most essential keys in JEDI, including the donor and acceptor attention, the cross-attention layer and the k-mer attention in junction encoders. In addition to choosing important information to optimize the objective, one of the most significant benefits of using attention mechanisms is the interpretability.

Application: zero-shot backsplicing discovery
For circRNAs, the attention weights become interpretable hints for discovering backsplicing without training on annotated backspliced sites. For example, when the model is optimized to accurately predict circRNAs, the weights of the donor attention are reformed to denote the important and relevant donors, which are preferred by the upstream acceptors for backsplicing. In other words, the probabilistic attention weight γ_j^d for each donor d_j can be interpreted as the probability of being a backsplice donor site, P(d_j) = γ_j^d, where the softmax function guarantees Σ_j P(d_j) = 1. Similarly, the attention weight b_{j,i}^d of each acceptor a_i for deriving the cross-attentive embedding of the donor d_j can be interpreted as the conditional probability of being selected as the backsplice acceptor site by the donor d_j, P(a_i | d_j) = b_{j,i}^d, where we also have the probabilistic property ∀j: Σ_i b_{j,i}^d = 1 from the softmax function. Based on these interpretations, for any pair of a donor d_j and an acceptor a_i, the probability of forming a backsplice can be approximated by decomposing the joint probability as P(d_j, a_i) = P(d_j) P(a_i | d_j) = γ_j^d b_{j,i}^d. Therefore, without any training annotations of backsplice sites, as in zero-shot learning (Socher et al., 2013), we can transfer the knowledge in the training data for circRNA prediction to discover potential backsplice sites by ranking the pairs of acceptors and donors according to P(d_j, a_i). In particular, these interpretations can also be aligned with the process of RNA splicing, bringing more biological insights into JEDI. In Section 4.6, we further conduct experiments to demonstrate that JEDI is capable of addressing the task of zero-shot backsplicing discovery.
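The ranking procedure reduces to multiplying the two sets of attention weights and sorting. A sketch with toy attention weights, assumed to already come from softmax layers so that each distribution sums to one:

```python
# Sketch of zero-shot backsplice discovery: combine the donor attention
# weights gamma_d[j] (interpreted as P(d_j)) with the cross-attention
# weights b[j][i] (interpreted as P(a_i | d_j)) and rank all
# (donor, acceptor) pairs by P(d_j, a_i) = P(d_j) * P(a_i | d_j).
# The weights below are toy values, not model outputs.

gamma_d = [0.7, 0.3]                   # donor attention weights (sum to 1)
b = [[0.6, 0.4],                       # b[j][i]: acceptor attention per donor
     [0.1, 0.9]]                       # each row sums to 1

joint = {(j, i): gamma_d[j] * b[j][i]
         for j in range(len(gamma_d))
         for i in range(len(b[j]))}

ranking = sorted(joint, key=joint.get, reverse=True)
for j, i in ranking:
    print(f"donor {j} -> acceptor {i}: P = {joint[(j, i)]:.2f}")
```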

Experiments
In this section, we conduct extensive experiments on benchmark datasets for two tasks and in-depth analysis to verify the performance and robustness of the proposed framework, JEDI.

Human circRNA
We use the benchmark dataset generated by Chaabane et al. (2020). The positive data generation follows a setting similar to that described in Pan and Xiong (2015) to derive 31 939 isoforms of human circRNAs covering a diverse range of tissues and cell types from the circRNADb database (Chen et al., 2016). The negative set is composed of other lncRNAs, such as processed transcripts, anti-sense, sense intronic and sense overlapping transcripts. It is constructed based on the annotation provided by GENCODE v19 (Frankish et al., 2019) with strong evidence; specifically, only the experimentally validated or manually annotated transcripts are considered, resulting in 19 683 negative isoforms. To avoid information leakage between training and evaluation through paralogous genes, we group isoforms into the same cluster if they come from the same gene or from duplicated genes. The duplicated gene information is retrieved from the Duplicated Genes Database (Ouedraogo et al., 2012). Combining the positive and negative cases, the 51 622 isoforms are grouped into 23 674 clusters, which are divided into five parts to conduct 5-fold cross-validation. The sequences of all positive and negative cases are based on hg19.

Mouse circRNA on isoform level
The mouse circRNAs are obtained from circbase (Glažar et al., 2014), which collects public circRNA datasets reported in the literature for several species. There are 1903 mouse circRNAs. Using the annotation provided by GENCODE vM1, we randomly select other lincRNAs, generating 1522 negative cases. The sequences of all positive and negative cases are based on mm9.

Baseline methods
To evaluate the performance of JEDI, we compare it with eight competitive baseline methods, including circDeep (Chaabane et al., 2020), PredcircRNA (Pan and Xiong, 2015), DeepCirCode (Wang and Wang, 2019), nRC (Fiannaca et al., 2017), SVM, RF, attentive CNN (Att-CNN) and attentive RNN (Att-RNN). Specifically, circDeep and PredcircRNA are the state-of-the-art circRNA prediction methods. DeepCirCode originally takes individual splice site pairs for backsplicing prediction, which is another research problem and leads to an enormous number of false alarms in our problem settings. To conduct fair comparisons, we modify DeepCirCode by extending the inputs to all sites and aggregating CNN representations for acceptors and donors with two max-pooling layers before applying its model structure. nRC represents lncRNA classification methods that are compatible with solving circRNA prediction as a sequence classification problem. SVM and RF apply conventional statistical learning frameworks with the compositional k-mer features proposed by Wang and Wang (2017) for backsplicing prediction. Att-CNN and Att-RNN, as popular deep learning approaches, utilize CNNs and RNNs with the attention mechanism (Bahdanau et al., 2015) for sequence modeling, thereby predicting circRNAs with a fully connected hidden layer and the ReLU activation function (Glorot et al., 2011). Note that we do not compare with CIRCexplorer2 (Zhang et al., 2016) and CIRI (Gao et al., 2015) because they aim at aligning sequencing reads to known circRNAs and performing de novo assembly of novel circRNAs, which is a completely different approach from our proposed method.

Evaluation metrics and protocol
Seven conventional binary classification metrics are selected as the evaluation metrics for both tasks, including the overall accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec) and F1-score, as well as the Matthews correlation coefficient (MCC) and the area under the receiver operating characteristic (ROC) curve (AUC) on positive cases. For all metrics, higher scores indicate more satisfactory performance. We conduct 5-fold cross-validation for evaluation on both isoform-level and gene-level circRNA prediction. Specifically, for each task, the data are randomly shuffled and evenly partitioned into five non-overlapping subsets. In the five folds of experiments, each subset in turn serves as the testing data for assessing the model trained on the remaining four subsets, thereby ensuring an unbiased and fair evaluation. Finally, we evaluate the methods by aggregating the scores over the 5-fold experiments for each metric.
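Apart from AUC, which requires the full score distribution, these metrics can be computed from a confusion matrix. A sketch with illustrative counts:

```python
import math

# Sketch of the threshold-based metrics computed from confusion-matrix
# counts (true/false positives and negatives). AUC needs the ranked
# scores of all cases and is omitted here. Counts are illustrative.

def metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    f1 = 2 * prec * sens / (prec + sens)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, prec, sens, spec, f1, mcc

acc, prec, sens, spec, f1, mcc = metrics(tp=80, tn=70, fp=10, fn=20)
print(f"Acc={acc:.3f} Prec={prec:.3f} Sens={sens:.3f} "
      f"Spec={spec:.3f} F1={f1:.3f} MCC={mcc:.3f}")
```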

Implementation details
Our approach, JEDI, is implemented in TensorFlow (Abadi et al., 2016) and released on GitHub as noted in the Abstract. The AMSGrad optimizer (Reddi et al., 2018) is adopted to optimize the model parameters with a learning rate η = 10⁻³, exponential decay rates β₁ = 0.9 and β₂ = 0.999, a batch size of 64 and an L2-regularization weight λ = 10⁻³. As for the hyperparameters of JEDI, the k-mer size K and the dimension l of the k-mer embeddings are set to 3 and 128, respectively. We set the length of the flanking regions L to 4. The hidden state size of the GRUs for both directions in the junction encoders is 128. The size of all attention vectors is set to 16. The number of units in the fully connected hidden layer F_r(·) for circRNA prediction is 128. The model parameters are trained until convergence for each fold in cross-validation. For the baseline methods, the experiments for circDeep, PredcircRNA and nRC are carried out with the publicly available implementations released by the authors of the original papers. SVM and RF are implemented in Python with the scikit-learn library (Pedregosa et al., 2011). The deep learning approaches DeepCirCode, Att-CNN and Att-RNN are implemented in TensorFlow, the same as our proposed JEDI. For all methods, we conduct parameter fine-tuning for fair comparisons. All of the experiments are conducted on a computational server with one NVIDIA Tesla V100 GPU and one 20-core Intel Xeon CPU E5-2698 v4 @ 2.20 GHz.

Isoform-level circRNA prediction
Table 1 shows the performance of all methods for isoform-level circRNA prediction. Among the baseline methods, circDeep, the state-of-the-art approach, and DeepCirCode, which considers junctions, perform the best. This is because circDeep explicitly accounts for the reverse complementary sequence matches in the flanking regions of the junctions, and DeepCirCode models the flanking regions with deep learning. Consistent with the previous study (Chaabane et al., 2020), PredcircRNA performs worse than circDeep.
With compositional k-mer-based features designed for backsplicing prediction, SVM and RF surprisingly outperform PredcircRNA by 11.13% and 16.14% in accuracy, respectively. This not only shows that k-mers are universally beneficial across different tasks but also supports the rationale of using k-mers for the junction encoders in JEDI. As an lncRNA classification method, nRC also shows its potential for circRNA prediction with a 15.37% improvement over PredcircRNA in accuracy. Although Att-CNN and Att-RNN utilize the attention mechanism, they can only model the whole sequences and deliver limited performance without any knowledge of the junctions. Our proposed JEDI significantly outperforms all of the baseline methods across all evaluation metrics. In particular, JEDI achieves 9.80% and 7.90% improvements over DeepCirCode in accuracy and F1-score, respectively. The experimental results demonstrate the effectiveness of the junction encoders and the cross-attention layer that models deep interaction among splice sites.
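The compositional k-mer features used by the SVM and RF baselines can be sketched as a normalized count vector over all 4^k possible k-mers. The function below is illustrative; the exact feature set used in the experiments may differ.

```python
from itertools import product

def kmer_composition(seq, k=3):
    """Return the normalized k-mer composition vector of a sequence.

    A sketch of compositional k-mer features; windows containing
    ambiguous bases (e.g. N) are skipped.
    """
    alphabet = "ACGT"
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = [0] * len(kmers)
    total = 0
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in index:              # only unambiguous k-mers count
            counts[index[window]] += 1
            total += 1
    return [c / total if total else 0.0 for c in counts]

features = kmer_composition("ACGTACGTAC", k=3)
```

The resulting 64-dimensional vector (for k = 3) can be fed directly to an SVM or random forest classifier.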

Gene-level circRNA prediction
We further evaluate all methods on gene-level circRNA prediction. Note that this task is more difficult than isoform-level prediction because each junction can be a backsplice site. Since a full gene sequence can encode multiple isoforms, there can be multiple site pairs forming backsplices for different isoforms. Consequently, models cannot rely on absolute positions for circRNA prediction. As shown in Table 2, all methods deliver worse performance than their results in isoform-level circRNA prediction. Notably, the evaluation metrics demonstrate a similar trend to that shown in Table 1. DeepCirCode and circDeep are still the best baseline methods, showing the robustness of exploiting knowledge about splice junctions. SVM, RF and nRC still outperform PredcircRNA by at least 15.08% in accuracy. Att-CNN and Att-RNN, which use the attention mechanism, still fail to deliver competitive performance because they are unaware of junction information, which is essential for backsplicing events. In this more difficult task, JEDI consistently surpasses all of the baseline methods across all evaluation metrics. For instance, JEDI beats DeepCirCode by 11.94% and 11.75% in accuracy and F1-score, respectively. The experimental results further reveal that our proposed JEDI is capable of tackling different scenarios of circRNA prediction with consistently satisfactory predictions.

Independent study on mouse circRNAs
To demonstrate the robustness of JEDI, we conduct an independent study on the dataset of mouse circRNAs. Previous studies have shown that circRNAs are evolutionarily conserved (Barrett and Salzman, 2016; Jeck et al., 2013; Suenkel et al., 2020), and thus we evaluate the potential of predicting circRNAs across different species. More precisely, we train each method on the isoform-level human dataset and then predict the circRNAs in the mouse dataset. Note that some of the required features for PredcircRNA are missing in the mouse dataset; moreover, PredcircRNA performs the worst in the other experiments. For these reasons, we exclude PredcircRNA from this study. Table 3 presents the experimental results of the independent study. Compared to the experiments conducted on the same species as shown in Table 1, most of the deep learning methods show slightly lower performance because they are specifically optimized for human data; SVM and RF retain similar performance in the independent study, probably because k-mer features are simpler and generalize more readily across species. Interestingly, the accuracy of circDeep drops significantly in this study. This is likely because circDeep heavily pre-trains its sequence modeling on human data, leading to serious over-fitting. As a result, our proposed JEDI still outperforms all of the baseline methods, demonstrating that JEDI is robust across datasets of different species.

Zero-shot backsplicing discovery
As mentioned in Section 3.7, the interpretability of the attention mechanisms and the cross-attention layer enables JEDI to achieve zero-shot backsplicing discovery. To evaluate the performance of zero-shot backsplicing discovery, we compute the probabilistic score P(d_j, a_i) from the attention weights γ_{d_j} and β_{d_j,i}, indicating the likelihood of forming a backsplice for each pair of a candidate donor d_j and a candidate acceptor a_i. Hence, we can simply evaluate the probabilistic scores with the ROC curve and the AUC. Note that we still apply 5-fold cross-validation for these experiments based on the gene-level human circRNA dataset. Since none of the existing methods can address the task of zero-shot backsplicing prediction, we compare against random guessing, which is equivalent to the chance line in the ROC curves with an AUC of 0.5. Figure 3 depicts the ROC curves with AUC scores over the five folds of experiments. The results show that the backspliced site pairs discovered by JEDI are effective, with an average AUC of 0.8002. In addition, JEDI is also robust in this task, with a small standard deviation of AUC scores. Since the cross-attention layer is a major contribution of JEDI, we conduct a further study on how donor and acceptor embeddings interact with each other in Supplementary Section S1.
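The zero-shot evaluation above can be sketched as two steps: combine the attention weights into a score per (donor, acceptor) pair, then compute a rank-based AUC over all candidate pairs. The multiplicative combination rule and the toy weights below are illustrative assumptions; the paper's exact definition of P(d_j, a_i) may differ.

```python
def pair_scores(gamma, beta):
    """Combine donor attention weights gamma[j] with cross-attention
    weights beta[j][i] into a score for each (donor j, acceptor i) pair.

    Hypothetical combination rule for illustration only.
    """
    return [[gamma[j] * b for b in row] for j, row in enumerate(beta)]

def roc_auc(scores, labels):
    """Rank-based ROC AUC: the probability that a randomly chosen
    positive pair outranks a randomly chosen negative pair."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two donors x two acceptors, one true backsplice pair.
scores = pair_scores([0.7, 0.3], [[0.9, 0.1], [0.4, 0.6]])
flat = [s for row in scores for s in row]
auc = roc_auc(flat, [1, 0, 0, 0])
```

Because the scores are only used for ranking, no threshold is needed, which is what makes a zero-shot evaluation against the 0.5 chance line possible.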

Analysis and discussions
In this section, we first discuss the impacts of hyperparameters on JEDI and then conduct a run-time analysis for all methods to verify the efficiency of JEDI. Note that, for the hyperparameter analysis, we adjust the target hyperparameter while the other hyperparameters are fixed to the values used in the experiments, as mentioned in Section 4.2.

Note: We report the mean and standard deviation for each metric.

Length of flanking regions L
Figure 4 illustrates the circRNA prediction performance of JEDI over different flanking region lengths. For all evaluation metrics, the performance slightly improves as L increases to 4. However, the performance significantly drops when L ≥ 32. This shows that nucleotides nearer to the junctions are more important than more distant ones for predicting backsplicing, consistent with previous studies on RNA splicing (Ohshima and Gotoh, 1987). Moreover, circRNAs tend to contain fewer nucleotides than other transcripts from the same gene (Jeck and Sharpless, 2014), so excessive and redundant information could only introduce noise and lower the prediction performance.

Fig. 4. The isoform-level circular RNA prediction performance of JEDI with different flanking region lengths L based on the 5-fold cross-validation. We report the mean for each metric and apply error bars to indicate standard deviations.
Fig. 5. The isoform-level circular RNA prediction performance of JEDI with different k-mer sizes K based on the 5-fold cross-validation. We report the mean for each metric and apply error bars to indicate standard deviations.
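Extracting the L-nucleotide flanking regions around a junction, as analyzed above, can be sketched as simple slicing. The coordinate convention and boundary handling below are illustrative assumptions, not the paper's exact preprocessing.

```python
def junction_flanks(seq, donor_pos, acceptor_pos, L=4):
    """Extract the L-nt regions flanking the donor and acceptor
    junctions of a sequence (0-based junction positions).

    Sketch only: positions at sequence boundaries are truncated
    rather than padded, which may differ from the actual pipeline.
    """
    def flanks(pos):
        upstream = seq[max(0, pos - L):pos]   # L nt before the junction
        downstream = seq[pos:pos + L]         # L nt after the junction
        return upstream, downstream
    return {"donor": flanks(donor_pos), "acceptor": flanks(acceptor_pos)}

regions = junction_flanks("AACCGGTTAACCGGTT", donor_pos=8, acceptor_pos=4, L=4)
```

With L = 4, each junction contributes only eight nucleotides of context, which is consistent with the observation that larger L mostly adds noise.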

Size of k-mers K
The derivation of k-mers is crucial for JEDI because JEDI treats k-mers as the fundamental inputs over gene sequences. Figure 5 shows how the size of k-mers affects the prediction performance. JEDI performs the best with 2- and 3-mers, while the performance degrades with longer or shorter k-mers. This could be because a small k-mer size makes individual k-mers less informative as representations. In addition, the embedding space of long k-mers could be too enormous for JEDI to learn with limited training data. It is also worth mentioning that 1-mers lead to much higher standard deviations because their low significance induces high instability and sensitive embeddings during the learning process. This finding is also consistent with previous studies (Sacan and Toroslu, 2008). In addition to the flanking region length L and the k-mer size K, we also analyze how the embedding dimension l affects the performance in Supplementary Section S2.
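The trade-off above comes from the vocabulary size growing as 4^K: each k-mer must map to an integer id for an embedding lookup, so long k-mers spread limited training data over a huge table. A minimal base-4 tokenization can be sketched as follows; JEDI's actual vocabulary construction may differ.

```python
def kmer_tokens(seq, k=3):
    """Map a nucleotide sequence to integer k-mer ids in [0, 4**k)
    via base-4 encoding over ACGT, suitable for an embedding lookup.

    Illustrative sketch; windows with ambiguous bases are skipped.
    """
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    tokens = []
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if all(b in code for b in window):   # skip windows containing N etc.
            tid = 0
            for b in window:
                tid = tid * 4 + code[b]      # base-4 positional encoding
            tokens.append(tid)
    return tokens

tokens = kmer_tokens("ACGTACGT", k=3)
```

With K = 3 the embedding table has only 64 rows, whereas K = 8 would already require 65 536 rows, illustrating why large K is hard to learn from limited data.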

Run-time analysis
To verify the efficiency of JEDI, we conduct a run-time analysis for all methods in our experiments based on the task of isoform-level circRNA prediction. For fair comparisons, all methods can access the same computational resources. Note that we only consider the time spent in training and testing. The run-time of feature extraction and disk I/O is excluded because the features can be pre-processed, and disk I/O can be affected by many factors that are irrelevant to the methods, such as I/O scheduling in operating systems. As shown in Table 4, JEDI is efficient, requiring less than 3 min on average, because it only focuses on junctions and flanking regions. Similarly, DeepCirCode, which is also a junction-based deep learning method, has execution time comparable to JEDI. In contrast, Att-CNN and Att-RNN are relatively inefficient because they scan the whole sequences in every training batch, and Att-RNN with its non-parallelizable recurrent units is the slower of the two. Although nRC reads the whole sequences, it runs faster than some attention-based methods because of its simpler model structure. SVM, RF and PredcircRNA are the most efficient because they apply straightforward statistical machine learning frameworks for training. As a side note, the feature extraction of PredcircRNA is extremely expensive, costing more than 28 h on average to extract the multi-facet features in our experiments. circDeep is the least efficient method in our experiments because it consists of many time-consuming components, such as embedding and LSTM pre-training.

Conclusions
In this article, we propose a novel end-to-end deep learning approach for circRNA prediction that appropriately models splice sites with the flanking regions around junctions and studies the deep relationships among these sites. The attentive junction encoders are first introduced to represent each splice site, and the innovative cross-attention layer is proposed to learn the deep interaction among splice sites. Moreover, JEDI is capable of discovering backspliced site pairs without training on annotated site pairs. The experimental results demonstrate that JEDI is effective and robust in circRNA prediction on different data levels and across different species. Most importantly, the backspliced site pairs discovered by JEDI are promising, as they designate the hotspots for circRNA formation. The reasons and insights behind these observations and discoveries can be summarized as follows: (i) JEDI only models the valuable and essential flanking regions around the junctions of splice sites, thereby discarding irrelevant and redundant information for circRNA prediction; (ii) the properties of splice sites and the essential information for forming circRNAs are well preserved by the junction encoders; and (iii) the attention mechanisms and the cross-attention layer provide intuitive and interpretable hints that implicitly model the backsplicing events, as demonstrated in the experiments. Due to data limitations, we are only able to examine the effectiveness of transferring the learned knowledge between humans and mice. As a future direction, we plan to experiment with more species when more data become available. Additionally, we also plan to explore the potential of extending JEDI to support circRNA prediction directly from sequencing reads.