iEnhancer-ELM: improve enhancer identification by extracting position-related multiscale contextual information based on enhancer language models

Abstract

Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many feature extraction methods have been proposed to improve the performance of enhancer identification, they cannot learn position-related multiscale contextual information from raw DNA sequences.

Results: In this article, we propose a novel enhancer identification method (iEnhancer-ELM) based on BERT-like enhancer language models. iEnhancer-ELM tokenizes DNA sequences with multiscale k-mers and extracts contextual information of different scale k-mers related to their positions via a multi-head attention mechanism. We first evaluate the performance of different scale k-mers, then ensemble them to improve the performance of enhancer identification. The experimental results on two popular benchmark datasets show that our model outperforms state-of-the-art methods. We further illustrate the interpretability of iEnhancer-ELM. As a case study, we discover 30 enhancer motifs via a 3-mer-based model, 12 of which are verified by STREME and JASPAR, demonstrating that our model has a potential ability to unveil the biological mechanism of enhancers.

Availability and implementation: The models and associated code are available at https://github.com/chen-bioinfo/iEnhancer-ELM

Supplementary information: Supplementary data are available at Bioinformatics Advances online.


Introduction
Enhancers are 50-1500 bp non-coding DNA fragments located in distal regions of target genes. They can cis-regulate target gene expression and control the transcription and translation of RNAs and proteins (Rong et al., 2020). Enhancers have been found to be closely associated with several important disease genes, including oncogenes and tumor suppressor genes (Herz, 2016). Given the importance of enhancers in gene regulation, enhancer identification has become a research hotspot, especially in the field of genome-wide enhancer annotation, and many computational methods have been developed for this task.
Computational methods usually first extract features from raw enhancer sequences and then train classifiers to predict enhancers. Thus, learning effective features is one of the critical issues for improving the performance of enhancer identification. Depending on their feature extraction manner, computational methods can be categorized into three classes: epigenetic information-based methods, sequence-component-based methods and deep learning-based methods. Epigenetic information-based methods transform external biological measurements, such as genome-wide chromatin signatures (Firpi et al., 2010) and histone epigenetic marks (Fernandez and Miranda-Saavedra, 2012), into enhancer features. However, they cannot be applied to cell lines with sparse data. Sequence-component-based methods extract features from enhancer sequences according to special rules for inherent sequence properties. For example, iEnhancer-2L (Liu et al., 2016) used the pseudo k-tuple nucleotide composition (PseKNC) (Chen et al., 2014) to represent sequences; iEnhancer-EL (Liu et al., 2018), iEnhancer-XG (Cai et al., 2021) and Enhancer-IF (Basith et al., 2021) integrated multiple rules to extract features from sequences. However, these special rules are insufficient, as they are limited by human experience. Deep learning-based methods have achieved outstanding performance on enhancer identification by automatically learning powerful features in a data-driven manner. For example, iEnhancer-ECNN (Nguyen et al., 2019) and iEnhancer-GAN (Chen et al., 2021; Yang et al., 2021) used convolutional neural networks (CNNs) for feature extraction. However, CNNs can only focus on local information (Tang et al., 2022). BiRen (Yang et al., 2017) used recurrent neural networks (RNNs) to extract features, but RNNs are vulnerable to vanishing gradients when processing long sequences.
In recent years, novel natural language processing (NLP) techniques (Ferruz and Höcker, 2022; Wu et al., 2023; Yan et al., 2023), such as word embeddings, attention mechanisms and pre-trained language models, have shown an impressive ability to extract complex features from sentences. DNA is usually regarded as the 'language of life', whose alphabet is the four nucleic acids (Li et al., 2021; Liu et al., 2019). Given this similarity between DNA sequences and natural language sentences, NLP techniques have been successfully applied to enhancer identification. EPIVAN (Hong et al., 2020) identified enhancers based on the nucleic acid embeddings learned by dna2vec (Ng, 2017). iEnhancer-5step (Le et al., 2019) further improved the performance based on FastText (Bojanowski et al., 2017), an extension of dna2vec. However, because dna2vec relies only on local information from neighbors, neither EPIVAN nor iEnhancer-5step could capture global contextual relations in enhancer sequences. To tackle this issue, pre-trained language models with attention mechanisms were introduced into enhancer identification. For example, BERT-Enhancer (Le et al., 2021) incorporated Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) as the embedding layer to convert raw enhancer sequences into vector representations. Although BERT-Enhancer achieved comparable performance with existing methods, it used a BERT model pre-trained on a human language corpus; therefore, the patterns captured by BERT-Enhancer lack interpretability. Currently, pre-trained biological language models, e.g. DNABERT (Ji et al., 2021) and TFBert (Luo et al., 2023), are revolutionizing various computational biology tasks. However, the aim of these pre-trained models is to learn universal representations of biological sequences, not to solve specific tasks.
Thus, to facilitate research on enhancer identification, it is urgent to construct a BERT-based enhancer language model. Applying language techniques to biological sequences remains challenging (Ferruz and Höcker, 2022), since words in human languages are natural, whereas the definition of words in biological sequences lacks clear evidence. k-mers are usually regarded as the words of the 'language of life'. However, k-mers of different lengths contain multi-scale information (Jin et al., 2022; Yan et al., 2022). Integrating this multi-scale contextual information is another challenge for enhancer identification.
In this article, we propose a novel enhancer identification method named iEnhancer-ELM, which tokenizes DNA sequences with different scale k-mers and captures the contextual information of k-mers by incorporating pre-trained BERT-based enhancer language models. The representation of a DNA sequence is generated by averaging the token embeddings projected by the encoder layers; if the length of an input sequence exceeds the limitation of the encoder layers, it is split into several segments. The generated representations are then fed into an MLP layer for classification, as shown in Figure 1. We evaluate iEnhancer-ELM on two popular comprehensive benchmark datasets. Experimental results show that the different scale k-mers have comparable performance on the same datasets, but their sensitivity and specificity differ greatly. By integrating multi-scale k-mers, iEnhancer-ELM outperforms existing state-of-the-art methods. To further illustrate the interpretability of the attention mechanism, we present a case study that uses the 3-mer-based model to discover motifs. We find 30 enriched enhancer motifs, 12 of which are verified by a widely used motif discovery tool (STREME) and a popular transcription factor binding profile database (JASPAR), demonstrating the ability of iEnhancer-ELM to unveil the biological mechanism of enhancers.

Benchmark datasets
To comprehensively evaluate iEnhancer-ELM, we acquire two popular benchmark datasets: Liu's dataset (Liu et al., 2016) and Basith's dataset (Basith et al., 2021). The sequences in Liu's dataset were selected from nine human cell lines. All sequences were cropped to 200 bp and filtered by CD-HIT (Li and Godzik, 2006) to ensure that their pairwise similarity is less than 80%. For a fair comparison, the benchmark dataset has been split into a training dataset and a test dataset: there are 1484 enhancers and 1484 non-enhancers in the training dataset, and 200 enhancers and 200 non-enhancers in the test dataset. As we aim to evaluate the capability of iEnhancer-ELM for enhancer identification and motif discovery, we only conduct experiments on the identification dataset (first phase), not on the strength dataset (second phase).
Basith's dataset is more comprehensive and contains eight subsets, each drawn from one cell line: epithelial cells (HEK293), normal human epidermal keratinocytes (NHEK), lymphoblasts (K562), B lymphocytes (GM12878), human mammary epithelial cells (HMEC), human skeletal muscle myoblasts (HSMM), normal human lung fibroblasts (NHLF) and human umbilical vein endothelial cells (HUVEC). Unlike Liu's dataset, in which the sequence length is fixed, the sequences in Basith's dataset have variable lengths ranging from 204 to 2000 bp. The redundancy was reduced to 60% using CD-HIT for each cell line. Besides, this benchmark dataset has equal numbers of enhancer and non-enhancer sequences in the training sets, but in the test sets the number of non-enhancer sequences is more than twice that of enhancer sequences. Thus, Basith's dataset is closer to real-world situations. A more detailed description of this dataset is given in Basith et al. (2021).

Tokenization of enhancer sequences
An enhancer sequence of length L nucleotides can be represented as

S = N_1 N_2 ... N_L,  (1)

where N_i denotes any of the four nucleotides {A, C, T, G} at position i. We use overlapped k-mers as the 'words' of an enhancer sequence, rather than single nucleotides, because k-mers contain richer contextual information. The overlapped k-mers are generated by a sliding window of size k along the enhancer sequence. Therefore, the original sequence of length L nucleotides is tokenized into L − k + 1 k-mers. In this article, we choose k = 3, 4, 5, 6 considering the trade-off between vocabulary size and the information in each token. Besides, BERT-based language models have five special tokens: the classification token [CLS], padding token [PAD], unknown token [UNK], separation token [SEP] and mask token [MASK]. Thus, there are 4^k individual k-mers and five special tokens in the vocabulary.
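This overlapping tokenization can be sketched in a few lines of Python (an illustrative sketch; the function name is ours, not from the released code):

```python
def tokenize_kmers(seq: str, k: int) -> list:
    """Tokenize a DNA sequence into overlapping k-mers via a sliding window of size k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# A sequence of length L yields L - k + 1 tokens:
print(tokenize_kmers("ATCGGAC", 3))  # ['ATC', 'TCG', 'CGG', 'GGA', 'GAC']
```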

Pre-trained BERT models on human genome corpus
We select a pre-trained BERT-based model, DNABERT (Ji et al., 2021), as an encoder to transform enhancer sequences into feature representations. DNABERT is composed of 12 attention layers with 12 attention heads in each layer. The attention mechanism is calculated by Eq. (2):

Attention(Q, K, V) = softmax(QK^T / √d_k) V,
head_i = Attention(M W_i^Q, M W_i^K, M W_i^V),
MultiHead(M) = Concat(head_1, ..., head_h) W^O,  (2)

where M is the hidden state of an input sequence, h is the number of heads in an attention layer, {W_i^Q, W_i^K, W_i^V, W^O} are the learnable attention projection matrices, and Concat(·) denotes the concatenation operation.
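For intuition, the scaled dot-product attention inside each head can be sketched in pure Python (a minimal illustration only; DNABERT itself uses batched tensor operations):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(Q, K, V, d_k):
    """For each query row, weight the value rows by softmax(q . k / sqrt(d_k))."""
    out = []
    for q in Q:
        scores = [sum(qe * ke for qe, ke in zip(q, k_row)) / math.sqrt(d_k)
                  for k_row in K]
        weights = softmax(scores)  # weights sum to 1 per query
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Each output row is a convex combination of the value rows, which is why attention weights can later be read as importance scores over tokens.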
DNABERT learned the embeddings of k-mers via self-supervised learning on a masked language modeling (MLM) task, whose aim is to predict masked tokens from contextual information. Unlike the conventional mask mechanism that randomly masks individual tokens, DNABERT masks a span of k contiguous tokens to avoid inferring a masked token directly from its overlapping k-mer neighbors. The mask rate was set to 15%. The training corpus of DNABERT was generated by splitting the complete human genome (Harrow et al., 2012) into non-overlapping subsequences, where half of the subsequences had a length of 510 bp and the other half had random lengths between 10 and 509 bp.
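The span-masking idea can be sketched as follows (a simplified illustration; the real DNABERT masking additionally applies the usual BERT replacement rules and avoids special tokens):

```python
import random

def span_mask(tokens, k, rate=0.15, mask_token="[MASK]"):
    """Mask contiguous spans of k tokens so that a masked k-mer cannot be
    read off its overlapping neighbors; roughly `rate` of tokens are masked."""
    out = list(tokens)
    n_spans = max(1, int(len(tokens) * rate / k))
    for _ in range(n_spans):
        start = random.randrange(0, len(tokens) - k + 1)
        out[start:start + k] = [mask_token] * k
    return out
```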

Representation of enhancer sequences
We use DNABERT to learn the representation of enhancer sequences. However, DNABERT limits its input length to fewer than 510 tokens. As described above, we verify the validity of our model on two benchmark datasets: Liu's dataset consists of sequences with a fixed length of 200 bp, while Basith's dataset consists of sequences with variable lengths ranging from 204 to 2000 bp.
For Liu's dataset (Liu et al., 2016), all sequences are fed directly into DNABERT after tokenization. Each token (the k-mer tokens and the special tokens [CLS] and [SEP]) is projected into a 768-dimensional embedding vector. We take the average of the embedding vectors of all k-mer tokens as the representation of an enhancer sequence. For Basith's dataset (Basith et al., 2021), due to the maximum input length of BERT-based models, we partition each sequence into four non-overlapping segments from left to right, where each segment comprises 500 tokens except the last, which may have fewer. These segments are fed into DNABERT separately. We average the embedding vectors within each segment and then concatenate the averages into a 3072-dimensional vector as the representation of the enhancer sequence. For enhancer sequences that are too short to be divided into four segments, we pad the final embedding vector with zeros until its length equals 3072.
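The segment-wise pooling scheme for long sequences can be sketched as follows (an illustrative sketch; `token_embs` stands for the per-token DNABERT embeddings, and the helper name is ours):

```python
def sequence_embedding(token_embs, seg_len=500, n_segs=4, dim=768):
    """Mean-pool token embeddings per segment of up to `seg_len` tokens,
    concatenate the segment means, and zero-pad to a fixed n_segs * dim vector."""
    out = []
    for start in range(0, len(token_embs), seg_len):
        seg = token_embs[start:start + seg_len]
        # mean embedding of this segment, one value per dimension
        out.extend(sum(tok[j] for tok in seg) / len(seg) for j in range(dim))
    out.extend([0.0] * (n_segs * dim - len(out)))  # pad short sequences with zeros
    return out
```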

Fine tuning for enhancer identification
DNABERT was pre-trained on human genome data via self-supervised learning to capture the general contextual relationships of nucleic acids. Thus, it is not optimal for enhancer identification, and we need to fine-tune it to focus on enhancer patterns.
We attach a 2-layer perceptron network as the classifier to the pre-trained model, and then fine-tune the pre-trained weights on the two enhancer benchmark datasets. The first layer is a hidden layer, and the second layer is an output layer that outputs a real value in [0, 1] representing the probability of being an enhancer. A sequence is considered an enhancer if its predicted probability is larger than a threshold. For training, the following loss function is used:

L = −(1/N) Σ_{i=1}^{N} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ] + λ‖θ‖_2^2,  (3)

where the first term is the cross-entropy loss and the second term is the L2 regularization.
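The training objective can be sketched in plain Python (a minimal illustration; the regularization coefficient `lam` is a hypothetical value, and in practice the loss is computed with PyTorch tensors):

```python
import math

def bce_with_l2(y_true, y_pred, weights, lam=1e-4):
    """Mean binary cross-entropy over predicted enhancer probabilities
    plus an L2 penalty on the model weights."""
    ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for y, p in zip(y_true, y_pred)) / len(y_true)
    l2 = lam * sum(w * w for w in weights)
    return ce + l2
```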

Motif discovery via attention mechanism
We analyze the biological patterns captured by enhancer language models by exploring the weights of the attention mechanism (Fig. 1B). The patterns that pass a hypergeometric test are then considered as motifs. Each enhancer language model contains 12 encoder layers, and each layer contains 12 attention heads. Since the last layer has the highest distinguishing power for identifying enhancers, we search for biological patterns of enhancers in the average of the 12 attention matrices of the last layer.
Step 1: calculate the attention weight of each nucleotide in an enhancer sequence. Since the vocabulary is composed of k-mer tokens and special tokens, the attention weights represent relationships between tokens rather than between nucleotides. We only consider the attention between [CLS] and the k-mer tokens, as [CLS] is usually used to represent the entire sequence. The attention each k-mer receives from [CLS] is first distributed equally among its nucleotides, and then all contributions to a nucleotide are averaged as its attention weight in the enhancer sequence. This attention weight represents the importance of the corresponding nucleotide to the entire original sequence.
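Step 1 can be sketched as follows (an illustrative sketch; `cls_attn[i]` stands for the averaged attention from [CLS] to the i-th k-mer token):

```python
def nucleotide_attention(cls_attn, k, seq_len):
    """Distribute each k-mer's [CLS] attention equally over its k nucleotides,
    then average the contributions each nucleotide receives from the
    overlapping k-mers that cover it."""
    sums = [0.0] * seq_len
    counts = [0] * seq_len
    for i, a in enumerate(cls_attn):       # token i covers positions i .. i+k-1
        for pos in range(i, i + k):
            sums[pos] += a / k
            counts[pos] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```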
Step 2: find the potential patterns. Potential patterns are identified from the above attention vectors by two criteria: (i) all attention weights of the nucleotides in a pattern are larger than the average of the attention vector; (ii) the length of a pattern is no less than five nucleotides, with at most one gap. To facilitate pairwise sequence alignment, we expand a potential pattern region by 10 nucleotides along both directions of the original sequence.
Step 3: filter significant candidates. For each potential pattern, we perform a hypergeometric test (Beal, 1976) to calculate the P-value and determine whether the candidate is significant:

p = Σ_{i=k}^{min(n,K)} [ C(K, i) C(N−K, n−i) ] / C(N, n),  (4)

where X represents a candidate pattern, K is the number of positive sequences, k is the number of occurrences of X in positive sequences, N is the number of all sequences and n is the number of occurrences of X in all sequences. Here, we apply the Aho-Corasick algorithm (Aho and Corasick, 1975) to compute n and k. The significance threshold is set to 0.005.
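The one-sided hypergeometric P-value can be computed with the standard library (a minimal sketch, under the simplifying assumption that n and k count sequences containing the pattern; a statistics package would serve equally well):

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k): probability that a pattern occurring in n of N sequences
    falls in at least k of the K positive sequences by chance."""
    denom = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / denom
```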
Step 4: generate motifs according to sequence alignment. We first group motif candidates with high sequence similarity by iterative pairwise alignment, and then calculate the nucleotide frequency at each position to obtain motifs. In each iteration, the longest or most frequent sequence is selected as a core and aligned with the remaining candidates one by one. If the matched positions exceed half the length of the alignment, the region is assigned to the same group as the core. Finally, we remove groups with fewer than 11 members and calculate a position weight matrix for each remaining group to produce motifs.

Performance metrics
In this study, we select six metrics to evaluate the performance of enhancer prediction: sensitivity (Sn), specificity (Sp), accuracy (Acc), Matthews correlation coefficient (MCC), balanced accuracy (Bacc) (Basith et al., 2021) and area under the ROC curve (AUC) (Fawcett, 2006). These metrics are defined as:

Sn = TP / (TP + FN),
Sp = TN / (TN + FP),
Acc = (TP + TN) / (TP + TN + FP + FN),
Bacc = (Sn + Sp) / 2,
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)),

where TP, TN, FP and FN represent the numbers of true positives, true negatives, false positives and false negatives, respectively. Sn is the proportion of actual positive cases that are predicted as positive, and Sp is the proportion of actual negative cases that are predicted as negative. Acc is the proportion of all samples whose predictions are consistent with the real labels. Compared with Acc, Bacc is more suitable for evaluating performance on an imbalanced dataset. MCC is a more reliable statistical rate that produces a high score only if the prediction obtains good results in all four confusion matrix categories; MCC values range in [−1, 1]. AUC is a comprehensive metric, because it provides an aggregate measure of performance across all possible classification thresholds; it ranges from 0 to 1.
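The threshold-dependent metrics can be computed directly from the confusion matrix (a small self-contained sketch):

```python
from math import sqrt

def confusion_metrics(tp, tn, fp, fn):
    """Compute Sn, Sp, Acc, Bacc and MCC from confusion-matrix counts."""
    sn = tp / (tp + fn)                      # sensitivity (recall on positives)
    sp = tn / (tn + fp)                      # specificity (recall on negatives)
    acc = (tp + tn) / (tp + tn + fp + fn)
    bacc = (sn + sp) / 2                     # robust to class imbalance
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"Sn": sn, "Sp": sp, "Acc": acc, "Bacc": bacc, "MCC": mcc}
```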

Experimental setup
In this study, all DNA sequences are tokenized with k-mers (k = 3, 4, 5, 6). All enhancer language models have 12 attention layers as encoder layers and 2 MLP layers as classification layers. Each attention layer has 12 attention heads, and the embedding dimension of each token is 768. After extraction by the encoder layers, features are fed into the two MLP layers, whose hidden layer has 25 neurons for Liu's dataset or 128 neurons for Basith's dataset. To preserve previous knowledge and avoid catastrophic forgetting, we fine-tune the pre-trained model with a small, exponentially decaying learning rate. In addition, we add a dropout layer with a drop rate of 0.3 after the hidden layer to prevent overfitting. We use Adam as the optimizer. iEnhancer-ELM is implemented in PyTorch 1.9.0 and trained on an NVIDIA A100 (80 GB) GPU.
However, due to the different characteristics of these two benchmark datasets, we use different training strategies for them. For Liu's benchmark dataset, all models are fine-tuned for 30 epochs. Besides, we regard the classification threshold as a hyperparameter and find that a threshold of 0.55 gives the best trade-off between sensitivity and specificity. For Basith's benchmark dataset, the original training dataset of each cell line is divided into a validation dataset and a training dataset at a ratio of 1:9. The final models for the eight cell lines are selected according to their performance on the validation dataset. To obtain stable performance on all cell lines, we use adversarial training (Miyato et al., 2017) and batch normalization during the training process. More details of the experimental configuration are given on GitHub (https://github.com/chen-bioinfo/iEnhancer-ELM.git).

The performance of enhancer language models with different scale k-mers
In this article, all DNA sequences are tokenized with different k-mers (k = 3, 4, 5, 6) and then fed into enhancer language models. Thus, we first evaluate the performance of the different scale k-mers. Table 1 shows the results of enhancer language models with different scale k-mers on Liu's test dataset. The different scale k-mers achieve comparable performance. Specifically, 3-mer achieves the best performance with an AUC of 0.8485, followed by 4-mer, 5-mer and 6-mer, while 3-mer and 5-mer achieve the same performance in terms of Acc, followed by 4-mer and 6-mer. We also compare their performance with several widely used features, including manual features [e.g. naive k-mer (Lee et al., 2011), PseDNC (Chen et al., 2013), spectrum profile (Liu et al., 2015) and mismatch profile (Liu et al., 2015)] and data-driven features [e.g. FastText (Le et al., 2019), Word2Vec (Hong et al., 2020) and BERT-Enhancer (Le et al., 2021)]. The comparison is conducted with 5-fold cross-validation on Liu's training dataset, and the results are shown in Supplementary Table S1. We find that the different scale k-mers learned by enhancer language models outperform these widely used features. Table 2 shows the performance of enhancer language models with different scale k-mers on the eight cell lines in Basith's dataset. For each cell line, the different scale k-mers achieve comparable performance in terms of AUC: 3-mer achieves the best performance on five cell lines (HEK293, K562, GM12878, HMEC and HSMM), 4-mer on two cell lines (NHLF and HUVEC), and 6-mer on NHEK. However, their performance differs greatly across cell lines; for example, 3-mer achieves an AUC of more than 0.9 on HEK293 and GM12878, but only about 0.78 on NHEK and HUVEC.
Besides, to investigate the effect of fine-tuning, we conduct additional experiments that freeze the weights of the pre-trained encoder layers and only update the weights of the classification layers. The results are shown in Supplementary Tables S2 and S4. We find that the fine-tuned enhancer language models outperform the frozen pre-trained models on Liu's dataset, while on Basith's dataset they achieve comparable performance. To further investigate the nuanced effect of fine-tuning, we also visualize the distributions of enhancers and non-enhancers projected by all encoder layers on Liu's training dataset (Supplementary Fig. S1). The distributions of enhancers and non-enhancers projected by the pre-trained models are clearly mixed together, whereas the distributions projected by the fine-tuned models are divided into two clusters, which is more obvious at higher encoder layers. These results indicate that fine-tuning makes the pre-trained model focus more on task-specific information. Thus, all following experiments are conducted in the fine-tuning manner.

The different scale k-mers are complementary
Although the different scale k-mers have comparable performance on most datasets, their sensitivity and specificity still differ. Thus, we evaluate the complementarity of the different scale k-mers. The experiments are conducted from two views: the distribution of enhancers and non-enhancers based on different scale k-mers, and the intersection of the predicted results. Figure 2 shows the complementarity analysis of the different scale k-mers on Liu's dataset. We are surprised to find that the distribution of all k-mers is centrosymmetric: the enhancers (blue) lie in the upper right of the space, and the non-enhancers (pink) in the lower left corner. Specifically, 3-mer is distributed in the center, while the enhancers and non-enhancers encoded by 4-mer, 5-mer and 6-mer are partitioned to the two sides. For the intersection of the prediction results, there are 200 enhancers and 200 non-enhancers in total, of which only 286 samples (including enhancers and non-enhancers) are predicted correctly by all models.
Thirteen samples are predicted correctly by only one model: five samples by only 3-mer, three by 4-mer, three by 5-mer and two by 6-mer. Forty-nine samples are predicted correctly by two or three models. These results demonstrate that the different scale k-mers learned by enhancer language models are complementary.
We reach the same conclusion on Basith's dataset. The complementarity analysis for the eight cell lines in Basith's dataset is shown in Supplementary Figure S2. The distributions of the different scale k-mers do not overlap in any cell line. For the intersection of the prediction results, only 72.1% of the samples (including enhancers and non-enhancers) on average across the eight cell lines are predicted accurately by all models, while on average 8.21% of samples are predicted correctly by one model, 7.98% by two models and 11.71% by three models.

iEnhancer-ELM outperforms competing methods
According to the above observations, the different scale k-mers are complementary for enhancer identification. Thus, we integrate the multi-scale k-mers into one model, iEnhancer-ELM, to improve the performance of enhancer identification, and compare it with several state-of-the-art methods. Table 3 shows the comparison between iEnhancer-ELM and existing state-of-the-art methods on Liu's dataset. The competing methods include iEnhancer-2L (Liu et al., 2016), iEnhancer-EL (Liu et al., 2018), iEnhancer-5Step (Le et al., 2019), BERT-Enhancer (Le et al., 2021), iEnhancer-XG (Cai et al., 2021), iEnhancer-GAN (Yang et al., 2021), iEnhancer-ECNN (Nguyen et al., 2019) and iEnhancer-Deep (Kamran et al., 2022). iEnhancer-2L, iEnhancer-EL and iEnhancer-XG extract the inherent properties of nucleic acid sequences based on manual rules. iEnhancer-Deep, iEnhancer-GAN and iEnhancer-ECNN represent raw enhancer sequences with one-hot encoding and use CNNs to extract features for classification. iEnhancer-5Step and BERT-Enhancer use pre-trained FastText and BERT models, respectively, to learn representations of raw sequences. None of the competing methods achieves an accuracy larger than 0.8000, while iEnhancer-ELM achieves an accuracy of 0.8300 and an AUC of 0.8560, outperforming the existing state-of-the-art methods. Table 4 shows the comparison between iEnhancer-ELM and Enhancer-IF (Basith et al., 2021) on the eight cell lines in Basith's dataset. iEnhancer-ELM outperforms Enhancer-IF on six cell lines and achieves comparable performance on the other two in terms of AUC. Enhancer-IF is an integrated method based on five machine learning methods (random forest, extremely randomized trees, multilayer perceptron, support vector machine and extreme gradient boosting) and seven types of features (e.g. Kmer, CKSNAP, EIIP, DPCP, TPCP, PseDNC and PseTNC) extracted by manual rules, whereas iEnhancer-ELM only uses an enhancer language model based on multi-scale k-mers.
Thus, iEnhancer-ELM is not only more discriminative but also easier to implement than Enhancer-IF.

Enhancer language models capture position-related contextual information of k-mers
Most competing methods also take k-mers as tokens to encode raw sequences, notably naive k-mers (Lee et al., 2011) and dna2vec (Hong et al., 2020). However, iEnhancer-ELM outperforms these methods because it captures the position-related contextual information of k-mers: the impact of a k-mer on enhancer identification usually depends not only on its frequency but also on its position.
We present a case study to illustrate that the contextual information of k-mers learned by enhancer language models changes according to their positions in DNA sequences. Figure 3 shows the distribution of the embeddings of the 3-mer CCT on Liu's dataset. We first visualize the distribution of all CCTs in the enhancer sequence Chr19_39177160_39177360, which contains 14 non-overlapping CCTs. The embeddings of the CCTs change according to their positions in the raw sequence (Fig. 3A); in general, CCTs that are close in sequence position are also close in the embedding space. Figure 3B shows the distribution of CCTs in all enhancers and non-enhancers. We observe a clear gap between the CCTs from enhancer sequences and those from non-enhancer sequences. These results demonstrate that the position-related contextual information of k-mers has a powerful ability to discriminate enhancers.

Enhancer language models can be used to discover motifs
Since enhancer language models pay more attention to patterns that contribute strongly to enhancer identification, these important patterns can be extracted from the large attention weights in the encoder layers. The patterns that pass the hypergeometric test are considered as motifs. As an example, we use the 3-mer-based enhancer language model fine-tuned on Liu's dataset to illustrate motif discovery. We discover 30 motifs, as shown in Figure 4. The left panel shows two motifs that have high attention weights in their corresponding regions, while the right panel shows all discovered motifs. To illustrate the reliability of these motifs, we compare them with motifs found by a widely used motif discovery tool, STREME (Bailey, 2021), and a popular transcription factor binding profile database, JASPAR (Castro-Mondragon et al., 2022), via TomTom (Gupta et al., 2007), as shown in Figure 5.
Comparison to STREME for finding enriched motifs. STREME (Bailey, 2021) is an effective motif discovery tool to find ungapped motifs that are enriched in one dataset compared to a control dataset. We conduct STREME on the Liu's dataset and find 8 motifs, as shown in Supplementary Figure S3, in which 6 motifs are matched with 10 motifs found by enhancer language models. The comparison is shown in Supplementary Figure S4. Note that several motifs may be matched with the same one.
Comparison to existing transcription factor binding sites in the JASPAR database. JASPAR is an open-access database storing manually curated transcription factor (TF) binding profiles as position frequency matrices (PFMs) (Castro-Mondragon et al., 2022). TFs bind to either enhancer or promoter regions of DNA adjacent to the genes that they regulate; thus, transcription factor binding sites (TFBS) are vital for enhancer function. In practice, we compare the discovered motifs with the TF binding profiles in the JASPAR database. Six motifs are significantly matched with TFBS in JASPAR (Supplementary Fig. S5).
These results show that the attention mechanism in enhancer language models has the ability to discover enriched motifs and functional regions in enhancer sequences.

Fig. 4. Motif discovery via the attention mechanism in enhancer language models. (A) Motifs with high attention weights in their corresponding regions. In the heatmap, each row represents a segment that passes the hypergeometric test, and each column represents a nucleotide. The x-axis is centered on the core regions of biological patterns, and the y-axis is the number of segments in the sequence alignment. The highlighted regions are the motifs captured by the attention mechanism. (B) The 30 motifs discovered by the 3-mer-based enhancer language model on Liu's dataset.

Conclusion
In this study, we propose a novel enhancer identification method named iEnhancer-ELM based on BERT-like enhancer language models. iEnhancer-ELM captures position-related contextual information of multi-scale k-mers to improve the performance of enhancer identification. The results on two benchmark datasets show that iEnhancer-ELM outperforms state-of-the-art methods. Moreover, we illustrate the interpretability of enhancer language models with a case study that captures 30 motifs, 12 of which are verified by the widely used motif discovery tool STREME and the JASPAR database, demonstrating that our model has a potential ability to reveal the biological mechanism of enhancers. In future work, we will explore enhancer identification across tissues and species.