Abstract

Motivation

Motif discovery in large biopolymer sequence datasets can be computationally demanding, presenting a significant challenge for discovery in omics research. MEME, arguably one of the most popular motif discovery programs, takes quadratic time with respect to dataset size, leading to excessively long runtimes for large datasets. Therefore, there is a demand for fast programs that can generate results of the same quality as MEME.

Results

Here we describe YAMDA, a highly scalable motif discovery software package. It is built on PyTorch, a deep learning library with strong GPU acceleration that is highly optimized for the tensor operations, such as convolution, that also underlie motif discovery. YAMDA takes linear time to find motifs as accurately as MEME, completing in seconds or minutes, which translates to speedups of over a thousandfold.

Availability and implementation

YAMDA is freely available on GitHub (https://github.com/daquang/YAMDA).

Supplementary information

Supplementary data are available at Bioinformatics online.

1 Introduction

De novo motif discovery is a common technique for the analysis of biopolymer sequences such as DNA, RNA and proteins. It involves the identification of enriched short patterns, commonly referred to as motifs, of monomer letters from a collection of related sequences. One of the most frequent applications of motif discovery is to datasets arising from transcription factor (TF) binding experiments, where motifs correspond to sequence-specific binding patterns of TFs.

MEME (Bailey and Elkan, 1994) is a popular probabilistic motif discovery program that uses the expectation-maximization (EM) algorithm to infer motifs as position probability matrices (PPMs), which describe the probability of each possible letter at each position in the pattern. Given a background model, a PPM can be converted to a position weight matrix (PWM) of log odds ratios. MEME uses the batch version of the EM algorithm, which updates parameters only after a complete pass through the data. In practice, MEME takes quadratic time relative to the number of letters, leading to prohibitively long runtimes for large modern high-throughput datasets. The majority of the runtime is devoted to searching for seeds (starting points), because EM is prone to converging to local optima.
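
To make the PPM-to-PWM conversion concrete, the following is a minimal sketch assuming a uniform 0-order background model over the DNA alphabet; the 3-position PPM is hypothetical:

```python
import numpy as np

# Hypothetical 4 x 3 position probability matrix (rows: A, C, G, T);
# each column sums to 1 and gives the letter probabilities at one position.
ppm = np.array([[0.80, 0.10, 0.70],
                [0.10, 0.10, 0.10],
                [0.05, 0.70, 0.10],
                [0.05, 0.10, 0.10]])

# 0-order background letter probabilities (uniform here for simplicity).
background = np.array([0.25, 0.25, 0.25, 0.25])

# PWM entries are the log odds of the motif model versus the background.
pwm = np.log2(ppm / background[:, None])
print(np.round(pwm, 2))
```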

EXTREME is a motif discovery program designed to infer motifs as accurately as MEME in linear time (Quang and Xie, 2014). To achieve this goal, EXTREME uses a word-based discriminative algorithm to search for gapped k-mer words that are enriched in a positive sequence set relative to a negative control set. Starting points in the search space are derived from the enriched words. Moreover, EXTREME replaces MEME’s batch EM with the online EM algorithm. In contrast to batch learning, online learning updates the parameters after each data sample. Online learning converges faster because it performs multiple parameter updates per data pass instead of one.
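
The difference between the two update schedules can be sketched on a toy problem. The snippet below fits the means of a two-component, unit-variance Gaussian mixture rather than the motif mixture model used by MEME or EXTREME, and the step-size schedule is an illustrative choice; it only shows how batch EM updates parameters once per pass while online (stepwise) EM folds each sample into running sufficient statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: a balanced mixture of two unit-variance 1D Gaussians. This is
# only a schematic of the update schedule, not the motif mixture model.
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
rng.shuffle(x)

def responsibilities(x_i, mu):
    # E-step for one sample: posterior over the two components
    # (equal mixing weights and unit variances assumed).
    lik = np.exp(-0.5 * (x_i - mu) ** 2)
    return lik / lik.sum()

# Batch EM: parameters change once per full pass over the data.
mu_batch = np.array([-1.0, 1.0])
for _ in range(20):
    r = np.stack([responsibilities(xi, mu_batch) for xi in x])  # E-step
    mu_batch = (r * x[:, None]).sum(0) / r.sum(0)               # M-step

# Online (stepwise) EM: interpolate sufficient statistics after each sample
# and re-estimate the parameters immediately.
mu_online = np.array([-1.0, 1.0])
s0, s1 = np.ones(2), mu_online.copy()       # running sufficient statistics
for t, xi in enumerate(x, start=1):
    r = responsibilities(xi, mu_online)
    eta = (t + 2) ** -0.6                   # decaying step size (illustrative)
    s0 = (1 - eta) * s0 + eta * r
    s1 = (1 - eta) * s1 + eta * r * xi
    mu_online = s1 / s0

print(mu_batch, mu_online)
```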

Recent advances in ‘deep learning’ offer solutions for improving upon MEME. For example, convolutional neural networks (CNNs) have been shown to be effective for motif discovery (Quang and Xie, 2016). The convolutional layer consists of a set of learnable kernels. The kernels are similar to PWMs, except weights are not constrained to be probabilities or log odds ratios. CNNs are slow to train; however, training can be accelerated through the use of graphics processing units (GPUs) and tensor libraries that are optimized for operations like convolution.
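
For instance, scanning a PWM across one-hot encoded sequences is itself a 1D convolution, which is exactly the operation such libraries accelerate. The sketch below uses hypothetical shapes and a random kernel rather than a learned motif:

```python
import torch
import torch.nn.functional as F

batch, length, width = 8, 100, 8                     # hypothetical sizes
# One-hot encoded DNA: (batch, 4 channels for A/C/G/T, sequence length).
seqs = F.one_hot(torch.randint(4, (batch, length)), 4).permute(0, 2, 1).float()

# A single PWM used as a convolutional kernel: (1 motif, 4 channels, width).
pwm = torch.randn(1, 4, width)

# Sliding-window PWM scores at every position of every sequence in one call;
# on a GPU this replaces an explicit loop over sequences and positions.
scores = F.conv1d(seqs, pwm)                         # (batch, 1, length - width + 1)
print(scores.shape)
```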

2 Software description

YAMDA is a novel program that extends the EXTREME framework by leveraging innovations in deep learning. Specifically, YAMDA uses deep learning libraries to accelerate EM-related computations. Similar to EXTREME, YAMDA’s seeding step uses a discriminative algorithm to find the 100 most enriched gapped k-mer words and converts these words to PWM seeds [see Section 4.4 of Bailey and Elkan (1994)]. Initial background probabilities are computed by counting the letter occurrences in the dataset. One ‘mini-batch’ EM iteration (a compromise between batch and online learning) followed by one batch EM iteration is run from each starting point. To parallelize these computations across all seeds, the PWMs are treated as convolutional kernels, offloading the bulk of the computational burden onto the deep learning library and (if available) the GPU. It is for this reason that we chose mini-batch EM instead of online EM: mini-batch EM can take advantage of this vectorization. Batch EM is then run to completion on the seed that yields the highest data likelihood.
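
A minimal sketch of this trick, with hypothetical shapes and randomly initialized seeds rather than YAMDA’s actual data structures: stacking all seed PWMs as the output channels of a single convolution scores every seed at every position of a mini-batch in one tensor operation.

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
batch, length, n_seeds, width = 256, 100, 100, 12    # hypothetical sizes

# One-hot DNA mini-batch: (batch, 4 channels for A/C/G/T, length).
seqs = (F.one_hot(torch.randint(4, (batch, length)), 4)
        .permute(0, 2, 1).float().to(device))

# All 100 PWM seeds stacked as output channels of one convolution kernel,
# stored as log probabilities (random placeholders instead of real seeds).
log_pwms = torch.log_softmax(torch.randn(n_seeds, 4, width), dim=1).to(device)

# A single conv1d call scores every seed against every window of every
# sequence, which is the bulk of the E-step work for all starting points.
window_loglik = F.conv1d(seqs, log_pwms)             # (batch, n_seeds, positions)

# Simplified E-step responsibilities (ignoring the background component of
# the full mixture model): normalize over window positions for each seed.
resp = torch.softmax(window_loglik, dim=2)
```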

3 Implementation

YAMDA is built on PyTorch (Paszke et al., 2017), a lightweight deep learning Python package with strong support for GPU acceleration; however, YAMDA can also run on the CPU. It accepts FASTA sequences as input and outputs motifs in Minimal MEME format.
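
To illustrate the output side, Minimal MEME is a simple text format for motif matrices. The helper below is a hypothetical writer (not YAMDA’s own code) that serializes position probability matrices in that format:

```python
import numpy as np

def write_minimal_meme(ppms, names, path, background=(0.25, 0.25, 0.25, 0.25)):
    # Serialize position probability matrices (each a W x 4 array over
    # A, C, G, T) as a Minimal MEME format motif file.
    with open(path, "w") as f:
        f.write("MEME version 4\n\nALPHABET= ACGT\n\nstrands: + -\n\n")
        f.write("Background letter frequencies\n")
        f.write("A {} C {} G {} T {}\n\n".format(*background))
        for name, ppm in zip(names, ppms):
            f.write(f"MOTIF {name}\n")
            f.write(f"letter-probability matrix: alength= 4 w= {len(ppm)}\n")
            for row in ppm:
                f.write(" ".join(f"{p:.6f}" for p in row) + "\n")
            f.write("\n")

# A single hypothetical 3-position motif.
ppm = np.array([[0.80, 0.10, 0.05, 0.05],
                [0.10, 0.10, 0.70, 0.10],
                [0.70, 0.10, 0.10, 0.10]])
write_minimal_meme([ppm], ["example_motif"], "example.meme")
```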

4 Examples

To demonstrate the efficacy of YAMDA, we use it to analyze 100 bp, summit-centered, repeat-masked peak sequences from ENCODE TF ChIP-seq datasets and a digital genomic footprint (DGF) dataset (Quang and Xie, 2014) (Table 1 and Supplementary Fig. S1). YAMDA is run in GPU and CPU modes, and both modes are orders of magnitude faster than MEME. Because of MEME’s quadratic runtime, this speedup grows with input size. In comparison, CUDA-MEME (Liu et al., 2010), another GPU-accelerated implementation of MEME, achieves speedups of less than 1.5×, which is orders of magnitude slower than even YAMDA’s CPU mode. These results demonstrate the importance of YAMDA’s linear-time seeding; a simple linear speedup of the MEME algorithm is not sufficient because its base runtime grows too quickly. Moreover, all of the YAMDA and MEME example output motifs display significant similarity (E < 10⁻⁷) to known motifs in the JASPAR database (Khan et al., 2017) according to TOMTOM (Gupta et al., 2007). Visually, however, the YAMDA motifs more closely resemble the MEME motifs than the JASPAR motifs, especially for IRF4. This is likely because motif databases are constantly being updated and may not always contain the target motif; accordingly, the discovered IRF4 motifs aligned to the similar JASPAR IRF1 motif. Together, these results demonstrate how well YAMDA can reproduce MEME’s results in a fraction of the time. As the latest in a long line of motif discovery programs, YAMDA offers a combination of speed and accuracy that is well suited to the ever-growing volume of sequencing data.

Table 1.

Runtimes for YAMDA (GPU and CPU modes), MEME and CUDA-MEME to find one motif

| Experiment | ChIP POU5F1 | ChIP GATA1 | ChIP NRSF | ChIP IRF4 | ChIP HNF4A | ChIP FOXA2 | DGF |
| Letters | 399 700 | 407 400 | 1 024 700 | 1 777 100 | 2 080 500 | 4 098 900 | 10 487 345 |
| Most similar JASPAR motif | Pou5f1::Sox2 | Tal1::Gata1 | REST | IRF1 | HNF4A | FOXA1 | CTCF |
| JASPAR logo | (logo) | (logo) | (logo) | (logo) | (logo) | (logo) | (logo) |
| YAMDA logo | (logo) | (logo) | (logo) | (logo) | (logo) | (logo) | (logo) |
| MEME logo | (logo) | (logo) | (logo) | (logo) | (logo) | n/a | n/a |
| YAMDA-GPU runtime | 15 s | 18 s | 47 s | 85 s | 81 s | 165 s | 344 s |
| YAMDA-CPU runtime | 76 s | 96 s | 249 s | 449 s | 462 s | 981 s | 2014 s |
| CUDA-MEME runtime | 3456 s | 3868 s | 56 261 s | 260 000 s* | 400 000 s* | 3 weeks* | 4 months* |
| MEME runtime | 5127 s | 5488 s | 65 085 s | 280 080 s | 410 654 s | 3 weeks* | 4 months* |
| YAMDA-GPU speedup | 341.8 | 304.9 | 1384.8 | 3295.1 | 5069.8 | 10 000 | 30 000 |
| YAMDA-CPU speedup | 67.5 | 57.2 | 261.4 | 606.2 | 888.9 | 1700 | 5000 |
| CUDA-MEME speedup | 1.5 | 1.4 | 1.2 | 1.1 | 1.0 | 1.0 | 1.0 |

Note: Motifs are aligned to the most similar JASPAR motifs. Due to limits in time and resources, some runtimes are estimated; estimated runtimes are marked with an asterisk (*).


Acknowledgements

The NVIDIA Corporation donated the Titan Xp GPU used for development. We thank Vivek Rai for software testing and Tingyang Li for designing the software logo.

Funding

This work was supported by the National Heart, Lung, and Blood Institute [U01HL137182] to SCJP.

Conflict of Interest: none declared.

References

Bailey, T.L. and Elkan, C. (1994) Fitting a mixture model by expectation maximization to discover motifs in biopolymers. Proc. Int. Conf. Intell. Syst. Mol. Biol., 2, 28–36.

Gupta, S. et al. (2007) Quantifying similarity between motifs. Genome Biol., 8, R24.

Khan, A. et al. (2017) JASPAR 2018: update of the open-access database of transcription factor binding profiles and its web framework. Nucleic Acids Res., 46, D260–D266.

Liu, Y. et al. (2010) CUDA-MEME: accelerating motif discovery in biological sequences using CUDA-enabled graphics processing units. Pattern Recogn. Lett., 31, 2170–2177.

Paszke, A. et al. (2017) Automatic differentiation in PyTorch. In NIPS-W.

Quang, D. and Xie, X. (2014) EXTREME: an online EM algorithm for motif discovery. Bioinformatics, 30, 1667–1673.

Quang, D. and Xie, X. (2016) DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences. Nucleic Acids Res., 44, e107.

Author notes

The authors wish it to be known that, in their opinion, the last two authors should be regarded as joint last authors.
