What is it about?

Language models pretrained by self-supervised learning (SSL) have been widely used to study protein sequences, whereas few have been developed for genomic sequences, and those few were limited to a single species. Because they were not trained on genomes from multiple species, such models cannot effectively leverage evolutionary information. In this study, we developed SpliceBERT, a language model pretrained by masked language modeling (MLM) on primary ribonucleic acid (RNA) sequences from 72 vertebrates, and applied it to sequence-based modeling of RNA splicing. Pretraining on diverse species enables SpliceBERT to effectively identify evolutionarily conserved elements, and its learned hidden states and attention weights characterize the biological properties of splice sites. As a result, SpliceBERT proved effective on several downstream tasks: zero-shot prediction of variant effects on splicing, prediction of branchpoints in humans, and cross-species prediction of splice sites. Our study highlights the importance of pretraining genomic language models on a diverse range of species and suggests that SSL is a promising approach to deepen our understanding of the regulatory logic underlying genomic sequences.
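To make the masked-language-modeling objective concrete, the sketch below masks random nucleotides in an RNA sequence and records the hidden labels the model would be trained to recover. The function name, mask rate, and `[MASK]` token are illustrative assumptions; SpliceBERT's actual tokenization and masking scheme may differ.

```python
import random

def mask_rna_sequence(seq, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly mask nucleotides for masked language modeling (MLM).

    Returns the masked token list and a dict mapping each masked
    position to its original nucleotide (the training target).
    Hypothetical helper; not SpliceBERT's exact preprocessing.
    """
    rng = random.Random(seed)
    tokens = list(seq)
    targets = {}
    for i, nt in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = nt          # label the model must recover
            tokens[i] = mask_token   # hide the nucleotide from the model
    return tokens, targets

# Toy usage on a short primary RNA sequence
masked, targets = mask_rna_sequence("ACGUACGUACGUACGUACGU")
```

During pretraining, the model sees only `masked` and is optimized to predict the nucleotides stored in `targets` from the surrounding context; no labels beyond the sequence itself are needed, which is what makes the approach self-supervised.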


Why is it important?

Ribonucleic acid (RNA) splicing is a fundamental post-transcriptional process in eukaryotic gene expression, which removes introns from primary transcripts and ligates exons into mature RNA products. Although the mechanism underlying RNA splicing is complex, a variety of studies have found that key determinants of splicing are encoded in DNA sequences [1–3]. Therefore, deciphering splicing codes from RNA sequences with computational models is a promising approach that will facilitate the interpretation of genetic variants affecting RNA splicing.

Perspectives

Here, we developed a primary RNA language model, SpliceBERT, and used it to study RNA splicing. SpliceBERT was pretrained by masked language modeling (MLM) on over 2 million RNA sequences from 72 vertebrates. Compared with language models pretrained only on the human genome, SpliceBERT more effectively captures evolutionary conservation from primary sequences. The hidden states and attention weights generated by SpliceBERT reflect the biological properties of splice sites, and the contextual information they encode can distinguish variants with different effects on RNA splicing. These findings suggest that SSL on diverse species helps models learn biologically meaningful representations from sequences. Accordingly, SpliceBERT proved effective at predicting branchpoints in humans and splice sites across species. The SpliceBERT model is available at https://github.com/biomed-AI/SpliceBERT.
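The zero-shot idea above can be sketched as comparing the contextual embeddings a pretrained model produces for the reference sequence versus the mutated sequence: the larger the shift, the larger the presumed effect. The function name and the distance metric below are illustrative assumptions, not the paper's exact scoring formulation.

```python
import numpy as np

def variant_effect_score(ref_hidden, alt_hidden):
    """Score a variant by how much it shifts contextual embeddings.

    ref_hidden / alt_hidden: (seq_len, dim) arrays of hidden states
    for the reference and variant sequences from a pretrained model.
    A larger shift suggests a larger predicted effect on splicing.
    Illustrative metric; the paper's scoring may differ.
    """
    # Per-position Euclidean distance, averaged over the sequence
    return float(np.linalg.norm(ref_hidden - alt_hidden, axis=-1).mean())

# Toy example with synthetic "hidden states"
ref = np.zeros((5, 4))
alt = ref.copy()
alt[2] += 1.0  # pretend the variant perturbs the embedding at position 2
score = variant_effect_score(ref, alt)
```

Because the score needs no labeled training data, only forward passes through the pretrained model, it can be computed zero-shot for any candidate variant.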

Zhixiang Ren
Peng Cheng Laboratory

Read the Original

This page is a summary of: Self-supervised learning on millions of primary RNA sequences from 72 vertebrates improves sequence-based RNA splicing prediction, Briefings in Bioinformatics, March 2024, Oxford University Press (OUP),
DOI: 10.1093/bib/bbae163.
