

Poster in Workshop: Learning Meaningful Representations of Life

Transformer Model for Genome Sequence Analysis

Noah Hurmer · Xiao-Yin To · Martin Binder · Hüseyin Anil Gündüz · Philipp Münch · René Mreches · Alice McHardy · Bernd Bischl · Mina Rezaei


Abstract:

One major challenge in applying machine learning to genomics is the scarcity of labeled data, which often requires expensive and time-consuming physical experiments under laboratory conditions to obtain. However, the advent of high-throughput sequencing has made large quantities of unlabeled genome data available, which can be exploited by semi-supervised learning methods through representation learning. In this paper, we investigate the impact of a popular and well-established language model, namely BERT, on genome sequence datasets. Specifically, we develop GenomeNet-BERT to produce useful representations for downstream classification tasks. We compare its performance against strictly supervised training and baselines across different training set sizes. The conducted experiments show that this architecture improves performance over existing methods at the cost of more resource-intensive training.
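The abstract does not include code; below is a minimal, self-contained sketch of the two-stage recipe it describes, written with the Hugging Face transformers library: (1) masked-language-model pretraining of a BERT encoder on unlabeled DNA reads, then (2) fine-tuning that encoder for a downstream classification task. The k-mer tokenization, model size, masking rate, and learning rates are illustrative assumptions, not the paper's actual GenomeNet-BERT configuration.

```python
# Sketch of BERT-style pretraining + fine-tuning on DNA sequences.
# Hyperparameters and the k-mer vocabulary are assumptions for illustration.
from itertools import product

import torch
from transformers import BertConfig, BertForMaskedLM, BertForSequenceClassification

K = 4  # assumed k-mer size (not specified in the abstract)
SPECIALS = ["[PAD]", "[CLS]", "[SEP]", "[MASK]"]
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]  # all 4^K k-mers
VOCAB = {tok: i for i, tok in enumerate(SPECIALS + KMERS)}

def tokenize(seq: str) -> torch.Tensor:
    """Encode a DNA string as overlapping k-mer token ids with [CLS]/[SEP]."""
    ids = [VOCAB["[CLS]"]]
    ids += [VOCAB[seq[i:i + K]] for i in range(len(seq) - K + 1)]
    ids.append(VOCAB["[SEP]"])
    return torch.tensor(ids)

config = BertConfig(vocab_size=len(VOCAB), hidden_size=128,
                    num_hidden_layers=4, num_attention_heads=4,
                    intermediate_size=256)

# --- Stage 1: self-supervised MLM pretraining on unlabeled reads ---
mlm = BertForMaskedLM(config)
opt = torch.optim.AdamW(mlm.parameters(), lr=1e-4)
unlabeled = ["ACGTACGTACGTACGT", "TTGACCAGTACGGTAC"]  # placeholder reads
for seq in unlabeled:
    input_ids = tokenize(seq).unsqueeze(0)
    labels = input_ids.clone()
    # Mask 15% of k-mer tokens; never mask the special tokens.
    mask = (torch.rand(input_ids.shape) < 0.15) & (input_ids >= len(SPECIALS))
    labels[~mask] = -100  # loss is computed only on masked positions
    input_ids = input_ids.masked_fill(mask, VOCAB["[MASK]"])
    loss = mlm(input_ids=input_ids, labels=labels).loss
    loss.backward(); opt.step(); opt.zero_grad()

# --- Stage 2: fine-tune the pretrained encoder for classification ---
clf = BertForSequenceClassification(config)
# Transfer the encoder weights; the classification pooler/head stay random.
clf.bert.load_state_dict(mlm.bert.state_dict(), strict=False)
labeled = [("ACGTACGTACGTACGT", 0), ("TTGACCAGTACGGTAC", 1)]
opt = torch.optim.AdamW(clf.parameters(), lr=2e-5)
for seq, y in labeled:
    out = clf(input_ids=tokenize(seq).unsqueeze(0), labels=torch.tensor([y]))
    out.loss.backward(); opt.step(); opt.zero_grad()
```

The point of the two stages is that Stage 1 needs only raw sequences, so the bulk of training can use abundant unlabeled data; Stage 2 then needs far fewer labeled examples, which matches the low-label setting the abstract targets.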
