With the start of the new millennium and the accumulation of huge amounts of data, both science and commerce have transitioned into the 'big data era'. Nowadays, thanks to cheap storage, the internet and advances in computing power, data is accessible and, in many cases, abundant. This is true not only for large image databases and social media, but also for bioinformatics data, including biomedical images, genomics, protein structures and much more. The analysis of these data has numerous potential applications in both industry and academia. To extract meaningful insights from 'big data', however, new algorithms need to be developed and existing ones tailored to specific scientific questions.

In recent years a new branch of machine learning has emerged, called deep learning, which has driven major advances in fields that had struggled for many years, most notably speech recognition and natural language processing. Deep learning shines brightest when processing raw data, where other machine learning approaches rely heavily on hand-designed features. Deep learning instead learns complex features by combining simpler ones extracted from the raw input, enabling it to represent data at increasing levels of abstraction, as the sketch below illustrates. Recently, the power of deep learning was demonstrated once more when DeepMind's AlphaFold2 model achieved unprecedented performance in protein structure prediction.
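To make the idea of representations at increasing levels of abstraction concrete, the following minimal sketch (assuming PyTorch is available; the architecture and layer sizes are purely illustrative and not taken from any model discussed here) stacks a few layers so that each one combines the simpler features computed by the layer below it:

    import torch
    import torch.nn as nn

    # Illustrative stacked network: each hidden layer composes the
    # simpler features produced by the previous layer into a more
    # abstract representation of the input.
    model = nn.Sequential(
        nn.Linear(784, 256),  # raw input (e.g. pixel values) -> low-level features
        nn.ReLU(),
        nn.Linear(256, 64),   # low-level features -> higher-level abstractions
        nn.ReLU(),
        nn.Linear(64, 10),    # abstract representation -> task output (e.g. class scores)
    )

    x = torch.randn(1, 784)   # a dummy raw-data example
    print(model(x).shape)     # torch.Size([1, 10])

During training, the early layers of such a network tend to pick up simple patterns in the raw input, while later layers combine these into task-specific abstractions; no features need to be designed by hand.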