Data Analysis and Machine Learning for Speech-Music Playlist Generation

Digital media content is abundantly available due to technological developments, and users' attention is precious for the systems providing such content. These systems therefore try to recommend content that is relevant or interesting to the user. Music streaming services face similar opportunities and challenges: in such services, so-called playlist generation is responsible for selecting and sequentially arranging pieces of music.

The background of the current work is larger-scale research aimed at mixed speech-music playlist generation. As part of this, a large radio broadcast dataset was collected, comprising both audio files and features extracted from them.

The aim of this paper is, besides analyzing the scientific literature on playlist generation, to analyze this dataset using different technologies and tools. In the first step, a star schema was created and populated from the raw data. This allows efficient, interactive analysis with business intelligence tools such as Tableau. In the next step, database and business intelligence tools were used to look for patterns. Significant features and patterns were found, e.g., for channels of different types (pop music, classical music, speech). Daily and weekly temporal patterns, e.g., for speech ratio and silence ratio, were also found for the major channel types. Emotionally loaded words, according to the WordNet Affect library, were analyzed as well. The analysis showed how different emotions mix with each other and which channel types provide content for different emotions. The major patterns are summarized, and conclusions are drawn for a customized, automated speech-music playlist.

The final goal is to create recommendations for mixed speech-music playlist generation; the current step is the data analysis of radio recordings.
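The star schema mentioned above can be illustrated with a minimal sketch. The table and column names below (a fact table of broadcast segments with channel and date dimensions, plus speech/silence ratio measures) are hypothetical assumptions for illustration, not the schema actually used in the work, and the inserted rows are fabricated toy values:

```python
import sqlite3

# Minimal star-schema sketch (hypothetical names): one fact table of
# broadcast segments, joined to channel and date dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_channel (
    channel_id   INTEGER PRIMARY KEY,
    channel_name TEXT,
    channel_type TEXT          -- e.g. 'pop', 'classical', 'speech'
);
CREATE TABLE dim_date (
    date_id     INTEGER PRIMARY KEY,
    day_of_week TEXT,
    hour_of_day INTEGER
);
CREATE TABLE fact_segment (
    segment_id    INTEGER PRIMARY KEY,
    channel_id    INTEGER REFERENCES dim_channel(channel_id),
    date_id       INTEGER REFERENCES dim_date(date_id),
    speech_ratio  REAL,        -- fraction of the segment that is speech
    silence_ratio REAL         -- fraction of the segment that is silence
);
""")

# Illustrative toy rows (not taken from the real dataset).
cur.executemany("INSERT INTO dim_channel VALUES (?, ?, ?)",
                [(1, "PopFM", "pop"), (2, "TalkRadio", "speech")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(1, "Monday", 8), (2, "Monday", 20)])
cur.executemany("INSERT INTO fact_segment VALUES (?, ?, ?, ?, ?)",
                [(1, 1, 1, 0.10, 0.02),
                 (2, 1, 2, 0.20, 0.03),
                 (3, 2, 1, 0.90, 0.05)])

# A typical BI-style query over the star schema:
# average speech ratio per channel type.
rows = cur.execute("""
    SELECT c.channel_type, ROUND(AVG(f.speech_ratio), 2)
    FROM fact_segment f JOIN dim_channel c USING (channel_id)
    GROUP BY c.channel_type ORDER BY c.channel_type
""").fetchall()
print(rows)
```

A schema of this shape is what makes the interactive slicing by channel type, day, and hour straightforward in tools such as Tableau: each pattern in the abstract (e.g., weekly speech-ratio curves) corresponds to a group-by over one or two dimension columns.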