Recent advances in neural machine translation (NMT) have led to the integration of deep learning-based systems as essential components of most professional translation workflows. As a consequence, human translators increasingly work as post-editors of machine-translated content. This project aims to empower NMT users by improving their ability to interact with NMT models and interpret their behaviors. To this end, new tools and methodologies will be developed, and adapted from other domains, to improve prediction attribution, error analysis, and controllable generation for NMT systems. These components will drive the development of an interactive CAT tool designed to improve post-editing efficiency, and their effectiveness will then be validated through a field study with professional translators.
Gabriele Sarti (University of Groningen)
I am a PhD student at the [Computational Linguistics Group](https://www.rug.nl/research/clcg/research/cl/) of the University of Groningen and part of the project [InDeep: Interpreting Deep Learning Models for Text and Sound](https://interpretingdl.github.io), focusing on interpretability for neural machine translation. Previously, I was a research scientist at [Aindo](https://www.aindo.com) and a founding member of the [AI Student Society](https://www.ai2s.it). My research focuses on interpretability for sequence-to-sequence NLP models, in particular for the benefit of end users and by leveraging human behavioral signals. I am also passionate about societal applications of machine learning, ethical AI, and open-source collaboration.