
AMTA 2018 | Tutorial | De-mystifying Neural MT

January 30, 2018

Neural Machine Translation technology is progressing at a very rapid pace. In the last few years, the research community has proposed several architectures of varying complexity. However, even complex Neural Networks are built from simple building blocks, and their functioning is governed by relatively simple rules. In this tutorial, we aim to provide an intuitive understanding of the concepts that lie behind this very successful machine learning paradigm.

In the first part of the tutorial we will explain, through visuals and examples, how Neural Networks work. We will introduce the basic building block, the neuron; illustrate how networks are trained; and discuss the advantages and challenges of deep networks.
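By way of preview, a single neuron simply computes a weighted sum of its inputs and passes the result through a nonlinearity. The following minimal Python sketch illustrates the idea; the weights, bias, and sigmoid activation are hand-picked for illustration and are not taken from the tutorial materials:

    import math

    def neuron(inputs, weights, bias):
        # A neuron computes a weighted sum of its inputs plus a bias term...
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        # ...then applies a nonlinear activation (here, the sigmoid).
        return 1.0 / (1.0 + math.exp(-z))

    # Illustrative values only: two inputs with hand-picked weights.
    print(neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))  # ~0.574

Training a network amounts to adjusting these weights and biases so that the network's outputs move closer to the desired ones.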

The second part will focus on Neural Machine Translation. We will present the main Neural Network architectures that power current NMT engines: Recurrent Neural Networks with attention, Convolutional Networks for MT, and the Transformer model. We will discuss some of the practical aspects involved in training and deploying high-quality translation engines, and use examples to illustrate some of the current challenges and limitations of the technology. Last but not least, we will look to the future and talk about the still not-fully-realized potential of deep learning.
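For readers curious about what "attention" actually computes, the scaled dot-product attention at the heart of the Transformer (as defined in Vaswani et al.'s "Attention Is All You Need", not in the tutorial materials themselves) can be written as:

    \text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

Here Q, K, and V are the query, key, and value matrices, and d_k is the dimensionality of the keys; intuitively, each output position is a weighted average of the values, with weights determined by how well the queries match the keys.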

Presenters:

Dragos Munteanu (Director of Research and Development, Machine Translation – SDL) and Ling Tsou (Research Engineer – SDL)

Target audience:

Localization professionals with limited experience in Neural Networks and Deep Learning

 



 

