Main Conference
Research Track
Download (2.7 MB)

Commercial and Government Tracks
Download (28.4 MB)

 

Keynotes
Arianna Bisazza – Leiden University
Research Keynote | Unveiling the Linguistic Weaknesses of Neural MT
Download (4 MB)

Macduff Hughes – Google
Commercial Keynote | Machine Translation Beyond the Sentence
Download (2.7 MB)

Carl Rubino – IARPA
Government Keynote | Setting up a Machine Translation Program for IARPA
Download (3 MB)

Glen Poor – Microsoft
Commercial Keynote | Use more Machine Translation and Keep Your Customers Happy
Download (14 MB)

 

Workshops
The Role of Authoritative Standards in the MT Environment
Download (5 MB)

Translation Quality Estimation and Automatic Post-Editing
Download (111 MB)

Technologies for MT of Low Resource Languages (LoResMT 2018)
Download (3.8 MB)

 

Tutorials
De-mystifying Neural MT
Download (3.6 MB)

MQM-DQF: A Good Marriage (Translation Quality for the 21st Century)
Download (12.1 MB)

Corpora Quality Management for MT – Practices and Role
Download (15.7 MB)

Bring your own laptop and come prepared for hands-on work with the tools to be used in the tutorials.

 


 


 

 

In this workshop, we will bring together experts from across the standards community, including from the American Society for Testing and Materials (now just “ASTM International”), the American National Standards Institute (ANSI), the International Organization for Standardization (ISO), the Globalization and Localization Association (GALA), and the World Wide Web Consortium (W3C). These experts will discuss authoritative standards that impact the development, implementation, and evaluation of translation systems and of the interoperability of resources.

The workshop will consist of one half-day of technical presentations, with invited talks on topics including the structure of the U.S. and international standards community, developing and implementing standards for translation quality assessment and quality assurance, the Translation API Cases and Classes (TAPICC) initiative, and updates to TermBase eXchange (TBX). A panel will then discuss gaps in this network of standards and solicit input from the panelists and the audience on how to improve the standards and standards processes, particularly in the fast-changing world of semantic and neural technological development. Feedback will be provided to the relevant standards committees.

 

Agenda

02:00pm – 02:15pm | Jennifer DeCamp | Introduction
02:15pm – 02:30pm | Jennifer DeCamp | Language Codes
02:30pm – 03:00pm | Sue Ellen Wright | TermBase eXchange (TBX)
03:00pm – 03:30pm | David Filip | XLIFF 2
03:30pm – 04:00pm | Break
04:00pm – 04:30pm | Bill Rivers | Translation Standards
04:30pm – 05:00pm | Arle Lommel | Translation Quality Metrics
05:00pm – 05:30pm | Alan Melby | Translation API Cases and Classes (TAPICC)
05:30pm – 06:00pm | Panel

 

Participants

Jennifer DeCamp

David Filip

Alan Melby

Bill Rivers

Arle Lommel

Sue Ellen Wright

 


 


 

Nowadays, computer-assisted translation (CAT) tools represent the dominant technology in the translation market, and tools that integrate machine translation (MT) engines are on the rise. In this new scenario, where MT and post-editing are becoming the standard portfolio for professional translators, it is of the utmost importance that MT systems be specifically tailored to translators.

In this tutorial, we will present ModernMT, a new open-source MT software whose development was funded by the European Union. ModernMT targets two use cases: enterprises that need dedicated MT services; and professional translators working with CAT tools. This tutorial will focus on both use cases.

In the first part, we will present the ModernMT open-source software architecture and guide the audience through its installation on an AWS instance. We will then demonstrate how to create a new adaptive neural MT engine from scratch, how to feed its internal memory, and finally how to query it.

In the second part, we will introduce ModernMT’s most distinguishing features when used through a CAT tool: (i) ModernMT does not require any initial training: as soon as translators upload their translation memories in the CAT tool, ModernMT seamlessly and quickly learns from this data; (ii) ModernMT adapts to the content to be translated in real time: the system leverages the training data most similar to the document being translated; (iii) ModernMT learns from user corrections: during the translation workflow, ModernMT constantly learns from the post-edited sentences to improve its translation suggestions. In particular, we will demonstrate ModernMT within MateCat, a popular online professional CAT tool.
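Feature (ii) above can be pictured as retrieving the translation-memory entries most similar to the incoming sentence and weighting the engine toward them. The sketch below is not ModernMT code: the word-level Jaccard similarity, the function names, and the toy memory are invented for illustration; ModernMT's actual retrieval runs over its own internal index.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve_context(memory, source_sentence, k=2):
    """Return the k (source, target) pairs most similar to the input --
    the data an adaptive engine would lean on most heavily."""
    return sorted(memory,
                  key=lambda pair: jaccard(pair[0], source_sentence),
                  reverse=True)[:k]

# Toy translation memory (English -> Italian), invented for the example.
tm = [("the cat sat on the mat", "il gatto era seduto sul tappeto"),
      ("quarterly financial report", "relazione finanziaria trimestrale"),
      ("the dog sat on the rug", "il cane era seduto sul tappeto")]

best = retrieve_context(tm, "the cat sat on the rug", k=2)
# The two "sat on" sentences outrank the unrelated financial one.
```

The same mechanism explains feature (iii): a post-edited sentence is simply appended to the memory, so it immediately becomes retrievable context for the next similar input.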

In this tutorial, participants will learn about industry trends aiming to develop MT focusing on the specific needs of enterprises and translators. They will see how current state-of-the-art MT technology is being consolidated into a single, easy-to-use product capable of learning from – and evolving through – interaction with users, with the final aim of increasing MT-output utility for the translator in a real professional environment.

Presenters:

Marcello Federico (MMT, FBK) and Davide Caroselli (MMT)

Target Audience:

MT users, specialists, integrators, developers, managers, decision makers.

 


 


 

In the past three years, the language industry has been converging on the use of the MQM-DQF framework for analytic quality evaluation. It emerged from two separate quality-evaluation approaches: the European Commission-funded Multidimensional Quality Metrics (MQM) and the Dynamic Quality Framework (DQF) from TAUS. Harmonized in 2015, the resulting shared hierarchy of error types allows implementers to classify common translation problems and perform comparative analysis of translation quality.

MQM-DQF is currently undergoing a formal standardization process in ASTM F43 and will remain a free and open framework.

Attendees will learn how to apply MQM-DQF to their particular needs, including use in typical MT research scenarios where it can bring consistency and clarity. They will be better prepared to select a quality assessment methodology that is appropriate to their needs and that can help connect the needs of technology developers, users, linguists, and information consumers.

The tutorial will focus on the following topics:

  1. A typology of translation quality metrics. This discussion will enable participants to understand how MQM-DQF compares to other quality evaluation approaches and their comparative strengths and weaknesses.
  2. Overview of MQM-DQF and key features. This detailed overview will highlight how the framework relates to existing standards, the role of translation specifications in evaluating quality, and the approach the specification takes to developing numerical quality scores.
  3. Market adoption. This section will cover the tools that have already adopted MQM-DQF and how they apply it.
  4. Detailed case studies. The presenters will discuss specific use cases submitted by tutorial participants to explore how they can create a customized MQM-DQF metric.
  5. Validity and reliability. This section discusses the importance of determining validity and measuring reliability within a translation quality evaluation system.
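To make the scoring approach in item 2 concrete: analytic frameworks in the MQM-DQF family typically derive a numerical score by weighting error counts by severity, normalizing the penalty to the sample size, and subtracting from a perfect score. The weights and formula below are illustrative defaults for a minimal sketch, not the normative MQM-DQF scoring model:

```python
def mqm_score(errors, word_count, per_words=1000):
    """Illustrative MQM-style quality score.

    errors: dict mapping severity name -> error count.
    Severity weights (minor=1, major=5, critical=10) are assumed
    for the example; a real metric defines its own weights.
    """
    weights = {"minor": 1, "major": 5, "critical": 10}
    penalty = sum(weights[sev] * n for sev, n in errors.items())
    # Normalize the penalty to the reference sample size, then
    # subtract from a perfect score of 100.
    return 100.0 - (penalty / word_count) * per_words

score = mqm_score({"minor": 4, "major": 1, "critical": 0}, word_count=1000)
# 4*1 + 1*5 = 9 penalty points over 1000 words -> score of 91.0
```

A customized metric (item 4) then amounts to choosing which error types to count, their weights, and a pass/fail threshold appropriate to the translation specifications.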

Note: The presenters were two of the leads in the harmonization of MQM and DQF and are active in the ongoing standardization effort around the resulting combined approach.

Presenters:

Arle Lommel (Senior Analyst, CSA Research); Alan K. Melby (Chair, LTAC Global)

Target audience:

The target audience includes researchers, developers, and linguists interested in understanding translation quality, ways of assessing it, and the strengths and weaknesses of various approaches.

Does post-editing also require a deep learning curve? How do the neural networks of post-editors work in concert with neural MT engines? Can post-editors and engines be retrained to work more effectively with each other?

In this tutorial, we will demystify the process and focus on the latest MT developments and their impact on post-editing practices. We will cover enterprise-scale project integrations, zoom into the nitty-gritty of tool compatibility, address the different use cases of MT and dynamic quality models, and share our insights on business intelligence (BI) and how to measure it all for informed stakeholder decisions.

Outline:

Presenters:

Alex Yanishevsky (Senior Manager, MT and NLP Deployments – Welocalize); Elaine O’Curran (MT Program Manager – Welocalize)

Target Audience:

This tutorial will provide guidance to translators, LSPs, and translation buyers on how to navigate the complex landscape of tools for production and how to effectively measure BI and KPIs for MT and post-editing.

 



 

AMTA 2018 | Proceedings for the Conference, Keynotes, Workshops and Tutorials

by Mike Dillinger | March 21, 2018


AMTA 2018 | Tutorial | Getting Started Customizing MT with Microsoft Translator Hub: From Pilot Project to Production

by Mike Dillinger | January 30, 2018

Develop an Effective MT Customization Pilot Project Learn strategies to plan and carry out an effective pilot project to train a customized MT engine and learn tips to evaluate the MT pilot project against your goals so you can move it toward production. Participants will know how to plan a pilot project, select appropriate training […]


AMTA 2018 | Workshop | The Role of Authoritative Standards in the MT Environment

by Mike Dillinger | January 30, 2018


AMTA 2018 | Tutorial | ModernMT: Open-Source Adaptive Neural MT for Enterprises and Translators

by Mike Dillinger | January 30, 2018


AMTA 2018 | Tutorial | MQM-DQF: A Good Marriage (Translation Quality for the 21st Century)

by Mike Dillinger | January 30, 2018


AMTA 2018 | Tutorial | A Deep Learning curve for Post-Editing

by Mike Dillinger | January 30, 2018
