AMTA 2018 | Tutorial | A Deep Learning curve for Post-Editing
Does post-editing also require a deep learning curve? How do the neural networks of post-editors work in concert with neural MT engines? Can post-editors and engines be retrained to work more effectively with each other?
In this tutorial, we demystify the process and focus on the latest MT developments and their impact on post-editing practices. We will cover enterprise-scale project integrations, zoom into the nitty-gritty of tool compatibility, address the different use cases of MT and dynamic quality models, and share our insights on business intelligence (BI) and how to measure it all for informed stakeholder decisions.
- Introduction to MT and Post-Editing
- MT integration for enterprise-scale programs:
- How it is done
- What do translators see: working online versus working offline
- The impact of connectors on MT output
- What is ‘normal’ in raw MT, and what isn’t
- ‘Pre-editing’ versus ‘post-processing’
- Different types of MT and implications for post-editors:
- SMT and NMT: key concepts, current state, strengths and weaknesses, typical errors and whether they can be fixed
- Static MT and adaptive MT: key concepts, current state, strengths and weaknesses, workflow integration
- Adaptive MT demos & discussion: SDL, Lilt, ModernMT
- Dynamic Quality Models and how to post-edit for different use cases:
- Focus on fast, cheap, usable quality: light post-editing
- Focus on technical accuracy: medium post-editing
- Focus on maintaining highest translation quality: full post-editing
- How to evaluate productivity based on automatic scoring, post-edit distances and productivity reports while discarding anomalies
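As a concrete illustration of the post-edit distance mentioned above, the sketch below computes a normalized character-level edit distance between raw MT output and its post-edited version. This is a minimal, illustrative implementation, not the specific metric used by any tool named in the tutorial; production workflows typically rely on established measures such as TER or tool-reported edit distances.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a  # ensure b is the shorter string
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def post_edit_distance(mt_output: str, post_edited: str) -> float:
    """Edit distance normalized to [0, 1]; 0.0 means no edits were needed."""
    longest = max(len(mt_output), len(post_edited))
    return levenshtein(mt_output, post_edited) / longest if longest else 0.0

# Example: a segment that needed light post-editing scores low,
# signalling high MT quality for this segment.
score = post_edit_distance("The engine translate fast.",
                           "The engine translates quickly.")
print(f"normalized post-edit distance: {score:.2f}")
```

Averaging such scores over a project, while discarding anomalous segments (e.g. segments the post-editor rewrote from scratch for stylistic reasons), gives one of the productivity signals the tutorial discusses alongside automatic scoring and productivity reports.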
Alex Yanishevsky (Senior Manager, MT and NLP Deployments – Welocalize); Elaine O’Curran (MT Program Manager – Welocalize)
This tutorial will provide guidance to translators, LSPs and translation buyers on how to navigate the complex landscape of production tools, and how to effectively measure BI and KPIs for MT and post-editing.