
Leveraging Orthographic Similarity for Multilingual Neural Transliteration

Abstract

We address the task of joint training of transliteration models for multiple language pairs (multilingual transliteration). This is an instance of multitask learning, where individual tasks (language pairs) benefit from sharing knowledge with related tasks. We focus on transliteration involving related tasks, i.e., languages sharing writing systems and phonetic properties (orthographically similar languages). We propose a modified neural encoder-decoder model that maximizes parameter sharing across language pairs in order to effectively leverage orthographic similarity. We show that multilingual transliteration significantly outperforms bilingual transliteration in different scenarios (an average increase of 58% across the variety of languages we experimented with). We also show that multilingual transliteration models can generalize well to languages/language pairs not encountered during training and hence perform well on the zero-shot transliteration task. We show that further improvements can be achieved by using phonetic feature input.
Article at MIT Press PDF (presented at ACL 2018)
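The abstract's central idea, one set of parameters serving every language pair, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration assuming a shared character embedding, a shared bidirectional encoder, a shared decoder, and a target-language tag prepended to the input. All module names, sizes, and the tag convention are hypothetical.

# Minimal sketch of a maximally shared encoder-decoder transliterator.
# NOT the authors' code: module names, sizes, and the <2xx> target-language
# tag convention are assumptions made for illustration.
import torch
import torch.nn as nn

class SharedTransliterator(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        # One embedding table, one encoder, and one decoder for ALL language
        # pairs: every parameter is shared, as the abstract describes.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True,
                              bidirectional=True)
        self.decoder = nn.GRU(hidden, 2 * hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, src, tgt_in):
        # src begins with a target-language tag token (e.g. <2hi>), so the
        # shared decoder knows which language's characters to generate.
        enc_out, _ = self.encoder(self.embed(src))
        # Seed the decoder with the last encoder state; the paper's model is
        # attention-based, but this shortcut keeps the sketch small.
        h0 = enc_out[:, -1, :].unsqueeze(0).contiguous()
        dec_out, _ = self.decoder(self.embed(tgt_in), h0)
        return self.out(dec_out)  # per-step character logits

# Training simply mixes mini-batches from all language pairs; the tag token
# is the only pair-specific input.
model = SharedTransliterator(vocab_size=200)
src = torch.randint(0, 200, (4, 12))     # e.g. "<2kn> m u m b a i ..."
tgt_in = torch.randint(0, 200, (4, 10))  # shifted target characters
logits = model(src, tgt_in)              # shape: (4, 10, 200)

Zero-shot transliteration falls naturally out of this design, since an input/tag combination unseen during training is still handled by the same shared parameters. The phonetic-feature input mentioned in the abstract would correspond to augmenting or replacing the character embedding with a per-character vector of phonetic features; that variant is omitted from this sketch.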

Author Biography

Anoop Kunchukuttan

Anoop Kunchukuttan is a senior Ph.D. student in the Department of Computer Science and Engineering, IIT Bombay. His research interests are in the areas of Natural Language Processing, Machine Learning, and Information Extraction. His current research explores various facets of machine translation and transliteration between related languages.

More broadly, he is interested in exploring multilingual NLP solutions, i.e., solutions that scale to multiple languages. This has led to his recent interest in Deep Learning, since neural networks provide a suitable and flexible language for modelling multilinguality.