
Cross-lingual Pseudo-Projected Expectation Regularization for Weakly Supervised Learning

Abstract

We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilitates the transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method achieves F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences, respectively. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving the best reported numbers to date on the Chinese OntoNotes and German CoNLL-03 datasets.
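For readers less familiar with Generalized Expectation Criteria, a minimal sketch of the general shape of such an objective follows (notation after Mann and McCallum, 2010; the squared-distance penalty Delta and the weight lambda are illustrative assumptions, not necessarily the exact formulation used in this paper):

% CRF log-likelihood on labeled data, plus a GE penalty that pulls
% the model's feature expectations E_{p_theta}[f] toward target
% expectations \tilde{E}[f] (here, expectations projected across
% bitext), plus a standard Gaussian (L2) prior on the weights.
\mathcal{O}(\theta)
  = \sum_{i=1}^{n} \log p_\theta(\mathbf{y}_i \mid \mathbf{x}_i)
  \; - \; \lambda \, \Delta\big(\tilde{\mathbb{E}}[f],\, \mathbb{E}_{p_\theta}[f]\big)
  \; - \; \frac{\lVert \theta \rVert_2^2}{2\sigma^2},
\qquad \text{e.g. } \Delta(u, v) = \lVert u - v \rVert_2^2 .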

PDF (Presented at ACL 2014)

Author Biographies

Mengqiu Wang

I am a PhD student in the Natural Language Processing Lab of the Computer Science Department at Stanford University.

Christopher D. Manning

Professor of Computer Science and Linguistics at Stanford University