Learning to Prune: Exploring the Frontier of Fast and Accurate Parsing

Abstract

Pruning hypotheses during dynamic programming is commonly used to speed up inference in settings such as parsing. Unlike prior work, we train a pruning policy under an objective that measures end-to-end performance: we search for a policy that is both fast and accurate. This poses a difficult machine learning problem, which we tackle with the LOLS (Locally Optimal Learning to Search) algorithm. LOLS training must continually compute the effects of changing pruning decisions: we show how to make this efficient in the constituency parsing setting, via dynamic programming and change propagation algorithms. We find that optimizing end-to-end performance in this way leads to a better Pareto frontier---i.e., parsers that are more accurate for a given runtime.
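To make the idea of pruning during dynamic programming concrete, the sketch below shows a toy beam-pruned CKY parser: each chart cell keeps only its top-scoring labels, which is one simple instance of the kind of pruning policy the abstract refers to. This is an illustrative assumption, not the paper's implementation; the grammar, scores, and the fixed top-k policy are all hypothetical (the paper instead *learns* the pruning policy end-to-end with LOLS).

```python
from collections import defaultdict

# Hypothetical toy PCFG in Chomsky normal form: rule -> log-score.
BINARY = {("S", ("NP", "VP")): -0.1,
          ("NP", ("Det", "N")): -0.2,
          ("VP", ("V", "NP")): -0.3}
LEXICAL = {"the": [("Det", -0.1)], "dog": [("N", -0.2)],
           "saw": [("V", -0.3)], "cat": [("N", -0.2)]}

def cky_parse(words, beam=2):
    """CKY with a simple pruning policy: each cell keeps at most
    `beam` labels, so later spans never combine pruned hypotheses."""
    n = len(words)
    chart = defaultdict(dict)  # (i, j) -> {label: best log-score}
    for i, w in enumerate(words):
        for label, score in LEXICAL.get(w, []):
            chart[(i, i + 1)][label] = score
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            cell = chart[(i, j)]
            for k in range(i + 1, j):  # try every split point
                for (lhs, (b, c)), rule_score in BINARY.items():
                    if b in chart[(i, k)] and c in chart[(k, j)]:
                        s = rule_score + chart[(i, k)][b] + chart[(k, j)][c]
                        if s > cell.get(lhs, float("-inf")):
                            cell[lhs] = s
            # Pruning step: keep only the top-`beam` labels in this cell.
            kept = sorted(cell.items(), key=lambda kv: -kv[1])[:beam]
            chart[(i, j)] = dict(kept)
    return chart

chart = cky_parse("the dog saw the cat".split())
print("S" in chart[(0, 5)])  # a full-sentence S survives this pruning
```

A learned policy would replace the fixed top-k rule with per-item keep/prune decisions, and training it requires recomputing how each decision changes the final parse, which is the expensive step the paper speeds up with change propagation.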

PDF (presented at ACL 2017)