Neural network language models to select the best translation


  • Maxim Khalilov, TAUS Labs, Amsterdam, The Netherlands
  • José A. R. Fonollosa, Centre de Recerca TALP, Universitat Politècnica de Catalunya
  • Francisco Zamora-Martínez, Dep. de Ciencias Físicas, Matemáticas y de la Computación, Universidad CEU-Cardenal Herrera
  • María José Castro-Bleda, Dep. de Sistemas Informáticos y Computación, Universitat Politècnica de València
  • Salvador España-Boquera, Dep. de Sistemas Informáticos y Computación, Universitat Politècnica de València


The quality of translations produced by statistical machine translation (SMT) systems crucially depends on the generalization ability of the statistical models involved in the process. While most modern SMT systems use n-gram models to predict the next element in a sequence of tokens, our system uses a continuous space language model (LM) based on neural networks (NN). In contrast to works in which the NN LM is only used to estimate the probabilities of shortlist words (Schwenk 2010), we calculate the posterior probabilities of out-of-shortlist words using an additional neuron and unigram probabilities. Experimental results on a small Italian-to-English and a large Arabic-to-English translation task, covering different word history lengths (n-gram orders), show that NN LMs scale to both small and large data and can improve an n-gram-based SMT system. This approach primarily aims to improve translation quality for tasks that lack translation data, but we also demonstrate its scalability to large-vocabulary tasks.
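The out-of-shortlist handling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `"<oos>"` key, and the toy probabilities are all assumptions. The idea is that the network's softmax covers only the shortlist words plus one extra neuron holding the total out-of-shortlist mass, which is then redistributed among out-of-shortlist words in proportion to their unigram probabilities.

```python
def nnlm_probability(word, nn_probs, shortlist, unigram):
    """Posterior P(word | history) from a shortlist NN LM.

    nn_probs:  softmax outputs of the network, one entry per shortlist
               word plus an "<oos>" entry for the extra neuron that
               collects the out-of-shortlist probability mass.
    shortlist: the set of words the network models directly.
    unigram:   unigram probabilities over the full vocabulary.
    """
    if word in shortlist:
        return nn_probs[word]
    # Redistribute the OOS neuron's mass according to unigram probabilities.
    oos_mass = sum(p for w, p in unigram.items() if w not in shortlist)
    return nn_probs["<oos>"] * unigram[word] / oos_mass

# Toy example (values invented for illustration):
shortlist = {"the", "cat"}
nn_probs = {"the": 0.5, "cat": 0.3, "<oos>": 0.2}
unigram = {"the": 0.4, "cat": 0.2, "dog": 0.3, "mat": 0.1}

p_the = nnlm_probability("the", nn_probs, shortlist, unigram)  # 0.5
# OOS unigram mass = 0.3 + 0.1 = 0.4, so P(dog) = 0.2 * 0.3 / 0.4 = 0.15
p_dog = nnlm_probability("dog", nn_probs, shortlist, unigram)
```

Because the redistribution is normalized by the total unigram mass of the out-of-shortlist words, the resulting distribution still sums to one over the full vocabulary.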




How to Cite

Khalilov, M., Fonollosa, J. A. R., Zamora-Martínez, F., Castro-Bleda, M. J., & España-Boquera, S. (2013). Neural network language models to select the best translation. Computational Linguistics in the Netherlands Journal, 3, 217–233.