Deep Neural Language Models for Machine Translation

Minh-Thang Luong, Michael Kayser, and Christopher D. Manning

Thanks for reading our paper (poster) and visiting this project page! If you have any questions, feel free to email us.

Code:
The code is available on GitHub.

Description:
This codebase supports training feed-forward NLMs, both monolingual (normal) models and bilingual (joint) models that also condition on the source text. The joint NLM is designed for machine translation (MT) and replicates, with several differences, the model proposed in the BBN paper (Devlin et al., 2014): http://acl2014.org/acl2014/P14-1/pdf/P14-1129.pdf.
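To make the joint setup concrete, here is a minimal sketch of the forward pass of such a model in NumPy: the next target word is predicted from an embedded target history concatenated with an embedded window of source words around the aligned source position. The names, dimensions, and single hidden layer are illustrative assumptions, not this codebase's API.

```python
import numpy as np

# Illustrative sizes only; the real configuration comes from the training scripts.
V_TGT, V_SRC, EMB, HID = 10000, 10000, 128, 512
N_HIST, SRC_WIN = 3, 5          # 3-word target history + 5-word source window

rng = np.random.default_rng(0)
E_tgt = rng.normal(0.0, 0.1, (V_TGT, EMB))                  # target embeddings
E_src = rng.normal(0.0, 0.1, (V_SRC, EMB))                  # source embeddings
W_h = rng.normal(0.0, 0.1, ((N_HIST + SRC_WIN) * EMB, HID))
b_h = np.zeros(HID)
W_o = rng.normal(0.0, 0.1, (HID, V_TGT))
b_o = np.zeros(V_TGT)

def joint_nlm_logprob(tgt_history, src_window, next_word):
    """Log P(next target word | target history, aligned source window)."""
    x = np.concatenate([E_tgt[tgt_history].ravel(), E_src[src_window].ravel()])
    h = np.tanh(x @ W_h + b_h)          # one hidden layer here; deeper in practice
    logits = h @ W_o + b_o
    return logits[next_word] - np.logaddexp.reduce(logits)   # log-softmax

# Example: score target word 42 given 3 history words and a 5-word source window.
print(joint_nlm_logprob([5, 17, 3], [101, 102, 103, 104, 105], 42))
```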

Features:
(a) Train both monolingual (normal) and bilingual (joint) NLMs.
(b) Support self-normalization (sketched after this list).
(c) Include all preprocessing steps: building vocabularies, converting text into integer format, and extracting n-grams for training (sketched after this list).
(d) Resume training from a saved model.
(e) Test trained NLMs to produce sentence probabilities (sketched after this list).
(f) Support dropout (we have not tested this feature thoroughly and were not able to obtain gains).
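Feature (b) refers to the self-normalization objective of Devlin et al. (2014): a penalty on the squared log of the softmax normalizer is added to the cross-entropy loss, so that at test time the unnormalized score can be used directly. The sketch below is a generic illustration of that objective; the penalty weight and any option names in this codebase are assumptions.

```python
import numpy as np

def self_normalized_loss(logits, gold, alpha=0.1):
    """Cross-entropy plus a self-normalization penalty for one training n-gram.

    alpha (the penalty weight) is a made-up default, not this codebase's setting.
    """
    log_z = np.logaddexp.reduce(logits)   # log of the softmax normalizer Z
    ce = log_z - logits[gold]             # -log softmax(logits)[gold]
    return ce + alpha * log_z ** 2        # drives log Z toward 0, i.e. Z toward 1

# Once Z is close to 1, a decoder can score with the raw logit of a word and
# skip the expensive normalization over the full output vocabulary.
```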
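Feature (c) covers the usual pipeline from raw text to training n-grams. The following is a minimal sketch of those three steps under common conventions (an <unk> token plus <s>/</s> padding); the actual scripts, file formats, and special-token choices in the codebase may differ.

```python
from collections import Counter

def build_vocab(sentences, max_size=10000):
    """Map the most frequent words to integer ids; reserve ids for special tokens."""
    counts = Counter(w for sent in sentences for w in sent)
    words = ["<unk>", "<s>", "</s>"] + [w for w, _ in counts.most_common(max_size)]
    return {w: i for i, w in enumerate(words)}

def to_ints(sentence, vocab):
    """Convert a tokenized sentence into the integer format used for training."""
    return [vocab.get(w, vocab["<unk>"]) for w in sentence]

def extract_ngrams(int_sentence, n, vocab):
    """Extract (history, next-word) training n-grams, padding with <s> and </s>."""
    padded = [vocab["<s>"]] * (n - 1) + int_sentence + [vocab["</s>"]]
    return [(padded[i - n + 1:i], padded[i]) for i in range(n - 1, len(padded))]
```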
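Feature (e): a sentence probability from an n-gram NLM is just the sum of per-word log-probabilities, with the history padded at the sentence boundary. In the sketch below, `score` is a placeholder for a trained model's log P(word | history), not a function exported by this code.

```python
def sentence_logprob(int_sentence, n, bos, eos, score):
    """Sum log P(word | previous n-1 words) over the sentence, including </s>."""
    padded = [bos] * (n - 1) + int_sentence + [eos]
    return sum(score(padded[i - n + 1:i], padded[i])
               for i in range(n - 1, len(padded)))
```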

Citation: