
Evaluation of Extended Word Embeddings

Word embeddings from shallow neural networks provide strong baselines for many extrinsic tasks, such as semantic text similarity, text classification, and information retrieval, that correspond to real-world end tasks. Unlike state-of-the-art language models, which are accurate but also slow, opaque, and monolithic, word embeddings lend themselves to solutions that are fast, interpretable, and modular. Improving the accuracy of word embeddings therefore provides an important counterbalance to the ever-increasing computational and architectural complexity of state-of-the-art language models.
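As an illustration of the extrinsic setting (not code from the project, and using made-up toy vectors rather than trained embeddings), semantic text similarity can be approximated by comparing averaged word vectors:

```python
import numpy as np

# Toy word vectors; a real pipeline would load pretrained embeddings
# (e.g. from word2vec or fastText). Words and values here are illustrative.
EMB = {
    "cat": np.array([1.0, 0.2, 0.0]),
    "dog": np.array([0.9, 0.3, 0.1]),
    "car": np.array([0.0, 0.1, 1.0]),
    "sat": np.array([0.2, 1.0, 0.1]),
}

def sentence_vector(tokens, emb=EMB):
    """Embed a sentence as the mean of its word vectors (unknown words skipped)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def similarity(s1, s2):
    """Cosine similarity between two averaged sentence vectors."""
    u, v = sentence_vector(s1), sentence_vector(s2)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_related = similarity(["cat", "sat"], ["dog", "sat"])
sim_unrelated = similarity(["cat", "sat"], ["car"])
```

Because the whole pipeline is a lookup table plus a mean and a dot product, every score can be traced back to individual word vectors, which is what makes embedding-based solutions fast and interpretable compared to end-to-end language models.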

Word embeddings of shallow neural networks have a number of extensions that give strong results on intrinsic tasks (such as word analogy), but they have not been extensively evaluated on multilingual extrinsic tasks. The goal of this project is to prepare a set of tasks for the evaluation of word embeddings on multilingual extrinsic tasks.
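For contrast, the word analogy task mentioned above is the standard intrinsic benchmark: "a is to b as c is to ?" is answered by nearest-neighbour search around the offset vector b − a + c. A minimal sketch, again with hand-crafted toy vectors rather than trained embeddings, might look like:

```python
import numpy as np

# Toy embedding table chosen so the classic king - man + woman ~ queen
# relation holds exactly; real evaluations use pretrained vectors.
EMB = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, emb=EMB):
    """Answer 'a is to b as c is to ?' by the nearest neighbour of b - a + c,
    excluding the three query words themselves."""
    target = emb[b] - emb[a] + emb[c]
    candidates = (w for w in emb if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(target, emb[w]))

answer = analogy("man", "king", "woman")  # expected: "queen"
```

An intrinsic score is then just the fraction of analogy questions answered correctly; the project's point is that high scores here do not automatically transfer to the multilingual extrinsic tasks above.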

Selected Publications


Relevant projects
