Facebook enhances Automated Language Translation tools

Have you ever wondered how Facebook’s translation tool works? And no, it’s not Google Translate!

The social media giant’s artificial intelligence operations are so vast that they include extensive research on a type of machine learning called Machine Translation (MT), which powers the 20 million daily translations on Facebook’s News Feed. Building on immense user data, Facebook is now taking a bigger step by announcing M2M-100, its newest proprietary Multilingual Machine Translation (MMT) model.

The difference from other models, though, is not only the huge number of languages it can work with, but its lack of reliance on English translation data. Why would English data be important, you ask? Well, most technology is developed in English-speaking countries, since developing countries experience high levels of high-skill brain drain to the US, caused in part by the global division of labor brought about by colonialism, which in turn… you get it.

For those reasons and more, we are stuck with models that are highly prone to mistranslations and severe loss of meaning. With Facebook’s new model, however, things might be different. To illustrate, here is how Arabic would be translated into Spanish:

Most ordinary MMTs:

  1. Arabic → English
  2. English → Spanish

With the new M2M-100:

  1. Arabic → Spanish

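The routing difference above can be sketched in a few lines of Python. This is a purely illustrative toy, not Facebook’s actual API: the `pivot_route` and `direct_route` functions are hypothetical stand-ins that only record the hops a sentence would take through each kind of system, to show why the English-centric approach doubles the opportunities for translation error.

```python
# Hypothetical sketch of translation routing, not a real translation API.
# Each hop through a model adds its own translation error, so fewer hops
# generally means less loss of meaning.

def pivot_route(src: str, tgt: str, pivot: str = "en") -> list[tuple[str, str]]:
    """English-centric MMT: two hops, each introducing error."""
    return [(src, pivot), (pivot, tgt)]

def direct_route(src: str, tgt: str) -> list[tuple[str, str]]:
    """M2M-100-style MMT: one direct hop between the language pair."""
    return [(src, tgt)]

print(pivot_route("ar", "es"))   # [('ar', 'en'), ('en', 'es')]
print(direct_route("ar", "es"))  # [('ar', 'es')]
```

With pivoting, an Arabic idiom must first survive the trip into English before it can be rendered in Spanish; a direct model skips that lossy intermediate step entirely.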
Promising! According to Facebook:

“M2M-100 is trained on a total of 2,200 language directions — or 10x more than previous best, English-centric multilingual models. Deploying M2M-100 will improve the quality of translations for billions of people, especially those that speak low-resource languages.”

Learn more here.

