As a result, an increasing number of language technology providers are now building their services on neural machine translation (NMT), and even household names like Facebook and Google are making the switch. The potential impact of NMT is huge, both for translation service providers and for consumers of translation.
If you're wondering exactly what NMT is and how it differs from previous types of machine translation, then look no further. In this article, we'll look at some of the basics of machine translation as well as the potential impact of NMT on the translation industry.
How does machine translation work?
Machine translation has been a topic of research and development for over half a century now. But the reason neural machine translation is creating such a buzz is that it is based on deep learning and neural networks, and it now consistently outperforms the previous methods: rule-based and statistical machine translation.
Rule-based machine translation (RBMT)
RBMT was the first approach, developed in the 1950s. It was essentially based on a set of linguistic rules (concerning the structure of words and sentences) about the source and target languages, and it used a dictionary to translate each individual source word into the target language. Hopes were initially high, and it was thought full automation would be achieved within several years; however, it soon became apparent that this was unrealistic, since language is far more complex than just sets of words.
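To make the idea concrete, here is a minimal sketch of the RBMT approach: a word-for-word dictionary lookup plus a hand-written reordering rule. The dictionary and rule below are hypothetical toy examples; real RBMT systems relied on far richer grammars and morphological analysis.

```python
# Toy English-to-French dictionary (hypothetical, for illustration only).
DICTIONARY = {"the": "le", "cat": "chat", "black": "noir"}

def reorder(words):
    """Toy linguistic rule: English adjective-noun becomes noun-adjective,
    as in French ('black cat' -> 'cat black' -> 'chat noir')."""
    out = list(words)
    for i in range(len(out) - 1):
        if out[i] == "black" and out[i + 1] == "cat":
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

def translate_rbmt(sentence):
    # Apply the reordering rules, then look up each word in the dictionary.
    words = reorder(sentence.lower().split())
    return " ".join(DICTIONARY.get(w, w) for w in words)

print(translate_rbmt("the black cat"))  # le chat noir
```

The weakness is visible even in this sketch: every linguistic phenomenon needs its own hand-written rule, which is exactly why full automation proved unrealistic.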
Statistical machine translation (SMT)
SMT, sometimes known as phrase-based machine translation, was the next major development and began to take off in the early 2000s. This approach is based on huge corpora of bilingual texts (source texts and translations produced by humans) which are split into phrases a few words long (called n-grams). When a sentence is being translated, it too is broken down into n-grams, and each of these is translated based on the most common translation of that n-gram within the corpora.
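The core of that process can be sketched as a phrase-table lookup: match the longest known n-gram and emit its most frequently observed translation. The phrase table below is a hypothetical hand-made stand-in for counts extracted from a real bilingual corpus, and real SMT systems also score candidates with language and reordering models.

```python
from collections import Counter

# Hypothetical phrase table: n-gram -> counts of observed human translations.
PHRASE_TABLE = {
    "good morning": Counter({"bonjour": 9, "bon matin": 1}),
    "my friend": Counter({"mon ami": 7, "mon amie": 3}),
}

def translate_smt(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily match the longest known phrase starting at position i.
        for n in range(len(words) - i, 0, -1):
            phrase = " ".join(words[i:i + n])
            if phrase in PHRASE_TABLE:
                # Pick the most common translation seen in the corpus.
                out.append(PHRASE_TABLE[phrase].most_common(1)[0][0])
                i += n
                break
        else:
            out.append(words[i])  # unknown word: pass it through unchanged
            i += 1
    return " ".join(out)

print(translate_smt("good morning my friend"))  # bonjour mon ami
```

Note how "mon ami" wins over "mon amie" purely on frequency, with no understanding of who the friend actually is; this blindness to wider context is the limitation NMT addresses.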
Neural machine translation (NMT)
NMT takes into account the entire context of the text rather than a few words. The translation is based on constructing and using neural networks, and essentially imitates how the brain works – a process called deep learning. One similarity with SMT is that a very large amount of data is required to build up a neural network and map out the various meanings of words according to different contexts.
Indeed, estimates for the minimum number of sentence pairs vary between 20 million and 1 billion, depending on whether the engine is domain-specific or general purpose. And there's no upper limit to the amount of data: the more high-quality sentence pairs are fed into the NMT engine, the more translation quality will improve.
So, how do these neural networks actually work?
During the actual translation process, the meaning of each word is recognised and encoded as a vector, based on its association with all the other words in the sentence. This vector is then decoded into the target language, and the same is done for all the other words in the sentence.
In effect, NMT reproduces the context-based meaning of each word in the target language, rather than translating a single word or phrase in isolation. As a result, it often produces higher quality translations because it can better take into account complex syntax and grammar, such as gender agreement and levels of formality.
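The encode-then-decode idea above can be illustrated with a toy sketch: each source word gets a vector, a little of its neighbours' vectors is mixed in to represent context, and decoding picks the nearest vector in the target language. The vocabularies and embeddings here are random, hand-aligned stand-ins; a real NMT system learns these representations with deep neural networks rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(0)
SRC_VOCAB = ["the", "bank", "river"]
TGT_VOCAB = ["le", "banque", "rivière"]

# Random 8-dimensional word vectors (a stand-in for learned embeddings).
SRC_EMB = {w: rng.normal(size=8) for w in SRC_VOCAB}
# Assume, for this sketch, perfectly aligned target embeddings.
TGT_EMB = {t: SRC_EMB[s] for s, t in zip(SRC_VOCAB, TGT_VOCAB)}

def encode(words):
    """Context-aware vectors: each word's vector plus a small contribution
    from the other words in the sentence."""
    vecs = [SRC_EMB[w] for w in words]
    total = sum(vecs)
    return [v + 0.1 * (total - v) / max(len(vecs) - 1, 1) for v in vecs]

def decode(vec):
    """Nearest-neighbour lookup in the target embedding space."""
    return min(TGT_EMB, key=lambda t: np.linalg.norm(TGT_EMB[t] - vec))

print([decode(v) for v in encode(["the", "river", "bank"])])
```

Even in this toy version, each word's representation depends on the whole sentence, which is what lets real NMT engines handle context-sensitive choices such as gender agreement.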
How good is neural machine translation?
NMT is widely regarded as higher quality than SMT (it is said to reduce the need for post-editing by 25%). It is more fluent, as it considers entire sentences rather than phrases in isolation, and it can independently make translation choices even when they did not occur often in the training data, which is a significant advantage over SMT.
Moreover, if trained with multiple source languages, NMT does not need to be told the source language before beginning translation and can even translate a sentence made up of several different source languages into the same target language. This is perhaps not so useful in day-to-day translation, as most source texts are only in one language, but it is certainly a smart feature.
There remains significant debate over how good NMT actually is compared to human translators, as it has some serious limitations, especially at a linguistic level.
NMT engines tend to struggle with longer sentences and linguistic features such as figures of speech, metaphors and irony. Research by Slator into the quality of NMT output for literary translation has shown that the translations produced were equal to human translation only around 25% of the time. Admittedly, these literary features are some of the trickier areas of translation for humans too, so it's perhaps unsurprising that NMT systems struggle with them.
Another problem with NMT is that target texts might actually contain mistranslations which are rather difficult to spot. Research by the Common Sense Advisory has shown that although NMT translations produced and tested by Google sounded more fluent, this did not guarantee accuracy. The errors were in fact harder to detect precisely because the text sounds correct. When judged by bilingual experts, many of the translations were deemed not up to scratch for translation service providers, or indeed their clients. So, very thorough proofreading is often needed with machine translations.
For the time being, there's still a long way to go for NMT, but the more high-quality data the systems are trained with, the more they will improve. Indeed, some predict that machine translation will actually reach human levels of translation by 2029. With this potential prospect a mere 10 years away, the effects of NMT upon the sector could certainly be far-reaching.
Where does this leave the human translator?
NMT has the potential to turn the world of translation upside down, and translators (along with translation companies) will need to be on their toes in order to keep up with and adapt to the changes.
On the one hand, NMT has the potential to do away with some of the simpler translation tasks, such as making a syntactically and semantically simple source text comprehensible in the target language. In this instance, the translation could be carried out by a machine and then checked by a post-editor or reviewer.
However, that’s not to say there will be no place for human translators – far from it! The cultural nuances and idioms that NMT cannot handle need to be explained or adapted to the target culture (a process called localization). Often, these translation decisions require extensive knowledge about both the source and target cultures, and they are made according to the translator’s best judgement (usually there’s no single ‘right’ choice). So, for translations with culturally specific content or a little more flair, it’s difficult to imagine humans being replaced any time soon.
All in all, neural machine translation is an exciting prospect, and it could certainly help with many translation tasks. However, there is still a long way to go to improve translation quality, and humans will still be needed to make those all-important translation decisions to make sure the text both reads well and conveys the correct meaning.