Ever since it launched, Google Translate has been a source of frustration and laughter, as well as a genuinely useful tool for understanding texts in practically any language.
But being the protagonist of what looks like the plot of this summer's horror movie is quite another thing. And yet that is exactly what is happening: there is an entire subreddit (r/TranslateGate) dedicated to the strange translations Google Translate produces when you enter certain text in rather obscure languages.
What happens when you try to translate gibberish
Let’s first look at some examples of strange translations, and then go over the theories the Internet has about why this happens, without limiting ourselves to demonic possession or voices from beyond.
These translations are obtained by trying to translate meaningless sentences, nonexistent words, or completely random strings, and the translator ends up producing text that is even stranger.
As the subreddit itself recounts, “TranslateGate” was first noticed on 4chan, where someone posted a thread about the strange translations they were getting from Somali to English.
From there, theories began about where the translator gets this material: your emails, chat conversations, websites, or anything else online?
One theory suggests that the artificial intelligence Google uses in its translator is drawing on its training data and repeating it when asked to translate nonsense. Another suggests that this nonsensical input forms a kind of “broken” prompt that lets people query the AI through obscure languages, as if through a back door.
When someone suggested that Google, in addition to collecting data from other websites to feed the translator, could be doing the same with emails or private chat messages, and that this might be where the translations come from, the company denied it to Motherboard:
Google Translate learns from examples of translations on the web and does not use ‘private messages’ to carry out translations, nor would the system even have access to that content. This is simply the result of feeding nonsense into the system, which generates nonsensical output.
Google Translate freaks out
Some may remember DeepDream, Google’s artificial-neural-network system capable of reinterpreting our photos with a touch somewhere between dreamlike and terrifying. The way that system identifies and accentuates patterns in images is quite similar to how Google Translate has worked for the past few years.
This is explained by Alexander Rush, a Harvard professor who studies natural language processing and machine translation.
Years ago, Google started using a technique known as ‘neural machine translation’, in which systems are trained on large amounts of text in one language along with the corresponding translations in others, to build a model that maps from one to the other. But when fed nonsense, the system can “hallucinate” bizarre results, much like DeepDream.
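A crude way to see this regurgitation effect, far simpler than Google’s actual neural system, is a toy word-level bigram model trained on a tiny corpus (the corpus, function names, and fallback behaviour below are all illustrative assumptions, not anything from Google Translate). When the input contains only words the model has never seen, it falls back to a known starting point and emits fluent memorized training text anyway:

```python
import random
from collections import defaultdict

# Hypothetical toy corpus -- echoing the Bible-passage theory from the article.
CORPUS = ("in the beginning was the word and the word was with god "
          "and the word was god").split()

def train_bigrams(tokens):
    """Map each word to the list of words that followed it in training."""
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, seed_words, length=8, rng=None):
    """Continue from the last recognized input word.

    If NO input word is in the vocabulary (pure nonsense input),
    fall back to a training word -- so the output is entirely
    regurgitated training text, a crude analogue of "hallucination".
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    known = [w for w in seed_words if w in model]
    current = known[-1] if known else next(iter(model))
    out = [current]
    for _ in range(length - 1):
        nexts = model.get(current)
        if not nexts:
            break
        current = rng.choice(nexts)
        out.append(current)
    return " ".join(out)

model = train_bigrams(CORPUS)
# Nonsense input: none of these "words" exist in the training data,
# yet the output is fluent text lifted straight from the corpus.
print(generate(model, "ag ag ag ag".split()))
```

Real neural translation models are vastly more complex, but the failure mode sketched here is the same in spirit: with no meaningful signal in the input, the output is dominated by whatever the model memorized during training.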
While users keep hunting for the strangest results, among which are quite a few Bible passages, one theory suggests that Google may have fed its translator precisely that book; after all, it is one of the most translated texts on the planet. When asked to translate nonsense in languages with little other text available on the web to learn from, the translator ends up spitting those passages back out.
Google declined to say whether it had, and although some of the strange translations have since disappeared, the subreddit remains quite active. There is plenty there to keep you entertained for a good while, and to give you a little scare if it is late at night and you have just watched a horror movie.
In Genbeta | Google’s latest algorithm does not predict what you are searching for, but when you will die