Temporal translation

I collect old dictionaries in many languages, translation dictionaries and otherwise. They are full of rich cultural information, new words, pathways, changed meanings, and I enjoy reading them for the glimpses of other worlds.

Often they contain words that I have to look up in other dictionaries, such as my copy of the first Hebrew-English translation dictionary released in Israel. It has so many words about the desert, about the plants, water, formations, growing, that I had to look up a significant number of them in English, as I had never heard them. A more modern Hebrew-French translation dictionary I have does not include nearly as many words of this sort.

I can build these models in my mind, in bits and pieces. But what would it be like to build them in the machine, to provide a rich view into different time periods by pouring in time-specific language data?

What if the machine could translate me to 1700s English? What if it could translate between time periods, between different Englishes or Frenches? What about dialects?

I don’t know where phonology data would come from. What if I want to translate to the English of Beowulf? How would the machine learn to pronounce the language properly?

I can imagine an amazing visualization, a timeline I could drag into the past to hear the sounds. Except it would need regional variation as well.

In the tradition of Vāc, in which sound matters, the sound and the meaning intricately entwined, what histories can we learn by being able to translate to other places in time, not just to other languages?

Neural Machine Translation architecture

Almost all such systems are built for a single language pair; until recently there was no sufficiently simple and efficient way to handle multiple language pairs with a single model without making significant changes to the basic NMT architecture.

Google’s engineers working on NMT released a paper last year detailing a multilingual NMT system that avoids significant changes to the basic architecture, which was designed around translating a single language pair.
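Their trick, as the paper (Johnson et al., “Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation”) describes it, is disarmingly simple: prepend an artificial token to each source sentence naming the desired target language, and leave the single-pair architecture alone. Here is a minimal sketch of that preprocessing; the ‘<2xx>’ token format follows the paper’s examples, but the function and sentence pairs are my own illustration:

```python
# A sketch of the Johnson et al. (2016) multilingual NMT trick: no
# architecture changes, just a preprocessing step. Only the '<2xx>'
# token format comes from the paper; the rest is illustrative.

def tag_for_target(source: str, target_lang: str) -> str:
    """Prepend an artificial token telling the model which language to emit."""
    return f"<2{target_lang}> {source}"

# One shared model trains on pairs from many directions at once:
training_pairs = [
    (tag_for_target("How are you?", "es"), "¿Cómo estás?"),
    (tag_for_target("Wie geht es dir?", "en"), "How are you?"),
    (tag_for_target("How are you?", "ro"), "Ce mai faci?"),
]

for source, target in training_pairs:
    print(f"{source}  ->  {target}")
```

Because the target token is just another symbol in the vocabulary, the same mechanism yields the paper’s zero-shot translation: you can request a direction that never appeared as an explicit pair in training. And it hints that the temporal version might come almost for free: a token like ‘<2en-1700>’ over a corpus sliced by period is, on paper, no different from ‘<2es>’.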

This makes me wonder what an NMT architecture optimized for multilinguality from the start would look like, how it would differ from what is currently in place, and whether it would behave any differently.

I wonder whether the system would be different if it were architected in a language other than English. What do you get when you cross Sapir-Whorf with systems architecture?

I wonder how the machine would translate ‘soy milk’ to Romanian. Would it assume English is the source language because of ‘milk’?
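As a toy illustration (entirely my own, not how any real detector works), the ambiguity is easy to see at the word level: ‘soy’ belongs to both English and Spanish, so a naive vote over tiny invented vocabularies only tips toward English because of ‘milk’.

```python
# A toy word-vote language guesser, just to illustrate the ambiguity.
# The vocabularies are invented for this example; real language
# identification uses character n-grams over far more data.

VOCAB = {
    "en": {"soy", "milk"},   # 'soy' is an English word too (the bean)
    "es": {"soy", "leche"},  # in Spanish, 'soy' means 'I am'
}

def vote(phrase):
    words = phrase.lower().split()
    return {lang: sum(w in vocab for w in words) for lang, vocab in VOCAB.items()}

print(vote("soy milk"))  # {'en': 2, 'es': 1}: 'milk' tips the vote to English
```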