Neural Machine Translation architecture

Almost all such systems are built for a single language pair; so far there has not been a sufficiently simple and efficient way to handle multiple language pairs with a single model without making significant changes to the basic NMT architecture.

Google’s engineers working on NMT released a paper last year detailing a solution for multilingual NMT that avoids making significant changes to the architecture, which was designed for single-language-pair translation.
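
The paper’s core trick is striking in its simplicity: prepend an artificial token naming the target language to each source sentence, and train an otherwise unchanged single-pair model on the pooled multilingual data. A minimal sketch, where the token format is a simplified stand-in for the paper’s:

```python
# Sketch of the multilingual trick: an artificial target-language token is
# prepended to the source sentence; the NMT model itself stays unchanged.
# The token format here is a simplified stand-in for the paper's.

def to_multilingual(source_sentence: str, target_lang: str) -> str:
    # e.g. "<2ro> soy milk" asks the model for Romanian output,
    # whatever the source language turns out to be.
    return f"<2{target_lang}> {source_sentence}"

print(to_multilingual("soy milk", "ro"))       # English -> Romanian
print(to_multilingual("lapte de soia", "en"))  # Romanian -> English
```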

This makes me wonder what an NMT architecture optimized for multilinguality would look like, how it would differ from what is currently in place, and what the differences in its behavior would be, if any.

I wonder whether the system would be different if it were architected in a language other than English. What do you get if you cross Sapir-Whorf with systems architecture?

I wonder how the machine would translate ‘soy milk’ to Romanian. Would it assume English is the source language because of ‘milk’?

 

Luc Steels and language evolution models in robots

Luc Steels’ work from more than a decade ago on the evolution of language is one of the few examples of someone thinking about how robots could evolve language. He looks at the evolution of language using agents/AIs as a means of exploring how languages are learned and how they evolve.

What I am interested in is different from this. I am interested in how machines evolve language to communicate with each other, what this means for how humans understand machines, and what the communication between the two will be in the future. So AI/AI conversations as well as AI/AI/human. I prefer the triad because it is important to my hypotheses that the machines interrelate with each other as well as with humans.

To go back for a moment to the evolution of language, think of it this way. You have the origins of language, the means by which children learn a language, and the ways in which a language evolves. For the latter, take a teenager, whose language may well be incomprehensible to adults. You can see linguistic variation both in the meanings of words and in the grammatical structures. We don’t question this in teens, though we do usually expect them to speak our languages as well, to control, as a linguist would say, across the continuum of variations. Now imagine two AIs as teenagers: I want to understand the way in which they evolve language in order to communicate, and what drives both the evolution of the parts (words, grammar) and the communication needs that lead to changes in language.

How, then, do we create models of language evolution that machines may adopt if they are allowed to change language as they see fit, for whatever reason? Mostly, now, we discuss this in terms of efficiency, but that is a very human-deterministic view that I prefer to avoid at this time. Some of the current, and unfortunately very small, data sets I have seen on AI language evolution have markers similar to early creole language models, and I’d like to see more data to understand if this is what is happening.
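
Steels’ best-known model, the naming game, gives a feel for what such a model looks like in code: agents repeatedly pair off, a speaker names an object, and vocabularies align through success and failure. Below is a minimal, illustrative Python version; the objects, the word generator, and the parameters are invented for this sketch rather than taken from Steels’ implementations.

```python
import random

# Minimal sketch of a Steels-style naming game: agents negotiate names
# for shared objects through repeated pairwise interactions.

OBJECTS = ["obj_a", "obj_b", "obj_c"]

def new_word():
    # Invent a random two-syllable word (consonant + vowel pairs).
    return "".join(random.choice("bdgkmp") + random.choice("aeiou")
                   for _ in range(2))

class Agent:
    def __init__(self):
        # vocabulary: object -> set of candidate names
        self.vocab = {obj: set() for obj in OBJECTS}

    def name_for(self, obj):
        if not self.vocab[obj]:
            self.vocab[obj].add(new_word())
        return random.choice(sorted(self.vocab[obj]))

def play_round(speaker, hearer):
    obj = random.choice(OBJECTS)
    word = speaker.name_for(obj)
    if word in hearer.vocab[obj]:
        # Success: both agents keep only the winning word.
        speaker.vocab[obj] = {word}
        hearer.vocab[obj] = {word}
    else:
        # Failure: the hearer adopts the word as a candidate.
        hearer.vocab[obj].add(word)

agents = [Agent() for _ in range(10)]
for _ in range(5000):
    speaker, hearer = random.sample(agents, 2)
    play_round(speaker, hearer)

# After enough rounds the population typically converges on one name per object.
for obj in OBJECTS:
    print(obj, sorted({a.name_for(obj) for a in agents}))
```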

But back to Steels and one of his talks I quite enjoy, given at Aldebaran in 2005. He explains what matters to his work, and the ways in which he is modeling the past.

One of his most interesting points is that embodiment is required for the evolution that he is interested in. He is attempting to model the evolution of human language, and without embodiment, it doesn’t work.

This is also very interesting to consider in the current AI/agents that are being created, and how gestures may change the ways AI and humans will and can communicate.  I haven’t yet seen much written about this, but I also haven’t explicitly looked for papers and research on language evolution and embodiment in the current collections of intelligences being designed and built.

Origins are difficult in linguistics. We don’t really know where and how languages originated in humans, or why. We don’t have a clear understanding of how languages work in our minds, or how languages are learned. We can argue different positions, and there is an enormous body of work and theory on this, but I wouldn’t say there is agreement. However, while people do study this, it isn’t on a par with the explorations of the origins of the universe, for example. We lack a clear and precise model of behavior, language, and intelligence, so when we are building machines that engage in these domains, I might argue that we can’t be sure of the outcome. In effect, unlike the game Go, there are no first principles we can give a machine.

But, as Steels points out, asking these questions about origins can provide us with profound insights. And this loops back to his work on the evolution of language, and how he is going about trying to address these questions.

Here is the link to the video, “Can Robots Invent Their Own Language.” It’s worth a watch if you are interested in these fields.


AlphaGo and human culture

Many of the articles note that the machines are making moves that humans have never made.  Both in the original, AlphaGo Lee, and in the evolved, AlphaGo Zero, we see games that “no human has ever played.”

So many questions:

  1. How do we know no human has ever played it?
  2. Is the cultural ritual surrounding Go such that, as one learns, there is an expectation of adherence to tradition from which a human would not diverge?
  3. How do aesthetics play into the success of the machines? (I remember from learning Go decades ago that this mattered, but have read nothing about the aesthetics of the machines’ games. Caveat: I haven’t played in 20 years so what do I know?)
  4. Is there any difference between the first principles given to the machines and those given to humans?
  5. Are the games played by the machines admired by the top humans?
  6. Is it expected that humans are learning from the machines and that human/human games will now be played differently?

No humans involved! (Except where they were.)

The headlines proclaiming that no humans were involved in AlphaGo Zero’s mastery continue to amuse me. Now, if a machine had taught AGZ the principles and set it off on its path, that would really be something.

The press’s continued erasure of the humans who built the machines and provided the first principles from which to learn the game is indicative of a larger issue: we don’t see the humans behind the decisions that are made, and thus have no insight into bias, etc.

The linked headline above at least specifies that humans were not involved in the mastery, rather than the creation. AGZ played significantly more games than AGL, though this is not often mentioned either. If you harken back to Gladwell’s supposition that it takes 10,000 hours of practice to become an expert, and we are looking at machines playing from 100,000 to 4MM games to ‘learn’ to excel, it is not surprising that they are outplaying humans. We simply do not have the capacity (or longevity, and likely desire) to play so many games.

The description below, of allowing AGZ to have access to all past experiences, which AGL did not have except when playing humans, is very interesting, and I’d love to know more about this decision and why it was taken.

AlphaGo Zero’s creators at Google DeepMind designed the computer program to use a tactic during practice games that AlphaGo Lee didn’t have access to. For each turn, AlphaGo Zero drew on its past experience to predict the most likely ways the rest of the game could play out, judge which player would win in each scenario and choose its move accordingly.

 

AlphaGo Lee used this kind of forethought in matches against other players, but not during practice games. AlphaGo Zero’s ability to imagine and assess possible futures during training “allowed it to train faster, but also become a better player in the end,” explains Singh, whose commentary on the study appears in the same issue of Nature.
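
A highly simplified sketch of that search-during-training idea, in Python: before each move, simulate continuations, judge who would win from each resulting position, and choose accordingly. This is illustrative only, not DeepMind’s algorithm (which uses Monte Carlo tree search guided by a neural network); the toy game and value function below are invented stand-ins.

```python
import random

def legal_moves(state):
    # Toy "game": the state is a running total; a move adds 1, 2, or 3.
    return [1, 2, 3]

def apply_move(state, move):
    return state + move

def value_estimate(state):
    # Stand-in for a learned value network: estimated chance of winning
    # from this position. Here: landing exactly on 21 wins outright.
    return 1.0 if state == 21 else 0.5 - abs(21 - state) / 42

def choose_move(state, n_simulations=300):
    totals = {m: 0.0 for m in legal_moves(state)}
    visits = {m: 0 for m in totals}
    for _ in range(n_simulations):
        move = random.choice(list(totals))
        # Judge the position this move leads to.
        totals[move] += value_estimate(apply_move(state, move))
        visits[move] += 1
    # Pick the move with the best average judged outcome.
    return max(totals, key=lambda m: totals[m] / max(visits[m], 1))

print(choose_move(18))  # prefers the move that lands on 21: 3
```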

 

Are humans more likely to abuse anthropomorphic bots over others?

Reading Mar’s article today, I find myself re-reading a collection of older articles and wondering about Kate Darling‘s work, sexbots, hitchhiking robots and Ishiguro’s androids.

Are humans more likely to kill/maim/rape/injure human-looking or human-seeming machines than non-human-looking ones?

 

Stronger attention to language, please.

AI Now Institute is an independent research institution looking at the ‘social implications of artificial intelligence,’ something very much needed as we continue to see such rapid and significant change in AI, driven by a very limited set of creators.

The Institute itself has four domains which it uses to bucket the work it does on “the social implications of artificial intelligence”:

  • Rights & Liberties
  • Labor & Automation
  • Bias & Inclusion
  • Safety & Critical Infrastructure

Reading their 2017 Report, I believe it should place more emphasis on language in its recommendations, and I would like to say more about that. Linguists, specifically those outside the computational linguistics field, need to be more integrated into the creation of AI as technology and interface.

Language is an underlying consideration in Bias & Inclusion, with this description:

However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural prejudices and inequalities.

This does not include strong enough consideration of the language(s) used to train these systems. While work on bias in AI does get written about language, it is more likely to focus on the corpus, that is to say, which words are used in descriptions and the like. When you feed a corpus into a machine, it brings all its biases with it, and since these data sets are extant, they come with all the societal issues that seem more visible now than at earlier times.
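
To make that concrete, here is a minimal sketch, assuming nothing beyond numpy, of one common way corpus bias is surfaced in word embeddings: compare how close occupation words sit to gendered words in vector space. The vectors below are random placeholders; in practice you would load embeddings trained on a real corpus (word2vec, GloVe, and the like).

```python
import numpy as np

# Placeholder embeddings: random vectors standing in for vectors
# trained on a real corpus, which would carry that corpus's biases.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50)
              for w in ["he", "she", "nurse", "engineer"]}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for occupation in ["nurse", "engineer"]:
    gap = (cosine(embeddings[occupation], embeddings["she"])
           - cosine(embeddings[occupation], embeddings["he"]))
    # With real embeddings, a positive gap means the occupation sits
    # closer to "she" than to "he": a direct trace of the corpus.
    print(occupation, f"{gap:+.3f}")
```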

Language and power, language and bias, minority languages: all of these have long been topics in the field of linguistics. They are also touched on in behavioral economics, with priming and other ways in which we set particular considerations into the minds of humans. You can also see this in the racial bias work from Harvard that was very prevalent on the web a few years back.

Language is a core human communication tool that carries so much history and meaning that, without greater attention to the social and cultural implications of the language we choose, from how we discuss these topics to how language is embedded in the interfaces of our AI systems, we are glossing over something of great meaning with far too little attention.

I don’t think that language belongs only in the realm of Bias & Inclusion, in the long run. It may create outsiders at this time, but language is such a core consideration that it seems larger than any of these four domains. Note, as well, that none of these domains explicitly attends to the interfaces and the ways in which we interact with these systems, so language would belong there too, as an input and an output, with differing needs and attentions on each side.


Humans, nature, and machines: Bill Joy and George Dyson

The following is from Bill Joy‘s 2000 article in Wired, “Why The Future Doesn’t Need Us”:

In his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”

Any thoughts / insights into why Dyson believes this?

“AI Learns Sexism Just by Studying Photographs”

Two articles, one from Wired and one from the MIT Technology Review, on bias in software. The quotes below are on gender bias.

As sophisticated machine-learning programs proliferate, such distortions matter. In the researchers’ tests, people pictured in kitchens, for example, became even more likely to be labeled “woman” than reflected the training data. The researchers’ paper includes a photo of a man at a stove labeled “woman.”

 

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
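
A toy illustration of what “amplified” means here (the numbers are invented for illustration, not taken from the paper):

```python
# Hypothetical numbers illustrating bias amplification: the model's output
# distribution is more skewed than the training data it learned from.
training_ratio = 0.66   # share of cooking images labeled "woman" in the data
predicted_ratio = 0.84  # share of cooking images the model labels "woman"

amplification = predicted_ratio - training_ratio
print(f"bias amplification: {amplification:+.2f}")  # +0.18
```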

These are interesting to me for several reasons.

First, it assumes that there is a bias against the kitchen, or women in the kitchen. But when one considers that most top chefs (Michelin-starred) are men, the bias isn’t just about women in the kitchen; it exists on a different level, and is perhaps a nuance the machines don’t yet grasp.

Second, the articles frame this as the AI learning sexism. I would be more inclined to suggest that the AI learned American categorization and labeling systems; the humans added the value judgement. I wonder why the machine was looking at images and labeling gender at all. How does a computer understand man/woman? By what criteria is it taught to differentiate? Which inevitably brings us to the discussion of who creates the algorithms and programs, chooses the data sets, and teaches the labels to the machines.

It feels like solving an issue of ‘distortion’ in the way a machine learns isn’t a solve if it happens at the machine level only, when that machine is reflecting both the programmers and the society. This is, perhaps, not the entire conversation, or even the wrong conversation.

It makes me think we need a deeper discussion on what the AI sees, how it applies labels, and how humans both interpret them and understand them. It reminds me of Lombroso and Lacassagne. Are we on our way to recreate the past, with different parameters?


Linguistic anthropology and AI, part 2

I posted the original set of questions so I could shoot them over to a few people, to get their thoughts on my thoughts. It delivered even more than expected. In the emails and conversations I’ve had since then, there are ever more questions, which I am going to keep documenting here.

  • If it were possible to allow the AIs to interrupt each other, to cut in before one finished what it was saying, what would happen?
  • What happens if you have three AIs in conversation or negotiation?
  • Are the AIs identical in the beginning? If so, who modifies language first, and do they do it differently? In concert? In reaction?
  • Does an AI who changes language get considered a new incarnation of the AI? Does it modify itself, as it modifies its language?
  • If you have two AIs with different programming, two different incarnations of a sort, what modifications do they make, versus two instantiations of the same thing?
  • Does language come about as a means of addressing desires and needs? [Misha wrote this and I find I don’t agree, which is really a deeply fascinating place to go with this.]
  • Can machines have desires and needs? How would we know the answer to this?
  • Is the assumption that machines modify language for reasons of efficiency overly deterministic?
  • What is the role of embodiment in the creation of language? Is it required for something to be meaningful? Does it change the way language works? Would it ‘count’ for cyborgs?

One thing I have discovered is that I come at this from a different perspective than many of my conversation partners, which is that I accept that it is possible that everything we think we know is wrong, both about humans and about machines. As I wrote, we assume humans are rational in order to make models of human behavior, which are faulty, because we are not. We assume machines are rational, because we programmed them to be, but what if they, too, are not? There seems to be a sense that binary does not allow for irrationality, or anomaly, but… what if it does?

I think I need to wrap four things into these discussions:

  1. a primer on computational linguistics for those who don’t have it
  2. a bit of an overview of general linguistics, and where we stand on that
  3. an overview of creole linguistics, because I think it is a very interesting model for the evolution of AI languages, except, perhaps, for the bit where it historically requires a power dynamic
  4. some discussion of the genetic evolution of algorithms, deep learning, adversarial networks, etc.

Misha’s last really interesting question to me, “Can you evolve language without pain?”, is a bit acontextual as I toss it in here, but what an interesting question about feedback loops.

 

Nigerian Pidgin on the BBC

France24 posted an interesting (by which I mean riddled with mis-statements) article about the BBC starting up a service in Nigerian Pidgin, which has 75MM speakers according to the France24 article, and approximately 30MM first- AND second-language speakers according to other sources. The article is strangely dismissive of the language and of its speakers, as I read it.

Nigerian Pidgin is the name of the creole language spoken by a significant percentage of the population. When I studied it, now 20 years ago, it was considered English-based with an influx of words from major trading populations, so it had words of Portuguese and Swahili origin. It was not “Portuguese-based” or “Jamaican Patois inspired,” as the article claims. The history of the slave populations and the trade routes is the history of the creation of the language.

Creoles, like most languages, have a continuum of formality, but in the case of creoles, rather than running from slang to formal language, the continuum runs toward and away from the base language with which the creole emerged, in this case English: the ‘higher’ form (because old-school linguists were just that way) is closer to English, and the ‘lower’ form farther away. The difference can be in grammar, vocabulary, or cadence; it varies by language. Comparable to, say, Verlan and French. Which doesn’t even begin to touch regional dialects.

Back to the article!

The first line of the article is flat-out impossible: a language has a grammar. Languages change. Unwritten languages have no standard orthography, until they do.

 Imagine a language without an alphabet, held together without grammar or spelling, which changes every day but is nonetheless spoken and understood by more than 75 million Nigerians.

Many languages have no alphabet specific to them; we use Roman or Greek or Cyrillic characters to write. Alphabets specific to a language are a newer incarnation, often tied to national interests or identity. Think of Georgian, Cherokee, and Korean: these alphabets were created long after the languages.

Writing (Nigerian) Pidgin may be new, as is delivering the news in it over the airwaves and on a website; however, the language evolved precisely so that people could communicate, and this is really no different. If it has a prior association with the ‘lower classes,’ it is because it began with the oppression of the locals by the British colonial empire, and the trade routes that grew around its outposts.

Says the French linguist quoted in the article, “It’s a language that belongs to nobody at the same time as belonging to everybody.” And can’t we say the exact same thing about every language we speak?

The BBC announcement is much better, but it refers to the language as a pidgin and defines it as such, even though, at least when I was in grad school for linguistics, it was considered a creole. There are so many sociocultural and historical complexities to pidgins and creoles, and I haven’t studied this one since grad school, so I will leave it at that. I am looking forward to the service, though; it is a reminder of times past. It will be interesting to see if I can still understand the language or if it has evolved too much.