Luc Steels’ work on the evolution of language, dating back more than a decade, is one of the few examples of someone thinking seriously about how robots could evolve language. He uses agents/AIs as a means of exploring how languages are learned and how they evolve.
What I am interested in is different from this. I am interested in how machines evolve language to communicate with each other, what this means for how humans understand machines, and what the communication between the two will look like in the future. So AI/AI conversations as well as AI/AI/human. I prefer the triad because it is important to my hypotheses that the machines interrelate with each other as well as with humans.
To go back for a moment to the evolution of language, think of it this way. You have the origins of language, the means by which children learn a language, and the ways in which a language evolves. For the latter, take a teenager, whose language may well be incomprehensible to adults. You can see linguistic variation both in the meanings of words and in the grammatical structures. We don’t question this in teens, though we do usually expect them to speak our languages as well, to control, as a linguist would say, across the continuum of variations. Now imagine two AIs as teenagers. I want to understand the way in which they evolve language in order to communicate, and what drives both the evolution of the parts (words, grammar) and the communication needs that underlie changes in language.
So, how do we create models of language evolution that machines may adopt if they are allowed to change language as they see fit, for whatever reason? Mostly, now, we discuss this in terms of efficiency, but that is a very human, deterministic view that I prefer to avoid at this time. Some of the current, and unfortunately very small, data sets I have seen on AI language evolution show markers similar to early creole language models, and I’d like to see more data to understand whether that is what is happening.
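As an aside, the simplest and best-known agent-based model in this space is Steels’ naming game: agents with no shared vocabulary invent and exchange words for an object until a convention emerges. Here is a minimal sketch in Python; the parameter values and word-invention scheme are my own illustrative choices, not Steels’ actual implementation.

```python
import random
import string

def new_word():
    # Invent a random word from scratch; agents start with no shared vocabulary.
    return "".join(random.choices(string.ascii_lowercase, k=5))

def naming_game(n_agents=20, n_rounds=20000, seed=0):
    """Minimal naming game for a single object.

    Each round, a random speaker utters a word it knows (inventing one
    if needed). If the hearer also knows the word, the game succeeds and
    both agents discard all competing words; otherwise the hearer simply
    adds the word to its vocabulary.
    """
    random.seed(seed)
    vocab = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add(new_word())
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Success: both agents align on the winning word.
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:
            # Failure: the hearer adopts the new word as a candidate.
            vocab[hearer].add(word)
    return vocab

vocab = naming_game()
print(set.union(*vocab))  # the population typically converges on one shared name
```

Even this stripped-down version shows the dynamic that interests me: a convention emerges from local pairwise interactions, with no global coordinator and no efficiency objective imposed from outside.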
But back to Steels and one of his talks I quite enjoy, given at Aldebaran in 2005. He explains what matters to his work and the ways in which he is modeling the past.
One of his most interesting points is that embodiment is required for the evolution that he is interested in. He is attempting to model the evolution of human language, and without embodiment, it doesn’t work.
This is also very interesting to consider for the AI agents currently being created, and for how gestures may change the ways AI and humans will and can communicate. I haven’t yet seen much written about this, but I also haven’t explicitly looked for papers and research on language evolution and embodiment in the current generation of intelligences being designed and built.
Origins are difficult in linguistics. We don’t really know where, how, or why languages originated in humans. We don’t have a clear understanding of how languages work in our minds, or how languages are learned. We can argue different positions, and there is an enormous body of work and theory on this, but I wouldn’t say there is agreement. And while people do study this, the effort isn’t on par with, for example, explorations of the origin of the universe. We lack a clear and precise model of behavior, language, and intelligence, so when we build machines that engage in these domains, I might argue that we can’t be sure of the outcome. In effect, unlike the game Go, there are no first principles we can give a machine.
But, as Steels points out, asking these questions about origins can provide us with profound insights. And this loops back to his work on the evolution of language, and how he goes about addressing these questions.
Here is the link to the video, “Can Robots Invent Their Own Language.” It’s worth a watch if you are interested in these fields.