Are humans more likely to abuse anthropomorphic bots over others?

Reading Mar’s article today, I find myself re-reading a collection of older articles and wondering about Kate Darling’s work, sexbots, hitchhiking robots, and Ishiguro’s androids.

Are humans more likely to kill/maim/rape/injure human-looking or human-seeming machines than non-human ones?

Stronger attention to language, please.

AI Now Institute is an independent research institution looking at the ‘social implications of artificial intelligence,’ something much needed as we continue to see such rapid and significant change in AI, driven by a very limited set of creators.

The Institute itself has four domains into which it buckets its work on “the social implications of artificial intelligence”:

  • Rights & Liberties
  • Labor & Automation
  • Bias & Inclusion
  • Safety & Critical Infrastructure

Reading their 2017 Report, I believe it should have more emphasis on language in its recommendations, about which I would like to say more. Linguists, specifically those outside the computational linguistics field, need to be more integrated in the creation of AI as technology and interface.

Language is an underlying consideration in Bias & Inclusion, with this description:

However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural prejudices and inequalities.

This description does not include strong enough consideration of the language(s) used to train these systems. While bias work does get written about language in AIs, it is more likely to focus on the corpus, that is to say, on which words are used in descriptions and the like. When you feed a corpus into a machine, it brings all its biases with it, and since these data sets are extant, they come with all the societal issues that seem to be more visible now than at earlier times.

Language and power, language and bias, minority languages, all of these have long been topics in the field of linguistics. They are also touched on in behavioral economics, with priming and other ways in which we set into the minds of humans particular considerations. You can also see this in the racial bias work from Harvard that was very prevalent on the web a few years back.

Language is a core human communication tool that carries so much history and meaning that, without greater attention to the social and cultural implications of the language we choose, from how we discuss these topics to how language is embedded in the interfaces of our AI systems, we are glossing over something of great meaning with far too little attention.

I don’t think that language belongs only in the realm of Bias & Inclusion, in the long run. It may create outsiders at this time, but language is such a core consideration that it seems larger than any of these four domains. Note, as well, that none of these domains explicitly attends to the interfaces and the ways in which we interact with these systems, so language would belong there too, as both an input and an output, with differing needs and attentions on each side.

Humans, nature, and machines: Bill Joy and George Dyson

The following is from Bill Joy’s 2000 article in Wired, “Why The Future Doesn’t Need Us”:

In his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”

Any thoughts / insights into why Dyson believes this?

“AI Learns Sexism Just by Studying Photographs”

Two articles, one from Wired and one from the MIT Technology Review, on bias in software. The quotes below are on gender bias.

As sophisticated machine-learning programs proliferate, such distortions matter. In the researchers’ tests, people pictured in kitchens, for example, became even more likely to be labeled “woman” than reflected the training data. The researchers’ paper includes a photo of a man at a stove labeled “woman.”

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.

These are interesting to me for several reasons.

First, it assumes that there is a bias against the kitchen, or against women in the kitchen. But when one considers that most top chefs (the Michelin-starred ones) are men, the bias isn’t just about women in the kitchen; it exists on a different level, and is perhaps a nuance the machines don’t yet grasp.

Second, the articles frame this as the AI learning sexism. I would be more inclined to suggest that the AI learned American categorization and labeling systems; the humans added the value judgement. I wonder why the machine was looking at images and labeling gender at all. How does a computer understand man/woman? By what criteria is it taught to differentiate? This inevitably brings us to the discussion of who creates the algorithms and programs, who chooses the data sets, and who teaches the labels to the machines.
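The ‘amplification’ the quoted passages describe can be sketched in a few lines. This is a toy illustration with invented numbers and a deliberately naive majority-label model, not the researchers’ actual dataset or method:

```python
# Toy sketch of bias amplification (invented numbers, hypothetical labels).
from collections import Counter

# Hypothetical training labels for photos in a "kitchen" context:
# 66 labeled "woman", 34 labeled "man".
train_labels = ["woman"] * 66 + ["man"] * 34

def majority_label(labels):
    """A naive model that always predicts the most common training label."""
    return Counter(labels).most_common(1)[0][0]

model_prediction = majority_label(train_labels)

# Association in the training data: 66% "woman".
train_rate = train_labels.count("woman") / len(train_labels)
# Association in the model's output: 100% "woman" -- stronger than the data.
predicted_rate = 1.0 if model_prediction == "woman" else 0.0

print(train_rate, predicted_rate)  # 0.66 1.0
```

The point of the toy: a model that optimizes for accuracy on skewed labels can push a 66% association to 100%, which is amplification, not mere mirroring.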

It feels like solving an issue of ‘distortion’ in the way a machine learns, if that machine is reflecting both the programmers and the society, isn’t a solve, if it’s machine-level only. This is, perhaps, not the entire conversation, or even the wrong conversation.

It makes me think we need a deeper discussion on what the AI sees, how it applies labels, and how humans both interpret them and understand them. It reminds me of Lombroso and Lacassagne. Are we on our way to recreate the past, with different parameters?

Linguistic anthropology and AI, part 2

I posted the original set of questions so I could shoot them over to a few people, to get their thoughts on my thoughts. It delivered even more than expected. In the emails and conversations I’ve had since then, there are ever more questions, which I am going to keep documenting here.

  • If it were possible to allow the AIs to interrupt each other, to cut in before one finished what it was saying, what would happen?
  • What happens if you have three AIs in conversation or negotiation?
  • Are the AIs identical in the beginning? If so, who modifies language first, and do they do it differently? In concert? In reaction?
  • Does an AI who changes language get considered a new incarnation of the AI? Does it modify itself, as it modifies its language?
  • If you have two AIs with different programming, two different incarnations, of a sort, what modifications do they make, vs two instantiations of the same thing?
  • Does language come about as a means of addressing desires and needs? [Misha wrote this and I find I don’t agree, which is really a deeply fascinating place to go with this.]
  • Can machines have desires and needs? How would we know the answer to this?
  • Is the assumption that machines modify language for reasons of efficiency overly deterministic?
  • What is the role of embodiment in the creation of language? Is it required for something to be meaningful? Does it change the way language works? Would it ‘count’ for cyborgs?

One thing I have discovered is that I go at this from a different perspective than many of my conversation partners, which is that I accept that it is possible that everything we think we know is wrong, both about humans and about machines. As I wrote, we assume humans are rational in order to make models of human behavior, which are faulty, because we are not. We assume machines are rational, because we programmed them to be, but what if they, too, are not? There seems to be a sense that binary does not allow for irrationality, or anomaly, but... what if it does?

I think I need to wrap into these discussions four things:

  1.  a primer on computational linguistics for those who don’t have it
  2.  a bit of an overview on general linguistics, and where we stand on that
  3.  an overview of creole linguistics, because I think it is a very interesting model to use for the evolution of AI languages, particularly, and perhaps except, for the bit where it historically requires a power dynamic.
  4. some discussion of the genetic evolution of algorithms, deep learning, adversarial networks etc.

Misha’s last really interesting question to me: “Can you evolve language without pain?” is a bit acontextual as I toss it here, but what an interesting question about feedback loops.

Nigerian Pidgin on the BBC

France24 posted an interesting (by which I mean riddled with mis-statements) article about the BBC starting up a service for Nigerian Pidgin, which has 75MM speakers according to the France24 article, and, according to other sources, approximately 30MM first- AND second-language speakers. The article is strangely dismissive of the language and of its speakers, as I read it.

Nigerian Pidgin is the name of the creole language spoken by a significant percentage of the population.  When I studied it, now 20 years ago, it was considered English-based with an influx of words from major trading populations, so it had Portuguese and Swahili origin words. It was not “Portuguese-based” or “Jamaican Patois inspired” as the article claims. The history of the slave populations and the trade routes are the history of the creation of the language.

Creoles, like most languages, have a continuum of formality, but in the case of creoles, rather than running from slang to formal language, the continuum runs toward and away from the base language with which the creole emerged, in this case English: the ‘higher’ form (because old-school linguists were just that way) is closer to English, and the ‘lower’ form farther away. This can be grammar, it can be vocabulary, it can be cadence; it varies by language. Comparable to, say, Verlan and French. Which doesn’t even begin to touch regional dialects.

Back to the article!

The first line of the article is flat-out impossible: a language has a grammar. Languages change. Unwritten languages have no standard orthography, until they do.

 Imagine a language without an alphabet, held together without grammar or spelling, which changes every day but is nonetheless spoken and understood by more than 75 million Nigerians.

Many languages have no alphabet specific to them; we use Roman or Greek or Cyrillic characters to write. Alphabets specific to a language are a newer incarnation, often tied to national interests or identity. Think Georgian, Cherokee, and Korean. These alphabets were created long after the languages.

Writing (Nigerian) Pidgin may be new, as may delivering the news in it over the airwaves and on a website; however, the language evolved just so that people could communicate, and this is really no different. If it had a prior association with the ‘lower classes,’ it is because it began with the oppression of the locals by the British colonial empire, and the trade routes that grew around their outposts.

Says the French linguist quoted in the article, “It’s a language that belongs to nobody at the same time as belonging to everybody.” And can’t we say the exact same thing about every language we speak?

The BBC announcement is much better, but it refers to the language as a pidgin and defines it as such, even though, at least when I was in grad school for Linguistics, it was considered a creole. There are so many sociocultural and historical complexities to pidgins and creoles, and I haven’t studied this one since grad school, so I will leave it at that. I am looking forward to the service, though; a reminder of times past. It will be interesting to see if I can still understand the language or if it has evolved too much.

Nuance, AI, Cancer, and Leadership

I read an article on AI and leadership this morning, where the concern is that if the workers are all AIs, we won’t learn how to lead, as humans. It is an interesting consideration, and not one I’d have come to without it being raised by someone else. But I don’t want to talk about that; I am more interested in the selection of sources, and in whether an AI that replaces a human in cancer diagnosis and treatment can realistically be a successful option.

The article, without noting dates, uses old sources: a video from 2014 which makes some interesting assumptions (“Replacing human muscle with mechanical muscle frees people to specialize and that leaves everyone better off. This is how economies grow and standards of living rise.”) to bolster the argument that the thinking machines will take our thinking jobs. The focus is on the robots, so statements such as the one above are just supposed to slide past as being obviously true. I don’t agree carte blanche with that statement, or with many others in the video.

The author also cites the Watson PR piece about Watson diagnosing a rare leukemia and recommending a treatment that consequently saved the life of a woman in Japan. This, too, is over a year old, which is not noted in the article. I am not suggesting this invalidates the OOOH factor, but let’s just note that this is a single instance that has been all over the press as a sign of the future of Watson in medicine, and of Watson being better than humans at diagnosing cancer. And this is what I actually want to comment on.

Watson is fed data: millions of records of cancers, tests, treatments, outcomes, and any genetic information on the cancer and the patient. Watson is also trained over years to get the right answers, so that it can continue to refine. Given that massive amount of data, it is not surprising that Watson can not only pick up a rare instance but also recommend an unusual treatment which saves a life. I do agree that there are instances like this, where data and processing power will win. What we never hear of, of course, are the instances where patients die, where we thought Watson could have done better.

And it may be that there will be a greater variety of treatments offered to patients because of the machine’s broad view of all treatments for that cancer to date, in particular populations, and their success rates. BUT the full set of recommended treatments may also be small, it will be the set of treatments which have already worked, because that is what an AI will choose from. It won’t come up with something new and novel, it will apply what already works.  This is fundamentally limiting.
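A crude sketch of that limitation, with entirely invented treatment names and success counts (not how Watson or any real system actually works): a recommender that ranks only treatments present in its historical records can, by construction, never propose a novel one.

```python
# Hypothetical records: treatment -> (successes, attempts). Invented numbers.
historical_outcomes = {
    "chemo_regimen_A": (40, 100),
    "chemo_regimen_B": (55, 100),
    "targeted_therapy_C": (70, 100),
}

def recommend(outcomes):
    """Pick the treatment with the best observed success rate.

    The choice set is exactly the keys of the records; nothing
    outside them can ever be recommended.
    """
    return max(outcomes, key=lambda t: outcomes[t][0] / outcomes[t][1])

best = recommend(historical_outcomes)
print(best)  # targeted_therapy_C

# A genuinely novel treatment is, by construction, outside the choice set:
assert "novel_therapy_D" not in historical_outcomes
```

The sketch is the whole point: optimizing over what has already worked is powerful, but the argmax can only ever return something already in the table.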

The NCI has already done this: created a set of protocols with fixed recommendations for treatments. I am told it takes years to change these. But at the moment, you can discuss these recommendations with your doctor, specific to your case. I’d hate to imagine a world where there is no dialogue, but perhaps that is just me. I have seen studies which show that the machine is better than humans at diagnosing lung cancer. (This is a different role: not the oncologist who decides the treatment path, but the pathologist who diagnoses your cancer and determines its stage, which, of course, feeds heavily into treatment options.)

I’d like to point out, explicitly, that the end goal of treatment in these instances, in particular in the American medical system, is the continuation of life; there is _nothing_ about the quality of life, just the extension. If you look at how hospitals and doctors are rated, this is also obvious. Quality of life post-treatment isn’t on the list.

I am reminded of the dogs who can smell cancer, and other means of diagnosis that are novel but perhaps should be done in concert with humans.

I’d argue that cancer diagnosis and treatment is as much art as science, and if the machines are the science, we shouldn’t drop the art. Or the humanity, which, for the moment, the humans still corner the market on.

A Walk: 17 Aug 2017: Sunnyside to Greenpoint and Back

Yesterday I walked a big loop through space, on cement. Queens. Brooklyn. Queens. Pondering the city, watching, and ingesting what is around me, as well as the 300 pages of journal articles I read yesterday. I left home to walk across Queens, past the Calvary Cemetery. The cemetery between NY, NJ, and PA is still on my mind from my rambles there two weeks ago. I think about climbing over the fence to go in, see how it is different. But I also think I am going to walk to Greenpoint and peer in on the PEN event about women translators/women in translation. A ruse, though. I secretly want to see if they have on their shelves any of the Notting Hill Editions I don’t have. The design and colors of their covers are delicious and I covet more so I can arrange and rearrange them in visual structures that suit my day. And I am in need of a walk. Too busy closing up life here, I no longer manage eight to ten miles a day. Four is about the max, and that is commuting, not rambling.

It is a fine walk down Greenpoint Ave, the sky is getting dark. In my mind I am also calculating the path home, given the blackness will set in soon.  I am walking streets I’ve never walked before so there is a wee security concern, but only very small. But for some places I want to walk by, better to route that way while there is still some light in the sky.

I make it to Franklin St, amazed how, even in the few months since I’d walked those streets in Greenpoint, the gentrification has traveled north; the shops look expensive, as do the restaurants. The shift from Queens to Brooklyn is the shift from someone on the stoop with a beer to restaurants with outdoor seating and fancy cocktails. In the streets of Queens, heads nod as I pass by, maybe a comment or two, one offer for a drink on the stoop, and yet once I reach into Brooklyn everyone looks away. Those in motion look at their phones; those still, as well. No one looks up and around. Those who are drinking together are eyeing the person across the table from them. It looks like an ocean of second and third dates, and the calculations of how the night will end swirl around me. The further I get into Brooklyn the more I can feel anxiety creeping about me; the more windows of goods I see, the more things scream “buy me!” and “drink me!” and the more I want to turn back, into Queens.

I walk into the book shop, and peer at the essays section. A quick glance tells me no NEHs. The man by the door is calling out like a hawker — for women’s literature in translation. He calls out that the event will be downstairs. I peer down, but the room is set up in a way that one would not politely be able to leave until it was over, and I know I don’t want to sit for much more than a moment, so I back up the stairs quietly and return to the essay section.

I pick up a book and put it back. I pick up the book to its right; it has the word topography in the title, how can I not? The first line says: “Language and landscape are my inspirations.” I pay the man, put it in my bag next to the book I have with me, an NEH, of course (the more I read, the more my favorite bit of the book is the hot pink cloth cover), and put my feet back on the ground and head north.

I am only a block before Newtown Creek when the gentrification stops, when the people sitting on the sidewalks in rickety old chairs are sitting together, no phones, chatting some, mostly waiting for the heat to abate, and the day to end, and the next day to end as well, I think. Two months ago this was six blocks south.

Turning to the east there is the stretch that runs me to the Pulaski. It’s not yet fully dark but it’s getting dusky and I am a bit more alert. Even though my earbuds are in my ears, the music is no longer on. I pass by trash blown up on the edges of buildings, spatters of graffiti, a pair of abandoned pants, the flies swarming around them making it clear why they were abandoned in just such a heap. The cars are still burnt out and the streets still look never cleaned, coated in the slimy summer that NYC builds up, layer upon layer. A group of multicolored teens pass by on their bicycles, speaking a mixture of languages, and I don’t really listen anyway. I just note them as they come up behind me, and then on they go, slowing only for a moment. I unsling my backpack and pull my house keys out to clip them to my pocket. One of the very few concessions I make to walking alone in places of solitude under the darkness of city overpasses, in the spaces on the edge, as night falls.

An angular junky stumbles past, gives me a long look, but keeps going. I turn to watch the back of him, I stop entirely, he goes on a bit, then pauses a moment, looks over his shoulder, nods at me, and keeps going. I turn and head up the stairs onto the bridge, over the Pulaski and back into Queens. A man in a DOT truck is parked at the base of the stairs, staring into his phone, the glow from the truck, unaware of anything but the device in his hand. I walk up Vernon, through LIC, more attentive to how far the gentrification has made it over here, how much, too, this place has changed, even in the past two months. Who the people are, what they are doing. I loop back down to Jackson Ave, thinking I will head that way a bit, then cross over to Skillman at 33rd and walk those blocks which I have not yet walked.

Skillman at 33rd is the edge of the railroad tracks. I watch workers standing about, one at a time hauling dirt or doing some work. Mostly they seem to be a group of men, talking, posturing. They are all at ease, casual, unhurried, unaware that I am standing a level above them, watching, for a long time. Across the tracks I can see a train car, thick with graffiti, a eulogy for a friend, a prayer to the lost one’s family. It’s parked on the side. It is beautiful, reminds me of the city that came before this one. I wish I had a camera, but I don’t, not on this walk. I think I should come back and take a photo, but I won’t. I know this. I never do.

I continue down Skillman and it’s pretty close to dark. I am wearing torn up black jeans, berry trail runners, a black and white striped tank top with a button down (unbuttoned) black shirt on top, and am carrying a black backpack. Still the headphones in; this time I am listening to French pop music and peering into all the warehouses and garages. Things are still running and whirring, work happening. Food carts, refrigerator trucks, pallets covered in plastic, motion and energy in the back of dark spaces. I peer into everything I can. Much of the work looks like it will go all night. I am the only pedestrian in most areas and definitely the only female. I ponder, as often I do, why this doesn’t bother me, and I wander on.

Looking ahead, the next stretch of Skillman says, “don’t walk me,” so I turn right, with the intent of taking the first left onto 43rd, to continue a parallel path until it seems ok to cross back to Skillman. As I walk up the street, I see a wide open space tucked behind the buildings on my left, and odd things, undefinable things, and I peer in, slowing, trying to see. I can’t say what they are, and I catch the eye of the solo worker there, a burly black man in a deep blue coverall; I nod, he nods, I keep going. A moment later I hear him call out; he’s walked up to the street from the depths of the warehouse. I turn around, and for the show of it, I pull out my left earbud, even though, again, there is no music playing. He rumbles out a hello, and again I nod. I am slowly walking backwards and he is slowly walking forwards. He isn’t threatening; however, he is rather large. There is not another human in the vicinity, but I am not really worried, just cautious. I am also realistic: there is little I could do, unless I could outrun him. None of this is important, as he means me no harm, I can tell.
He says, “I like your style, can I give you my number? How about you call me and we go for a drink?” He has a thick Caribbean accent, speaks slowly, and there is nothing at all disrespectful about the way he speaks to me or what he says or even how he looks at me. I grin, say thank you, but no. He grins back at me, and asks if I am sure, gives me a moment or two to think about it.

Earlier in the day, cleaning through my storage locker as I prepare to leave NYC, I found my grandfather’s freemason’s ring, and had slipped it on the ring finger of my right hand. I hold that hand up to him, and waggle my fingers. Ah, he says, still smiling, when you get home, tell your man he is a fortunate fellow. Thank you, I say, partially turn away, and before I am entirely gone, around the corner to the left, I hear him wish me a beautiful evening. I smile as I walk and think about men on the street and how they call out and speak to me. I realize that they are almost never disrespectful, nor do I feel objectified; largely they are charming and human, and had I felt like a chat, I’d have learned much of him and his life, and still traveled on my way. I like the strangers I meet, here on the streets of NYC, but also in the remote areas I like to travel to.

I keep meandering the neighbourhood until I reach home, all told, about eight miles, something around three hours. Once in Queens, through the warehouses, past the parks, around the burkas and saris, the jeans, and the guayaberas, the Dutch wax and the djellabas, I realize I feel more at home than other places I have lived, and resolve, at least, to consider this before choosing a next home. It’s sticky and dark and in the last few blocks there are darkened, abandoned bars with funny names, and a nail shop called Green Tara. There are humans engaged in being present with each other, children running the streets, and always the slow rumble and occasional screech of the elevated train.

I open my bag, pull out the book and it falls to this:

We are in some strange wind says the wind and it has always been that way in southern Utah. Downwind from nuclear testing. Downwind from the state lawmakers who want to sell public lands to the highest bidder so they can develop them. Downwind of shale oil and gas extraction that threatens to erode the very beauty that defines America’s red rock wilderness.

Downwind, I think. Downwind.

Linguistic anthropology of AI

I have spent years thinking about how AIs would evolve language, amongst themselves.

When they stop talking to humans and start talking to each other.  I have surmised it is likely to follow a similar language evolution to creole languages, which brings up really interesting questions about power. I’ve intended to write more on this for years, and instead I just talk to people about it.

Now that the AIs are modifying their languages, it seems more significant to write and ponder this from the perspective of linguistic anthropology and sociolinguistics. You may want to argue that applying these fields, which include concepts of culture and interrelationships, is not relevant, but I hope to convince you that this is exactly what needs to happen.

Without involving linguists other than computational linguists, there are entire shades of what is happening that can be missed. When the Facebook negotiating bots modified their language, the initial thought was that it was meaningless. Sure, to the humans; but language change, language shift, is a well-studied and fascinating realm of study in humans, so why would it not be in other domains? It was considered a coding error, not intent, not of meaning to the AIs. Spoken human languages are inefficient. Machines are meant to be efficient. Why would they not modify?

As I have been more intently thinking and reading about this in the past week, it has opened up a lot more questions than answers. While I work on writing up some of the initial thoughts I have, I keep pulling at all these threads, and thought I’d list some of the questions I have, just to keep track.

  • What is the language model of evolution in the AIs who are modifying language? (If anyone wants to share larger transcripts of the conversations with me, I would love that.)
  • What happens when humans cannot understand the languages being spoken?
  • Will AIs continue to use spoken language to communicate when it may be more efficient not to?
  • How/do new theories of the epigenetics of language in human language evolution possibly fit into ways of creating a framework for language evolution in AIs?
  • Can creole language development be used as a framework for evolution given that in human language models it historically requires a power dynamic and in/out group behaviors?
  • Are we right to assume that once a set of AIs with evolved language are ‘shut down’ that they and their language are gone?
  • Have we seen any sign of learnings from former languages to move more quickly into evolutions of language, or do all AIs start from scratch, so to speak, from whichever is their first spoken language?
  • Is it appropriate to kill off languages? This is super complex; in human terms, the answer on killing off minority languages is now known. Historically, colonialism forced assimilation. Not that I am suggesting that our treatment of AIs is similar, but it opens some interesting ways of thinking. It also opens up a really interesting path: if you are modifying and creating language, does that language have rights? Do the creators have rights?
  • I am only seeing this in English right now, but much similar work goes on in French and Japanese. What’s happening there?
  • What, exactly, are the syntactic shifts we are seeing in these languages? Can we evaluate meaning and purpose? Efficiency would be an obvious answer, since spoken human languages are not efficient, but is that really what it is?
  • What would the linguistic evolution look like, along the lines of DeepMind’s visual creations of art? Can this happen in the same manner that it does for visual and aural creations (project magenta)? Or does our desire as humans to understand that which we expect to have meaning, more so than our assumptions of the meaning in art and music, mean that if we let the algorithms run, we won’t like it?
  • Why do we kill off AIs that modify language? Are we afraid? Is this not fascinating? Do we believe that this is a path to sentience that we want to avoid?

More on all this soon..

Lots of comments on current papers and articles, too, which I will pull in here as well.

Self-discovery and grief: women’s writing of alternative lives

Melville House has published Lynda Schuster’s book about being a war correspondent. Here is the marketing copy they have opted for:

[Screenshot: the Melville House marketing copy, 19 July 2017]

Because being a war correspondent, if you are a woman, is all about self-discovery. The longer description includes the word grief, the other thing that gives women permission to write about ways of life that don’t align with the expectations of our current world.