Every morning, of late, when I read the news, there is a slew of headlines about what AI has done for us lately.
Just this morning, I read:
Robert Wickham of Salesforce is the source of the last statement: that AI will be the new electricity, once we are done oohing and ahhing, or being afraid that we will all lose our jobs.
AI, however, is not like electricity. It is not so straightforward. While it may eventually become ubiquitous and unconsidered, so far we cannot provide a single, clear definition of what it is, and these reductive metaphors create more confusion than clarity.
In each article, ‘AI’ describes something different: deep learning, neural networks, robotics, hardware, some combination, and so on. Even within deep learning or neural networks, the meanings can differ, as can the nuts and bolts. Most media, and most people, use ‘AI’ as shorthand for whatever suits their context. ‘AI’ has no agreed-upon definition, and that lack of clarity, differentiation, and understanding makes it very difficult to discuss in a nuanced manner.
There is code, there is data, there is an interface for inputs and outputs, and all of these are (likely) different in each instantiation. Most of the guts, the particular combination of code, data, and training, are proprietary. So we don’t necessarily know what makes up the artificial intelligence.
Even ‘code’, as a layperson’s shorthand for the stuff that makes computers do what they do, is a broad and differentiated category. But in that case, like language, the word is used for a particular purpose, so the reduction is perhaps not as dangerous. We’ve never argued that code is going to take over the world, or that rogue code is creating disasters, the way we did with algorithms a few years ago, and with AI now.
So why is this lumping a problem? We lump things, such as humans or cats, into categories based on like attributes, but we do have some ways to differentiate them. These may not be useful categories: nationality, breed, color, behavior, gender. (Even these are pretty fraught of late, so perhaps our categorization schemes for mammals need some readdressing.) On the other side, we could consider cancer, an incredibly reductive title for a very broad range of…well, of what? Tumor types? Mechanisms? Genetic predispositions? There are discussions, given recent research, as to whether cancer should be a noun at all; perhaps it is better served as a verb. Our bodies cancer, I am cancering, to show the current activity of internal cellular misbehavior.
What if we consider this on the intelligence side: how do we speak of intelligence, artificial or otherwise? For intelligence in humans, as with consciousness, we have no clear explanation of what it is or how it works. So it is perhaps not the simplest domain to borrow language from and apply to machines. Neural networks are one aspect, modeled on human brains, but the model is limited to structural pathways, a descriptor of how information travels and is stored.
The choice to use AI to represent such a broad range of concepts, behaviors, and functions concerns me. Even in the set of headlines above, it is difficult to know what is being talked about, across a continuum from visible outputs to robots that speak. If we cannot be clearer about what we are discussing, it is incredibly complicated to make clear decisions about functions, inputs, outputs, data, biases, ethics, and all the things that have broad impacts on society.
I haven’t seen clear work on how we should use this language, and though I worked with IBM Watson for a while on exactly this concern, I can’t say I have a strong recommendation for how we categorize not just what we have now but, as importantly, what is being built and what will exist in the future. The near future.
I’ll work on this soon: ways to talk about these parts in a public context that are clearer, and that allow new creations to grow into a systems model. Check back!