Reading comprehension and correct answers

Reading the Bloomberg article on NLP comprehension.

Alibaba Group Holding Ltd. put its deep neural network model through its paces last week, asking the AI to provide exact answers to the more than 100,000 questions that make up the Stanford Question Answering Dataset (SQuAD), a quiz considered one of the world’s most authoritative machine-reading gauges. The model developed by Alibaba’s Institute of Data Science and Technologies scored 82.44, edging past the 82.304 that rival humans achieved.

What is notable to me is that in this instance, each question is allowed only one correct answer.

The quiz itself is based on Wikipedia articles. Remember when you would never let your students use Wikipedia as a source?

As the Bloomberg article notes, NLP ‘mimics’ human comprehension. The underlying belief is that machines can answer objective questions.

“That means objective questions such as ‘what causes rain’ can now be answered with high accuracy by machines,” Luo Si, chief scientist for natural language processing at the Alibaba institute, said in a statement.

Functionality, and thus comprehension and correctness, is based on a binary model of knowledge, with Wikipedia as the source of what counts as correct. Much about that sentence is complicated, from my perspective. A binary model of correctness allows for no nuance, and it rests on those who have the power to control the narrative. No alternate views, no other models.
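To make that binary model concrete, here is a minimal sketch of how an exact-match scorer of the kind used for SQuAD-style quizzes works: the predicted answer string is lightly normalized, then compared character for character against the reference answer. The normalization steps and function names below are my own illustration, not Alibaba’s code.

```python
# A minimal sketch of SQuAD-style exact-match scoring (illustrative,
# not Alibaba's implementation). Normalization assumed: lowercase,
# strip punctuation, drop English articles, collapse whitespace.
import re
import string

def normalize(text: str) -> str:
    """Normalize an answer string before comparison."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop articles
    return " ".join(text.split())  # collapse whitespace

def exact_match(prediction: str, gold: str) -> int:
    """Binary score: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

print(exact_match("The rain", "rain"))       # 1: articles are ignored
print(exact_match("precipitation", "rain"))  # 0: a synonym scores nothing
```

An answer is either right or wrong; a synonym, a paraphrase, or a defensible alternative all score zero. The scorer encodes exactly the no-nuance correctness described above.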

It reminds me of taking standardized tests, when none of the answers seemed exactly correct, and I spent my test-taking time trying to imagine which one the test makers believed to be correct. I was forced to fit into the culture of the creators of the exam. Extending this to claims about what machines ‘know’, and allowing them to provide authoritative answers, seems reductive and dangerous, and it is moving ahead apace.

 
