New documentary examines the future of artificial intelligence and the impact it will have on our world.
Reviewed by Jeffrey Sanzel
We Need to Talk About AI is an intriguing and occasionally alarmist documentary that explores historical and current developments in Artificial Intelligence. It raises far more questions than it attempts to answer, and that, most likely, is the point. The title’s urgency suits this restlessly engaging ninety minutes.
Director Leanne Pooley has conducted extensive interviews with scientists, engineers, philosophers, filmmaker James Cameron, and a whole range of experts, along with dozens of clips from news broadcasts and nearly one hundred years of science fiction movies. The film plays at a breakneck pace, fervently bouncing from one opinion to an alternate point-of-view.
Currently streaming On Demand, the documentary is appropriately hosted by Keir Dullea, who gives a dry menace to the narration and occasionally appears walking through crowded streets like a being from an alternate universe. Dullea is best known as astronaut Dave Bowman in Stanley Kubrick’s landmark 2001: A Space Odyssey (1968). Pooley uses the film’s HAL (Heuristically Programmed ALgorithmic Computer) as the example of man’s greatest fear in the world of AI: a computer that becomes sentient and will no longer obey its human creators.
The early days of AI work seem almost quaint compared with latter-day capabilities. Much of this can be traced to advances in computer technology and the rise of the internet, whose considerable expansion over the last two decades has been the greatest gamechanger.
A constant refrain is that the dialogue surrounding AI has been “hijacked” by Hollywood: the majority of the populace thinks of AI in negative terms, as the rebellion of manmade machines (e.g., The Terminator). The scientists agree that this is a misrepresentation. For the most part, that is. As the film progresses, their views on the dangers of AI diverge.
It all comes down to questions of consciousness and autonomy. There is a dissection of the issues behind self-driving cars and how to embed ethics into the machine. The Trolley Problem (how do you decide whom to save?) is used to demonstrate the challenge. To make such a decision, the machine would have to be a conscious being.
Furthermore, can a machine be conscious, or have a conscience? The distinction between consciousness and conscience becomes central, and because “conscious” is nearly impossible to define, the question creates additional conflicts in the narrative. This leads to conversations about emotion and whether machines will ever be able to feel and react to social cues.
The film poses many hypotheses and explores the predicament from all sides. There is rarely uniform agreement. Can a machine make itself smarter by programming itself? Will the evolution be gradual or exponential?
Robotic surgery, automated agriculture, and even Facebook’s suicide-awareness algorithms are offered as examples of recent uses of AI. Computers can now beat the world’s greatest chess players. Not that many years ago, these feats were considered impossible outside of speculative fiction.
Throughout, Pooley returns to the teaching of Baby X, an intelligent toddler simulation that is both fascinating and chilling. Baby X almost seems human and appears to be learning. It is a strange and exciting phenomenon.
Already, the argument is made that we carry less in our brains because we carry parts of them in our pockets in the form of cell phones. In essence, these devices are a merging of minds with computers, an augmentation and a symbiotic integration.
Ultimately, it comes down not so much to how we build AI as to what we do with it. The unifying position of those interviewed is the fear that this power will be used for evil, or at least for negative purposes. (Pooley unsubtly inserts a quick montage of the world’s foremost demagogues.)
The consensus is that it should not be about who arrives first but who gets there safely. The experts hope for regulation but doubt it will come. If corporations (Google, Microsoft, etc.) gain primary control, the technology will be driven by greed; if the military does, it will be about killing. They say we have only one chance to get it right, and the leader in the field must, in essence, be the good parent. AI will dominate the economy and, therefore, the world.
Myriad questions are raised: What does it mean to be human? If machines become more, will we become less? Is AI going to do something for you or to you? Is science fiction the canary in the coal mine? That is, do we face the apocalypse if AI doesn’t play out the positive scenarios?
And then there are the moral questions. Can machines be held accountable? Does a machine have rights? If so, is this a form of slavery, where conscious beings are created and then dehumanized? There is a brief section on the rise of sex robots, twinned with a clip from the 1927 silent film Metropolis. Can a machine say “no”?
Perhaps we have come a long way from the science fiction movies of our past. Maybe we will never face the voice of HAL saying, “I’m sorry, Dave. I’m afraid I can’t do that.”
Or perhaps we will.
The final line sums up the entire journey: “What do we want the role of humans to be?” We Need to Talk About AI is a great place to start.