October 27, 2015

Can Computers Think For Themselves? Machine Learning at Google

A graphic of a female Android sleeping and plugged into a machine.

By Megan G.
“I think the development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking, Physicist

Your skin may begin to crawl at the idea of a computer having completely independent thoughts.  A fully artificially intelligent character is half of what makes films like 2001: A Space Odyssey, Blade Runner, and Ex Machina so tense and terrifying.  The other half of that terror is conjured by human characters, whose hubris has convinced them they could create such a powerful being and still manage to control it. 


A screenshot of the android character from Ex Machina.
Still from Ex Machina
aka If Google Was Run By An Egomaniacal Loner. 
Image from Back to the Movies.
This AI apprehension exists not only in the realm of science fiction but also in our own world’s tech industry.  As you can see from Mr. Hawking’s quote, not everyone is starry-eyed over AI technology or the machine learning techniques that power it.  

Other tech greats like Bill Gates and Elon Musk have expressed concern with the advancement of computer super-intelligence, with Musk saying that AI is potentially more dangerous than nukes!

Google, however, embraces the study of artificial intelligence.  In Google’s Q3 earnings call, CEO Sundar Pichai announced that “machine learning is a core, transformative way by which we’re rethinking everything we’re doing.” 

Uh-oh.  Better start fortifying our defenses against an onslaught of AI overlords! 

An artist's representation of SHODAN, the supercomputer from System Shock 2.
SHODAN, yet another hostile (and fictional) AI entity.  
Image from Reddit.

But wait!  Halt your mad scrambling!  Thankfully, our innovations are still at a point where computers are horses that can be led to water and made to drink, but the dumb beasts cannot be left to find watering holes on their own.

A horse who has its head stuck in a tree.
“Is this where the river was?”  
Image from Horse Nation.
“I think computers are remarkably dumb,” says John Giannandrea, Google’s Head of Machine Learning.  “A computer is like a 4-year-old child.”  Setting aside any offense four-year-olds might take at the comparison, Giannandrea’s statement quiets a lot of fears about AI.  

While Google continues to make advancements in AI, the “holy grail” of machine learning would be to successfully program a computer to mimic the human mind.  Engineers have yet to accomplish this task.

Learning With Limits

Machine learning has been present in Google’s projects for some time now, including Gmail, the Knowledge Graph, self-driving cars, and the slew of robots at Google X, as well as the most recently unveiled signal in Google’s search algorithm: RankBrain, a machine learning system that helps interpret more complex or vague queries. 

Search Engine Land gives the example of typing “Barack” into Google.  RankBrain would help the search algorithm understand that a user is most likely searching for US President Barack Obama, as opposed to some other dude named Barack or just the name “Barack” on its own. 
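
Google hasn’t published how RankBrain actually works, but the general idea of nudging a vague query toward the entity most people mean can be sketched in a few lines of Python.  Everything below (the candidate “entities,” the toy vectors, and the scoring) is invented purely for illustration; real systems learn their representations from enormous amounts of search data.

```python
import math

# Toy "embeddings": hand-made vectors standing in for learned ones.
# The entities and numbers below are invented purely for illustration.
ENTITY_VECTORS = {
    "Barack Obama (US President)":   [0.9, 0.8, 0.1],
    "some other dude named Barack":  [0.4, 0.1, 0.2],
    "the name 'Barack' by itself":   [0.2, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def interpret_query(query_vector):
    """Return the candidate entity whose vector sits closest to the query's."""
    return max(ENTITY_VECTORS,
               key=lambda name: cosine_similarity(query_vector, ENTITY_VECTORS[name]))

# A vague query like "Barack" gets turned into a vector (faked by hand here)
# and lands nearest the entity most searchers actually mean.
print(interpret_query([0.85, 0.75, 0.15]))  # -> Barack Obama (US President)
```
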
So, in light of all of these helpful advancements, what’s keeping computers from completely taking over humanity?  Google researchers gave Tech Insider the scoop, which we have expanded upon below!

#1 Machines lack senses.

Humans have survived to this day because of our senses:  hearing, vision, touch, taste, and smell.  We learned how to deal with stimuli in our environments by touching hot and cold surfaces, smelling fresh and rotten foods, tasting the edible and the inedible, and seeing the behaviors of both friends and foes.  Machines do not have that history, and their only “experience” of the world must be crafted and fed to them by humans.

For example, if your apartment caught fire, you would smell the scent of it burning and you would run.  Chances are your smartphone, even equipped with the ever-charming Siri, would not.  It could take a video or photo of the fire, but that action must be prompted by its human master. 

A true artificial intelligence would have fancy olfactory sensors that could detect air toxins from the smoke, see through the smoke with its Zero-Smog-O-Vision, and save all of the humans inside the burning building without any damage to itself. 

A screenshot of the robot Sonny from I, Robot (2004)
“I did it to save you…”   
Image from I, Robot.

Or perhaps the AI started the fire in the first place, because the building was observed to be too old or unsafe for human dwelling and needed to be eliminated…

Either way, a true AI interacts with and responds to the world.  It does not just passively observe.

#2 Machines, on their own, are just not motivated to learn.  

Computers require human teachers in order to ingest data and to know what to do with that data.  Little kids can make a stuffed bear hold a teacup, but will it drink its make-believe tea if they don’t move its arm?  Furthermore, just because these imaginative kids go through the teatime process once doesn’t mean the bear will know what to do the next time around.  The same concept applies to computers that lack advanced machine learning capabilities.

A photograph of two little kids coloring with colored pencils.
Image from Video Hive.
According to Tech Insider, the most successful form of machine learning has been “supervised learning,” which involves a process similar to a teacher pointing to an item and naming it for the student.  However, instead of having a history of these learning moments to create insightful inferences and “fill in the gaps” as a human does, a computer must start from scratch every time it learns a new task.  This means machines need their human teachers almost constantly.
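
To make that “point and name” idea concrete, here is a minimal supervised-learning sketch in Python using scikit-learn (my choice of library, not one named by Tech Insider or Google).  A “teacher” hands over a few labeled examples, the model learns from them, and it can then name a new example it hasn’t seen.

```python
# A toy "supervised learning" session: the human teacher points at
# examples and names them; the model learns to name new ones.
# Requires scikit-learn; the features and labels are invented.
from sklearn.neighbors import KNeighborsClassifier

# Each example is [roundness, redness] on a 0-to-1 scale, and the
# teacher supplies the name that goes with it.
examples = [[0.9, 0.90], [0.8, 0.95], [0.3, 0.10], [0.2, 0.20]]
labels   = ["apple",     "apple",     "banana",    "banana"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(examples, labels)          # the "teaching" step

print(model.predict([[0.85, 0.8]]))  # -> ['apple']

# The catch: this model only knows apples and bananas.  Teaching it a
# new task (or a new fruit) means gathering fresh labeled examples and
# fitting again from scratch; it won't fill in the gaps on its own.
```
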

Take the image above, showing young children coloring.  Once you show a child how to color—staying inside the lines, drawing clear images, etc—they can pretty much be left on their own to doodle, create masterpieces, or leave their crayons on the floor to play with their toys.  Why’d you make them color a stupid pony, anyway?!


If you draw the first line for a computer, it could perhaps copy that exact line perfectly, mirror it, or draw it perpendicular to the first.  However, it couldn’t create something on its own. 
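
For the curious, here is a tiny, purely illustrative Python sketch of that difference: the program applies exactly the transformations it was told to apply (copy, mirror, rotate to perpendicular) and nothing more.  The segment and the rules are made up for the example.

```python
# A computer follows drawing rules exactly as given; nothing here ever
# decides to draw something new.  The segment and the transformations
# are made up for the example.

def copy_line(segment):
    """Reproduce the segment exactly."""
    return [tuple(point) for point in segment]

def mirror_line(segment):
    """Mirror the segment across the y-axis."""
    return [(-x, y) for x, y in segment]

def perpendicular_line(segment):
    """Rotate the segment 90 degrees around its first endpoint."""
    (x0, y0), (x1, y1) = segment
    dx, dy = x1 - x0, y1 - y0
    return [(x0, y0), (x0 - dy, y0 + dx)]

first_line = [(0, 0), (2, 1)]
print(copy_line(first_line))           # [(0, 0), (2, 1)]
print(mirror_line(first_line))         # [(0, 0), (-2, 1)]
print(perpendicular_line(first_line))  # [(0, 0), (-1, 2)]
```
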

WALL-E from the Pixar film.
Aw, sweet little WALL-E.  A junker robot like this would never actually attach value to certain objects, especially not to cheesy musical numbers or female-gendered robots.  
Image from Heroes Wikia.

Simply put, computers aren’t curious.  They just do what you tell them to do, and it’s that lack of motivation that keeps them from doing other things like, say, contemplating human destruction.

#3 Machines are NOT conscious.

This is perhaps the most important debunker of all AI fears.  Many films, video games, and books feature artificially intelligent characters with an agenda or feelings.  Often, the machine wants to learn or wants to be human.  This would imply a conscious entity, one aware of its own existence and, more importantly, of its difference from humanity.

A still of Michael Fassbender playing David 8.
Synthetic David from Prometheus found human dreams “inspiring.”  
According to computer scientist Stuart Russell, “no one has a clue” how to program consciousness into a computer because it is such an elusive part of the human mind.  Google researcher Geoffrey Hinton further argues that consciousness is “no more useful than the concept of ‘oomph’ for explaining what makes cars go… that doesn’t explain anything about how they work.”  If we don’t even know how our own consciousness works, how can we program a computer to understand it, much less have it?

In Conclusion

So, in short, computers cannot think for themselves, though Google does not seem too disturbed by moving in that direction.  Given where robotics and AI tech are today, I would say we have nothing to worry about.  If robots were going to take over the world, it would be entirely the fault of the human beings programming them and actively directing them to do so. 

Thankfully, Google works towards bettering human existence as opposed to halting it.  The company’s machine learning team, for now, has its sights set on self-driving cars, which should curb the very human fault of terrible driving.  By detecting pedestrians and other cars in order to avoid them, self-driving cars should make for a safer driving experience all around.  Just don’t expect their adorable little Lexuses to talk to you like KITT from Knight Rider.

Are you paranoid about the AI takeover or are you excited for the advancements in machine learning?  Let the Tek Team know in the comments!
