Artificial Intelligence Problems
It seems that every day we hear about another company developing a smarter artificial intelligence system.
And heavyweights like Bill Gates, Elon Musk, and even renowned physicist Stephen Hawking have begun to express concern about this headlong rush.
According to Bill Gates in a Reddit interview:
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well,” wrote Gates.
“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Musk spoke out in October 2014 during an interview at the AeroAstro Centennial Symposium, telling attending students that they should be concerned about how the technology industry approaches AI advances that are sure to happen in the future.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
In a December interview, Professor Hawking went further.
“The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
I don’t know about you, but if people like these, among the smartest minds of our age, are expressing concern, maybe we should be concerned too.
How We Use Artificial Intelligence Now
Once just a gleam in the eye of moviemakers and authors, artificial intelligence, or AI, is starting to become a reality. And there is no doubt that it will become a larger presence in our lives.
Over the last few decades AI has become a larger part of our lives, often in ways we don’t even notice.
GPS in our cars and phones uses rudimentary AI to search through the thousands of possible routes to a destination and suggest the best one for us to take.
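The route search described above can be illustrated with a classic shortest-path algorithm. Below is a minimal sketch in Python using Dijkstra's algorithm on a toy road network; the place names and travel times are invented for illustration, and real navigation systems work on far larger graphs with more sophisticated heuristics:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the lowest-cost route through a road graph.

    graph maps each node to a list of (neighbor, travel_minutes) pairs.
    Returns (total_minutes, route) or (float('inf'), []) if unreachable.
    """
    queue = [(0, start, [start])]  # (cost so far, current node, route taken)
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, route + [neighbor]))
    return float('inf'), []

# Toy road network (hypothetical): two ways to get from Home to Work.
roads = {
    "Home":    [("Highway", 10), ("MainSt", 5)],
    "Highway": [("Work", 5)],
    "MainSt":  [("Bridge", 4)],
    "Bridge":  [("Work", 8)],
}

# The highway route (15 min) beats the surface-street route (17 min).
print(shortest_route(roads, "Home", "Work"))
```

A real GPS unit does essentially this, just with millions of road segments and costs that account for traffic, speed limits, and turn restrictions.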
Our smartphones are getting better at understanding our spoken commands and virtual assistants like Siri, Cortana, and Google Now are getting better at anticipating and understanding our needs.
AI algorithms are used to detect and recognize faces in the photos we take with our phones.
Internet search engines like Google and Bing use AI systems to interpret queries and serve hundreds of millions of search results to people every day.
Hospitals are even using AI algorithms to search through the massive amounts of patient data that they have collected to calculate which patients could have complications and which medicines could have serious side effects.
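The kind of patient screening described above can be sketched as a simple rule-based risk score. Everything in this sketch is invented for illustration (the field names, weights, and threshold are assumptions); real hospital systems learn such weights from large collections of patient data rather than hard-coding them:

```python
def complication_risk(patient):
    """Toy risk score: add up weighted flags from a patient record.

    The fields and weights below are hypothetical; a real system would
    fit them statistically to historical patient outcomes.
    """
    score = 0
    if patient.get("age", 0) >= 65:
        score += 2  # advanced age
    if patient.get("diabetic"):
        score += 2  # chronic condition
    if patient.get("recent_surgery"):
        score += 3  # post-operative complications are common
    if len(patient.get("medications", [])) >= 5:
        score += 1  # many drugs raises the chance of interactions
    return "high risk" if score >= 4 else "low risk"

print(complication_risk({"age": 70, "diabetic": True}))
print(complication_risk({"age": 30, "medications": ["aspirin"]}))
```

The point of the sketch is the shape of the computation, not the medicine: an algorithm scans structured records and flags the cases a clinician should look at first.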
Military Uses Of Artificial Intelligence
Right now the U.S. military is testing robots with limited AI that carry out surveillance and could one day even participate in armed military actions.
Being a soldier is a dangerous job in general, but some of the tasks a soldier has to do are more dangerous than others. Passing through minefields, deactivating unexploded bombs, and clearing out buildings occupied by hostiles are just a few.
But what if, instead of sending humans into these and other dangerous situations, we could send robots? That way, if something went wrong, the only loss would be hardware and money rather than human lives.
Drones are another area where AI could be used. There is a shortage of trained drone operators, so AI-equipped drones that could fly themselves and complete missions are an appealing idea.
This is why all branches of the U.S. military have been busy developing all sorts of robotic systems, both smart and dumb, with some of them even being field-tested on the front lines in Iraq right now.
Concerns About Artificial Intelligence
There are several specific concerns about artificial intelligence as it continues to develop and our usage of it evolves.
In an article written by Eric Horvitz, director of the Microsoft Research lab, and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, they cite three main areas of concern.
The first is programming errors in the AI software itself. AI software is incredibly complex, and the possibility of errors in how it behaves should be taken seriously.
Their second concern is cyber-attacks on AI systems by criminals, terrorists, and even government-backed hackers who aim to gain control of the AI and turn the robot, drone, or program to their own purposes rather than what it was intended to do.
The third concern is what they call "Sorcerer's Apprentice" scenarios, in which AI systems respond to human instructions in unexpected (and possibly dangerous) ways.
All of this may seem far in the future, but it is in fact already part of our lives.
I think we need to start thinking about how artificial intelligence should be used before it's too late.
Doing something just because we can isn't a good enough reason to risk the possible consequences.