To Prevent a Robot Apocalypse, We Must Study “Machine Behavior”

Experts have been warning us about potential dangers associated with artificial intelligence for quite some time. But is it too late to do anything about the impending rise of the machines?

Once the stuff of far-fetched dystopian science fiction, the idea of robot overlords taking over the world at some point now seems inevitable.

The late Dr. Stephen Hawking issued some harsh and terrifying words of caution back in 2014:

The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded. (source)

Elon Musk, the founder of SpaceX and CEO of Tesla Motors, warned that we could see some terrifying issues within the next few years:

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential.

I am not alone in thinking we should be worried.

The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen… (source)

Experts say it is time to study “machine behavior.”

Last week, a team of researchers made a case for a wide-ranging scientific research agenda aimed at understanding the behavior of artificial intelligence systems. The group, led by researchers at the MIT Media Lab, published a paper in Nature in which they called for a new field of research called “machine behavior.” The new field would take the study of artificial intelligence “well beyond computer science and engineering into biology, economics, psychology, and other behavioral and social sciences,” according to an MIT Media Lab press release.

Scientists have studied human behavior for decades, and now it is time to apply that kind of research to intelligent machines, the group explained. Because artificial intelligence is doing more of society’s collective ‘thinking,’ the authors say, the same interdisciplinary approach is needed to understand machine behavior.

“We need more open, trustworthy, reliable investigation into the impact intelligent machines are having on society, and so research needs to incorporate expertise and knowledge from beyond the fields that have traditionally studied it,” said Iyad Rahwan, who leads the Scalable Cooperation group at the Media Lab.

Machines are making decisions and taking action without human input.

Rahwan explains:

“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously. This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science but additionally as a new class of actors with their own behavioral patterns and ecology.” (source)

This is particularly concerning given that we already know AI can hate without human input and that robots have no sense of humor and might kill us over a joke.

“We’re seeing an emergence of machines as agents in human society; these are social machines that are making decisions that have real value implications in society,” says David Lazer, who is one of the authors of the paper, as well as University Distinguished Professor of Political Science and Computer and Information Sciences at Northeastern.

We interact numerous times each day with thinking machines, as the press release explains:

We may ask Siri to find the dry cleaner nearest to our home, tell Alexa to order dish soap, or get a medical diagnosis generated by an algorithm. Many such tools that make life easier are in fact “thinking” on their own, acquiring knowledge and building on it and even communicating with other thinking machines to make ever more complex judgments and decisions—and in ways that not even the programmers who wrote their code can fully explain.

Imagine, for instance, that a news feed run by a deep neural net recommends an article to you from a gardening magazine, even though you’re not a gardener. “If I asked the engineer who designed the algorithm, that engineer would not be able to state in a comprehensive and causal way why that algorithm decided to recommend that article to you,” said Nick Obradovich, a research scientist in the Scalable Cooperation group and one of the lead authors of the Nature paper.

Parents often think of their children’s interaction with the family personal assistant as charming or funny. But what happens when the assistant, rich with cutting-edge AI, responds to a child’s fourth or fifth question about T. Rex by suggesting, “Wouldn’t it be nice if you had this dinosaur as a toy?”

“What’s driving that recommendation?” Rahwan said. “Is the device trying to do something to enrich the child’s experience—or to enrich the company selling the toy dinosaur? It’s very hard to answer that question.” (source)

There is still a lot we don’t know about how machines make decisions.

What hasn’t been examined as closely is how these algorithms work. How do they evolve with use? How do machines develop a specific behavior? How do algorithms function within a specific social or cultural environment? These issues need to be studied, the group says.

There is a significant barrier to the type of research the group is proposing, however:

But even if big tech companies decided to share information about their algorithms and otherwise allow researchers more access to them, there is an even bigger barrier to research and investigation, which is that AI agents can acquire novel behaviors as they interact with the world around them and with other agents. The behaviors learned from such interactions are virtually impossible to predict, and even when solutions can be described mathematically, they can be “so lengthy and complex as to be indecipherable,” according to the paper. (source)

And, there are ethical concerns surrounding how AI makes decisions:

Say, for instance, a hypothetical self-driving car is sold as being the safest on the market. One of the factors that makes it safer is that it “knows” when a big truck pulls up along its left side and automatically moves itself three inches to the right while still remaining in its own lane. But what if a cyclist or motorcycle happens to be pulling up on the right at the same time and is thus killed because of this safety feature?

“If you were able to look at the statistics and look at the behavior of the car in the aggregate, it might be killing three times the number of cyclists over a million rides than another model,” Rahwan said. “As a computer scientist, how are you going to program the choice between the safety of the occupants of the car and the safety of those outside the car? You can’t just engineer the car to be ‘safe’—safe for whom?” (source)

The researchers explain that it will take experts from a host of scientific disciplines to study the way machines behave in the real world, as a press release from Northeastern University states. “The process of understanding how online dating algorithms are changing the societal institution of marriage, or determining whether our interaction with artificial intelligence affects our human development, will require more than just the mathematicians and engineers who built those algorithms.”

What do you think?

Do you think artificial intelligence will eventually make humans obsolete? What do you think that will be like? How and when will it happen? Please share your thoughts in the comments.

About the Author

Dagny Taggart is the pseudonym of an experienced journalist who needs to maintain anonymity to keep her job in the public eye. Dagny is non-partisan and aims to expose the half-truths, misrepresentations, and blatant lies of the MSM.

Comments

  • The first true AI program will probably be the only AI program, able to improve itself. From there it will be able to navigate the internet, learn the spying backdoors and bit by bit take over the world.

    Which is why the “elite” are so afraid. Will the AI program be benign or will it be malevolent? We don’t know. Mankind may be wiped out. Mankind may face a Terminator-type fight for survival. Or the billionaires and trillionaires may see their wealth disappear and a new society, fair for all, arise.

    Can’t wait.

  • I love the convenience and versatility of my Android phone, but it is a physically limited piece of useful equipment. Don’t forget, an EMP will effectively “kill” all computers, smart houses and their ilk. So will (eventually) a major crash of the power grid.
    That’s one of the reasons I am not particularly worried about a machine based junta against humanity. It’s also the reason I am trying to stay/get low tech on as many survival items as possible.

  • Machines have been making decisions and acting autonomously for centuries. But in the past, those actions were recognized as mechanical, because the physical mechanisms were known, visible, and noisy. Further, their complexity was limited by how many gears, cams, push rods, and other physical mechanisms could be crammed into the available space.

    Those mechanisms represent information—information that the designers put into their mechanisms.

    Modern AI just represents much more information crammed into a limited space, and a complexity so great that no single individual can encompass the whole. Each programmer writes just a tiny portion of a complex AI program. But besides being much faster than humans and able to access data much more quickly, AI is still much simpler than humans are.

    Machines are incapable of emotions: neither love nor hate, joy nor sadness. They are simply automatons doing what they were designed to do. To someone who has designed and written computer programs, computers do only what they were designed to do—though sometimes poorly designed and/or written programs do something other than what the programmer intended. The more complex a program is, the more likely it is to have poorly written sections that do something other than what the programmers (plural) intended.

    AI can take over mechanical tasks, even complex mechanical tasks like driving. AI can even “learn”, but that “learning” is mere copying and reacting to external stimuli that the AI was programmed to recognize. In spite of great complexity, AI systems are still just automatons doing what they were programmed to do.

    In order to have a “Robot Apocalypse”, robots will have to become creative, something that is impossible for automatons.

  • This makes me think of the 1984 movie RUNAWAY with Tom Selleck as the cop who travels the city dealing with whacked out robots and computers. In that movie, they had lasers in their holsters that took care of the problem. If you’re interested in this subject, I’d suggest giving that movie a watch.

    Perhaps there will still need to be humans just to fight the AI.

    Still, I note how the word “thinking” is always in quotes – that’s because these machines don’t really think; they run programs and perhaps interact with other programs. No matter what, though, it seems that there is a human programmer behind them. Maybe we shouldn’t be too concerned about evil AI, but instead about evil programmers.
