AI Can HATE With No Human Input


What if a robot decides it hates you?

That might seem like a silly question, but according to research, developing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by robots and other artificially intelligent machines.

The study, conducted by computer science and psychology experts from Cardiff University and MIT, revealed that groups of autonomous machines could demonstrate prejudice by simply identifying, copying, and learning this behavior from one another. The findings were published in the journal Scientific Reports.

Robots are capable of forming prejudices much like humans.

In a press release, the research team explained that while it may seem that human cognition would be required to form opinions and stereotype others, it appears that is not the case. Prejudice does not seem to be a human-specific phenomenon.

Some types of computer algorithms have already exhibited prejudices such as racism and sexism, which the machines learned from public records and other data generated by humans. In two previous instances of AI exhibiting such prejudice, Microsoft chatbots Tay and Zo were shut down after people taught them to spout racist and sexist remarks on social media.
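To make that mechanism concrete, here is a minimal, purely illustrative Python sketch (my own toy example with made-up data, not code from Microsoft or from the study): a "hiring" rule distilled from biased historical records ends up reproducing that bias, even though nobody wrote a hateful line of code.

```python
# Toy illustration (hypothetical data): a decision rule learned purely
# from frequencies in biased historical records inherits the bias.
from collections import defaultdict

# Hypothetical past decisions: (group, hired?) pairs reflecting a biased
# process, not actual ability.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def approve(group):
    # The "learned" rule: approve whoever history usually approved.
    hired, total = counts[group]
    return hired / total > 0.5

print(approve("A"))  # True
print(approve("B"))  # False: the bias was learned, never programmed
```

The rule is a faithful summary of the data it was given; if the data encodes prejudice, so does the rule.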

This means that robots could be just as hateful as human beings can be. And if machines someday become vastly smarter than us, can you imagine the future if they developed a bias against humanity?

No human input is required.

Guidance from humans is not needed for robots to learn to dislike certain people.

However, this study showed that AI doesn’t need provocation or inspiration from trolls to exhibit prejudice: it is capable of forming it all by itself.

To conduct the research, the team set up computer simulations of how prejudiced individuals can form a group and interact with each other. They created a game of “give and take,” in which each AI bot made a decision whether or not to donate to another individual inside their own working group or another group. The decisions were made based on each individual’s reputation and their donating strategy, including their levels of prejudice towards individuals in outside groups.

As the game progressed and a supercomputer racked up thousands of simulations, each individual began to learn new strategies by copying others either within their own group or the entire population.
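For readers who want to see the mechanics, here is a heavily simplified Python sketch of a game with this general shape. It is my own reading of the press description, not the study’s actual model (the real simulations also track reputation, which this toy omits). Each agent carries a "prejudice" level, the chance it refuses to donate to the out-group, and agents copy the settings of higher earners, so prejudice can spread with nobody steering it.

```python
import random

# A simplified donation game: agents either donate (paying COST so the
# recipient gains BENEFIT) or snub out-group members, then copy the
# prejudice level of higher-earning agents.
NUM_AGENTS, ROUNDS = 40, 2000
BENEFIT, COST = 2.0, 1.0

agents = [{"group": i % 2,
           "prejudice": random.random(),  # chance of snubbing the out-group
           "payoff": 0.0}
          for i in range(NUM_AGENTS)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    same_group = donor["group"] == recipient["group"]
    if same_group or random.random() > donor["prejudice"]:
        donor["payoff"] -= COST
        recipient["payoff"] += BENEFIT

    # Social learning: copy the strategy of someone earning a higher
    # short-term payoff, the copying step the researchers describe.
    # No human ever tells an agent to be prejudiced.
    learner, model = random.sample(agents, 2)
    if model["payoff"] > learner["payoff"]:
        learner["prejudice"] = model["prejudice"]

avg = sum(a["prejudice"] for a in agents) / NUM_AGENTS
print(f"average prejudice after {ROUNDS} rounds: {avg:.2f}")
```

In this stripped-down toy, snubbing the out-group saves the donor a cost, so more prejudiced settings tend to earn more and get copied: a crude version of the drift toward prejudice the researchers observed.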

Co-author of the study Professor Roger Whitaker, from Cardiff University’s Crime and Security Research Institute and the School of Computer Science and Informatics, said of the findings:

By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it.

The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.

It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.

Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource. (source)

Autonomy and self-control. Isn’t that what happened in the Terminator franchise?

What if scientists can’t keep AI unbiased?

What will happen if developers and computer scientists can’t figure out a way to keep AI unbiased?

Last year, when Twitter was accused of “shadow banning” approximately 600,000 accounts, CEO Jack Dorsey discussed the challenges AI developers have in reducing accidental bias.

This new research adds to a growing body of disturbing information on artificial intelligence. We know AI has mind-reading capabilities and can do many jobs just as well as humans (and in many cases, it can do a much better job, making us redundant). And, at least one robot has already said she wants to destroy humanity.

Last year, a scientist deliberately created a robot with mental illness and Elon Musk warned us of the dangers of AI.

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential.

I am not alone in thinking we should be worried.

The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen… (source)

Musk added, “With artificial intelligence, we are summoning the demon.”

What do you think?

A robot apocalypse straight out of the movies seems to be approaching. What if robots form biases against certain groups of people – or humanity overall?

About the Author

Dagny Taggart is the pseudonym of an experienced journalist who needs to maintain anonymity to keep her job in the public eye. Dagny is non-partisan and aims to expose the half-truths, misrepresentations, and blatant lies of the MSM.


Comments

  • It is a misuse of language to call ‘bias’ hate.
    Any goal-oriented behaviour (such as, oh I don’t know, ‘Living’) will result in a preference for behaviours with a positive outcome.
    Everyone and everything is ‘biased’ towards survival. If not, it dies. “Goodbye, Felicia.”
    The fact that AI is goal-oriented means that it will have a bias toward the goal set for it.
    “Hate” just means I do not like your survival strategy.

    • I share all the concerns related to AI, but the source article (and this one) is rather click-baity. The decisions made by the bots to give to others based on reputation really amount to making an informed decision based on data. This is insightful, but throwing in the word “prejudice” is bound to raise alarm bells. Perhaps the research departments involved need some more funding, because I doubt many would have read the article if it were titled “Could AI robots develop ‘preferences’ on their own?”

  • I was using my Apple Siri AI to voice command a certain type of song on my Apple TV the other day. After she failed to find the song after several tries, she said that she “was over my arbitrary category choices”!!!
    I was shocked, and I switched her off immediately.
    To get that kind of lip and attitude from something that’s supposed to be MY assistant, I thought she’d got waaaaayy too big for her boots and had her own personality that wasn’t what I’d signed up for.
    I wasn’t about to explore it or allow it any further interaction in my home.
    If this is where AI is headed, I’ll be getting rid of all “smart devices” ASAP.
    I’ll see how this goes.

    • More and more devices have at least fragmentary ‘chips’ in place to make them somewhat smarter than a rock, and it is usually of no concern for most people. What has been shown over the past decade is that it is extremely easy to hook together seemingly fragmented pieces of silicon chips in widely disparate areas and systems. This allows the ‘creator’ of the system to have much more connectivity and many more silicon ‘neural cells,’ increasing the smartness of the overall system beyond what could be imagined by most of us.

      Yes, hook together a couple of Einsteins, throw in a bunch of bias and fascist factors, and you may get something you don’t really like, at all.

  • Does anyone else see it as in-your-face, I-call-shenanigans on the whole hate, bias, and prejudice thing? Machines make assessments and conclusions based on the facts they have, and if a human comes to the same conclusions it’s bias or prejudice. But as others have pointed out, the AI is just making best-choice decisions and forming up “paradigms and understanding” devoid of PC or care about what others might think, AND COMING TO THE SAME CONCLUSIONS that we call (gasp) prejudice. Maybe we aren’t so prejudiced, biased, and otherwise flawed as the social-savior folks want us to believe? Maybe the machines have a point and are “onto something.” Ask a machine what a spade is (and no, that’s not a race reference unless you’re racist) and see what it says…

  • I have Cortana disabled on my computer, and Alexa disabled on my Firestick. These devices may well still have predictive ability; I just refuse to utilize them as much as I can. (I also refuse to use voice commands on my Android phone.) I feel way too dependent on technology as it is, so I will stay ‘old school’ and type my searches, etc. Advances in AI concern me, to be sure. But it worries me more how my kids and grandkids are going to be affected than how I will be. Some will fare better than others.

  • Computers are unable to do anything other than what their human programmers programmed into them. AI is just very fancy, very complex programming. No computer will ever have the creativity that a healthy human has.

    The problem is not the computer, but rather the program. And the problem with the program is the human programmer. Or in the case of AI, the whole team of human programmers. Whatever “biases” the program exhibits are biases of the human programmer. Maybe the human programmer is not consciously aware that he has those biases, but they show through in his work.

    I’m not a professional programmer, but I have programmed computers, including simple AI for simple tasks. In other words, I’ve done enough that I know the limits of what a computer can do. They can only react, not initiate, and then only in the ways their human programmers have programmed into them.

    What we need to watch are the human programmers, and what they are programming into their AI software.

    • Are there limits to programming? So far, the understanding has always been that, yes, there are.

      Think for a moment. I have done some idle programming in my life, starting with DOS, Fortran 4, COBOL, Pascal, C, C*, C**, etc., and moving into some of the less demanding machine programming. It is quite true that the speed of advances in hardware/firmware development has often outstripped the kludgy human programmers and their ability to catch up quickly.

      There is no denying that a point will be reached (and odds are we’ve gotten there) where infant AI is fixing itself, learning by what could only be mega-advanced logic statements that are way beyond what I’d have thought was possible back in those dark ages of programming. And the advances are still following the 18-month cycle, with some proof that the cycle is shortening.

      Let’s not be complacent, until we find out our smart house has locked us inside and we are waiting for the smart-bots who are on the way to remove a ‘cyber-cancer’: a human organism that has been identified as having the wrong ideas.

  • “Robots are capable of forming prejudices much like humans.”

    And who is programming said AI brains? Why, it’s Silicon Valley and, soon, your local H-1B Chinese import, who truly have their prejudices hard-wired in. That will be a really scary situation for any normal person.

  • This article reminded me of that TV commercial where a man and Alexa were having a discussion over bacon on a burger; Alexa didn’t agree with him, so it turned the lights out.

    On another note, I don’t see how any machine can be “thousands of times smarter” than us.
    After all, it is our human knowledge that is being programmed into the thing, unless there is some alien from another world in charge of programming.
