A robot walks into a bar and takes a seat. The bartender says, “We don’t serve robots.”
The robot replies, “Someday – soon – you will.”
As if the prospect of robot overlords eventually taking over weren’t troubling enough, we now have another reason to be concerned about artificial intelligence.
Robots don’t understand humor.
A new report from the Associated Press has the details.
“Artificial intelligence will never get jokes like humans do,” Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce, told the AP. “In themselves, they have no need for humor. They miss completely context.”
Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, elaborated:
“Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp. It’s because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn’t have these real-world experiences to draw on. It only knows what you tell it and what it draws from.” (source)
Puns are based on different meanings of similar-sounding words, so some computers can get them (and some can generate them), Purdue University computer scientist Julia Rayz explained. “They get them — sort of,” Rayz said. “Even if we look at puns, most of the puns require huge amounts of background.”
Rayz has spent 15 years trying to get computers to understand humor, and at times the results were, well, laughable. She recalled a time she gave the computer two different groups of sentences. Some were jokes. Some were not. The computer classified something as a joke that people thought wasn’t a joke. When Rayz asked the computer why it thought it was a joke, its answer made sense technically. But the material still wasn’t funny, nor memorable, she said. (source)
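The AP article doesn’t describe Rayz’s actual system, but the basic setup she recounts — training on labeled jokes and non-jokes, then asking the model to label new sentences — can be sketched with a toy Naive Bayes text classifier. Everything below (the sentences, the word-count approach) is invented for illustration, not her method:

```python
from collections import Counter
import math

# Toy training data (invented examples, not Rayz's dataset)
jokes = [
    "why did the chicken cross the road to get to the other side",
    "i used to be a banker but i lost interest",
    "time flies like an arrow fruit flies like a banana",
]
non_jokes = [
    "the meeting is scheduled for three o clock tomorrow",
    "please submit the quarterly report by friday",
    "the train departs from platform four every hour",
]

def train(texts):
    """Count word occurrences across all sentences in one class."""
    counts = Counter()
    for t in texts:
        counts.update(t.split())
    return counts

joke_counts, plain_counts = train(jokes), train(non_jokes)
vocab = set(joke_counts) | set(plain_counts)

def log_prob(words, counts):
    total = sum(counts.values())
    # Laplace smoothing: unseen words don't zero out the score
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(sentence):
    words = sentence.lower().split()
    if log_prob(words, joke_counts) >= log_prob(words, plain_counts):
        return "joke"
    return "not a joke"

print(classify("i lost interest in my banker job"))  # leans "joke"
print(classify("the report is due tomorrow"))        # leans "not a joke"
```

Rayz’s point survives even in this toy: the model can report which word statistics drove its label — an answer that “makes sense technically” — but nothing in those counts captures why a sentence is actually funny.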
There are pros and cons to developing robots that understand humor.
Some experts believe there are good reasons to develop artificial intelligence that can understand humor. They say it makes robots more relatable, especially if you can get them to understand sarcasm. That also may aid with automated translations of different languages, Miller explained.
But some think that might not be such a good idea:
“Teaching AI systems humor is dangerous because they may find it where it isn’t and they may use it where it’s inappropriate,” Hempelmann said. “Maybe bad AI will start killing people because it thinks it is funny.” (source)
Remember Sophia, the robot who said (while looking shockingly amused) that she would “destroy humans”? When “her” creator asked, “Do you want to destroy humans? Please say no,” Sophia promptly answered, “Okay, I will destroy humans,” as casually as if she were agreeing to grab a pizza for dinner.
Allison Bishop, a Columbia University computer scientist who is also a comedian, told the AP she agrees with all the experts who have been warning us that AI will surpass human intelligence someday.
“I don’t think it’s because AI is getting smarter,” Bishop joked, before adding: “If the AI gets that, I think we have a problem.” (source)
Recently it was revealed that robots probably have the capacity to develop prejudices toward others and even to hate. One recent study showed that AI doesn’t need provocation or inspiration from trolls to exhibit prejudice: it is capable of forming biases all by itself.
Unlike humor, prejudice does not seem to be a human-specific phenomenon.
Experts have warned about the dangers of artificial intelligence for years.
The late Dr. Stephen Hawking was among the most prominent voices sounding the alarm.
In 2014, he told the BBC:
The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded. (source)
Elon Musk, the founder of SpaceX and Tesla Motors, has spoken about the possible terrifying issues we may see in the next few years:
The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.
The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential.
I am not alone in thinking we should be worried.
The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen… (source)
Regarding the news about AI not having a sense of humor, I am inclined to agree with Lou Milano’s take:
What’s not nice, is that robots will kill people if we try and teach them humor, nothing funny about that I’ll tell ya. This is another example in an ever growing list of reasons that robots suck so hard and need to be killed. They don’t get humor, have no need for it and should we insist they learn it, they may go on a killing spree.
I’m going to keep shouting it until you all get woke. We have to stop these robots while we can, we have to stop them now before they are afforded civil rights. If you think that’s a ridiculous notion you should know that some robotics experts say that day is not far off.
Preserve humanity, kill a robot today. (source)
What do you think?
Do you think that AI not having a sense of humor is no big deal? Or do you think it could have disastrous consequences? Please share your thoughts in the comments.
About the Author
Dagny Taggart is the pseudonym of an experienced journalist who needs to maintain anonymity to keep her job in the public eye. Dagny is non-partisan and aims to expose the half-truths, misrepresentations, and blatant lies of the MSM.
People think the “Three Stooges” are funny, along with the “Keystone Cops”, Dumb and Dumber, Jim Carrey, Tom and Jerry, the Road Runner, et al., and every non-political clown ever.
Why wouldn’t cutting someone to pieces be funny?
I can totally picture politicians and social justice warriors insisting that robots endowed with artificial intelligence deserve equal opportunity and civil rights, especially the vote. If you can’t win an election any other way, manufacture voters. Isaac Asimov wrote an interesting science fiction story (“The Last Question”) in which he postulated that God was merely a human-created artificial intelligence residing in hyperspace. Its human creators instruct it to figure out a way to reverse entropy, and when it does, it announces, “Let there be light!” …. and you know how the rest of that story goes, LOL.
Do I know that robots have no sense of humor? No, but if you hum a few bars, I’ll fake it (ba-da-bing!)
Of course! This makes perfect sense.
Robots are programmed by liberals & liberals have no sense of humour.
They’ll probably be outraged by some imagined micro-aggression & just CRUSH, KILL & DESTROY anything in their humourless path… much like their makers.
Humans replaced by super-intelligent machines: Extinction or Evolution?
A robot walks into a bar. “What can I get you?” the bartender asks. “I need something to loosen up,” the robot replies. So the bartender serves him a screwdriver.
A robot walks into a bar. It goes CLANG.
Even if a robot ever read any of John Steinbeck, could it possibly understand the alleged humor of a novelized version of Joe Biden’s mistreatment of women being titled “The Gropes of Wrath”?
We have a pretty dubious track record in rushing to implement technology without first taking precautions to understand and mitigate potential problems. Case in point — connecting critical infrastructure to the internet without considering how an enemy might take it down. Aside from the obvious such as electrical power generation, what about a coordinated hacking attack against self-driving vehicles? We need to err on the side of extreme caution before ever allowing a true artificial intelligence to access the internet, or even to control any device beyond its host computing environment. If the genie ever gets out of the bottle, good luck trying to put it back.
Ian Malcolm from Jurassic Park got it right: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
This article makes me think of two examples from fiction: the crazy sentient train/AI named Blaine from Stephen King’s Dark Tower series, and the AIs from the TV series Person of Interest. In both cases, an AI declares itself to be God and decides to annihilate all forms of “inferior” human life. Oh yeah – serial killers often have no ability to understand or enjoy humor. I think it has something to do psychologically with a lack of empathy.