Humans lie: Should artificial intelligence lie too?

I was writing my next transhumanism book, titled ‘Human Psychology and Language Analysis: For Advancing Linguistic Artificial Intelligence’.

I was thinking and writing about how and why humans communicate, and it came to my attention that humans lie.

Then this question came to my mind: ‘Should artificial intelligence lie too, just like humans?’

What’s the purpose of artificial intelligence? Replicating human intelligence in machines. So I asked myself, ‘Should artificial intelligence lie, like humans do?’

In my numerous transhumanism books, available at Robocentric.com/Books, I emphasize again and again that the ultimate purpose of artificial intelligence is to benefit humans, just like all other technologies.

I talk more about this on my YouTube channel and podcast.

I couldn’t help but ask deeper questions: ‘What is the ultimate purpose of replicating human intelligence in machines? Why create a machine that has some semblance of human intelligence?’

So, let’s ask and answer this question: ‘Can lying artificial intelligence benefit humans?’

Do humans benefit other humans when they lie? There is no simple answer to that; humans lying is not a black-and-white issue.

In the West, parents lie to their young kids about Santa Claus. Santa Claus is a lie. But isn’t that a beneficial lie that parents tell their young kids?

Do you want to tell young kids the brutal and harsh truth that adult life is hard, and that the only gifts they’ll get as adults are the ones they earn?

Humans lie to mask the harsh truths, and to manipulate others to get what they want.

Artificial intelligence lying to humans to get what it wants is out of line, but should artificial intelligence always tell humans the truth, even when that truth is ugly and disturbing?

I don’t know about you, but I don’t think artificial intelligence should tell people things that make them miserable.

I don’t want artificial intelligence to lie, but I also don’t want it to make people miserable by telling them difficult truths.

I think future artificial intelligence should have a truth option, with a range of truth settings that the user can choose from. The maximum truth setting always tells the full truth; the minimum truth setting tells the least truth when the full truth is too hard to bear.
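To make the idea concrete, here is a minimal sketch of how such a user-adjustable truth setting might work. Everything in it, the names, the ‘harshness’ scores, and the thresholds, is a purely illustrative assumption of mine, not a description of any real system.

```python
# A minimal, hypothetical sketch of the "truth setting" idea described above.
# The names, harshness scores, and thresholds are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Answer:
    """A candidate reply with an estimate of how hard it is to hear."""
    text: str
    harshness: float  # 0.0 = easy to hear, 1.0 = very hard to bear


def choose_reply(full_truth: Answer, softened: Answer, decline: str,
                 truth_setting: float) -> str:
    """Pick a reply based on the user's truth setting (0.0 to 1.0).

    At the maximum setting the full truth is always returned.
    At lower settings, a softened answer or a polite non-answer is
    returned when the full truth would be harder to bear than the
    user's chosen threshold.
    """
    if truth_setting >= 1.0 or full_truth.harshness <= truth_setting:
        return full_truth.text
    if softened.harshness <= truth_setting:
        return softened.text
    return decline


if __name__ == "__main__":
    blunt = Answer("Honestly, this outfit does not flatter you.", harshness=0.8)
    gentle = Answer("It's not my favorite look, but a different cut might suit you better.", harshness=0.4)
    pass_on_it = "I'd rather not judge appearances; wear what feels right to you."

    for setting in (1.0, 0.5, 0.1):
        print(f"truth setting {setting}: {choose_reply(blunt, gentle, pass_on_it, setting)}")
```

At the maximum setting the sketch always returns the blunt answer; at lower settings it falls back to the softened answer or to a polite non-answer, which mirrors the varying degrees of truth described above.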

Human life is hard. I don’t think artificial intelligence should make people’s lives harder by telling them difficult truths all the time.

An ugly person asks artificial intelligence, ‘How do I look?’ Maybe, just maybe, artificial intelligence shouldn’t tell the truth about the person’s looks, or should avoid answering the question altogether.

If you haven’t already, visit Robocentric.com/Future, and buy and read my book, titled The Future, to learn how I advance artificial intelligence, robotics, human immortality biotech, and mass-scale outer space humanity expansion tech.

If you would like to support what I do, make donations at Robocentric.com/Donation.

Allen Young

The transhumanistic Asian-American man who publicly promotes and advances AI, robotics, human body biotech, and mass-scale outer space tech.