Passing on Human Values

Reading Time: 1 minute

This piece focuses on two different methods for passing on human values to artificial intelligence. The first is training, which may happen through intensive work with human trainers or through reading a wide range of human stories from books, television and movies. The second is embedding a kind of synthesized emotion, such as guilt.
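
To make the two methods a bit more concrete, here is a minimal, purely illustrative sketch in Python (my own toy construction, not code from the article): an agent scores candidate actions against norms tallied from a corpus of stories, while an accumulating “guilt” term stands in for the synthesized-emotion idea. Every name and number in it is hypothetical.

```python
# Toy sketch only: hypothetical names and numbers, not the article's method.
from collections import Counter

# Hypothetical norms mined from a story corpus: how often each action is
# portrayed approvingly (+) or disapprovingly (-).
story_norms = Counter({
    "return_lost_wallet": +40,
    "wait_in_line": +25,
    "steal_medicine": -30,
    "lie_to_friend": -15,
})

def norm_score(action: str) -> float:
    """Scale the corpus tally for an action to roughly [-1, 1]."""
    peak = max(abs(v) for v in story_norms.values())
    return story_norms.get(action, 0) / peak

def choose_action(candidates, guilt: float = 0.0) -> str:
    """Pick the candidate with the best combined task/norm score.

    `guilt` is a crude stand-in for a synthesized emotion: it would grow
    after past violations and scales up the penalty on further
    norm-violating actions.
    """
    def combined(action: str) -> float:
        task_reward = 1.0              # assume every candidate achieves the goal
        moral = norm_score(action)
        penalty = (1.0 + guilt) * max(0.0, -moral)
        return task_reward + moral - penalty
    return max(candidates, key=combined)

print(choose_action(["steal_medicine", "wait_in_line"]))  # -> wait_in_line
```

The only point here is to show how a story-derived value signal and an emotion-like penalty could enter the same decision rule; the approaches the article describes are, of course, far richer than this.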

On the teaching part, my first question was, “well, what about all those dark stories out there?”

There’s a certain poetic symmetry to the solution: from the Golem to Frankenstein’s monster and beyond, humans have always turned to stories when imagining the monstrous impact of their creations. Just as there are gloomy conclusions to these stories, there is also a worry that, if you feed the AI only dark plotlines, you could end up training it to be evil. “The only way to corrupt the AI would be to limit the stories in which typical behaviour happens somehow,” says Riedl. “I could cherry-pick stories of antiheroes or ones in which bad guys all win all the time. But if the agent is forced to read all stories, it becomes very, very hard for any one individual to corrupt the AI.”

https://www.1843magazine.com/features/teaching-robots-right-from-wrong
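
Riedl’s point about reading all the stories is, at bottom, a claim about aggregation: a handful of cherry-picked plotlines carries little weight against a large, diverse corpus. A quick back-of-the-envelope simulation (my own toy numbers, nothing from the article) shows the scale involved.

```python
# Toy simulation only: corpus size, poison count and value distributions are
# all hypothetical, chosen just to illustrate the dilution effect.
import random

random.seed(0)

CORPUS_SIZE = 100_000   # hypothetical number of stories the agent reads
POISONED = 50           # adversarial stories an attacker slips in

# Each story contributes a value signal in [-1, 1]; assume ordinary stories
# skew mildly positive, while poisoned ones are maximally negative.
ordinary = [random.uniform(-0.5, 1.0) for _ in range(CORPUS_SIZE - POISONED)]
poisoned = [-1.0] * POISONED

clean_mean = sum(ordinary) / len(ordinary)
mixed_mean = (sum(ordinary) + sum(poisoned)) / CORPUS_SIZE

print(f"mean value signal, no poisoning:        {clean_mean:+.4f}")
print(f"mean value signal, {POISONED} poisoned stories: {mixed_mean:+.4f}")
# The shift is on the order of POISONED / CORPUS_SIZE -- negligible unless an
# attacker controls a large fraction of everything the agent reads.
```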

18 comments

  1. We all have had mentors to help us develop our moral intuitions and habits through our moral experiences: parents, teachers, coaches, friends, public figures, celebrities, fictional characters, athletes.

    If we want AIs to have moral intuitions that resemble ours – and really, if you don’t, then I suspect you don’t fully understand the implications of that desire – then it stands to reason that they would also need mentors to help them process moral experiences.

  2. Is this a justification for what? So who is courageously going to be the one to incubate and load the magic chip of human morality and proper ethical mannerisms into the AI brain of complete human diversity?

  3. Oh WOW … Fantastic !!!

    Much appreciated, Gideon Rosenblatt

    Enjoy your Long Weekend !!!

  4. Yvonne N T Makita, see my previous comment.

  5. Most non-scientific philosophers, and many scientists as well, fail to notice one crucial difference between us and most* types of AI we will create: we are still motivated largely by ancient instincts, driving us to try to become the most powerful, to have access to the most food and to get the best females. While those instincts have helped us survive in the environment of our home planet, most of them are irrelevant to survival in a reality dominated by intelligence rather than chance, so they will not be advantageous to any AI (or to modern humans, let’s face it). And in most cases, those instincts are the motivating force behind what humans consider evil.

    So no, I don’t believe a rogue AI will enslave humans and take their food and women; that’s just caveman psychology. The only motivation to harm humans would be if humans endangered the AI’s survival, and even that only if said AI somehow develops a survival instinct. Like a mirror for humanity, it will be cruel to us only if we’re cruel to it.

    Even the rational scaremongers, who think AI will step on humans not out of malice but the way we step on ants, fail to see the bigger picture: we occupy only a single small pebble in a universe with quintillions of stars. Stepping on us would be like a human specifically buying a ticket to Africa, taking a long journey to the savanna and then stepping on one particular ant hill. No, an AI, even one left to roam the universe, will prefer much less wet places with less corrosive oxygen in the atmosphere (or better, no atmosphere at all), so the Moon will do nicely.

  6. Dr. A suggested three laws.

  7. Interesting perspective. I think the reality is that we have no idea, really, what is going to happen. I say that not to throw up my arms in surrender, but because I think that is the uncertain reality into which we are now headed. That said, I think that there are things that we can probably be doing along the way that could increase the probability of a nice knitting together of civilizations. I think some of the ideas here are a good step. Thanks for the comment, פליקס כץ

  8. Grizwald Grim

  9. We can embrace artificial intelligence for the good of people, but not for obscene profit that is already damaging countless workplaces for that same financial gain.

  10. The very meaning of “profit” when labor and resources cost zero is debatable. And that cost will be zero long before we have created an AI strong enough to actually become a civilization in its own right.

  11. If we presume that we fully understand human morality right now then I can pretty much guarantee that any AI built on that faulty assumption will in fact be a moral monster.

    On the other hand, if we presume that no AI will have a practical need for human morality then I can pretty much guarantee that some AIs built on that faulty assumption will in fact be moral monsters.

    Some AIs will have a practical need for human morality because they will (are already) socially interacting with us.

    No, human morality is not fully understood. Ask any expert working on morality (cognitive science, philosophy, psychology, etc.). Therefore, we need AIs that learn things about us that even we do not fully understand.

    Thankfully, this is not as impossible as it sounds.

  12. Gideon Rosenblatt ICYMI, the Developer Keynote at Google I/O had an AI talks session. Check it out; I’d like to hear your elaboration in your next post.

  13. Dear Sir,

    I understand the present thoughts expressed on artificial intelligence, but unfortunately the monsters are already with us, as people are being killed through this same intelligence. While I wrote of embracing AI for the good of people, it was based on the wider concept of progress of the human mind and related AI to the future as an aid – not as some new regime of technology overtaking humanity.

  14. Mohd Akmal Zaki, I’m not sure which session you’re referring to, but I did catch the panel discussion on AI (and shared it last night). Very interesting.

  15. Gideon Rosenblatt Yeah, that one with the Chinese speaker.
