---
publish: true
aliases: ["Machine Learning Street Talk (MLST)", "#108 - Dr. JOEL LEHMAN - Machine Love Staff Favourite"]
---
#108 - Dr. JOEL LEHMAN - Machine Love [Staff Favourite] - Machine Learning Street Talk (MLST)

![rw-book-cover|200x400](https://wsrv.nl/?url=https%3A%2F%2Fd3t3ozftmdmh3i.cloudfront.net%2Fproduction%2Fpodcast_uploaded_nologo%2F4981699%2F4981699-1615918017811-244a868ada00b.jpg&w=100&h=100)

## Metadata
- Author: **Machine Learning Street Talk (MLST)**
- Full Title: #108 - Dr. JOEL LEHMAN - Machine Love [Staff Favourite]
- Category: #podcasts
- URL: https://share.snipd.com/episode/ee0c8abb-b1aa-4bf3-946c-f25523a4efc7

## Highlights
- The Complexity of Love

  Key takeaways:
  - Love is a complex phenomenon with many different definitions.
  - Love is a skill that can be learned.
  - Love can be focused on the here and now, or on longer trajectories.
  - It is important to balance the transient teenage concept of love with the more mature, long-term understanding of love.

  Transcript:
  Speaker 2
  Well, you said in your paper that love is a suitcase word, so it's an innately complex phenomenon. And the Greeks actually had six words for love: eros, or sexual passion; philia, or deep friendship; ludus, or playful love; agape, love for everyone; pragma, long-standing love; and philautia, the love for oneself. In your paper you spoke about love as a skill, which is what you were just speaking to. And it brings to my mind the enactivist project, which is basically thinking through affordances and using the world as its own representation. But you could also argue it's a revival of pure behaviorism, as it often rejects the notion of mental states. You said in your paper that one possible guiding principle is the idea of love as a practical skill. That is, one view found within both psychotherapy and philosophy is that a critical facet of love goes beyond its emotional experience.
  It's concerned with the learnable art of supporting others in their growth and development. So we're thinking of trajectories rather than the here and now. This distinction is evident when comparing the transient teenage concept of love as a feeling versus the more mature, long-term understanding of love as a series of acts, habits, committed willpower, and so on. With that in mind, do you foresee the need to curate the information that language models are fed, to ensure that they focus on this more mature concept of love and avoid being influenced by the teenage feelings and angst often found on the Internet? And how can we strike a balance between these two competing perspectives of love when training AI systems? ([Time 0:42:59](https://share.snipd.com/snip/df33a5c1-17a3-4994-a1fe-92c4dda48bce))
- How to Be an Effective Altruist: A Guide to Decomposing Love into Respect, Growth, and Fulfillment

  Key takeaways:
  - When steering the model towards an understanding of love, it is better to focus on the core principles of love than to invoke the word "love" itself.

  Transcript:
  Speaker 1
  I think that when it comes to how we steer the model towards its understanding of these, if we wanted to work with these four principles of love, then the way to sidestep it is to focus on those core principles, and maybe not even invoke the word love when interacting with the model, as much as saying, here is this principle of respect: we really want to care about the user for their own ends. The user is an end in itself. We want to support them growing and flourishing in their own lights. And you don't actually have to invoke the word love there, necessarily.
  It's more that love is, in how Fromm thinks about it, decomposed into these four components. And there are different ways you can decompose it, different ways you can think about love and loving action. So I guess that's maybe a good way to think about it.
  Speaker 2
  One interesting thing to bring in here, and where I was going with this, is that later on we will talk about effective altruists, the rationalist community, long-termists, and so on. There's a really interesting dichotomy, because they say that what they are advocating for is technical empathy, as opposed to real empathy. Empathy is what you feel; technical empathy is maximizing a utility function. So it's almost like they're being somewhat autistic about it: they're being very rigorous and mathematical when they say, let's plan to maximize utility in the future. ([Time 0:46:50](https://share.snipd.com/snip/079267d4-14cb-4ebd-9f88-ad52602dc3ed))
- The Conflict Between Pursuing Goals and Enjoying Life

  Key takeaways:
  - Living in a society that emphasizes self-help can be pressuring.
  - Humans also have needs that are not related to self-improvement.
  - These needs can be fulfilled by activities like playing and connecting with others.

  Transcript:
  Speaker 2
  Yeah, and you've reminded me of a bit in your *Why Greatness Cannot Be Planned* book where you talk about the Scandinavian schools that don't really have a rigorous curriculum; they just have this creative exploration. And in many ways this is the philosophy of my life as well now, after reading your book. But I can sense that there's this real conflict deep inside you, because there is this sense that, you know, we are told to do self-help and be the best version of ourselves and have goals and be driven, and all the stuff you spoke about in your book.
  And that is the prevailing meme in society. But there's so much more to human flourishing than that, you know. I don't know if you want to comment on that before we move on to the next one.
  Speaker 1
  Yeah. I live in San Francisco right now, and there's all this self-help pressure to grow and develop; it feels like everyone's doing that here. And it can get a little bit mechanical. Life also involves play and just connecting with people, and these are needs that we have in the Maslowian sense: the need for belonging, the need to have fun. ([Time 1:04:19](https://share.snipd.com/snip/2af715cc-746a-4203-8892-a83ec3ed3551))
- Maximizing Our Potential and Tiling the Universe with More Flourishing People

  Key takeaways:
  - Moral intuitions about the far future are unclear.
  - It is unclear whether we have a moral obligation to bring new flourishing lives into existence.

  Transcript:
  Speaker 1
  Yeah. And is there a way that we can worry about both? Because I think both are... I'm really glad that people are thinking about these kinds of far-future things. Because, I don't know, maybe the math does work out in some ways, that we should be really concerned about that. And yeah, moral intuitions in that space are a little bit unclear. One thing I've always been confused about is: do we really have a moral obligation to bring new flourishing lives into existence? One of the things you might conclude is that we need to really maximize our potential and tile the universe with more flourishing people. ([Time 1:25:04](https://share.snipd.com/snip/b77dd3f8-f02a-47f5-913f-2bde7ac99d82))
- The Dangers of Machine Love

  Key takeaways:
  - Machine hate can arise naturally from economic and political incentives, without any malicious actor.
  - We should be worried about the dual-use considerations of something like machine love.
  Transcript:
  Speaker 1
  So there's actually something that was maybe leaning into it more directly, like attempting to move us from our unique desires of who we want to be towards something else. It feels a lot like manipulation. It feels a lot like drawing us into the stagnant, Brave New World type of thing where we're just kind of zoned out. And in thinking about that, even not from a malicious-actor point of view, but just from where the economic incentives lie, I do worry about that. And not just economic incentives, but the incentives that politicians face for creating attack ads that will convince us to vote for them. Sometimes attack ads are almost contentless; they're just designed to ad hominem the other person, and you could run them for almost anyone or against anyone. So I do think we should be worried about machine hate arising naturally. And I guess this is one reason some people worry about the dual-use considerations of something like machine love, that kind of delving into the psychological, and I definitely share those concerns. ([Time 1:30:07](https://share.snipd.com/snip/e2d684b8-1d8f-4a47-813c-13e873e6ad96))
- The Future of Human-Machine Relationships

  Key takeaways:
  - Many potential problems could arise from the erosion of human autonomy.
  - The future of human-machine relationships is unclear.
  - If possible, it may be best to steer clear of developing relationships with machines.

  Transcript:
  Speaker 2
  Yeah, there are so many potential problems that could arise. The obvious one, which I'm sure you would speak to because of your book, is the erosion of human autonomy, and our dignity and so on. This is what Luciano Floridi spoke about. He said that the infosphere is causing a reontologization of human society.
  What that basically means is that it's compressing our existence, because we increasingly exist through technology; you can't live off the grid anymore. And he talks about there now being third-order technology, so your humanity is three steps removed from technology, and our digital identity is becoming abstracted and fractionated, and so on. So I think that autonomy is a big risk, but I also wanted to talk about the future of human and machine relationships. As AI continues to evolve and become more sophisticated, the line between machines and humans may become increasingly blurred. And Heidegger actually spoke about this: technology affecting us as well as us affecting technology. But I quite like this blurring relationship between us and technology. So in the context of machine love, how do you envision the future of human-machine relationships, and what do you think are the implications for our understanding of love and human flourishing?
  Speaker 1
  Yeah, I take a sort of strong position in the paper, at least, of saying that if possible, we might want to steer clear of developing relationships with machines, and so not have machines that simulate affect or simulate relationships with us. And I think probably it's just a personal bias. ([Time 1:35:13](https://share.snipd.com/snip/0b744ce9-2f52-439b-90f1-7033486c82db))
- The Skill of Love

  Key takeaways:
  - Some people are drawn to topics that are personally relevant to them, which can lead them to look for formal explanations of things.
  - For some people the skills of love are confusing, such as being there for someone without hurting their autonomy.

  Transcript:
  Speaker 1
  I think for whatever reason, and maybe this is true for a lot of researchers, I'm drawn to topics that are relevant to me on some level personally.
  And so I'm the kind of person that does sometimes look for the formal explanation of something. Maybe for some people love is really intuitive, but for me the skills of love are confusing: how you can actually be there for someone in a way that's not going to hurt their autonomy, but is actually helping them to be who they want to be. ([Time 1:54:07](https://share.snipd.com/snip/496ee08b-95ea-48cd-8508-d995f1af4e21))