Here's the full video of Liv Boeree's conversation on the Triggernometry podcast, which covered much more than just poker. We've already skimmed through it for the poker portions, if you'd like to go straight to them.

WSOP and EPT winner Liv Boeree spoke on the Triggernometry podcast about poker and life, using probability in everyday situations like parking, and her career highlights.


In these parts of the podcast, Liv gets into the Moloch concept, social media, and our increasingly sci-fi reality.

What's this Moloch concept that you've come up with to describe where our media and new media ecosystem is going wrong?

– (Liv Boeree) Well, I didn't come up with the concept.

It actually comes, originally, from this old Bible story about a horrible cult that was so obsessed with winning wars that they were willing to sacrifice more and more of the things they cared about, up to and including their children, who they would burn in a bonfire inside this effigy of a demon god called Moloch, in the belief that it would then reward them with, you know, all the military power they could want. So the story became kind of synonymous with this idea of sacrificing too much in the name of winning, and with the force of when competition goes wrong, essentially.

Then in 2014, Scott Alexander of Slate Star Codex, or now Astral Codex Ten, wrote this amazing blog post called "Meditations on Moloch" where he basically connects the dots between all of these mentions of Moloch throughout history and puts it into modern game theory terms. Because he noticed that the same sort of mechanism seems to be driving a lot of different problems in the world. Whether it's a tragedy-of-the-commons type problem, where companies will take shortcuts to, you know, keep their share of the market, or use cheap plastic packaging because that's the most cost-efficient thing they can do, but then create all these negative externalities for the future. Or deforestation. All of these tragedy-of-the-commons situations are created by misaligned game-theoretic incentives, as well as things like arms races. The fact that we ended up with 60,000 nuclear weapons on Earth, far more than we would ever need to maintain mutually assured destruction, is again because the game theory dictates it: if your opponent builds up a stronger arsenal, now you've got to do it, and now they've got to do it, and so on. So it's these screwed-up short-term incentives that each individual person is technically rational to follow, but if everyone follows them, it creates these bad outcomes for the world.

That's kind of what this Moloch thing is, and that's what it's become synonymous with. I'm sure, like you guys, I'm generally appalled at the direction the media has been taking over the last few years. I mean, "If it bleeds, it leads" has been a strategy they've been using since whenever, right? But it feels like since the internet, and certainly since social media, the competition dial has been turned up. It feels like even the really respectable papers are leaning more and more into clickbaity, rage-baity tactics in order to maintain their market share, essentially. So, it's the same kind of mechanism. You're an editor, and you notice that your readership numbers are waning compared to your competitors, and you notice all your competitors are now doing slightly more clickbaity stuff. Well, now you kind of have to do it too, right? Because if you don't, you're going to get left behind. This is the same Moloch mechanism again. So yeah, I made a whole little short film about it.
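(If you want the arms-race game theory Liv describes spelled out, here is a minimal, illustrative Python sketch. The payoff numbers are invented purely for illustration; the point is that escalating is the individually better reply to whatever the other side does, yet mutual escalation leaves both sides worse off than mutual restraint.)

# Illustrative two-player "arms race" payoff table (numbers invented for the example).
# Each side either restrains or escalates. Escalating is the better reply to
# whatever the other side does, yet mutual escalation pays less than mutual restraint.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),   # everyone holds back: best collective outcome
    ("restrain", "escalate"): (0, 5),   # you hold back, they gain an edge
    ("escalate", "restrain"): (5, 0),   # you gain an edge, they fall behind
    ("escalate", "escalate"): (1, 1),   # full arms race: both worse off than (3, 3)
}

def best_reply(their_move):
    """Return the move that maximizes your own payoff against a given opposing move."""
    return max(("restrain", "escalate"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

for their_move in ("restrain", "escalate"):
    print(f"If they {their_move}, your best reply is to {best_reply(their_move)}.")
# Both sides reasoning this way land on (escalate, escalate) and collect (1, 1),
# even though (restrain, restrain) would have paid each of them 3.

(The same structure fits the clickbait example: swap "escalate" for "run rage-bait headlines" and the individually rational move still drags everyone to the worse equilibrium.)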

(You could play a freeroll while you watch Liv's short movie.)


It's very good. I've really enjoyed watching it. The interesting thing to me is that, for a while, there was this narrative that the mainstream media is dying, corrupt, blah, blah, blah, which is true, and that the new media is the answer. I think there's an element of that that can potentially be true.

I mean, I look at the titles and thumbnails of some very popular YouTubers who comment on stuff in our space, and I'm like, if I just ingested that for a week, I don't think I'd be a very happy, emotionally stable person. And look, every now and again we'll have a thumbnail that says something along those lines too. But I'm just saying, it seems to me that while the new media potentially offers a solution, it is subject to many of the same flaws and perverse incentives.

– Yeah, it's just a big old attention game, right? Everyone is competing for each other's attention, whether it's big media companies, individual influencers, ordinary people, even government organizations. Everyone is trying to get their voice heard. So it incentivizes people to use whatever tactics are best at doing that. And the emotions that are best for going viral, I mean, they're certainly not cool-headedness or nuance, right? It's fear, rage, and then the occasional really exciting, happy story.

But rage, in particular, even more than fear, is an action-triggering emotion. And because the business models, not only of influencers but also of the mainstream media now, are about maximizing impressions, you want an active emotion that encourages people to go out and share and comment. That's why rage is so useful, and the most effective way of triggering rage is getting people, well, you know, whipped up into a tribal frenzy. So this incentive structure is a big part of why we're seeing such incredible polarization. You know, it's hard to say where the polarization started; it's been there. There was this really cool chart that was posted. I'll try and send it to you guys.

I called it The Mitosis of Congress. I don't know if you saw it. It shows Democrats and Republicans over the years and how much overlap there was in opinion, in aggregate. Over time it's become more and more polarized, to the point where now there's basically no overlap at all.

What's interesting, though, is that this process started before the internet. So I don't think the internet is the cause; it's basically just accelerated it. The tails were already coming apart. The internet has just turned up the competition dial, and everyone's leaning into it harder and harder.

It's really interesting that you say that, because if I think back to our own country, the UK, and we're talking about generating rage, who did that better than the tabloid press in the '80s and '90s? I mean, they were masters of it.

There's a Netflix series out about David Beckham, and about Beckham during the 1998 World Cup. He got sent off for, basically, a little kick out at an Argentine player who then made a meal of it. Then he became a national hate figure. The Daily Mirror printed a dartboard with his face on it, and they generated this campaign against him where he became the most hated man in the UK. So it's been going on for a long time. What I find interesting is how, in a way, these mainstream media outlets are doing this even more because they realize they're becoming less and less relevant.

– I completely agree. That's the thing. I'm angry at them for doing it, particularly the BBC. To be fair, I think they have held on perhaps the longest out of all the outlets. But, you know, there are certain things, like when I see them write articles around the tech space or something like that, areas that I know, and I'm like, “Okay, it's very clear that you have a particular political slant”. Usually they lean left, but not always.

– That interview between Elon Musk and the BBC journalist, where the BBC journalist ran out of questions. I'm like, “We would spend the next year working incredibly hard to get Elon on the show, and we would be desperate for every extra minute of time”. This guy just wanted to attack him, and then he ran out of attack questions. Well, okay, I'm bored now.

– It was unbelievable. You've got one of the most relevant men on the planet, and you run out of things to say.

– Yeah, it's not ideal.

Well, more generally, it just feels like there's this force, like a razor blade coming up through the fabric of reality, of shared reality, that is trying to bifurcate everything. That's what, again, I keep calling "Moloch." You know, it's helpful to almost think of it as a kind of agentic entity.

– What does that mean? Sorry.

– I'm not saying there actually is a force that wants us to fight. But it's almost helpful to think of it as if there is this, like, demon whose lifeblood is people being at war, people arguing, people fighting, and the world doing badly because we aren't able to coordinate. That's the sort of outcome of this, because of the type of problems civilization is moving into, you know. I have a background in philanthropy and global catastrophic risk. I've sort of worked semi in research on that, but certainly in communicating about it. Almost all these problems, whether it's, you know, future pandemics (and there will be worse ones than COVID), or climate change, or any of these really big problems, they're all a result of us not being able to coordinate effectively. If we could coordinate well, then they would be relatively trivial. Like, we've known roughly what we need to do to mitigate climate change, or at least temper it, but we haven't been able to get our act together to do it because there are so many incentives for everyone to defect each time.

It's like, well, you're a poor country trying to grow your GDP and you've got a bunch of coal. Of course, what are you meant to do? This is the fastest way to lift your people out of poverty, but technically you are defecting from the global optimum, which is that no one uses coal, right? Everyone uses a perhaps slightly more expensive, but clean, source of energy. So that's the problem with this media issue in particular: the media are becoming increasingly polarized, and everything is optimized towards rage and volatility and hype, unnecessary hyperbole, that kind of stuff.

Which is one of the reasons why, you know, I think COVID was always going to be hard to contain. It was so transmissible that the cat was out of the bag really quickly, and it took governments too long to realize they needed to do something. And then in the end, they ended up going crazy. They acted too slowly in the beginning and then they lingered with stupid solutions for too long. But if we can't have a shared understanding of reality, and we have a media system whose purpose, in an ideal world, is to inform people about the nature of reality so that you can get a healthy parallax of views and come to sane conclusions, kind of as a hive mind, and the media are instead doing the exact opposite of that, whipping people up into frenzies and splitting them apart, then we can't coordinate on these problems.

– Well, you make really good points there. One of the things you said there that I think probably isn't true, though, is that the function of the media, at least in terms of observable behavior, is to inform people.

I think politics, culture, and everything to do with those things has now become entertainment. The media is entertainment. The news is entertainment. People don't tune into the news to find out what the facts are. They tune in to get the emotional hit, right?

– It's like a dopamine source.

– Yes, you know, and the tribal rage that comes with it is obviously a very powerful stimulant; it's a drug. But I'm curious to talk about this concept of shared reality, because I suppose we've got to a point where, whatever you think, I mean, this is why the trans debate has become so prominent. Because you're just going, it's two groups of people who can't even agree on something as basic as biology, right?

How do we have a shared reality if there are people who can't define what a woman is and there are other people who think something else? Do you see what I'm saying?

– I mean, I don't know. I think with issues like that, with almost every culture war issue, the reason it's so front and center, even though in theory it shouldn't be, you know, like there are far bigger issues in the world than trying to define what is or isn't a woman.

– Not to some people, Liv!

– The feminists would disagree with you. But the thing is, I agree with you that the issue itself is insignificant compared to the problems we face. However, I would argue that if you have a disagreement about the very concept of truth at that basic level, that is, like, whoa.

– You're right. It points to, and this is the thing, whether it's trans stuff, whether it's the debate over capitalism, all of these different culture war hot topics, I think the reason they are so successful in the meme space, if you think of each culture war topic as its own entity, is because there are genuinely valid perspectives on both sides.

I think biology is the closest thing we have, you know, one of our most solid paths to reality. At the same time, I very fundamentally believe that people should be free to live and choose how they express themselves.

– As long as they don't hurt other people.

– Yeah, sure.

So when someone says, you know, “Abracadabra Stacy, I'm now a different person”, I respect your right to call yourself whatever the hell you want, but as you say, it butts up against the rights of other people. Also, there are truth claims being made in that discussion, and I'm like, if we can't even agree about truth at this basic level, how are we going to solve any problem?

– Look, you can argue that these corporations, these organizations, are evil, you know, they're manipulating us, and that very well may be true, but part of it is also that you have agency and you are allowing yourself to be manipulated.

– No, it's a very good point. This guy, Patrick Ryan, came up with the term psychosecurity, and he's been saying that the biggest issue of this decade, in his opinion (I'm not sure I would completely agree), the thing we all need to be thinking about and working on, is this idea of psychosecurity. The same way you have cybersecurity for your computer or physical security for your house, we need psychosecurity to protect ourselves from the increasingly powerful manipulation tools that are flying about on the internet. Whether those tools are being used because someone is evil, or simply because they're stuck in a for-profit incentive game, you know, just trying to maximize their profits, so it's more the game that's evil.

It doesn't really matter. The point is we're spending more and more time on these devices and these devices have not really been built for our mental health. They've been built for either maximizing profits or just getting people to stay on them for as long as possible. And so how do we build these psychological defenses against these various things? Whether it's like TikTok trying to just turn you into a moron, or the media trying to turn you into a foaming-at-the-mouth politically polarized rabid person.

How do we build up these sorts of psychological defenses without going full Amish and saying, “Okay, no more phone for me”? The thing is, AI is going to speed all this up, because AI is such a broadly useful technology. If you can hack intelligence itself, then anything there's an incentive to use intelligence for, it will get used for. That includes all the really good stuff, solving all these big problems, but also speeding up the existing problems we have. Like, it'd be terrifying to think what happens if all the really partisan news outlets suddenly got AI, you know, really personalized AIs for each individual user, designed to keep making them angrier and angrier. That's the way things are trending.

– Wow.

– Yeah. Every company on Earth is waking up to the fact that there's going to be an AI tool to speed up their business, and that means all the good ones, but also all the bad ones, and even the criminal ones. Criminal enterprises will soon have access to AI for whatever crappy thing it is they want to do. Casinos wanting to addict people to slot machines, I mean, those are already dopamine-hijacking enough, but that kind of thing. Arguably, that's what social media is: our first broad-scale interaction with rudimentary AI. They might have started out with really basic algorithms, but these things are getting more and more intelligent, more and more personalized.

Like my Twitter feed. Damn, that thing knows me exactly, and the same with my Instagram. Twitter is all the things that get me fired up and intellectually interested in something, and Instagram is all the things that make me want to chill out and be entertained on the toilet.

They're so tailored to my brain because I've been freely giving them my information all this time and as AI gets better, this is going to get stronger and stronger and stronger.

– And that does not sound like a good thing.

– Doesn't seem like it.

– I think it was a few months ago that Elon and a few other people signed a letter requesting a moratorium on AI. As noble as that is, and I would like that, the reality is it's simply not realistic.

– No, well, again, it's like a coordination problem, because where do you draw the line? Technically, Google Maps is an AI; AI is so broad as a term. I think the purpose of that letter was just to raise attention, and it definitely did a good job of that. You know, the type of regulation that I think makes the most sense in the interim, while we're figuring this out, is regulation on what are called frontier models, which are the leading, most powerful ones. So technically GPT-4 was a frontier model six months ago; whatever is currently being worked on now, the upgrade, is now the frontier model. I think it was pretty irresponsible for OpenAI to just go and release it, or even Microsoft with Bing, and they actually did that first. There's no way they could know, and we still don't know, what the downstream effects are of having such a powerful language manipulation tool released to the internet, released to eight billion people. To be fair, there is no way they can know until they do it. So these companies are going to be running real-time experiments on humanity. And if it turns out that these experiments have a bunch of unintended consequences, we won't know until it's, you know, too late. I'm not saying the current models are an extinction risk.

I've realized as I get older that the less time I spend online, the happier I am. Life really is all about connection. It is. That's why people love podcasts, because it's about people connecting in a way that we rarely do anymore, because our lives are so busy. The only time in my day when I actually get to sit and have a prolonged conversation with another human being with my phone off is doing this.

– Wow.

– And that's fundamentally unnatural. I think more and more that in order for us to be happier, we need to just take a step back and take that personal responsibility.

– Yeah, and there's genuine wisdom in the saying “Go out and touch grass”. Everyone laughs about it, but, like, no. People are so disconnected, not only from each other physically, but from physical reality itself. The digital realm is a universe of sorts, but it's not the universe we evolved in. It almost feels to me like there's this reality that's growing stronger, that's feeding off our consciousness in some way.

We love to bang on about how evil social media is, but it's also fucking great. It's fucking great. How many friendships do all of us have because we're on social media? How many amazing people have we connected with? How many amazing things have we learned? How much have we improved our understanding of the world, right? I think the same thing is the case with AI. I'd be curious to hear what you think are the biggest risks, but also the biggest rewards, that will come from it.

– So I mean there are, like, four different categories of risk. There's the unintended-consequences type stuff. You build an incredibly powerful model, and let's say it learns what you want it to, or it doesn't even have to. People are already trying to build models which can edit their own code and recursively learn. That opens up a pretty obvious can of worms, at least to me. You know, now something can basically evolve itself, and it's going to be doing it at a faster rate than any form of biology. So that's the whole sort of Darwinian type thing, and it doesn't have to turn evil and want to kill us. It might just be so good and fast, and its goals might not be perfectly aligned with ours, that it would perhaps just use up all the resources that we need, or make our environment no longer suitable for us. It wouldn't intend to kill us; we would just be a byproduct of whatever it continues to do. That's one category, the most classic extinction-risk type thing.

– There were lots of sci-fi movies in my youth about this. A benevolent AI realizes that the root of human misery is humans.

– I mean, that's a very anthropomorphized version of it. I think that's less plausible than just the idea of an unintended consequence: it wants more compute, and the best way to get more compute is to turn every little bit of silicon it can find into chips, but we need that silicon for other stuff, you know. Or just biosphere changes, that kind of stuff. That's the extreme sci-fi type thing.

Then there are the more near-term things again, like speeding up the misalignment in the system. Like companies that are just wanting to do their thing, maximizing for profits or whatever, and now are made even faster and more efficient at doing that. Like, you know, cutting down the rainforest or all these things.

Then there's the bad actor problem. One argument people make for open-sourcing is, "Well, if we open source, then we can get more people thinking about how to incorporate safety. We can hive-mind this," which seems nice in principle. But the trouble is, if you completely open source a very powerful model, a model that had been kept closed, well, now how do you protect against any bad guy? You can't. So there's that category of risk.

Then you've got the fourth one, which is sort of structural-type problems that might come from, like, basically, the sudden shock of such a powerful new technology becoming ubiquitous. So like mass unemployment is a classic one.

– I just wanted to finish on the benefits because in your video, you talk about negativity bias, right? Human beings prioritize negative information. And I said to you, "What are the risks? What are the benefits?"

– That's the difficult thing, because there are so many problems that AI can help to solve. You know, a lot of the environmental issues we have exist because we haven't figured out how to get more abundant clean sources of energy. AI could help us figure out nuclear fusion, so we need it for that. It could help us with drug discovery, especially if there are all these new potential pandemics on the timeline. But that's the thing again: it's what you call a dual-use technology. Not all of the different flavors of it are, but certain categories of AI tend to be dual-use. They can be used for good or for bad.

We almost need some kind of superintelligence to help us coordinate better on these things in a way that doesn't also make us vulnerable to tyranny or nightmarish top-down scenarios. So, you know, the bull argument for going all in on AI as quickly as possible is that we won't be able to solve these other problems without it. But then it opens up new cans of worms that might make the existing problems worse, or create brand new ones. It feels like a minefield we have to navigate to get through, but if we do get through, then, yeah!