The Answer Is Transaction Costs

Curation Bubbles, Verification, and the Splintering of Ideology

Michael Munger Season 2 Episode 20

Send us a text

What happens when we no longer consume scarce information through trusted, verified institutions, but instead through an abundance of unbundled content without context or curation? Jon Green, rising star in political science from Duke University, takes us on a tour of the rapidly evolving landscape of political information.

Green challenges conventional wisdom about how ideologies function, arguing they're not so much coherent philosophical systems as they are socially shared belief networks. In these networks, most people specialize in just one or two issues they deeply care about, while adopting their coalition's positions on everything else. This creates an environment where signaling group loyalty becomes crucial—explaining why people sometimes make outrageous claims not despite their falsity, but precisely because the willingness to say something costly signals authentic commitment.

The conversation takes an illuminating turn when Green unpacks his groundbreaking research on "curation bubbles." Unlike echo chambers or filter bubbles, these environments emerge when people strategically share content based on its utility for their side, regardless of source. A conservative might enthusiastically share a New York Times article criticizing Democrats, while generally dismissing the publication as biased. This selective curation creates information environments that are neither completely closed nor genuinely diverse.

Perhaps most troubling is Green's insight about misinformation in the digital age. The real danger isn't simply false claims from unreliable sources, but rather the strategic repurposing of true information to create misleading narratives. When accurate statistics or facts are stripped of context and woven into deceptive frameworks, traditional fact-checking approaches fall short.

As we navigate this unbundled media landscape, the question remains: can we rebuild institutions that verify and curate information effectively? The answer may determine the future of our shared reality and democratic discourse.

LINKS

Jon Green at Duke

Green, et al on "Curation Bubbles" in APSR

Converse on Belief Systems

Munger on "Direction of Causation"

Munger on Pub Cost, Curation, and Verification


Letter Response:

Sweden is NOT socialist!  (If you don't believe me, believe Andreas Bergh...)


Book’o’da Month:  

If you have questions or comments, or want to suggest a future topic, email the show at taitc.email@gmail.com !


You can follow Mike Munger on Twitter at @mungowitz


Speaker 1:

This is Mike Munger, the knower of important things from Duke University. Today's guest is Jon Green of Duke University's Political Science Department. Dr. Green has published in top political science journals, and he had well over 500 citations in Google Scholar just in the past 12 months. Very impressive total, I started to say, for one so young, but actually 500 in one year is straight-up impressive for anyone. We'll be talking about his recent paper in the American Political Science Review, entitled "Curation Bubbles." There's a new TWEJ, this month's letter, plus Book'o'da Month and more. Straight out of Creedmoor, this is TAITC.

Speaker 2:

I thought they'd talk about a system where there were no transaction costs. It's an imaginary system. There always are transaction costs. When it is costly to transact, institutions matter and it is costly to transact.

Speaker 1:

This month on The Answer Is Transaction Costs, our guest is Jon Green from Duke University's Political Science Department. Jon, welcome to TAITC. I wanted to ask, as I ask all guests, if you could tell us something about how you came to be an academic and how you became interested in the problems of information, ideology and political institutions.

Speaker 2:

Yeah, well, thanks for having me. I think my journey starts probably pretty early, in elementary school. I always really liked talking politics, probably more than was healthy for an eight-year-old, and you know, I always had an interest. I used to like arguing politics a lot more than I do now, and that led me, in college, to play around with political ideas.

Speaker 1:

Where were you in college?

Speaker 2:

I went to Kenyon College, in a little small town in Ohio, and I wound up doing some independent work on political psychology. I was trying to figure out kind of what made me tick, and why do people think the things they think? Why do some people enjoy talking politics and other people don't? I worked on a couple of campaigns, took a couple of semesters off to go work campaigns. It was a lot of fun when I was 19 years old. Not a great lifestyle to pursue long term, for me at least, and I realized in that context I liked being right more than I liked winning, and that tends not to lend itself to competitive electoral politics so much. So after college I got a nine-to-five job in the private sector for a little bit and found that to be not as intellectually stimulating as college had been, and that's a whole separate conversation. I was at a tech startup in Boston, which sounded really cool but was, you know, not as interesting to me, and so I thought I should go back to school.

Speaker 2:

When I was sort of pitching my grad school applications, it was very much organized around this intersection of political psychology and political deliberation: why and how do people talk about politics? And that very much fit with Ohio State, where I wound up, working with Mike Neblo and William Minozzi, who were very much in that space. So I originally thought I was going to do empirical deliberation work, where you get people in a room and they talk politics and you see what happens. But my own research just kind of went more toward the elite discourse space, and I was getting more interested in computational methods. Twitter was kind of the big thing; this was 2018 and 2019, so it was peak Twitter. And I found myself really seeing that Twitter was a place where elite discourse was really shaping things.

Speaker 2:

What are the contours of what it means to be a liberal or a conservative? How do we decide what these ideologies are in the first place? And so my dissertation work, and some papers that are either just out or in the pipeline, are about this kind of question: how do we make ideologies, and where does that action happen? So that's the, I guess, shorter version of how I got here.

Speaker 1:

So one of the things that is a benchmark for my approach to transactions cost is my claim that there are problems people have in finding what they want to do and in accomplishing their needs.

Speaker 1:

Those three problems are triangulation, transfer and trust, and that's just a way of summarizing the kinds of information problems and trust problems that people face. So I need to know what someone else says they're going to do, and I have to have some expectation about whether they're actually going to do that, in a market or at least commercial-adjacent setting. But exactly the same problems exist in politics, and in fact Anthony Downs, in his book An Economic Theory of Democracy, actually conceived of the information problem as one that voters couldn't solve. They would be rationally ignorant. But then he pretty quickly backed off and said that wouldn't make any sense. So you have elites trying to figure out this problem, to devise institutional responses. I realize that's a pretty big question, but can you say something about ideology? And then, how is it that elites compete to solve this problem? What's the analogy to reputation and brand name?

Speaker 2:

Yeah, so there are a lot of different definitions of ideology out there. The one I like, or the one I use in my work, corresponds with, and is very much informed by, some of your work on this stuff. I think of ideology as a socially shared belief system that describes how society is and ought to be ordered. I think the socially shared aspect is really important there. You've described ideologies as disembodied belief systems; this is the flip side of that. They are socially learned, they're distributed, they don't exist just inside one individual's head. They're a sort of group-level belief system, and this means that the group has to decide what the belief system is. It's not a philosophy in the sense of: we outline some principles and then we deduce all the correct preferences based on those principles. We, as a group, have a common agenda of compatible preferences. We want to advance the whole package, and so we need to make some norms that jointly rationalize all of these preferences. I have a working paper on who sets the norms, and the group members get to specialize by their ability. So you think of the liberal coalition as a constellation of climate and race and labor, and there isn't necessarily a logical entailment between all these different preferences, but they're all together in a big group. So who decides how the coalition thinks about climate change? Well, it turns out that there's a small cluster of climate specialists who get to set the agenda for how the coalition thinks about climate. The tricky part is: how do we get the whole coalition to cooperate, versus everyone going off and thinking through these issues themselves and only advancing their own interest? And I think norms are really important for binding the coalition there, in order to maintain that cooperative arrangement over time.

Speaker 2:

The coalition needs a mechanism to monitor, reward and sanction. In a strong political party, the mechanisms for doing that would be really formal. You can exclude people from the ballot line if they're not upholding the agenda, or you can withhold funding, you can take them off the legislative agenda, things like that. In an ideology, where group membership is more informal, you do this through social sanction.

Speaker 2:

Right, and you do this through signaling. I can use certain words, or articulate certain beliefs, that are the credible signal that I'm a good group member. I can go to the protest; that's a costly signal that I'm a member in good standing of the group, and then the coalition might advance my agenda. If I use the wrong term, if I say illegal immigrant instead of undocumented, then I can get sanctioned for that. These are ways of serving those functions. So I think that's kind of a roundabout answer.

Speaker 1:

That's perfect. So let's take a step back and look at it in a sort of institution-free environment, because even though that counterfactual is ridiculous, it does highlight the importance of the institutions in a way that you might not recognize if you just start in the middle. Voter walks into a voting booth and for one of the offices there's a set of names. I don't know any of these people. How do I decide who to vote for?

Speaker 2:

Right. So you know, in practice, oftentimes there's a poll worker from each party standing outside of the voting booth: here's our sample ballot. The example I like to use for this: I went to grad school at Ohio State, in Franklin County. In every locality, and I'm sure in your locality, listeners out there, there are some ballot lines that you're familiar with and some ballot lines where you have absolutely no idea what the office even does. So in Franklin County, Ohio, we elect the coroner. There's a Democratic candidate for coroner and a Republican candidate for coroner.

Speaker 2:

I have no idea what the main policy differences on how to be a coroner are between the Democrats and Republicans, but they each endorse candidates for coroner. How do voters decide who to vote for for coroner? They get the sample ballot from the party, and that's the signal. But how do I decide which party? You have perhaps a small set of issues you care about, or, and there's a whole literature on socialization here, maybe you've identified with the blue team fairly early on in life. You have a small set of heuristics you use to pick, and then you sort of adopt the broader agenda from there.

Speaker 1:

So the old story, the old Michigan story, was that there was this funnel of causation, and the determination of party identification was basically the key. Party identification rarely changed. I might know fairly little about the actual policies that my party was going to pursue. I just knew that I was a Democrat or a Republican. Maybe it's because my parents were, maybe it's because my peers were. Sometimes people change party identification, but not very often, and so the acquisition of a party identification is a kind of substitute for having to go through all of the reasoning process.

Speaker 1:

Now, Anthony Downs, when he talked about ideology, acted as if, and he's an economist, so it makes sense, we're going to start with people having preferences in an n-dimensional policy space, and it's very expensive to find out all of the information for each candidate about these dozens of different issues. So what I can use instead, and in fact some people probably noticed this, is an ideology. Douglass North's definition of ideology was a shared belief system, basically full stop. So a shared belief system is an ideology; if it's applied to politics, it's a political ideology. And in political science it is more the sharedness than the internal coherence of the precepts that matters.

Speaker 1:

For a long time, following the famous paper in 1964 by Philip Converse, there was a claim that people weren't ideological because ideology required an intellectual constraint, some sort of coherence.

Speaker 1:

Melvin Hinich and I argued that all ideology is, is the empirical phenomenon of the reduction of the space. That is, there may be n dimensions, but I don't need more than one or two dimensions to be able to describe the position of most people. So if I were to ask you what your positions are on right-to-work laws, abortion and the environment, and you told me what you think about those three, I can predict with pretty high accuracy all of your other positions, and I can tell what your ideology is going to be. So there are not n dimensions; it collapses into a relatively low-dimensional space, empirically. The question is: what is the direction of causation there? And I'm afraid that I think the Downsian Hinich-Munger approach to that has just lost. It doesn't make sense to think that people have informed policy positions in the n-dimensional space. It's actually ideology, those reduced dimensions, where people live in terms of how they think about politics.

Speaker 2:

Yeah. So in this sort of ideologies-as-institutions framework, I draw pretty heavily on Kathleen Bawn, who had a 1999 paper taking the classic game-of-politics model that had been used to explain party formation and just applying it to ideological formation. In that model, the players each come to the coalition with one thing they really care about. It might be right to work, you know, labor, or climate, or race. But once you're in the coalition, you've got to cooperate on all the other stuff.

Speaker 2:

So it looks like you've now taken on preferences that are in line with the coalition on all of these issues, and if I surveyed you, you would look very ideological. And you might well develop sincere beliefs on all of those preferences. But the direction of causation is the opposite: you came to the coalition with one thing you really cared about, and then you adopted all the coalition's preferences as a result. The subtle thing that I think is really important about ideologies being these socially shared, or socially learned, belief systems is that you're not really thinking for yourself across all of these different issues. You're delegating that intellectual labor to your coalition partners on everything except the thing you really care about, the thing that got you into the coalition in the first place.

Speaker 1:

Steve Jobs famously said you couldn't have focus groups to decide what new products to make; consumers don't know what they want until you show it to them. And so it may not even be that different. Really, you say, all right, here's who we are, here's what we do. All right, I want to be part of that, and I adopt the other positions and I come honestly to believe them. The economist's way of approaching this is pre-existing fixed preferences: I come in, make a rational choice between the parties, and then forget everything that I used to know, because all I need to know is the parties, and that reduces the transactions cost of solving this problem. I mean, it's a perfectly elegant way of thinking about it. It's just empirically incorrect, I have come to believe.

Speaker 1:

I'm happy I wrote that book; it was an interesting exercise to go through. It is just mistaken. So what about trust? I had said triangulation, transfer and trust. A big problem in politics is that it's hard to tell whether the candidate or party I have favored is actually trying to do what they said. And how can I tell? Because the opponent's always going to accuse them of flip-flopping or doing something else; that's what campaigns are about. It's an information-poor environment, in the sense that I may not be able to tell. How do elites solve that problem of trust?

Speaker 2:

They send costly signals. Downs talks about this, and in your book you have a nice little mini-rant about how everyone forgets that Downs said this. Downs is where we get the median voter theorem, analogizing from hot dog vendors on a beach. And then you start thinking through all the different ways that picking a candidate is not like buying a hot dog. There is this problem of: how do I know that the hot dog vendor really set up where they say they set up? If you're on a beach and you can see the hot dog vendor, that's very obvious.

Speaker 2:

If the candidate is promising what they're going to do next year, after you've elected them, this gets a lot harder. There's this whole uncertainty around: are they really positioned where they say they are? And if they're positioned at the most politically convenient middle of this imagined spectrum, that's really uncertain. Are they really going to be a moderate, or are they just saying that because that's what the median voter theorem says they should say? And so one way that a candidate can solve this is by taking an off-median position that looks politically costly, and because it's costly, you might infer that they really mean it.

Speaker 2:

I think this is really apparent with Donald Trump. There's a paper I really like from his first term called "The Authentic Appeal of the Lying Demagogue." The puzzle there is: why do people say that the thing they really like about Donald Trump is that he tells it like it is, despite him famously having no regard for how it is when he talks? And the answer they come up with is: well, in politics we have this norm of, not necessarily telling the truth, but at least speaking in a way that bears some correspondence to the truth. You might twist the facts to make a point, you might be vague, but you speak in a way that, if someone says, hey, that's not true, you have some way of tying what you said back to something that makes sense. And Donald Trump famously does not care about this norm. He gets on a debate stage and says that Haitian immigrants are eating cats, and there's just no basis for those kinds of things.

Speaker 2:

But what that does is send a costly signal that he does not care about the elite consensus around whatever norm that is. And so voters who share his preferences really believe that he shares their preferences, because he's willing to take these really costly actions in order to communicate it, even if it means, in the first-order sense, saying something that's patently false. I think it's just a really elegant way of getting around this puzzle of: well, it's not true, so why are you saying that he tells it like it is? There's a deeper sense in which he really shares my preferences, and he's willing to pay this cost in elite spaces to signal that.

Speaker 1:

I'm afraid that is an all-too-deep insight for many things, in everyday life as well. It solves a problem that a lot of people will ask themselves about, or sometimes have asked me about. Someone says that they believe a thing, and it has to be pretty strange for it to be very meaningful. So, yes, these immigrants are eating cats. Well, that's not true. It might have happened once; maybe there's no evidence it happened at all. But if I repeat that as if it were true, then that means I'm on the team. You can tell I'm on the team because I'm not saying it for my own self-interest, I'm not saying it because I'm trying to appeal to anyone. In fact, it's going to make a lot of people angry. So in religion, in politics, there's this notion of costly signals. A lot of people will say: why would you say that? Why would you believe it? And the answer is: I'm trying to show that I'm on the team. It only counts if it's not in my interest to say it, if I pay some cost for saying it. So all sorts of things that otherwise seem irrational can, I think, be explained by the insight that you just gave. It's one of the first things that you should check before you try to argue with someone: say, okay, I see what you're doing. And they may actually believe it. It's easier if you can make yourself believe this ridiculous thing, because then you really are on the team, but you don't have to believe it. And I think one of the things that the left has underestimated about Trump is his genius for saying things he doesn't believe as if he believes them. His capacity for coming up with things and then sticking to them very stubbornly is actually a benefit, not a harm.

Speaker 1:

Well, you recently had a paper in the APSR, and I should note, before you make some modest disclaimer, most political scientists never publish even one paper in the American Political Science Review. It is our premier journal. You were the lead author, so congratulations on that; that is quite an achievement. The title of the paper is just "Curation Bubbles," but there's a sentence from it that is enigmatic, pregnant: "The online information ecosystem in the early 21st century is characterized by unbundling and abundance." So what is the online information ecosystem? You mentioned Twitter already. And what are unbundling and abundance?

Speaker 2:

Yeah, so the main technological change here is obviously the internet. We've moved from a model of media consumption that might be characterized by having a newspaper or magazines mailed to your house, to interactions with even those same publications that are happening on the internet, on social media websites: Twitter, and we look at Facebook as well in the paper, YouTube, TikTok, Instagram, Reddit, all these different social places. And it's, I guess, an ironic time to come out with a line about abundance. There's a whole separate branding exercise around abundance right now. Ezra Klein.

Speaker 2:

Ezra Klein is trying to take it back, but you already published using the words. So we use the terms unbundling and abundance. By unbundling, we mean that people are interacting with information at the unit of the information itself rather than the unit of the source. In the earlier 20th century, call it a shorthand model, we were subscribing to a newspaper. You subscribe, and you get at least one copy of the whole newspaper. In order to read a story in the newspaper, even if you go to the newsstand and you don't subscribe, you have to buy a whole copy of the newspaper. You get every story in the newspaper, even if you are interested in one thing, one headline. You get the magazine, you get the whole magazine; you don't get one individual article. On the internet, especially on social media websites, you get a URL that goes to a specific story. So if you think of the newspaper as the bundle, we've taken an individual item out of the bundle, and you just get that little bit of information.

Speaker 1:

So that's true even for a newspaper now: you're just looking at the one story, if you happen to be on a newspaper website. And certainly if you look at one tweet and there's some claim that is made, that's just one little bit, correct?

Speaker 2:

So yesterday, the Atlantic publishes a story that the editor-in-chief of the Atlantic had been added to a group chat where they were discussing war plans for a bombing campaign in Yemen, and Elon Musk's response to the story is: well, no one reads the Atlantic. And that might be true, but everybody reads this story in the Atlantic. The URL is going around, everybody's seeing this story, and most people who see the story, he's right, do not subscribe to the Atlantic, might not visit the Atlantic on a regular basis. That really doesn't matter for the circulation of this particular bit of information. It has been unbundled from the Atlantic, writ large. So that's what we mean by unbundling: people are just engaging with individual stories rather than the sources that produce them, and oftentimes they don't really care as much about the sources that produce them. They don't have a subscription, they don't have a sort of identity-based relationship. I might not see myself as a New York Times reader, even if I engage with individual stories from the New York Times all the time. And by abundance, we mean there is just a lot of information.

Speaker 2:

One of the early optimistic takes about the internet was unlimited information: it has never been easier to find information, because the internet makes everything so easy to access. And that's true. But the flip side is that it's now very hard to figure out which information to attend to. There's too much information for an individual to sift through to identify what's important, what's reliable and what is useful for achieving your goals. And one place where I think transaction costs matter here is that this has weakened some of these informal institutions, like legacy media, which would have an editor doing the work of: what is the priority story here?

Speaker 2:

What goes on the front page? What goes on page 20? What's reliable? What are the processes for verification and reliability there? On social media, it's just other people you don't know. What's the reliability of the source that's giving you this information, and what's useful? How does this information fit into some broader framework that I might have as well? So there are a lot of different doors that opens, but yeah, that's what we mean by unbundling and abundance.

Speaker 1:

So five years ago I started writing, and I just published, a pretty long paper comparing the current situation to the 16th century, when the printing press first made it possible to print anonymous, low-cost pamphlets. My claim was that the way to think of the analogy is that there's an intervening variable, which is publication cost, my access to the ability to contact people, and then there are two functions that institutions have to carry out: verification and curation. Verification is the determination of what's true. Curation is the determination of what's important. If publication cost goes to zero, both verification and curation are very difficult to perform, because instead of "this is important," it's just clickbait.

Speaker 1:

It's just "this is interesting." And if you look at it, I was surprised.

Speaker 1:

I had not realized how many of the anonymous pamphlets of the 16th century were absurdly false, and some of the most popular ones were the ones that were most interestingly false because they were outrageous.

Speaker 1:

The way to make money was basically the paper equivalent of clickbait: just make up the most outrageous things. And they're anonymous, so it was impossible to find out who wrote them. But then, pretty quickly, the result was that by the middle to end of the 16th century nobody knew what was true or what was important, and you get the reassertion of institutions, because, again, consumers want to know what's true and what's important, but without that they'll just buy stuff that's clickbait. So it is the producers that have to solve this problem. And it struck me that we may be moving in this direction. About a year ago I saw someone on X say that they wished a bunch of authors would get together on Substack and have a common subscription price, instead of each charging separately. And then somebody else immediately on X said: yeah, and we could call that a magazine. Congratulations, you've invented the Atlantic.

Speaker 1:

The reputational device is not the individual author, because that can be faked; it's hard to tell. But a magazine is something that has a reputation to lose if it publishes something false or something unimportant. Now, the difference is that unbundling means that the Atlantic, maybe I believe it, but I don't actually care, because they're just selling individual stories. So is there some reputation?

Speaker 1:

Or the other thing that solved the problem in the 16th century was publishers. You would see publication of books that were just ripped off. But if you have a publisher's brand name, then I know that they actually own the copyright, that this is a correct and true copy of this book, and people started to be willing to pay more for books from reputable publishers. Well, we're still at the zero-publication-cost stage here. We've lost curation and verification, because there's just no reason to believe this. In your paper you talk about something called curation bubbles, which does seem like it might be even worse than just having a wide-open field where all of us kind of read everything. So how do curation bubbles work?

Speaker 2:

Yeah, so this is kind of a play on some of the other catchy names that have populated this space. You've heard about echo chambers and filter bubbles. This is another take on that general concern that the internet allows us to inhabit sort of homogeneous information environments, whether that's echo chambers, where users self-select into congenial information, or filter bubbles, where the algorithm does it. With curation bubbles, we highlight that there are two faces of curation.

Speaker 2:

There's consumptive curation: I get to choose which users, say on Twitter, I follow, or on Facebook, who I'm friends with.

Speaker 2:

And then there's productive curation: I get to pick and choose what information I share that I see as useful for me, right, whether that's promoting my identities, or performing who I am on the internet, or advancing my interests. I might share information that's good for my party and bad for the other party. But if that's both what users are doing and how users are encountering information, this means you get the potential for what we're calling a curation bubble to form, where people are sharing individual stories that are useful for some of these functions, with little regard, necessarily, for the source that produced them. And so, even going beyond that, they might be willing to share, or find it useful to share, information from what at the source level looks like an out-party source, precisely when that source publishes something that's socially useful for your in-group, right?

Speaker 2:

So you know, conservatives may have generally self-selected out of reading the New York Times. When the New York Times covers Hillary's emails, the people who are happiest to share the negative coverage of Hillary's emails are the people who are least likely to subscribe to the New York Times, even though it's a New York Times story. So if you try to measure the partisan valence of information at the story level, you get a much more sort of mixed bag of measurement there than if you measure it at the source level. At the source level the New York Times looks very consistently liberal, because most of the time liberals are the ones sharing New York Times stories, except when the New York Times publishes something that's useful for conservatives and that stuff still circulates in right-wing spaces.
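The story-level versus source-level measurement point can be made concrete with a small sketch. Everything below is hypothetical and purely illustrative: the story names, party labels, share counts, and the simple valence formula are my assumptions, not Green's actual data or estimator.

```python
from collections import defaultdict

# Hypothetical share counts, by sharer's party, for three stories
# from a single outlet. (Illustrative numbers only.)
story_shares = {
    "climate feature":     {"D": 900, "R": 100},
    "campaign coverage":   {"D": 800, "R": 200},
    "Clinton email story": {"D": 150, "R": 850},
}

def valence(counts):
    """Share-based valence in [-1, 1]: +1 = all Democratic sharers, -1 = all Republican."""
    total = counts["D"] + counts["R"]
    return (counts["D"] - counts["R"]) / total

# Story level: a mixed bag -- one story flips sign entirely.
story_level = {story: valence(c) for story, c in story_shares.items()}

# Source level: pool all shares, and the outlet looks consistently liberal.
pooled = defaultdict(int)
for c in story_shares.values():
    for party, n in c.items():
        pooled[party] += n
source_level = valence(pooled)

print(story_level)   # the email story has negative (conservative-shared) valence
print(source_level)  # but the pooled outlet-level valence is positive
```

The point of the sketch is just that the same outlet can look "consistently liberal" when you aggregate to the source level, even while individual stories circulate heavily in right-wing spaces.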

Speaker 1:

So I was really taken by the idea of curation bubbles, because it makes it clear that these are emergent and dynamic. An echo chamber means that everyone who agrees picks this, we get in there, and we all talk to each other; that's sort of static. A curation bubble means that we're deciding, and curation bubbles are competing, in the sense that I can always switch, it's easy, and so if I see some things from this other source, I go there. Now, they're loose. It may be hard to define the limits of one, but it's a kind of network in which people learn, and it may be better than nothing, in the sense that I am likely to get a variety of different perspectives. One of the things that conservatives have been re... can you re-X something? I was going to say retweet, but there, I just said retweet.

Speaker 2:

It's always going to be Twitter for me.

Speaker 1:

There was an op-ed in the New York Times by a woman saying that we were lied to, that the Wuhan lab-leak theory actually is plausible, and that the mainstream media should not have tried to suppress people who argued about its plausibility. And that's everywhere. It comes from the New York Times, and so it's just perfect for that curation bubble, that they can reproduce it, because it's not obvious...

Speaker 2:

The New York Times would publish something like that. And you were talking about credible commitments earlier, right? It's precisely because it's the New York Times that published it; it's not an accident. Yes, it's viewed as more credible because even the liberal New York Times is saying something that conservatives like. That makes it all the more useful to share.

Speaker 1:

Yeah, but that makes it, in a way, kind of more permeable. I am likely to get a lot of different perspectives, rather than if I just go to the same website or if I just listen to Fox News. So this is a kind of emergent institution that might lead to something else. I do wonder, though, how long it's going to be before we start to see something like what used to be the mainstream media. I'm really old; I grew up in the 1960s.

Speaker 1:

In the evening, it would often be as if Mount Rushmore were speaking, because Walter Cronkite would come on and tell us the truth. At one point he told us that the US government had lied about the Vietnam War, and we were just thunderstruck. Is that possible? Because the government doesn't lie. Well, he said it, so it must be true; we all believed him. Now, if you read somewhere that the government lies, it's probably not true, and if it is true, well, it happens all the time. We don't have anything like Walter Cronkite now. Maybe we don't need Walter Cronkite himself, but what's going to happen five years from now? Are we going to have solved this problem somewhat, or are we still not really going to have verification or curation?

Speaker 2:

That's a great question. I guess I'm not optimistic about a five-year time horizon, but, well, it depends on how bad outcomes get, I guess, right? Can the system survive as the conformity costs of subscribing to the New York Times, which is going to structure the information that you acquire and how you use it in some meaningful way, have gone down? It turns out that doing your own research on each and every issue is really expensive, in a time-and-energy sense.

Speaker 1:

Well, and in terms of abundance, there's so much information you might reach either conclusion. There's a whole bunch of "information." I'm making air quotes.

Speaker 2:

Yeah, there's too much information to make sense of. And right now, I think, especially as we pivot away from social media toward YouTube and TikTok, verification and reputations are a lot less important. TikTok is purely algorithm-driven; it relies very little on who you follow to determine what you see. And there are just ordinary people doing the news, right, who are not trained journalists but are sort of performing that kind of function.

Speaker 2:

I remember seeing on TikTok that there's a guy who just does the weather. He's just some guy, right, but he's doing the weather as a meteorologist would, because there is still demand; I still want to know what the weather forecast is. But on TikTok, there's no other mechanism for making sure that you see the meteorologist every day. So I just think that's interesting. If users want to stay on TikTok to get their information, I don't know how TikTok solves that problem.

Speaker 1:

That's just not how the platform architecture is set up. In 2000, John Aldrich, another colleague of ours at Duke, and I went to Cuba for 10 days. We were having dinner with a professor there one night, and he was saying that we cannot believe anything that is in the Cuban newspapers because they're just published by the government. There was one called Granma. Granma was the boat that Castro had used to land on the coast, and so the paper was named after this rickety old boat. And the guy was saying this lie, this lie, this lie were published, and one of them was something about the United States having protected Israel by vetoing a UN Security Council resolution. And Aldrich and I said, well, that's actually true; that happened. I mean, the merits of it are a different question, but it happened.

Speaker 1:

The fact that the paper published it doesn't mean it's false. He had reached the conclusion that everything was false. It would be really helpful if there were a source where all the news was false. The problem is that some of the things you encounter are false, some are true, and I have no way of telling which is which. And we're not Cuba; this is not something that is published under constraint by the government. It's just that this superabundance of sources makes it impossible for me either to verify or to curate. Yes, it's hard to predict what might solve the problem. But five, 10, 20 years from now, what would be required for a solution? What conditions would a solution have to satisfy?

Speaker 2:

So, one, I think norms around supporting your beliefs with reliable as opposed to unreliable information have really weakened. There are just fewer penalties now.

Speaker 1:

It's costly signals. Not only are there fewer penalties, there are benefits.

Speaker 2:

Yes, right. I don't want to just constantly use Elon Musk as an example, but he's kind of the main character right now. One of the first things that Elon Musk did when he was going through this efficiency initiative was to turn his followers loose on an already existing government database of spending, where they could unearth these deep, dark secrets of all the wasteful things the government was spending money on. And anyone who got into Elon Musk's replies with "I found some wasteful spending" would get treated as having unearthed some deep, dark secret of things the government didn't want you to know it was spending money on, when it was an old government database. The government had been transparent about what it was spending money on.

Speaker 1:

You'd probably have to go a couple of levels down in a search, and it might have taken you five minutes, but it was publicly available.

Speaker 2:

Yeah, I forget the exact name of the website. It was like USspending.gov, right. And the projects themselves are completely decontextualized from their purpose, right? So it's easy to sort of demonstrate...

Speaker 1:

Some of them are silly. I used to work for NSF. To an outsider, some of them probably look silly.

Speaker 2:

Sure. If I had told you 20 years ago that the government was funding research into Gila monster venom, right, it would have sounded pretty ridiculous, but that's how we got Ozempic. It's the nature of research.

Speaker 1:

There's all sorts of different dimensions.

Speaker 2:

Yeah. So the tricky part here is that the information being shared is strictly true, right? The government is spending money on these projects. The tricky part is that this strictly true information from an extremely reliable source, USspending.gov, let's call it that, is getting repurposed for an extremely misleading narrative: that, in some people's telling, a majority of the money the government is spending goes to these ridiculous liberal fantasies that are essentially stealing money from the more useful projects. So this is not even a problem of individual pieces of information being false. This is true information getting stitched together to serve a misleading function, and we as a discipline are really just figuring out how to even start thinking about or quantifying this kind of thing.

Speaker 2:

So I have a paper in the pipeline that tries to reconceptualize misinformation. Because if you measure misinformation as is common in the literature, with a list of unreliable sources, sources that don't have the standards of verification, that repeatedly publish false claims, things of that nature, the market for misinformation is vanishingly small. These websites are just orders of magnitude smaller than reliable sources like the Washington Post or CNN that have the more professional norms around these things. And I think that literature has gotten really caught up in: you shared a link from a bad website, therefore the claim is false. Well, Breitbart publishes a lot of true claims. Just like Granma, you can't tell. And so what we do is we look at co-sharing. We have a list of users who frequently share URLs from these bad sources. They also share plenty of information from good sources, and it turns out that the information they share from good sources is different from the information that other users share from those same sources, right?

Speaker 2:

So the example we like to use is that the Washington Post, a couple of years ago, had a headline: it's no longer a pandemic of the unvaccinated. More than half of people who get COVID are also vaccinated now. And it's easy to see for anyone who understands how base rates work: if everybody gets vaccinated, then, by definition, the people who get COVID will be vaccinated. But you can guess who was really excited to share this headline from the Washington Post, right? It was people who didn't want you to get vaccinated, because "the vaccine won't protect you."
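The base-rate logic here can be shown with a quick arithmetic sketch. All the numbers (vaccination share, infection rates) are hypothetical, chosen only to illustrate the mechanism; they are not figures from the episode or the Post story.

```python
# Base-rate sketch with hypothetical numbers: a highly effective vaccine can
# still leave the vaccinated as a large share of cases, once almost everyone
# is vaccinated.
population = 1_000_000
vax_share = 0.80
rate_unvax = 0.10   # assumed infection rate among the unvaccinated
rate_vax = 0.02     # assumed infection rate among the vaccinated (5x lower)

vaxxed = population * vax_share
unvaxxed = population - vaxxed

cases_vax = vaxxed * rate_vax        # 16,000 cases among the vaccinated
cases_unvax = unvaxxed * rate_unvax  # 20,000 cases among the unvaccinated

# Share of all cases occurring among the vaccinated:
share_vax_cases = cases_vax / (cases_vax + cases_unvax)
print(f"{share_vax_cases:.0%} of cases are vaccinated")  # 44% of cases are vaccinated

# Push vaccination to 95% and the vaccinated become the majority of cases,
# even though each vaccinated person is still 5x less likely to be infected.
vaxxed95 = population * 0.95
cases_vax95 = vaxxed95 * rate_vax
cases_unvax95 = (population - vaxxed95) * rate_unvax
share95 = cases_vax95 / (cases_vax95 + cases_unvax95)
print(f"{share95:.0%} of cases are vaccinated at 95% uptake")  # 79%
```

The headline statistic is true under these assumptions, but it says nothing against the vaccine: the individual risk reduction is unchanged.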

Speaker 2:

And this is, again, true information, right? Most people who get COVID now also got vaccinated. But it's not useful to then act on that information and not get the vaccine. The vaccine is still good. You should still get vaccinated; even if you get COVID, it'll be a milder case, all those kinds of things. But if you only count misinformation as "you shared a link from InfoWars," that gets totally missed in our conceptualization of the less useful or less reliable kind of information that's out there, because it's users strategically repurposing true information for misleading narratives. And it's just a really hard problem to solve, and it's unclear the extent to which you even want to impose solutions, because in some sense this is just people talking, right?

Speaker 1:

I know people who would basically say what you just said but reverse all the valences, right? So, you know, there's reliable information like Breitbart, but then there's these unreliable things like the Washington Post. And it's very difficult to adjudicate that if you just have different sources of what counts as authentic information. So that bifurcation is catastrophic, where we can no longer even agree about what's an authoritative representation of the truth.

Speaker 2:

Yeah, well, and if you think about the history of particularly right-wing media, it evolves very much in reaction to the professional norms of the mainstream media that were perceived to be hostile to conservative interests. So if conservatives, not even 10 years ago but 50, 60 years ago, are thinking we don't feel represented in the New York Times, we don't feel like our views are getting a fair shake here, there's something about the process of how the New York Times or these other mainstream media sources are doing journalism that's not useful for us. We're going to go set up our own counter institutions, which is where you get things like the Heritage Foundation and the National Review. These are very much counter institutions for sort of pushing back against the professional norms of these elite spaces.

Speaker 2:

And this is one of the frustrating things when we then do misinformation research at the source level: sites like the Daily Caller or Breitbart get categorized as unreliable because they don't adopt the professional norms of, say, the Washington Post. But they're not trying to, right? It's not like they're doing what the Washington Post does and failing; they're trying to do something else, and that's their brand name. And it just gets really hard to disentangle misinformation-as-unreliable-sources from: these are sites that are trying to promote conservative narratives and have less sensitivity to the standards and practices that would be required of a Washington Post or New York Times. And it's just, yeah, I don't have a solution.

Speaker 1:

I'm not even sure how big of a problem it is. You do have a lot of other thoughts.

Speaker 2:

If someone wanted to learn more about your research and perspectives on this, what should they look at? So my website is jgreen4919.github.io. You can also find me... That's very catchy. Yeah, right, I should have put a little more thought into my GitHub handle five years ago, or whenever it was that I set that up. You didn't realize you were going to be so famous. Or I could buy a domain and redirect; I just haven't done that yet. You can find me at Duke.

Speaker 1:

I will put that up so it's easy to click on, as if it were something that actually makes sense.

Speaker 2:

So I will put that up, and it's also linked on my scholars.duke.edu profile. If you just search my name, I'm there as well.

Speaker 1:

All right. Well, I appreciate that and thank you so much for being on.

Speaker 1:

The Answer Is Transaction Costs. It was great to talk. Yeah, thanks so much for having me. Whoa, that sound means it's time for the twedge. We have two political jokes this month, both about information. First: a push pollster calls a voter who really, really hates the candidate the pollster is calling about. The voter goes into a five-minute tirade ending with, "and if your guy gets elected, I don't know whether I'm going to kill myself or leave the country." And so the pollster says, "Okay, so... undecided."

Speaker 1:

Second: a tourist wanders into a back-alley antique shop in San Francisco's Chinatown. Looking through the objects on display, he discovers a detailed, life-size bronze sculpture of a rat. The sculpture is so interesting and unique that he picks it up and asks the shop owner, "What do you want for this?" "Oh, that's $12 for the rat, sir, but $1,000 more for the story behind it." The shop owner looked at him expectantly. The shopper said, "Look, you can keep the story, old man; I will take this rat for $12." Transaction complete. The tourist leaves the store with the bronze rat under his arm.

Speaker 1:

As he crosses the street in front of the store, two live rats emerge from the sewer drain and fall into step behind him. Nervously looking back over his shoulder, he starts to walk faster. Every time he passes another sewer drain, more rats come teeming out and follow him. By the time he's walked two blocks, at least a hundred rats are at his heels. People start to point and shout. He walks faster; soon he breaks into a run. Multitudes of rats swarm from the sewers, the basements, the vacant lots, and the abandoned cars. Rats by the thousands are at his heels. He sees the waterfront at the bottom of the hill. He panics and starts to run full tilt. But no matter how fast he runs, the rats keep up, squealing hideously, not just thousands now but millions. By the time he comes rushing to the water's edge, a trail of rats, thick and black, 12 city blocks long, is behind him. Making a mighty leap, he jumps up onto a light post, grasping it with one arm, and hurls the bronze rat out into San Francisco Bay with the other arm, as far as he can heave it. Pulling his legs up, clinging to the light post, he watches in amazement as the seething tide of rats surges over the breakwater into the sea, where they all drown. Shaking, muttering to himself, he makes his way back to the antique shop. "Ah, I thought you'd come back for the story," says the owner. "No," says the tourist, "I just wanted to see if you had a bronze politician." This month's letter. Hi, Mike.

Speaker 1:

In this episode (that is, the previous episode, with Pete Boettke, about socialism) you question the emergence of socialist thought among American college students. Your approach assumes that socialism is synonymous with central planning, which it is not. Marx didn't address the issue of central planning. Socialist thought, in my understanding, is the search for a solution to the externalities of the free market: pollution, consumerism, concentration of income. Perhaps the Scandinavian countries are the best socialist solution we have at the moment, even though they have clear support for the market economy. Signed, JCF. End of letter. Gosh, JCF.

Speaker 1:

Socialism is an ideology that advocates for collective or government ownership or control of the means of production. Full stop. Now, the goal is to reduce economic inequality and provide for basic needs, but a socialist system has state ownership of the means of production. As for a capitalist system with pollution, consumerism, and concentration of income, socialists claim you wouldn't have those if you had state ownership of the means of production. So the Nordic countries in particular are not socialist. I've written about this a fair amount, about how Sweden is one of the most capitalist countries in the world, and I'll put up a link in the show notes. The prime minister of Denmark specifically admonished Bernie Sanders to stop saying that Denmark was socialist. They're capitalists. Socialism requires state or collective ownership of the means of production. That's not true of the Nordic countries; they sold off almost all of their state-owned enterprises.

Speaker 1:

Now, Marxism is a particular kind of socialism, where state ownership of the means of production is required. It's true that Marx didn't address the issue of central planning, but he certainly favored state or collective ownership of the means of production. The goal was to abolish private property entirely, at least in the means of production, and the mechanism by which this was to be achieved was class struggle and a revolutionary political approach. So it is true that there are many kinds of socialism that are not Marxism, but all Marxism is socialist. In short, socialism is the broad umbrella of ideas about addressing economic inequality through collective ownership of the means of production; Marxism is a specific theory within socialism. So you're right to say that it would be wrong to equate socialism and Marxism; you're right as far as that goes. But I have to remind you, I am the Knower of Important Things. So, as usual, disagreeing with me, the Knower of Important Things, does not make you a bad person, but it does mean that you were wrong.

Speaker 1:

March's book of the month is Alexander Kirshner's Legitimate Opposition, published in 2022 by Yale University Press. This book makes a fascinating argument about the importance of being able to express opposition, not just because it improves the quality of a competitive democracy, but because it is a means of exercising agency. So it's a basic liberal primary goal, not an instrumental goal. Reading the book, I found a lot of deep insights in it, and it's interesting: it's hard to classify ideologically, because it follows the logic of its conclusions wherever they may lead. So: Alexander Kirshner's Legitimate Opposition, 2022, Yale University Press. The next episode will be released on Tuesday, April 29th. We'll have a new topic, some letters, and, of course, a new hilarious twedge. All that and more next month on Tidy C.