The Answer Is Transaction Costs

Effective Altruism and the Transaction Costs of Maximizing Expected Value

September 26, 2023 · Michael Munger · Season 1, Episode 18

A thought-provoking conversation about Effective Altruism (EA) with technologist Ben Goldhaber, as we explore its intersections with utilitarianism and transaction costs. We'll try to navigate the tricky terrain between libertarianism and the more "directed" world of EA, the balance of directional and destinationist solutions, and the role of strong leadership and community dynamics in maintaining that equilibrium.

We'll question the limits of utility maximization as a framework and ponder the potential dangers it could pose if unchecked. Our discussion investigates how EA, rational thinking, and global development have influenced the field of AI alignment.

And my favorite new TWEJ, from @dtarias, in the first monthly edition of TAITC.

Some resources:


If you have questions or comments, or want to suggest a future topic, email the show at taitc.email@gmail.com!


You can follow Mike Munger on Twitter at @mungowitz


Transcript



Speaker 1:

This is Mike Munger of Duke University, the knower of important things. Rationalism, effective altruism, and Sam Bankman-Fried: are these people whining and melancholy moralists? My interview with Ben Goldhaber, a new TWEJ, and more, straight out of Creedmoor. This is TAITC.

Speaker 2:

I thought they'd talk about a system where there were no transaction costs. It's an imaginary system. There are always transaction costs. When it is costly to transact, institutions matter. And it is costly to transact.

Speaker 1:

Last week's letter was on utilitarianism and effective altruism. Here's the letter. One issue that your thoughts on transaction costs have helped me make some progress on is Peter Singer's argument in his infamous paper "Famine, Affluence, and Morality." To recall that argument, the centerpiece is a story wherein Singer, while walking past a shallow pond, sees a child drowning in that pond. The intuitive judgment is that Singer, or anyone else in this situation, ought, morally speaking, to wade into the pond to rescue the child from drowning. The argument continues by claiming there are no morally relevant differences between a case where a child is drowning in a pond and the suffering and death of people in other parts of the world, suffering and death we know of, at least in the abstract, and are in some position to alleviate or prevent. The argument concludes that each of us who has resources and knowledge of the situation in other parts of the world is morally obliged to sacrifice as much as we have to ameliorate that situation, up to the margin of sacrificing something morally comparable to the suffering and death we aim to alleviate. In a key passage, Singer claims, and now I'm quoting Singer from the letter: From the moral point of view, the development of the world into a global village has made an important, though still unrecognized, difference to our moral situation. Expert observers and supervisors sent out by famine relief organizations, or permanently stationed in famine-prone areas, can direct our aid almost as effectively as we could get it to someone in our own block. End quote from Singer. Now back to the letter.

Speaker 1:

This passage has always bothered me as too glib, too easy. But meditating on transaction costs has yielded some conceptual resources to better say what might bother me about this passage. Singer seems to ignore the difference in transaction costs encountered by someone rescuing a child from drowning in a pond right in front of them and someone donating to a Bengali famine relief charity. To my mind, the difference in transaction costs is substantial, but what remains unclear to me is whether the difference in transaction costs makes much moral difference. I can see how the development of the effective altruist movement, as well as services like GuideStar and Charity Navigator, constitutes an effort to reduce the transaction costs of charity.

Speaker 1:

I'm curious to hear your thoughts on the matter. With best wishes, AA. Well, thanks again, AA, for that question. To get some answers, I went to Ben Goldhaber, who describes himself as effective altruism adjacent, but he has been around the movement for nearly a decade. Ben Goldhaber is a technologist who has worked at Google and at various startups in cryptocurrency, education, and artificial intelligence. Here's my interview with him. Now, it was on Zoom, and there are some problems with sound consistency; my apologies. So let me ask, Ben, if you will introduce yourself and just say something about your background and how you came to be interested in EA.

Speaker 3:

Sure, yeah. You know, I should have thought of this before I actually joined the call, what the good, succinct summary is, because it's been a bit of a winding journey. Well, all right, let me get to it. Yes, it's a pleasure to be on. My background is that I've worked in tech, at various companies and startups, for a number of years, a little over a decade now. And I'll say I'm currently a director at FAR, which is an AI alignment research lab.

Speaker 3:

I have been involved in the EA space, broadly defined, for probably just under a decade, about nine years maybe. I first learned about it a little over 10 years ago, but then really got more involved in 2014. And I, like many people nearby EA, or in this kind of general intellectual milieu, hesitate a little bit to label myself as an effective altruist. But that in and of itself is a bit of a running joke within this community, where the first rule of EA is something like: oh, you're not EA, you're EA adjacent. And I do kind of label myself in that way, coming a bit from the rationalist space and just other intellectual traditions, including libertarianism, probably.

Speaker 3:

So yeah, that is me.

Speaker 1:

Well, thanks. I had wanted to ask because I think probably almost everybody thinks they know what EA is, but their definitions differ. I want to ask what you think EA is, and how is it related to utilitarianism? Because a lot of people really hate utilitarianism. Shouldn't they just hate EA?

Speaker 3:

Interesting. I think they are closely related. Let me answer that second part. There are so many good reasons to hate EA; I don't necessarily think you need to just pick utilitarianism as the reason, and, in point of fact, I think that maybe you shouldn't hate EA for utilitarianism. To jump back to the first thing you said, I think that they are pretty tightly connected, in that many early founders in the movement would label themselves as utilitarian. I think the tradition comes from a utilitarian tradition. Peter Singer is a notable early EA, a proto-EA, a huge influence, a clear utilitarian thinker, and there were many others as well. But I don't necessarily think it's exclusively that. I would not necessarily label myself as a utilitarian. I think, to a certain degree, and maybe this is a dodge, consequentialist-style thinking is the broader thing.

Speaker 1:

That's not a dodge at all. That clearly is a different thing.

Speaker 3:

Clearly a different thing. Good. Well, I do think of consequentialism as being maybe the key part of it, though I know people who identify as EA and are not that. To put it a bit more succinctly, then, it would be something like: deeply influenced by the idea that you should actually think about the outcomes of things, doing the expected value calculations, but by no means exclusively requiring signing up for the full utilitarian crazy train. And I say that with praise, by the way, for the people who do.

Speaker 1:

The question really is about the difference, and the way I have thought of the difference is this. Consequentialism, which I associate more with effective altruism, is: if you are going to engage in charitable activities, or you're going to try to participate in some way of helping people who are less fortunate, you have an obligation to make sure you maximize the positive effects of your efforts and your money. So if you have already allocated some effort and some resources to charity, you should care about information about ways that you could increase the impact of it. That's the version I'd sign up for.
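As an aside, a minimal sketch in Python makes this "maximize the positive effects of a fixed budget" idea concrete. All charity names and numbers below are invented for illustration; they are not GiveWell's or anyone's actual figures.

    # A hypothetical cost-effectiveness comparison: rank interventions by
    # expected outcomes per dollar, then see what a fixed budget buys.
    # All names and numbers are invented for illustration.
    charities = {
        # name: (cost per unit delivered in $, P(unit produces the outcome))
        "bed nets":       (5.00, 0.0040),
        "cash transfers": (1.00, 0.0005),
        "deworming":      (0.50, 0.0010),
    }
    budget = 10_000.0

    def outcomes_per_dollar(cost, p_outcome):
        return p_outcome / cost

    ranked = sorted(charities.items(),
                    key=lambda kv: outcomes_per_dollar(*kv[1]),
                    reverse=True)
    for name, (cost, p) in ranked:
        rate = outcomes_per_dollar(cost, p)
        print(f"{name:14s} {rate:.5f} outcomes/$ -> "
              f"{budget * rate:.0f} expected outcomes from the budget")

The point is only the comparison: once you care about information, two interventions that look equally worthy can differ several-fold in expected outcomes per dollar.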

Speaker 3:

Yeah, I think that's great, and it probably captures 95%, maybe 98%, of the spirit of it, with only one quibble I would make.

Speaker 3:

I think a lot of people argue within this space around the term obligation, and then also the term maximization. Those are still contentious: whether the true spirit of this movement is feeling an obligation versus some other motivation for caring about the information, which, by the way, I love; I think I'm definitely going to steal that phrase, which I think is right, about caring about information that could help you maximize the good you're doing. And then the other term that people quibble about is maximization. It's like, well, there are a lot of things that can go wrong when you start to maximize things, a lot of other values that can fall to the side. How do you avoid having that happen? But I also think it's a fair characterization that quite a bit of it, as an intellectual tradition and as a practice, is about maximization, about trying to actually think through how you really get the most of this thing that you state you care about.

Speaker 1:

And that is why EA is in some ways tainted with the association with Peter Singer, because he's actually operating on the other margin. Utilitarianism almost starts at the other end. Utilitarianism is the source of the obligation: so long as there are differences in utility between you and someone who's less well off, and you have the capacity to do something about those things, then you have an obligation to do something about those things. That seems to me highly questionable, and I've always seen EA as starting at the other end.

Speaker 1:

Given the massive differences, wouldn't you want to try to do something? And if you did do something to reduce those differences, wouldn't you try to do it utilizing as much information as is available? And the movement can reduce the transaction costs of having information about who needs help most and what the best kinds of help are. So I see those two things as starting at different margins. Yes, they come pretty close together, but utilitarianism is deriving an obligation from any difference at all. Now, that's a caricature, but you can certainly read the real utilitarians that way: so long as I'm somewhat better off than someone else, there's some obligation. Maybe I don't have to act on it, but I have some moral obligation to care about other people.

Speaker 3:

Yeah, that seems fair as a difference.

Speaker 1:

But also at, like, the 10,000-foot level. No, no, yeah.

Speaker 3:

So one thing that I think is very true about that is something about the message and also the focus of EA as a movement, in its outward-facing communications versus its inter- and intra-community communications. I strongly agree, especially traditionally. When EA was first and foremost about, well, "first and foremost" is the wrong phrase, it has actually always had a multiplicity of viewpoints within it. But a decade ago, five, even five years ago, I think it had much more of a global welfare point of view, strongly influenced by Peter Singer. And I think in that world, both some of the communication with others and within the movement was quite focused on pointing out just the sheer difference, the magnitude, as you say, between somebody in the first world with their suit and the drowning child in the pond, and how, of course, you would want to try to save the child, and so you should also consider donating. And I think that makes a lot of sense. Now, a difference, and I think this happens in a lot of intellectual communities, is that in the arguments and discussions you're having with people within the community, the focus shifts to where the line gets drawn: at what point do we stop?

Speaker 3:

And I think that's where you start to see the part that looks a little bit more like what you're pointing out, the kind of utilitarian obligation of: well, if you do really care about this, why would you just stop at donating X amount of money to ending malaria? Why not also go further? And yeah, I think that is a natural consequence of a lot of utilitarian thinking, which is that there is not an immediate point at which you can say: no, this is supererogatory, this is something I don't have to do as a moral demand. And that has been fascinating to see as well, in terms of how that idea translates into praxis within a community that is, at least in my mind, fundamentally about trying to put the intellectual ideas here into practice.

Speaker 1:

So one of the things I have enjoyed from afar about watching the scrum is something that I also see among libertarians. A lot of libertarians argue about different end-state utopias. The fact is, we agree on almost everything, and we have 1,000 miles to go in that direction before we decide: okay, this is enough, or we should go further.

Speaker 3:

Yes.

Speaker 1:

Why not go 10 miles in that direction before you start arguing? Sure, the end-state utopias that you can conceive of, great, have a beer, talk about that. But at the office during the day, let's try to get the train moving.

Speaker 3:

Preach. Let's try to get one mile at a time.

Speaker 3:

Yeah, I really agree with this.

Speaker 3:

I have a theory on why this happens. I don't think it's about some kind of explicit part of the ideology.

Speaker 3:

Well, actually, maybe I'll retract that. But one thing is, I think so much of this comes about from the actual structure of the communities, the institutions that exist, and the personal dynamics between members. Certain communities seem much better at having one hand grasping the big ideas, really thinking through the implications of them, asking, no, but where do we stop?, while also having another hand grasping the fact that, okay, when ideas hit reality, there's a certain amount of flex you need in order to just get things done. And I think a mark of a good, thriving, fertile community is one that can do both. And this is obviously something you know more about than I do, as something of an outsider now to some of the libertarian movement, but it strikes me as one that has had a hard time navigating that kind of difference, or being able to hold that tension and still do things productively.

Speaker 1:

I may know more about the problem; I know less about the solution, because it is something I have never been able to solve. I've compared directional and destinationist solutions. The directionalists all have an idea of: let's move in this direction, and that seems to be more like EA. Destinationists have a particular end-state vision, and anything that doesn't immediately lead to that, they say, well, that's not enough. And, to be fair, comparing those is a way of measuring your commitment, and so if you're not fully committed, then, right, you're not really in the club. Can we trust you?

Speaker 3:

I think that's right. That process is always running, and resisting it requires some really good leadership, maybe, or some kind of embedded wisdom or groundedness. I don't quite know what the phrase would be, or what the gears-level meaning of those terms is, but it is how I think about it: within the community and among the members, you can resist the temptation to constantly be doing the purity checks. Yeah.

Speaker 1:

Yeah, and, well, have a beer at night and do that; that's okay. There's a lot of other stuff to do. Well, some people, and you've already mentioned this a little bit, some people characterize EA as: maximize expected utility in almost every aspect of our lives. Really, so it's not just in the way we treat others; it's the way that we do things as individuals. Is that accurate, and what has been your experience? How has it changed? How has doing that changed the way you do things?

Speaker 3:

So it's complicated. One thing I would say is that it's important to think of the demographic composition of EA and the associated communities when thinking about the fervor that some bring to utility maximization. And obviously, I'll note, this is not a concept alien to the Munger household, age being an important thing, noting your son's work on this; I think it's really important. When I think about EA as a movement, one of the things that has struck me is how much, a decade ago, when it was a proto-movement or just forming, the oldest members of EA were maybe in their 30s, and not their upper 30s, maybe early to mid. That really shaped the styles of conversations and the attitudes and behaviors. And if I think about it now, a decade on, and it has gone through a lot during that period, one thing I'm struck by is the degree to which some of these ideas, at least I think, are being held a little bit better, a little more loosely or gently. Again, I didn't say this at the start, so I should note: I'm just one guy who has been involved in this for a while and is still not quite sure of his relationship to the broader movement. But my take, at least, is that the attitude toward utility maximization seemed much more common in the earlier days when the movement was forming, and there was also less of a contingent of people who could see the ways in which deeply gripping utility maximization had negative effects on people and on institutions and groups; I can talk more about that. And now, at least, many people pay homage to the idea of being careful about forcing maximization too hard, of having something more like an 80% attitude toward it. Most notably, the FTX collapse, which I think is important to talk about, and Sam Bankman-Fried's actions, and the degree to which this was interlinked within the movement, was such a stark and clear indicator of some of the risks, the systemic risks, of unchecked utility maximization, that it has been a huge signal to avoid certain aspects of that mindset within the community. But even before that, I think there were many incidents and difficulties that can come from just holding to utility maximization.

Speaker 3:

Here's something I want to note, and I feel a little bad, because there are people, within the movement and outside of it, who I think can make very good cases for why utility maximization is the right framework to have. It's just not one that I subscribe to. And I think, to wrap up what has been a long soliloquy here, the important part is being able to have that kind of attitude toward altruistic endeavors, parts of your personal life, and other things: being able to say, no, I actually care about this value, I want to work to maximize it, but without doing it in a way that becomes the classic naive utilitarian thing, where you end up sacrificing a lot and not noticing all the second- and third-order effects. If I may name-check a libertarian who I think is absolutely right about this, it's kind of the Hayekian attitude here, which has a humility underscoring it that maximizing attitudes often don't, an attitude that is being reflected in EA but maybe could be even more so.

Speaker 1:

The difficulty, and the thing that Hayek is very good about, is recognizing the limits of human capacity to know the right thing. So the example that I always use is: suppose you're in a new city, you don't have any resources, and your phone's not working for some reason. You want to go to the best restaurant you can. It's a city of 5 million people. You're not going to find the best restaurant; it would take you two days to do any sort of research. In fact, even if you have your phone and you look on Yelp, you're not going to maximize. The last thing you want to do is try to maximize along that one margin, once you start to account for the costs of information and the opportunity costs of time. Sure, maximize expected utility, that's okay, but that's actually a very different thing than always making sure you have the best in the feasible set, because the research required to do that with any real probability just can't happen.
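A toy simulation makes the restaurant point concrete. Assuming restaurant quality is roughly normally distributed and each additional search has an opportunity cost (both assumptions and all numbers are invented, not from the episode), expected net value peaks after checking only a handful of places and then declines.

    # A toy search-with-costs simulation: keep the best restaurant found
    # so far, but pay an opportunity cost for each place checked.
    # Quality distribution and cost are invented numbers.
    import random

    random.seed(42)
    CITY = [random.gauss(5.0, 2.0) for _ in range(5_000)]  # quality scores
    SEARCH_COST = 0.4  # opportunity cost of checking one more restaurant

    def expected_net_value(n_checked, trials=2_000):
        """Average of best-quality-found minus total search cost."""
        total = 0.0
        for _ in range(trials):
            total += max(random.sample(CITY, n_checked)) - SEARCH_COST * n_checked
        return total / trials

    for n in (1, 3, 5, 10, 25, 100):
        print(f"check {n:3d} places -> expected net value {expected_net_value(n):6.2f}")
    # Net value rises, peaks after a handful of searches, then falls:
    # exhaustively hunting for "the best" loses to stopping early.

The best restaurant found keeps improving with more search, but only logarithmically, while the cost of searching grows linearly, so the maximizer along one margin ends up worse off than the early stopper.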

Speaker 3:

Absolutely. And I do think EA, and again, I keep saying this broader intellectual community, because I think it is a little broader than EA, has an appreciation for the value of information and for being able to stop the search process at some point. One thing it does well as a community, though, is something like ongoing intellectual conversation around this: quite a bit of the intellectual culture and the cause prioritization within EA has shifted over the past decade, in part because of this continuing search process, one that maybe doesn't get fully maximized but is still happening. I'm bringing this up because it comes to mind from the restaurant search: having a bit of a long-term view on these kinds of search problems is important, I think, and it's a part of the culture that we don't know yet whether we're at the right spot.

Speaker 1:

The advantage is that that suggests it should be possible for groups of people to create institutions that facilitate others' searches. So we have a list: here are the different cuisines, and I'm going to search within a cuisine. And there's no reason for our charitable impulses not to have the same sort of informational benefits.

Speaker 3:

EA is, I think, one of the best of any modern movement at this. I would have unequivocally said the best maybe a year ago; obviously everything in this group kind of shifted post-FTX. But in terms of really having a commitment to the institution building that can then support intellectual infrastructure, I at least don't know of other comparable groups that have had that priority, and I think it's clearly paid off.

Speaker 1:

To an outsider, FTX and Sam Bankman-Fried sort of showed, their words, not mine, the superficiality of this kind of faith, because this person was an obvious fraud and con man whom all these supposedly hyper-rational people were completely taken in by, and he was running a pyramid scheme which, when it collapsed, shocked them all. And, in fact, to all of the outsiders: you knew all along this was going to happen; it wasn't a surprise. Now, I think that's quite a bit of Monday-morning quarterbacking, looking back and saying that they knew, because no one said that.

Speaker 3:

No one was making that bet. Unfortunately, no. I would respect it a lot more if the people saying this had the prediction track record down that, yes, this was obviously going to happen. And, by the way, you would have made hundreds of millions shorting it.

Speaker 1:

Absolutely. You can tell whether somebody actually believes something by whether they trade on that belief.

Speaker 3:

100%. It didn't happen, and I don't think that superficiality is the right term. There are many reasons to critique EA as a movement, and for what this indicates about it, and I could list many of those. As somebody who was aware, and has worked in tech and various things nearby that world, let me think about what I actually believe here.

Speaker 3:

Mostly, I think there's a very clear lesson here about the dangers of centralization, and the way in which he was a pillar of EA; I think there's no way to say he wasn't, just in funding amount, and also in the degree to which other EA groups lionized him and his efforts. I think it's very clear that it was seen as a model of things to do, up until the collapse. I don't think that's because people were blinding themselves to the pyramid-scheme nature of it. But I do think that the worldview of: well, "we", and I'm putting air quotes around that "we", clearly have the knowledge of what to do, we're the smartest people in the room, and so we should just get power in order to do that, is a failure mode of the technocratic or high-modernist impulses within EA. And this collapse points out the ways in which there's so much systemic risk from that, and that there is no way you can overcome some of that just by getting the right worldview from many other people.

Speaker 1:

Well, to be fair, it's also the success mode: the idea that there are things to be done, that we're actually going to try to do them in the best way possible, and that, instead of relying on existing institutions, we can actually make it happen. And, okay, yes, that goes along with being the smartest people in the room. So it is certainly possible to say that people with that mindset are going to be susceptible to failures which, when they happen, will be spectacular.

Speaker 3:

Yes, yes, I think that's all true.

Speaker 1:

What I wanted to ask you about was some successes that you could point to, and these can be small, but some sort of proof of concept. Because what most people know is: oh, EA, if you get to talking to someone at a cocktail party, you're going to be sorry, because they're going to talk for a long time. Well, that's true. And then: but now it's all over. In fact, it's not all over, and there have been some successes. So can you say something about that?

Speaker 3:

Yeah, absolutely, and this is from my position, again, as somebody who's followed and worked on various projects associated with EA over the years. I think examples include the animal welfare world. EA has done a really good job of shifting some of the conversation around animal welfare in a way that is consequentialist, as in: how do you reduce suffering? I think about that both in terms of real, clear wins on advocacy for better treatment of animals, and also in shifting the discourse within animal welfare circles, where, and I heard this secondhand, years ago people within these groups wouldn't even consider things like trying to prioritize more humane ways of killing, or the different value of different animals, and I think that can still be a bit taboo. But I think EA as an intellectual movement did a lot within animal welfare to shift the conversation to be more consequentialist there.

Speaker 3:

For other examples, you could look at just the sheer numbers, the amount of money donated through GiveWell and other orgs that did take this clear cause prioritization route and gave money that is probably going to be much better spent within more effective charities in global development. But then, I guess, the main one comes from the world that I come from more, which is focusing on AI.

Speaker 3:

Oh, and I should note, because I didn't say this at the beginning: FAR is not an EA organization. We're an alignment research lab, but, like many of us in this space, we're influenced by ideas that have been talked about quite a bit in EA and rationalist spaces, around the potential risks from AI and the importance of getting safe and aligned systems. AI alignment was not a field 20 years ago, and basically wasn't 10 years ago either; it is far richer now. There's been a tremendous amount of field building and intellectual progress on the potential risks from AI, and if you do think this is one of the more important problems in the world, then I think you have to credit EA for both being very early on it and investing a lot in trying to develop the field that could help tackle these issues.

Speaker 1:

I wanted to ask what may strike you as kind of an off-the-wall question, but it's a sort of philosophy-first thing I care about. I was reading Adam Smith, and something really struck me. So, this podcast is The Answer Is Transaction Costs, with the claim, really, that almost everything is about transaction costs, and that changes in transaction costs can have large unexpected effects because they change the number of cooperative activities that are possible. And in a way, it's a price. It's not a monetary price; it's just that something that used to be wasted can now be used. Smith actually talks in The Theory of Moral Sentiments about something that's very close to the hearts of a lot of EA people. So let me just read it. It's probably 200 words, so it's a little bit long.

Speaker 1:

Two different sets of philosophers have attempted to teach us this hardest of all the lessons of morality. One set have labored to increase our sensibility to the interests of others; another, to diminish that to our own. The first would have us feel for others as we naturally feel for ourselves. By that he's referring to the charitable impulse: I care about myself; I see someone suffering; I think I should care about them too, right?

Speaker 1:

The second would have us feel for ourselves as we naturally feel for others. That was the Stoics, and the Stoics would say: look, you're no better than anybody else. Yeah, your foot hurts; shut up, everybody has problems. So stop pretending that you should get extra treatment, right? So it's interesting: Smith himself is kind of a Stoic, but he's interested in the first group, those who would have us feel for others as we naturally feel for ourselves. Both, perhaps, have carried their doctrines a good deal beyond the just standard of nature and propriety. And this next sentence is one of my favorites in all of Smith, and I'm pretty sure that Peter Singer played on an intramural softball team by this name at Princeton: the first are those whining and melancholy moralists. Whining and Melancholy Moralists is a great intramural softball team name. That's a great team name.

Speaker 3:

That's a great band name. I must reuse this at some point.

Speaker 1:

I'll get t-shirts and make sure you get one. The first are those whining and melancholy moralists who are perpetually reproaching us with our own happiness, while so many of our brethren are in misery, who regard as impious the natural joy of prosperity which does not think of the many wretches that are at every instant laboring under all sorts of calamities, in the languor of poverty, in the agony of disease, in the horrors of death. Commiseration for those miseries which we never saw, which we never heard of, but which we may be assured are at all times infesting such numbers of our fellow creatures, ought, they think, to damp the pleasures of the fortunate, and to render a certain melancholy dejection habitual to all men. But first of all, this extreme sympathy with misfortunes which we know nothing about seems altogether absurd and unreasonable.

Speaker 1:

Now, the reason I think that's important is that when he wrote that, in 1759, there was geography and there was time. Something that was across an ocean was beyond my capacity to have much of an effect on. And when I heard of something, it's like looking up in the sky and seeing a supernova that happened years before: by the time I hear about some earthquake in China, it happened a long time ago, and I think, damn, that's a shame. Geography and time have been eliminated by the way the world has become digital, right? Actually being able to know things about the world around us more or less in real time, and to be able to send resources almost in real time, particularly if you have an infrastructure of mobilization, changes that. I actually have some sympathy with his claim in some ways. And I think, this is me saying this, I'm looking for your reaction: Peter Singer is a whining and melancholy moralist. He would like it if everybody felt bad about their child having a birthday party.

Speaker 3:

He would push it to that margin.

Speaker 1:

Kevin had birthday parties. Damn it, he had birthday parties.

Speaker 3:

He had birthday parties.

Speaker 1:

But still, there is really something to the claim that, at the margin, we can get rid of the obstacles of time and geography. And so, to me, this is the value proposition, saying that if Smith were here now, he'd say, well, yeah, okay, I see, that makes sense. EA actually means that there is something that can be done, it can be done quickly, and it can be done effectively. So is the biggest impact of EA the reduction in transaction costs? I always want it to be about information and infrastructure, and that's my own Procrustean bed. But is it fair, at least, to say that's a contribution?

Speaker 3:

100%. It's a contribution, and on a number of axes, and so I think this is going to address some of what you just said, but also spiral out a little bit. The institutions and makeup of EA are dramatically influenced by the internet and the decline of transaction costs; it almost entirely came from weirdos with blogs talking online with each other. This is something I have as a huge gripe, by the way, against most mainstream coverage of EA, certainly back when it was entirely shiny, unblemished by the FTX drama and crime and obvious culpability. You would read these stories in, like, Time magazine that had this amazing shiny take on the origins of EA, how it came from Oxford University and the minds there. No. Yes, Oxford is definitely a hub and an intellectual center for it, but where it came from was weirdos online arguing with each other, and the decreased transaction costs of that. And then also being able to hold community meetings, having the right prioritization for this, people flying out to see each other, Bay Area-style things. I say this as somebody who was the sponsor of the EA Global in 2015 at Google. The decreased transaction costs, and how much easier it became to bring people together, to form the relationships, and to make a community, are things that are possible now that weren't 20 years ago, and certainly not in Adam Smith's time. And that adds a distinct, global kind of attitude that I think has influenced a lot of ideas just by existing. Then, yes, to move toward the impact on the global development side: lower transaction costs. I think that's really true, especially in the global development vein of EA. One of the big wins from this has been getting toward the idea of just giving money to people, identifying people in less developed countries and giving money directly to them. A standout EA startup called Wave has almost exclusively focused on that, building the infrastructure to reduce the transaction costs, to make this kind of money remittance very low cost. And then, again, as I mentioned earlier, standout EA charities like GiveDirectly and GiveWell have been focused on: yep, how do we actually think about this in the right way, now that the transaction costs can be lowered and we can find the right people to give this money to most effectively? How do we scale that and do it? There's so much more I could talk about on transaction costs as they relate to EA, but let me cut myself off with a third point.
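A back-of-the-envelope sketch shows what "very low cost" remittance means for the recipient. The fee rates here are invented for illustration; they are not Wave's actual pricing.

    # The same $1,000 remittance under two hypothetical fee regimes.
    # Fee rates are invented for illustration, not any company's pricing.
    amount = 1_000.00
    for channel, fee in (("traditional wire", 0.07), ("low-cost mobile money", 0.01)):
        delivered = amount * (1 - fee)
        print(f"{channel:22s} fee {fee:4.0%} -> ${delivered:,.2f} delivered")

Under these made-up rates, dropping the fee from 7% to 1% delivers an extra $60 per $1,000 sent; at scale, that difference is pure transaction cost recovered, resources that used to be wasted and can now be used.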

Speaker 3:

The third point, from the stream of the movement that I've mostly gestated in, is long-termism: the belief that time is not a distinguishing factor in who you should care about. Should you care about future generations, both our grandkids and our great-grandkids, but then also the entirety of humanity and the moral value of all of our potential descendants? Long-termism is the belief that that has weight and that you should care about them.

Speaker 3:

And this is maybe forcing it a little too much into the frame of transaction costs, but I think of this as an idea about reducing transaction costs so that you can make trades with these future generations and benefit them. One of the biggest things that has changed about EA over the last 10 years is, I would say, long-termism, and this kind of moral argument has carried the day in many different parts of the philosophy. That has had real implications in terms of where energy, attention, and capital, both money and human capital, talent, are going to different groups. And, yeah, I think that is, in some ways, an argument that having ideas about transaction costs, about how to shift resources to people, how to do transactions, is super important to the movement.

Speaker 1:

Well, I really appreciate that, and I will work on those Whining and Melancholy Moralists t-shirts and make sure you get one.

Speaker 3:

Yes, that'd be great. I'm with you; we need them. My favorite part, and again, I don't think I would label myself as an EA, or at least I'd have hesitations, is that the thing I like is the kinds of movements that have a certain joyous attitude toward the ideas. It's almost like, you know, I don't quite feel like I have an obligation; it's more that I just have an unusual interest in some aspects of this altruism, and that's fun.

Speaker 1:

To be fair, that means people are going to ask, and if you have that t-shirt on, you'll be able to say: well, this was before Adam Smith was converted to EA, before we found the frame. He is a big EA advocate now that transaction costs got reduced.

Speaker 3:

Yeah, I think this is very valuable as a frame. And, by the way, to get back to the way in which institutions have been such a core part of this: my immediate thought was, yeah, the EA Forum, LessWrong, all these online forums that talk about the ideas. It's the kind of thing that should be in the water supply.

Speaker 1:

Well, for listeners who want to learn more, and want to do it in a way that's pretty low in transaction costs, is there something they should look at? I'll put up some links in the show notes, but what would you say are the one or two things? Where would you start?

Speaker 3:

That's a really good question, but as somebody who is not normally an emissary for EA, outside of this specific podcast, I'm not entirely sure. I'm not sure where the best place to start is. If I were to...

Speaker 1:

Oh, you're right, you're right.

Speaker 3:

What is good enough? I think the EA Forum team has done a great job of having a forum that has epistemic standards, that does a good job of collecting links from different people, and that also has sequences that both introduce things and help people tumble down the rabbit hole. I would also say they've done a good job of avoiding the kind of epistemic lock-in that can often happen. So that'd be my recommendation.

Speaker 1:

Well, my guest today has been Ben Goldhaber. Ben, thanks so much for being part of TAITC.

Speaker 3:

My absolute pleasure. Thank you for having me.

Speaker 1:

Whoa! That sound means it's time for the TWEJ. What I've done this time is record what I think is a really excellent joke about effective altruism from the Reddit user dtarias, D-T-A-R-I-A-S. Now, you might not find this joke funny, but it is a great send-up of the kind of extremism that might characterize the maximize-expected-utility implication. So, yes, it's not entirely fair, but come on, it's a joke.

Speaker 2:

What is the most efficient charity? Like, how can I help the most people while spending the least amount of money?

Speaker 2:

And it turns out it's actually buying malaria nets, or rather anti-malaria nets. It's not a smallpox-blanket situation. And they're like, I got you this net. They're like, why is it buzzing? I'm like, just enjoy the net; it's a gift. But it's true.

Speaker 2:

If you wanted to help the most people, malaria nets are the best by far. So I don't understand: why haven't the other charities switched to nets yet? What are they doing? You know, I have my friend who's into Autism Speaks. It's like, no, autism shouldn't speak; they should make nets. I feel like they'd be good at that. Or the Make-A-Wish Foundation. No: the Make-A-Net Foundation. Those kids still got a few good weeks left in them. They show up, they're like, are you going to grant my wish? It's like, that depends. Is your wish to be a net manufacturer? The worst are animal people, people whose charities are animal stuff, because you're funding wheelchairs for a fish when there's people who need help. You know, it's nonsense. People are so much more important than animals. Like, if it was up to me, and I had to decide between letting one person die or killing 800 dogs, I wouldn't even have to think about it, and I would lower the net full of dogs into the lava.

Speaker 2:

And you're like, oh, that's so sad. It's like, I know, what a waste of a net. But, like, I'm helping.

Speaker 1:

Now, catching up on letters. There was another letter we got last week: Mike, I happened to eat at a $195 buffet yesterday. It was the Sunday brunch at the Breakers in Palm Beach, Florida. I've attached the receipt and pictures. Of course I wanted to consume the money-cost value in food, but I'm not sure I was able to get there. RC in Birmingham, Alabama. Well, this was about the discussion on the previous episode about what the most expensive buffet in the world was, and RC has found one even more expensive than the one that I found. I've actually stayed at the Breakers, but I did not go to that buffet, so I did not see that price. It's kind of hard for me to imagine $195 per person for a breakfast buffet, but I bet it was really good. Well, thanks for that letter, RC. The next episode will be one month from now, Halloween day, October 31st. We'll work on the problem of transaction costs, cryptocurrency, and the blockchain space. Plus, we'll have another hilarious TWEJ and more, next month on TAITC.

Chapter Markers:

Effective Altruism and Transaction Costs
Navigating Libertarian Ideals and Practical Implementation
Utility Maximization and Effective Altruism Attitudes
EA's Impact on AI and Development
Transaction Costs in Effective Altruism
Effective Altruism and EA Forum Recommendations
Animal Charity Funding and Expensive Buffet