Where should I give?

I am trying to work out where to give a large-ish amount over the next week or two. I currently pay UK tax at the 40% rate, so UK deductibility (Gift Aid plus higher-rate relief) gives a 5/3 multiplier on donation value.
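
The 5/3 figure falls out of the arithmetic directly. A minimal sketch, assuming the standard UK Gift Aid mechanics (the charity grosses up the donation by reclaiming basic-rate tax, and a 40% taxpayer reclaims a further 20% of the gross via their tax return):

```python
def gift_aid_multiplier(net_donation, basic_rate=0.20, higher_rate=0.40):
    """Ratio of what the charity receives to what the donation costs the donor."""
    gross = net_donation / (1 - basic_rate)            # charity reclaims basic-rate tax
    donor_relief = gross * (higher_rate - basic_rate)  # donor reclaims the difference
    cost_to_donor = net_donation - donor_relief
    return gross / cost_to_donor

# A £80 donation: charity receives £100, net cost to the donor is £60.
print(gift_aid_multiplier(80.0))  # 5/3 ≈ 1.667
```

So every £3 of net cost delivers £5 to the charity, which is where the 5/3 multiplier comes from.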

Plan

I should learn enough about the following EA meta organisations to write a page on them in similar depth to the one I’ve already done on GWWC:

I should do at least a superficial investigation into these:

I should look at the EA wiki and see if what I’m writing has a natural home there.

Not investigating

Prior strategy

Give something like 75% to AMF, 15% to GiveDirectly, 10% to DtWI. This roughly follows GiveWell’s recommendations from quite a long time ago, though not exactly – maybe I did something like increase my AMF share without increasing the other ones; I don’t really remember.

Major challenges to prior strategy

GiveWell updated

GiveWell’s advice is to just go for AMF now. If I don’t do something radically different, I should at least do this.

AI risk

Some people think that AI risk or something like it is overwhelmingly the most important cause. That’s not by itself enough for me to pay attention (e.g. I don’t make a serious effort to find out whether there is a God, even though the claimed cost/benefit is huge), but coupled with a sense that concern about AI is increasingly mainstream (Hawking et al.) it is hard to dismiss. I’ve put this off for a long time, partially because I know that contributing to AI safety would be basically unsatisfying. That’s not just emotional bias, though: it would be unsatisfying partially because the opportunities to learn are pretty limited, and without good feedback loops I think EA is extremely likely to veer off course.

I generally don’t hear people rave about X-risks that are not AI risk. That puzzles me a little bit, but I guess the path forward for (say) biosecurity is not as clear.

Meta

Some people think that meta-organisations are overwhelmingly (perhaps an order of magnitude, perhaps more) more effective than direct donations, because of multiplier effects. I’m sort of suspicious of this kind of thinking because it feels, on some dumb gut level, like the same kind of thinking that makes pyramid schemes sound like a really good idea: making a few naïve and evidence-sparse guesses about payoffs can lead to some wildly wrong conclusions. That said, if you believe in the EA movement (which seems like no small if, all on its own), then it does make sense that growing it would be a big deal.

But maybe slow growth is actually the right thing for the movement – a lot of organisations seem very much to be learning on their feet and developing in all sorts of interesting ways, so we don’t want to burn through all of our novelty appeal before we know what to do with it, or have people make up their minds about us before we know what we want them to think. All of that is a bit speculative, though, and the obvious view to take is that growth is good and more growth is more better.

Non-challenges to prior strategy

I downweight animal welfare a lot compared to many EA figures. This is because I value a lot of things other than suffering / happiness, and the other things I value seem to be particularly human, like meaningfulness, curiosity, discovery, and self-actualisation.

That said, I’m a vegetarian, and that’s mostly in acknowledgement that animal welfare doesn’t have to matter much relative to human welfare for you to want to stop factory farming, because factory farming is so terrible. I don’t know exactly how terrible, because I haven’t researched it – this is a problem, although one I’m seemingly willing to put off forever. Anyway, the point is that there is, at least in principle, some amount of terribleness that would make it the most urgent cause, even though it doesn’t promote my non-hedonic values.