Things I learned about being more effective at Effective Altruism Global 2016

Making better arguments, the power of upstream multipliers, outstanding career capital, and more

Wednesday, Aug 10, 2016

How to solve (reduce uncertainty while regularly updating your approaches towards) your problems. - Stephen Frey, Planning Under Uncertainty

I went to Effective Altruism Global 2016 this past weekend in Berkeley, CA, and came away with a lot of great thoughts from the sessions and talking with folks. Since it was my first EA Global, I went to a good number of sessions. Here are my key take-aways – you can also read my raw notes from some of the sessions. (If you also went to EA Global, I encourage you to share, even if brief, your key take-aways, and email me if you do!)

I’ve listed and provided a one-sentence gist of each one in this table of contents, so you can click into whichever one seems interesting to you. Or, you can just skip the contents and read the first one.

  1. Invent technologies that invent technologies — Developing better tools (research tools or core technology) can be a multiplier on the possibilities of what you can do with research and technology.
  2. On arguments: you know “what would change my mind?” better than you know “what would change their mind?” — In a disagreement, we’re not good at knowing what would change someone else’s mind, so each person should specialize to the question they’re best at, and then exchange notes.
  3. Making arguments more objective with subjective-to-objective conversions — Using CFAR’s double crux method, we may sometimes be able to convert disagreements about subjective questions into disagreements about slightly more objective questions—which can be answered with data more easily.
  4. The power of multipliers—people who help get other people into impactful areas of work — People who advocate for others to work in a neglected but impactful area of work can have a huge multiplier impact.
  5. The types of outstanding career capital—of which I need more — Outstanding career capital, like social impact achievements, extensive resources, or cutting-edge expertise, stands out much more than credentials, and we should be intentional about earning it.
  6. Generating new models from just the information in your head — That we’re able to sit down and generate new models and ideas without any outside information suggests that we haven’t explored all of the implications of the information we have in our minds at any given time.
  7. Find problems that nerd-snipe you — Speaks for itself. Also, being nerd-sniped by a problem might be a signal that you understand a field well enough to recognize what an interesting problem looks like.
  8. Vicious rock-paper-scissors — Since my top priority at school is doing schoolwork, when I’m burned out on schoolwork, I don’t switch to the second-best thing, which is reading or personal projects, but instead to reading Reddit or something. Malcolm Ocean suggests this could be because it seems wrong to your brain to consciously choose the second-best option.
  9. Meeting people with intentionality — Alton Sun uses a set of questions to ask people at the conference. In general, being intentional about meeting people and asking questions that are shortcuts to what people care about makes for illuminating conversations.

1. Invent technologies that invent technologies

Certain research discoveries have the potential to be the precursor to many other research discoveries. One thing that can have a huge impact is research tools. The way I think about this: if research is a tool that we use to illuminate some aspects of reality, we can do several things to illuminate more of it. We can have more people illuminating things, in parallel. But we can also have people working on the illumination process itself to make the tool better. In some cases, this can lead to huge downstream improvements in how much illumination gets done.

Of course, we can see this in many cases in real life. The one I’m most familiar with is brain imaging, and specifically fMRI (though that one has recently come under fire). When used correctly, the level of illumination that these tools bring to understanding the brain can completely change a field of research or create new ones.

And that’s also the case with technologies that create technologies. Generalizing from research tools that help us better illuminate reality: better technologies increase our capacity to create further technologies. What comes to mind here are mobile platforms and the blockchain. While this concept is widely known, it’s good to be reminded of the power of creating technologies with a multiplier effect, which can have large downstream consequences.

What’s not very well known is the difference in value and risk between working on developing tools (new technologies) vs. using existing tools to make things. If the downstream value of developing tools in a field is, say, 10, and the downstream value of using existing tools to make practical things is 1, and the risk of developing tools is 20x higher than that of working on making practical things (e.g. if the failure probability or difficulty of developing new tools and technologies is higher), it might make sense to just use existing tools to make things.

But there must be neglected fields out there where the risk-adjusted (expected?) value of building tools is pretty high, because if the value of tools is 100, and the value of building practical things is 1, even if it’s 20x harder to make tools than practical things, it’s still worth developing research tools because they have a higher expected value. (I’m reminded of the lack of massive innovation in the financial technology sector.) This is pretty abstract, so let’s get concrete: certain fields are in need of better tools, and in certain fields, the risk-adjusted value of developing a tool might be really high, and maybe higher than building things with tools. If we can figure out, roughly, which fields this is true in, those fields could be good targets for optimization.
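The back-of-the-envelope comparison above can be sketched in a few lines of code. The payoff and difficulty numbers are the hypothetical ones from the text, not real estimates, and dividing payoff by relative difficulty is just one simple way to model risk adjustment:

```python
# Hedged sketch: compare the risk-adjusted value of building tools vs.
# building practical things with existing tools, using the hypothetical
# numbers from the text above.

def risk_adjusted_value(payoff, relative_difficulty):
    """Expected value when success is `relative_difficulty` times less likely."""
    return payoff / relative_difficulty

# Baseline: practical things pay off 1 at normal difficulty.
practical = risk_adjusted_value(payoff=1, relative_difficulty=1)    # 1.0

# Scenario 1: tools pay off 10x, but are 20x harder to build.
tools = risk_adjusted_value(payoff=10, relative_difficulty=20)      # 0.5
assert practical > tools  # here, just using existing tools wins

# Scenario 2 (a neglected field): tools pay off 100x, still 20x harder.
tools = risk_adjusted_value(payoff=100, relative_difficulty=20)     # 5.0
assert tools > practical  # now tool-building has the higher expected value
```

The point of the sketch is just that tool-building wins whenever its payoff multiple exceeds its difficulty multiple; the hard part, as noted above, is figuring out which neglected fields actually look like scenario 2.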

Thanks to Luca Rade for pointing me to Ed Boyden’s talk related to this. Video here.

2. On arguments: you know “what would change my mind?” better than you know “what would change their mind?”

An insight from a workshop taught by Andrew Critch on a technique from CFAR: when you disagree with someone on something, it’s best to think ‘what would change my mind’ instead of ‘what would change their mind’. Since you don’t have any direct insight into the other person’s mind, it’s better to think about what would change your mind on that thing. When both of you do this, each of you specializes on the thing that you’re good at—knowing what would change your mind—and both people can exchange the lists of things that would change their mind on the disagreement at hand. This reduces the talking-past-each-other that comes from trying to convince someone of something that they don’t even care about in the context of the disagreement.

In general, the underlying mindset is that you don’t understand another person’s mind as well as you might think. This is obvious, but I’ve found that keeping it in mind has made me a bit more mindful of the other person, and has moved me away from giving advice like ‘You should do <thing> because <reason>’ and toward ‘Have you thought about doing <thing>?’ or ‘Have people suggested you do <thing>? If so, why haven’t you done it yet?’

3. Making arguments more objective with subjective-to-objective conversions

The continuation of the above is this: when both of you share the lists of things that would change your respective minds, at times there may be common elements. The workshop leader, Andrew Critch, gave the example of ‘should our organization make fundraising our top priority in 2016?’ with Alice saying ‘yes’ and Bob saying ‘no’.

After Alice and Bob think about “what would change my mind” and compare notes, if Alice thinks ‘if money is not a bottleneck, I’ll change my mind’ and Bob thinks ‘if money is a bottleneck, I’ll change my mind’, then we have what’s called a double crux, a crux shared by both parties. At this point, we’ve narrowed the original question of fundraising down to the question of whether funds are a bottleneck.

One thing not mentioned during the workshop that I noticed: in this case, we’ve moved from the more subjective question of ‘should we make fundraising the organization’s top priority’ to the slightly more objective question of ‘is money a bottleneck’. The latter question can be more easily answered with data—in a sense, we’ve made the argument more tractable by doing a subjective-to-objective conversion, or at least made it more objective. And since each conversion gets us incrementally closer to objectivity, in some cases you can iterate the process to get progressively more objective questions. This won’t always happen when using something like double crux, but when it does, it can be pretty powerful.

4. The power of multipliers—people who help get other people into impactful areas of work

People who didn’t have lots of direct involvement in advancing a field through research, but helped popularize it and get more researchers passionate about it, can have a huge impact. Let’s say that, hypothetically, some unknown person was influential in getting five scientists interested in a field, and those scientists made massive discoveries that moved the field forward. Even though nobody knows the person who influenced those doing the object-level work (the actual research), their influence, I would say, overshadows that of any individual researcher they influenced.

That’s the whole idea behind the career coaching at the excellent 80,000 Hours, people advocating for more research to be done in neglected fields like AI safety and existential risk, and other people working on advocacy. Despite the lack of prestige in this position, this is an upstream multiplier—kind of like ‘invent technology that invents technologies’—that can have a significant downstream impact.

Of course, there can’t just be a hundred advocates and one researcher in a field, so there has to be some right ratio of advocates to researchers. Maybe we need people to advocate for the neglected field of figuring out the right ratio of advocates to researchers in a neglected field. Seriously though, meta-meta-research aside, it would be useful to know whether a given field needs more advocates or not.

Thanks to Zach Schlosser and Daniel Colson for discussing ideas related to this.

5. The types of outstanding career capital—of which I need more

Ben Todd from 80,000 Hours did a talk on advanced career planning, and he talked about forms of career capital that are most valuable. These are:

  1. Impressive social impact achievements (which stand out more than credentials and open the door to meeting high-performing people)
  2. Extensive resources or network
  3. Cutting-edge expertise

These are commonsensical, but how much of our time are we really spending on building outstanding career capital like this? I realized I’m not spending enough: while I’ve been focused on doing well academically at school and have more or less succeeded on that front, a high GPA is only a slight differentiator. On the other hand, impressive social impact achievements or cutting-edge expertise absolutely stand out more than credentials. My goals have updated in that direction, and I’ll be using this list as a barometer and something to check when I’m planning. More often than not, I find that something qualified as outstanding career capital only after the fact, instead of intentionally doing things that constitute outstanding career capital from the beginning.

This was driven home by the number of ridiculously impactful and impressive people I met at EA Global—which is one of the reasons I loved being there: not being the smartest person in the room by a long shot is a great motivator.

6. Generating new models from just the information in your head

One thing mentioned during a workshop by Emily Crotteau was fascinating: you can sit down and generate new models and ideas without any new information—just the information you have in your head. That means you haven’t fully explored all of the latent information in your brain at any one moment—and by sitting down and building models, you can traverse those nodes in your knowledge graph and expand them. It’s fascinating to think of the depth of unexplored information that is already in our minds, and it’s also a great reason to get lots of different kinds of information into your mind, so you can make more connections as you sit down and expand them.

Or even better: have a conversation about them. For me, sitting down and just thinking and expanding ideas in my mind doesn't work as well as writing them down (Kevin Kelly: “I write in order to think… I don't actually know what I think until I write it. Writing is a way to find out what I think”) or talking about them.

I suppose I need some sort of writable buffer to record where I’ve been (writing something, saying something) to feel comfortable and anchored enough to explore adjacencies, and the act of articulation might make ideas and their adjacencies more concrete. And if you’re talking to someone else, that’s an increase in the variety of information that can be expanded and shared, not to mention that your expansions also trigger expansions in someone else’s mind, seemingly out of thin air. Neat!

7. Find problems that nerd-snipe you

This is a simple one. I haven’t heard this one in a while, though most folks in EA/rationality are familiar with it, so I’m throwing it in just in case. I heard it a lot at EA Global, and love the phrase. It’s also neat that the fact that some problem nerd-snipes you probably means either that 1) you understand the field well enough to know how to interpret a problem and what an interesting problem looks like, or 2) the problem is commonsensical enough that you’re able to understand the gist or importance of it without knowing the field behind it (maybe it uses an analogy to something you’re familiar with). In that vein, the feeling of getting nerd-sniped by a problem might be a signal for how well you understand the basic ideas in a field.

The feeling of being nerd-sniped is pretty great. The most recent one I can think of is personal knowledge management systems. I’ve been writing about them and building them and can’t stop thinking about them. It’s especially great because I feel like I have the competency to go and build software to attack that problem. I’m looking forward to gaining that level of competency and interest when it comes to real-world AI problems.

Thanks to Nate Soares and Patrick LaVictoire from MIRI for bringing this up.

8. Vicious rock-paper-scissors

One problem that I’ve come across when it comes to productivity is a maladaptive prioritization behavior I have. During the semester, I prioritize school over everything else and dedicate most waking hours to doing well at school. However, my efficiency isn’t as high as I’d like it to be, because when I start to get tired of doing schoolwork, I don’t do the thing that’s the second-best thing to do—read a book or do personal projects—because ‘that’s not what I’m supposed to do with my time,’ especially if I’m not as far as I want to be with schoolwork.

Paradoxically, I then do the thing that is easy to do but not as valuable as reading or doing personal projects, like going on Reddit or cleaning my room. Despite the fact that the chain of value (from most to least) goes A) schoolwork, B) personal projects, C) reading Reddit, I choose C when I don’t want to do A, instead of choosing B, the better option.

Malcolm Ocean calls this vicious rock-paper-scissors, though the way we talked about it was somewhat different. He suggested that while choosing B) personal projects when I don’t want to do schoolwork is the best thing to do, it requires conscious effort, and consciously choosing personal projects when schoolwork isn’t even done yet goes against what I consciously think is the right thing to do. Instead, I less-than-consciously choose to go on Reddit or something, since that doesn’t feel like a real “decision”.

Going on Reddit feels more like I’m taking a break from the most important work and less like I’m wasting time doing something that is not the most important work—but obviously, my time would be better spent reading or doing personal projects instead. There’s also the problem where reading Reddit or something can be framed as ‘something I’ll do for a few minutes, and then get back to work,’ unlike working on personal projects, which is a more involved process.

The best way to counteract this, I think, might be to reframe reading or working on personal projects as a form of rejuvenation in service of the most important task, schoolwork. I’m not sure if this will be effective, since willpower and energy are also part of the equation, but the important thing is to try different strategies to bring my behavior back in line with what has the highest value. The root cause, though, is my over-optimization on schoolwork as my top priority, to the point where anything other than schoolwork feels like wasting time. That provokes the adverse reaction of doing something with even less value, because then I don’t have to explicitly say ‘I’m not doing schoolwork’.

Thanks to Malcolm Ocean for discussing this.

9. Meeting people with intentionality

I love meeting people, but I hate networking. (Thanks to Twitter, I now know this is a common feeling.) As an introvert, I want to be in interesting conversations with interesting people, but I don’t like the process of getting to that point.

Alton Sun published an awesome post about how he meets people at the event. He’s very intentional about the whole process, and one thing I really like is the question of ‘what updates have you made recently?’ (In rationality talk, an ‘update’ is when you change a belief based on new information.) I asked people a similar question of what updates they made as a result of the conference and what led to that update, which was fruitful in getting people to talk about stuff they cared about.

The great thing about EA Global is that 1) you can be quite confident that people are there to have in-depth conversations and aren’t just looking for small talk, which is not always the case at other events, and 2) people are more open to talking about these sorts of things, like updates, which other groups may not be as open to when meeting someone totally new. I think these are characteristics of really open and engaging groups.

In a more general environment, having conversations with intentionality and asking questions like “what have you changed your mind about recently?” or one of my favorites that I stole from Quora, “What's the most unexpected thing you've learned along the way?”, are like a shortcut to what people actually care about and thus a shortcut to illuminating discussions.

Thanks to Alton Sun for the prompts and inspiration.

By Mark Bao

I write about behavioral science, personal growth, strategy, and the true nature of things under the surface.