IN THIS EPISODE: Journalist Kelsey Piper interviews Convergent Research CEO Adam Marblestone and Professor Paul Niehaus on the inputs to scientific production. They talk through the funding ecosystem, labor force, the culture of scientific labs, and the search for important questions.
“Metascience 101” is a nine-episode set of interviews that doubles as a crash course in the debates, issues, and ideas driving the modern metascience movement. We investigate why building a genuine “science of science” matters, and how research in metascience is translating into real-world policy changes.
Episode Transcript
(Note: Episode transcripts have been lightly edited for clarity)
Caleb Watney: Welcome, listeners, to this episode of the Metascience 101 podcast series.
In this episode, Kelsey Piper, a writer at Vox, leads a conversation with Adam Marblestone and Professor Paul Niehaus. Adam is the CEO at Convergent Research, working to launch new science institutions using a model called “Focused Research Organizations.” Paul is a professor of economics at UC San Diego and a cofounder of GiveDirectly, a nonprofit focused on ending extreme poverty. Together, they explore what makes science tick, including the funding ecosystem, the labor force, culture of scientific labs, and the fundamental search for important questions.
Kelsey Piper: Adam and Paul, you both work on science and the process of how scientists pick which questions they work on, who they work with, how they get funding, how they get other access and resources. Paul, you work on this mostly as an economist in social sciences, and Adam, a lot more in the life sciences.
What we're really excited about here is comparing notes about the process of science: what's holding it back, what it would look like to do a better job of deliberately producing valuable scientific research, and how that differs across fields.
Paul Niehaus: We have been excited about this conversation. Adam and I both sense that the issues of problem selection – of deciding what to work on – are really big and important ones. I'm hoping we get into that.
Then having chosen a problem, questions of how you execute on that, how that's changing, how the requirements and the skills needed to do that are changing, and how the funding models do or don't support that. These questions interact with what you choose to work on in the first place and whether you feel prepared, equipped, and resourced to tackle a problem.
[00:01:49] Picking scientific questions with a long view for impact
Kelsey Piper: Do you want to speak to how you see that playing out? How do scientists pick which questions they work on? What are the big driving factors?
Adam Marblestone: Sometimes people think about this as grants and peer review constraining people in terms of what problems they can propose. I see it a bit more meta than that or a little bit more as layers of structure. Two observations: one, individual scientists might want to do things that somehow they don't have a structure or mechanism or incentive to do. And two – this is the one that I've been more focused on – if you sort of take a macro analysis of a field and you say, “Well, what would be the ideal thing to drive progress in that field?”
There's a question first of all whether scientists are able to work on that thing. Maybe that thing requires a different shape of team, or requires a different level of resources to actually go after the biggest leverage point in a field. Maybe they're not even incentivized to spend the time to write down and figure out what that leverage point even is in the first place.
Paul Niehaus: You’re talking about having something like a product roadmap in a company. Having an analog of that for science and being able to map out a longer term vision for where the thing needs to head and whether people actually have the time, resources, or incentives to do that.
Kelsey Piper: When people don't have that, when there's not a larger roadmap, is that mostly a lack of a person whose job it is to build a large roadmap? Is it mostly a problem of short term grants that incentivize short term thinking? Is it about turnover? What's going wrong?
Adam Marblestone: I'm really curious how this differs across different fields. Something that I saw in neuroscience, for example, is that there are several big bottlenecks in the field that are sort of beyond the scope of what an individual scientist can just go after by themselves. Scientists are often rewarded for technical excellence in certain areas, but those areas are themselves selected to be ones where an individual person can achieve technical excellence.
Maybe you need the equivalent of a project that's more like building a space telescope or something like that. That's not something an individual scientist can do. Then you might say, “Well, if they can't do that for their next grant or their next project, are they even incentivized to think about the existence of that? Or whether that thing should exist in the first place?”
Kelsey Piper: The ideal of tenure as a system was somewhat that it would help with that. You get away from the pressure of keeping a job and can think about big questions. Having demonstrated that you can make intellectual contributions, you can make the ones that seem to you like they really matter. Is tenure a system that functions to achieve that?
[00:06:15] GiveDirectly
Paul Niehaus: There is an implicit theory of change. In general, the accumulation of more knowledge is going to be good, and it's hard to know what kinds of knowledge are going to be useful. So it's just good to let everybody be curious and pursue whatever they're interested in. That’s the unspoken theory of change that a lot of people absorb when they come into grad school.
I think of it as optimal search. If you want to search a landscape and find the best alternative, there should be some degree of noise in that process. You could think of it as having some people that just do whatever they're curious about. Because I might sit here and say, “That looks like the best direction,” but I could just be finding some local optimum. That could be the wrong thing and I want to search globally, so it is good to let some people be curious and explore. Sometimes you'll have some of the biggest hits come through that.
But it is also good to have a big part that is more directed, where you have a pretty thoughtful theory of “I think this will be useful because it can create this type of value.” I don't really see much of that type of thinking: no frameworks and no teaching or training in it. That's really sorely missing in the social sciences.
For me as a development economist and as a founder of GiveDirectly – which does cash transfers to people living in extreme poverty – an example of a very motivating and focusing question is: how much money would it take for us to end extreme poverty? That's actually a tractable question in that we're close to having enough evidence to come up with pretty good numbers.
In the past, people have tried to do these things, but they're based on a lot of very tenuous assumptions about what the impacts and the returns of different things are going to be. But I'm talking about a relatively brute force approach to it. I'm saying, “let's find everybody and figure out how much money they need to get to the poverty line and get them that much money.”
That’s the assumption I need, but there is actually a bit more to it than that. I need some statistics for the targeting of this that don't really exist yet. Clearly, I need to start thinking about the macroeconomic effects of this kind of redistribution on this scale. For example, what would happen to economies and prices?
What I find exciting is it populates a whole roadmap – a research agenda that we can split up. Different people with different technical skills could work on different parts of it. We all understand that what we're working on feeds into this broader whole, which is this vision of being able to tell the world this is what it would cost.
By the way, I think it's something that we could do. It would cost us a fraction of a percent of our income if we all donated it. How motivating would that be?
I think it's great to encourage people to think about exercises like that. Imagine that you want to solve this problem or you want to make this decision, even if it's not something you're doing today, what would you need to know to do it? Then, build a research agenda around that.
Adam Marblestone: Do you think that that will spawn other questions that would actually lead to us being able to give those people that money? It seems like the obvious first step is that you have to know this. This is kind of the beginning of that roadmap: “let's quantify, what's the machine I need to make to end global poverty?”
Paul Niehaus: Yeah.
Adam Marblestone: What comes next?
Paul Niehaus: Part of my theory is definitely that if I told you the number and it was low, you would say, “I'd be happy to do my bit.”
Adam Marblestone: Mm-hmm.
Paul Niehaus: If you're telling me that if everybody gave 1% of their income, we could end extreme poverty, I will sign up to give 1%. Because then I'll feel like I've done my share. Yes, I feel like that could be a powerful motivator. To get there, we have to have a number that we believe and that's well backed by science. It's fun to figure out what that science would need to be.
Adam Marblestone: Is there an obstacle to you going and starting this thing? Is it whether you can get an NSF grant?
Paul Niehaus: That's a great question. I think it's time. You're right that with a little bit more time and with flexible funding, you could build a team around that. That'd be really exciting.
[00:08:54] The scientific labor force
Adam Marblestone: On the idea of tenure, my guess is that it works better in some areas than others. There are certain fields where the core of it is basically, “what does that professor have the freedom to think about and to teach their students about?” Then, the students are absorbing the intellectual tradition of the professor, and that's the essence of it.
Some factors make it not as simple as that, though. In biology, it's pretty heavy in terms of needing a labor force. The students, postdocs, and trainees in biology are also effectively the labor force of doing experiments. If you're the professor, you need to be able to get the next grant, which supports that next batch of students and postdocs. The students and postdocs need to be successful enough that they can get a grant of a certain size that would support, in turn, their own trainees one day. And on this first grant, you need to get preliminary data for the next grant.
There is also this need to not mess up your students' careers – if that makes sense – by choosing something too crazy. This has a pretty strong regularizing force on what people can choose to work on. Students will potentially select against professors that are doing something that's too far out there, even if that professor has tenure.
Paul Niehaus: This feels to me like something that social sciences and economics needs to be somewhat worried about. There are all these things that have changed in the last couple of decades, which I see as super positive: that it's gone from being primarily a theoretical discipline, to primarily an empirical discipline.
By the way, for people listening, if you took undergraduate economics, you might still think that it's primarily a theoretical discipline, but in fact, what most people do now is get huge quantities of data from the real world and analyze it.
I think this is great. We're more connected to reality than we were in the past. At the same time, it takes more money and it takes more people. We're starting to look more like hard science disciplines where all the dynamics that you're talking about come into play, but I'm not sure economics is thinking about that and about the impact that's going to have.
Adam Marblestone: I don't see this as inherently a bad thing. It's okay if projects become more resource intensive or more team intensive. It makes sense as you deal with more and more complex systems, right?
On the other extreme, you don't necessarily want each individual neuroscientist having to learn how to – in the olden lore, though it is not quite this extreme anymore – blow glass to make their own patch pipette to talk to that neuron. And you'd write your own code, you'd train your own mice, you'd do your own surgeries, and you'd make your own buffers, reagents, chemicals, and everything like that. There's this sort of artisanal tradition.
It's a good thing, potentially, if there are more teams and more division of labor. But it does mean that it's more expensive. The model of how you achieve that labor is still stuck in this world where it's more modeled on the theorist – where the primary goal is to transmit a way of thinking to your students – or modeled on apprenticeship, where students learn lots of skills, as opposed to what does it take to get that job done? What are the resources that I need?
You have a lot of people that are working in this very labor-intensive, very capital-intensive system, where they're nominally learning an intellectual discipline, but they're also kind of participating in this economy.
Paul Niehaus: Yeah. On the funder side, I feel like there's very little of that. We're in this world now where research capital matters a lot, and what happens is largely a function of what can get funded. But at the same time, I don't feel like there's much return to being good at allocating the capital. It's largely seen as a chore.
I get invited to serve on these committees where we decide who's going to get a grant for a project. It's something that you do out of a sense of professional obligation, but nobody is thinking like, “Wow, I could have such a huge impact and this could be like a big part of my legacy to be the person that picked a project that ended up being transformative.”
The same way that if I were like a VC, I'd be like, “Yeah, there's going to be one entrepreneur that I pick and bet on that's going to make this firm, make this fund, make my reputation.”
There isn't anything like that, so I do it as quickly as I can and then get back to my own work. But maybe I should be incentivized to really invest in that and figure out how to get good at it.
[00:13:21] The field architect
Adam Marblestone: Yeah. I would go much further and say that there is a role for strategists, architects, and designers, in terms of what we need in a given field.
I'm curious where this lives in economics and social sciences. But it’s definitely a problem that we've come across in the course of thinking about how to map neurons in the brain or something like that.
Well, it turns out what I need is a microchip that has billions of pixels, switches billions of times a second, and is transparent. I need something that actually requires an engineering team. It needs a totally different structure than myself or my students or my postdocs or what a biology researcher would have.
You may identify that that's what's needed, but then you forget that thought quickly because there's no way you're ever going to be able to control a big division of a chip company in order to make that thing.
So you go back and say, “Well, what's the next actionable step I can take?” Ultimately, that starts to really shift things. You're no longer acting on the basis of what's the actual best thing to do. You're talking about what's the best thing to do within a very limited action space, assuming that all the other actors have certain incentives.
Paul Niehaus: I like that. We should have a ledger somewhere of ideas that died a quick and sudden death in Adam's brain because he didn't see them as viable. Maintaining a list of these things is what we're missing.
Adam Marblestone: Or maybe it’s that they take too much design and coordination. People say writing grants is a sort of tax on people's freedom, but I actually see writing grants as a time when multiple researchers are incentivized to coordinate. They can go in together on a funding opportunity which actually causes them to spend however many hours, brain cycles, and conversations co-designing some bigger, more targeted set of actions that are more coordinated with each other.
That's only up to the level of writing a grant of a certain size on a certain time scale and with a certain probability of success of getting it. Instead of three labs in different parts of biology or physics or engineering coordinating to write this grant and then we can get a million dollars, what if we're actually trying to find the very best one across the entire community and then that thing gets a billion dollars. What's the right scale of coordination and planning?
Planning on these different horizons is seen as something the NIH or the NSF is doing, but then they delegate a lot of the decision making to peer review committees that are much more bottom up saying, “What did we get in? Which is the best one?” rather than what's the ideal, optimal thing to do at a system level.
Paul Niehaus: One thing I've seen a lot of – which has really struck me – is that a lot of universities have this sense that it's important to stimulate and encourage interdisciplinary work. You mentioned collaboration between multiple labs, but also working with engineers or people in other departments. The standard reason for why we want to encourage that is because we think that the social problems we want to speak to are getting more and more complicated, and that no one discipline has all the tools that you need to address that.
You've given some examples that are consistent with that. But we've sort of realized – and we've talked about this at UCSD – that none of us really knows who the right people are to go to in computer science about a particular thing that might come to mind.
When we try to sort of artificially stimulate that by having joint hires or mixer events where everybody comes together, that just relies on serendipity and it really doesn't seem to work very well. The hit rate is not very high. I've been interested in this idea that what we actually need is to articulate some problems that help define who should be in the room to help to solve them.
Not “I'm going to hang out in computer science for a bit and see if I meet anybody interesting,” but more like, “Here's a problem that I'm really motivated to solve. I know I need a computer scientist, and I have to figure out which one would be the right one. Then, we write a grant application together.” To me, it's putting the cart before the horse to say we need interdisciplinarity to solve social problems. You start with a problem and figure out how to put them together.
Adam Marblestone: I think there is value in random collisions. But that value lives in this very circumscribed space where the best outcome is a couple of these people writing a grant together. What you really want is an industrial-grade level of coordination, planning, and systematization. That's not to say that there isn't a lot of serendipity and things bubbling up there as well. But it's interesting that we both see this planning or coordination gap.
Paul Niehaus: When you say industrial grade, what do you mean by that? A lot of people get into the profession and academia because they really cherish the freedom to work on whatever they want to work on. They don't want anybody to tell them what to do.
As we're discussing, there are actually a whole bunch of constraints that really limit and narrow what you can do. So that's all still there, of course. But I think a lot of people are very resistant to anything that feels like somebody is telling you what to do your research on.
At the same time – as you say – in order to get the right teams together to tackle these big complicated problems, it's actually really critical that somebody is thinking about this. Who would be the right people? Maybe there’s a soft leadership of getting a bunch of people excited about a shared project or vision because they can see the social value that it could produce.
I don't think many of my colleagues see that as part of their role, but that could be an exciting role for someone to play.
[00:20:03] Indicators on the value of scientific questions
Adam Marblestone: Well, I think there's a question of non-academic skills as it applies in research. Who's the best person to collaborate with in computer science – there's a lot of assumptions behind that, right?
There's an assumption that the person is a professor who's studying a research question in computer science, and they have the labor force that is their students. What if the best person to collaborate with in computer science is a 20 person software engineering team or something? I don't know.
I guess my interest in this is: what are the processes that lead to identifying the best, ideal actions that could be taken within the space of the next steps in research? Then, can we work backwards from that in some way? Who articulates that? Whose job is it to articulate that?
And you may be wrong and a huge amount could be serendipitous. It's not that there's one dictator that describes that. But is there a process of figuring out what this research field should do that goes beyond local actors?
I mean, it's interesting to me that you see the same thing. I've often thought of this as well. If you think about neuroscience or biology as my home field, the brain is just so complicated. We need so much engineering. We need so much to deal with it. It sounds like some of what you're seeing in social sciences has a similar character.
Paul Niehaus: I don't know that it's a function of the complexity so much. I think that the interfaces between the university and the outside world play this really critical role in sort of guiding us and giving us a sense of what is actually worth working on. That happens right now in a fairly haphazard way, at least in my discipline.
There are individual people who are super motivated to engage with policymakers or with business leaders, or with nonprofit leaders. They build these relationships and learn about what kinds of questions people are having. They end up becoming the arbitrageurs who bring those things back into the field and communicate about them to other people. But it doesn't happen in a very systematic way. Especially for a young person who doesn't have those relationships yet, maybe hasn't had a lot of training or background in the skills that would be needed to build those relationships, it's tough.
I see a lot of people start out and they quickly feel overwhelmed by just the volume of stuff that they have to learn in graduate school. “Oh my god, I just need to do something that has not been done before to prove that I'm competent and get a job and then get to tenure.” That totally resonates with me. And I get that.
But I think it's exciting to think about how to design universities and departments, where there's more intentionality in these things - where there's a systematic effort made to help connect young researchers to the people in the outside world that they need to be talking with to get a sense of what problems are likely to matter and make a difference. That could be part of my role as a mentor, an advisor, and an institution builder, and not just something that we leave to chance.
For example, I had a student recently that really brought this home to me. He has a really beautiful job market paper on FDA regulation and medical device innovation, which I thought was a great project. I asked him, “Who are you talking to in the medical device space about this stuff?” because we're in San Diego, which is a hub for medical devices. And he said, “Nobody.” It really stuck with me. He's a very entrepreneurial student by any standard. It's not a knock on him. Nobody sees that as part of our function to make sure that you're engaged with the part of the outside world that's relevant to you. That seems to me like such low hanging fruit.
Adam Marblestone: At some level, the research has goals, and it is a question of how it is doing relative to those goals. There's this idea of totally bottom-up, totally creativity-driven research. But in some sense, even a project like that has some societal goal.
Part of what you're saying is just inevitable, right? I mean, a graduate student needs to find a bite-sized project that they can hone their skills and prove their abilities, right? That's just core to what grad school is about.
[00:23:15] Ideal scientific architecture
Kelsey Piper: I feel like it keeps coming up that this is a system that no one would have designed for the purposes it's now serving. Partly that's because what it does has changed over the last couple decades, both in economics and in the life sciences. Another part of it is that no one was really designing it.
I'm curious, pie in the sky, if you were designing it, what would it look like? Not just making some of these changes, but if you were trying to design a good academic science process, what would you do?
Paul Niehaus: Lovely. I think we're trying to figure that out. At least for the social sciences, I think that one thing you'd have is much more intentional investment in the boundaries between the university and the outside world.
Right now when people come into graduate school, they get really good, high-quality training on what we already know, and then are left to themselves to figure out what we don't know that would be worth working on. Those two things would be at least roughly balanced if you were designing a program from scratch. You'd have people start thinking and talking about it from day one.
We give people two years of training on tools before we expect them to start doing stuff. I think what you'd do instead is: from day one, we're going to be talking about what's important and what we need to know. People are constantly iterating and thinking about that, and the tools are somewhat more on demand. More like, “Once I figure out that this is the problem I'm going to work on, then I know I need to go and learn how to do this or how to do that.” In terms of pedagogy and the way you'd structure it, I think it would be much more flexible.
There are people who think that we want to change the incentives in deep ways as a way of getting at this. Instead of tenuring people based on how many publications they have in top journals, let's tenure people also as a function of how much real world impact they've had. Let's look at some of these other metrics. There are some efforts underway in this direction and I think it's interesting. There may be some scope there, but I have some doubts about it. I have pragmatic doubts that all that much is going to change.
My deeper question is that this stuff is really hard to measure, and I think it can open the door to a lot of politicking and a lot of favoritism. One of the things that's nice about our system – imperfect as it is – is that nobody disagrees about how many publications you have in the top journal because that's how many you have. It’s a little bit harder to bring in your friends and things like that.
My instinct is actually not to worry too much about that, but to focus on the real challenge of figuring out good problems to work on. It's a really hard problem.
Adam Marblestone: A couple of interesting observations there. One is that there's something that wasn't ever really that purposely designed. There were some principles in it, but a certain number of institutional structures or incentive structures have ended up getting scaled massively over time. When you come back and you say, “Well, what if we design it differently?”, that feels like top-down interference now. The thing that has scaled is something that has a lot of role for peer review, for the individual. I mean, you get to choose what grant you submit. And other people who are your peers will be on a committee and they will review it. It won't be some program officer or some person from industry or some philosopher who says, “No, you actually have to do this thing because this is better for society” or something like that.
Who else can judge what these biological scientists in some really niche field can do except other biological scientists in that really niche field? That makes sense that that has emerged. You can kind of understand why this thing has scaled. It's kind of democratically very appealing. If someone else is interfering with you, you say, “No, no, no.” But if it's your peers, “Okay, that's all right. They can interfere.”
What I would design is not really one thing, but it's just much greater diversity of different tracks, structures, and incentive pathways, within science very broadly. Certainly, there’s a role for the form of training that emphasizes technical excellence in certain areas and emphasizes finding your own niche that's very different. Your PhD thesis, by definition, is something that's different from somebody else's PhD thesis and represents your own skills.
There should be a track that is what we have been discussing like a field strategist track. That's more about the road mapping or problem identification. There should be tracks that are more entrepreneurial of how you grow and build a larger non-academic structure that's meant to accelerate science or that's based in science in some way.
I think some of that is emerging organically, and some of it less so. Y Combinator, deep tech startups, and the startup boom has had a huge influence in terms of how students and postdocs see their careers. One option is that you go into academia, the other option is that you go and co-found or join a biotech startup. And that's a very different mindset.
You do see that filtering back. When you are that grad student, you're thinking about that pathway, and you potentially prioritize what you're doing differently. But maybe there should be many, many more of those types of tracks or niches. Maybe there should be certain systems that are more optimized for very early stage fields and very early stage discoveries where peer review looks very different. Then, a different structure is put in place for more mature fields, where you're filtering out the noise versus generating any signal in the first place.
It's a much greater diversity of structures that would end up being designed. They would circumvent this problem of “Oh, there's this dictator saying how science works or how individual scientists work.” It's more that you have enough different ponds to swim in that you can choose.
[00:29:22] Bettering the funding ecosystem
Paul Niehaus: Could I pick up on the dictator thread? Also what you said earlier about peer review and thinking about funding particularly. We've been talking a lot about the way you could design a university or journals or gatekeepers differently, but the funders are obviously an important center of power for all this.
One slightly controversial view that I'm coming to is that peer review is something that makes you feel safe that your opinion is never all that consequential. Nobody actually has to take responsibility for the decision. Another word for peer review in the rest of the world might be “decision making by committee.”
Is there room for funding models where individual people take on more responsibility for the decisions and are identified with the success or the failure of those decisions? They're free to do things like, “I'm going to make a bet on this person because I think the kind of things they're doing are great.”
Adam Marblestone: I think this is a huge issue.
Why is it so hard to design these? Why hasn't the world just emerged with lots and lots of niches and program structures and incentives? I think part of it is that funders are also in their own kind of evolutionary struggle. If you're a new funder and you come in and say, “I want to do something different,” well, who judges that? If you're funding this area of science, there's no notion of expertise other than the luminaries in that field. If you don't have endorsement for your program or who you funded from the luminaries in that field as they exist now, you as a funder will not have legitimacy.
You have to have something that has enough horsepower or strength to bite that bullet and say, “Look, we're making a choice. We think this person has a vision, and we're going to let them do this.” By definition, there will be peer reviewers that will say, “This is not as good as what you could have done with a hundred R01 grants or the more traditional structures.” What is it that allows you to survive that shock either as a funder or as an individual program officer?
The system has proliferated and it is judged by its own members. And there's also no obvious alternative to that. Science is so intricate that you couldn't really ask a product manager to judge what the scientists are doing… unless you could, right? DARPA kind of does that with program managers.
Paul Niehaus: This is a core issue for economics as well. I've really been struck by the lack of diversity in funding types. Most funding is at the project level, but we're moving towards a production function that is much longer term and requires larger teams. You want to set things up so that I have an incentive to invest in the culture of my team and to invest in training younger people because they're going to be with me for a long time. And that there's room for them to grow within the team and take on more responsibility. All those things that you think of as part of building an organization.
But the funding models don't support that. The funding models are like, “Well, this looks like a good project.” And so, we might spin up a team, do the project and then wind it back down after a year or so.
Adam Marblestone: What would be your ideal, if you want a way of doing this type of research? Would it look more like something where students don't cycle out or grants don't cycle as often?
Paul Niehaus: Yeah, so, as we've said. We've been able to raise some funding like this for some of the things that I've worked on. I think you want a diversity of different types and different models.
You want to have some that can be on an initiative basis with some sort of broad agreement about the scope of things that the research team is going to tackle and the kind of team you need to put together to do that. Then also some ability to be reactive to opportunities or ideas that come up. In my own personal work, for example, what that looks like is that we do a lot of work in India, typically working with the government on potential reforms to large scale social programs that impact people living in extreme poverty.
This is super policy relevant work. I'm very motivated to do it. It often depends on these windows of opportunity where you get a particular government that has decided they want to prioritize a particular issue, and it's not too close to an election. They're willing to try some things and that's the right time for us to partner with them. We need to be able to react to that, which means we need to have the organization and the capital already in place. At that point, we can’t be going out and filling out an NSF application and waiting for three to six months to hear back from them.
We have been able to get funding support like that, but I think most people have not. It's not an established model. Idiosyncratically, we found foundations that have been willing to back that approach.
Adam Marblestone: I've heard of the need for that in some other spaces too. Like if a volcano erupts, you need to go and study that volcano.
You need to be able to immediately get people and sensors and get data collected. That means you can't be applying for an NSF grant that will take another six months or a year to come through and then hire and train that student. You have to actually deploy quickly. That’s an interesting niche example where the systems aren't set up super well to do that. We have government agencies that operate that way, but do they have the exact right scientists that need to be involved in that?
Paul Niehaus: Yeah. We have had things like Fast Grants, with Tyler and Patrick experimenting with models where the money can get out the door faster. But if there's still a long lead time from getting the money to putting your team together and building infrastructure and so forth, there's a class of problems where the money needs to have already gone out the door quite a long time ago for the research team to be able to execute on the opportunity when it comes up.
Adam Marblestone: Right. Then how do you sustain that? Is that sustained based on novelty or tenure? What is the driving incentive of that team or institute to exist?
I think it's amazing that certain types of larger infrastructure exist. Let's say in physics or astronomy, you have the Hubble Space Telescope. In principle, if some supernova goes off here, we could re-point the Hubble Space Telescope. There might be so many other areas where you need that.
Kelsey Piper: What funding options do you have if you're trying to do something that's outside the scope of a normal NSF grant – or outside the scope of the normal grant options in economics which I know less about? Is it individual philanthropists, individual people with a blog? What's the space there?
Paul Niehaus: For economics, that's right. There's a set of very well established sources that everybody knows about. You can apply to the NSF. In development, there's a fund, the Weiss Fund, which funds a lot of randomized controlled trials and is great. That's an obvious go-to source.
Then, I think if you want to do something that doesn't really fit into the box for those kinds of funders, there's this long tail of private philanthropy that a lot of students and young people are just not even thinking about. They really need to be told, “Look, you're going to need to be more entrepreneurial.” The decision making process is going to involve more in-person interaction with people. It may not be standardized like just filling out a form. It's going to be different. It's going to be like raising money for a thing. They're out there and I think helping make those connections is something that we focus on a lot now with the students in our program. I think it is super important.
Adam Marblestone: There's a pretty wide spectrum of different shapes of gaps. If you think of the small group of students and postdocs working on technically intensive and deep problems but within a five-year grant scope with preliminary data as the bread and butter, biomedical science is doing really well with NIH R01 grants. On either side of that there are pretty big gaps.
One gap is the ‘unknown next Einstein’ who has some totally new ideas that don't really have preliminary data. They don't really have experiments, they don't really have pedigree, it’s maybe more synthetic in some way. How do you support that person?
On the one hand, that's really hard because it doesn't work super well with peer review. But on the other hand, sometimes those people are just blogging, so it costs relatively little to support that person and let them think. I think we could be much better at funding the gaps in new ideas or new theory. In some ways, we're lucky that the world has progressed to the point where, as long as they have an internet connection and an apartment, they can do that work.
The other end of that gap – and the one that I've been a little bit more obsessed with or concerned about recently – is where you need that larger, more professionalized engineering team or you need industrial grade, or maybe you need this rapid response capability.
You need something that's not really the same speed and scale that you would associate with the academic traineeship or apprenticeship model. That's a hard thing because even speccing out such a project might take several people a few years, just to define what the engineering roadmap is. How much does it really cost? Who's going to be the leaders for that? It's more like creating a company. In the company space, there's the equivalent of a seed round that gets you there and everyone is incentivized. What's the equivalent of a seed round for the next Hubble Space Telescope? That doesn't really exist.
Paul Niehaus: One funding model that I like: at UC San Diego, we have a center that pairs this funding problem with the problem selection question that we started with earlier. What they do – and I'm interested in seeing more experimentation like this – is once a year they bring in ten of the top fund managers, like pension funds, and ask them, “What are your big questions?” They agree on a few of those and say that those are the top priority questions, and then attach funding and have a request for proposals, an RFP, linked to that. The theory there is that you're providing funding to work on things that have been pre-screened and selected precisely because they matter to somebody who is going to make a decision.
Adam Marblestone: I think RFPs can go a long way because there are these self-reinforcing positive and negative feedbacks.
If you imagine – well, there's no such thing as a seed round for the Hubble Space Telescope. On the other hand, if you were to give someone such an RFP and say, “What Hubble Space Telescope would you design?” – as long as it's not completely suicidal for their career to spend six months answering that question, then you do get a proposal for the Hubble Space Telescope.
Now the funder can go back and say, “Okay, actually that's what I want,” and so now offer you more money for you to spend more than six months and more than one student on this. You could actually bootstrap these things because the knowledge production does have a lot of positive feedback. On the other hand, everyone is always sort of doing that at their own risk. What if there isn't the next RFP that will take you to the next level? Then they say, “we did this crazy thing, but we're never going to be able to do it again.”
Kelsey Piper: I feel like this is a vision for how funders could solve a lot of the problems you have been talking about, almost unilaterally via a broader scope of proposals, more kinds of proposals, and more options to fund things. Is that basically true?
Adam Marblestone: Pretty much, yeah. Each one has to have a pathway. You imagine the person, what's the journey you want them to go on?
You want the person who designs and is ultimately the entrepreneur who creates the next Hubble Space Telescope. Or maybe you want one person to design it and then find the entrepreneur who then creates that. Or you want something else like you want someone who creates a new field.
At any given point in that process, they have to have something that allows them to take the time and effort to ask that question. If they need students or postdocs working with them, those people need to be able to do that. You need a series of steps that would ultimately lead them to the right place.
Everyone is always in competition. They're always working really hard to do the next thing or get the next grant or have the next result. They don't have time really to sit on their own and just design the Hubble Space Telescope. You need to help them get to that point. If you do that though, then there's a lot of room for directed funding structures and programs. That's just very underappreciated. It's hard to build consensus on whether any one of those should be done or is the best thing to do.
[00:42:29] Culture in science
Paul Niehaus: Yeah, in brief, I agree. I just feel like there's enormous scope for people to experiment with funding research in different ways. For anybody who has the capital and wants to experiment, those experiments are super valuable because they teach us about the kinds of research output you get from them. That would be wonderful.
I think it would be cool to talk a bit about culture and sort of cultural subgroups.
Kelsey Piper: Like culture in science?
Paul Niehaus: Yeah. Like I feel there's a subgroup of economists who think about the world the way I do and care about the same things. So when I'm with those people, it's great. I feel like other people may care more about other stuff, but who cares about them. I think that's really powerful.
I'd be curious to hear what Adam thinks about that.
Adam Marblestone: Yeah, no, absolutely. That is one of the real strengths of academia writ large: the huge diversity of it. It not being all that top-down means that these research cultures emerge. That is why it is in many ways different than, “Hey, we're going to go form a startup that solves economics.”
That's not how it works, right? You need a person who thinks in a different way to train a generation of students. Those students think in a different way and they perturb and challenge each other.
You build these cultures and that's a longer term development. But the more subcultures that you can support that way, the more paths there are for ideas to flourish and succeed, even if they're otherwise different – those people will become the reviewers that will legitimize a body of research that might in some other culture be not okay.
Really core to everything is that there are these medium-sized subcultures of very, very deep training, apprenticeship, and shared value formation. That's one of the huge strengths of academia, as opposed to the transactional nature of just going and doing something, hiring people and then firing them.
That's part of the key to it all. What's the level of diversity and richness of that culture? What actually sets that? There are definitely some fields that have ended up tabooed for whatever reason and they don't get to have a mutually supporting culture to nurture them.
Paul Niehaus: Oh, what gets tabooed?
Adam Marblestone: Just to give you a little bit of an off-the-wall example. An obviously great thing to do would be to freeze organs. Say, I want to be able to freeze my kidney. I want to be able to unfreeze it; then I have infinite transplants. That field – large-volume vitrification of organs – has been very marginalized because it's very close to the idea of cryonics. A mainstream cryobiologist will say, “You know, don't think about that. We can think about freezing sperm and eggs and doing basic science studies, but we shouldn't think about freezing entire giant hunks of matter that are the size of your body.”
Partly as a result of that, you can't really go to a biomedical engineering department, most of the time, and say, “I want to freeze an entire kidney or an entire brain and then unfreeze it.” It's too close to cryonics.
Paul Niehaus: I would never have guessed that. Does that also mean there's more of a role for individual courage in all this too? I don't know what your thoughts are on this, but I think a lot of what drives people in science is the quest for peer recognition – to feel like other people value what you've done and respect you and your contributions.
I think that's something to be excited about because I think it is very malleable. Getting papers published in good journals is certainly one marker of that. But it's very easy to create other communities. I definitely feel like I'm part of communities that value all the other things that I do, even if they're invisible and unmeasurable. In other ways, those are the things that people respect about me the most. I think there's a lot of scope for that.
At the same time, sometimes people are like, “Oh, my career incentives, blah, blah, blah.” I'm just like at some point just decide what your life is about and do that, you know what I mean?
Adam Marblestone: Yeah.
Paul Niehaus: Like stop crying about the incentives. If something's important, just do it.
Adam Marblestone: I don't know where it comes from, but: “the way to get tenure is to not try to get tenure.” Try to ignore those forces, and if you're maverick enough and you still survive, then you’ll actually do well. But if you just try to really follow the incentives, that actually ends up being pretty boring.
There is some of that dynamic. I don't know what allows it to exist, but what makes the system actually healthy is that the mavericks can still succeed. What is it that determines that?
Scientists are pretty smart sometimes, so maybe it’s that they actually see the value in something that's new.
Paul Niehaus: I think there's that version of it that's like, “don't worry, things work out in the end.” Even if right now everybody thinks you're crazy, in the long run, being a maverick is a good career strategy. People will eventually recognize the importance of what you do.
I think that can be true, but sometimes you may do a lot of good and other people don't value it. And you just have to be willing to have the strength of mind to live with it.
Adam Marblestone: Yes. I think that you need that. That’s part of what tenure does allow. A lot of people may not like what you're doing anymore, but you can keep doing it. And it's not so much that you're doing it – it's that you're encouraging other people and you're creating that culture.
I think this is a pretty subtle thing. What is this trade-off between self-censorship or the peer review element of things and the maverick, ignoring convention aspect? Maybe some of you have studied this, I don't know.
[00:47:54] Tradeoff between impact and academic convention
Kelsey Piper: I would expect that the optimal career strategy for maximizing your chance of securing tenure or a prestigious role is not the same as the optimal career strategy for impact on the world, right?
Adam Marblestone: Right.
Kelsey Piper: You can maybe affect how stark those trade-offs are and you can also maybe affect culture, where you affect whether people are willing to make that trade-off. Like whether people are the kinds of people who will say, “Yeah, I am trading off some odds of tenure for some good accomplished, because guess what? There's a lot of poverty.” But, there's probably always going to be some tension.
Adam Marblestone: Yeah. It's pretty complicated because people can realize that. The committee that's supposed to judge you can realize, “This is not the kind of thing that people are going to like, so therefore we should hire this person at our university because it's not going to be something that other people will buy into, but we understand.” It seems like it has a lot of complexity and feedback and this is exactly why you don't want a top-down product manager to determine what happens. You want a scientist to balance these trade-offs.
Paul Niehaus: Yeah, that's a good point. I want to add to that. On a positive note, I do think I've had that experience, personally.
I've spent my time on things that have not maximized my academic output, but that other people in my profession have valued. I've had professional opportunities open up to me – that maybe could have gone to somebody with more publications – because people respect the way I've spent my time.
Adam Marblestone: From that perspective, the thing that's a little bit scary or a little bit more dangerous in the system is not necessarily what happens in the end – imagine you do all this work, and then at the end, the wise people on your tenure committee make the decision.
It's that you never actually did the thing, because you had this peer pressure and you were afraid they never would approve it. Maybe in the end, they always would have said, “Yeah, this totally makes sense. You did this different thing, this is what science is for, and we understand it.” But all of your fellow students would've said, “You should never do that. This is never going to work. They're never going to pass you.”
Kelsey Piper: It does seem like a lot of censorship functions on the level of people not thinking about doing that, or people throwing the idea out there but not seriously committing to it – rather than on the level of “you seriously committed, you went and did it, and then you lose out career-wise for it.” But that's still a very powerful force.
Adam Marblestone: It's very powerful, but does that reflect a system that's really broken at the level of its basic decision making? Or is that a system that's messed up at the level of social transmission of what those decisions are?
Kelsey Piper: And if you go and say, “Oh, you have to do nothing but get published. That's the incentives.” Then maybe you're actually making the censorship worse compared to what I think you were just saying.
Adam Marblestone: What we should say is: “Hey, it's actually great, just do whatever you want, and you will always be successful.”
Paul Niehaus: To your point Kelsey, there was a survey recently done within economics about what economists think we should do more or less of.
And there's fairly broad consensus. People would generally like to see more in terms of real world relevance and impact. And they're open to the possibility you might have to give up some other things – some degree of rigor, for example, which is something we really prize. There's not uniformity on that, but actually creating common knowledge around that is very powerful.
Adam Marblestone: It's also different in different stages of fields too. There is a point where rigor is really important. As fields sort of scale, there's just more people, there's more opportunity in that field. You're going to have more things that are failing on technical grounds. You did your statistics wrong or something like that. As a field develops, you need to have standards and metrics, but at the beginning of a field, that's really hurting it.
I'm trying to create a totally new form of AI or something. Well, it doesn't pass my metric in terms of this loss function or something. Well, who cares, right?
You need to be applying these standards of rigor at different phases. Part of the problem is you go, “I'm in a psychology department. Okay, well, which rigor standard should I be applying? Should I be applying the ‘statistical analysis of fMRI in extreme detail’ level of rigor? Or should I be applying the level of rigor you would apply to a totally new idea or theory?” These kind of get mixed together at the level of journals and theses.
Kelsey Piper: I think there's something grounding there about trying to solve a problem. If you're trying to develop a drug that works, then I think that sort of answers for you how much rigor to go for. You want enough rigor that you won't waste your time if it doesn't work and you're not trying to convince people beyond that.
Adam Marblestone: Yeah, that's an interesting thing. It's maybe that some of these more industrial systems strike that balance better.
Kelsey Piper: I don't know very much about the culture of industry, but I do feel like there's something healthy about the thing you're aiming for with rigor. Like getting the right answer with as little data as you need to get to it, but not less.
Adam Marblestone: Right. Sometimes when industry comes into a field, it can have a clarifying or healthy effect. That's something that has changed positively, I think, over time. It used to be viewed as a universally corrupting influence if you have capitalism getting mixed into your science. But it can have a lot of positive effects, including the fact that an alternative to going on the tenure track is to join industry. In that case, during your PhD, you might actually be more crazy because you're not worried about what the tenure committee thinks. You're just worried about whether you have enough rigor to go to industry.
Kelsey Piper: You were saying earlier about how the option of industry is maybe good even for the people who stay in academia. Because they're more experimental, they're more ambitious, and they feel less like it's all or nothing.
Adam Marblestone: Yeah, exactly. It's very much what we were talking about. Don't worry, you'll always be okay.
[00:53:48] From a ‘doing’ career to a research career
Paul Niehaus: I've always felt that way. When I decided to get a PhD, I was deciding whether to get a PhD or go do something that was more like doing. The most influential conversation I had was with somebody who said something very simple: “it's easier to go from research into doing than the other way around.” And I was like, that's a good option value argument. So I did it, and that really paid off for me in my own career. That knowledge that it is a viable option is very liberating.
Adam Marblestone: Yeah. We should also create more ‘doing into research’ paths as well.
Kelsey Piper: Yeah. I think it has got to be common for people who are trying to do things to run into some fundamental theoretical questions that they would benefit from having an answer to for the work that they're doing. And it's very hard for them to go study those questions, partly because you need all of this experience to be a good scientist, and partly because there's no mid-career path to go get a PhD to answer the question you've already spent half your life on. That's a rare thing.
Adam Marblestone: I think there's maybe a good selection in some way for people that are incredibly bored with anything that anybody already knows how to do.
You could make an incredibly great car company or something like that, but at least somebody else already knows how to do that. Nobody understands the brain, so I'm just going to focus on understanding the brain. On the other hand, you want those people who know how to build a car company to come back and help us do neuroscience.
Paul Niehaus: Yeah. Very specifically, if there are people listening who are in that situation – where you're like, “I have this problem. I feel like I would need PhD-level research training to be able to answer it” – I want to talk to you.
In fact, what I want to do is build an economics profession that wants to talk to you. Because we need you in order to find good problems to work on, as much as you need us to solve the problems.
Kelsey Piper: Man, I have this interaction quite frequently. In tech, there are all these people who are trying to figure out things like AI and the progress of automation. They'll be trying to answer these questions that feel to me like labor economics questions, but they don't have a labor economics background.
I'm not blaming them for trying to work on those problems without the background. And I'm not blaming labor economists for working on better defined problems that don't rely on having access to secret models or whatever. But I'm nonetheless like, “Wow, I wish there was a way to use this knowledge that our society has to answer these questions that our society has a stake in answering.”
Paul Niehaus: There's a gap.
Adam Marblestone: You talked a little bit about what the ideal structure would be. Maybe you'd have more continuity, or maybe you'd have more industrial push. What would be the ideal project you want to apply that to in the social sciences? If you didn't have any funding constraints, if your students were empowered maximally to do what they want, what's most important?
Paul Niehaus: One way I've been thinking about this is that it’s good to be engaged, to build these relationships, to be listening to people outside the university when they tell us what problems they're dealing with and in some cases to be responding to that. But I also think that you do not want to be entirely customer driven and end up building a lot of faster horses to use the old metaphor.
It's also great for students and for researchers to feel free to say, “What is a broad goal that I would like to see accomplished in the world? What would I need to know to do that?” Go through that exercise yourself and sort of work backwards and I think that would end up looking a bit like one of these road mapping exercises.
[00:57:00] Benefits of a roadmap for communicating broadly
Kelsey Piper: I think another advantage of roadmaps like that is that a lot of people think of science as a bottomless pit into which a lot of resources go, and it's unclear how that corresponds to when problems get solved.
As a science reporter, you run into a lot of people who are like, “Oh, I heard that cancer got cured like 20 times.” That's a bad way to relate to the public, which is ultimately funding all of this. It would be different if there were a roadmap: “We're going to do these things, and – by solving those problems – we're going to get these results.”
I think that does a lot for trust. I think it does a lot for buy-in. A lot of people are willing to spend a lot of money when they understand how that money produces results. There's not a lot of clarity on that as a product of how the current system works.
Adam Marblestone: Yeah. The brutally honest roadmap that also takes into account that you could take some pretty non-traditional actions to get the thing done. It's not the worst case roadmap that it'll take forever to cure cancer or something. But you also don't want to say, “Well, we've done it already.”
Kelsey Piper: We’ve made progress on cancer. But imagine if you had said in an upfront roadmap, “For these particular childhood cancers, we can cut mortality by 90%” – and we've basically done this for many childhood cancers. Then it's clearer to people where our effort is going, what these brilliant researchers are doing, and how it's changed the world. That's just hard to see otherwise.
Adam Marblestone: Sometimes we would struggle in certain areas of science to do that all the way to the end goal. But you could say, “solve the bottleneck that's holding back these cancer researchers.” So the problem for the cancer researchers is they can't see the different cell types inside the live tumor, whatever it may be. You would do a roadmap for that and be very clear on that.
Kelsey Piper: Yeah, I think people can understand that there's lots of steps that might not seem directly on the road but are indirectly on the road. But when there's no visibility then it's quite hard to see where we're headed.
Paul Niehaus: There's this old heuristic that floats around in economics that you should be able to explain to your parents why what you're doing is interesting. That's not a terrible heuristic, but I think a better one might be, “you should be able to explain to a taxpayer why what you're doing is important.”
Kelsey Piper: I think we should fund science more, but I think part of that is making a stronger case that by funding science more we will be getting more things that really matter to everybody in the world.
Paul Niehaus: The one caveat I'd add is what I said earlier. I do think it's good to have some degree of noise in the process – people who are free to pursue any wild idea that they think is interesting – because directed search will tend to get us to local optima, but we'll tend to miss things that are not within our field of view.
I think that's harder to explain and to rationalize. Maybe I can explain it to people who are used to numerical optimization algorithms, but to the broader public, it's harder. I guess that's your job, Kelsey. You gotta figure it out.
Kelsey Piper: Well, step one is to convince the broader public of the numerical optimization algorithms.
Paul Niehaus: I genuinely believe that it is good to have some people in the world who are free to pursue whatever they think is interesting. But there should be more emphasis on stuff that's justifiable, rationalizable.
Kelsey Piper: One thing that stood out to me from earlier is that some people want to go do their pie in the sky thing that has no particular social benefit. Probably, to some degree, we want to let people do that. A lot of people – if they're doing things that have low impact on the world – are not doing these things because they don't care about impact on the world, but rather because they don't actually see a route to have that high impact.
Paul Niehaus: Yes. There are many people like that. And it'd be so straightforward to help find things that would have more impact.
Adam Marblestone: Sometimes those may not be straight shots. Sometimes they may be very indirect and it's in your optimization algorithm. You're going after this because it is the greatest uncertainty or the greatest state of confusion, and we want to resolve that state of confusion. But then that state of confusion is actually so big that you can justify it to your grandma. There's no way I'm going to be able to create aligned AI or whatever, unless I can understand something about how the brain does it or how consciousness works.
I think that the big scientific questions, in my mind, are not that hard to justify as relevant to the applied outcomes, if you're ambitious enough about it.
Caleb Watney: Thank you for joining us for this episode of the Metascience 101 podcast series. Next episode, we’ll wander into the details of the renaissance in new scientific funding models being explored including ARPAs, FROs, Fast Grants and more.
Metascience 101 - EP3: "The Scientific Production Function"