IN THIS EPISODE: Journalist Dylan Matthews sits down with economist Heidi Williams and IFP co-founder Caleb Watney to set the scene. They talk about the current state of science in America, what metascience aims to achieve, and what empirical experimentation in metascience is revealing.
“Metascience 101” is a nine-episode set of interviews that doubles as a crash course in the debates, issues, and ideas driving the modern metascience movement. We investigate why building a genuine “science of science” matters, and how research in metascience is translating into real-world policy changes.
Episode Transcript
(Note: Episode transcripts have been lightly edited for clarity)
Caleb Watney: This is the Metascience 101 podcast series. My name is Caleb Watney, I’m the co-founder of the Institute for Progress – a think tank focused on innovation policy in Washington, D.C. Last year, we brought together a group of scientists, economists, technologists, and policy wonks to talk about the state of science in the United States. What’s been going right? What’s been going wrong? And, most importantly — how can we do science better?
We recorded these conversations to understand the facts and the open questions in this emerging field. And now we want to make them open to the public so everyone can catch up to the frontier of metascience. We’re partnering with our colleague Tim Hwang and the Macroscience newsletter to bring this series to you.
We’ll talk about whether scientific progress has been “slowing down” (and whether that’s even a meaningful question), exciting new models for scientific advancement like Focused Research Organizations, how to think about the potential downsides of new scientific discoveries, and how you could make a difference in this emerging field.
For this first conversation, my friend Dylan Matthews leads a conversation with Professor Heidi Williams and myself on the basics of metascience and why anyone should care about this field in the first place.
Welcome to Metascience 101.
[00:01:36] Science is important
Dylan Matthews: So science is very important. That's kind of a trite statement, but the more you think about it, the more profound it seems. Much of the world's prosperity seems to derive from scientific innovation, from translating basic science into technology, and yet we don't have a lot of conversations about how science is doing and how to do it better.
This series of podcasts is going to talk about ways in which science might be slowing down or falling short, and ways in which we can improve it.
Maybe just to get a baseline: Caleb, do you want to give us a broad overview of the state of science in the U.S. and the world right now? What grade would you give it? What's going right, and what limitations could be worked on?
Caleb Watney: For sure. Well, off the top of my head, we'll go with a B minus. I think the U.S. federal government is the single largest funder of basic scientific research in the entire world. I think that's both really important for understanding the U.S. context, but also for understanding the global context.
Other countries obviously have their own scientific ecosystems, but the United States is the largest. We've got a huge concentration of the world's top scientists, of the world's top labs. And so the science that we do is not only affecting Americans, but it really can create innovations that can spill over and positively benefit the rest of the world in terms of new medicines, new energy technologies, all sorts of things.
Within the U.S., we have a couple of major players. First, there's the National Science Foundation, which focuses especially on basic scientific research. There's also the National Institutes of Health, which is more focused on applied biomedical research, though it does some basic research as well. Between the two, we're looking at around $60 billion every year, and that is continuing to grow.
Obviously, there's a range of private sector actors that also invest in research and development. We have tax credits to help incentivize private actors to invest in R&D. There's universities that have huge biomedical labs. There are a number of philanthropists that also support science. So there's a whole ecosystem here.
Dylan Matthews: So Heidi, as an economist, you've studied this, and your field is very preoccupied with the ways the government can help or hinder certain industries. Science is a really particular industry here.

What is the case for heavy government subsidies and intervention of the kind that Caleb was just outlining?
Heidi Williams: Yeah. A lot of why economists would make the case that this is a market where we really want the government to come in and intervene is that we think of new scientific knowledge as a public good.
Say I come up with a new idea for developing a drug and bring it to market, but because others can copy it, I end up having to sell it at cost almost immediately. It took me many millions of dollars to learn whether that drug is safe and effective for people, and to satisfy all of the manufacturing requirements we impose to make sure it's produced safely, and I'm never going to be able to recoup those expenses. So we put a lot of structures in place that acknowledge that the private market on its own would under-provide research relative to the level we might want as a society, given the value of new innovations for growth and progress.
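To make that logic concrete, here is a stylized back-of-the-envelope formalization; the symbols are illustrative and not something from the conversation:

```latex
% Stylized sketch of the public-good argument (illustrative symbols):
% F = fixed R&D cost, c = marginal cost, p = price, q = quantity sold.
\pi = (p - c)\, q - F
% If rivals can freely copy the discovery, competition pushes p \to c:
\pi \to -F < 0
% so the innovator never recoups F, and the private market under-invests
% even when the social value of the discovery far exceeds F.
```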
And so some of that is through public funding, like Caleb was saying, through the NSF and the NIH. Some of that is through policies that try to shape investment by private firms; he mentioned the R&D tax credit, and the patent system is obviously another thing you would point to. But in general, I think there's a sense that there isn't just one subsidy that addresses all of the things that need to take place.
Because a lot of the basic research that happens at universities isn't even patentable, it's not really reasonable to say that patents alone, which essentially let you charge a higher price for some period, are going to solve this underinvestment problem. Many of the basic discoveries made at universities simply can't be patented.
And so oftentimes the way that the structure gets characterized is that at universities, there's basic research that's generally grant funded or what you might think of as “push funded” – we're paying for inputs rather than outputs. And then as things get closer and closer to commercialization, they tend to transition out of universities into the private sector where we still might have an interest in subsidizing and encouraging more research, but we tend to do that more through tax credits and patents.
So the landscape here is really intricate: when do things optimally transfer out of the university and into commercial firms, and what are the right policy levers at different points along the way?
I would just say that at a high level, when we try to estimate whether we're spending enough on research in the economy as a whole, the estimates we have suggest that we should be spending a lot more, in the sense that the private returns to research look much lower than the social returns. So as a society we want to come in and subsidize research beyond what private markets themselves provide.
[00:06:59] Pharmaceuticals and patents
Dylan Matthews: One of the first detailed illustrations I've seen of this was a paper that you and some colleagues did on pharmaceuticals. Can you tell us a bit about that, and how it affected your thinking on the size of science as a public good and the scale of the problem here?
Heidi Williams: Yeah. Pharmaceuticals is an interesting sector because I think you get a lot of criticism of high drug prices and you get a lot of criticism of what are often referred to as “me too” drugs, which is actually a term that I don't love.
But, the idea is: “Are we getting innovations that are too close to past innovations rather than new breakthrough innovations?”
So there are a lot of people who look at the pharmaceutical industry and worry that we're getting too much innovation, because they see some drugs get introduced where they feel private firms are making more money than they should, relative to the social value of what gets invented.
There are definitely cases like that you can find; that's different from saying that, in general, we should be getting less health research. When I look at pharmaceutical markets, I tend to see these big swaths where we're hardly providing any incentives for investment.

And to say that things are uneven raises the question: what about the areas where we don't get enough investment? Would it be socially valuable to bring in more?
One of the areas we were interested in was the idea that new drugs often take a long time to bring to market, partly because we require clinical trials to show evidence of safety and efficacy. So if a drug gets discovered today and is published in Nature as a potential compound, it can be 12 or 16 years before it actually reaches consumers.
But you can actually develop some drugs quite quickly. So suppose I'm an investor looking at two equally profitable drugs: one is going to take two years to come to market, and one is going to take 16 years. That second one looks much more costly.
It's compounded by the fact that you have to file your patents on your drugs before you start your clinical trials, and every drug patent basically gets 20 years. If it takes you two years to get from starting your clinical trials to coming to market, you get 20 minus two, so 18 years of patent life.

Whereas if it takes you 18 years to do your clinical trials, you get 20 minus 18, or two years of patent life. So there's this whole set of drugs that take longer to develop where we're providing less patent protection, and I look at that and think, maybe we're getting too few of those drugs.
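As a rough formalization of the arithmetic Heidi describes (a sketch, assuming the 20-year term runs from filing at the start of trials):

```latex
% Effective patent life under a fixed 20-year term that runs from
% filing, which happens before clinical trials begin:
T_{\text{eff}} = 20 - t_{\text{trials}}
% t_{trials} = 2  years  =>  T_eff = 18 years of protected sales
% t_{trials} = 18 years  =>  T_eff = 2  years of protected sales
% Drugs that inherently take longer to test face weaker incentives.
```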
And so we collected a lot of data on cancer drugs, where it's easy to know, even if we've never had a clinical trial on a given type of cancer, how long it would take to develop a drug for that kind of cancer. And we used various statistical tests to ask: if we had a different set of incentives, say a technology that let us run shorter clinical trials, how many more drugs would get developed for these diseases where we've generally had very little innovation?
The basic answer we got was that we would get a lot more innovation. And when you tally up the health consequences of those missing drugs, the number of life years you would save by fixing that distortion is actually quite large.
We're kind of saying, I'm happy to acknowledge we get too much drug innovation in certain areas. I'm happy to say there's some “me too” drugs that aren't adding a lot of value, but there's this other area where we're just totally under-investing relative to the life years that we could be saving. And I think those kinds of numbers, even if they're very approximate, do a lot to call attention to the potential value of aligning incentives in a better way for innovation and science policy.
[00:10:31] From PhD to independent scientist
Dylan Matthews: So I think we have a framework here for how to think about why it's valuable for the government to incentivize scientific research above and beyond what industry would do. But as you've both been saying, there are a lot of different ways the government tries to do that, and a lot of different ways that private actors and philanthropies try as well.
Maybe one way to think through what this looks like is if you're starting a PhD program in a science, maybe let's say you're a physicist. What does funding look like for you as you enter a PhD program?
What's the maze you have to navigate through to get funding for your work? First as a grad student, then as a postdoc, then as a professor somewhere or a researcher at some lab?
Heidi Williams: One important thing about science PhD programs in the U.S. is that we're very lucky that, right now, a lot of international students want to come train at U.S. universities, because we're seen as offering a premier research environment.
In the sciences, you're generally fully funded as a student. That takes different forms: in my own field of economics, you're fully funded centrally, so you join a program and you have the flexibility to choose who you work with and you can change that at any point. In the sciences, it's much more common that you might come in and very quickly be matched with a faculty mentor to work on a specific grant in their lab, and so you very quickly get exposed to what grant funding looks like because you're being funded on a specific grant and your project is very tied to what the grant is about. You're immediately fed into that system. You finish your PhD and often it involves making progress on a specific set of things related to that grant funding that you had. And then most of the time in the sciences, you'll do one or more postdocs after that. Postdocs again are often more complicated for international students because they're often not eligible for certain types of postdocs.
Many labs might have some commercially funded postdocs through industry collaborations, they might have funding for specific projects through the PI, and there are an increasing number of early-career independence postdocs where you actually choose your own project. So your postdoc can look a lot like your graduate experience, in the sense that you're working on someone else's grant, or it can take one of these other forms that's more of a path to independent research.
So eventually if you continue in the academic pipeline, you're then applying for grants for your own lab. And you know, in the biomedical sciences, for example, if you apply to NIH, the numbers on this vary, but it can be 18 months between when you apply for a grant and when you actually get money back from that grant.
That time horizon really shapes the advance planning that you need and how you plan out the students you support. If you have a student on a grant, and say they're five years into their PhD, how long is the grant? Am I going to be able to support this student for that entire time?

There are these kinds of mismatches between student training lengths and grant lengths, and then just portfolio management. Science is meant to be experimental, so what if a project doesn't work? How do you adjust, and what is the structure for how grants adjust?
When you talk to scientists, it comes across very clearly that it's a very intricate balance: making sure that you have funding for projects, and making sure that the people in your lab are getting paid. It sometimes feels like that comes at the expense of asking, 'Am I working on the projects that I personally feel are the highest impact?' Because the resources in my lab are tied to specific commitments that were made, and to the time allocation of my students on specific grants.
[00:14:05] The changing landscape of team science
Dylan Matthews: Yeah, so it sort of seems like, as you go into the profession as a PhD student, you're set on a course of many years where you're primarily working on other people's projects. It takes a while to take on independent projects of your own, and once you're there, you're very dependent on which funders are available, what they say about your ideas, and which ones they like or don't like. Is that a fair characterization?
Heidi Williams: Just to highlight one extreme of that: a lot of medical schools and public health schools offer what are called "soft money" jobs, where essentially you're required to fundraise almost all of your salary in addition to any research costs you have. And so you get the sense from people that they're very beholden to what funders are interested in funding, as opposed to asking, 'What's my best idea that I want to take forward?'
And I just feel like that's a lot of the motivation behind some of the recent movement toward more person-specific funding, and behind asking whether we should transition off this "soft money" environment where you need to fundraise and other people are choosing which topics are most important for you.
Caleb Watney: Yeah. I think it's actually pretty interesting to trace how science has been traditionally funded. Across time and across history, a lot of historical scientists were funded through a kind of patronage system, which was in some ways connected to the work. Obviously, you would try to choose scientists whose work you thought was socially valuable or potentially even personally valuable. But a lot of it was much closer to a person-specific grant funding which can work quite well in certain cases. It allows scientists to have a much greater degree of flexibility in terms of deciding which specific subgenre of their research ends up looking the most promising.
I think it's not an uncommon experience for scientists to start working on something, have a lot of promise about a particular path or avenue of research, start digging in a little bit more, and realize, ah, actually this really isn't going to work. Under a lot of project-based grants, it can actually be pretty hard to pivot your research from one approach to a different one. And so these person-based funding approaches can be a lot more flexible.
A potential downside, though, is that who gets recognized as a person worth funding could end up biasing the field towards established researchers with a lot of background credibility.

And so there's a balance to strike: how do you provide researchers the flexibility they need to pursue the kinds of projects they think have the most value, while also making sure that up-and-coming scientists who may not have as much name recognition or a portfolio of work to draw on can still get funding?
Dylan Matthews: How much of this culture comes out of the rise of big science, and the change in what science is? Over the 20th century, you went from physicists doing stuff with cathode ray tubes in small rooms to something like the Large Hadron Collider.

I guess that was the 21st century, but the point is you're now spending billions of dollars to set up one facility to run various experiments. Is some of this increased complexity just a necessary aftereffect of that shift in what science is?
Heidi Williams: Yeah, so Ben Jones, who's an economist at Northwestern, has written very thoughtfully on just how ubiquitous the rise in team science has been across fields over time.
I think for a while you could get the sense that maybe this is just physics, or maybe this is just biomedicine, but it's actually even in economics and other social sciences, and even in the arts. He has a really nice paper, in a volume that the National Bureau of Economic Research published, pointing out that the structures we use to support science have remained essentially static, even though the rise of team science is one of the most important changes happening to science over our lifetimes.
And similarly, the lengthening of training for students has a lot of nuanced implications for how we support early-career scientists, like what Caleb was saying. So in some sense it's very natural that the structure of science is going to shift. As Ben would say, the burden of knowledge is changing the frontier of what tools you need.
And do you need different people from different disciplines combining to work together in teams? And what does it mean to support interdisciplinary work? All of that is changing and the fact that our structures of how we fund and support science haven't changed, I think, is itself indicative of a lot of the reasons why you might think that we could be doing better than we're doing today.
Caleb Watney: One particular example is to think through how different science is today from what it was a century ago. Over the course of a single year, Albert Einstein wrote a series of really cutting-edge physics papers that laid down the foundation of much of what we know about theoretical physics today.

And he was just one guy. He had a chalkboard; he had colleagues he was talking to. Funding Albert Einstein's work during that period would have been extremely cheap. Whereas today, proving the existence of a single particle, the Higgs boson, requires a massive particle accelerator that costs billions and billions of dollars, with thousands of scientists working in close collaboration.
So the structure of science has changed dramatically over the course of the last century. But the way that we fund and structure science through our funding institutions has remained remarkably stable across that time.
Heidi Williams: And also the way that we recognize talent. People still get tenure at universities as individuals, so your work is essentially evaluated on you personally. A lot of scientists are thinking about whether they will be recognized in some way by their colleagues.

Papers that get published in journals can all be collaborative, but who gets a Nobel Prize is a very individual recognition. So in some sense, our structures for how we evaluate people's work haven't kept up with the rise of team science either, and I think that's an important disconnect.
[00:19:52] Immigration of scientists
Dylan Matthews: One area where I wanted to talk through some problems before we start talking about solutions is immigration. We've been talking a lot about the rise of team science; science is a big collaborative endeavor. It stands to reason that frictions in getting people from one place to another, where they can collaborate productively, would have a big effect on that. And Caleb, I know this is something you've worked on a lot recently. Is the U.S. immigration system right now fit for purpose in terms of augmenting our scientific capacity and getting the smartest people to the right U.S. labs?
Caleb Watney: It seems not, as far as we can tell. Just to take a step back: you can think about the high-level inputs that enable a scientific ecosystem to succeed. You have research funding, and we've spent a lot of this conversation talking through how the NSF and the NIH can pick grants in different ways.
There's the actual physical infrastructure you need: both the cities in which scientists collaborate and the lab space, the expensive microscopes and particle accelerators.
But maybe the single most important part is the people, the scientists themselves that make it all work.
And if we have a rough intuition that talent is distributed at least roughly equally across the globe, well, the United States is only about 4% of the total global population. That means that, of the scientific geniuses being born all over the world today, only a small share are being born within any one country's borders.
If you have aspirations as a country to be a scientific superpower, you really want to maximize the impact of agglomeration effects, which is what economists call it when you get a bunch of really smart people together and their collective work is more productive than the sum of what any one worker could do alone.

If you take agglomeration effects seriously, there are enormous returns to allowing scientists from all over the world to cluster in particularly impactful research hubs. And it seems like, for a variety of policy and historical path-dependency reasons, the United States has ended up being where a lot of the most productive scientific research actually happens.
You can see this in surveys of where international students would like to go and study, and in where they actually do go: a large chunk of the most promising students end up coming to the United States. But our immigration system is really poorly set up to allow many of those students to stay here.
It's a pretty common occurrence that a student comes here to do a PhD, we invest a lot in their research training, and then we have no avenue to allow them to stay in the United States. That seems really counterproductive on the normal American-interest story, but it's potentially just as bad for the global advancement of science and of new medicine if we're preventing the world's top scientists from being able to cluster together.
Dylan Matthews: I've been reading The Making of the Atomic Bomb, which means I'm a guy who brings that up in all conversations now. One of the really striking things is just how many of those scientists came from small countries. There's Niels Bohr from Denmark, a tiny country. Ernest Rutherford was from New Zealand and had to move to the United Kingdom to do his work. If they had just been locked in their small countries without peers to work with, it's shocking to think of all the fruitful collaborations that wouldn't have happened.
Heidi Williams: Yeah. Both with the rise of team science and with how technologies get commercialized out of universities and into the real world, having the right team is itself incredibly challenging. If you then add a constraint that you can't have the best people, because for some reason they can't get a visa to come work in your lab or at your startup, you're shooting yourself in the foot. It's hard enough to find the right people. Having fewer barriers in the way of getting the right team together is something that very naturally matters a lot.
[00:24:00] Solutions to improve science funding
Dylan Matthews: Okay. So let's talk through some ideas that people have floated as ways to improve the funding process and right-size American policy towards science. Caleb, what are a few funding models that have come down the pike and strike you as promising?
Caleb Watney: Right. So it's maybe worth just taking a second to talk about the current dominant model. Especially at the NSF and NIH, it is sort of this traditional peer review structure. I want to emphasize that this is almost by definition a caricature. There's a lot of variation across specific institutes, but at a high level, what's happening is you have scientists who are submitting proposals for promising avenues of research they want to work on. They will create a budget that roughly describes how much that work will cost. They will submit it and then a panel of their peers will grade their research across a number of dimensions: How promising does this seem? How much social impact do we think it could have? How likely is the research to work out? How much of a track record does this particular scientist have?
Then program officers at either NSF or NIH will in some sense collate these proposals and create some sort of rough ranking. In some institutes, there's a bit more discretion on the program officer's side to be able to rearrange them from just the pure average.
But at some level, the average opinion of your peers ends up really shaping where your proposal stacks in this rank, and then ultimately scientific grant funding agencies will make a determination based off of that and pick the ones that they think seem the most promising.
As we alluded to earlier, one alternative is a more person-based funding approach, where you choose a particular scientist whose whole line of research seems particularly promising, and you give them autonomy and discretion to pick and choose which strands of that research to pursue.
And so there are some research organizations, like the Howard Hughes Medical Institute, that really specialize in person-based funding. There are also some new models coming up: the Arc Institute, a collaboration among a few universities in the Bay Area, is trying to bet on particular biomedical scientists and allow them a lot of freedom to pursue different strands of research. I think these are promising. Just to briefly highlight a few other models...
Heidi Williams: I would also just chime in: Jon Lorsch at the National Institutes of Health actually started a person-specific funding program at the NIH, and he's very interested in learning what's worked well and what's been novel about it.
So it is interesting to see that there is some precedent for that. And similarly, the National Science Foundation does have some career awards, which are person-specific. It's interesting to think about: it's not that the government administratively couldn't do this; it's just that historically we haven't.
Caleb Watney: Absolutely. As Heidi pointed out, and as I alluded to earlier, there's a lot of variation across the federal science agencies. While the bulk of the funding ends up being distributed through these big peer review processes, there are a lot of really small projects and programs trying out different approaches.
Another one that's been gaining a bit more attention recently is this idea of using golden tickets. The basic intuition is that consensus-oriented review processes might have a bias against high-risk, high-reward research. It might be an attribute of novel research that some people really like it and some people think it's really not promising, so maybe you should actually be looking for variance across reviewers. One way you could try to select for this is to give each reviewer a golden ticket during the selection process, which they can use to champion a specific proposal and say: "I want to guarantee that this gets funded, or at least heavily tilt the selection process towards this particular one that I think is really promising."
And while it would probably be a bad idea to have all science funded that way, it might be helpful as part of a portfolio of funding approaches. Another approach that people have at least talked about, and that has maybe been tried on a small scale, is a scientific lottery. The idea here is: how much do we really know about whether our selection mechanism is actually choosing the most promising or most rewarding scientific projects? It might be that, above some minimum quality threshold, we'd be better off just choosing at random, especially once you consider the huge time and grant-paperwork costs involved in the current system.
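To make the contrast between these selection rules concrete, here is a minimal illustrative sketch in Python; the scores, the quality bar, and the two-proposal budget are all made up for illustration, not a model of any agency's actual process:

```python
import random

# Each proposal gets scores from several reviewers (toy data).
proposals = {
    "A": [4.1, 4.0, 3.9],   # solid consensus pick
    "B": [4.8, 2.0, 2.1],   # divisive: one reviewer loves it
    "C": [3.5, 3.6, 3.4],
    "D": [2.9, 3.0, 3.1],
}
N_FUNDED = 2                 # hypothetical budget: fund two proposals

# 1. Consensus peer review: rank by mean score, fund the top N.
by_mean = sorted(proposals, key=lambda p: -sum(proposals[p]) / len(proposals[p]))
print("Consensus review funds:", by_mean[:N_FUNDED])          # A and C

# 2. Golden ticket: a single champion can force-fund a divisive proposal
#    (here, the reviewer who scored B at 4.8 spends their ticket on it).
golden = ["B"] + [p for p in by_mean if p != "B"][: N_FUNDED - 1]
print("Golden ticket funds:   ", golden)                      # B and A

# 3. Lottery above a quality bar: screen out clearly weak proposals,
#    then choose at random among the rest.
eligible = [p for p in proposals if sum(proposals[p]) / len(proposals[p]) >= 3.0]
print("Lottery funds:         ", random.sample(eligible, N_FUNDED))
```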
I think it's worth highlighting that we really don't know how any of these systems work in a rigorous way. That's one of the things I'm most excited about over the next 10 years is how we can build an evidence base that actually allows us to test out – in an iterative way – different kinds of funding mechanisms and then actually have very clear metrics that we're going to use to judge them and say whether or not they were successful.
Dylan Matthews: Heidi, I wanted to ask about what we do know about which mechanisms work. Some of your peers in economics, like Pierre Azoulay, have been doing research on how different funding mechanisms affect the outputs of science. What have we learned to date, and what are some of the big open questions as you see them?
Heidi Williams: Yeah, so just to give one example: Kyle Myers, a junior faculty member at Harvard Business School, had a really nice paper. Rather than using its normal process, where academics propose what they want to work on and peers judge whether it's a good project, the National Institutes of Health will sometimes try to shape the direction of research by putting out what are called "requests for proposals" in specific areas: we want more research on Alzheimer's, say, or on a specific area of basic biomedical research. What Kyle looked at, very cleverly, was essentially: what does it cost to get faculty to change their research direction?
And in short, it's very expensive. For me, that result really shaped how I think about this: if we as a society believe a given area is neglected, what's the most cost-effective way to remedy that? The normal model people have in mind is this request-for-proposals idea: give senior academics some extra money to shift their research into that area.
And I think Kyle's paper made very clear that you can do that and they will shift their behavior, but it's very expensive. Whereas what if you offered PhD fellowships or postdoc fellowships to people who haven't yet chosen their area of specialization, highlighting that this is a socially important problem and giving them the opportunity to decide what to work on, on a more level playing field?

The whole idea behind why the NIH wanted to subsidize Alzheimer's research is that they felt there was too little Alzheimer's research relative to its social value. So why don't we subsidize the cost of doing that research? But rather than paying senior faculty to change what they're doing, which is really hard, you can just subsidize new entrants to come in.
And that's actually, I think, much less expensive. Papers like this, even in isolation, often highlight pretty fundamental insights that can shape how you think about science as a whole. I will also say that even though I'm a big fan of academic research, which is why I'm in academia, I actually feel like a lot of the low-hanging fruit for improving the productivity of science is in much more bread-and-butter process improvements.
One of the problems we highlighted was these really long time lags in funding. At the NIH, maybe that's 18 months; even at the NSF, it's usually six months, although the NSF does have two programs with a much shorter turnaround time of closer to two weeks.
But it's been really interesting to see, post-COVID, more momentum around whether we should be doing more "Fast Grants"-style programs. Patrick Collison and Tyler Cowen started an explicit program during COVID that guaranteed two-week turnarounds, and oftentimes they were actually doing two-day turnarounds in the middle of the pandemic.
I think it's really important to see COVID as having both exposed problems in the system and prompted the development and piloting of new models that could themselves be incorporated back into the system. The NIH during COVID actually had its own rapid grants program, called RADx, which got grants out the door in about eight weeks.
So administratively, we've seen evidence that the agencies can do this when we need it. But to me, making that the norm rather than the exception should be a higher priority than we're currently giving it. We can do this in a time of crisis, but the normal system for funding science remains this very slow process.
Dylan Matthews: Another trend that I've noticed in government funding lately is the proliferation of ARPAs. In the beginning there was DARPA, the Defense Advanced Research Projects Agency; now there's ARPA-E for energy and ARPA-H for health. What's different about that model? What's distinctive about an ARPA, and how have attempts to copy the DARPA model worked so far?
Caleb Watney: At the highest level, the ARPA model is characterized by giving really wide scope and autonomy to particular program managers within the agency, who can then, in a much more directed way, push grantees, technologists, and engineers to work on a very specific problem.
The NSF and NIH models are very much: you apply, we'll evaluate the applications, and then we'll decide what we think is important. ARPA program managers, by contrast, often work very hands-on with people at universities, shaping the research directly and checking in often, and they do the same with the private-sector partners they work with. Another thing that characterizes ARPAs is their ability to take a coordinated bet on a set of technologies at the same time. Oftentimes one particular scientific breakthrough or technological tweak may not provide a lot of value by itself; you almost need a whole ecosystem.
You need three or four bets to work at the same time to actually unlock a whole ecosystem of value, and ARPA models, because they have these highly empowered, autonomous program managers, have much more flexibility to pursue that type of strategy.
Heidi Williams: Just to interrupt, it can also be the opposite: if there are four possible solutions to a problem and only one of them can work, then as an individual funder funding just one grant, you're taking a bet on which of the four it is.
But in this DARPA-style portfolio approach, you can actually invest in all four. You're more or less guaranteed to have the payout at the end, and the product you get may well be better, because you pursued all four simultaneously and learned along the way what didn't work about the others.
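A stylized way to see that portfolio logic, with illustrative probabilities rather than anything from the conversation:

```latex
% Stylized portfolio logic: exactly one of four candidate
% approaches will work, but the funder does not know which.
P(\text{success} \mid \text{fund one}) = \tfrac{1}{4},
\qquad
P(\text{success} \mid \text{fund all four}) = 1
% More generally, with n independent bets each succeeding
% with probability p:
P(\text{at least one success}) = 1 - (1 - p)^{n}
```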
Caleb Watney: Absolutely. I think we're still learning what makes the ARPA model tick; there have been a couple of papers and analyses trying to investigate it. Mostly, though, we're starting to see a lot of clones, because policymakers have identified something about it that seems to be providing real advances.
I mean, a lot of the technologies that people cite positively from the last 30 years have had a hand from ARPA funding in some way. Everything from GPS satellites to the internet, and even mRNA vaccines, got some ARPA funding at an early stage that really helped them advance.
Heidi Williams: Yeah, and I think one thing that's hard is that DARPA tends to be evaluated by the best projects that it funded. That's a very natural thing, and it's actually not even a crazy way of evaluating the overall value of the program: if we got mRNA vaccines out of it, that initial grant to Moderna that Dan Wattendorf made really mattered a lot.

In some sense that alone could justify the cost of the whole portfolio. That's not a crazy approach. But at a systematic level, we're going to have ARPA-H, the ARPA for health, existing alongside the National Institutes of Health without a great sense of exactly what the value-added is: which projects are better funded at an ARPA, versus which are better funded through high-risk, high-reward programs at the National Institutes of Health.
I think it would be to everyone's benefit to have sharper thinking on where this model is most productively applied, rather than unbounded enthusiasm based on the fact that we can find some examples that worked well, even if those examples might well justify all of the spending on the portfolio.
[00:36:51] Doing metascience
Dylan Matthews: One question one might have at this point in the conversation is whether we're disappearing too far into looking at ourselves in the mirror, navel-gazing a little bit. What's the advantage of really examining how science works? Are there case studies from the past where we've looked at a process of knowledge generation or research, found deficiencies, fixed them, and gotten really encouraging results?
Heidi Williams: Yeah. I think the idea that we can use the scientific method to learn whether something is valuable or the right thing to do has a lot of important precedents. Take drugs: just because I'm a scientist and I come up with the idea that you might want to put a chemical compound in your body, we don't immediately give it to you.
Instead, we do these very carefully constructed trials where we randomize who takes the drug and we compare you to a control group, which is either the available standard of care or just a placebo. And we actually try to rigorously test: is your health better because we gave you the drug?
That approach, insisting on systematic comparisons to know whether we're doing better, is very natural in medicine, and it has also been applied really well in the field of international development. Paul Niehaus, who's going to speak in some of our episodes, often cites the example of his work on cash transfers, which Dylan knows very well because I know you've written articles on it too. At some point, people thought that giving people money was just a terrible idea, because they were going to spend it in terrible ways, and that we should instead do direct aid, in-kind transfers like food. But the field of development set up a system very similar to clinical trials, where we randomize and compare how good cash transfers are relative to in-kind transfers: do people benefit more? Is their welfare better?

That really legitimized cash transfers. The energy has shifted, and I think there's now much more momentum around cash transfers as the benchmark for having a social impact and improving people's lives, relative to in-kind transfers.
And so the idea that we can have evidence as a basis for making decisions, and that this can drive broad-based institutional change that improves the social value of the investments we make, is an example I find very inspiring.

We don't just have to complain about the National Institutes of Health. We don't have to say that we don't know how to do science better. We can do systematic studies, learn what works, and address specific problems. It feels like a real opportunity to make investments that drive progress on that front.
Caleb Watney: Totally agree. It's also worth saying that this is not a crazy idea at an organizational level; private firms do this all the time. In Silicon Valley, you have a whole range of companies with a sophisticated apparatus for running A/B tests to optimize something as small as an ad placement on a website. If something as socially trivial as that can benefit from finding small efficiencies, knowing they'll pay off in the long term, how much more can we find ways to optimize our scientific ecosystem? Maybe the single biggest example of this process working out is the enterprise of science itself. The Scientific Revolution was a revolution for a reason: applying systematic ways of gathering knowledge, evaluating evidence, and then making iterative improvements has been the main way humanity has progressed over the course of centuries.
And so now applying that to the institutions that fund and incentivize science directly, I think makes all the sense in the world.
Caleb Watney: Thank you for joining us for this first episode of the Metascience 101 podcast series. Next episode, we’ll talk about whether science has been slowing down and how we can measure the pace of breakthrough advancements.
Subscribe to this podcast feed to follow the rest of the series, you can find more information about this series and the Macroscience newsletter at macroscience.org. You can learn more about the Institute for Progress and our metascience work at ifp.org, and if you have any questions about this series you can find our contact info there.
Special thanks to our colleagues Matt Esche, Santi Ruiz, and Tim Hwang for their help in producing this series. And thanks to all of our amazing experts who joined us for the workshop.