Macroscience
The Macroscience Podcast
Metascience 101 - EP7: “Science and Political Legitimacy”

IN THIS EPISODE: Journalist Dylan Matthews leads a conversation with Open Philanthropy CEO Alexander Berger, Professor Tyler Cowen, and IFP Co-CEO Caleb Watney. Together, they explore the relationship between effective, robust scientific institutions and notions of political legitimacy.

“Metascience 101” is a nine-episode set of interviews that doubles as a crash course in the debates, issues, and ideas driving the modern metascience movement. We investigate why building a genuine “science of science” matters, and how research in metascience is translating into real-world policy changes. 


Caleb Watney: Welcome back to the Metascience 101 podcast series! I’m Caleb Watney and in this episode, Dylan Matthews leads a conversation with Alexander Berger, Tyler Cowen and myself on the relationship between effective, robust scientific institutions and our notions of political legitimacy. How does science change when we are spending dollars that are accountable to the public? 

Dylan Matthews: I'm Dylan Matthews. I'm a reporter at Vox. I like to write about philanthropy, progress, and things of interest to the IFP world. I have three great guests for you: Caleb Watney, who is a co-founder of the Institute for Progress; Tyler Cowen, professor at George Mason University and author of the Marginal Revolution blog; and Alexander Berger, who is CEO of Open Philanthropy, a leading funder in this space.

We're going to talk about science and politics today, and how to build scientific institutions that have some form of political legitimacy. As people who think that science is relatively important to progress, this is a fairly central question. 

Caleb, why don't we start with you? How would you characterize America's current framework for politically supporting science? We can start there, and then we can get into some of the strengths and limitations.

Caleb Watney: I think you could conceptualize this a couple of ways. The first is in terms of pure funding outlays. The majority of our basic research is funded directly by the federal government. The National Science Foundation and the National Institutes of Health together account for around $60-70 billion per year, which is non-trivial. They fund a lot of basic research, and a bit more applied research, especially on the NIH side.

There are also a number of quite lucrative tax incentives that we provide. For example, the R&D tax credit is a huge incentive trying to recognize the fact that when private firms invest in research, oftentimes, there are positive externalities that they don’t totally capture. So financially, the public sector is a huge driver of science.

Scientists as a class are oftentimes government employees. When a little kid thinks about who a scientist is, they think of NASA. Or they think about people working directly in university physics labs. There's a quite tight link in the public imagination between science and the public sector.

Dylan Matthews: Got it. One aspect that sometimes doesn't get fleshed out as much is the connection to the university system. 

We have a large public university system. The majority of our research universities are publicly funded. How much of that is coming out of state and local versus this national level that you're describing?

Caleb Watney: Right. In terms of research funding, most of it is driven by the federal level. Again, NSF and NIH are the biggest funders of university research. It's true that money is fungible and sometimes state and local budgets, especially for state schools, will provide a lot of funding for universities. But in terms of the pure research budgets, a majority of that comes from the federal government.

Dylan Matthews: Got it. If we're thinking about things that influence decision making for these kinds of institutions — Tyler, maybe I can bring you in here — these are institutions that are overseen by Congress and are answerable to the public. You sometimes get freakouts about the NSF funding research where you put shrimp on treadmills, that kind of thing. What do you view as the main risks of putting so much of our resources in this kind of an institution?

Tyler Cowen: I would stress just how decentralized science funding is in the United States. The public universities are run at the state level. We have tax incentives for donations where you have to give to a nonprofit, but there's otherwise very little control over what counts as a viable nonprofit. 

One specific issue that I think has become quite large is how much we run our universities through an overhead system. On federal grants and many other kinds of grants, an overhead is charged. The overhead rates are very high, well above the actual marginal overhead costs.

You might think that's a crazy system, and in some ways it is crazy. It means there's intense pressure on professors to bring in contracts, regardless of the quality of the work. That's clearly a major negative. Everyone complains about this.

But the hidden upside is that when universities fund themselves through overhead, there's a kind of indirect free speech privilege because they can spend the overhead how they want. Now, I actually think they are violating the implicit social contract right now by spending the overhead poorly. But for a long while, this was why our system worked well. You had very indirect federal appropriations: some parts of which went to science, other parts of which went to education. It was done on a free speech basis. 

But like many good systems, it doesn't last forever. It gets abused. If we try to clean up the mess — which now in my view clearly is a mess — well, I'm afraid we'll get a system where Congress or someone else is trying to dictate all the time how the funds actually should be allocated. 

That's a question I've thought through a good amount: how or whether we should fix the overhead system? I feel we've somehow painted ourselves into a corner where there is no good political way out in any direction. But I think you'll find case by case that the specifics are really going to matter.

Dylan Matthews: Let's get into some of the specifics. Do you have an example of the overhead system breaking down that is motivating for you here?

Tyler Cowen: Well, universities are spending more and more of their surplus on staff and facilities — on ends that, even if you think they're defensible in some deep sense ("Oh, we need this building"), are really about the university. It's about what leads to long-run donations, but it's seen as a violation of public trust.

The money is neither being spent on possibly useful research, nor educating students. The backlash against universities is huge, most of all in Florida, Texas, and North Carolina. It seems to me that where we are at isn't stable. How we fund science through universities is, in some ways, collapsing in bad ways. The complaints are often justified, but odds are that we'll end up with something worse.

Dylan Matthews: I don't want to focus too much on the state aspects of this. Obviously, this is a heavily state-sponsored enterprise, but pharmaceutical and chemical companies employ huge numbers of scientists. 3M has plenty of scientists working on various polymers and things. 

What does the division look like there? What is the kind of symbiosis between these types of scientists? I guess that's a question for the field, but maybe Caleb wants to take a stab at it?

Caleb Watney: Sure. I think the conceptual understanding, oftentimes, is of a spectrum from basic scientific research all the way to very applied technology. The classical understanding is that you put in federal resources at this early stage of the pipeline, really basic research that may not pay off for another 10, 20, 30, 40 years. It's hard for private sector companies to really have an incentive to invest in that kind of research, so there's a strong case for federal investment.

Then after the basic scientific advancements are made, it moves down the pipeline, and eventually you get to a point where pharmaceutical companies, chemical engineering firms, or whoever can see the light at the end of the tunnel. They can see the way to potentially commercialize the technology, and that's the moment they jump in.

Oftentimes, this sort of spectrum between basic and applied science misses the fact that working on applied science can generate insights or questions that then lead to basic scientific results. So, it's often the case that you look back at the old industrial research labs: Bell Labs, Xerox PARC, etc. They were oftentimes working on quite applied problems, but in the process of working on those problems, they generated insights and solved basic scientific questions as well. 

Dylan Matthews: Alexander, you have a sort of unusual perspective here as someone who funds scientists and attempts to improve science policy. You have a heavy incentive to pay people to try to understand this better. I'm curious — what are the main lessons you've gotten in terms of why the funding system works the way it does and what its limitations have been?

Alexander Berger: I think to speak to one micro example of limitations: a project we did a few years ago, which our science team led, looked at the winners of an NIH review process called the Transformative Research Award, or the TR01.

R01s are the standard NIH grant, usually around $1 million for most biomedical research. The TR01 was meant to fund more experimental, higher upside, higher risk science. Our science team did a process where they invited a bunch of people who had applied and been rejected by the NIH to reapply to us, so we could get a sense of who else was in the field and what kind of science the NIH wasn't necessarily able to support at the current level. And just get a really diverse, cross-cutting sense of what kind of research was being put out there as transformative.

One of the things that they were most surprised by was — I’m making “air quotes” but you can't see — how “normal” most of the science was. In spite of the fact that the NIH had tried to set up this process to enable transformative, risky basic research, it still had all of this process around it. The applications were really long. They were still asking for preliminary results. So, it still ended up looking a lot like you already needed to have a lot of the research done in order to get the funding to do the research. I think that kind of risk aversion in the scientific funding process is something that we've seen a lot of. And it makes scientists often a little bit pessimistic about the prospects of reform, because they see at these large-scale research bodies — which fund lots of good research, for sure — that it's hard to really enable them to take risks and try new things.

Dylan Matthews: So let's do a bit of Chesterton's Fence reasoning here. For listeners, Chesterton's fence comes from the British writer G.K. Chesterton, who noted that if you see a fence out in a field you haven't been to before, you should probably think about why the fence is there before you tear it down. If the fence here is these bureaucratic restrictions that require onerous applications for funding and that seem to create the problems I was describing, why did that come about? What problems was it solving when it came about?

Alexander Berger: I think that really goes to the heart of this discussion around the political economy of science policy. Like the thing that you were saying about the research on shrimp treadmills. The fact that science has always felt vulnerable, especially when it's curiosity- and scientist-driven, has created a lot of bureaucratic processes to try to show that, "No, we're being careful, rigorous, and responsible. We're not just throwing money after flights of fancy."

That's all in order to be able to defend these large-scale public appropriations to support relatively basic research that might fail and might not pay off. These projects could sound kind of weird to someone just hanging out and wondering why tax dollars are being spent this way. So, I see that as the core driver of the bureaucratization of the process — the need to minimize risk and maximize explicability in an enterprise that is itself very curiosity-driven and hard to plan.

Tyler Cowen: I think there's a general problem in science funding, and also arts funding, and it's the following. There are a lot of underproduced public goods out there. Basic science is one of them. At the margin, you can always do something with government. If it's small enough, it can be well-controlled and have a positive impact. But as it gets larger, Congress or someone else wants to have a say. Then effectiveness is greatly diminished. Over time, bureaucratization sets in, labor costs rise, maybe the states and different senators want their share of the thing, whatever else.

So you have this scarce resource. It's the ability to do things without attracting too much attention. You have to think very carefully about how you allocate that. I think a lot of good science policy is knowing when you can do more in an area without attracting too much attention. That's always going to change over time. It won't be a fixed formula. We could set up 27 different ARPA-like entities, but in fact the total amount of money would be so high that Congress would really start interfering with them all, and then we'd have to pull back from that, even though the abstract arguments for doing it might be quite strong. It's a kind of art: figuring out the balance of what you can get away with and keeping enough autonomy so that it still works well.

Caleb Watney: This kind of gets at one of the real meaty, thorny issues at the heart of science funding, especially when you're considering the political support for it. In many ways, the strongest theoretical support for public funding of science is for basic science, but that's also the part that is the least politically defensible. It's the part where you are most likely to find really weird, strange things — yeah, sometimes you are funding underwater treadmills with shrimp running on them, but sometimes you do that and discover something really interesting about underwater mechanics that ends up changing how submarine design works.

Dylan Matthews: Or you invent the transistor or something.

Caleb Watney: Yes, exactly. I think one way to do this is to be cautious about political limitations and how much can fly under the radar, as Tyler was getting at.

Part of it is also thinking about science as a portfolio approach. Oftentimes, public servants who are working in science agencies get dragged before Congress and get told, “What are your successes? What are you working on?” I think it's actually quite hard for them to point to successes, and part of that is due to the fact that basic science is hard to predict and hard to know way down the line. 

But also, we don't actually have a lot of great, inherent justifications for why science is designed the way it is. A lot of it is path dependence. We designed a series of scientific institutions, especially after World War II, and the design of those has just persisted, without a lot of experimentation. 

This is one of the things that we've been working on: are there ways that you could build experimentation into the way that science agencies operate? That way, you could actually get a baseline of, “Hey, we tried these two different procedures, these two different ways of allocating funds across a portfolio. And we found that this one generated X percent more citations,” or “This one produced 10% more novel research proposals as judged by the new technical keywords that were combined in an application.”

Alexander Berger: Isn't there a parallel in terms of IFP’s work on policy change, to what Tyler was saying about wanting scientific research funding to sort of stay below the radar sometimes? Like people talk about the secret Congress idea. Sometimes when you're doing science policy, you actually don't necessarily want to be in the headlines, you don't necessarily want the President announcing it from the White House steps. You might want it to be something where it's operating behind the scenes as a second-tier issue.

Tyler Cowen: Universities for a long time enabled that, but now they too are in the line of fire. It seems to me a lot of our institutions now have become too legible in a way that's not sustainable. I admit that's maybe a controversial idea for you, Alexander. But I worry about this, the idea that “Oh, you know, I saw Spock on Star Trek, the professor on Gilligan's Island, the scientists are working on this. It will be fine.” There is something useful to having a world like that.

Caleb Watney: A book I think a lot about is Revolt of the Public by Martin Gurri, which talks about a lot of these themes: what happens when information becomes way more legible than it used to be? His primary thesis is that the internet made a lot of the behavior of public institutions and our elites so much more legible, trackable, and findable than it used to be. Even if our institutions or our elites are failing at roughly the same rates that they did 50 years ago, it's so much easier to find those failures and make them legible.

One example outside of science that I think a lot about is the National Football League. There's a lot of complaining about the quality of refereeing. A lot of people are convinced that referees are so much more incompetent than they used to be. You'll see people on Twitter pulling out clips of, "Look at this referee making the obviously wrong decision in these 10 games, with the same team again and again." I think that's totally wrong. I think referees are probably just as good, if not better, than they might have been 40 years ago. But it's so much easier to draw out the failures in very highly legible ways. This is a trend that's absolutely happening with science as well.

Alexander Berger: And it's actually like Monday morning quarterbacking across society has just gotten way more pervasive because we have better documentation. Everything is more legible.

Tyler Cowen: It may be great in some areas like food safety, where you just want a very low rate of error. But when you're playing a game where there's one hit in every 10,000 attempts, it may be quite counterproductive to have too much legibility to the public, because some of the failures will be quite absurd, like the Golden Fleece Award or Solyndra. We need to think of some new ethos to recreate some of the illegibility but still keep accountability, and get some new lens that maybe no one has figured out yet.

Alexander Berger: I mean, DARPA is an amazing success story on this front, where they're still so high status in spite of the fact that so many of their projects fail catastrophically. I think they have successfully sold the ethos of the brilliant program manager out there taking risks at the frontier. And I think the tie-in to defense makes it-

Tyler Cowen: It's the military, I think, that sustains them, not that the public understands their model.

Dylan Matthews: At the same time, we have a bunch of ARPAs now. We have an ARPA-H. We have an ARPA-E. How do we account for that? Is it military hero worship and that you want to copy the successful military institutions?

Caleb Watney: I think the military aspect of it certainly provides a vein of legitimacy for ARPAs, but part of it is that a lot of the bets that ARPA managers make are not public. They can fund a portfolio of 40 things, and even if only one of them works out, that can be the thing that you trumpet. The 39 failures are not nearly as legible in the ARPA model as they are under the traditional NSF or NIH model.

What's interesting is that, across a lot of our scientific institutions, we're seeing almost cultural evolutionary responses to this. How do you justify to the public why you're spending money on things that might fail? The ARPA model was one version of this. Peer review in the traditional scientific system is another example of this. 

As an NSF program officer, being able to tell the public, “Hey, it wasn't me who made this bet on this underwater shrimp treadmill,” — to keep coming back to that example — “We asked a panel of experts, a panel of capital ‘S’ scientists, and they said that this was a good idea.” That provides at least a vein of defensibility that science has relied on for a long time. But that defense mechanism is weakening, especially as capital “S” science becomes more polarized than it used to be.

Tyler Cowen: It seems we're in a weird world where, at the very micro level of the individual researcher, the emphasis is on the defensibility of your research way more than ever before. A paper has to be longer, robustness checks everywhere, all these appendices. But at some higher macro level, maybe due to polarization, defensibility is much weaker.

Say you are in a state legislature or you are in Congress. Well, maybe what matters is your party and what your district looks like and how well you did. The accountability lines are weaker. It's a weird mix: defensibility is way stronger at the micro level, but quite a bit weaker at the macro level. That's a problem science has to deal with, and it makes us risk-averse and then poor allocators at the highest tiers.

Dylan Matthews: Our friend Emily Oehlsen had a helpful contribution to the conversation here about the idea of weak-link or strong-link problems.

Sometimes, if you were trying to regulate the safety of apples being sold, you care a lot about the worst apple and making sure none of them have poison in them. But maybe for science, you want to maximize the quality of the strongest link and make the best paper, say a special relativity paper, rather than making sure that there are absolutely no papers about panpsychism or something that make it into the mix.

That does seem somewhat helpful here, but I don't know how we get around the problem that Tyler diagnosed: all this research is legible, and the weakest links will be pulled out and highlighted in legislatures and Congress. Absent some IARPA-style extreme secrecy, it's hard for me to imagine how you get around that dynamic.

Alexander Berger: How much do you think polarization is the root cause of the problem? It's striking to look back at statistics on the partisan affiliation of scientists from 50 years ago. They were way less left-leaning than today, maybe even right-leaning at some points. I wonder if that helped contribute to the relatively bipartisan credibility of science and scientific research institutions, in a way that has declined as scientists as a population have become consistently more left-leaning. But I'm curious what you think, Tyler.

Tyler Cowen: It's part of the chain of the problem, but I doubt if that's the primary driver, because it seems that it is endogenous and it's relatively recent. 

I think the primary driver of a lot of our problems is that there are not any good scientific funding institutions that stay really good forever. It's just a fact of life about a lot of things, in the private sector as well. That's what's driving this. 

When people look for very abstract principles on what worked, I get quite suspicious. I think I have less nostalgia for past successes than a lot of science policy people in our circles, and I keep on coming back to this time inconsistency point. Maybe scientists turned against the Republican Party. Basically, they stopped agreeing with it, and it was in their interest to do so, and the gains from conformity, like in many areas, have become higher. All that together makes it part of the chain, but not the first step.

Caleb Watney: I think that this aspect of new versus old institutions can definitely be an explanatory factor in the declining effectiveness of science. 

Even if you were to make our scientific institutions much more effective, I don't know how much more political support that would necessarily generate. On the margin, it would help. But again, if our model here is that people are pulling out the failures, publicizing them, and making them legible, even more successful scientific institutions will still have failures that can be brought into the spotlight.

Alexander Berger: I think they would have more embarrassing failures, right?

Caleb Watney: Yeah, if connected to success is taking on more high-risk, high-reward bets, and the failures that come with them. The flip side of this is maybe that we need to do a better job of marketing and telling positive stories about the successes of science. That's a way in which having more effective scientific institutions might imply better communication of science, its upside, and its successes.

Alexander Berger: I'm always skeptical of that kind of approach, because I feel like it implies too much responsiveness to public opinion. I think science polls okay. People like it. It's a little bit like mom and apple pie. The bigger issue is that the polarization of the research workforce has meant that the bipartisan support which science and scientific funding have, to a remarkable extent, benefited from over a long period of time is decaying. So, it does seem like you need to have a partisan analysis of this problem, as opposed to merely a secular-change-type story.

Tyler Cowen: Part of the problem might be that it's no one's priority, except for, say, the people around this table and some of those we know. That makes it especially vulnerable. The scientists themselves are not effective defenders.

Alexander Berger: But that doesn't seem true. I mean, think about the CHIPS and Science Act; the NIH budget goes up, not down. Trump tried to cut it, and Congress stopped him. The extent to which these institutions are politically durable is underrated.

Caleb Watney: I think that's true. I mean, the NIH is exceptionally popular in Congress. Broadly considered, the sense is often, "Oh, you can't actually try to change the NIH without getting the NIH's buy-in first." Unless you really want to go to bat on this as your number one issue as a senator, it's exceptionally hard to change the NIH without the NIH's buy-in.

But I think this also reflects the fact that the NIH has already built in a bunch of defense mechanisms, and is now politically popular. It's possible to make the case that the NIH has been too responsive to such concerns and erred toward conservatism: they've built in too many defense mechanisms. And now they do have sustainable support in Congress, but they're also way less effective than they could be.

Dylan Matthews: What do you make of the fact that the NIH is still politically supportable in Congress despite the fact that their most prominent employee, Anthony Fauci, has been on cable news for the last three years as a prominent hate object? The fact that they're still popular in getting more money in spite of that is interesting to me and I don't feel like I have a good model for it.

Caleb Watney: I mean this is pretty recent, and so we'll see in some sense how this changes long-term support for the NIH. There's a new NIH director who's been appointed, and they'll have to be congressionally confirmed. I think the expectation is that there will be a long fight about gain of function research and other things that the NIH has funded as part of that.

One reason why the NIH in particular has had such support is that the areas of science that they focus on feel quite explainable to the average American. They're working on curing diseases, curing cancer, curing Alzheimer's, and those are diseases that affect millions of Americans around the country. I think it's quite popular to say, "We want to cure cancer, so you should fund the NIH."

Alexander Berger: Right. It's quite noticeable that the NIH is bigger than the NSF by a large margin. Biomedical research gets more funding than everything else combined. That's not actually true because of the defense R&D spending, but-

Dylan Matthews: This is maybe an area where none of us have looked into it enough to say, but Howard Hughes Medical Institute is one of the biggest foundations in the United States. They fund a lot of biomedical research directly. They're obviously not as prolific a funder as the NIH.

When you compare their application processes to the NIH's, what does that tell you? If it's way easier, that tells you there is something about government and politics that makes the system really dysfunctional. But if they're not that different, then that seems like a bit of a puzzle to me.

Caleb Watney: The economist Pierre Azoulay has a great paper where he got access to both some of the HHMI data and some of the NIH data, compared researchers who were right on the margin of being accepted as an HHMI principal investigator as opposed to going through the traditional NIH process, and looked at how their selection into one mechanism versus the other changed the kind of research that they did. As background, most NIH grants work through a project-based approach, where you submit a very specific grant application to a panel of peer reviewers, it gets scored, and then if you get accepted, you get funding to do that specific project. Whereas HHMI operates much more on a person-based funding model, where they select the particular scientist, give them a length of time, and say, “Whatever you think is important within your broad area of expertise, we're going to give you funding to go and do it.”

Pierre’s paper shows that principal investigators who ended up getting the HHMI fellowship ended up doing more impactful work, both as judged by how likely it was to disrupt other research in that field, and also how likely it was to get more citations, more papers, more awards later on. So it seemed that allocation mechanisms really did meaningfully change the kind of research that they were doing.

Alexander Berger: HHMI was also associated with a second change that gave people more time between renewals. You don't need to apply for a specific project, and you also have unconditional funding for a longer period of time. That helps explain why researchers were willing to take more risks, and they had both more hits — I think almost twice as many papers in the top of the citation distribution — but also more failures, more papers that ended up almost uncited because they might not have panned out or might not have been of interest to other scientists.

Tyler Cowen: I think of the two structures as quite parasitic on each other, a bit like the major music labels and the indies. You can say, “Oh, the one works this way, the other works the other way.” But neither could exist without the other.

The NIH props up the whole super-costly, bureaucratic, at times innovation-clogging infrastructure. But the innovators need that. And in turn, the NIH needs more innovative groups on the fringe to push or nudge them in other directions over the longer run. So I think of it as one integrated system.

Dylan Matthews: Like the classic HHMI-Matador Records comparison that you hear many times.

Tyler Cowen: Exactly.

Dylan Matthews: What are some of the services that NIH does provide those innovators? We've been pretty down on some of the processes for these groups. So what do you see as the basic infrastructure that they're supporting that we would miss when it's gone?

Tyler Cowen: Security for the profession as a whole, which is immense. There's a place you can go. The fixed costs are very high and there's really no one who wants to pick those up. If you go to a venture capitalist with something that's 10-year R&D, much less 20- or 30-year, it's very hard to get anywhere with that, much less with high sums of money. So if you design an institution to pick up a lot of fixed costs, it's going to be very hard for that institution not to be super bureaucratic. 

Now, I would much rather see it be less bureaucratic, but there's even a way in which Fast Grants, which I helped direct with Patrick Collison and Patrick Hsu, is itself parasitic on the NIH. You're funding at the margin, you're speeding up at the margin, but you don't have to pay any of the basic tabs, and that's why you can move quickly. So I think we need to do a better job at the margin of adding pieces that will fill in for what the NIH will never be good at. And I'm not that optimistic about reforming the NIH.

Caleb Watney: You use this word “parasitic.” I would say maybe “complementary.”

Tyler Cowen: No, I know. That's podcast talk.

Caleb Watney: Right, right, right.

Dylan Matthews: Yes, yeah.

Tyler Cowen: It's all parasites.

Dylan Matthews: As you remember from high school biology, there's mutualism… 

Caleb Watney: But I think it's true that sometimes we get caught up in thinking, “What is the best way to fund science,” in an abstract sense. That misses the fact that we should probably have a portfolio approach where we're trying to fund different kinds of science in different ways. I do think the role that the NIH plays is being this funder of last resort. They're just pumping so much money into the system that even if your thing doesn't directly get funded, in some downstream sense you're probably going to benefit from them.

Actually, one interesting example here is Katalin Karikó, the Hungarian-born scientist whose work was pioneering in developing mRNA vaccines. She was quite public after COVID in a New York Times profile about the fact that she was applying for NIH funding back in the early '90s to advance her work on mRNA vaccines, and she was getting consistently turned down. At one level that is, in some sense, a massive failure of the NIH. 

But on the other hand, she was able to continue to stay in the United States in a downstream way: she was able to get funding from somebody else who was funded by the NIH, she stuck around for a while, and eventually she actually got funding from DARPA.

You can see that as an example of the system failing, and we could have possibly had mRNA vaccines 10 years earlier if we had made a different set of funding decisions. But also, the base support layer that NIH played meant she didn't have to leave science altogether, and that seems like a plus.

Dylan Matthews: Yeah. That tees up a conversation on immigration, which is something that I did want to ask about, and that I know you work on a lot, Caleb. 

Science seems like an area where there are huge gains to agglomeration, to having smart people in a scene together. Most of the smart people are not going to be citizens of one particular country. There seem to be major gains to easing international migration on this, but there are major political challenges to that.

Caleb, what has your experience been trying to convince Congress to let more scientists stay in the United States? What’s been easier or harder about that than you expected?

Caleb Watney: Right. So when we launched IFP, we decided high-skilled, STEM scientific immigration was going to be one of our major focus areas, partially because the gains here seem so large. 

If your basic model is that talent is distributed roughly equally around the globe, then the fact that the United States has only 4% of the world population means that the majority of cutting-edge could-be genius scientists are going to be born elsewhere. If you really want to take agglomeration benefits seriously — if you think adding a bunch of smart scientists all in one cluster is really going to boost productivity — that implies you have to have ways to allow them to come and stay here. The United States already has this massive benefit of the world's premier university system. We end up training a huge number of global scientists. Scientists-in-training come to our universities and then for bizarre, prosaic reasons we end up forcing them out in one way or another.

We do think that there are gains to be made in trying to improve the immigration system. It's hard, for a variety of reasons. One is just that immigration as an issue has been bundled. It's quite hard in this all-or-nothing sense to really push forward just high-skilled immigration, because it always gets tied back up into the border and DACA. Even though Indian PhD students in chemistry are not actually coming to the United States via the southern border, it's been so polarized as an issue, it's hard to separate.

I think there's maybe hope that we're starting to see some unbundling of it in the CHIPS and Science bill that Alexander mentioned earlier. There was actually, in the House version of the bill, a green card cap exemption for STEM PhDs and master's students that passed, but didn't ultimately end up in the final conference version of the bill. But I saw that as a positive sign that the political system is starting to be able to unbundle these issues.

Alexander Berger: What do you make of the political power of universities on these things? I would have thought that universities would really have cared about that provision. At least on the funding issues, it seems like they do show up and have some power to wield.

Caleb Watney: This is one of the great puzzles I find in the political world. If you talk to university associations or university presidents, they'll definitely acknowledge that international students are a huge community that they care about. 

But I have not found that they put their money where their mouth is in terms of political force. Some part of this may be a collective action problem, where they benefit very directly by increasing funding in some specific NSF appropriations fund that they know their school plays particularly well in. In some sense, they can directly make that connection, whereas high-skilled immigration is an argument that's much harder to directly make. 

There's also a perception that because there's this bundling, by stepping into the issue, they may be adding political polarization to themselves. If you're the University of Iowa, you may not want to be making the case for full-fledged immigration reform. And if your model is that it has to be all-or-nothing, then I think that poses political issues.

Tyler Cowen: I would gladly triple the level of immigration and prioritize scientists. But I wonder if a key issue moving forward won't be cooperating with bio labs or science labs in allied countries or even in non-allied countries. They’ll be more and more capable. I don't think we're going to send a lot of money overseas, but access to artificial intelligence or to intellectual property: that may be a way we can get certain things done with less legibility, just like there are some trials run in poorer countries. 

There's a lot of labor there, and maybe we're not going to let it all come here. So just how we establish working relationships across borders, maybe it's a kind of frontier area where we can do something better. That would give us this new model, get us a bit away from nostalgia. Even with a much more liberal immigration policy, India is, what, almost 1.4 billion people? Only so many of them are going to come here, and we can do something there.

Dylan Matthews: I guess, but my question about that would be are we so sure our partner countries have any more functional immigration politics than we do? If the question is about partnering with, like, France, I trust the American political discourse on immigration a lot more than I trust France's.

Tyler Cowen: They don't have to let in immigrants, but they just have people you can work with and different rules of the game, and you have different people trying different approaches. We can expect maybe more progress from a number of other foreign countries than we've seen lately.

Caleb Watney: It's interesting. I think this partially gets at how much you think in-person agglomeration effects really matter. With this new era of remote work and whatnot, it might be possible to have a lot more international scientific collaborations. But it seems like there's still really massive gains just from in-person, physical interaction, and that relies on being geographically located in the same place.

Tyler Cowen: Sure. But, say, that doesn't happen the way we all would want, what do you do at the margin-

Alexander Berger: Especially in biology, right, where people learning to pipette the right way or having the right exact lab technique just ends up being weirdly important.

Caleb Watney: You could say, in some sense, across a lot of areas of cutting-edge science and technology, tacit knowledge is just increasing in importance. 

Semiconductor manufacturing seems to be the kind of thing that you really just have to work directly on the factory line with somebody else that’s been working in semiconductor manufacturing for the last 10 years to learn the knowledge that they have. There's a weird way in which especially for the very cutting-edge frontier of science and technology, in-person interactions are becoming even more important. 

Drawing back a little bit, I do think it's interesting that other industrialized countries with whom we are allied are making different decisions about their immigration system. I don't know per se if I would trust, say, France's immigration system. But the UK, Canada, Australia, New Zealand, Germany to some extent, are much more aggressively targeting international scientists and trying to bring them into their borders. The UK especially has this interesting global talent visa, an uncapped category for cutting-edge scientists. 

China is also trying to be very aggressive about recruiting back talent. They have the Thousand Talents program. They also have the less-reported Thousand Foreign Talents program, where they're explicitly trying to bring international scientists within their borders. I think China has similar issues with this because they have much lower rates of immigration and assimilation in general.

But, in some sense, the big barrier for all these countries that are not the U.S. is that people would prefer to move to the United States. If you ask them for their preferences of where they would like to move, it's still the United States as number one. Canada's been eating its way up there, but I almost think that's just like USA-lite and they are willing to go there as a secondary location.

Alexander Berger: Hey, Toronto is pretty nice. Just to make a really obvious point that I think we all know, but might not be totally obvious to listeners: I think this kind of stuff can often end up sounding like, “There's like a war for talent, and we want to win the zero-sum fight.” That can be part of the story for why this policy appeals to some people. But I think it's really important to note that there are actually really big global gains from letting scientific talent concentrate on the frontier.

There are these papers, particularly by a researcher named Agarwal, looking at International Math Olympiad winners from around the world, and finding that kids at more or less the end of high school had performed similarly on objective international tests of math talent. But comparing those who ended up in the U.S. vs. another rich country vs. staying in a lower income country, they were significantly more productive as post-PhD math researchers if they had moved to the U.S. They were more likely to publish, and more likely to do a PhD.

There are always worries about whether you have adequately controlled for everything, but this is a situation where you had quite strong early measures of talent that ended up suggesting that even moving from the UK to the U.S. can be a pretty big gain in terms of your eventual output.

Caleb Watney: I think they were about twice as productive in the U.S. I mean, they were still much more productive moving to the UK than staying in their home country. But yeah, they were about twice as productive if they moved to the U.S. than to the UK, which is a wild fact about the world. A lot of people's perception is that the UK has a pretty good scientific ecosystem. They've got Oxford and Cambridge and lots of cutting-edge scientists who are working there. And yet it still seems to be the case that the United States’s research environment is that much more productive. 

Tyler Cowen: Longer-run, is there any argument for having a greater number of centers and giving up some gains today? You might end up more innovative. Like, do we really wish that in the year 1890 everyone had moved to Britain or to Germany? Right? Some came to the US. It actually paid off.

Caleb Watney: Yeah, I think you're practically not going to get everyone, both because people have countervailing things that they care about, like being close to family, and also because there can be specialization in research culture.

There's a really interesting paper that looks at the multiple competing clusters that could have been the home of automobile manufacturing. A bunch of cities in the Midwest had large manufacturing and industrial capacity and were home to early prototyping around automobiles, and Detroit was a relative unknown. It was much smaller. One of the things the paper identifies as making Detroit the ultimate winner was that it was a physically smaller city, so it was just easier to run your prototypes back and forth across different facilities.

There can sometimes be a way in which being smaller allows you to specialize culturally in an area. If we think a lot of the power of these innovation clusters actually comes from the softer cultural side of it, that means you have to have a large chunk of people in those networks going to the local bars and talking about automobile manufacturing, or in San Francisco talking about software, or in Massachusetts talking about biotech — or, actually, there's been a small cluster around virtual reality that launched around Disney World, because there are already so many use cases there.

So I don't think it's inevitable that we end up getting a bunch of clustering in one giant mega-city, partially because innovation clusters do have this cultural dynamic, and you actually need sufficient saturation in one particular area. A bunch of specialists in petroleum manufacturing or fracking are going to be culturally different from experts in artificial intelligence.

Dylan Matthews: To pivot this to politics a little bit, do we have any experience in setting up new clusters like that? I think there's been some discussion in the U.S. about trying to relocate things to post-industrial cities, people getting priced out of major innovation hubs on the coasts. Do we know how to do that? Do we know how to do place-based policy like that, and is it at all desirable?

Caleb Watney: This is a big focus of ongoing legislation. The CHIPS and Science bill, which we've referred to a couple of times, made a major bet on reviving regional innovation. So across the National Science Foundation, the Department of Commerce, and the Department of Energy, there are these big programs that aim to revive regional innovation within particular areas, and we are making big bets on that.

I am cautiously pessimistic about our ability to actually do that, especially from a top-down level: trying to say that we want Cleveland to become the next biotech hub, and then spending lots and lots of money to make that happen. It just hasn't worked out historically. There's a whole Wikipedia page of failed Silicon Valley knockoffs that all have “silicon” in their name, like Silicon Slopes and Silicon Heartland and whatever. There’ve been a lot of attempts to recapture the magic of Silicon Valley.

Where I'm actually a little bit more bullish is on talent. A lot of these efforts have been financing-focused first, and I think financing can help, but I would be much more bullish if there was talent first. When I think about a regional innovation cluster that has succeeded more recently, it's Pittsburgh, which was going through a bit of an industrial depression. Then, especially around Carnegie Mellon University, they made a really strong, targeted bet on robotics and AI, but that's partially because they had a world-class university that was already there. They brought in a bunch of international students, and there's cool literature showing that when international students come to a university and then go on to start a company of their own, about 40 to 50% of the time it starts in the county where their university was. You can get these really strong clustering effects around universities. A talent-focused effort at regional innovation that then uses financing as the sprinkle on top may still not work, but I'd be more bullish about that.

Tyler Cowen: Even in that case, it's worked for science, but Pittsburgh still has lost population.

Alexander Berger: Yeah. I feel like this is the classic thing where industrial policy to revive dying regions is just a really, really hard problem. And it's an example of the way the policy process ends up prioritizing politics over innovation per se, right? We're sitting here recording this in South San Francisco, and we can kind of see across the bay to Berkeley. Berkeley urban economist Enrico Moretti has a really nice paper showing that even within U.S. metro areas, there are really big agglomeration effects in patenting. Moving from the fifth-biggest city in your area of research to the first, not even from the bottom of the stack, leads to notable gains in terms of output for people who are working on biology or on new micro-electronics. That's kind of the opposite of what the centers-oriented drive is going to push you towards.

Dylan Matthews: Yeah. If we're pointing to Carnegie Mellon as a success case, one of our major regional policies has been land-grant universities and setting up new universities. We've had a remarkable slowdown in creation of new universities since the ‘60s. At the same time, the most recent attempts — we probably can't see from here to Merced, but I don't think UC Merced is setting the city of Merced on fire. What are the costs and benefits of that? Do we need more universities? Do we need to rethink what they're doing before we start adding more of them?

Tyler Cowen: I think I'm a little more drawn to a longer-term perspective than the rest of you. If I think of the late 18th century, no one thinks Germany will be the prevailing science power — and Germany becomes that within a century. How they did it is maybe not clear; it wasn't reviving anything. If you go back much earlier, to the Renaissance, no one thought England had any potential as a science power. There wasn't even a notion of such a thing. Yet that's where the scientific revolution comes. There seems to be some time horizon of something like a century, where you just can't at all see what's coming.

Even though I want to triple immigration, I think that makes me a little more tolerant of the status quo than the two of you. So maybe next time, it's India, which, when I was a kid, was a country just completely written off. But 60 years from now, it will be doing a lot of great stuff. Like, I don't sit around wishing, “Oh, if we had only hired the best Toyota people in 1965, automobiles would be better.” In fact, it seems better that we let them stay with Toyota and didn't bring them to Detroit. So, I don't know. I think we should think about clusters a little more long-term and just be tolerant of things coming out of nowhere.

Caleb Watney: I mean, to potentially push back. The last time I would say we saw a major sea change in scientific leadership on the global scale was from Austria and Germany in the late 1800s, early 1900s. Then eventually, over the course of the 20th century, it shifted mostly to the United States. To my mind, that was primarily a story about massive immigration, across three specific waves of emigration and immigration. The United States ended up capturing a lot of that specific talent. 

The first was in the early stages of World War II. There was a mass wave of Jewish refugees being forced out of both Germany and Austria, and that included Albert Einstein and a bunch of the early pioneers of the Manhattan Project. Then after World War II, there was Operation Paperclip on the U.S. side and Operation Osoaviakhim on the Soviet side. Both were basically trying to recruit as many German scientists as possible, or in some sense forcibly kidnap them, back to their countries, because they realized this talent was so important.

Dylan Matthews: You got the Jews back, then you got the Nazis back.

Caleb Watney: Yes, yes. Both in turn. And the third wave, you could say, is post-Cold War, around the late 1980s, as the Soviet Union was on the brink of collapse. You have the Soviet Scientists Act of 1992. We created a specific time-delineated visa to be able to suck up as much Soviet mathematical talent as possible. 

Across those three waves, you saw a sea change in U.S. innovation, U.S. science. I do think sometimes clusters can arrive out of nowhere. But like, the last major sea change we saw was literally people moving from one place to another, and then their scientific leadership followed.

Alexander Berger: This is a totally different topic, but earlier in the conversation, I said science is really popular. It's like mom and apple pie. But when I think about comparisons to the post-World War II era of immigration and the space race and the Cold War, science was coded as optimistic. You had the growth of engineering and you have Sputnik, you have space. I think it's a little bit harder these days to imagine an optimistic utopian future, in spite of the fact that I think science, per se, and biomedical research especially are relatively popular and uncontroversial. I think it's a little bit harder to just imagine a much better future. I wonder if that undermines some forms of this public case for science, relative to a more optimistic, mid-century style.

Tyler Cowen: Yeah, it has to be more than popular is the way I would put it. And maybe it's missing that extra.

Caleb Watney: One proxy for this that I think is interesting, and I hear people sometimes talk about, is how optimistic does a country's science fiction feel? 

In the 1960s, around the time when America was optimistic about science, our science fiction was quite optimistic. A lot of people today feel like it's quite dour, quite pessimistic, always dealing with dystopian, world-ending scenarios. Chinese science fiction is sometimes pointed to as being quite positive; The Three-Body Problem, even though it's dealing with apocalyptic things in some sense, takes a much more positive approach: that humans have agency to change the world around them with their technology.

But I think that tends to be more of a lagging indicator of scientific progress, rather than a leading indicator. I think when people have seen change in their own lives happening at a much faster, more rapid rate, it's easier to imagine on a fictional scale what that would look like if trends continued over the course of my lifetime. Although I'm sure that there's some way the two feed into each other.

Tyler Cowen: My purely anecdotal sense is that teenagers doing computational biology are super excited. There's an old guard they war against. They think they're going to change the whole world and cure everything. And that might all be overstated, but I feel some of that has come back recently, I hope.

Dylan Matthews: I don't know if this is on topic as a political thing, but I was trying to think of why none of my friends and I wanted to go into science in college. And it was mostly that it seemed utterly miserable, that you worked as a vassal in the empire of some professor, doing minor tasks at the direction of some grad students. You had no freedom. You had no ability to formulate your own hypotheses and learn from them. That's a caricature, but I wonder what a policy goal of making science fun would look like.

Caleb Watney: I think part of it would be really trying to attack how long it takes to reach the frontier. The NIH tracks the average age at which people become first-time PIs, and it's consistently going up and up and up over time. Part of this is connected to the growing burden of knowledge discussions that we had in an earlier episode.

But part of it is also that it’s very hard as a young person to have agency in science. That is a key thing that drives people away from it. A lot of young people want to work on things where they feel like, within a relatively short amount of time, they can have an impact.

Alexander Berger: It is especially true in biomedical research, where the standard life cycle now involves an increasing number of postdocs, and the age at which people get their first R01 has been going up; it might be above 40 now. The career choice that you're making at that point just seems pretty unattractive relative to a lot of other options people may have.

Tyler Cowen: Who's the number one science role model right now?

Dylan Matthews: Until recently, I would have said Elon Musk.

Alexander Berger: He's an entrepreneur.

Tyler Cowen: He's not a scientist in that sense.

Dylan Matthews: Of course.

Tyler Cowen: Maybe Stephen Hawking for a while, but that's over, and he was in a way famous for something other than science. Katalin Karikó has not seized that mantle. That may be a personal choice on her part.

Dylan Matthews: Jennifer Doudna, perhaps.

Tyler Cowen: No one's heard of her out there.

Dylan Matthews: “Out there,” we say, gesturing to San Francisco.

Tyler Cowen: Out there, running through the window. Yes. Maybe in this town, but–

Dylan Matthews: Yeah. I mean, if you view computer science as a science, there might be someone. But even there, I don't know.

Tyler Cowen: That is fraught now.

Dylan Matthews: Yeah, that's fraught now. But Larry Page and Sergey Brin met as PhD students. I suppose they're not heroes, though.

Tyler Cowen: But that would help, if we had — whatever we all might think of them — people who are digested easily by the public and viewed as almost purely positive.

Dylan Matthews: Yeah. We need two or three Bill Nye the Science Guys.

Caleb Watney: I sometimes think about where these scientific cultural heroes choose to go. You can read a lot of biographies from the early 20th century, and you see folks like Vannevar Bush who go from science into government. I think that connection is less clear today.

If Vannevar Bush were alive today, it's unclear that he would go to the NIH or the NSF. “Can you as a young person have agency in a federal agency?” is also a pretty open question. That also connects to the earlier point Tyler made, that new scientific institutions might be one way around this.

Alexander Berger: Yeah. I feel like that's a broader cultural sclerosis, right? If you look at the average age of a member of the Senate over time, our institutions have gotten older, and the people who run them have gotten older. Overall, it feels harder to imagine regeneration for large swaths of existing U.S. institutions of all kinds.

I mean, universities have actually been a super interesting case. Around the time the U.S. started taking the lead in science, at the turn of the 20th century, was the last time we saw major new research universities being founded: the University of Chicago and Stanford in the 1890s. You don't really see a random billionaire starting new universities in the same way anymore.

Dylan Matthews: We've talked a lot about deficiencies in the U.S., and about how we want science to be distributed globally. Are there other countries with science policies that seem politically viable that you're envious of?

Caleb Watney: There's not another country that really stands out as like, “Oh, man, I wish I could just adopt all of their policies.” I think there are particular countries that have particular policies that I think are interesting.

I've been impressed by New Zealand's willingness to try new things. For example, they are one of the only countries that has tried to use a lottery system, at least a partial lottery, for distributing scientific grants. That's interesting, and it gets at the very question of how good scientific institutions are at selecting meritorious grants from a pool of applications. It remains to be seen how that will work out.

Actually, one thing I'm disappointed about is that they didn't do it in a randomized way, so that you could have a control group and see how the lottery performed compared to some other kind of system. But I appreciate that they were willing to take that risk in the first place.
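
(To make the evaluation design Caleb is wishing for concrete, here is a minimal, hypothetical sketch, not New Zealand's actual mechanism: a funder randomizes otherwise-comparable proposals between a lottery arm and a conventional panel-review arm, so the lottery becomes a measurable baseline rather than just an alternative. All names and numbers below are illustrative.)

```python
import random

def assign_arms(proposal_ids, funded_per_arm, seed=0):
    """Hypothetical design: split comparable proposals at random into a
    lottery arm and a panel-review arm, then fund the same number in each."""
    rng = random.Random(seed)
    proposals = list(proposal_ids)
    rng.shuffle(proposals)
    half = len(proposals) // 2
    lottery_pool, panel_pool = proposals[:half], proposals[half:]

    # Lottery arm: fund a random subset of its pool.
    lottery_funded = rng.sample(lottery_pool, funded_per_arm)
    # Panel arm: stand-in for whatever the review panel would actually pick.
    panel_funded = panel_pool[:funded_per_arm]
    return lottery_funded, panel_funded

# Years later, compare outcomes (papers, citations, survey ratings, ...) between
# the two funded groups to estimate how much panel review adds over chance.
lottery, panel = assign_arms(range(200), funded_per_arm=20)
```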

Alexander Berger: The fact that it's hard to point to cross-country examples of especially good science policy or science funding is part of my reason for pessimism about cultural or institutional reforms leading to profoundly better outcomes. I have a running debate about this with Patrick, who we did another episode with, where I think he sees a lot more room for optimism about those kinds of reforms.

The lack of vastly more successful science funding bodies in other countries to point to suggests that either funding bodies just aren't that important, or the Pareto frontier is closer than we think.

Caleb Watney: To push back on that, I think you can actually argue that the U.S. has the scarce resources that would be required for any country to actually push out the scientific frontier. So in some sense, the fact that the U.S. is stagnating only says something about how bad our institutions have been.

Tyler Cowen: Finland and Singapore have done education very well. In the realms of scientific innovation, they don't seem to have that much to show for it. 

Weirdness is maybe the input that is scarce. The United States is pretty well run, and we're weird. We're sitting very close to America's weirdest, most tolerant, most open, most chaotic major city, which is San Francisco. We're here in the legal entity of South San Francisco, but it's no accident that we're near the weird place with Haight-Ashbury and Jefferson Airplane. And I think that's what Singapore and Finland can't pull off.

Dylan Matthews: Can we do a round of over/underrated? Patents.

Caleb Watney: I'm going to say appropriately rated, but insufficiently differentiated. Basically, I think patents work really well for some sectors and really poorly for others. I would love to actually have patents, or intellectual property rights generally, much more differentiated by industry, but that would pose all sorts of issues with international IP agreements and whatever.

Alexander Berger: In some sense, I think they're underrated. I feel like nobody walks around on the street being like, "Man, patents are so great." But in some deep sense, like, it-

Dylan Matthews: Maybe in that one court in Texas.

Alexander Berger: Yeah, exactly. But in some sense, I mean, patents are what enable large-scale pharmaceutical investment in developing new drugs. That seems like the classic case where it's really valuable to be able to do that. That's pretty cool.

Tyler Cowen: High capital costs are underrated in those cases.

Dylan Matthews: Yeah. Prize awards.

Tyler Cowen: Overrated. People need opportunity, they need talent. 

Some big pot of money dangled at the end of it all? I don't know. I'm not sure about that kind of pecuniary incentive; it's at the same time too large and too small. You're not going to get to be a billionaire. I think amongst people like us who use the phrase, they're overrated.

Alexander Berger: How does that interact with Emergent Ventures?

Tyler Cowen: They're not prizes. They're grants.

Alexander Berger: Isn't part of the appeal that you're creating a validating mechanism and a community?

Tyler Cowen: Well, the community is important, but that's a kind of input. And the validating mechanism, too, is a way of networking. If they are prizes, I get more worried. If they are ways of investing in networks and giving people a start and a nudge, then I'm happier.

Caleb Watney: I would say prizes themselves are overrated, but the broader category of alternative ways to finance innovation is dramatically underrated.

Tyler Cowen: Agree with that.

Dylan Matthews: Yeah. Advance market commitments.

Caleb Watney: Underrated, definitely. Although I will say, there's a small bubble of people among whom they are overrated. They work very well under a particular set of conditions and circumstances, and I have some concern that we might start looking around and applying them like a square peg in a round hole. But they are still dramatically underutilized in the policy world.

Dylan Matthews: The Bayh-Dole Act. For listeners who aren't familiar, it enables collaborations between publicly funded universities and industry, and it allows patenting of certain publicly funded inventions.

Tyler Cowen: It could always be worse, right?

Caleb Watney: It seems fine.

Dylan Matthews: Yeah. Seems like a fine answer. Price controls.

Caleb Watney: Overrated.

Tyler Cowen: Overrated, but you're asking someone whose answer you know in advance.

Alexander Berger: By who and in what context?

Dylan Matthews: For innovation-specific products. So I think prescription drugs are the classic case, but maybe medical price controls more broadly.

Alexander Berger: In general, I think this is an example where advance market commitments might not be exactly the best idea, but doing more to reward breakthrough progress in a way where the cost doesn't end up being passed on to consumers has a lot to be said for it. I think it's a good thing that the U.S. subsidizes so much of the innovation for the world, and I'm pretty happy for us to do it. But the 20th year of a patent, discounted by some corporate decision maker at an IRR hurdle rate of something like 12% per year, is a very, very expensive way to induce marginal innovation. So finding more ways to make R&D spending cheaper for companies, rather than relying on that marginal year of financial incentive, seems pretty attractive.
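
(A quick back-of-the-envelope calculation helps illustrate Alexander's point. The 12% hurdle rate and the 20-year patent term come from his remark; the script below is just illustrative arithmetic, not anyone's actual model.)

```python
# Present value today of $1 of patent-protected revenue earned `year` years out,
# discounted at a 12% corporate hurdle rate (the figure Alexander cites).
def discount_factor(rate: float, year: int) -> float:
    return 1.0 / (1.0 + rate) ** year

rate = 0.12
for year in (1, 5, 10, 20):
    print(f"year {year:>2}: ${discount_factor(rate, year):.2f}")

# Approximate output:
#   year  1: $0.89
#   year  5: $0.57
#   year 10: $0.32
#   year 20: $0.10
#
# So the 20th year of patent protection adds only about ten cents per dollar to
# the firm's ex ante incentive to invest, while consumers in that year still pay
# full monopoly prices, which is the sense in which it is an expensive way to
# buy marginal innovation.
```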

Dylan Matthews: Funding lotteries.

Tyler Cowen: I'm all for more innovation. I'll try anything, as Caleb said, but I wouldn't bet heavily on funding lotteries, per se.

Caleb Watney: I would almost compare lotteries to giving cash directly in the international development context, where their mere presence provides a baseline against which you can compare everything else. We know there are lots of ways of spending international aid that are more effective than giving cash directly, but the fact that we have a strongly established baseline is very helpful for the larger community. I think there are lots of ways of directing scientific grants that I'm sure would be dramatically more effective than a lottery. But I'm slightly concerned that right now we don't have the baseline to test against.

Alexander Berger: And I feel like the analogy is even better than that. It might be the case that the mean dollar of aid is better than unconditional cash transfers and that the median is much worse.

I feel that way about funding mechanisms compared to lotteries. Most funding mechanisms that actually exist or are widely used might be worse than lotteries, even though in principle it might be very easy to do better than a lottery.

Caleb Watney: We recorded these sessions with several other workshop guests in the room listening in. After the initial conversation, Emily Oehlsen and Jim Savage joined in with some additional thoughts.

Jim Savage: Gun to your head, what share of GDP would you put into public R&D?

Caleb Watney: I would say it almost doesn't matter what the socially efficient rate is, because the political constraints are almost always going to bind before we reach the economically efficient rate. Even if we could effectively sink 15% of GDP into R&D, which might end up being optimal, I don't think you would ever politically be able to hit that rate. So in some sense, politically, we can always just push harder.

Tyler Cowen: I think about economics, obviously the field I know best. I would spend less on it. I would spend more money on creating open data sets, and give way less, or maybe zero, to researchers. And whatever's left over, send to the biomedical sciences.

It's so case-by-case specific. The idea that we're just going to take the status quo and shovel in a lot more money, I really don't like. I would press the no button on that. But I can think of a lot of areas, methods and programs that I would give more money to if they would reform.

Emily Oehlsen: Maybe this is too much of a can of worms, but the political legitimacy question of the moment seems to be how we should think about scientific progress in private artificial intelligence labs. What do you think?

Caleb Watney: Seems hard. 

Tyler Cowen: I'm all for it, so I wouldn't use the word accelerationist, but I think our best chance at having stable international agreements that limit AI in some ways will come about if there's American hegemony. It reminds me a bit of nuclear weapons. 

I don't think we have any choice but to proceed at a pretty high clip, understanding that safety measures only tend to get developed in a hurry when there are actual, real problems facing you. So I'm fine with saying, “This is great. Let's do more.” I don't think the dangers are zero, but I'm very much on record as staking out that position.

It just seems obvious that we're going to do this anyway, so we want to be part of it in a better way. There's no way to really fight all those incentives and stop it, so let's jump on board and improve it.

Alexander Berger: I think there's a really interesting question around the international balance of power that seems much more salient to me on this issue than in most areas.

Like, when I think about progress on cancer biology, I don't really have any sense of worry about getting beat. But the analogy to weapons systems seems more salient for AI systems. I expect there to be much more invasive monitoring of labs, much more government engagement over time, and a much greater sense of national champions than we typically see with non-profit research universities.

Jim Savage: There's this great development paper where they allocate microgrants to communities in India, and the recipients then have to allocate those grants to the people they perceive as being the most effective in their community. They have a clever mechanism for allocating that money in an incentive-compatible way. What's to stop, or what would be wrong with, say, the NIH making block grants to schools within a university, where people have rich context on each other's research, and then having them divvy it up according to where they think it's best spent?

Caleb Watney: I think you could interpret the increasing centralization of biomedical labs in particular as one way of doing this, basically. You're having larger blocks of scientists apply together for something, and then they're effectively distributing funding across the lab. It might be that we're already moving toward that world. You could also think about other ways people allocate the respect of their peers: Who votes yes on the peer review panel? Who endorses it in public letters? What's the general sense of this whole area of science? That is, in some sense, a reflection of what small, local departments think.

More generally, though, I would make a pitch for scientific surveys as a pretty underrated tool, both for defining scientific progress and for deciding which areas of science to fund more. There's a lot of concern that the current ways we measure scientific progress, things like patents or citations or papers, are pretty poor proxies. People think good science is a know-it-when-you-see-it kind of phenomenon. But that is measurable, through large-scale surveys.

So I would love to see almost a scientific census, something that really tries to measure what scientists do across the board and what they think, both about individual people's work and about broad categories of work. I'd be particularly interested to see which other discipline, maybe outside your own subfield, ends up providing you and your research the most benefit. It would be an interesting way of trying to assess where scientific positive externalities are coming from.
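
(As a toy illustration of the kind of data such a census could produce, here is a minimal sketch, with entirely made-up fields and responses, of tallying answers to one question: which outside discipline benefits your research most?)

```python
from collections import Counter
from typing import NamedTuple

class Response(NamedTuple):
    own_field: str
    most_beneficial_outside_field: str

def spillover_counts(responses):
    """Tally (own field -> outside field) pairs to sketch where respondents
    say cross-field benefits are coming from."""
    return Counter((r.own_field, r.most_beneficial_outside_field) for r in responses)

# Three imaginary respondents, purely for illustration.
responses = [
    Response("structural biology", "computer science"),
    Response("structural biology", "chemistry"),
    Response("economics", "statistics"),
]
print(spillover_counts(responses).most_common())
```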

Alexander Berger: I like the idea of allowing scientists to allocate funding themselves, in a little bit more of a market, to projects that they like. But I worry about primarily using university bureaucracies to do so. If you look at the UK system, the Research Excellence Framework has some features of this, and I only absorb it through the rantings of unhappy UK professors on Twitter. My sense is that it ends up being a very painful bureaucratic process, rather than capturing the upsides that a market-type system built on local information seems like it should ideally deliver.

Tyler Cowen: I would second those remarks, and if we were going to spend more on one thing, if I get my one-item wishlist, I want to spend more on 13 to 17-year-olds. That's when you can really influence people. I'm not sure you need to give them large amounts of money. You give them something with a science tag connected to it, help them do something at the margin. That's the one thing I would do.

Alexander Berger: You see compelling evidence from some of the Chetty papers and others showing that early exposure to innovators seems to matter a lot. That sort of role model effect, and the geographic effects in terms of what people patent and work on, makes a lot of sense to me.

Caleb Watney: Thanks for listening to the Metascience 101 podcast! Next time we’ll discuss whether the invention of new ideas is overrated when compared to the bottlenecks for diffusing them out to the rest of society.
