IN THIS EPISODE: Open Philanthropy co-CEO Alexander Berger interviews economist Matt Clancy and Stripe co-founder Patrick Collison about whether science itself is slowing down, one of the key motivating concerns in metascience. They look at the challenges of measuring scientific progress, the reasons progress might be slowing, and what we might be able to do about it.
“Metascience 101” is a nine-episode set of interviews that doubles as a crash course in the debates, issues, and ideas driving the modern metascience movement. We investigate why building a genuine “science of science” matters, and how research in metascience is translating into real-world policy changes.
Episode Transcript
(Note: Episode transcripts have been lightly edited for clarity)
Caleb Watney: Welcome back, listeners! This is the Metascience 101 podcast series. Last episode, we introduced the series with a conversation between me, Professor Heidi Williams, and Dylan Matthews, a 101 intro on “How can we do science better?” If you missed it, I highly recommend that as a starting point.
For this episode, Alexander Berger, co-CEO of the foundation Open Philanthropy, is joined by Matt Clancy and Patrick Collison. They discuss whether science is slowing down, how to measure scientific breakthroughs, and the role of institutions in our scientific enterprise.
Alexander Berger: Great. My name is Alexander Berger. I'm co-CEO of Open Philanthropy and I'm here today with Matt Clancy and Patrick Collison. Matt, do you want to introduce yourself?
Matt Clancy: Sure. I’m Matt Clancy. I also work at Open Philanthropy as a research fellow on metascience, which is what we'll talk about today. I also write a living literature review called New Things Under the Sun, which I've been doing for a few years now about academic research on science and innovation.
Alexander Berger: Great. And Patrick?
Patrick Collison: I’m Patrick Collison, co-founder and CEO of Stripe and also co-founder of the Arc Institute, which is a new research nonprofit focused on basic biological discovery. I wrote a piece in 2018 with Michael Nielsen on some of the questions I think we'll discuss today like the per-capita slowing in science.
[00:01:25] Per-capita slowing in science
Alexander Berger: Yeah, that's right. We're going to talk about whether science is slowing down. And Patrick, why don't we start with you? Could you talk a little bit about that piece with Michael and what you found?
Patrick Collison: Sure. So the title of this episode is “Is Science Slowing Down?” We made the case in this article that science, as a whole, is not slowing down, but rather that per-capita science is slowing down. People may not appreciate how much the number of people, the number of papers, and many other major metrics connected to science have shifted since the Second World War. There's been an explosion in the number of practicing scientists, up by a factor of maybe 20x. The amount of federal funding for science is up by a comparable magnitude. The number of papers being published is up enormously. Given this explosion in the denominator, one obvious question is “Well, how has the numerator changed?” where the numerator is the actual realized scientific discovery. We made the case that it's very hard to see how realized scientific breakthroughs and progress could be up by, say, a factor of 20, and that the per-capita impact is therefore necessarily down.
I think it's an interesting question: what's going on with the total returns? Maybe the absolute returns or output are up by a factor of two. Some people make the case that they're even in decline. We're explicitly neutral on that. But I think an important fact about the world today – with significant policy implications – is that the per-capita returns are almost certainly diminishing materially.
Alexander Berger: How did you actually think about whether the amount of progress happening is going up or down? Did you have some metric of scientific progress to look at?
Patrick Collison: Yeah. An unfortunate fact about metascience is that the ground truth that you really care about – important scientific breakthroughs – cannot be objectively measured. Typically, people fall back on various things pertaining to papers and citations.
To try to get a somewhat different cut on this, we decided to survey practicing scientists about their beliefs and their estimation of various Nobel Prize-winning breakthroughs. Given plausible beliefs as to the shape of the distribution of breakthroughs, you might expect that Nobel Prize-winning work would be getting significantly better through time.
When we looked at the three different fields of chemistry, biology, and physics, it's a little bit noisy. But after surveying 1,000 or so scientists, their estimation was that breakthroughs were roughly constant in significance. And in the case of physics, in slight decline.
If we're working 20x harder and only producing breakthroughs that in scientists' own regard are of roughly constant or again possibly declining quality, that seems like a significant fact that's in accordance with this general arithmetic intuition.
[00:05:03] Measuring breakthroughs via inputs
Alexander Berger: Matt, you said you do living literature reviews on New Things Under The Sun. What does the rest of the literature say about this question?
Matt Clancy: Yeah. I mean, I totally agree that measuring science is this very difficult, thorny question. I think about it as a patchwork, a dashboard of different indicators, and there are a handful of indicators that say, “No, things are going fine.” That's like the number of papers published per scientist, which is pretty constant. Or patents, which are roughly constant.
And so just counting papers doesn't seem very satisfying. When you dig deeper, everything points in the same direction as what Patrick was saying. The average is going way down, probably because the denominator is also exploding. Nobel Prizes are one thing. But maybe the Nobel Prize is a particularly niche institution. Maybe it's not representative of broader trends.
But you can look at stuff that reflects broader trends. There are a bunch of citation metrics, like: what's the probability that a recent paper gets cited? Or what share of your citations go to recently published work? You might think that reflects a vote of confidence, a signal that the work is important enough for you to build on. These have been steadily declining since the Second World War.
The share of citations to recent work, published in the last 10 years, has gone down to levels last seen at the end of the Second World War, when there was not a lot of recent science to cite because everybody was fighting. It's a large-magnitude change.
Other stuff people have looked at is: what's your chance of having a paper become one of the most cited papers of all time, in the top 0.1 percent? And that's also been declining over time. It's harder and harder to climb into those ranks. The older stuff is just sticking around and maintaining its position for longer.
Other people have tried to do really sophisticated stuff with citations, looking at disruptive papers: “I cite you, but you rendered everybody else obsolete, so I no longer cite the work that came before you.” People have developed a disruption index based on this idea: do you cite a paper in conjunction with its own references, or do you stop citing its references altogether? The probability that a paper gets cited alone, without its antecedents, has also gone down.
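To make the disruption idea concrete, here is a minimal sketch of a CD-style disruption index in Python. The data structures and function names are illustrative assumptions for exposition, not the published implementation behind the studies Matt mentions:

```python
from typing import Dict, Set

def cd_index(focal: str, citations: Dict[str, Set[str]]) -> float:
    """Toy CD-style disruption index for paper `focal`.

    `citations` maps each paper ID to the set of paper IDs it cites.
    A later paper that cites the focal paper but none of its references
    scores +1 (the focal paper "eclipsed" its antecedents); one that
    cites both scores -1 (consolidating); one that cites only the
    references counts in the denominator but contributes 0.
    """
    refs = citations.get(focal, set())  # the focal paper's own references
    score, n = 0, 0
    for paper, cited in citations.items():
        if paper == focal:
            continue
        cites_focal = focal in cited
        cites_refs = bool(cited & refs)
        if cites_focal or cites_refs:
            n += 1
            if cites_focal:
                score += 1 if not cites_refs else -1
    return score / n if n else 0.0

# Toy example: B cites only the focal paper A (disruptive signal),
# while C cites A together with A's reference R (consolidating signal).
papers = {"A": {"R"}, "B": {"A"}, "C": {"A", "R"}}
print(cd_index("A", papers))  # (1 - 1) / 2 = 0.0
```

The declining-disruption finding is, roughly, that this kind of score has been trending down: new papers increasingly get cited alongside their antecedents rather than instead of them.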
And then another citation-based metric is about patents. Maybe this is all something weird in academia, just some norm about how we cite stuff or a weird culture thing. But inventors are not playing the same game. Inventors are just looking for useful knowledge to build technologies on, which they then patent. Inventors are also citing recent academic work at a much lower rate than they used to: about 50% of patent citations to academic work were to recent work in the 1980s; it's down to something like 20% in the most recent data.
Instead of citations, people have looked at the text of the papers themselves. If you look at the titles of papers, are they referencing new words? Are they bringing new concepts into the scientific corpus? Are they at least combining old words in new combinations? That's been declining too. People have even looked at the keywords that authors attach to their papers and how many new keywords show up. That's all declining too.
For any one of these you could say, “Well, maybe this disruption index is a little suspect. Or maybe the Nobel Prize thing indexes some game that Nobel laureates are playing to award each other stuff.” But when everything points in this consistent direction, I take it as a sign that something is going on, and that we're not producing what you would expect from the huge increase in the inputs to science.
[00:09:20] Measuring breakthroughs via outputs
Patrick Collison: And just to state something that I think is maybe implicit: metascience can sound like an arcane, self-referential subfield of science, not of obviously tremendous external significance. But I think this really matters. If you think about how our lives are better than those of individuals in the 18th century, so much of that is downstream of progress in science. Infectious diseases, semiconductors, what have you.
Today, obviously an object of significant discussion is AI and the various breakthroughs and new models there. When you ask people working on AI why they work on AI, especially given some of the stated concerns, they say things like, “So that we can cure cancer.” Given the risks, they are very reasonably justifying the pursuit of AI on the basis of possible forthcoming scientific discovery.
I think the mechanics, the dynamics, and the prospects for these discoveries are among the most central questions for us as a polity today. Some implicitly devalue it, treating it as a mechanistic industrial process: if we dump more money in, somehow linearly more output will ensue. Science is just not that straightforward, as a lot of the data that Matt just cited reflects.
Matt Clancy: One more of these facts that does a good job of tying this to real-world impact is a paper that looks at how many years of life are saved per 1,000 journal articles about the same scientific topic, like cancer or heart disease. That too has been falling over time. So if you think about how, at the end of the day, we want this to cash out as health gains, we're not getting the return that we used to.
Alexander Berger: Matt, I mean, that was the same thing I was going to say. A lot of the work you were citing is about citations or the text of scientific papers. But what about things out in the world, say crop productivity? Are we investing more in R&D and seeing less output in engineering feats too, not just in scientific papers themselves?
Matt Clancy: Yeah. I think you see the exact same dynamics when you look at broader technological progress where you can debate and it's uncertain. Is the absolute rate of technological progress slowing down or speeding up? I'm not sure, but it's much less debatable that we're pouring in a lot more R&D effort and not getting a commensurate increase in the pace of technological progress. Like you said, crop yields go up at a linear rate, but we increase our inputs exponentially over decades.
R&D has gone up orders of magnitude, yet you get about 2.5 more bushels of corn per acre every year. I used to be an ag economist, so I know that one well.
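In symbols – an illustrative formalization, not something from the episode – if output grows linearly while R&D input grows exponentially, the measured rate of return must fall exponentially:

$$y_t = y_0 + \beta t, \qquad R_t = R_0 e^{rt} \quad\Longrightarrow\quad \frac{\Delta y_t}{R_t} = \frac{\beta}{R_0}\, e^{-rt}.$$

Constant absolute gains (the extra 2.5 bushels per acre each year) divided by exponentially growing effort is exactly the declining-productivity pattern being described.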
People have looked at machine learning performance on benchmarks. They're moving incredibly fast. But the number of people working on it and the compute resources going into it are going up even faster. Every industry that I know of that people have looked at – which is not a ton, but the ones where you can measure stuff well – shows the same dynamic.
[00:12:53] Books and sources
Alexander Berger: So you both said it's a little bit ambiguous whether the absolute progress per year is going faster or slower than in the past. What's the best thing written on this? Like if somebody wants more, “Okay. We get the per scientist, per dollar invested, we're not getting the same returns. But overall, are we learning more per year than we used to be?" Where should somebody look?
Matt Clancy: I think the best answer is maybe the great stagnation literature, which Tyler Cowen, who's here, kicked off. But also Bob Gordon has a famous book, The Rise and Fall of American Growth.
Patrick Collison: Which is one of those books that's super long and still actually worth reading in its entirety.
Matt Clancy: It's very focused on the absolute pace and less on the rate of return. And he makes the case that TFP growth rates, real GDP, technological progress at a very granular level, are not keeping pace with what they were doing from the 1920s to the 1970s.
Patrick Collison: I think FRED, the economics database, is probably the best source on the absolutes. I say that kind of tongue in cheek, but at some point I think you do start falling back on GDP and things like that. I can never quite figure out what the right conclusion is. The constancy of log U.S. GDP – that's just a shockingly steady exponential. Now, that's not GDP per capita, and obviously that denominator has changed a lot. But somehow, if you just look at the U.S. as a whole, as a system it has been on this really robust, steady exponential since at least 1870, possibly earlier, although the data gets worse as you go back.
Matt Clancy: I mean, even if things are getting much harder, we're also trying a lot harder. You wonder if these are feedback effects where once science starts to disappoint, you start to see the Institute for Progress pop up and say, “We need to push things forward.” And people writing Atlantic articles point the problem out. Maybe there's this endogenous response that tries to perk it up.
[00:14:57] Predictions from these models
Alexander Berger: Could you talk a little more about how people model these kinds of dynamics? What we're seeing is way more investment in science and scientists than in the past, but pretty constant, roughly linear economic growth and other kinds of progress. What model makes sense of that? And what would that model predict for the future of the world?
Matt Clancy: Yeah. So I think the canonical economic models of this are going to be by different economists named Jones. There's Ben Jones who has this idea of the burden of knowledge. He has a model of how this would play out and generates a lot of stylized facts about innovation that seem to bear out.
The basic idea of this model is that as a field progresses there's more to learn. To push the frontier, you have to know more than in the past. It's almost tautological. If you couldn't solve the problem before, it's probably because you didn't know something you needed to know to solve it. You have to learn the new thing and usually the amount of new knowledge you learn doesn't fully displace the old stuff. The burden of knowledge that you have to know keeps growing. That means that people have to spend more time getting trained. They have to assemble bigger teams of specialists to put enough knowledge onto these problems. That's one model.
The other is Chad Jones's, which is a little more agnostic about what exactly is going on. These are growth models that assume R&D gets harder and harder. The effects that Ben Jones pointed out are one explanation for why, but there could be additional ones too. He relates everything to the growth rate of the number of scientists, and shows that if the share of the economy working on science is constant, you can get constant growth. But notice that a constant share of a growing economy is itself constantly growing. That kind of matches the stylized fact that we're unsure whether the pace of technological progress is speeding up or slowing down, but we know for sure we're putting a lot more effort into it.
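In stylized form – the notation here is illustrative, not quoted from the papers – the semi-endogenous setup Matt is describing looks like:

$$\dot{A}_t = \alpha\, S_t^{\lambda}\, A_t^{\phi}, \qquad \phi < 1,$$

where $A_t$ is the stock of ideas and $S_t$ is the number of scientists; “ideas getting harder to find” corresponds to $\phi < 1$. On a balanced growth path, the growth rate of ideas is

$$g_A = \frac{\lambda\, n}{1 - \phi},$$

with $n$ the growth rate of the research workforce. So constant growth in ideas requires exponential growth in researchers, which is exactly what a constant share of a growing population delivers.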
Alexander Berger: For a long time, as Patrick mentioned, the number of scientists was just increasing really rapidly. You have more and more people being able to plug into the frontier of the global economy, due to general population growth. As those trends slow down as demographics change, does this model imply that we should expect scientific progress to radically slow down or stop?
Matt Clancy: Yeah. So that model does. The number of people is the big thing. If the population growth rate slows, you can offset that by putting a bigger and bigger share of the population into science, but at some point you run out of gas.
Patrick Collison: But that model makes homogeneity assumptions or uniformity assumptions around the per-capita productivity of the scientists. If you don't make that assumption, then I think the bad news is your current scientists are not as productive as you think or certainly the marginal scientists you've been adding. But the good news is maybe things are not going to deteriorate as much as you fear.
Matt Clancy: One way you can think about it: if the population is growing and one in a million people is an Einstein, then you're kind of okay. But if population growth is stagnant and you have to pull a larger and larger share of people out of the rest of the economy, you get Einstein first, and then you have to go down to the second-tier scientists, who we won't name.
Patrick Collison: But I don't know if you think this is fair. Impatient is not the right word, but there's something I don't love about this literature, despite my tremendous respect for the Joneses. I mean, they substantially pioneered this field, so on some level we're riding on their coattails. But what I don't like about these models is this implicit or explicit homogeneity assumption. Do you know of any models that explicitly drop that assumption, or model some unevenness there? It just feels to me that, absent strong justification, we ought not to be making that assumption.
Matt Clancy: I mean, I think there is a paper by Erik Brynjolfsson and co-authors about genius being scarce. The idea is, “Well, we got this AI explosion, we should be seeing these productivity booms,” and the paper was arguing that one thing holding us back is that you need a certain rare confluence of skills. They were thinking more in terms of entrepreneurial skill than scientists. The question of differences in skill level among scientists is something people have thought about more empirically. They document these huge disparities in citations.
Patrick Collison: However, we don't quite connect it to the production models.
Matt Clancy: I think you're right. Probably somebody has because it's a big literature, but I'm not aware of any. So it hasn't percolated to the top.
Patrick Collison: In my view, there are these empirical facts that the models don't permit. One that was very influential to me was the existence of the Cori lab at Washington University in St. Louis. Gerty and Carl Cori, who are themselves Nobel Prize winners, trained around seven other Nobel Prize winners. If one accepts that the Nobel Prize is a reasonable indicator of intrinsic merit and not just some contingent thing about your social network, then either there is something intrinsic to the people attracted to this lab – and it was not a huge lab – or there's something at the treatment level happening at that lab that subsequently yields these differential outcomes. But it seems empirically the case that people coming out of the Cori lab were different from the population of other scientists.
I don't know how to connect that to these stochastic models where all these people are just proceeding down their independent paths, and sometimes they happen to collide with another particle and a discovery exists.
[00:21:35] After the low hanging fruit era
Alexander Berger: Can't these both be true, though? These abstract models might treat scientists as unduly fungible, but they still seem to capture this key fact, which is that we're investing 20 times more. So it may be that something changed socially, or our training programs got worse, and now the average scientist is half as good as before.
But it just seems hard for me to explain the phenomenon of 20 times more input with roughly the same output, except by something that looks like plucking low-hanging fruit. Do you have another story in mind that explains the decline in per-capita output, Patrick?
Patrick Collison: I think it might be very nonlinear. In systems like science, you often get these nonlinear dynamics. An obvious one to analogize science to – though, I think you can over-extend this analogy – is entrepreneurship. Both are domains where we really care about the behavior of the tails. Until very recently there were almost no successful technology startups coming out of Europe. Now there are a few, but the disparity between Europe and the U.S. was incredibly striking.
Well, it's an open question what exactly the reason for that was. The basic existence of the disparity is super striking. Plenty of people were starting companies in Europe. Take a model where we assume uniform propensity to produce a giant success. Even if you're kinda pessimistic about Europe – maybe you discount it by 50% – no model like that, with only constant-factor or small constant-factor disparities, can account for the actual realized disparity, which is closer to two orders of magnitude.
I don't know, but imagine some multiplicative model where there are five terms that matter. If each of those is a half or a third of what it might be, then in the counterfactual you end up with this exponential dampening. Wild disparities and heavy-tailed distributions are not abnormal.
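The arithmetic behind that intuition (the five-factor framing is Patrick's hypothetical; the numbers here are illustrative): if success is a product of several factors,

$$P = f_1 f_2 f_3 f_4 f_5, \qquad f_i \mapsto c\, f_i \;\Longrightarrow\; P \mapsto c^5 P,$$

then discounting every factor by $c = 1/2$ cuts output by $2^5 = 32$x, and by $c = 1/3$ cuts it by $3^5 = 243$x – roughly the two-orders-of-magnitude disparity in question, even though no single factor differs by more than 3x.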
Alexander Berger: I'm sympathetic to that point, but doesn't it route back into your point about the oddly persistent 2% U.S. GDP growth for 170 years? That's the picture where the frontier is very malleable and extremely movable. I want to invoke the law of large numbers and say that might be true for individuals. But when we look across countries and across GDP growth rates, it just seems like you end up back at this picture of grinding out marginal gains.
Patrick Collison: If I had to give you a more concrete and specific model that addresses that to some extent: Harriet Zuckerman found – and I'll slightly misquote this figure – that on the order of 70%, maybe slightly less, of science Nobel Prizes awarded in the U.S., maybe globally, between 1900 and 1970 went to somebody who trained under someone who received or would go on to receive a Nobel Prize. In its stylized, directional sense, that finding is accurate. Again, this is thought-provoking along the lines of the Cori lab.
But then, two: there is some tacit knowledge, some set of practices for doing truly breakthrough work at the frontier, that is not uniformly distributed. In fact, it is only disseminated painstakingly, through one-to-one tuition and mentorship over some extended period of time. It may not be true, but if it were, then I think the whole picture largely hangs together. Pigeonhole-principle-wise, you can't have the influx of new people be sufficiently trained per this definition, so you end up with a meaningful differential in realized productivity.
Alexander Berger: A lot of my view of this is that Nobel Prizes are the results of competitive tournaments. And similarly for hit papers: outputs that might be only a little bit better than the replacement-level contribution end up accruing disproportionate wins.
Like a tech startup that's 2% more productive or has 2% higher TFP than another tech startup, might just capture the whole market and end up with super extreme returns. It's not clear to me that the realization of extreme outcomes suggests that the productivity frontier is so much further out. We do see these realizations, but I think we have plausible alternative models that make sense of them.
Patrick Collison: I think that's super fair and maybe a big question. One that perhaps we could bring empirical data to bear on would be: how valuable is the non-frontier work that does not win the tournament? What's the shape of that distribution?
I think there's some evidence that it's not super good. There's the former BMJ editor who says that one should assume any individual published medical paper is probably wrong. This is the former editor of the journal, not some external critic. The replication crisis is well known. In my private conversations with scientists, they simultaneously believe that some amount of fantastic work is being done, yet not only the median paper but even the 70th-percentile paper is really not that reliable.
I think you're homing in on a good question. What is the shape of the distribution, in particular setting aside the tournament winners?
Matt Clancy: I want to offer a different perspective. I wonder if the realized marginal thing is different from the ex-ante one, before we know what the outcome of the research project is going to be. Was it an equally valid bet? There's this paper by Pierre Azoulay and co-authors that looks at what happens when a particular disease area with a particular scientific approach gets a windfall of extra funding from the NIH, based somewhat arbitrarily on how they score things.
Patrick Collison: This paper was interesting and it cuts against some of my intuition.
Matt Clancy: They find that it still generates new patented medical products, which suggests that even the stuff that barely made the cut is useful. A big share of that, maybe half or more, ends up being used in patents for products different from what the research was initially intended for. That speaks to it being very unpredictable.
Most of that value is driven by a small number of high value wins that we couldn't have predicted. But it still means that there's value in the marginal applicants to the NIH. That's my pushback on that.
At the same time, I peer review stuff. I read a lot of stuff that I'm like, “When was this ever going to be a valuable contribution?” But I don't know.
Alexander Berger: The other example that we've talked about a bunch over the years is the Karikó work on mRNA vaccines. For many years, it was the marginal NIH project in some very deep sense. Then it played a causal role in getting us to the point where we could have COVID vaccines when we had them. It could simultaneously be true that hits explain the whole returns of science, and that their ex-ante unpredictability means the whole enterprise looks pretty good. I think Karikó wasn't the kind of person who looked ex-ante super likely to win that tournament. But it's a bit tough.
I want to pull back a little bit from this cross-sectional variation, though, and ask about time series. Patrick you were saying, “Look, we're investing 20 times more than we used to be. The U.S. didn't just become Europe, we still have a pretty entrepreneurial culture. People still are going into science to pursue new breakthroughs.” What's your story? What's driving the slowdown if it's not exclusively or primarily this plucking low hanging fruit dynamic?
[00:31:30] How much is institutional?
Patrick Collison: I don't know. The essay we wrote for The Atlantic explicitly proposes this as an important open question. I've tried not to affiliate too strongly with any particular causal explanation. The main thing to observe is that the mechanisms undergirding science have changed so much.
People's subconscious mental model is that there's a natural way of pursuing this stuff. A natural way for grants to be distributed. A natural way for the work to get published. Whereas, in fact, as you take snapshots over the decades, it starts to look super different. The vivid example here is that when Einstein's 1936 gravitational-waves paper with Rosen was distributed for peer review, he was very offended. He wrote to the journal editor asking, “What the hell is going on here? Why was it not just published as submitted?”
Second, we have very substantially professionalized and institutionalized the practice of science over roughly the last 60 years. In broad terms, the NIH's budget is on the order of $50 billion a year and the NSF's is on the order of $15 billion a year – it's itself an important thing to know that the NIH's budget is so much larger. The NIH is a post-war creation. So much of the progress we've made as a society on infectious diseases and many other conditions happened before the current funding model and mechanisms even existed.
If you go back and read contemporaneous work from people like James Shannon, under whom the NIH budget really grew in the 1950s, you'll see he had pretty strong views about the importance of scientific freedom for scientists – that they should be able to pursue their work without too much concern about what committees or other individuals might think of it. The question certainly comes to mind: to what degree have we managed to adhere to those founding principles, and to what degree are any of these institutional dynamics relevant?
We ran a COVID grant making program during the pandemic called Fast Grants. A significant number of the grant recipients were not themselves virologists. They were drawn from a fairly broad spectrum across the field because so many scientists were compelled to do what they could to help avert the pandemic.
Alexander Berger: Also, they were locked out of their labs otherwise. So if they wanted to work, it was COVID or bust.
Patrick Collison: Exactly. My point is they were drawn from a fairly broad set of fields and institutions, doing all different kinds of work. We asked them a question at the end of Fast Grants – not about Fast Grants in particular, but just about their lives. We asked not whether they had more money, but whether, with their current money, they could pursue whatever they wanted – because NIH grants are the bulk of the funding and are restricted to particular projects. If they could spend their current money however they wanted, how much would their research program change? Four out of five said that it would change a great deal.
I don't know how different things would be if we existed in a world in which that number was one out of five, rather than four out of five. But, it's certainly thought-provoking. I think the basic question is how much is it about the shape of knowledge and low-hanging fruit? It's much easier to cure tuberculosis than it is to cure cancer. How much is it about some of these sociological, cultural, or institutional considerations?
My personal belief is it's almost certainly at least 25% institutional. And even if it's only 25% institutional, it's worth fixing that. But I think you could make the case with a straight face that there are so many other benefits that we can bring to bear today that we could not in 1920. And that it's actually 75% institutional, but within that range I'm agnostic.
Matt Clancy: How much is institutional? I was asked to give my best guess at this by somebody at Open Phil once. I came to a similar conclusion: there are a lot of structural knowledge problems that are very difficult to solve. Maybe AI will turn out to be a way to solve them. But otherwise, the lion's share of why there's been this 20x increase in effort without a 20x increase in results comes down to things getting harder.
The institutional stuff matters, and it is something we can do something about, whereas the other stuff we have to take as part of nature. It's worth investing our efforts in finding better ways to do things. Science is a slightly weird industry. The fact that the number of papers published per person per year hasn't changed a ton makes it an outlier. Most industries get more productive over time. Most industries improve and get better at doing their job. Labor productivity rises.
It's worth thinking about why we do not have that kind of process in science. It speaks to science being an unusual economic activity. It's hard to learn new and better practices, and to identify better ways to do grantmaking or organizational design, because the results are hit-based and take a long time to play out, so improvement is hard to observe.
Even now I think everybody has learned DARPA was a good model or they've decided DARPA was a good model, but it was based on hits that took a very long time to play out for an organization that was founded in the Cold War.
Alexander Berger: Well, overall productivity in the economy has gone up by 10x or something in the last 100 years – it has to be more than that – and people are producing 10 times as many hamburgers per hour. If I took the same hamburger from 100 years ago and made it 10 times faster today, that's still a decent product. If you take a 1910 physics paper and publish 10 of them today, that's not a 10-times-better product. The tournament nature of discovery and the structure of knowledge make it really hard to eke out those marginal productivity gains, because you're trying to grow the stock, rather than the normal way we do economic activity, which is usually more about producing some flow.
Matt Clancy: But also, knowledge evolves and interacts with our institutions in interesting ways. The burden of knowledge means there are more teams. When there are more teams, that is a different way of doing science. It selects for people who can work as part of a team rather than being very cantankerous. Outsiders who challenge all the conventional wisdom don't have as easy a time in science.
It may also mean that I have to find somebody with a really specific set of skills, so I'm going to collaborate with somebody at a far away university who has that set of skills. Now we're not going to be able to chat about our project as much as we would have if we were both down the hallway from each other.
And one more: this torrent of papers – how do you keep track of it all? This leads to the rise of metrics like citations and other quantitative measures. These are better than nothing, but in an ideal world everyone would read all of the papers and decide what the best work is. People don't have the time to do that, so they use proxies. This pulls everybody towards reading the same narrow set of papers. That's maybe another source of groupthink.
Patrick Collison: Two points on these topics. One, quoting from an article that James Shannon, the NIH director, wrote in 1956: “The research project approach can be pernicious if it is administered so that it produces certain specific end products or if it provides short periods of support without assuring continuity or if it applies overt or indirect pressure on the investigator to shift his interests to narrowly defined work set by the source of money or if it imposes financial and scientific accounting in unreasonable detail.” And I've considered calling myself a Shannonist, in my attitude and my interests.
Matt Clancy: It sounds kinda like that CIA manual for sabotaging organizations.
Patrick Collison: Right, he goes on to describe the importance of scientific freedom, as I mentioned. Second, Matt, to the thing you were just saying: a very interesting fact is the disparity, the divergence, between papers as they were written in, say, the 1950s and 60s and papers as they are written today. You can go back and read stuff from the 1950s and 60s and it's readable; it is clearly written to be comprehended. There's often a recognizable narrative. That is obviously not true today. Papers are frequently impenetrable to people in even merely adjacent fields, leave aside lay people. It can be hard to understand a paper if you're not in its particular domain, and there are lots of metrics that attempt to quantify this.
But I just find myself questioning why that is. Some of it is that maybe the particular things in question are intrinsically more arcane. But I'm certain that that is not all of it. Some of it is some combination of sociology and incentives causing people to write in a far more arcane and difficult-to-understand fashion. I don't know that that has grand significance in and of itself, although it presumably slows the dissemination of some discoveries, but it's another epiphenomenon suggesting strange things going on in these institutional and cultural dynamics.
Alexander Berger: I want to backtrack a little in the conversation, Patrick, and go back to your “at least 25%” claim. Let's grant by hypothesis that productivity, in terms of innovative output per scientist, has gone down by 20 times, which is a huge, huge amount. Is the claim that things could be 25% better than they are today if we fixed the relevant institutional factors? That would be a relatively small improvement. Or is the claim that 25% of that 20x could be undone, so things could be five times better than they are today?
Patrick Collison: I meant the former, but I wouldn't repudiate the latter.
Alexander Berger: Got it. And Matt, I'm curious for your take on that. When you were saying you tried to split it up between knowledge getting harder to find – low hanging fruit being plucked vs. more institutional, sociological factors getting worse – what's your story there?
Matt Clancy: A five times improvement through just institutional tweaks, I would be shocked.
Patrick Collison: I’d only be surprised.
Matt Clancy: On the social-science side of things, the effect sizes are usually smaller than that. But I also think that stuff compounds. If you get a little bit better at something like science, which is the input to so much and which we build on, it's a cumulative thing. You get a little bit better over time, and you get a 5x return in a century, or however long it takes.
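To put a number on that compounding claim (illustrative arithmetic, not from the episode): a 5x improvement over a century only requires

$$(1+g)^{100} = 5 \;\Longrightarrow\; g = 5^{1/100} - 1 \approx 1.6\%\ \text{per year},$$

so persistent small gains in how science is run are enough to produce the large long-run return Matt describes.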
Alexander Berger: I think this is like a huge disagreement in my experience between economists and entrepreneurs. Entrepreneurs think in terms of orders of magnitude and they're like big wins are 10 times or 100 times bigger than small wins. Then microeconomists are like, “Man, it's really hard to eke out 10% gains.”
Patrick Collison: Yeah. But economists are concerned with crop production.
The U.S. did essentially no science of note in the 19th century – like, just none. If you're thinking with typical econometric production intuitions, you think, “Well, okay. Maybe we can go from nothing to slightly, slightly better than nothing in the 20th century.” In fact, we went from nothing to most of it. I think culture and institutions can really do a lot.
Matt Clancy: I think that was catch-up growth, to use another economic phrase. Like, we bolted to the frontier.
Patrick Collison: Okay. But it wasn't catch-up growth from, say, 1600 to 1700, when you saw similar gains from the adoption of the scientific method.
[00:45:17] Solutions to these problems
Alexander Berger: I was going to say the same thing as Matt. U.S. culture didn't radically change. We built some slightly better scientific institutions. We invested some more in our universities. Then suddenly we're at the world frontier. There could be a story about culture as a cross-national narrative that implies a permanence to it. But I actually think this is a better example of how quickly culture can change, and of how it's wrong to treat culture as fixed rather than as a feature of where you are in GDP or global trade networks or whatever else.
We spent a lot of time bemoaning the problems and how much worse things have gotten potentially. I'm curious, what are the best solutions? Patrick, what are you excited about?
Patrick Collison: Well, obviously we're excited about Arc, which celebrated its first birthday a couple of weeks ago. The purpose of this podcast is not to advertise Arc, but maybe to just briefly describe what it does.
It’s a biomedical research institute and scientists do the work at the institution itself. They moved their labs there. Arc provides flexible internal funding to the scientists – so they don't need to apply for outside grants – for renewable eight year terms. Second, Arc is internally building technology centers or platforms for things like functional genomics, microscopy, certain animal models or computation – all ingredients that scientists might want to draw upon in pursuit of some research goal. But universities tend to not have an obvious way to support that today because of how grant and funding models work.
We definitely don't imagine that Arc is some panacea or that everything should work the way Arc does. Our hope is that it can be a complement to the training systems and research systems as they exist today. If the status quo did not exist, Arc would probably itself have to look very different.
The 1956 Shannon piece goes on to describe how important a diversity of mechanisms and models is. Maybe the meta idea behind Arc is that there should not just be one system. In an ideal universe, there'd be many Arcs with many different methodological approaches, premises, and beliefs – both because that's an intrinsic good, since certain kinds of discoveries are more or less well suited to any given model, and because we'd be able to learn from the successes and failures as people try different things. That's one thing I'm excited about.
Second, there was an extended period where, for whatever reason, people just weren't that interested in some of these institutional questions about how we fund science. Maybe it was because the explosion of federal funding had not been around long enough for some of its longitudinal, intertemporal consequences to be evident in the way they now are. I think some of it is actually just contingent. Now there's a very vibrant set of people like Matt who are pursuing these questions full time. The existence of this discussion actually makes me quite optimistic.
I mean, I don't know how science funding is likely to look different in 10 years as compared to today. But the probability I ascribe to it being meaningfully different in some respects is a lot higher than it was five years ago. On some basic level, that’s a good thing. Obviously, the existence of ARPA-H is somewhat reflective of this shift in sentiment. We don't know yet whether ARPA-H will work, but again I think its existence is a very encouraging fact.
Third, there are particular things happening in different areas of science that it's hard not to be excited about. I think many of the things said about biology and the prospects there are correct: new sequencing technology, single-cell sequencing, RNA sequencing, the cross-product of machine learning with biology. It's a TED Talk cliché to invoke that, but I think there are quite meaningful and interesting prospects there. So my third category would be particular frontier discoveries. If you wanted to tell a story about how the next 20 years will look substantially better, in terms of real scientific discovery, than the last 20 did, I think a priori you can make that case pretty credibly, and I hope it's correct.
Alexander Berger: I see Arc as a cultural intervention. Is the target of that intervention the broader scientific ecosystem, where most of the impact of Arc flows through changing everyone else's behavior? Or is it mostly like a “We're going to build a really healthy, fruitful community for these scientists on this campus in Palo Alto where they're going to really make a difference”?
Patrick Collison: We think of it as the second. The goal of Arc is to do actual work and to make actual discoveries that others judge to be of significance. I think if Arc succeeds at that then it probably will have some of the former effect. It is possible that the former effect could, through time, come to dominate. But we don't think of it as a cultural intervention. And I think it could only possibly be an effective one if it is actually very good at what it does. All of what we think about is the second.
Alexander Berger: I hear you that it can only work as a cultural intervention if it succeeds at the second thing, but isn't the magnitude of the returns from the cultural success so big relative to the direct effects? I mean, how many PIs are there at Arc?
Patrick Collison: There are four today, but there will be in the teens in the not overly distant future.
Alexander Berger: I don't know how many life-science PIs there are in the country, but it has to be thousands. Tens of thousands? Hundreds of thousands?
You're in the ballpark of probably like one in a thousand, maybe less. Intuitively, the magnitude of the impact on the culture seems like it has to be really big – again, if you succeed – relative to the direct impact of breakthroughs.
Patrick Collison: Maybe. I don't know if that's true. I'm not a scientist, obviously. Arc's actual scientist co-founders are Silvana Konermann and Patrick Hsu, who would kill me for what I'm about to say. I'll just acknowledge that and then, I guess, proceed to say it. But if Arc were to cure some major disease, that could be an enormous deal. I don't know where those breakthroughs will come from. I don't know whether it'll be from Arc or not, but whatever that place is, its effect will probably be primarily the effect of the breakthrough and the discovery, not whatever second-order effects it has.
I don't want to downplay the possible magnitude of some of those successes. We are plucking from a super heavy-tailed distribution, and seven-sigma discoveries are in principle possible. I don't want to sound remotely self-aggrandizing by suggesting that I expect Arc to do that. With any realistic humility, the base case has to be that Arc won't. But in terms of what one aspires to, or grants as at least theoretically in the space of outcomes, I think the first-order effects truly can be the dominant ones.
Alexander Berger: Matt, are there other solutions to the problem of slowing scientific progress that you're especially excited about?
Matt Clancy: Yeah. The big thing that I am excited about is an effort to tackle the problem: why is the productivity of science different from other industries? You could probably pin that on a lot of things. But two that I have focused on are, one, it's hard to get feedback loops. Stuff takes a long time. Hits matter, so it's hard to learn. Maybe Arc, for example, is actually the best model, but it's a hits-dominated thing and you have a run of bad luck. Then funders pull out – that would never happen – and we would never learn that it could've been the way to do it.
Second, it's not like the private sector, where funders who succeed grow their pie, their methods spread, and capital gets reallocated towards them.
People don't have strong incentives to change their behavior. What the Institute for Progress and this movement are trying to do is fill in some of those feedback loops. You're not going to learn just by casual observation what works. But if you team up with social scientists, pool together data from a lot of individual scientists, get big sample sizes, and do it really carefully, then you can start to identify effects that add up over time. And if you have things like the Institute for Progress, the progress studies movement, and others creating cultural or political pressure to reform these institutions and have them work with social scientists, then hopefully this thing becomes a self-improving engine.
And we talked earlier about how cultural changes can actually matter a lot. If we could shift the culture from “the way we do things is the best way to do them” to “the way we do things is we try new things, learn from them rigorously, and then implement what works best,” that's the culture I hope we can build.
Patrick Collison: Alexander, you've been exhibiting admirable moderatorial restraint, but you're very well-informed and expert on these questions. What's your commentary on what we've discussed?
Alexander Berger: I guess, I think maybe more than both of you, I put a pretty significant amount of weight on the Chad Jones endogenous growth model, where the growth in the population that's able to do science is a pretty big driver. That makes two things stand out to me as pretty important channels. One is around immigration for scientists or potential scientists.
Notably, I think the highest functioning part of the U.S. immigration system is the H-1B cap exemption for universities. We already have this machine around training people to contribute to the scientific workforce, but as Caleb and others from IFP will definitely tell you, we have a pretty painful tendency to send people home afterwards.
Given what we know from the literature about how much more productive people can be when they're surrounded by other scientists working on similar things and of similar caliber, I think the global economic gains from allowing people to cluster in the U.S. and move the global frontier forward are huge. So I do see moving people to the most productive places as one channel. Part of what appeals to me is that we often talk about culture and how to change culture, yet I'm left feeling like I don't know how to do that. That sounds really hard.
It's hard to change laws, but these are fundamentally achievable policy changes. I mean, even in these last couple of years, right? The UK changed its high-skill immigration laws to be radically better, in a way that doesn't seem beyond the pale for the kind of change the U.S. could entertain.
The second idea that I'm surprised neither of you emphasized is progress in artificial intelligence. If you hold a scientist-driven, population-driven thesis for scientific progress overall, a question is: could we just have way more scientists, by digitizing them or by getting artificial intelligence to the point where it could improve the productivity of individual scientists or replace a lot of their labor? I think that could be a huge boon relative to marginal material or cultural interventions for frontier scientists today.
Matt Clancy: I want to say, I very much agree. Artificial intelligence has been in the back of my mind for some of this. Even at the beginning, when I was talking about these Chad Jones papers – he has another paper about what happens if you can automate some portion of scientific production. In his model, that is a way to make your scientific workforce behave as if it's growing, if the share of tasks it needs to do keeps shrinking. Instead of having 10 times as many scientists, if we can focus the ones we have on just one tenth of the tasks – like setting the vision for their labs, freed from a lot of the detailed work – then that's another way to get, de facto, the same effects.
Artificial intelligence could also potentially help with the burden of knowledge. We already have large language models that can summarize papers. Their ability to help you search through a huge corpus of text when you don't know exactly how to articulate the search terms is pretty incredible. That could get much better if you solve the problems with hallucinations and can really trust the results. I could imagine that being a big deal.
Patrick Collison: Agree with everything you said. Some amount of AI/ML exists beyond some predictability event horizon. I don't even know how to kinda speculate there. But we should at least acknowledge that so much is contingent on whether those curves soon saturate or continue to compound. There's an obvious way along the lines of what you just said, Matt, where you get large language models or the successor to the large language model or some agent as an adjunct scientist or maybe we become the adjunct and it's charting the way forward. There's that category and it seems real.
The second one, just to call it out, is slightly more traditional machine learning. It's possibly the case that across many different important domains, the character of the open questions are about how we understand or come to predict behaviors in these complex systems – with non-linear emergent phenomena as the phenomena of note. Biology really has that character, but a lot of condensed matter physics does as well or quantum matter and maybe other parts of physics. Certain parts of chemistry do.
As you go across these fields, you can maybe make the case that we've already derived the things you can derive from formalisms, and that the next things require predictive models, which we haven't had to date – you can probably make some kind of new-mathematics analogy. I think that second story could be true even if the first one isn't. Obviously both can be true, and they're probably correlated, but the first category is obvious and the second category, I think, is also very compelling.
[01:01:03] Over- and underrated
Alexander Berger: Could we do a round of over- and underrated?
Matt Clancy: Let's try it.
Alexander Berger: Okay. We'll switch off. How about PhD training?
Matt Clancy: I think it is probably overrated, since it is seen as the only way to contribute at the scientific frontier. The reason I think it's overrated is the exact formal mechanism, where you have to do it in an exact format within a department. The idea that you need to spend a lot of time learning knowledge I think is completely true, and I don't mean that you can just skip that part. But there may be other ways to do it. In the future, the large language models we just talked about, maybe they'll be able to tutor you.
Long story short, I think there's nothing special about the exact institutional makeup of getting a PhD. The knowledge that you get as part of doing a PhD, however, I think will continue to be just as important as ever.
Alexander Berger: Government funding for academic research?
Patrick Collison: Well, I would kinda say, on the first one, that I think the right mentorship is underrated, and the PhD itself, as Matt said, is overrated.
For government funding: everyone is preoccupied with how many dollars we should give. I don't have a strong view on that question. I think the question of how and where to give those dollars does not get nearly enough attention.
Alexander Berger: Peer-review?
Matt Clancy: Peer-review is probably overrated by the general public. If you have the idea that it is this very strong quality filter, you're probably mistaken.
Underrated maybe by the peer-review skeptics who think that it's just totally garbage and it makes everything worse.
I've written some stuff recently about how it's correlated with the best outcomes we can think of measuring. This is not that surprising, because science is an institution where the value of knowledge ultimately comes down to whether people like the experts in the field think your contribution is useful. If you poll a group of two or three anonymously and ask, “Do you think this is a good contribution?”, it's not surprising that their views are correlated with how things eventually turn out.
But that said, I think it also has some issues where it can induce risk aversion. It's extremely noisy. It's a lot of luck of the draw and I'm not sure that the benefits of making everyone do it are worth that cost.
Patrick Collison: Given the set of breakthroughs that happened in its absence, the scientists I know are, to me, surprisingly positive on it. But I will note that they are, in fact, relatively positive.
Alexander Berger: How about great man theories of discovery?
Patrick Collison: Probably overrated, relative to great-scene theories of discovery. I want to better understand the Cori lab phenomenon, and its existence suggests to me that pinning any phenomenon of interest on any single individual might be somewhat overrated.
Alexander Berger: Citations?
Matt Clancy: Again, this is one of those audience-specific things. In the circles I run in, people hate citations because they think citations have taken over everything. But I think they're probably best understood as a vote of confidence. Sure, sometimes you're just citing because, “Hey, we have to cite everything relevant to satisfy our peer reviewers,” but there's signal in there. If I'm going to build on your work, I'm going to cite your work. And that's sort of what we want science to be. We want it to be a cumulative enterprise, and citations are going to be correlated with it working well.
If this cumulative enterprise is working well, we should be seeing citations. I wish there was a way that we could filter the ones that are really signaling “I'm building on this guy's work.”
There are surveys people have done on the question "Is this citation one that is really influential or not?" Something like 20% of them are really influential, so 80% are some kind of noise. But still, if you have a million papers and you're looking at the citations, 20% of them are sending you a strong signal. I think that's valuable.
Alexander Berger: Corporate labs?
Patrick Collison: It seems like it's been a very good couple of years for corporate labs.
Alexander Berger: How are they rated? This is always the question for over- vs. underrated.
Patrick Collison: They've had an anachronistic vibe for the past couple of decades, and those that persisted had these ignominious declines: Bell Labs ended up part of Lucent, and PARC petered out. So I think the rating was not that high, but I think the last five years have changed that, and it's not just the AI stuff. You can look at pharma as well, and I think over the last five years pharma has actually done pretty well.
Alexander Berger: And Matt, DARPA?
Matt Clancy: DARPA? Ooh, that's tough. Possibly correctly rated. It's hard to argue with their hit successes, but it's tough to know the counterfactual. Probably, I bet, we would have eventually gotten the internet without them. I don't know if my critiques add up to a big-picture case that it's overrated. My critiques are like, "Well, they can classify all their failures, so we don't hear about them."
Also, maybe what's underrated about them is that there's nothing special about program managers and the whole field-strategist model. Maybe the secret sauce was just a mission people really believed in: "we have to save the country from these existential threats." They got super intrinsically motivated, really talented people to come work there and then just trusted them with money. Maybe that's the secret sauce, rather than a specific institutional or organizational setup.
[01:07:37] Other creative endeavors
Alexander Berger: It's a bullish hypothesis.
Why don't we end on this question: how special is science? So we've described this general decline in scientific output per person. What about other kinds of creative endeavors? Are arts, music, and film also declining in the same way? Or should we be looking for science-specific explanations?
Matt Clancy: So I think there are a lot of commonalities. Like, we're all trying to pull things from the unknown; the jobs of artists, entrepreneurs, and scientists have that in common. There are obviously differences, like the role of government funding. But to the extent you think the burden of knowledge is important, you could see some commonalities.
I also think science exhibits the same torrent-of-content problem you see in the media landscape. In science, we follow citations because it's too hard for everybody to read all the papers. Even if you read a random subset and Patrick reads a random subset, as there are more and more papers, the chance that you overlap gets smaller and smaller. We need a coordinating device, and we use citations.
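(Note: an illustrative calculation, not from the conversation: if two readers each independently sample k papers from a literature of N, the expected number of papers both have read is about k²/N, which shrinks toward zero as N grows. Hence the need for a coordinating device like citations.)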
I think in popular media we're using coordinating devices like franchises and TV shows that can build up word-of-mouth. That's in some sense what a franchise is, right? Every week is another version of the same thing that came before. Or anchoring on superstars: everybody knows we're all going to listen to Taylor Swift this week because she has a new album, rather than any of the thousand other people in the same genre releasing stuff.
Patrick Collison: I mean, that's a great question. And I haven't thought about it before, but I think we can probably take hope from looking at some of these other domains.
Something totally unrelated to science that I find interesting is that essentially no nice neighborhoods have been built anywhere in the world since the Second World War. I'm sure exceptions exist, but I have been asking for nominations for exceptions on my website for several years and I've gotten remarkably few. Building a neighborhood is a collective pursuit with some parallels to the practice of science. There's presumably no upper limit on the number of good neighborhoods we could have; who knows if there's even an upper limit on how nice a neighborhood could be.
I think it's a grim story with respect to our urban fabric that we've gotten terrible at this. We've lost the technology, so to speak. Nobody thinks that's a story of low-hanging fruit, of exhausting the supply of nice neighborhoods, or of something about the intrinsic structure of the urban landscape. Instead, I think it's necessarily a story of zoning and culture and tastes and who knows what else.
But in some way the existence of that phenomenon is hopeful, because if we have examples of domains where the regression is demonstrably, necessarily cultural, it should elevate our priors that culture is the causal agent in some of these other complex, collaborative systems. So I think it's a good question, and on net it makes me hopeful.
Alexander Berger: Thanks for joining us.
[01:11:05] Audience comments
Caleb Watney: We recorded these sessions with several other workshop guests in the room listening in. After this conversation between Alexander, Matt and Patrick ended, Tyler Cowen, economics professor at George Mason University, jumped in to share some thoughts.
Tyler Cowen: Hello, this is Tyler Cowen. I've been sitting in the room listening to this discussion. It's hard to just have to sit and listen.
I thought it was great, but I have a few comments and observations. My first is a general frustration: when people talk about the productivity of scientists, they don't look closely enough at the wages of those scientists. It's not a perfectly competitive market, but there are lots of bidders. All the hypotheses implying that scientists are simply more burdened by obstacles seem inconsistent with the general data. In most areas, real wages of scientists have been going up.
I don't think that's the phenomenon; I think their marginal productivity is just going up. Now, whether it's marginal productivity in creating science, one can challenge that. It could be that more and more we deploy them to create status, and the status may or may not be positive. But that's one thing I would add to the discussion: look more closely at wages.
The second point: I think you're all underrating the low-hanging fruit hypothesis.
Patrick Collison: Alexander's not.
Tyler Cowen: No Alexander's not.
Matt Clancy: The listeners can't see it, but Alexander is shocked and waving his arms.
Alexander Berger: Mm-hmm.
Tyler Cowen: If you look at biomedical areas, there've been phenomenal advances lately. But we all know the sector has become far more bureaucratized. It can't be that all of a sudden we started doing something right.
So maybe it's just a new general purpose technology, some notion of computation writ large, as expressed through the internet and what was behind mRNA, that is now leading to many innovations. We hope over time it raises morale and the cultural profile, gets people more positively psyched, and that becomes a positive feedback effect. I see anecdotal evidence for that. But it seems to me the most likely hypothesis is that in some – but by no means all – areas, we simply got this new general purpose technology. I'll call it computation.
All of a sudden you get these major bursts, not explicable any other way. At some point they'll be over, but in the meantime we're going to have this long run. Alexander aside, I think the rest of you didn't assign enough weight to that as a possibility. I invite responses.
Matt Clancy: All right, I've got two responses. On the first point about wages: one way to think about this is that the return on R&D is another economics question. Why invest in R&D if everything's getting so hard, 20 times as hard? The classic answer is that the economy is getting so much bigger. Marginal gains become so valuable, and that equilibrates things.
Patrick Collison: I was going to say the same thing. I think that was an important point, and you can totally have a model like the one in the Bloom, Jones, Webb, and Van Reenen paper, where they make the case on the basis of what's going on in the 'A' parameter. If the marginal productivity of a scientist were constant with respect to the size of the economy, then you would expect to see comparable exponential growth in their real wages. But near-monotonically increasing real wages? You can totally get that in a world where realized research productivity is exponentially declining. But it seems –
Alexander Berger: That seems like why we should believe – with such a strong prior – in the pluckability-of-fruit dynamic, right? You can just read it off the natural observation that the world has 2% growth, rather than singularity growth or growth that already went to zero centuries ago. The problem getting harder and the world economy getting bigger have to be in a competing race, canceling each other out, in order to explain this weirdly balanced phenomenon we've observed so far.
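(Note: a stylized sketch of the accounting being discussed, loosely in the spirit of the Bloom, Jones, Webb, and Van Reenen framework; the exponential forms below are illustrative assumptions, not the paper's exact specification. Write productivity growth as g(t) = α(t) × S(t), where S(t) is the number of researchers and α(t) is research productivity per researcher. If researchers grow exponentially, S(t) = S₀e^(nt), while per-researcher productivity decays exponentially, α(t) = α₀e^(−δt), then g(t) = α₀S₀e^((n−δ)t), which stays roughly flat at something like 2% only when n ≈ δ: exactly the competing race canceling out.)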
Tyler Cowen: It seems to me we have a lot of scientists who work only locally in firms. We may call them engineers, but they're scientists of a sort, and their wages have also been going up. It seems unlikely to me that the larger world economy is offsetting the greater obstacles. So I think the private productivity of scientists hasn't been growing at slower rates. But the thing that has changed is what Patrick called the scene. Maybe we have fewer scenes, and in the scenes, the social and private returns diverge dramatically. How much did Picasso really get paid for cubism? Well, he was in fact quite rich, but not compared to the total value of helping to create modern art.
So periodically, you get these scenes. The question then – and this is consistent with some of Patrick's points – is why we have fewer scenes in some areas for extended periods of time. That's a very different question than focusing on the obstacles facing scientists.
Matt Clancy: I wanted to make one other point about the wage returns to science. The decline of the corporate lab is another indicator that firms were not seeing that much value from investing in science on their own. I think that's consistent with a low-hanging fruit argument, or with something else going wrong with the returns to science.
On the question of low-hanging fruit, my answer is really boring: I think it's very important, but I don't have much more to say about it. Its share of the conversation tracked the share of problems that are maybe addressable, rather than the share of what I think is actually the big picture behind the scenes.
Tyler Cowen: So maybe there's this multiplicative model. You need a scene, you need cultural self-confidence, you need a new general purpose technology in your area. You need all three, maybe a bit more. If you try to measure the marginal product of any one of those, you'll be baffled: it will often look like zero, and sometimes near infinite, or super large. But maybe the purpose of policy, broadly construed, is to bring together the factors needed for the multiplicative model to operate. Figure out, in a particular setting, which factor is scarce. If you're setting up, say, an art scene, you figure out which of those things you need.
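(Note: the baffling marginal products follow directly from a multiplicative form. As an illustrative assumption, not Tyler's exact specification: if output is Y = x₁ × x₂ × x₃, with the factors standing for scene, cultural self-confidence, and a general purpose technology, then the marginal product of x₁ is x₂ × x₃, which is close to zero whenever a complementary factor is missing and enormous when the others are abundant. Measured one at a time, each factor can look worthless or near infinitely valuable.)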
In general, some of the comments seem to me to be underrating demonstration effects. The fact that you can demonstrate something is, in my mind, more powerful than how I felt you all were describing it. If Arc does, say, fix Alzheimer's, then apart from the direct benefits, I think the impact will just be phenomenal. Going back to Alexander's cultural question, you see this in culture: The Beatles do something and everyone chases after that. What OpenAI did – you now see so many people chasing after things like that.
So, a final comment: I would up the importance of these demonstration effects, and we should think about them when choosing what to do.
Matt Clancy: Thanks, guest star.
Caleb Watney: Thank you for joining us for this episode of the podcast series. Next episode, we’ll dive into the three core inputs to the scientific production function: the funding we need to run experiments, the minds who come up with those ideas in the first place, and the infrastructure to make it all happen.