Macroscience
The Macroscience Podcast
Metascience 101 - EP6: “Safety and Science”


IN THIS EPISODE: Journalist Dylan Matthews sits down with Professor Tyler Cowen, Matt Clancy, and Jacob Trefethen to discuss whether there are tensions between accelerating science and safety. With case studies where society has faced this tradeoff between progress in science and safety, they work through strategies we can use to accelerate science safely.

“Metascience 101” is a nine-episode set of interviews that doubles as a crash course in the debates, issues, and ideas driving the modern metascience movement. We investigate why building a genuine “science of science” matters, and how research in metascience is translating into real-world policy changes. 


Episode Transcript

(Note: Episode transcripts have been lightly edited for clarity)

Caleb Watney: Welcome. This is the Metascience 101 podcast series. In this episode, Dylan Matthews, a writer for Vox, sits down with economics professor Tyler Cowen, as well as with Matt Clancy and Jacob Trefethen, both of whom work on the science portfolio at Open Philanthropy. They will discuss whether there are tensions between accelerating science and safety. 

Together, they dig through case studies where society has faced this tradeoff between progress in science and safety before, from automobiles to nuclear weapons, and the strategies that we can use to accelerate science safely. 

Dylan Matthews: We’re here talking about science and safety, and the general view at the Institute for Progress and I think the general view of most economists I talk to is incredibly pro-science. You can find some models where the growth of science is the most important thing for the overall wealth and wellbeing of the world. 

Matt, maybe you could walk us through that since it gives us a good baseline. Then we can talk about points where that model might break down and the real dangers and risks that appear.

Matt Clancy: Sure. Economists assume that material prosperity is ultimately driven across long stretches of history and across countries by technology, and technology has its roots in innovation and R&D, which has a lot of its roots in fundamental science. There are long time lags. There could be decades between when discoveries are made and when they get spun out into inventions, but all the gains in income and health are ultimately attributed back to some form of technology, even if you call it social technology. 

Even things that are fuzzier, for instance, your fulfillment or your meaning in life, my own view is that those are really correlated with material prosperity. The two are not synonymous, but it's a good way to enable human flourishing more broadly.

Dylan Matthews: Got it. Over the history we're thinking about — and this is really something that starts with the scientific revolution, the Industrial Revolution and some of the changes that began in Holland and England in the 17th, 18th centuries — that was not a period of growth where everyone wins in every situation. There were serious costs, but there's a broad view we're taking as a starting point for this conversation where those are acceptable costs, or at least weighed against significant benefits. 

Tyler, how have you conceptualized that balance? It’s not a Pareto improvement, not everyone's better off – how do you think about risks? For example, ordinary risks, environmental degradation, some public health challenges that come with economic growth to date.

Tyler Cowen: I see the longer run risks of economic growth as primarily centered around warfare. There is lots of literature on the Industrial Revolution. People were displaced. Some parts of the country did worse. Those are a bit overstated.

But the more productive power you have, you can quite easily – and almost always do – have more destructive power. The next time there's a major war, which could be many decades later, more people will be killed, there'll be higher risks, more political disorder. That's the other end of the balance sheet. Now, you always hope that the next time we go through this we'll do a better job. We all hope that, but I don't know.

Dylan Matthews: The counterargument to that worry would be that the growth in technology and science is complemented by a set of what Deirdre McCloskey would call the bourgeois virtues. That this technological growth was enabled by growth in liberalism, mutual toleration, and things that you would expect to reduce warfare risk. I take it you're a little skeptical or at least unconvinced on that.

Tyler Cowen: Well, we had two world wars, and I really don't blame liberalism for those. I would blame the Nazis, Stalin, and other evil forces.

Dylan Matthews: Hot take. 

Tyler Cowen: But the point remains that more productive power ends up in the service of various parties. Now we've made what you could call the nuclear gambit: we're going to make sure leaders themselves suffer from big wars. We've had nuclear weapons, American hegemony. That's worked out relatively well so far. But of course, there's the risk that if something bad did go wrong, it could be unimaginably bad in a way that even the earlier world wars were not.

Dylan Matthews: Let’s think about some concrete ways the world could go unimaginably bad. 

Jacob, you fund a lot of science. You move $100 million a year, roughly, in scientific funding. What are the ways your scientific funding can go wrong? What are the ways you think the kinds of work you fund could make things go boom?

Jacob Trefethen: I think that everything we fund could go wrong. We fund syphilis vaccine development, and if something goes wrong with a particular vaccine candidate, that could harm someone in a phase I trial. The issue that we often think about is trying to have some sense of when the harms could be very large and stand out. The nuclear gambit that Tyler mentioned is an interesting example, where the harm is so large, we haven't observed it. We don't have a base rate to go off, whereas we have quite a few base rates in phase I trials to go off of. That can make it tricky.

The orientation that we often take to our science funding is that historically most biomedical science funding – maybe science funding as a whole – has been very beneficial for people on net. That's a baseline we should deviate from only in particular cases, where you have to tell a particular story about really bad potential harms. For us, that often comes up with bioweapons as potential misuses of biological technologies, or with potential applications of transformative AI that could be very new and hard to pick up in the data so far.

Dylan Matthews: Got it. So why is now a moment where these kinds of worries are emerging? We've had a germ theory of disease for some time. We've had vaccines since the 18th century. What is it about the current environment that makes, maybe let's start with biorisk, particularly fraught at the current moment?

Jacob Trefethen: For the worst bioweapons you could design, there are only so many people in the world who'd be able to design them or put them together. Potentially some state bioweapons programs could do that, and maybe some grad students could do that if they had the right training.

What’s changing now is the breadth of potentially harmful technologies that are available. At Open Philanthropy, we think about the intersection of AI with bioweapons, because all of the wonderful progress in language models and other parts of the AI ecosystem will make certain actions easier to take for a broader range of people.

Dylan Matthews: Got it. Matt?

Matt Clancy: One thing that has worked well for us as a species for a long time is that frontier science is pretty hard to do. And it's getting harder. You need bigger teams of more specialists, which means deploying frontier science for nefarious ends requires organizing a group of people to engage in a kind of conspiracy, which is hard to pull off.

People do pull it off — military research does happen. Traditionally, something that's helped us out is that these things get developed, but then it takes a long time before they get developed into a technology that a normal person, without advanced training or a team, can use. By the time it gets there, we understand the risks, and maybe we've even developed new and better technologies for tracking and monitoring stuff like that. Wastewater monitoring of diseases is one random example.

Tyler Cowen: But the puzzle is why we don't have more terror attacks than we do, right? You could imagine people dumping basic poisons into the reservoir or showing up at suburban shopping malls with submachine guns, but it really doesn't happen much. I'm not sure what the binding constraint is, but since I don't think it's science, that's one factor that makes me more optimistic than many other people in this area.

Dylan Matthews: I'm curious what people's theories are, since I often think of things that seem like they would have a lot of potential for terrorist attacks. I don't Google them because after Edward Snowden, that doesn't seem safe. 

I live in DC, and I keep seeing large groups of very powerful people. I ask myself, “Why does everyone feel so safe? Why, given the current state of things, do we not see much more of this?” Tyler, you said you didn't know what the binding constraint was. Jacob, do you have a theory about what the binding constraint is?

Jacob Trefethen: I don't think I have a theory that explains it.

Tyler Cowen: Management would be mine. For instance, it'd be weird if the greatest risk of GPT models was that they helped terrorists have better management, just giving them basic management tips like those you would get out of a very cheap best-selling management book. That's my best guess.

Dylan Matthews: It seems like we're getting technologies that are radically distributed in ways that have pretty serious misuse risks. As Jacob was describing, we might be at a stage where a talented 15-year-old can design a more-dangerous-than-nature virus and release it. We might be entering a stage with large language models where you might not need that much knowledge yourself. You can just ask the large language model to design something for you, or you can ask it the best way to do a terrorist attack against a given entity. You can ask it how to bring down an electrical grid.

I'm curious how all of you think about radically democratized or distributed risks like those. How is tackling those risks different from some of the other risks that governments are used to tackling from science?

Tyler Cowen: I think of it in at least two ways. The first is — at the risk of sounding like too much of an economist — that the best predictor we have is mostly market prices. Market prices are not forecasting some super increased risk. You look at VIX and right now it's low. If it went up, it might be because of banking crises, not because of the end of the world. 

The second is just the nature of robustness of arguments. There is a whole set of arguments, very well tested by history, that the United States Constitution has held up really quite well, much better than people ever would have expected, even with the Civil War.

When I hear the very abstract arguments for doom, I don't think we should dismiss them, but I would like our thoughts to race to this point: actually trying to fix those risks by staying within the bounds of the U.S. Constitution is, in fact, the very best thing we can do, and we ought to pledge that. 

That's what I find missing in the rationalist treatment of the topic, with talk of abridging First Amendment rights or protections against search and seizure. We need to keep the Constitution in mind to stay grounded throughout this whole discussion.

Matt Clancy: One additional indicator that I look at is whether the size of teams doing frontier research is shrinking over time or continuing to grow. We haven't seen it shrink yet, but we also haven't had these large language models trained on science yet. That's something I feel will be a leading indicator — whether it's getting easier for small groups to do new, powerful science.

Jacob Trefethen: We're not yet at a point where small groups can do all sorts of leading science. If you are part of a frontier group now, you should treat that with some ethic of responsibility, and you should figure out what projects you want to work on that you think will not lead to a world where it's possible for a 15-year-old to do something really damaging. 

That applies to funders too. It's something we think about a lot. We do a lot of red teaming of different things we fund before we fund them. There's a lot of work you can do upfront. There are capital-intensive projects that are going to create the future, so you don't have to do all of them. You can do some more than others.

Matt Clancy: There is precedent for how we regulate dangerous technologies, for example, who has access to high-grade military weapons and so on. In World War II, the U.S. Patent Office had a compulsory secrecy program that blocked your ability to get patents on things that were perceived to put national security at risk. We have liability insurance, and that also affects what people choose to work on and whether they choose to create inventions in more or less responsible ways. We do have a lot of tools, and I agree with Tyler that we should resort to them before we resort to some kind of crazy authoritarian plan.

Dylan Matthews: Got it. Let's talk about AI specifically. AI seems an unusual thing in that it's a general-purpose technology. There was a nice paper called “GPTs are GPTs” making the point that this is not a specialized thing. It's not nuclear weapons. It's not something where you need large industrial capacity to deploy it. But it seems you do still need large industrial capacity to build it. What does that imply for the ability to build safe systems out of it?

Tyler Cowen: Again, I view it pretty generally. The world is in for major changes, no matter what your estimate of the ratio of positive to negative. We have a lot of rigid institutions, a lot of inertia and interest groups cemented in. The combination of major changes with systems that aren't ready for them – systems that haven't seen comparable changes for a long time, maybe not since the end of World War II – is going to cause significant transition problems, no matter what kind of safety measures we take.

Say it doesn't kill us all, say there's no terrible pathogen, then the biggest impact will be on our understanding of ourselves and how that in the longer run percolates through all our institutions. I think it's going to be both hairy and messy. We don't really have a choice at this point, having opted for decentralized societies, but it's going to be wild.

Dylan Matthews: Do we have a precedent for that? The 20th century saw a lot of pretty radical revisions in how people think of themselves, how they think of themselves in relation to God, and how they think of themselves in relation to their nation, and to international causes and ideologies. Yet, those changes did not seem to make the world radically less safe in aggregate.

Tyler Cowen: Well, the first half of the 20th century was a wild time, right?

Dylan Matthews: Right.

Tyler Cowen: Since then the changes have been modest. However, the printing press, which was a much slower example, changed everything. You could point to the discovery of the New World, for example, and maybe electricity. There are parallels. You could say that in all those cases, the benefits are much higher than the costs. But the costs, just in gross absolute terms, have been pretty high.

Jacob Trefethen: I agree that the world is in for major changes, and we aren't going to be able to predict a lot of that. One thing I get frustrated with is the sense of inevitablism that can follow from that observation – the sense that it's not worth thinking along the way and picking different parts of the tech tree. I'm not attributing that to you, because you may want to pick different parts of the tech tree, but people will always be going after benefits – health benefits, making their own lives better. There are many ways to achieve those benefits, and you don't have to explore every fork. Not everything is inevitable.

Within AI, I think the part of the tech tree we're on now is a lot better than it could've been with some of the large language models. There's human feedback involved in the ways that those are performing well right now. You could imagine a worse way that they could have been built than they are. I'm sure people are thinking carefully about how to build even better models going forward.

In vaccinology, it comes up for us a lot. For instance, we want to achieve a benefit of a TB vaccine that works in adults. TB still kills 1.5 million people every year, and there's no vaccine known to work in adults. Well, should we make a transmissible vaccine, a vaccine that can be passed from person to person? Then you don't have to vaccinate everyone, and it just happens naturally. We don't think so. That's the kind of risk that we would assess as part of the decision about what platform to invest in to achieve a benefit that everyone can agree is a great benefit.

Dylan Matthews: Is there a difference in how you think about this at Open Philanthropy by virtue of being a nonprofit, a quasi-foundation entity, since there are some risks that might come up because there are unpriced externalities that emerge with new technologies? 

There are costs to putting lead in gasoline, and they don't accrue to the people putting lead in gasoline. You as a nonprofit can specialize in finding those and fixing those because the system won't fix them naturally. Is that the kind of consideration that comes up for you in terms of trying to specialize?

Tyler Cowen: Do you think you'll be a decisive actor in that kind of tuberculosis vaccine never happening? I don't know anything about it, but that strikes me as unlikely. Now, if you just want to say, “Well, it's our institution, we don't want to be a part of it,” I'm all for that. But I would doubt if you're going to be decisive.

Matt Clancy: One lesson from technology comes from competing models. Whichever one gets the head start often becomes the platform that further development builds on. If we can get a TB vaccine – and I also don't know anything about it, Jacob is the expert – that doesn't use this transmissible modality, that becomes the benchmark that alternatives get tested against. It makes it harder for other people to run clinical trials on other untested versions, because you can just get the approved vaccine.

This dynamic starts to lock in. Another more boring example of dangerous and safe technology is fossil fuels, which emit carbon dioxide, and renewable energy. Everybody's hoping that we get to the point where renewable energy is so efficient that no one even thinks about using fossil fuels. Why would you use the worst version, the one that smells bad and isn't as cheap as the solar panels?

That's one of the powers of technology. If you can pick winners, which is very hard to do, then you can potentially change the course of subsequent development.

Jacob Trefethen: That's right. Regarding the TB example, a company is trying to make mRNA TB vaccines. And all the investment that went into the mRNA platform maybe will now pay off. That'd be wonderful. Personally, I am glad that all that effort went into that platform rather than a platform with potentially more risks. There are new platforms being discussed right now. Should you enable self-amplifying RNA, where you can put in a smaller dose and maybe get less of a negative response, or is that too risky because you can't control the amount of RNA that gets produced? That's a question that should be asked now, rather than after billions of dollars of investment when you're rolling something out.

When it comes to science, the sense of inevitablism is particularly inappropriate and often gets shipped in. Maybe I'm reading the tea leaves too much, but it seems shipped in from venture capital investing or investing as a whole, where there's more competition for deals. There's a sense that I have to get into this deal because it's going to happen anyway. So I don't have to hold myself particularly morally responsible. I can just think of the counterfactual as more inevitable.

Tyler Cowen: But maybe the inevitablism is correct. Say the printing press has been invented. You're Gutenberg. Someone comes in, has a meeting with you. “What are the first 10 books we publish? These are going to be really important because everyone will want to read them, and they'll be circulated.” I'm not at all against people having that discussion. Stripe Press has it all the time. 

But at the end of the day, did it matter that much which 10 books they published first? It seems there was something inevitable about the printing press producing a super wide variety of material. Nobody's decision was very decisive in that at all.

Dylan Matthews: That seems like an odd example in that not long after the printing press, we had the Reformation. The fact is the first thing printed was the Bible, and then you had access to the Bible and religious knowledge that was somewhat less mediated by religious authorities.

Tyler Cowen: But the Bible would've been printed anyway is my point. Someone might have said, “Oh, we can't print the Bible, there'll be a reformation. There'll be religious wars.” You just say, “Well, look, that's inevitable.”

Dylan Matthews: We'll print the Koran instead. 

Tyler Cowen: Don't dismiss inevitablism so quickly. The name of it makes it sound false, just inevitable, like agency denied. But a lot of things are just super likely once the technology's been invented. Electrocutions by mistake, for example. Of course we want to minimize the number, but once you have electricity, there are going to be some.

Matt Clancy: I wrote a piece called “Are Technologies Inevitable?”. I had a fuzzy centrist view, of course, which is that the big ones are in some sense inevitable. We were probably always going to figure out electricity. There are certain things based on how nature works that you're probably going to discover and then exploit. Then details are very highly contingent. 

This TB example is something where it could be a really contingent result. In one universe, it could be very different. Would we eventually discover vaccines? Probably in all universes that have science.

Dylan Matthews: Is there a particularly vivid detail that could've gone one way or another that's motivating to you? The world could've been this way, but it was this other way, and it didn't have to be?

Matt Clancy: When there's a global crisis, the technologies that are at hand – the mRNA vaccine or something – get pulled to the frontline and deployed. Then we develop massive expertise built around them, and that sets a new paradigm. There were alternative platforms out there, such as the AstraZeneca vaccine.

If there had not been the COVID-19 pandemic at that time, maybe those platforms would've all evolved in parallel at different rates. Maybe mRNA would not have been the inevitable winner. Maybe there's something that was a few years behind, and if it had had its time in 10 years, it would have been ready for prime time and would've been even better.

It's hard to judge the counterfactual because we can't see the technologies that weren't invented, but these crises show a really clear example of something that was at hand and ready to go, then got supercharged and locked in.

Dylan Matthews: How good are our feedback loops for safety? We had a number of examples of technologies where you'd build automobiles, you build highways, and they take off. Ralph Nader points out that they're dangerous in various ways. You correct them. We get the best of both worlds, which is cars with all their benefits and safety. 

That seems to be the way a lot of technologies work. Where are some problems you guys foresee for that? Are there places where the feedback loop isn't tight enough? Where it's too imprecise?

Tyler Cowen: 40,000 Americans – is that the number? – die every year in cars or because of cars. I'm all for what we did, but it's not that good, right? It's clearly a huge positive, but I don't think we can say that we've solved the risk problem with automobiles.

Dylan Matthews: Of course. We have not solved it by any means. 

Matt Clancy: I'm of two minds about this. When you talk about AI alignment or something, I've always believed that there's probably not a lot of marginal productivity in thinking about it before the technology exists, when we don't even know what form it's going to take. Before we knew about neural nets and large language models and deep learning, we didn't know that this would be the paradigm, so it's hard for me to think that work would've been super productive. As with automobiles, you have to iteratively experiment and correct mistakes as you go, because you can't anticipate everything in advance.

But the big danger is these existential risks. You don't have the luxury of trying out an existential risk. You have to get it right, and it's really hard to get it right. That makes it a thorny problem.

Jacob Trefethen: The way it works in different parts of the economy and in different countries can be fairly different. The part of the economy I'm very familiar with is R&D for medical devices, drugs, and diagnostics. In some of those cases, we will fund grants for safety work, before it's legal to sell a product, where the safety work is very likely to reveal nothing particularly scientifically novel. We funded animal toxicity studies for the drug oxfendazole for deworming. That drug has been used in many different animals and for veterinary purposes for decades, and so it's probably not toxic. But the FDA wants assurance there.

Parts of the economy, including science, are potentially being throttled too much. There are just certain properties of particular types of science that you can identify ahead of time as heuristics and where you might want to go with a bit more care. For instance, if something is spreading or self-replicating or if something evades the immune system. You can say things ahead of time that mean you might want to slow down.

Tyler Cowen: But keep in mind, when it comes to AI, what care often means is taking good care that America is first and not nastier countries. If we're first and we have a certain amount of hegemony in the area, we can then enforce a better international agreement, as is the case with nuclear weapons. So taking care can mean hurrying, right? This has been the case in the past with many different weapons systems, and we've taken pretty good care to hurry. The world has stayed pretty peaceful. The U.S. as hegemon has worked relatively well. I worry that the word care is slanting the debate towards some kind of pause when it actually implies the opposite.

Dylan Matthews: A lot of this depends on the empirics, right? I speak to some people on artificial intelligence who think that China is just unbelievably far behind, and open source models are just completely nonviable. In that world a pause doesn't seem particularly costly.

Tyler Cowen: But you have to stay ahead of China forever, right, unless you think they're going to get nice and democratic soon. It's all over history. The Soviets get to the hydrogen bomb first, which shocked us. We had no idea. There's so much espionage. China has a lot of resources. The fact that they put out a press release, “Oh, we're not going to have consumer LLMs.” 

I saw so many AI people, even EA people, rationalists, jump on that. They just point to it. People who knew nothing about China would say, “Ah, the Chinese can't do anything, so we've got to pause, we've got to shut down.” This total vacuum of knowledge and analysis. Sam Altman has criticized this as well. It stunned me how quickly people drew conclusions from that. Maybe it just means China will do military AI and not a consumer product.

Dylan Matthews: Does that imply similar things for bio? Does that imply there should be a speed-up of certain gene editing technologies on the theory that someone else will? This arms race dynamic seems like it proves a lot, and maybe more than you intended to.

Tyler Cowen: Well, you want America to be first in science in just about every area. We haven't quite achieved that, but we've come pretty close. We have a lot of experience with that. The basic risk in a lot of global settings is just warfare, right? That's historically the risk that keeps on recurring, and that's what we need to be most focused on.

Jacob Trefethen: Your point about care can, I agree, go multiple ways. I think it could once again loop back around. Does it make you want the U.S. government to require more in terms of info security from leading labs?

Tyler Cowen: Absolutely.

Jacob Trefethen: Let's say that slowed down progress in the U.S., would you be in favor?

Tyler Cowen: I've even told my own government that, absolutely.

Emily Oehlsen: Tyler, can you imagine a scenario in which we had nuclear weapons, as we've had them over the last half century, and we had the geopolitical threat that they posed, but in addition to that, there was another threat in which they might, of their own accord, self-implode? One might self-implode, and it would set off a chain reaction in which all of them exploded. 

I think that's the way that a lot of people conceptualize AI, that there's not just the geopolitical threat, but there's also an internal threat to the system itself. When you were discussing AI, it seemed you were mostly focusing on the geopolitical arena, but I'm curious how you think about safety when it has those multiple dimensions?

Tyler Cowen: I don't think your example is so different from the status quo. A nuclear accident could happen. It could lead to a lot of other nuclear bombs going off. You'd like to limit the number of countries that have anything really dangerous. I'm not sure AI is that dangerous, but if need be, limit it. But when you look at how things get limited, I think you want a very small number of leader nations, ideally one. It's because America is militarily strong that we've enforced some degree of nuclear non-proliferation. Keep in mind, it's not just the race against China. Our allies want us to develop some form of AI, and if we do not, they will.

You're Singapore, you're Israel, you may or may not think America protecting you is enough. But if America doesn't do it, I strongly expect you'll have a lot more nations trying to do it, because they trust us more than they trust their enemies. That example all the more militates in favor of America moving first and trying to establish some kind of decisive lead.

England is trying to do it. I'm fine with that. It's not that I fear they're going to conquer us, but should America hold back so the English can set the world's safety standards? That doesn't seem like a huge win to me.

Jacob Trefethen: Does anything feel perverse about that reasoning style to you? How big do you think the risk is that makes it worth it to be first?

Tyler Cowen: It's path-dependent. That's been a lot of human history: you're always rooting for the better nations to stay ahead of the less beneficent nations. There's no guarantee you win. We've just been on that track for a long time. You can't just step off the rails and stop playing. I'm hopeful we'll do it, but I very much see it as a big challenge, even without the very most dangerous scenarios for AI. Just the risk of flat-out normal conflict is always a bit higher than we realize.

Dylan Matthews: How much would your view of this change if you changed your estimates of how beneficial U.S. hegemony has been historically? For instance, if you went from thinking that it's reduced the incidence of conflict meaningfully from 80% to 50%?

Tyler Cowen: Oh, of course, it could flip. If we were the bad guys, or if we were just so incompetent at being the good guys that we made everything worse, then you would turn it back over to the Brits. Singapore, you go first. We're America. We have nuclear weapons. We're going to stop everyone but Singapore. You could try that. It's not what I think is most plausible, but sure, as potential scenarios, yes.

Dylan Matthews: Let's talk a little bit about information security, because this sometimes gets shunted aside as the boring stepchild of some of these first-order debates on safety. But locking things down, in both bio and AI – securing relevant data and parameters – seems really important.

Matt or Jacob, how do you guys at Open Phil think about this and how do you make sure people prioritize this?

Matt Clancy: When you're talking about biorisk and biological catastrophes, there's a deep tradeoff about how much you disclose about what you're worried about versus keeping that internal. It's just this frustrating tradeoff.

It's hard to solve problems and identify solutions if you don't talk openly about what you're afraid of, but there's also a very real risk that you're advertising things that other people might not have thought about as things to do. If you're worried that there are not necessarily great solutions out there, then the net benefit of being open can quickly fall to zero. It's tricky and I don't know. On the biosecurity side, it’s a very thorny problem again.

Dylan Matthews: We've had CRISPR for about 15 years now, in various forms, and it's obviously gotten better. It's surprising to me that we haven't had, with the possible exception of the infants in China who were genetically edited, any major scandals or catastrophes come out of it. We've had this immensely powerful biotechnology, and maybe this is a famous last words thing — Norman Angell writing a book in 1909 about how Europe wasn't going to have a major war ever again — but it is kind of striking to me that we haven't had big close calls yet. Do you guys have a theory of why that is?

Matt Clancy: I don't know that it's specific to CRISPR, but in general, you still have these same dynamics: it's hard to use. It's not necessarily easy to genetically modify things. Scientists operating in labs have one set of incentives, but private firms that are looking to do this have to think about the reputational effects of how they use this thing.

I remember I went to a seminar once about genetically modified crops and how CRISPR was going to be integrated. The companies had essentially learned that if they're too cavalier with how they're going to use this technology, it has huge consumer blowback. They had thought very much about things. “We're not going to use the technology to engineer tobacco because we just don't want to be associated with anything bad.” They were going to have all these local partnerships with local seed breeders. 

Again, it just shows that these large corporations are operating in the open, and they have to think about how their decision on how to use this technology will be perceived by the wider world. Those are the people that I think are currently able to use CRISPR, so maybe that's an explanation. But again, I'm not an expert on CRISPR.

Dylan Matthews: This is the safety discussion in a series of podcasts where we've largely been taking not a skeptical view of safety, but a “safety gets abused” perspective.

There's a ratchet where you regulate things in the name of safety, and you get to a point where you can't build nuclear power plants anymore. People worry about safety to the extent that even perfectly safe things, like vaccines or golden rice, don't seem acceptable to them.

How do you form a coherent attitude about this that's neither blasé about risks of new technologies nor knee-jerk defensive in a way that impedes societal progress?

Jacob Trefethen: For us, it often starts out tricky and then ends up getting easy: we want to figure out which direction we should be pushing on a given problem, and we end up on different sides of different problems. Once we want to push for the development of something, we often just try to push as hard and as quickly as we can. That's from the seat of a funder. Funders can't actually do much operationally. We're just a part of the ecosystem there.

But there are so many obvious harms occurring in the world that could be prevented through better medical technology, through better seatbelts, all sorts of things, that once you can get comfortable and have done your due diligence, often you should go full steam ahead.

Matt Clancy: But we're also in the fortunate position of having that secretive biosecurity team that we can run things by. If you have to judge these things on a case-by-case basis, if you can't say there's some general abstract principle, then you kind of need this domain-specific knowledge. It works in our org because I guess we're this high-trust organization.

Jacob Trefethen: We definitely have the benefit of being able to have regular meetings and poll the biosecurity experts before we get involved in a new area. 

We also designed other parts of our process to avoid giving grantees a bad experience. We have a two-stage process for most of our grants. Initially, a program officer will write up whether they're interested in investigating a grant further, and we'll check in about that and try to catch any potential safety worries there, so that you don't go through a whole process with a grantee who then, at the end of the day, doesn't get money because of a safety concern.

Tyler Cowen: One lesson is that if we can avoid polarizing scientific issues, we then have access to the right nudges that can make the world much, much safer at low cost: getting more people vaccinated, making Europe less fearful of GMOs. There are many examples. China has its own problem with vaccines. They didn't want mRNA, for whatever reasons. Older Chinese people don't trust Western medicine, don't trust vaccines, and this contributed to their zero-COVID policy lasting so long. That was a massive cost, and still a lot of them are not vaccinated and presumably dying or getting very sick from COVID.

Dylan Matthews: What is the best regulated area of science and technology right now? People love to complain about the FDA, love to complain about the Nuclear Regulatory Commission. There are things that seem completely unregulated right now, large language models. Has anyone found the sweet spot?

Tyler Cowen: Every area's different, but, say, food safety seems to work fairly well. I don't think we should regulate other things the way we regulate food safety, because with food safety you just want uniformity and predictability, so you're not stifling innovation that much. A restaurant doesn't need a new dish approved by the local authorities before putting it on the menu. But if you go into a restaurant in the U.S., you can be reasonably sure you won't just get sick and die.

Jacob Trefethen: That's a good example. Plus one.

Dylan Matthews: Plus one to that. Do you have any favorites, Matt?

Matt Clancy: I'm just running through the list in my mind and saying: “Well, no, not really. No, not really. That's not great. That's not great. Too excessive, or not enough.” Food regulation is a good one. As a metapoint, it's probably true that the ones I'm not noticing are the ones that are working the best – the ones that people are not writing articles about, saying why we should reform them for the better.

Tyler Cowen: I assume this building is super safe. I'm not saying it's because of regulation, but the private decisions are embedded within some broader structure that's led to a lot of safety.

Matt Clancy: Even there, at IFP we've got our construction senior fellow, Brian Potter, who's writing all about how total factor productivity in construction is not growing as fast as it could, possibly because there's too much regulation. It's hard for me to come up with a good example.

Caleb Watney: Fires seem to be a risk that we've basically eliminated via technology like sprinkler systems.

Tyler Cowen: And fires are way down for whatever reasons, so someone has been making good decisions.

Dylan Matthews: Occupational safety, maybe. I'm not saying I agree with every decision OSHA ever made or that they haven't fallen down on some parts of the job, but injuries at work in the United States seem way down from where they used to be.

Tyler Cowen: But it's worth noting that the decline in that rate did not accelerate with the creation of OSHA.

Dylan Matthews: I'm not making a causal claim about OSHA, but we seem to be in a pretty good place.

Heidi Williams: How about lead exposure policies in the U.S.?

Dylan Matthews: Lead exposure might be under-regulated at the moment. Our regulatory agencies don't do well with legacy setups, and so they're not well prepared to do the funding and work of replacing old lead water mains or doing soil remediation or things like that. But it's hard to get leaded paint in stores now, that's for sure.

Jacob Trefethen: Depends what country you're in.

Dylan Matthews: Yes, it does depend what country you're in.

Matt Clancy: I've got one more example, one that operates behind the scenes. I've always thought BARDA does an okay job of doing stuff that is not necessarily very public.

They're stockpiling medical supplies in case of nuclear attacks or disease outbreaks, and putting up big milestone payments for the development of new antibiotics.

Jacob Trefethen: That's a great example, because we've been talking mostly about safety in the context of ways science can go wrong, but science is a contributor to the safety of society in lots of obvious senses. As a government, you could target more resources at that.

I think BARDA's a great example. I've got the JYNNEOS vaccine coursing through my veins, and that's thanks to BARDA funding JYNNEOS for smallpox before the monkeypox outbreak happened. It's thanks to the FDA approving the JYNNEOS vaccine before the monkeypox outbreak happened.

Matt Clancy: That's also related to the earlier question about how much to disclose. Every once in a while I might be worried about something, but maybe BARDA is working on it right now. I just don't know because they don't want to let people know that they're on the ball on that.

Tyler Cowen: A key point here is that it's much harder to regulate very new things well. You see this with crypto. There are some people who hate crypto and think it's just a fraud. If they're right, crypto can just go away, but they could easily be wrong. Maybe crypto is how the AIs will trade with each other. Over time, you want modular regulation of crypto: whatever particular thing crypto is used for, regulate it accordingly. If we use it for remittances, regulate it as you regulate remittances. Probably that would work fine. But while it's still evolving – while even the core uses are still being worked out – it's very hard to see regulation working well. You just want a minimum of protections against gross abuses and see what happens, then regulate things in particular areas.

Caleb Watney: We've been talking somewhat about path dependence in technology, and about how one scientific breakthrough can increase risk while another decreases some previous risk. People talk about the concept of differential technology development, where you try to be strategic: anticipate safety-increasing technologies and accelerate them so that you get them before other kinds of technologies. That, of course, depends in some ways on your ability to predict or anticipate which attributes of a technology or scientific area make it more or less safe.

Do you think that is reasonable, and should the United States be trying to do more strategic differential technology development?

Matt Clancy: We do it extensively in some domains. The Department of Energy's ARPA-E is differential tech development, or, to use the language of the economics of innovation, it's trying to influence the direction of technological change. We're trying to basically jumpstart the green revolution, renewable energy, and so forth. Plans for carbon taxes are also a de facto attempt to steer innovation away from certain directions.

There's a spectrum. On the technology side, it's easier to predict the answer to: how dangerous or how beneficial is this technology? What are the unanticipated consequences? In innovation, that's always a big challenge, but it's a smaller challenge in technology than in the area of science. 

When you're talking about fundamental science, it's not that you have to be totally agnostic. Funding Egyptology is probably not dangerous unless we get a mummy's curse. But funding gain-of-function research is obviously much more controversial. There, it's a lot harder to know what you're going to get. So those are my big-picture thoughts on that.

Tyler Cowen: I'm glad we're spending more on asteroid protection now.

Dylan Matthews: What would make you change your mind on that?

Tyler Cowen: If we learned there weren't any asteroids out there, or that they would come much more rarely than we now think.

Matt Clancy: The thing about asteroid protection is that a monitoring system is good if we can see them far away, but it is one of these things where, if you develop the technology to move an asteroid cheaply, then you can move an asteroid toward the planet as well as away from it. On the whole, I'd rather have it than not have it.

Dylan Matthews: Are there other areas where scientific potential to increase safety is underrated? So asteroid detection seems like one place. Mega-volcano detection might be one place. Presumably, there are areas where it's not merely natural disasters that you can protect against through differential development.

Tyler Cowen: By far, getting the procedures for launching nuclear weapons right – those are not entirely open and common knowledge. What exactly right means, you can debate, but we don't seem to put a lot of effort into that. Those are fairly old systems. Again, maybe you can't have a public debate, but still, I would want to make sure we're really doing the best we can there.

Matt Clancy: The other area where there's been a lot of thought on this is biosecurity. Far-UVC light – if you could develop that technology and have it embedded throughout the economy, it could make certain kinds of diseases a lot less prevalent and make it a lot harder to attack a lot of people with them.

Much better, more comfortable, fashionable PPE could be good for protecting us against future pandemics. Wastewater and novel pathogen detection stuff. Those are the ideas that I hear out there. Any others?

Jacob Trefethen: Those are all great. Also, just attempting to make vaccines for the next pandemic viruses would be great. There's lots of energy behind that, but not enough. There's good work being done, but we're not there yet on a lot of the obvious society-protecting technologies.

Dylan Matthews: Do we want to do a round of overrated, underrated? Gain-of-function research?

Tyler Cowen: Everyone dumps on it. I'm skeptical, and so many people dump on it, but maybe there's some chance it's underrated and it's actually useful. I just want to make clear that I don't know. But it's become a cliché, and I would like to see a lot more serious treatment of it.

Dylan Matthews: I met a biosecurity expert who almost in secret, as though she had a shameful secret, said, “I don't think it's totally pointless.”

Jacob Trefethen: Some of it is demanded by regulatory agencies, depending on what it means. You'll be asked to put things through resistance tests, and that's in a sense selecting for enhanced ability to evade a drug or something. We shouldn't be doing things that increase the transmissibility or the pathogenicity or harmfulness of a pathogen. I'm so mainstream in that way.

Dylan Matthews: Phase I trials for drugs.

Jacob Trefethen: They're good.

Tyler Cowen: But the whole system of clinical trials needs to be made much cheaper, have a lot more trials, be much better funded, and have far fewer obstacles. That seems to me one of the very worst parts of our current system, and it makes everything much less safe.

Jacob Trefethen: I agree with you generally. I think that I might disagree on some specific cases, but what about Phase 1s in particular?

Tyler Cowen: I don't have a particular view, but everyone I talk to says there are so many different obstacles. Exactly which ones you should loosen up, I don't pretend to know, but it seems something's not working.

Jacob Trefethen: Right.

Dylan Matthews: Industry capture.

Jacob Trefethen: Of regulators?

Dylan Matthews: Maybe overrated or underrated as an explanation of why the world is the way it is. I assume most people would say they're against industry capture.

Jacob Trefethen: Got it. Just checking. I think probably overrated in some circles, underrated in others. I think on net, maybe underrated as an explanation.

Tyler Cowen: Normatively, I don't think industry capture is necessarily so bad. It depends on the alternative. A lot of times it gets things done. You build up cities, you have a lot of construction. The government where I live, Fairfax County, at times has been quite captured by real estate developers. I'm all for that. Bring it on. It's one good recipe for YIMBY.

Dylan Matthews: The CDC.

Matt Clancy: I mean, they're not highly rated at the moment.

Jacob Trefethen: I think scientific talent at the CDC, underrated. Outcomes, probably appropriately rated as not so hot in recent years.

Dylan Matthews: Nuclear waste.

Jacob Trefethen: Dial it up.

Tyler Cowen: I've been reading all these pieces lately, saying it's not such a big problem. I don't feel I can judge. But given the alternatives, I want more nuclear power. If we have to deal with waste, I say, let's do it.

Dylan Matthews: Geothermal.

Jacob Trefethen: Probably underrated.

Matt Clancy: Seems underrated.

Tyler Cowen: Same.

Dylan Matthews: Global zero for nukes.

Tyler Cowen: Just impossible.

Matt Clancy: Is it a serious plan for many people?

Tyler Cowen: Who goes first?

Dylan Matthews: Barack Obama seemed to believe in it a little bit. He seemed important for a while.

Tyler Cowen: But what did he do? I don't blame him. I think it's impossible, but you can cut back on the number, it doesn't really matter. You might save some money.

Dylan Matthews: Yeah. Zoonosis.

Matt Clancy: I will say that when we worked for the Department of Agriculture, we looked at this a lot for antibiotic resistance and farm animals. They use antibiotics, and it was always feared that this would be the vector through which we would get very bad, antimicrobial-resistant [illnesses] coming to humans. From what I could tell, it was very hard to make that case in practice. In theory, it's compelling and the story makes sense, but it was really hard to ever trace back conclusively an example. So, it's probably still correctly rated.

Jacob Trefethen: I would say underrated by the broader public. You could just make vaccines and antivirals against some of the obvious potentials, but obviously, we haven't done that in some cases.

Tyler Cowen: All I know is I hear a lot of claims I don't trust.

Dylan Matthews: AI model evals, either voluntary or mandatory.

Jacob Trefethen: Do listeners know what that is? I guess not rated.

Dylan Matthews: Not rated. The idea would be that to release something like GPT-4 or Claude or another large language model, you would have to go through either a non-governmental organization, like the Alignment Research Center, or a government agency that tests to make sure it doesn't do a set of dangerous things.

Jacob Trefethen: For models above a certain size, it's something that's got to happen at some stage. There's another one of these episodes about the political legitimacy of science. If you have industries or scientists taking what the public perceives as large risks on behalf of other people, that's not going to last. So, probably underrated.

Tyler Cowen: We don't yet have the capacity to do it, but as you know, when Apple puts out a new iPhone, they have to clear it with the FCC. I mean that's been fine. There's a version of it that can work, but right now, who exactly does it? How is it enforced? What are the standards? Is Lina Khan in charge? Is Elizabeth Warren in charge? I just don't get how it's going to improve outcomes. It'll become a political football and polarize the issue, so I say we're not ready to do it yet.

Matt Clancy: Self-regulation – along with nonprofits that are focused on this – is probably a good place to start, rather than involving government agencies. I agree with Jacob that eventually you probably want to codify this somehow, but you have to start somewhere, and this seems a reasonable place to start.

Dylan Matthews: Luddites, either current or historical.

Tyler Cowen: They were smart. They didn't see how good progress would be. They didn't know fossil fuels would come into the picture. They're a bit underrated, maybe. They weren't just these fools.

Matt Clancy: I do have some sympathy for them, I'll admit. They were responding to real problems.

Jacob Trefethen: I do think it's wise to consult what makes your life go well or not. There are a lot of things that don't feel connected to technology directly. It’s falling in love, having friends, it's all of that. 

In the grand scheme of things, that is probably a connection that we as a community need to keep making, if we want the changes in metascience and the science world broadly to continue to matter to people. It gives me a little bit of generosity toward the Luddites too.

Dylan Matthews: That seems like a beautiful place to end. All you need is love. 

Caleb Watney: Thanks for listening to this episode of the Metascience 101 podcast series. Since we recorded this episode, Matt Clancy has published a long and thoughtful paper sketching out a framework to help think about these tradeoffs called “The Returns to Science in the Presence of Technological Risk” — I highly recommend reading it if you thought this conversation was interesting. For our next episode, we will consider the role that political legitimacy plays in our scientific enterprise.
