Dr Miah Hammond-Errey is joined by Jason Matheny, CEO of the RAND Corporation and founder of CSET, to delve into the complexities of regulating emerging technologies, from AI to biotechnology: what the United States can learn from Australia, the opportunity a current bottleneck in compute capacity offers democracies, and his work at IARPA, 'the DARPA of the intelligence world', using innovative methods to solve the hard problems of policy and national security. They also discuss the role of alliances such as Five Eyes in combating AI-generated disinformation and why standards bodies need greater support.

Jason is the President and CEO of the RAND Corporation. He previously led technology and national security policy at the White House, on the National Security Council and in the Office of Science and Technology Policy. Jason founded the Center for Security and Emerging Technology (CSET) at Georgetown University, served as a Commissioner on the National Security Commission on Artificial Intelligence, and was director of the Intelligence Advanced Research Projects Activity (IARPA). He has also worked at the World Bank, Oxford University, the Applied Physics Laboratory and Princeton University.

Technology and Security is hosted by Dr Miah Hammond-Errey, the inaugural director of the Emerging Technology program at the United States Studies Centre, based at the University of Sydney. Miah’s Twitter: https://twitter.com/Miah_HE

Resources mentioned in the recording:

Jason Matheny
Dr Miah Hammond-Errey and Jason Matheny. Source: USSC


Making great content requires fabulous teams. Thanks to the great talents of the following. 

  • Research support and assistance: Tom Barrett
  • Production: Elliott Brennan
  • Podcast Design: Susan Beale
  • Music: Dr Paul Mac

This podcast was recorded on the lands of the Ngunnawal people, and we pay our respects to their Elders past, present and emerging — here and wherever you are listening. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.


Episode transcript

Check against delivery

Miah Hammond-Errey: [00:00:02] Welcome to Technology and Security. TS is a podcast exploring the intersections of emerging technologies and national security. I'm your host, Dr. Miah Hammond-Errey. I'm the inaugural director of the Emerging Technology Program at the United States Studies Centre, and we're based at the University of Sydney. My guest today is Jason Matheny. Thanks for joining me.

Jason Matheny: [00:00:23] Thanks for having me on, Miah.

Miah Hammond-Errey: [00:00:25] Jason is the president and CEO of RAND Corporation. He previously led White House technology and national security policy in the National Security Council and the Office of Science and Technology Policy. Jason founded the Center for Security and Emerging Technology, or CSET, at Georgetown University, was a commissioner on the National Security Commission on Artificial Intelligence and the director of the Intelligence Advanced Research Projects Activity, IARPA. He's also worked at the World Bank, Oxford University, the Applied Physics Laboratory, and Princeton University. We're really thrilled to have you here, Jason.

Jason Matheny: [00:00:57] Oh. I'm really thrilled to be here and I'm honoured also to be following Sue Gordon, who's one of my former bosses and just an amazing human being.

Miah Hammond-Errey: [00:01:05] Excellent. We were stoked to have her on and we're really thrilled to have you here, too. We're coming to you today from the lands of the Ngunnawal people. We pay our respects to their elders past, present and emerging, both here and wherever you're listening. We acknowledge their continuing connection to land, sea and community and extend that respect to all Aboriginal and Torres Strait Islander people.

Miah Hammond-Errey: [00:01:26] So Jason, it's been a really big year already in 2023 for tech policy. With your experience in the White House and intelligence, what are some of the top technology challenges the US is wrangling with and where have you looked for inspiration to address these challenges?

Jason Matheny: [00:01:40] Yeah, I think one of the big ones that's top of mind for folks right now is AI. ChatGPT sort of woke up the policy community in ways that were very concrete. We had been working on this for several years, but elected representatives actually playing around with AI systems, because of ChatGPT, made this much more tangible for the policy community; before that, it was more abstract. So that's certainly one of the things that's top of mind for policymakers right now, I think not just in the US but also here in Australia and globally.

Jason Matheny: [00:02:19] By the way, I was really impressed by a recent discussion paper that came out of the Australian Government on safety issues related to AI. I can't remember a United States deliberative document whose first analytic effort was, hey, let's look at what the rest of the world is doing before deciding on what our own governance pathway is going to be. So there's this really rich discussion in the paper that looks at the approaches a variety of countries are taking, regulatory and non-regulatory, mapping them on a spectrum between different levels of governance. And I think that doing that kind of international scan is really useful. There is a performative and then a substantive aspect of tech policy, and the substance of tech policy that I think is probably most consequential right now is going to be around things like AI, semiconductors and synthetic biology. Those three are just profoundly important because they all have enormous upside and also enormous risk. So figuring out how to navigate those is especially important.

Miah Hammond-Errey: [00:03:33] You've talked about biotechnology and also then synthetic biology. Could you break those down a little bit for people so that they can have some insight into what you're talking about?

Jason Matheny: [00:03:42] Yeah. So biotechnology is the more general term for leveraging biology as a technology, and that can include everything from pharmaceuticals to the design of organisms that can produce materials or chemicals. Synthetic biology is a newer term used to encompass DNA synthesis, so the production of DNA, or the editing of DNA or other nucleic acids, to produce organisms that have specific properties. And what's unusual about synthetic biology as a discipline now is that it turns biology into a subset of engineering, so that you can get biology to carry out very specific tasks with, at least in intent, the same level of precision as other engineering disciplines. And one of the things that we'll have to navigate as a technology discipline is that there are these great things you can do with synthetic biology, like producing new molecules that have therapeutic effects and designing new therapies that can improve human health, but you can also use it to design novel pathogens that are capable of pandemics even worse than Covid, or even worse than the 1918 influenza. Things that could be quite catastrophic.

Miah Hammond-Errey: [00:05:14] Yeah, it's actually quite a scary proposition, particularly when you add in that genetic element, which is also part of it. Thank you for setting that out so clearly. I want to just pause for a moment on the tensions between democratic governments and the regulation of these technologies. I heard you speak a little bit about this this morning, and I was wondering where you see those tensions lying and, drawing out from that discussion paper, how we can continue to inform our approach.

Jason Matheny: [00:05:42] Yeah. One of the things that I think about with the approach we're likely to take with AI as an example, though I think this could apply just as much to synthetic biology or quantum or materials, semiconductors, etc., is that we need to be thinking about the kind of asymmetric advantages that democracies get from a technology. And one thing that's really interesting about AI right now is, I think, a sense that there might be an asymmetric advantage for democracies to benefit from these large language models compared to autocracies, because autocracies generally don't like the idea of some system that they don't control generating text or other media. And it is true that the text generated by these large models is pretty unpredictable. If you want to actually govern the content that comes out of these models, that's a big challenge. And so the regulations in China, for example, around these large language models are going to make it very difficult for tech developers to actually work on the state of the art. There was a really fantastic piece in Foreign Affairs by Helen Toner, Jeffrey Ding and others just this week describing some of the challenges China is going to have in governing large language models if they want to treat them like most other things and apply censorship standards to the outputs of those models.

Miah Hammond-Errey: [00:07:28] To succeed in AI, you need, as you've just pointed out, among other things, data and leading-edge compute capacity. Restricting competitor access to compute is something you proposed in a recent Senate testimony. You've kind of just talked us through why. What might that mean in practice? I mean, you've suggested a risk-based regulatory framework, but what does that mean for open innovation?

Jason Matheny: [00:07:50] Yeah, what you'd like is to have some sort of approach to governance that allows all the good uses of the technology while preventing all the bad uses of the technology...

Miah Hammond-Errey: [00:08:02] The utopian regulatory framework.

Jason Matheny: [00:08:04] That's the goal here. And so what's interesting about AI is that right now the bottleneck for these large models is access to compute. And that is actually something where you could have this level of screening, where you could have end-use or end-user controls, because the infrastructure that's used for training these large models is not widely distributed. It's highly concentrated, and it's highly concentrated right now in a set of companies and democratic countries that follow the rule of law. So if you really wanted to ensure that these models are used responsibly, we're in a pretty favourable window of opportunity right now, where we can have those end-use or end-user controls at the point of compute: cloud computing providers could do 'know your customer' screening, but also sort of 'know your process' screening. Like, is this particular process that's running on the computing infrastructure one that is creating a really good formula for a medicine, or is it one that's likely to be used for training a cyber weapon?
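To make the 'know your customer' and 'know your process' idea concrete, here is a minimal sketch in Python of the kind of check a cloud provider might run before accepting a large training job. The threshold, field names and policy logic are entirely hypothetical illustrations, not anything proposed in the conversation; real controls would be set by regulators and providers.

```python
from dataclasses import dataclass

# Hypothetical compute threshold for a frontier-scale training run.
LARGE_TRAINING_RUN_FLOPS = 1e25

@dataclass
class TrainingJob:
    customer_id: str
    customer_verified: bool   # passed identity/sanctions screening ('know your customer')
    estimated_flops: float    # total compute the job is expected to consume
    declared_purpose: str     # e.g. "drug discovery" ('know your process')

def requires_review(job: TrainingJob) -> bool:
    """Flag jobs for human review under this hypothetical screening policy."""
    if not job.customer_verified:
        return True                                   # unknown customer: always review
    if job.estimated_flops >= LARGE_TRAINING_RUN_FLOPS:
        return True                                   # frontier-scale run: review regardless
    return False

job = TrainingJob("acme-bio", customer_verified=True,
                  estimated_flops=3e25, declared_purpose="protein design")
print(requires_review(job))  # True: scale alone triggers review in this sketch
```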

Miah Hammond-Errey: [00:09:19] This leads really neatly into a conversation about supply chains. Can you connect for the listeners the supply chain linkages behind compute capacity?

Jason Matheny: [00:09:30] Yeah. So the compute that's being used for training these large language models depends on a small category of chips: advanced GPUs and, to a lesser extent, machine learning ASICs, so specialised AI chips. And the supply chain for those chips is extremely narrow. There's just a couple of companies that produce these chips. The manufacturing of those chips is done right now in just one place in the world, in Taiwan. And the tools that are used for the manufacturing of those chips in Taiwan are produced by just a few countries, primarily the United States, Japan and the Netherlands. So it's this highly concentrated supply chain with a bunch of chokepoints that right now, again, are favourable towards governance, because they're concentrated in countries that have a strong rule of law, that have a strong governance system, and that care about safety and security. It's just incredibly costly for somebody to build their own semiconductor foundry to produce the chips that are needed for the data centres that train the large AI models. So I think we're in a favourable position in focusing on compute as the area of governance, in part because the supply chain is so narrow, so concentrated, and has these key chokepoints.

Miah Hammond-Errey: [00:11:08] Does that mean, then, that other technologies, like broader AI applications or quantum, for example, might face different regulatory challenges? And, I guess, a two-part question: how do you see that playing out in that decoupling, at least from a compute perspective, between the US and China?

Jason Matheny: [00:11:26] There are certain kinds of chokepoints in quantum supply chains. As one example, superconducting approaches to quantum sensing and quantum computing often rely on these Josephson junctions that are manufactured using niobium, and there aren't that many niobium foundries in the world. So that ends up being one chokepoint. In biotech, DNA synthesis and DNA sequencing involve tools that have components that aren't widely manufactured, so there are chokepoints there. I think, though, the semiconductor industry is probably unique in its level of concentration. It is, though, at the base of these other technologies. So in some ways, semiconductors will still be a chokepoint for these other technologies as well.

Miah Hammond-Errey: [00:12:19] I've written a lot about the information environment and have noted the automation and amplification of disinformation that AI enables. What applications of AI in disinformation campaigns are you most concerned about?

Jason Matheny: [00:12:31] You know, right now the main rate limiter for disinformation campaigns has just been the level of human effort that Russia or China or others need to employ to carry out disinformation attacks. The challenge is when you can automate that fully, and right now it defies forensics. We can't reliably distinguish between auto-generated text, or in some cases auto-generated images, and authentic text or authentic images. And it's going to get harder and harder, I think, to tell the difference between auto-generated video and authentic video. So I think it will lead to a general level of mistrust of our media environment. And it's going to have applications, I think, not only to disinformation and influence campaigns, but also to spearphishing.

Miah Hammond-Errey: [00:13:28] Given the extensive noise already in the information environment, can you describe why distinguishing user-generated content from AI-generated content is important? And how do you see us doing that without straying into content moderation?

Jason Matheny: [00:13:43] Yeah, I think it's important in part because the functioning of democratic governance depends on having some shared sense of what the truth is. And it's really hard to reach compromise or agreement in policy unless you have some shared basis for deciding what is true and what's not. So if we're saturated with a media environment that has no clear distinctions between truth and fabrication, I think that makes governance much, much harder. I do think that there are some technical approaches that could be used to make authenticity clearer. Among them are proposals to embed watermarks in generated media, or in authentic media, and allow the distinction between the two. So having something like cryptographic signatures embedded within different kinds of media, whether that's audio or imagery or video. And there are probably also advances on the forensics side. But I think that forensics is likely to be a kind of cat-and-mouse game, a constantly moving arms race between detection and generation, whereas watermarking is something that could be more robust.

Jason Matheny: [00:15:04] We deal with this today with counterfeit currency, which for most of us is a minor nuisance, if we notice it or have to deal with it at all. So a problem that could in fact have been existential, had we not developed ways of detecting counterfeit currency, has been addressed through technical means, by having things like watermarks and physical tags that we can use to distinguish the authentic from the counterfeit.
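A minimal sketch of the cryptographic-signature idea, using the third-party Python 'cryptography' package: a publisher signs the media bytes, and anyone holding the public key can detect any subsequent alteration. The keys, placeholder content and workflow are illustrative only; real provenance schemes (such as C2PA-style content credentials) add metadata, certificate chains and hashing of large files.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The capture device or publisher holds a private key...
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw image, audio or video bytes..."  # placeholder content
signature = private_key.sign(media_bytes)  # distributed alongside the media

# ...and anyone with the public key can check the media wasn't altered.
def is_authentic(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))            # True
print(is_authentic(media_bytes + b"edit", signature))  # False: any change breaks the signature
```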

Miah Hammond-Errey: [00:15:32] I guess following on from that, can we talk a little bit more about truth and democracy? Can you talk us through the RAND project called Truth Decay?

Jason Matheny: [00:15:40] The term truth decay is one that we created some years ago, with Michael Rich and Jennifer Kavanagh, who were studying this problem of the erosion of norms in policy debates. So the weakening of references to evidence, changes in journalistic standards, the use of social media platforms to amplify content that might not be authentic, and changes in civics education. All of those things in combination appear to have really shifted the way in which evidence is used in policy debates. We were thinking, what does the future of policy look like if we don't have a strong foundation of facts and evidence? So we started studying this problem and then thinking about possible policy approaches to deal with this phenomenon of truth decay.

Miah Hammond-Errey: [00:16:38] So then how do we use technology to strengthen the institutions of democracy?

Jason Matheny: [00:16:43] I think one thing that I'm really interested in is figuring out ways that we can use AI as a sort of defender of our media environment. So one is to have AI systems that are pointing out factual errors in content. Have AI systems that are presenting counterfactuals for us that exercise our critical thinking. It's an incredible tool for summarising content that otherwise might be inaccessible. I use large language models today to summarise things when I'm just not going to be able to read a full report and I need it condensed. Or: tell me the intersection between this report and another topic that I'm working on. So it can be an incredible educational tool. And then lastly, there's some work done by my colleague Teddy Collins at RAND, who's been thinking about AI as a tool for democratic participation. We can use AI as a way not only of becoming more informed about policy questions, but also of finding ways to engage with them, to access different debates that we otherwise might be distant from.

Miah Hammond-Errey: [00:18:10] I'm going to pivot back to AI standards for a second. In a recent Senate testimony, you talked about the need for AI standards that national security agencies can leverage. Can you set out this argument in a bit more detail?

Jason Matheny: [00:18:22] Yeah. So there are these obscure technology standards bodies. There's one, for example, called ISO/IEC SC 42, which is the international standards committee for AI. And these technology standards are ways of saying, hey, we think that a particular technology needs to have the following characteristics in order to be useful in a harmonised way across industry globally. The reason for having standards is so that different systems can talk to each other no matter what country you're in. Having Wi-Fi standards makes a lot of sense, so that if I'm travelling I can connect to Wi-Fi in Australia just the same way that I connect to Wi-Fi in the United States. And it also makes sense to have standards for public safety be the same: road construction, bridge construction, aviation. So these technology standards can have a big impact on the shape of a technology. You want to bake in the principles that you think are going to make the technology safer, more secure and more future proof. There are certain kinds of standards that might also be really good for democratic governance. As one example, ensuring that there's privacy protection within AI. You can have systems like federated learning, which is an approach to AI that Andrew Trask and others have been developing, which allows privacy protection even as you're training AI models. Perhaps requirements that models be compatible with certain levels of encryption, again for privacy protection. So baking in privacy and security and safety from the start, when these standards are developed, is going to make it easier to ensure that these technologies are broadly valuable and safe for society, so that you don't have to retrofit safety after the fact, when the technology is already broadly distributed.
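As a toy sketch of the federated learning idea mentioned above: each client takes gradient steps on its own private data and shares only model weights, which a server averages (the federated averaging pattern). The linear model and synthetic data here are illustrative only; production systems add secure aggregation and other protections.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=10):
    """One client's local gradient steps on its own private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three clients, each holding private data from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated averaging: clients never share X or y, only trained weights.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)

print(global_w)  # close to [2, -1], learned without pooling the raw data
```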

Miah Hammond-Errey: [00:20:36] And do you think at the moment these organisations are able to manage that, or do you think there's scope to leverage some of these conversations through, say, our bilateral relationship through AUSMIN, or multilateral alliances like AUKUS or the Quad?

Jason Matheny: [00:20:49] Yeah, these technology standards organisations are not well funded, and the participants who come from the standards bodies in individual countries tend not to be well funded themselves. For example, in the United States there's the National Institute of Standards and Technology, NIST, one of my favourite federal agencies in the US, but not one of the better-funded ones. And when it comes to participating in these standards meetings, sometimes they don't even have the travel budget to attend, or they can send only one person instead of the three people who are the subject matter experts. So I think this is an area where we need much greater investment.

Miah Hammond-Errey: [00:21:35] Let's shift to the segment on alliances. Technologies impact all nations, and effective governments need to collaborate with industry and academia to solve complex policy problems. What is the role of alliance building in technology policy?

Jason Matheny: [00:21:48] Yeah, because there's no single country that has a lock on technology. First, all of these technology supply chains are globalised. Now, it might not be more than four or five countries at a time that control a significant market share of these technologies, but it's virtually impossible for one country to say, 'oh yeah, we've got this technology covered.' So you have to develop alliances to think about how you collectively arrive at supply chains that are going to be able to deliver the technologies that society needs. So alliances, I think, are going to be really important for a variety of technology goals that we have. Among the things that I think are going to be really important are developing technology standards. Second is collectively coming up with approaches to supply chain security, and that means not just reshoring or onshoring, but friendshoring. Third is coming up with approaches to technology governance that are compatible across national borders. And then a fourth reason to have these kinds of alliances is for tech and horizon scanning. It makes a lot of sense to combine our knowledge about what we see on the horizon, the things that we're anxious about and the things that we're hopeful about. And there's been a huge amount of information sharing between the Australian and US governments historically around technology and its security impacts, for example.

Miah Hammond-Errey: [00:23:21] You've almost led into the next question. How do you see intelligence agencies and intelligence communities contributing to alliances?

Jason Matheny: [00:23:31] Yeah, I think the Five Eyes alliance has been among the most important and most successful alliances in our respective histories. So it's one that I was just profoundly impressed by when I was in the US intelligence community. We worked so closely with our Australian colleagues and with our UK and New Zealand and Canada colleagues, and the alliance that's created by the Five Eyes construct is one in which information sharing is possible, technology sharing is possible, a shared appreciation of opportunities and risks, really candid discussions, a kind of collegiality that comes from that engagement. And I think there are questions about whether we need something like 'Five Eyes Plus' for certain kinds of intelligence topics, which I think makes sense. We have other intelligence sharing agreements, none probably as robust as the Five Eyes. But I think that arrangement is a role model for other things that we can do with alliances.

Miah Hammond-Errey: [00:24:39] In the last 20 years, technology has permeated almost every aspect of our lives. You have been at the centre of these trends in government and the NSC. Can you describe for our listeners what it means for intelligence?

Jason Matheny: [00:24:51] I think that AI will be incredibly valuable. It has already been really valuable to intelligence analysis; some of the earliest returns in the government's use of AI, we've seen in the intelligence community, and I think that's likely to increase. I think we need to be especially thoughtful about how AI is applied to cyber operations and cybersecurity, because it's not clear that AI offers an asymmetric advantage to either defence or offence. So I think its net effects there will be really complicated. There are other areas where I think there's just tremendous potential for technology being used in intelligence, and with some of these you just think, well, is that really high tech? A lot of it still comes down to humans making judgements about really hard problems that are uncertain, working under time pressure. You just want to make sure that the analytic tools human analysts have access to are ones that allow them to share information really efficiently. So finding ways of breaking down the silos and barriers that separate analysts of different disciplines, and that allow them to spot where analysts disagree. Any tools that allow pre-mortem analysis, that allow crux mapping, like prediction markets for internal use by intel analysts, so that they can see where they disagree about judgements and what evidence accounts for their disagreements. I think those are really valuable. They're not high tech, but they are applications of technology that can make analysts sort of super-powered.

Miah Hammond-Errey: [00:26:30] You've spent time as the director of IARPA. Can you tell us a bit about the organisation and a bit about your time there?

Jason Matheny: [00:26:37] Yeah. Oh, it's a great organisation. It's the advanced research arm of the US intelligence community.

Miah Hammond-Errey: [00:26:47] Is that the quiet cousin of DARPA?

Jason Matheny: [00:26:49] Yeah. I think one article said it's basically DARPA for spies. But it's more than that, because it's coming up with new approaches to intelligence analysis, leveraging insights that we have from the research on human judgement and decision making, figuring out how to apply AI to a range of really hard analytic problems, figuring out how to solve hard technical problems in collection and in operations. And one thing that's unusual about IARPA is that there isn't any in-house research. It's program managers who are subject matter experts who find researchers in academia and industry and national labs and then compete out the hard problem. So they run research tournaments in which interdisciplinary teams of researchers are all competing to achieve a very hard set of technical goals. And it doesn't institutionalise any of these programs. The programs run for typically three to five years and then they stop: either they've achieved their goals or they haven't. Things that aren't achieving their goals are ended early. And that kind of discipline, of not continuing to fund things that aren't working, is really hard. It's unusual, but it has a big impact. As does having program managers and other staff with term limits, usually not to exceed five years. So I didn't spend more than five years in any of the positions that I had there. And that kind of discipline means you feel the clock ticking the day you start work, and you want to work as quickly as you can to solve hard problems, because you know that you're not going to be there forever. So you need to make a difference quickly.

Miah Hammond-Errey: [00:28:42] We spoke to Sue Gordon about the challenges of integrating technology into intelligence, and we were specifically talking about In-Q-Tel. How was IARPA working to bring things into the intelligence community, and what can other intelligence agencies and communities, particularly those in Australia, learn?

Jason Matheny: [00:28:58] Yeah, In-Q-Tel is this great model. And thanks to the work that Sue did to stand it up, the intelligence community in the US, but also among our Five Eyes partners, now has ways of bringing in commercial technologies that exist but have not been integrated into the IC. And IARPA is sort of an analogue for technologies that don't yet exist that we need to integrate into the IC, so even earlier-stage research and technology. I think the integration challenge is a hard one. There are often valleys of death between the research stage and hardening a piece of technology so that it can actually be fielded. One of the things that IARPA does is to have transition partners, the folks who will ultimately be in the agencies needing to deploy or field the technology, present at the moment of creation, so that they can address the mission needs of the technology once it's deployed. Now, you have to be careful in the way that you ask the question about what technology you actually need. There's this line about Henry Ford saying, if I asked my customers what they want, they'd ask for a faster horse. So you really need to frame the question as 'what is the mission need?', as opposed to asking the transition partner to prescribe the way of addressing the mission need, which is really the problem of the program manager and the research teams. But it's a phenomenal model in that it takes on some of these incredibly hard technical challenges in intelligence. And when these programs succeed, you see them go into the field and just completely transform an intelligence operation.

Miah Hammond-Errey: [00:30:44] Can you give us some examples of how technologies developed through IARPA have helped with contemporary intelligence or national security problems?

Jason Matheny: [00:30:52] Yeah, there's a handful I can talk about and, of course, more that I can't, and those are just as interesting. But among the ones we can talk about, there's an amazing program called SILMARILS at IARPA that developed incredibly sensitive ways of detecting chemicals, including chemical explosive residues, on surfaces from great distances. And the ability to do that is just transformative for our ability to do counterproliferation work, because instead of doing a close-access operation, where you're having to go into a facility, you can take a scan of a doorknob from a safe distance that then tells you whether there's something to worry about in that facility. It's amazing.

Jason Matheny: [00:31:52] There's another project that's fun to talk about because it's something we can all imagine doing, which is forecasting world events. There was a program a number of years ago at IARPA in which tens of thousands of people from around the world made several million forecasts of geopolitical events, and IARPA kept score and figured out which of the forecasts were accurate, because they were about real-world events and we could just wait for history to happen. Forecasting the outcomes of foreign elections, or whether there would be a skirmish in Kashmir, or whether North Korea would test a nuclear device by a certain time. We scored these probability forecasts and then figured out which people, and which groups of people, were most accurate. And there were several things from that that were really interesting. One is that it's really hard to beat the unweighted average of the judgements of a large group of people, what's sometimes called the wisdom of the crowd. So if you're like me, having to make decisions about things that you're not an expert in, I find a really good approach is to poll a bunch of people and take the average.

Miah Hammond-Errey: [00:33:06] And now you've got evidence to prove that's the best advice.

Jason Matheny: [00:33:08] Yeah, that's right. So that's really interesting: we ran what was really the world's largest geopolitical forecasting tournament and came out with support for this 'wisdom of the crowd' approach. And we demonstrated the same results inside the intelligence community, where several thousand intel analysts participated as well.
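As a toy illustration of how such a tournament keeps score, here is a minimal sketch using the Brier score, the mean squared error between a probability forecast and the 0-or-1 outcome (lower is better). The forecasts and outcomes are made up; the 'crowd' is just the unweighted average Jason describes.

```python
import numpy as np

# Rows: forecasters; columns: events. Entries are P(event happens).
forecasts = np.array([
    [0.9, 0.5, 0.4, 0.1],
    [0.5, 0.1, 0.9, 0.5],
    [0.7, 0.4, 0.6, 0.2],
])
outcomes = np.array([1, 0, 1, 0])  # what history actually delivered

def brier(p, y):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return np.mean((p - y) ** 2)

for i, row in enumerate(forecasts):
    print(f"forecaster {i}: Brier = {brier(row, outcomes):.3f}")

crowd = forecasts.mean(axis=0)  # unweighted average of everyone's judgement
print(f"crowd average: Brier = {brier(crowd, outcomes):.3f}")
# With these made-up numbers the crowd (0.102) beats every individual
# forecaster (0.113, 0.130, 0.158): individual errors partly cancel out.
```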

Miah Hammond-Errey: [00:33:32] I wanted to go to geopolitical events for one minute. The war in Ukraine has brought to the fore, for the public, the involvement of private actors like Starlink in national security matters and decisions. What are the implications from the war in Ukraine for global tech companies?

Jason Matheny: [00:33:47] Yeah, in the same way that the Spanish Civil War was a sort of training ground for the new technologies and tactics of the World War, I think we're seeing Ukraine potentially be a training ground for new technologies and new tactics, for what the future of warfare looks like in the next couple of decades. This integration of commercial platforms being leveraged as part of combat is one of them. I think we're also seeing tactics that involve what's sometimes called large numbers of small things: attritable munitions, UAVs that are very small, human portable, with a level of attrition that you wouldn't mind because they're so low cost. And this staging area, where we see these systems compete with one another on the battlefield, is one that all countries will be watching closely in order to plan for the next war.

Miah Hammond-Errey: [00:35:02] And I'm going to pivot to another segment: Emerging Technology for Emerging Leaders. You have held national leadership roles during big tech developments. Can you give insight into how you have led others to navigate major tech changes in your career?

Jason Matheny: [00:35:16] You know, I think one is just hiring great people. Second is trying to give top cover to people to work on the most important problems. There are certain kinds of problems that, for one reason or another, are just being ignored even though they're incredibly consequential. And those are really good problems to work on, because chances are your effort can make a huge difference at the margins.

Miah Hammond-Errey: [00:35:42] Can you share some emerging technologies you think up-and-coming leaders of today and tomorrow need to know about?

Jason Matheny: [00:35:47] Well, the obvious ones are things like AI and synthetic biology. And I think that even after we appreciate how important these technologies are, they are probably still more important than we imagine, just because they're so foundational. AI really could be an amplifier for technological change in so many different domains, because AI can itself be used as a tool for scientific discovery. Bio is sort of a tool for manufacturing. Insofar as AI is a tool for converting bits of information, bio is a really good tool for converting atoms, moving atoms around. So between AI as a system for processing information and biology as a system for processing matter, we have two incredibly powerful platform technologies. And then I think there are other technologies that are more niche but really important and potentially very disruptive. Things like quantum computing: if that becomes cost effective at scale, then we'll at least need to be rethinking encryption. And fortunately organisations like NIST and others have been thinking about what post-quantum, quantum-resistant cryptography looks like. And then there are other technologies that I'm really excited about because I think they could be incredibly helpful. For example, figuring out ways to dramatically improve education through digital tutors and personalised tutoring could, I think, be really transformative.

Miah Hammond-Errey: [00:37:30] So coming up in this segment, Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to our audience?

Jason Matheny: [00:37:37] I've been rereading a great book by Toby Ord called The Precipice. Toby is an Australian-born philosopher at Oxford, and The Precipice is a book about existential risk: the things that we face as a civilisation that could really undo us. Among the things that we really need to get right are biosecurity, the security guardrails that we build up around synthetic biology, and the guardrails that we build up around AI. There's also a really good discussion of close calls in nuclear weapons and nuclear policy. It's just such a thoughtful book, and so well written.

Miah Hammond-Errey: [00:38:20] Thank you. Thanks so much for your time.

Jason Matheny: [00:38:23] Thanks so much, Miah.

Miah Hammond-Errey: [00:38:24] Thanks for listening to Technology and Security. I've been your host, Dr Miah Hammond-Errey. I'm the inaugural director of the Emerging Tech Program at the United States Studies Centre, based at the University of Sydney. If there was a moment you enjoyed today or a question you have about the show, feel free to tweet me @miah_he or send an email to the address in the show notes. You can find out more about the work we do on our website, also linked in the show notes. We hope you enjoyed this episode and we'll see you soon.