Dr Miah Hammond-Errey is joined by Dr Kobi Leins, Honorary Senior Fellow at King’s College, London and international law expert, to discuss her work on nanomaterials and their implications for existing international law governing chemical and biological weapons. They also discuss why international standards are so important, AI’s potential for evil and the need for improved understandings of data ethics – from the classroom to the boardroom – as well as why we should be wary about claims of de-identified or anonymised data.

Kobi is an Honorary Senior Fellow of King’s College, London; an Advisory Board Member of the Carnegie AI and Equality Initiative; a technical expert for Standards Australia advising the International Standards Organisation on forthcoming AI Standards; and co-founder of the IEEE's Responsible Innovation of AI and the Life Sciences. She is also a former Non-Resident Fellow of the United Nations Institute for Disarmament Research, worked at NAB in Data Ethics and in 2022 published her book, New War Technologies and International Law: The Legal Limits to Weaponising Nanomaterials.

Technology and Security is hosted by Dr Miah Hammond-Errey, the inaugural director of the Emerging Technology program at the United States Studies Centre, based at the University of Sydney.

Resources mentioned in the recording:


Making great content requires fabulous teams. Thanks to the great talents of the following.

Research support and assistance: Tom Barrett

Production: Elliott Brennan

Podcast Design: Susan Beale

Music: Dr Paul Mac

This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging — here and wherever you’re listening. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.

Episode transcript

Please check against delivery

Dr Miah Hammond-Errey: [00:00:02] Welcome to Technology and Security. TS is a podcast exploring the intersections of emerging technologies and national security. I'm your host, Dr. Miah Hammond-Errey. I'm the inaugural director of the Emerging Technology Program at the United States Studies Centre, and we're based at the University of Sydney.

Dr Miah Hammond-Errey: [00:00:20] My guest today is Dr. Kobi Leins. Thanks for joining me.

Dr Kobi Leins: [00:00:24] Thanks so much for having me.

Dr Miah Hammond-Errey: [00:00:25] Kobi is an Honorary Senior Fellow of King's College London, an advisory board member of the Carnegie AI and Equality Initiative, a technical expert for Standards Australia advising the International Standards Organisation (ISO) on forthcoming AI standards, and co-founder of the IEEE Responsible Innovation of AI and the Life Sciences. She is also a former Non-Resident Fellow of the United Nations Institute for Disarmament Research, worked at NAB in Data Ethics, and in 2022 published her book 'New War Technologies and International Law: The Legal Limits to Weaponizing Nanomaterials'. I'm so happy to have you on the podcast, Kobi.

Dr Miah Hammond-Errey: [00:01:03] We're coming to you today from the lands of the Gadigal people. We pay our respects to their elders past, present and emerging, both here and wherever you're listening. We acknowledge their continuing connection to land, sea and community and extend that respect to all Aboriginal and Torres Strait Islander people.

Dr Miah Hammond-Errey: [00:01:19] So, Kobi, last year you released your book on the challenges of nanomaterials being weaponized for conflict. So I thought we might start there. Nanotechnologies and purposefully engineered nanomaterials, rather than those naturally occurring, are often billed as wildly transformative. We hear about their possible impact on everything from AI to biotechnology, from additive manufacturing to military sensing. Can you explain what nanomaterials and nanotechnologies refer to and why they matter?

Dr Kobi Leins: [00:01:48] Oh, defining things. I love starting with definitions. I started researching in this area because I had worked in disarmament. I'd worked in biological and chemical weapons control, and I was attending some conferences looking at international humanitarian law in particular and weapons control. And there would always be the sort of the biological, the chemical, the nuclear. And then quantum and nano just kind of shoved in at the end all as one weird group that no one really understood but needed to be terrified of. And if there's one thing that gets me interested or annoyed, it's when something isn't defined properly and then spoken about properly. So my challenge was to basically set up a project where I was looking at actual ways in which nanomaterials were being used. So not just a hypothetical, 'we think in the future' because those books already existed, you know, the nano swarms and the brains being taken over and the vaccines, a lot of us came across nanomaterials through the vaccination program.

Dr Kobi Leins: [00:02:41] And the way that I did that was to look at applications of nanomaterials that were available at the time or in development at the time. So I looked at genetic modification, which is at the nanoscale. A lot of people don't think about that. I looked at thermobaric weapons, which I was really hoping would never be used, but we've seen them used again in the conflict in Ukraine. And then the other thing I was really looking at was how nanomaterials are used in optogenetics, which was a fairly cutting-edge technology at the time, a way that you can manipulate brain behaviour. So even beyond whether you use nanotech or other options, now neurotech has become a real thing, that way that you can control the way that people think, have memories or see the world was really interesting to me. What happens when you get into the psychological, when you can change the way that people's brains work? So all three of those technologies were already being used, and I was really interested in exploring what the implications of those would be, because every time I look at a new science, I go, this is great, it's got lots of opportunities. But every science has opportunities and risks.

Dr Miah Hammond-Errey: [00:03:44] Kind of stepping right back to the basics: in your book, you offer the illustration that a nano-sized object is to an apple what an apple is to the size of the Earth. And I think it's a really great visualisation. Your book obviously focuses on these technologies in war, but more broadly, what are some of the national security threats and applications of nanotechnologies and nanomaterials? And maybe if you could set out the difference first?

Dr Kobi Leins: [00:04:07] Really good questions. So in my actual live presentations, I have this wonderful image of zooming out from a normal-sized object, and it just goes through all of these different-sized objects until you get to nano. It's very hard for us to conceptualise the scale at which these things operate. When I say nanomaterials or nanotechnology: nanomaterials are things that occur at the nanoscale, and nanotechnologies are technologies that utilise the ability to function at the nanoscale. Why do we care? Well, we care because the surface-area-to-mass ratio changes. These are things that can go through the bloodstream. They cross the blood-brain barrier when it comes to human interaction. And the most common and probably the most recent discussions now are around vaping, you know, you've got materials at a smaller scale. That's where a lot of our bodily functions happen. That's where a lot of the risks are.
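A back-of-the-envelope aside on why that ratio matters, assuming an idealised spherical particle of uniform density (so surface-area-to-mass tracks surface-area-to-volume):

\[
\frac{A}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r},
\qquad
\frac{(A/V)_{r=10\,\mathrm{nm}}}{(A/V)_{r=1\,\mathrm{mm}}} = \frac{10^{-3}\,\mathrm{m}}{10^{-8}\,\mathrm{m}} = 10^{5}.
\]

Shrinking a particle from a millimetre to ten nanometres puts a hundred thousand times more of its matter at a reactive surface, which is part of why behaviour and risk change so sharply at that scale.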

Dr Kobi Leins: [00:04:56] The national security issues are really interesting. I think what's really interesting is who has control of these technologies, and what are the guardrails for these technologies? They're very similar questions to what was asked in chemistry, to what was being asked in biology, and to what we're asking about AI now, which is the link between the two: all of these technologies have opportunities. But if you can manipulate memory, if you can manipulate the way that people view the world, if you can change the human genome for generations to come, who should have access to that technology and why? And I think some of these conversations, as I've written about with Helen Durham in one piece, recently with the Lieber Institute, you know, there's enormous regulation about procreation. And her view is that this is a very feminist issue. And yet we don't really talk about how we kill people with these technologies, right? So there's a bit of a disparity there in terms of how we control women's bodies versus how we kill people. And, you know, they're opposite ends of the life cycle. We probably should be having the same kinds of ethical conversations about how people are being killed or manipulated or controlled and what we want to tolerate as societies.

Dr Miah Hammond-Errey: [00:05:56] Yeah, absolutely. Emerging technologies, as you've just mentioned, there are changing state-based chemical and biological weapons programs. I've written previously about how technology changes the security threat assessment of chemical and biological weapons programs. We'll throw a link in the show notes. I'm really keen to get your thoughts on how nanotechnology might impact the production and transportation and thus detection of state-based chemical and biological weapons.

Dr Kobi Leins: [00:06:23] I need to read your piece. Absolutely. And this is part of the argument that I make in my book: if it's a chemical, and this is where it gets tricky, because matter at that scale is neither chemical nor human matter, like when you think about our human bodily processes, the UN structures we have or the definitions we have around these substances don't really work anymore. So I wanted to say, you know, loudly and clearly, that nanomaterials still fall within those categories. Even though they seem to fall outside, they still fall within, and keeping those lists up to date matters. For those who are not aware, the Chemical Weapons Convention has an annexe which is updated regularly by state parties as new chemicals come to light. As you pointed out, you have, you know, transportation, creation [and] sale prohibitions. How does that work with nanomaterials when you can't see them? Very challenging. That's probably one of the biggest challenges with nanomaterials: if there were nefarious uses, it's very difficult to track and trace how this matter is being transferred or used. On the other hand, it's also pretty hard to make, right? It's not quite like garage biology, where people can make this stuff in their backyard. If you're making proper nanomaterial-sized materials, you really need some equipment and clean rooms to be able to do that properly. That said, you can aerosolize some stuff. There are risks. There are things that you could be doing. And this is again where I think a lot of the gaps are. A lot of the experts fall into buckets of 'I'm a chemical weapons expert' or 'a biological weapons expert' or whatever it is, when the convergence of all of these technologies is really, really important. So I think for targeted individual personal attacks, I mean, we've sort of seen some of the poisonings, which are probably the best example that people can relate to. It's pretty easy to find an individual and target them. It's very, very hard to track, trace and protect.

Dr Miah Hammond-Errey: [00:08:02] In a nutshell, what complicates the legal regulation and review of nanotechnologies in this security context?

Dr Kobi Leins: [00:08:09] I could go on about this for so long. Article 36 Weapons Reviews.

Dr Miah Hammond-Errey: [00:08:13] Hence the nutshell.

Dr Kobi Leins: [00:08:16] Article 36 Weapons Reviews are no longer fit for purpose. There need to be interdisciplinary teams reviewing these materials, not just the lawyers. And having approached some of the Article 36 review experts, there's a lack of understanding about what they don't know. So the old days, for those who are not as familiar with this area: you take a gun, you take a bullet, and this happens still in suites in Switzerland, and you shoot the bullet into the fat that you use, and then you sort of track where the fat holds the bullet, what kind of harm you do to someone. You have all of this research around how weapons are used. Those days are gone. The technologies we're using now are not being researched in the same way. The other challenge is really that a lot of the scientists who do this work are also deeply embedded in Defence. The separation and independence in the ability to review is incredibly challenging, and the question of when to review is also really, really hard. So do you review when Defence gives a university funding for a grant and they start doing the work, or do you only review it just before it comes out, at which point Defence has already invested an enormous amount of money? Which, you know, investing enormous amounts of money for not much in Defence doesn't seem to be such a big thing, but longer term it's probably not ideal. So lots and lots of questions policymakers also need to think about from a societal point of view. Again, you know, do you roll the sunscreens out and then go, oh, hold on, these might cause cancer, and then roll them back? Or do you actually do your research beforehand and make sure you prevent that kind of harm?

Dr Miah Hammond-Errey: [00:09:31] I mean, so many of the emerging technologies we both research have many implications across lots of different sectors and it makes it exciting, but also quite difficult to engage because as you say, there's the policymakers, there's the scientists, you know, there's the international legal structures and our own domestic legal structures. Is international humanitarian law currently equipped to tackle the challenge? And if not, what are some of the suggestions that you have put forward in your book to change that?

Dr Kobi Leins: [00:09:59] Most of my research I refer to as incredibly mundane. It's mapping what exists first, before you run to 'we need new law, we need new law'. Because saying we need new law is often a way to deflect from existing law. And we're seeing the same thing in AI, right? We need all these new laws. Well, actually, there's a ton of law that exists, and a bunch of us have been saying that for a while, but it's very boring. It's not as sexy as going 'we have no law' or 'we need all new laws'. They're all way more interesting than going, well, the law kind of applies, but lawyers need to be a little bit more creative and look at how these treaties were envisaged. And, you know, my very favourite thing ever, as a complete nerd, is the Martens Clause. You know, you shouldn't be doing things that are against the dictates of public conscience. Is it disgusting? Do most people go 'ew'? Then you probably shouldn't be doing it. And that goes whether you're looking at the way you use data or the way that you're using these technologies. You shouldn't be going down tiny rabbit holes of going, oh, you know, nanomaterials are not really biological anymore, so they don't fall under the BWC. That's just not how it works.

Dr Kobi Leins: [00:10:50] So that said, I think the review point that you've raised is a major gap. How you document, track and trace these kinds of materials being used, by whom, when, where and how, is really important. And I think the communities around the CWC, the BWC and others, and Pugwash as well. For those who are not familiar with Pugwash, it was set up after Einstein died, for scientists to kind of talk about that; his last act before he died was signing it into existence out of concern for the nefarious uses of science. So scientists need to also be more engaged. The ICRC, the International Committee of the Red Cross, has done a lot of work on this, upskilling technologists and people who are in these kinds of environments to also talk about risks, because a lot of the time, for those who've seen Oppenheimer, it's very much about what can you do rather than what should you do.

Dr Miah Hammond-Errey: [00:11:34] Thank you. We've seen various efforts to regulate AI globally and lots of discussion about managing social harms. You've done a lot of work in this space, especially around international standards, which we'll get to in a sec. But firstly I want to ask what are the interplays for you between AI and equality?

Dr Kobi Leins: [00:11:51] I've spent a lot of time thinking about how these systems are tools. So whoever has these tools has an advantage. And particularly with AI, the speed and scale at which it operates is really important for those who have access to it. Because you're telling a story, from the data set you build it with to how you use it. And every step of the way you're telling a story. These systems are difficult to interrogate, which is why all the conversations around transparency and explainability. But you don't even need an AI for that. Even, you know, a simple spreadsheet, as we saw with Robodebt here in Australia, can cause an immense amount of harm. So having the right voices in the room to build, critique, access is really important. But the same goes for standards and all the forms of regulation. So if you look around the room, who's in the room making the rules is just as important as who's in the room building the systems. Both are embedded with values and governance and power. I don't think we talk enough about power in this space.

Dr Miah Hammond-Errey: [00:12:39] You recently co-authored a piece with colleagues from Carnegie on the potential for AI to automate the banality and radicality of evil. Could you set out for our listeners what this means as AI systems proliferate?

Dr Kobi Leins: [00:12:51] Yeah, it was a pretty contested and complicated piece, which came out of reading some Hannah Arendt, which I do do from time to time. Hannah Arendt is one of my favourite female philosophers, for listeners who have not heard of her, and she fled the Holocaust during World War Two and was seeking asylum in Paris for some years. She came from a very particular angle in thinking about these things. And as a German speaker, having studied in Germany, I was really aware of the banality of evil, right? A lot of us know about the Eichmann trial. She sat in on that trial and she just said, look, all it takes for really bad things to happen is for good people to be silent. And, you know, I've had that poem on the back of my door all my life. So if you don't speak out for what's right, that's kind of what you walk past and that's what'll happen. What I hadn't really come across in her work before was the radicality of evil. And this is a concept that, at least to me, was relatively new. This idea of the radicality of evil is quite different. She talks about the fact that when people are reduced to numbers or to binary digits or made redundant, that's radical evil. So it's different to the banality of evil. It's the act of making people less useful.

Dr Miah Hammond-Errey: [00:13:55] A quote from that piece that stood out for me was "in the rush to roll out generative AI models and technologies without sufficient guardrails or regulations, individuals are no longer seen as human beings, but as data points feeding a broader machine of efficiency to reduce cost and any need for human contributions." In my own work on Big Data, I too talk about the transformation of social action into online quantified data and the ubiquity of visible and invisible systems analysing and using that data. I would love to hear your thoughts on where we can best direct our efforts to create solutions and improve the diversity of people in the room.

Dr Kobi Leins: [00:14:30] I don't think enough people understand how much data is collected about them, how it's connected and curated, and then how it's used against them. And having worked in a corporate environment, reviewing these systems and how this data is used, I just do not think the Privacy Act is fit for purpose. I think we all agree; it's up for review here in Australia right now. Unlike the GDPR, it has a very narrow definition of personal information, and most people still don't understand that with two or three data sources you can re-identify anyone. I think we need to treat data with the value it has. Governments and businesses see the risks because of breaches, but what they don't see is the value in the opportunities. And I think that's something that companies really need to think about a lot more.

Dr Miah Hammond-Errey: [00:15:08] How can we approach AI regulation to help shape some of the future regulatory challenges we might see with new and emerging technologies?

Dr Kobi Leins: [00:15:17] There are so many proposals for ways that we can regulate. I think it's really important to remember that any kind of control of any system, or managing risk, or protecting opportunities, is always using a toolbox. So, as a reformed lawyer: law is expensive, slow and tedious, which is why I don't practice it anymore. It's one of the tools, one of the tools at the end of the road. So think in terms of the toolbox. The trend or tendency is to go for the traditional things that we do, but there are obviously things that we could be doing differently. I think one of the biggest elements for me is the ability for people who see any harm or any laws being broken to call it out. So there are a couple of places now that are sort of registering and logging AI harms or system harms, which, again, I don't think we should just keep to AI; it should be systems more broadly. I think people overestimate the power of these systems. And we've seen just recently, with the cancellation of all the flights in London, and Toyota having to shut down for a few days, everyone thought it was a horrendous hack. They just didn't have big enough backup compute. You know, sometimes these systems are just terribly brittle for really simple reasons, so we don't have to go to the worst.

Dr Kobi Leins: [00:16:21] But we also need top-down change, and this is where the AI management standard is really interesting for me, because boards know that there are risks. They know that there are problems, but they have a lot of consultants doing a hard sell and a lot of execs who also want to be saying that they're using AI or other technologies. So having a language and an understanding of what those risks are, so that from a board level you can say, this is our risk appetite, this is what we'll tolerate, is really important. But most importantly, it all comes back to data, how you're using data, which you've already flagged. What is your data provenance? What is your data set? You know, who's not in your data set? Whose story are you not telling? I think it's a really interesting time. As you did the acknowledgement of country: you know, we sit on Indigenous land all the time, and when we have these conversations we talk about it, but most of the time we're kind of blind to it. There are a lot of things like that in tech, where we just don't see things because we don't talk about them. And we need to be having those hard conversations. But ultimately, to your question of how do we get more diversity: you need more money. You need money for the women, particularly the women who do this work. And increasingly, you know, a lot of the people building the systems are male and white. A lot of the people contesting the systems or raising issues about the systems, Mia Shah-Dand, Timnit Gebru, Joy Buolamwini, there are so many, they're all talking about the concerns, and they're all women of colour, because their communities are the ones that are most affected. And yet a lot of these people are also looking for work, right? There are lots of people around who have these views. There needs to be better funding, independent funding, for people who are critiquing and raising issues around these systems.

Dr Miah Hammond-Errey: [00:17:42] Yeah, they're huge questions. And that point that you raise in terms of power is one I talk about in my forthcoming book and is just, I think, radically profound for society. You know, kind of on a personal level, what do you see as some of the key transferable skills between law, technology and security.

Dr Kobi Leins: [00:17:58] Being curious. So one of the things I do in a lot of my work now in the corporate world, and I did it in teaching, I also did it as a lawyer with clients, is set a tone where I might have expertise in a particular area, but I'm never going to be the expert for everything. And particularly when you're talking about really senior execs or board members or, you know, government leaders, to understand that there's no single person who can possibly have all of the answers to these questions. So yeah, I always quote Ted Lasso: 'you can't be arrogant and curious at the same time'. You actually need to be asking a lot of questions. So thinking about systems, again, as these systems connect, I think the next generation are going to need to really be thinking about those things, but also be able to think about them differently. But getting people to think sideways and across different areas, I think, is actually almost as important as the technical skills. And then make really good friends with the technical people, so when you don't know, you can ask them. You don't have to know everything about everything. It's impossible.

Dr Miah Hammond-Errey: [00:18:54] Great advice. Let's talk about international standards. You've been working with the IEEE and Standards Australia on international standards around AI. We hear lots about standards, but I'm hoping you can give us a bit of behind-the-scenes insight. Can you give us an overview of how standard-setting processes work? You mentioned who's in the room: who do they involve, and how are they progressing?

Dr Kobi Leins: [00:19:17] So I came from a treaty-making background. I was heavily involved in the adoption of the Declaration on the Rights of Indigenous Peoples. And although unfortunately I only had a BlackBerry and so have no photos of it, I sat in the General Assembly when that declaration was adopted. That's where my heart lies: this international, not just law, building, because the law is almost a by-product. It's the consensus and the community and the social license to do certain things that I think is part of the process of standards. So I became involved in standards for the same reason, the global aspect to it. In Australia we select experts, and experts are always welcome to apply to join, basically those who are interested. So it's a fairly simple process for people to join and contribute to standards in Australia. It's a very open process. It's not the same in all countries. Some countries require election or selection by a government, so you don't actually get in unless you're in there representing your actual government. I think in Australia our superpower has been that we've actually just got a lot of really good technical experts in the room who don't have agendas. I should also note these roles are not paid. So you're getting up at 2am because you care. So that attracts a certain kind of person. You're not doing it for the prestige or the kudos. You're doing it because you want to see really effective change.

Dr Kobi Leins: [00:20:30] The most interesting thing about standards, I think, is that they're a soft form of power. So, again, you've got that consensus building, you've got that international aspect, but they're not international treaties, they're not law, they're not required. What they are going to do is change the expectations and the base level of behaviour. An interesting question I got asked recently is whether Australia should adopt the standards as regulation, and I said no, and had to think about why. And my thinking is that regulation is, you know, a line you can't go below; you can't cross those lines. Whereas with standards, you're really wanting to get people to prove best behaviour, not the minimum behaviour. It's a very, very different behavioural tool. So ISO/IEC 42001 is coming down towards the end of this year. That's the management standard, which has a lot of social components. It's about how boards need to establish risk, it's about how they need to ensure that that feeds through their organisation. It's about documentation and training, it's about systems, it's about reviews, it's also about life cycles, not just having one-off systems in place. All of that is really exciting for me and I think will change the game.

Dr Miah Hammond-Errey: [00:21:30] I think that's a really interesting point there, talking about regulation being a minimum standard and standards being that aspirational best practice. One particular standard you've worked on centres around developing an AI system impact assessment. Can you explain what makes that piece of the puzzle so important for you?

Dr Kobi Leins: [00:21:48] Because it was my day job in my last job; I had a very personal vested interest. So a lot of organisations, large organisations, global organisations, will have review processes where you'll start with your security review, and then you'll have your privacy review, and then you'll have this, and then some have human rights impact assessments, etcetera, etcetera. By the time they got to the data ethics assessment, a lot of interlocutors, not everyone, were just like, how do we get over the line? What do we need to do here? Just tick that box. Let's move on.

Dr Kobi Leins: [00:22:14] And so what's really interesting about this standard is that there's an annexe proposing a completely different approach, which is actually going to turn organisations upside down in their reviewing processes, where they triage at the beginning what these systems are. So there are a few issues. One is that data scientists will game whatever you call a system. So if you make it only ML or AI, all of a sudden you'll start getting a whole lot of really basic modelling, or they'll say it's basic modelling, and as soon as you lift the hood, you're like, this is way more complex. So having at the beginning a proper review of what the system is, what it entails, what data it's using, how it's being connected, how it might be repurposed, and then, at the end of the lifecycle, how it's being retired, should all be consolidated across disciplines. Again, a lot of the questions are the same. The cyber questions are very similar to the data ethics questions, and often they feed each other. So each review is much more comprehensive and better if the people in the room are doing it together. So I'm really excited for that, because I think hopefully stakeholders will be more engaged and actually see it as a process that embeds and protects their data and their value, rather than another hurdle to jump through to get their project over the line.
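As an aside for readers, here is a minimal sketch of what "triaging at the beginning" could look like in practice. This is a hypothetical illustration only; the field names are invented and are not the schema of the standard's annexe.

```python
# Hypothetical intake triage record for a system review, illustrating the
# "triage at the beginning" idea: one consolidated record that the cyber,
# privacy and data ethics reviews can all work from across the lifecycle.
from dataclasses import dataclass, field

@dataclass
class SystemTriageRecord:
    name: str
    description: str                  # what the system actually is, hood lifted
    data_sources: list[str]           # provenance: where the data comes from
    uses_personal_information: bool   # flag for the privacy review
    connected_systems: list[str]      # what it feeds and is fed by
    possible_repurposing: list[str]   # foreseeable secondary uses
    retirement_plan: str              # how it is decommissioned at end of life
    reviews_required: list[str] = field(
        default_factory=lambda: ["cyber", "privacy", "data ethics"]
    )

record = SystemTriageRecord(
    name="customer-churn-model",
    description="Gradient-boosted model scoring likelihood of account closure",
    data_sources=["CRM extracts", "call-centre logs"],
    uses_personal_information=True,
    connected_systems=["marketing automation platform"],
    possible_repurposing=["credit decisioning (would trigger re-review)"],
    retirement_plan="Archive model and training-data register after 3 years",
)
print(record.reviews_required)
```

The design point is less the schema than that all review disciplines read and update the same record across the system's lifecycle, rather than each re-asking the same questions in sequence.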

Dr Miah Hammond-Errey: [00:23:14] We're seeing tech decoupling in some areas between the United States and China. Does this hold any kind of risks for you in the space of international standards?

Dr Kobi Leins: [00:23:22] What you standardise and what you don't is fundamental to how the world runs and how you run a business. For those who haven't read 'The Box': before the standardisation of container shipping, you had a whole lot of people on dockyards sort of throwing bags into corners and shovelling things onto ships. With the standardisation of containers came trains, railways, shipping. Globalisation itself relies on these systems. The lifestyle that we have relies on this interoperability. There is a risk that we become isolated, and the geopolitical impacts of this kind of separation are really problematic. I'd much prefer to see international collaboration, but I also understand the interest and the power and why it's being done the way it is. I think people need to be more aware of the hardware, the cables. Tamsin Paige writes about this. All of the hardware and the infrastructure is profoundly going to affect how these systems work going forward.

Dr Miah Hammond-Errey: [00:24:06] I want to talk a little about trust and governance systems. What does trustworthiness mean to you in the context of technology?

Dr Kobi Leins: [00:24:14] The trust element, to me, is sometimes a deflection from talking about the things that are more important, like the standards, like the regulation, like the limits to use. I think we need to be a little bit more serious about how we control these tools. But that said, that's a pretty controversial statement, because a lot of people believe very strongly in trust around AI, and it's a really big piece for them. So I think there is trust from a, you know, business-to-business perspective, and there are elements of trust. But I think a lot of the time people are talking past each other when they use it.

Dr Miah Hammond-Errey: [00:24:40] Yeah, really interesting perspective. I mean, I think for many users, technology can actually be quite scary, though, you know, from data breaches, digital currencies and all the way through to the potential of AI and biotechnology. It's such a fast-moving space that it can feel out of reach. And for the individual users who don't play in the tech and security space, I think it can be quite confronting. So engaging with them in terms of that trust of technology at a really simple personal level, I think, is important. But so is understanding how we simultaneously build, like you say, just the trust in that standard or that baseline, that the technology will do what it says it will do, and also help users to be aware of the risk.

Dr Kobi Leins: [00:25:22] Yeah, if trust is the T&Cs being legible to the user, I'm all for that. You know, you don't need privacy consent that's 15 pages long when you sign on to an app. Use plain English, make these things accessible. None of this is accidental. That kind of trust I'm all for.

Dr Miah Hammond-Errey: [00:25:38] We're going to go to a segment on alliances. Technologies impact all nations, and effective governments need to collaborate with industry and academia to solve complex policy problems. What do you see as the role of alliance building in technology policy?

Dr Kobi Leins: [00:25:51] I don't think you can separate the two, and I don't think we necessarily engage the way that we should. And when I say we, I'm saying the Australian Government. And again, this might be slightly controversial, but I think Australia has a fairly traditional view of diplomacy. Diplomacy is the cables and wires. Diplomacy is the ports. Diplomacy is all of the infrastructure that's needed to operate a country. And certain countries, which people may or may not be aware of, China has been extremely active in the Southeast Asia region, and they own a lot of the ports. Facebook has been incredibly active in providing internet across a lot of the developing world in exchange for access to all of these people's data. There's been an incredibly strategic and fast-moving development by a lot of the tech companies to get into places, because it is a form of power. So again, the tools that you have in a place will affect, you know, the power that you have over that place, but also the relationships. If you're providing services and hardware, then to your point, you're also a trusted party. You build trust by, you know, creating things that are safe and good to use. I think we need to be thinking a bit more strategically about how we do that.

Dr Miah Hammond-Errey: [00:26:59] Yeah, really interesting. I mean, I think one aspect of this conversation which is often lost is the entire infrastructure, you know, that goes behind some of the technologies that we use. You have spent time working with corporates on their data ethics and AI use, and previously published on some of the misconceptions around de-identification and privacy protections. I'd really love to hear a little bit more from you, particularly about anonymization and de-identification. It's something I completely agree with and have also written about, but I feel like it's not talked about enough.

Dr Kobi Leins: [00:27:30] De-identification or anonymization is, for those who are not as familiar with the terms, often used as a catchcry to kind of give you free rein to use data however you want. So as long as you can remove the identifiers, it's fine, you can do anything you want to do. Having had the luxury of sitting in two computer science departments at two universities as a non-data scientist, I had the luxury of working with some incredible minds on these kinds of pieces, thinking about where they would come in with the technical aspect, and I'd say, well, you know, this is probably the social thing.

Dr Kobi Leins: [00:27:56] I think people don't understand, or they don't want to talk about, the fact that you can re-identify people if you have multiple data sets. So you can no longer think about, here's an individual data set that we can separate and use cleverly, because what these systems are doing, a lot of the automation, is actually combining multiple data sets. And as long as you have three data points, you can basically re-identify an individual. That's really problematic when you're talking about PI, personal information, in a really narrow sense. And it really doesn't matter, because the concept of who you are is in these systems. And if it's not, you know, you as Miah Hammond-Errey, it's you as a woman of a certain age with a certain number of offspring, with certain types of friends, who does certain activities. And it's enough to put you in a group, which the German constitution banned, given its history, because they know the risks of that: being able to market to you, to target you, to do all these things. Anna Johnston also writes and talks a lot about this.
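As an aside, a toy sketch of the linkage point (every record, name and column here is invented): a "de-identified" extract re-identifies itself the moment it is joined to an auxiliary source on a few shared quasi-identifiers.

```python
# A toy re-identification by linkage: joining a "de-identified" health
# extract to a public-style auxiliary list on shared quasi-identifiers.
# All records are invented for illustration.
import pandas as pd

# "De-identified" dataset: direct identifiers removed, quasi-identifiers kept.
health = pd.DataFrame([
    {"postcode": "2006", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "3052", "birth_year": 1971, "gender": "M", "diagnosis": "diabetes"},
])

# Auxiliary dataset that still carries names (e.g. a membership list).
aux = pd.DataFrame([
    {"name": "Jane Citizen", "postcode": "2006", "birth_year": 1984, "gender": "F"},
    {"name": "John Smith",   "postcode": "3052", "birth_year": 1971, "gender": "M"},
])

# Three mundane quasi-identifiers are enough to link the rows back to people.
reidentified = health.merge(aux, on=["postcode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

Three mundane attributes do the work; nothing in the "de-identified" file had to contain a name.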

Dr Kobi Leins: [00:28:47] But corporates are actually actively trying to de-identify. And I saw a headline on the way back from the airport on the weekend that they're taking medical data and de-identifying it and using it as synthetic data. I mean, people don't even know what these words mean. It either is human data or it's synthetic data; you can't really mix the two. We need to clarify and define the words that we're using across disciplines. Again, back to standards: part of the thing that I love is that you're creating a common language, so that people can't just say, you know, if I hear 'hashed and salted' one more time, I'll just scream. It's like, what do you mean by 'hashed and salted'? And this is a marketer's term, for those who don't know, for 'it's been de-identified'. We've got an identifier, and there's a key here and a key here. And you're like, but you still know who the person is, right? They're like, yeah, but it's de-identified. You know, words mean things. I'm quite a big believer in words. We need to make sure that we're not talking past each other.
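For readers who haven't met the term, a minimal sketch (invented identifiers, illustrative only) of why a salted hash is pseudonymisation rather than anonymisation: the mapping is deterministic, so records stay linkable, and anyone holding the salt can re-derive tokens from guessed identifiers.

```python
# Why "hashed and salted" identifiers are pseudonymous, not anonymous:
# the same input always maps to the same token, so records remain linkable,
# and anyone holding the salt can re-derive tokens from guessed identifiers.
import hashlib

SALT = b"example-shared-salt"  # in practice, whoever holds this can re-link

def token(identifier: str) -> str:
    """Deterministic salted hash used as a 'de-identified' key."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

records = {token("jane@example.com"): {"purchases": 12}}

# A dictionary attack: hash a list of known emails and look for matches.
for guess in ["john@example.com", "jane@example.com"]:
    if token(guess) in records:
        print(f"{guess} re-identified:", records[token(guess)])
```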

Dr Miah Hammond-Errey: [00:29:32] I think coming from an intelligence background, where you're often using lots of disparate data sets which have a lot of gaps to match with other data sets, you become intimately acquainted with what 'de-identified' data, if you like, can actually do. And it's one of the reasons, during my research, that the intelligence community, the participants in my research, were so profoundly concerned with privacy and protecting the privacy of Australians, given what they understand about the real capabilities of mass data.

Dr Miah Hammond-Errey: [00:30:04] There's obviously many projects going on in Australia about the tech workforce and, you know, increasing the pipeline from primary education. But one of the things the tech workforce conversation often focuses on is PhD-level education rather than, you know, vocational training all the way through the corporate sector, for people who will be engaging with AI systems anywhere in that system or pipeline, I guess. And you've alluded to the fact that there are multiple systems. Where do you see the greatest shortages in the tech workforce, particularly from that data perspective?

Dr Kobi Leins: [00:30:38] Well, there are shortages and there are shortcomings. There are not enough people with the expertise to interrogate the social impacts of data uses from a broader perspective. There are not enough people who can do data ethics or AI governance or whatever you want to call it. They're just not there. There's just not a pipeline of people coming through who are trained in this expertise.

Dr Kobi Leins: [00:30:57] But even for the technical skills: I, again, sat in two universities, and there is no consistent teaching of governance or ethics. Most data scientists don't know what personal information is. They barely know what a regulation looks like. If they have one class or two classes where ethics is taught, inconsistently, they're doing really, really well in their degrees. So the history of that is that the positioning was that data science is just math, so it can't do any harm. And for those who haven't read Cathy O'Neil's 'Weapons of Math Destruction', just have a read. We really actually need a licensing program for data scientists, so that they have a code of ethics, something like medical practitioners' 'do no harm', or an obligation to call out harm, which sort of ties back into your question of bigger, more radical changes. The conversations around licensing data scientists are something we've been having at Carnegie Council as one potential way forward. How you would police that, and who would police it, is a whole other question. But this idea of creating culture and community around these kinds of projects, so that if someone goes off and does something that is not aligned with those values, they can opt out, but they're no longer a practitioner with a license that belongs to that particular body, I think is another way to get around it.

Dr Miah Hammond-Errey: [00:32:01] Yeah, absolutely. So many research studies have shown the challenge between being able to identify issues in a system and being able to interrogate them. People tend to either blindly accept or be outright sceptics, and it's really important to give people frameworks for critical reflection.

Dr Miah Hammond-Errey: [00:32:18] Okay. We're coming up to just the final couple of segments. So the first one is Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to our audience?

Dr Kobi Leins: [00:32:27] So I have had my head deep in a book from the 1960s or 1970s. I actually brought it so I could remember it: Small is Beautiful by E.F. Schumacher. I started going down the rabbit hole of economics, because so much of what we're talking about also sits within a, you know, neoliberal capitalist model. Why are we doing what we're doing? What is pushing? What are the levers? What are the sort of handbrakes? And this book talks about, in the 70s, how environmental resources were going to run out, how we need to think on smaller, local scales, how we shouldn't be going global, and what the risks of that are. And when you read it now, you can't help but go, yeah, we kind of knew what was coming, but it wasn't in the interest of those who were doing it to stop, because there's a lot of money to be made. The other book I'm reading is Old Ways, Old Days, an incredible book by an Australian settler, and she spends a lot of the time describing what she saw in Indigenous culture at the time. One of the things this woman talks about is how the Indigenous people counted stars, that they had a different way of counting. And I'm always really interested in finding things that were common knowledge but we now don't talk about, sort of these blind spots, these areas of knowledge, and that amount of knowledge that sits in Indigenous culture that we don't recognise or acknowledge. And thinking about that from an economic perspective: we can't actually have a sustainable nation unless we start drawing upon and really valuing the knowledge of the Indigenous people, who maintained Australia sustainably for thousands of years.

Dr Miah Hammond-Errey: [00:33:45] What do you do in your downtime to keep sane?

Dr Kobi Leins: [00:33:48] I was in Noosa on Saturday and saw seven whales and three pods of dolphins jumping out of the water. The sea is my happy place, so I sail on the bay in Melbourne on a regular basis with an incredible crew. I read voraciously, but that's probably not as relaxing as one would like. And I'm a regular gym-goer with a bunch of power women who are awesome. So I'm fairly connected in various ways, mostly offline, and hopefully not doing anything to do with work.

Dr Miah Hammond-Errey: [00:34:18] And the final segment is called Need to Know, drawing on our intelligence connections here. Is there anything I didn't ask that would have been great to cover?

Dr Kobi Leins: [00:34:27] You touched on the safety of Australians, and I don't think we talk enough or think enough about how much information exists about us in the ecosystem from a geopolitical perspective. A lot of the work that's done with third parties involves a sharing of enormous amounts of data. The ACCC has just had a call out now on data brokers. I don't know that there's enough awareness around the risks with that, not just from a misinformation and disinformation point of view, but also from an interruption, critical infrastructure point of view. We don't have our own satellites. We don't have our own data sources. In a lot of ways, we're sharing an incredible amount of data about our people. We've had a huge number of breaches, and even if we were the most cyber-secure country tomorrow, that's still out there; you can't pull it back. What are we doing about that? How are we raising awareness?

Dr Miah Hammond-Errey: [00:35:18] Yeah, I think it's really interesting. There is now so much information out there that it doesn't matter if we rein that back, because there are other nation states who are holding that and collecting that, and in the future we'll match that with other data. And that's probably something that most people in the data business understand. You know, data brokers have been around for a long time, but when you start to step that out into other forms of government and an awareness of what that kind of data fusion might mean, I think it is really quite confronting. Since you went there, can you talk a little bit about the tensions between the United States, Australia and the EU? There are clearly different approaches to, you know, everything from data to technology to even AI blueprints. How do you see us navigating them?

Dr Kobi Leins: [00:36:01] Well, it's a little bit like the standardised versus individualised question with shipping containers. It's: where do you play with others, and where do you build yourself? There are big strategic questions that need to be asked. This is probably another point that I don't see discussed often enough. Strategically, where do we best play with others is a really important question, and where do we do things ourselves? It's really hard for a country of Australia's size and distance from the rest of the world to do everything itself; we're a tiny, tiny nation state a long way away from the centre of power. But I think we need to be more strategic about who and where, and think longer term. Again, I've got a colleague and friend, Catherine Tay, who's looking at rare minerals in Australia, and I said, you know, have you spoken to the national security people? I imagine this would be of incredible interest to them, to know what we've got in Australia, what we're going to need, you know, 30, 50, 100 years from now. And, you know, that kind of thinking is what's really needed. What do they have? What do we have? What do we need? Where do we want to be? Who do we want to be? They're big, hard questions. And they involve resources, and they involve hardware and software imaginations as well.

Dr Miah Hammond-Errey: [00:37:06] In terms of the tensions that you mentioned between the US and the EU, can you give us a bit of an overview of where they are, particularly on AI regulation?

Dr Kobi Leins: [00:37:20] Well, the EU AI Act is comprehensive and potentially going to click into the standards. This is one of the things that'll be really interesting in the next 6 to 12 months: whether the EU adopts the international standards or starts making its own. I, full disclosure, am part German. And so again, I view privacy quite differently, given my history, to how Australians view data and the security of data. And I think for Europeans, having had the history that they've had, there is a much more privacy-focused approach. There are different values embedded in the technologies, and there are different values in how they're used. But most of the systems are still built in Silicon Valley. I'm not saying that people should wake up over breakfast thinking about them, but I think more people at senior levels need to be thinking about this and where that power sits.

Dr Miah Hammond-Errey: [00:38:04] It's clearly critical to the future and yet seems to be such a small part of the discussion. Thank you so much for joining me today, Kobi. It was such a pleasure.

Dr Kobi Leins: [00:38:13] Thank you so much for having me.

Dr Miah Hammond-Errey: [00:38:16] Thanks for listening to Technology and Security. I've been your host, Dr. Miah Hammond-Errey. I'm the inaugural director of the Emerging Tech Program at the United States Studies Centre, based at the University of Sydney. If there was a moment you enjoyed today or a question you have about the show, feel free to tweet me at @miah_he or send an email to the address in the show notes. You can find out more about the work we do on our website, also linked in the show notes. We hope you enjoyed this episode and we'll see you soon.