Artificial Intelligence (AI)

  • United States
    Tackling an Evolving Threat Landscape: Homeland Security in 2023
    Secretary Alejandro N. Mayorkas reflects on the twenty years since the Department of Homeland Security’s formation and reviews the evolving security challenges of today and tomorrow, including the steps being taken to prepare for potential threats like the swift development of artificial intelligence and the rise in nation-state aggression.
  • Artificial Intelligence (AI)
    Artificial Intelligence and Great Power Competition, With Paul Scharre
    Paul Scharre, the vice president and director of studies at the Center for a New American Security, sits down with James M. Lindsay to discuss how artificial intelligence is reshaping great power competition and intensifying the geopolitical rivalry between China and the United States.
  • Religion
    Religion and Foreign Policy Webinar: Religion and Technology
    Heidi A. Campbell, professor of communication at Texas A&M University, and Paul Brandeis Raushenbush, president and CEO of Interfaith Alliance, discuss the meeting of religion and digital culture, and its effect on religious communities. Carla Anne Robbins, senior fellow at CFR, moderates. Learn more about CFR's Religion and Foreign Policy Program.   FASKIANOS: Thank you. Welcome to the Council on Foreign Relations Religion and Foreign Policy Webinar Series. This series convenes religion and faith-based leaders in a cross-denominational dialogue on the intersection between religion and international relations. I’m Irina Faskianos, vice president for the National Program and Outreach at CFR. As a reminder, this webinar is on the record, the audio, video, and transcript will be available on CFR’s website, CFR.org, and on the iTunes podcast channel, Religion and Foreign Policy. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Carla Anne Robbins with us to moderate today’s discussion on Religion and Technology. Carla Anne Robbins is a senior fellow at CFR. She is also Marxe faculty director of the master of international affairs program and clinical professor of national security studies at Baruch College’s Marxe School of Public and International Affairs. Dr. Robbins is an award-winning journalist and foreign policy analyst. She was deputy editorial page editor at the New York Times, and chief diplomatic correspondent at the Wall Street Journal. So, Carla, thank you very much for moderating this conversation. I’m going to turn it over to you to introduce our distinguished speakers. ROBBINS: Thank you so much, Irina. And thank you so much for inviting me. I don’t know an enormous amount about this topic. I know a reasonable amount about the internet. My mother would say, she hopes I know a reasonable amount about religion as well, but not from an academic point of view. So, as Irina said, we’re going to have a conversation here for about twenty-five minutes, and then we’re going to turn it over to you all for questions and conversation. Dr. Heidi A. Campbell is professor of communication, affiliate faculty in religious studies, and a presidential impact fellow at Texas A&M University. She’s also director of the Network for New Media, Religion, and Digital Culture Studies, and a founder of digital religion studies at the university. Dr. Campbell’s research focuses on technology, religion, and digital culture, with emphasis on Jewish, Muslim, and Christian media negotiations. The Reverend Paul Brandeis Raushenbush is president and CEO of Interfaith Alliance. He’s an ordained Baptist minister, and a long-time leader in the interfaith movement, working to protect an inclusive vision of religious freedom for people of all faiths, and none—I love that, people of all faiths and none—in both online and offline spaces. Throughout his twenty-five years of ministry, he has maintained a presence in both IRL and URL spaces, including digital journalist at Beliefnet and HuffPost Religion. He has also served as senior advisor for public affairs and innovation at Interfaith America, and as associate dean of religious life in the chapel at Princeton University. So, Heidi, I’d like to start with you. As a now-academic, I know the scramble we went through taking classes online overnight at the start of the pandemic. 
And there’s been debate ever since in my shop about whether online education is worse, whether it’s better, or whether it’s just different from in-person classes. Can we start talking about that pivot point for religious communities and organizations? How hard was it to adapt? And is there a similar debate going on in your community? CAMPBELL: So, back in 2020, it was interesting to watch, from my perspective as someone who has studied religion and technology, especially the internet, for the last thirty years, an almost overnight change: one week there were about a dozen people in my Facebook feed that were streaming their services, and the next week it was fifty. And then the next week it was one hundred. Religious communities experimenting with different ways to use digital technology in their ministries, and in their communities and communication, isn’t new. We have examples going back as far as the 1990s. But it being a widespread phenomenon rather than the exception to the rule, that’s been very new. There was a lot of initial resistance in some groups because it was not just requiring them to learn a new thing on top of a very uncertain situation, but it was also asking them to rethink what it means to kind of create a worship environment, what it means to have religious gatherings. And so I think a lot of the debates have been over not just the instrumental use of the technology but now, a couple years on, what has it meant for how we change our ways of interacting with one another? I talk to pastors and rabbis all the time. And it’s been a steep learning curve, I think. And there are kind of three camps right now. There’s the people that are thinking, wow, this has been the best thing for us. It’s forced us to think outside the box and it’s been a positive innovation. There’s people that are now saying, well, we spent all this money and time getting up online, and people in our congregation have a demand for this, so we feel like we have to keep going. And then there’s groups that are still what I would call technologically reluctant, or hesitant, in that they actually have access to the technologies, but for them religious experience and actions will always be embodied. And so they just can’t wait to get back to more of that traditional kind of space. So there’s still a wide range, but digital technology in the church and in ministry is here to stay. And our future is definitely hybrid. ROBBINS: Thanks. So, Paul, you were a very early adopter of this world. And then suddenly the world was where you were. Do you find that most people are just basically transferring what they did before online? Or do you see more and more people actually doing creative things? I would say among my colleagues, and I say this with love, that far too many of them still on Zoom look like they’re reading hostage videos. I mean, if they’d had a newspaper in front of them, you’d know they were in a basement somewhere. Are you seeing more creativity as time has gone by, or are people still more hesitant, the way Heidi was saying? RAUSHENBUSH: Well, as Professor Campbell has said, this has really made plain some of the questions that were already present before the pandemic happened. It’s made us examine some important questions, like what does it mean to be embodied? The question of, when ten Jews are online, is that a minyan, is one that no one has really quite answered. When two or three are gathered in Christ’s name online, is Christ there? And what does that mean about an embodied faith?
They seem kind of glib, but they’re actually really important for how we view the body, what we mean by community. So I think there has been incredible innovation on many parts, but it does bring into question what does it mean to be a community. So when you have a community that is not geographically focused, what is that religious community’s responsibility to geography? Meaning, if the majority or even all of your congregation is online around the country or around the world, what does it mean for the local neighborhood that really needs a food pantry, or needs a place for AA to meet? The local responsibility. So these are big questions that are coming up for religious communities around the internet. I would say, I have seen incredible innovation. You can really tell, though. It’s kind of like the difference between when radio happened, or when TV happened, there’s all of a sudden TV evangelists who are, like, OK, I see what the possibility is here. I see what this allows me to do. My community, which is a more mainline Protestant liberal community, has not been that great at it, though there have been many people who have been good. You also see these new influencers, like TikTok pastors, and TikTok rabbis, who are really there. They have constituencies. Whether they’re communities, that’s another question. ROBBINS: So, Heidi, can we talk a little bit more? You’ve written about three common approaches, and you were talking about this. Can we talk a little bit more about the transformational ones? Are they using particular digital platforms? Have they come up with particularly cool ways of using, leveraging the technology so that it is a new experience, a truly religious experience, rather than just preaching at people? But using technology to have them truly experience the religion? CAMPBELL: Yeah, so during the pandemic, there were two innovative strategies. One was kind of a translation strategy. And that was people realized putting a smartphone up and trying to get the whole sanctuary or church wasn’t the best strategy. That just was not transferring online. And so a lot of congregations decided, how do we actually restructure the front of our building? We saw a lot of churches and congregations actually go out of the church or synagogue building and go into maybe a fellowship hall, even into pastors’ homes, and make the sermon into more of a talk show format, or a fireside chat kind of format. And realizing in a time when people were feeling disconnected, maybe the service and liturgy needs to be changed to adapt to that sense of more need for community, more need for connection, more ability to address loneliness. Now, some of those experiments, especially the talk-show kind of format, have kind of transferred back to just streaming their services online. But there’s still a lot of churches that have now kind of added Sunday Bible study groups or discussion forums, or synagogues and mosques will have these community chat groups. And using some of those alternate paths. And the transformational, I would say that that was—when people were not just kind of looking at how do we get our services online, but really looking at, OK, what does the internet do well, and what does it allow us to do well? And how do we actually leverage that for our community? And so here’s where you kind of see more creative forms of, whether it’s religious study or outreach. I’m doing a big study of churches in Indiana and how they were affected over a three-year period by the pandemic.
And we heard lots of interesting stories about little rural churches becoming the internet hub for the community, setting up picnic tables outside. And now they run a kind of web hub kind of community center for people because that became a real need. And it became a gathering point just to decompress during the pandemic. So I think that there’s a lot of innovation. But most, the majority, I’d say, is going back to this kind of translate strategy. And we have to remember that the average religious congregation in the U.S. is sixty-seven people. So those churches, it’s usually the pastor and maybe one other person who had to work to get them online. And so if they didn’t have any technological training or technological resources, just the act of getting their service online was a huge change. And it’s really only now that they’re starting to be able to think through what it would mean to experiment at a greater level, especially if our church is now going to be online and offline, this hybrid reality. ROBBINS: Thanks for that. So, Paul, I actually have two questions for you. My first question is, are ten Jews on Zoom a minyan? And I’m serious. And where is that debate taking place, for all of the questions that you raised? RAUSHENBUSH: Personally, I think that there is no escaping it now, but I think understanding the depth of what we mean when we say that, and not doing it glibly. Not saying, oh, it’s just the technology. But rather understanding that our world has been radically transformed by the internet—radically transformed. At its root, we are different than we were thirty years ago. I don’t think we’ve begun to have that conversation in religious communities. There’s been no innovation or invention that is similar to the internet. We have not begun to delve into what it will mean for us. Eric Schmidt, one of the executives of Google, said, “This is the first invention that humanity has made that humanity does not understand.” We don’t understand what is happening. And there has been very little theological reflection on that. And this is an unfortunate, unfortunate thing. I often introduce lectures about this saying, how many times have you heard a sermon about the internet, or a rabbi talk about the internet, aside from saying, well, you need a sabbatical. That’s not going to do it. We have deep questions. Today on the front of the New York Times, “It’s time to talk to your kids about the chatbots.” Well, we haven’t begun to talk to the kids about the internet. There are people, and religious communities could be doing this, could be offering a conversation about what the internet will mean, what it is doing already to us. Meaning, what does our body mean? What does it mean to be online? We don’t have those conversations, and it’s a real problem for religious communities that I’m shouting—Professor Campbell is one of the few people that’s been shouting it longer than I have. (Laughter.) But I’ve been shouting it since my beginning too, that this is really, really important. So, questions like, is ten Jews—I think rabbis are going to disagree strongly about that. The question that that begs is, what is the internet doing to us? And that’s the question I really want us to dive into. ROBBINS: So one of the things, and it’s always the conventional wisdom, is that too much of what has happened in our digital world has separated us rather than brought us together. What is the most creative use you’ve seen of online platforms that are bringing people together more?
I would say from a teaching point of view, I love breakout rooms. (Laughs.) They have made me a better teacher. I am much less on transmit and much more on receive. That’s the one thing that Zoom has made me better as a teacher. So are there things that you’re already seeing that are going on, I’m going to ask both of you that, that you think that is easily leverageable, that makes the experience better? CAMPBELL: Well, I would say that, as someone who for thirty years has been following some of these trends, I actually see less innovation happening right now than I did ten years ago. Because the people who were doing it were doing it because they wanted to. And right now, we’re in this space of we have to do this and have to figure it out. But I think there still are some interesting ways that people are leveraging together. In these congregations that I study, one group has this weekly Bible study. And it’s for anyone who’s ever been part of the church. And some weeks there’s ten people, and some weeks there’s fifty people. And it’s become this kind of common thing. I’ve even had friends say, “Oh, hey, I have to get off the phone because Zoom Bible study is starting.” And really seeing how can you actually just integrate it into the fabric of people’s everyday lives. Oftentimes I’ve found that religious organizations will say, OK, we need to use TikTok, or we need to use Instagram. And so they build some kind of tool and then it’s, like, if we build it, they will come. But the strategies that work the best come from seeing how the people in your congregation are actually using the technology. And what kind of things do they need? Do they need community? Do they need support? Do they need teaching, or spaces for prayer? And matching what they need with what they’re already doing online is the best way to get these different innovations up and running quickly, I think. ROBBINS: Paul, anything to add on that? Something that you’ve seen that’s given you excitement about this, about the use of it? RAUSHENBUSH: Oh, I mean, so many things. It’s amazing how people are reaching out. I mean, this is an incredible opportunity to be able to dive into areas of the world that you had no idea about, and that you’re no longer restricted because of geography or because of who you know. You’re allowed to expand. So the opportunities for learning about people of your own faith, different faiths. The experience of people in other communities. I for a while have done a little bit in the Metaverse. And the opportunity in the Metaverse to dive into experiences where you are kind of oddly embodied, but not really embodied, but to be able to meet people from around the globe who are in the Metaverse, who may actually say, hey, we set up a news broadcast of news from our country, and we’re inviting anyone to come by and let us translate it for you. The opportunity is to meet people and share in experiences. And, like anything, if you go in there ready to fight, and ready to judge, and ready to hate, then you’re going to have an experience. And you’re going to have other people experience you that way. If you go in there with curiosity, with real interest and love, the internet is an incredible—I mean, it’s amazing—there’s nothing like it. For what I do, which is largely interfaith work, you can build bonds. You can learn. You can grow. You can follow people from diverse traditions and learn so much about what they’re doing, in a way that was just not available to us before. So all of that is present.
And then all of the opposite is present, too, because it’s people. And ultimately, we have to figure out how to navigate the internet in positive ways. And also almost thwart the internet’s effort to make us the product for commerce or for other kinds of purposes. We have to know what we’re up against, and then use it for ways that can be positive. ROBBINS: So, Heidi, and this is a perfect transition to this question—and then I want to turn it over to the group—can we talk a little bit about the downsides, and how much are the communities that you talk to aware of and coming up with strategies to deal with the rise of disinformation and the amount of hatred and alienation that is out there? I mean, the internet is very good for bringing people together but, as Paul alluded to, it’s also really good at spreading hatred and a huge amount of just bad information. CAMPBELL: Well, I think one of the challenges of the internet, and one of the things that’s always praised, is that it’s a space that we can all go to, and we can meet one another and have this global conversation. But the internet is still an exclusionary space. It’s exclusive by the kind of technologies that you have and have access to. There are some places even still in this country where there isn’t WiFi access. And so depending where you are, you may think it’s an egalitarian and equal space, but it’s still a (inaudible) space from that perspective. And the other thing, and this is something that scholars found early on, that the internet is a place to build community. But because it’s so vast, in order to tame the space, as it were, people usually gravitate toward like minds. Interfaith work online is actually—like Paul’s—is really hard, because there’s, oh, I want to find all the people who think like me, so that we can have this shared conversation. And so it creates a kind of online tribalism. It doesn’t inherently create diversity. You have to actually design your space and kind of design how you’re going to run your events or your environment to create that. And so obviously, whenever you put people of the same mind together, it’s easy to form an echo chamber of just, yeah, yeah, we all believe this. And there’s no external voice of accountability. And so that’s why we’ve seen, especially, whether it be the dark web, or these spaces where antisemitic voices or religious extremists emerge, it’s because, again, they find their tribe. But their tribe may be problematic. And again, while the internet gives us access to a lot of information, it’s not like a peer-reviewed journal article where it’s been vetted by four or five people. And I’m always having to teach my students, how do you discern and how do you evaluate the resources that you go to? I can go to a website that looks like a full-on academic journal, but it’s just two or three people’s opinion. And so I think this is the challenge. There’s some innate things the technologies do really well. Like Twitter is really good for spreading information. Facebook is really good for building communities. Instagram is really good for collecting digital stories. But knowing what the technologies do well, as well as what kind of tendencies they can encourage away from communal accountability to an individual preference, is important. ROBBINS: So, Paul, final question by me before I turn it over. Heidi said that the internet was hard for interfaith work. Do you find it harder for interfaith work? Or are you leveraging it particularly well? RAUSHENBUSH: Sure.
I don’t want to put—I would never put words in Heidi’s mouth. I think she said that you have to work against some currents of the internet, which algorithmically encourage you to stay with people who are like-minded. And so I do think it takes intention. But that doesn’t mean that it’s not possible. I do think that what we also are up against as far as interfaith is intentional spread of disinformation about different religious communities, as well as the spread of hate. Intentional targeted spread of hate—37 percent of Jews say they feel harassed online. We’ve got close to that of Muslims, other traditions as well. These are things that are happening now, and they’re happening intentionally. And we have white extremists, largely Christian extremists, who are spreading manifestos. They’re finding the internet. They can share manifestos. They are broadcasting mass murders, either actually live or telling their community right before they do it. And they’re being supported in that effort. So this is the real downside of it. And so part of my role now at Interfaith Alliance, we put out a report on big tech hate and religious freedom, saying that actually the fact that big tech has not found a way to counteract hate in a productive way that safeguards freedom of speech actually curtails religious freedom, because religious freedom is both an online and an offline experience. And right now we are, again, experiencing something where there’s a case in front of the Supreme Court. And five justices said, I don’t understand the facts of this case. It’s so complicated. We’re in a moment, again, where the internet is such a very difficult thing that our Supreme Court justices are confused about the basic facts around the case. So we are in a very—one thing to remember, and Kevin Kelly has said this, the founder of Wired, that we’re in the beginning of this technology. This isn’t going to end. We’re in the very beginning of it. We’re in the throes of this technology. So if we haven’t figured it out yet, it’s OK. Now is the time to really lean in and say, what do we want this to be? And things are happening very fast. And so I encourage us to take it really seriously, especially in the interfaith community where you can do all the offline work you want, but one bad Facebook post because someone hasn’t thought about it can blow up six months of work. And so I just encourage anyone who does interfaith work to take the internet very seriously and train people up on how to be good interfaith citizens online as well as how we train people to be engaged in person. ROBBINS: Thank you for that. So we want to remind everybody about how to ask a question. And while we’re doing that also, we do have one question already in the Q&A. So I will turn it over to our operator host to remind people how to ask a question. OPERATOR: Great. Thank you, Carla. (Gives queuing instructions.) And we have some raised hands, so our first question will come from Don Frew from Covenant of the Goddess. ROBBINS: Thanks, Riki. FREW: Lower hand. There we go. Hi, hi. I’m Don Frew at the Covenant of the Goddess. And my own path, Wicca, was very much an early adopter of technology. A lot of the early religious webpages were Pagan. And then, conversely, with the pandemic, that really hit us a lot harder than a lot of other religious traditions, because when you’re casting a circle it’s not the same thing as going to a church. You can’t really do that online. Working magic online is not something that works very well.
Both of you have focused very much on the internet and Abrahamic faiths, but what about the online experience of other religions, especially indigenous traditions? For many years, I served on the board of the United Religions Initiative. And we would go out of our way to try to connect board members, especially who were indigenous practitioners often in South America, to the board. And that meant getting them technology, laptops, and internet connection. And once that connection was made, we found that that then became a platform for various indigenous spirituality practitioners around the world to be able to connect using the technology that the URI had provided. So there’s been a real growth in indigenous networking using that kind of technology. Although, that hasn’t really been practicing the faith traditions online. That’s just been networking. So I’m wondering, can you say something about technology and non-Abrahamic faiths? CAMPBELL: Yeah. I mean, I remember back in the 1990s, I spent a good deal of time in a techno-Pagan community. And this was in the 1996 to 1999 time period. And here these were people that kind of—where you had affiliations to Paganism in different kinds of forms. And they wanted to see how could the internet, first of all, give them a community space, because for Abrahamic traditions, it’s easy to find an offline space to gather. It’s not so easy, especially if you live maybe in a more remote area of the world or community. So the internet became this great gathering space for Zoroastrianism, for all kinds of religious traditions. But also there was kind of a tension within some of those communities of like, OK, do we—again, do we just transfer our traditional practices to the extent we can or we feel we can online and try to go by a certain kind of tradition or dogma? Or do we try to innovate? In this techno-Pagan community I was studying, they were interested in saying, OK, how can we leverage what technology allows us to do and maybe create new ways to do spells and see if they work or not. And so I think that a lot of the early adopters were, again, smaller religious communities that didn’t have these offline spaces and allowed people to connect with them, but also there was just a sense of, do they follow the tradition or do they innovate. And we see both kind of digitally born religions as well as kind of reimagined forms of religious traditions as much as we’ve seen alternative or smaller minority religious communities emerging online. RAUSHENBUSH: I’ll just add, I think it’s really important for indigenous communities, because of the importance of place and the land, to make their own decisions around this, as well as Pagan communities. And the question of whether or not ten Jews online are a minyan is very analogous to whether you can create a circle—these are questions that have to be worked out by the communities themselves, and they will be answered one way or the other. But the question of land and religion that indigenous communities often bring into the—is a really, really challenging one for the internet. ROBBINS: Riki, can we have Daniel Joranko next? Because he had his question in the Q&A. Daniel, do you want to ask your question? JORANKO: Yeah. Can you hear? ROBBINS: Yes. JORANKO: I’m putting out kind of a difficult question. But I worry very much about the internet.
I mean, as Paul said, it’s this vast new thing and there’s a difference between technology as a tool, and a tool can very much work, and as a system, and I worry about the system and the amount of screen time that people are spending, the amount of emotional distress young people feel. As a person who works in interfaith staffing, just the busyness that is caused for people that are working in this field. People just seem more and more busy because you can get online and schedule thousands of meetings and there’s not a lot of reflectivity as much anymore. And so I almost worry that it’s making us collectively ill in a certain respect, and just your thoughts on that. I mean, again, I’m not proclaiming that. It’s more of a worry. So— CAMPBELL: Well, I like to think of the internet not just as one village but actually this whole kind of new country, because the internet, we use it as this monolithic term. But I, as a researcher, think about it as internets. Everybody’s experience with the internet is different because it’s a network of networks and we choose which platforms that we spend time in and we choose which spaces that we have our interactions. So I could go to one space and just because of the—how I choose to—who I choose to interact with and the choices I make it can be a very positive experience and I can go to other places and it can be damaging and hurtful and dangerous. So I think the key thing is for religious communities to get a better sense about what are these spaces of the internet, and what spaces can be really, well, what I call cultured or cultivated to religious communities because it allows them to live out those values of building religious identity, giving accountability, providing community and care. And then also being aware of these are the spaces that you could happen into that may not be positive spaces. And I don’t mean to say let’s create ghettos on the internet, but it’s just a sense of awareness that it’s not just the internet or the system that is problematic. It’s the people, and I think sometimes we can see it just as a tool and it’s a neutral thing. But the technology is cultured by the people who live there and for the purposes that they use it for, and so we need to see that it’s the users that are actually bringing the negativity and the problems, not the technology itself. They do encourage, again, more individualistic behaviors rather than communal, which is one of the challenges. But having this level of discernment and understanding about what these spaces are and how to use them is, I think, important. ROBBINS: Doesn’t it seem to be that religious communities can play a role if they are educated enough and not—don’t sound too nannying and actually educate kids on how to use the best and not fall into the worst? CAMPBELL: Yeah. I think that it’s important for religious communities to have a digital literacy kind of thing. We tell them how to behave and what it means to live out our faith values in different spaces. Well, how do you live that out online, to treat the other as a friend rather than an enemy, and to show care and concern, and to call—speak truth to power as well. So I think, yeah, these are the things that—maybe education isn’t something that’s being seen as part of religious communities, but it’s the world they swim in, and especially for young people.
And I think more seminaries, more religious institutions, need to have this digital literacy and digital understanding, not just the technical side but the cultural impacts in their training of future religious leaders. RAUSHENBUSH: Strongly agree. I’ll just say we assume that people understand the internet, especially digital natives, but the internet is hard to understand and it’s always changing. And so it is really important that we don’t assume that people understand the waters they’re swimming in but also recognize that water can be life giving but it can also overwhelm you. And so you need to really be thinking about how you’re interacting with the internet. But, it’s becoming less and less of an option. I mean, it might be a forced option for people who do not have access. But for those of us who are in urban areas it’s not an option and so it means that we—exactly as Heidi just said. We need to be very intentional about the way we show up. We need to tell our young people about disinhibition where you are more likely to do things online that you wouldn’t do offline and that’s because the technology affects us in a certain way, and, again, we’re at the beginning and there’s a lot of stuff that will be coming at us very quickly. One thing—I’ll just take the liberty to mention right now—the one thing that I’m very concerned about is AI and religious leadership. And someone is going to create—we asked—when I—this was ten years ago when I was at Huffington Post. We asked Siri about God, and at that point Siri said: I don’t talk about God; you should ask a human. Pretty soon we will probably have AI pastors. We’ll have AI rabbis. These will be invented. Someone will decide to invent it, and then AI pastors and AI rabbis will learn from the questions. They’ll have this vast library at their background. And so it’ll—this all will happen. How do we educate our young people about what that means and when they’re—my kids ask our Alexa, which we finally got this year, they ask them everything. They ask them everything. And so, people are going to ask about religious questions and we’re not ready for it. We’re not ready for it at all. ROBBINS: AI gods. Yes. So I suppose Lawrence Whitney is next. I’m taking Riki’s prerogative here. WHITNEY: Hi. Yeah. Thanks. Larry Whitney, research associate at the National Museum of American History and postdoctoral fellow at the Center for Mind and Culture here in Boston. Thinking about, historically, the invention of the printing press was a technology that had a profound impact on the future of religion, particularly in Europe and then globally, and transformed religion profoundly and provoked a lot of the same theological question—or analogous theological questions, I should say, to what the internet does, in terms of in that case it was about the meaning of text, and who was responsible and authoritative in interpreting texts and that sort of thing. And the answers to those questions as they emerged in history were, largely, independent of the individual answers that religious leaders and theologians gave to those questions at the time. So I’m interested in how you’re seeing religious leaders now responding to these theological questions, such as are ten Jews on the internet a minyan, or are baptism and Eucharist valid when performed over Zoom, right. Wrestling with those questions not just theologically but as having a profound implication for the future of their traditions. 
What decisions they make in answer to those questions now will have an influence on how their traditions—the viability of their traditions and how they evolve but also that that future is somewhat beyond their control and how that impacts their decision making about engaging with technology. Thanks. CAMPBELL: Yeah. I think in many respects with the internet we forget that we’ve been here before. There is really nothing new under the sun, because you can map the debates—positive and negative—about the internet from the 1990s directly onto the debates that were happening around the printing press. And it’s interesting, if you actually look back, the Catholic Church was not against the printing press when it first came out. They actually were very supportive, brainstorming about how they were going to be able to standardize priest education and sharing their teachings. And it wasn’t until basically people began to use it to critique the church that they went ah, oh, this—actually technology undermines our authority. And that’s really the base kind of thing for religious communities. They realize that with this smartphone I can do what you could do in a television studio fifty years ago. So individuals have become as powerful as religious institutions. So we see the internet is both challenging religious authorities because individuals have this ability to present themselves, but as religious authorities and groups become more literate there’s been a huge trend in the last decade to bring on not just communication directors but digital media directors, social media curators for religious groups, that then they say how they can actually use the internet to solidify their position and also be more of a kind of accountability or a critique of some of the narratives that are coming out. So I think religious communities, they don’t have to feel disempowered, but they do need to think about having that digital expertise and bringing in digital create—what I call digital creatives—and think of them as a partner and a collaborator rather than as a competition. And I think as groups begin to do that, we’re going to see a lot more creativity and fruitfulness come out of religious groups’ uses of technology. RAUSHENBUSH: The only thing I’ll add is I think that it is analogous but it is times a million or maybe infinity. The questions are the same. The pace is radically different. The scope is radically different. How quickly everyone has the ability to publish and be an authority, how communities were ruptured. These are all things that are happening much, much quicker and with much less time for traditional religious authorities to react. And the other thing that just was not a part of this is the AI factor, which is, the advent of artificial intelligence and what it means for religion, and how there’s just a different level of ability for truth to be chopped up in so many different ways and digested by machines that are not thinking with trained theological hearts or anything aside from the technology. So I do think the questions are similar and the pace is greatly—I mean, I can do something—while we’re talking here I could write something absolutely inflammatory or outrageous. I could threaten to—which I would never do—but people threaten to burn a Quran and all of a sudden, immediately, within a minute, the world knows. The scope of that happens so fast. It flies around the world, and then things get initiated that are very hard to pull back in.
And so we’re dealing with a question of factors of time and space that are greatly exaggerated and we have to keep that in mind as we imagine what religious communities can and should do, all of which are what Heidi mentioned. CAMPBELL: I would just jump in here to say I would totally agree that we have seen a total amplification in the time/space area, that while these things are not new it is a lot quicker and it can have much more of a global impact. And one individual can have a huge impact, which can give a disproportionate sense of how people think on a certain topic. So I think it’s important. And there were chatbots back in the 1990s. I remember talking with them on mIRC before there were chat forums. But they were programmed and they had much more limitations. This new level of technology is a learning technology so it is actually growing. So no longer do the creators have control. So while the first generation was a competition between the individuals versus the community, now it’s the whole community in competition with the technology. So we’ve lost control to some extent of our own creations and that can have— RAUSHENBUSH: That’s what will be so interesting when that gets mixed in with religion and what will be the impact of that when that technology begins to intersect with religious morality and religious truth? ROBBINS: Interesting and terrifying. Jane Redmont? Whatever question you’d like to ask. REDMONT: One of the things I wanted to bring up is the issue of spiritual formation. Not spiritual formation on the internet, although some of us are doing that, but spiritual formation just like higher education or lower education, taking into account the new technologies and forming us spiritually as bodies, as minds, as users of the web in different ways. I think spiritual formation can take a whole different dimension. We need to learn inward disciplines as well as outward ones, both in the flesh and on the internet, that are, if you will, spiritual exercises the way Ignatius of Loyola had spiritual exercises, but the new version, a kind of hybrid version. Am I making sense to you? I’ve started doing that and working on that a little bit. But I don’t think we’re talking about that enough or doing it enough in our organizations. Never mind religious literacy, which I found we had too little of when I was a college professor. Any thoughts on that, Paul and Heidi and Carla, about this formation idea? And if you have a better word than spiritual formation please use it because it’s a narrow word. CAMPBELL: Well, since you referenced the Episcopal tradition, there’s a huge movement within the Episcopal Church coming out of Virginia Theological Seminary. They have e-Formation, and the one thing I really appreciate about that is a lot of what we’ve seen that’s come out of the pandemic is how do we leverage these tools for doing spiritual direction. I’m a spiritual director myself and I’ve been actually doing online direction since the very beginning. It wasn’t common till about three years ago. But we also need to think about not just how to leverage these technologies to create these sacred spaces or holy spaces but how is the shape of this culture that we’re in, what kind of traits and values is it cultivating in us, and how is that shaping our formation just by being in a digital space?
And I think the realization is that even churches that say, hey, we don’t want to do digital worship, we don’t want to have digital tools, are still being impacted by this culture that we live in, which is so enmeshed with the digital. And I think that level of values education, mixed with digital literacy, is so important to asking what kind of spiritual beings we are becoming and whether that is the direction we really want to cultivate in our communities. ROBBINS: That’s great. Thank you. I am not an expert on this but it seems to me that no one can wall themselves off from it and I think, Heidi, that’s sort of a fundamental point that is shaping our entire society. And if religion isn’t going to help and we in education aren’t going to help people wrestle with it, we are failing in our duties. Steve—is it Ohnsman? OHNSMAN: Yes, it is. Thank you. I’m a pastor at Calvary United Church of Christ in Reading, Pennsylvania, and we embraced a digital ministry immediately. We had already been doing some of it. So we did everything online for a while. But one of the big questions was: How do we do good works as a community of faith? And so I came up with this—I think it’s worked out really well because we have people all over the country who will tune in and be involved. And every two weeks I throw out a mission challenge and I ask everyone to do this wherever they are and then send me what they’ve done and then we post it anonymously. So and so, a person did this, this, this, this, and then what we’re basically doing—I’ll say, feed a family in the next two weeks and then everybody feeds somebody where they are, whether it’s through a food bank or a church program, and we’re doing the ministry together even though we’re really far apart and that’s been very cool. RAUSHENBUSH: Yeah. I’ll just respond to that because I raised the question of commitment to locality and I think that that’s really interesting for those communities that are disparate to recognize that there’s still a need for immediate service. One other way to add to that would be how have you extended love on the internet? How have you shown someone love, or lack of judgment, or uplifted someone? When you come across a stranger how have you loved this stranger? I mean, using some of the Christian language. That could be translated to other communities, what are ways that the mandates of our traditions—ethical mandates—can be translated into life online? I mean, on average each of us spends eight hours a day online in various ways, three or four on social media. For young people, it’s much more than that. That’s the average of all Americans. So we’re spending all this time in online spaces. How are we exercising the mandates of our traditions? ROBBINS: I had wondered about how you bridge this, the online and the real world community—I’m still thinking of virtual communities as not as real—and what Steve Ohnsman was talking about is one fabulous way of doing that. Are there other examples of ways to—because you both began talking about the danger of losing contact with the community around you. Have you guys heard of other ways that the churches and other religious groups are finding ways of making sure that their communities remain intact or even grow even as they, perhaps, have services or Bible study solely online? CAMPBELL: I’ve seen a lot of interesting examples over the pandemic.
I know that some congregations, I’ve heard them encouraging their people to get onto the Nextdoor app, which is about being in your local community and then to volunteer. Say, hey, I will pick up medicine for you or food or I can do this or that. And so really especially trying to help people who were homebound, elderly people during the pandemic, and using the digital tools to provide those connections. I also saw a lot of people, in  the Pinterest and Instagram community, people making a lot of very personal kind of encouragement posts and then sending them to specific people that either that they knew or had met online. But really trying to say how can we spread kindness and care, and so that’s what religion is truly about and not just some of the kind of more negative press that it’s often given. ROBBINS: Thanks. Riki, we have, I think, one more. OPERATOR: We do. Our last question will come from Albert Celoza from Phoenix College. CELOZA: This is Alberto Celoza from Arizona Interfaith Movement and Phoenix College. Is it the internet or is it the pandemic that has caused a decline in terms of religious participation or participation in religious services? Also, has the internet caused the increase in the number of nones, N-O-N-E-S? Any thoughts? CAMPBELL: I would say no, that it hasn’t—again, it’s kind of like the technology. It hasn’t started it. It’s just made it more visible and maybe made it more easy for people to leave their congregations. The rise of the nones was starting to be documented post World War II. After the big world wars, a lot of people were disillusioned with all kinds of institutions, and so we see that. But I think in an era of the internet where it allows you to express your opinion in the safety of behind the screen, I think a lot more people are feeling, hey, there are a lot of us that are nones out there or that we’re wanting to leave the church and so we’re done. And so it’s easy to—easier to self-proclaim that. It gives them more confidence to do that on surveys. And so, obviously, we have seen an increase but it wasn’t the internet that started it. It’s just facilitated something that’s already started in culture. That’s my opinion. RAUSHENBUSH: No, I think that’s right. I think that these were trends that were already in play before the internet and it has allowed, just as Heidi said. I think that there’s also ways that there are new communities forming that could be viewed as quasi-religious communities and many of the folks who have left traditional religion would not be themselves—do not call themselves atheists or other things. They find community in social justice movements. They find community in other—in arts movements. They find—so there is ways that—are ways that the internet has actually found—allowed people to find one another. Black Lives Matter could be an example of a movement that attracted a lot of people who might have not been involved in traditional religious worship. I think there’s a—there is a transformation and it’s—I really appreciate where that question is coming from and what we might imagine is what’s coming next and what are ways to—especially for someone like yourself who’s involved in interfaith work, how do we invite those people into questions of interfaith, questions that interfaith communities deal with, around meaning, around working together across lines of difference for social change and things like that. So I think all of those things are very, very interesting. 
I’ll just take a moment just to say one thing because we’re at CFR. I do think that there’s a massive implication for the internet and for religion vis-à-vis trans-global politics. These are—this is a way that religious communities are connected, so many different manifestations of religious communities across national boundaries immediately and in very intimate ways. Diaspora goes all different ways and religious communities are being mobilized across the globe. What happens, for instance, in India is not separated from the Indian community in America, or the Hindu community in America and other Indian religious communities in America. Likewise with Israel, Palestine, likewise with the war in Russia and the Orthodox Church in Ukraine. So I offer that just as a closing word that this is very relevant for CFR’s work, and so I’m hoping that there are also opportunities in the future to talk about specifically how it impacts foreign relations and international relations. ROBBINS: And also gives a voice to the voiceless and the—(inaudible)—if you look at what’s happening in Iran right now, the way it has— RAUSHENBUSH: Absolutely. ROBBINS: —given voice to women and to young people and, certainly, for unrepresented and suppressed communities—minority communities, religious communities. So not all is bad on the internet. There are other possibilities there. I want to thank you. We’re going to turn it back to Irina. I want to thank you, Dr. Heidi Campbell. Thank you, Reverend Paul Brandeis Raushenbush. We will share—Paul, you mentioned a report that you have just developed. We want to share that with everyone who’s attended today. Heidi, if there’s anything you think we should be reading and sharing we will share it with the group as well. And, Irina, back to you and thank you, everybody, for fabulous questions. FASKIANOS: I echo those thanks. It was a very smart and insightful conversation. So thank you. We will send the link to the video and transcript and Paul’s report. Heidi, anything you want to share. You can follow Heidi Campbell on Twitter at @heidiacampbell, Paul at @raushenbush, and you can follow Carla at @robbinscarla, and Carla is also newly co-hosting a CFR podcast The World Next Week. So you should tune into that. You can also follow us, CFR’s Religion and Foreign Policy program, on Twitter at @CFR_religion. And, again, please do email us, [email protected], with feedback, suggestions for topics or speakers, and any questions you might have. Again, thank you all for doing this, for the amazing questions and comments, and we hope you all have a great rest of the day wherever you are.
  • Cybersecurity
    Cyber Week in Review: March 17, 2023
    TikTok explores separating from ByteDance; OpenAI releases GPT-4; SEC proposes tighter rules on financial system cybersecurity; CISA establishes ransomware warning system; Samsung announces $228 billion chip-making hub in South Korea.
  • Technology and Innovation
    How to Prioritize the Next Generation of Critical Technologies
    The U.S. government should invest in a bottom-up capability for identifying the next generation of emerging technologies and innovators. This will enable agencies to prioritize critical technologies earlier in their development.
  • Cybersecurity
    Cyber Week in Review: February 17, 2023
    U.S. bans six Chinese organizations following balloon incident; Beijing to support development of large AI models; Chris Inglis departs ONCD; TI announces $11 billion U.S. fab; Israeli company ran large disinformation campaigns.
  • Technology and Innovation
    Tracking the Race to Develop Generative AI Technologies in China
    As ChatGPT sets off a wave of global interest in generative AI technologies, Chinese companies scramble to develop homegrown rivals to the OpenAI-developed chatbot.
  • Artificial Intelligence (AI)
    Academic Webinar: AI Military Innovation and U.S. Defense Strategy
    Lauren Kahn, research fellow at CFR, leads the conversation on AI military innovation and U.S. defense strategy.   FASKIANOS: Thank you, and welcome to today’s session of the Fall 2022 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach at CFR. Today’s discussion is on the record, and the video and transcript will be available on our website CFR.org/Academic if you would like to share it with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Lauren Kahn with us to talk about AI military innovation and U.S. defense strategy. Ms. Kahn is a research fellow at CFR, where she focuses on defense, innovation, and the impact of emerging technologies on international security. She previously served as a research fellow at Perry World House, the University of Pennsylvania’s global policy think tank, where she helped launch and manage projects on emerging technologies and global politics, and her work has appeared in Foreign Affairs, Defense One, Lawfare, War on the Rocks, Bulletin of the Atomic Scientists, and the Economist, just to name a few publications. So, Lauren, thanks very much for being with us. I thought we could begin by having you set the stage of why we should care about emerging technologies and what they mean for us as we look ahead in today’s world. KAHN: Excellent. Thank you so much for having me. It’s a pleasure to be here and be able to speak to you all today. So I’m kind of—when I’m setting the stage I’m going to speak a little bit about recent events and current geopolitical situations and why we care about emerging technologies like artificial intelligence, quantum computing—things that seem a little bit like science fiction but are now coming into reality—and how our military is using them. And then we’ll get a little bit more into the nitty gritty about U.S. defense strategy, in particular, and how they’re approaching adoption of some of these technologies, with a particular focus on artificial intelligence, since that’s what I’m most interested in. Look, awesome. Thank you so much for kicking us off. So I’ll say that growing political competition between the United States, China, and Russia is increasing the risk of great power conventional war in ways that we have not seen since the end of the Cold War. I think what comes to everyone’s mind right now is Russia’s ongoing invasion of Ukraine, which is the largest land war in Europe that we’ve seen since World War II, and the use of a lot of these new emerging capabilities. And so I’ll say for the past few decades, really, until now we thought about war as something that was, largely, contained to where it was taking place and the parties particularly involved, and most recent conflicts have been asymmetric warfare limited to terrestrial domains. So, on the ground or in the air or even at sea, where most prominent conflicts were those between nation states and either weak states or nonstate actors, like the U.S.-led wars in Afghanistan and Iraq or intervention in places like Mali and related conflicts as part of the broader global war on terrorism, for example. And so while there might have been regional ripple effects and dynamics that shifted due to these wars, any spillover from these conflicts was a little bit more narrow or due to the movement of people themselves, for example, in refugee situations.
I’ll say, however, that the character of war is shifting in ways that are expanding where conflicts are fought and who is involved, and a large part of this, I think, is due to newer capabilities and emerging technologies. It’s not entirely due to them, but the prominence of influence operations, misinformation, deep fakes, artificial intelligence, and commercial drones—which make high-end technology cheap and accessible for the average person—has meant that these wars are going to be fought in new ways. We’re seeing discussion of information wars, where battles are fought on TikTok and in social media campaigns, where individuals can film what’s happening on the ground live, and where states no longer have a monopoly, so to speak, on the dissemination of information. I’ll speak a little bit more about some examples of the technologies we’re seeing. But, broadly speaking, this means that the battlefield is no longer constrained to the physical. Wars are being fought in cyberspace and even in outer space, with the involvement of satellites and the reliance on satellite imagery, including open-source satellite imagery like Google Maps. As a result, this will drive new sectors and new actors into the fray when it comes to fighting wars, and militaries have been preparing for this for quite a while. They’ve been investing in basic science research and development and in testing and evaluation across all of these new capabilities—artificial intelligence, robotics, quantum computing, hypersonics. These have been priorities for a few years, but the conflict in Ukraine and the way these technologies are being used there have really put a crunch on the time frame that states are facing, and I’m going to speak a little bit more about that in a minute. But to give you an example of what it means to use artificial intelligence on the battlefield—what that looks like—my work before this conflict was a little hypothetical. It was hard to point to concrete cases. Now, as these technologies mature, you’re seeing them used in more ways. Artificial intelligence, for example, has been used by Russia to create deep fakes. There was a very famous one of President Zelensky that they combined with a cyberattack to put it on national news in Ukraine, to make it look a little bit more believable, even though you could tell the deep fake itself was computer generated. This shows how some of these technologies are evolving and, especially when combined with other technological tools, are going to be used to make influence operations and propaganda campaigns more persuasive. In other examples of artificial intelligence, there’s facial recognition technology being used to identify civilians and casualties. There’s natural language processing, which is a type of artificial intelligence that analyzes the way people speak—think of Siri, think of chatbots.
But more advanced versions are being used to read in radio transmissions, translate them, and tag them so that forces are able to go through them more quickly and identify what combatants are saying. There’s the use of 3D printing and additive manufacturing, where individuals are able, very cheaply—a 3D printer costs a thousand dollars or so, maybe less if you build it yourself—to add components to grenades, and people are taking smaller commercial drones to make a MacGyvered smart bomb that you can maneuver. So those are some of the commercial technologies being pulled into the military sphere and onto the battlefield. They might not be large, and they might not be military in their first creation. But because they’re general-purpose, dual-use technologies, they’re being developed in the private sector and you’re seeing them used on the battlefield and weaponized in new ways. There are other technologies that originated in the military and defense sectors—things like loitering munitions, which we’re seeing more of now, and a lot more drones. I’m sure a lot of you have been seeing a lot about the Turkish TB2 drones and the Iranian drones now being used by Russia in the conflict. These are not as new; they’ve been around for a couple of decades. But they’re reaching a maturity in their technological lifecycle where they’re a lot cheaper, a lot more accessible, and a lot more familiar, and now they’re being used in innovative and new ways. They’re being seen as less precious and less expensive. Not that they’re being used willy-nilly or that they’re expendable, but we’re seeing that militaries are willing to use them in more flexible ways. For example, in the early days of the campaign, Ukraine allegedly used the TB2 as a distraction when it wanted to sink a warship, rather than actually using it to try to sink the warship itself—using it for things it’s good for, but maybe not what it was initially designed to do. Russia is now using Iranian-made loitering munitions. They’re pretty reasonable in price—about $20,000 a pop—and so using them in swarms to take out some of Ukraine’s infrastructure has been an effective technique. Ukraine, for example, is very good at shooting them down; I think at some point they were reporting an ability to shoot them down at a rate of around 85 to 90 percent. So not all of the swarm was getting through, but because the munitions are so reasonably priced, it was still a reasonable tactic and strategy to take. There are even some more cutting-edge, slightly unbelievable applications, like what’s now being touted as an Uber for artillery, where you use algorithms similar to the kind Uber uses to identify which passengers to pick up first and where to drop them off in order to target artillery systems—which target is most efficient to hit first. And so we’re seeing a lot of these technologies being used, like I said, in new and practical ways, and it’s really condensed the timeline on which states—especially the United States—feel they need to adopt these technologies.
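To make that “Uber for artillery” analogy concrete, here is a minimal sketch of dispatch-style target prioritization—ranking options by value per unit of effort, the way a ride-hailing dispatcher matches cars to nearby passengers. Every name, field, and scoring weight in it is an illustrative assumption, not a description of any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    value: float        # estimated military value of striking the target
    distance_km: float  # distance from the battery

@dataclass
class Battery:
    name: str
    range_km: float

def prioritize(targets: list[Target], battery: Battery) -> list[Target]:
    """Rank reachable targets by value per unit of effort, dispatch-style.

    Like a ride-hailing dispatcher preferring high-fare passengers who are
    cheap (close) to reach, we prefer high-value targets that cost little
    to service. A real system would also fold in munition cost, time
    sensitivity, and collateral-damage estimates.
    """
    reachable = [t for t in targets if t.distance_km <= battery.range_km]
    return sorted(reachable,
                  key=lambda t: t.value / (1.0 + t.distance_km),
                  reverse=True)

if __name__ == "__main__":
    targets = [
        Target("supply depot", value=8.0, distance_km=30.0),
        Target("radar site", value=9.5, distance_km=45.0),
        Target("empty trench", value=1.0, distance_km=5.0),
    ]
    battery = Battery("battery A", range_km=50.0)
    for t in prioritize(targets, battery):
        print(t.name)
```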
Back in 2017, Vladimir Putin famously stated that he believed whoever became the leader in AI would become the leader of the world, and China has very much publicized its plans to invest a lot more in AI research and development and to bridge the gaps between its civilian and military engineers and technologists to take advantage of AI by the year 2023. So we’ve got about one more year to go. And so for the United States, recognizing this time crunch, the heat is on, so to speak, for adopting some of these newer capabilities. We’re seeing that a lot now. There’s a lot of reorganization happening within the Department of Defense to better leverage and adapt to some of these technologies. There’s the creation of a new Chief Digital and Artificial Intelligence Office and a new emerging capabilities policy office, which are efforts to better integrate data systems and ongoing projects in the Department of Defense into broader U.S. strategy. There have also been efforts to partner with allies to develop artificial intelligence. As part of the Indo-Pacific strategy that the Biden administration announced back in February of 2022, the administration announced that, along with the Quad partners—Japan, Australia, and India—it would fund graduates from any of those four countries to come study in the United States if they focus on science, technology, engineering, and mathematics, so as to foster integration and collaboration among allies and partners. Even more recently, in April 2022, for example, looking at how Ukraine was using a lot of these technologies, the United States was able to fast-track one of its programs, called the Phoenix Ghost. It’s a loitering munition, still not well known. The United States saw the capabilities requirement that Ukraine had and fast-tracked its own program in order to fulfill it, so these munitions are being used for the first time. So, again, the United States is using this as an opportunity to learn, as well as to really kick AI and defense innovation development into high gear. That doesn’t mean it’s without its challenges—the acquisitions process in particular, meaning how the Department of Defense takes a program from research and development all the way to an actual capability it can use on the battlefield. What used to take maybe five years in the 1950s now takes a few decades; there are a lot of processes in between that make it challenging. There are all sorts of checks and balances in place, which are great, but they have slowed the process down. And so it’s harder for the smaller companies and contractors that are driving a lot of the cutting-edge research in these fields to work with the defense sector. So there are some of these challenges, which, hopefully, some of the reorganization the Pentagon is doing will help with. But that’s the next step, looking forward, and I think it’s going to be the next big challenge that I’m watching over the rest of this year and the next six months.
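Before moving on: the translate-and-tag workflow for intercepted communications described earlier can be illustrated with a short sketch built on open-source models via the Hugging Face pipeline API. The checkpoints named are real public models, but the sample messages and tag labels are placeholders, and this is only one plausible way such a pipeline might be assembled.

```python
# A minimal sketch of a translate-and-tag workflow using open-source
# models. The model names are real public checkpoints; the intercepted
# messages and tag labels below are illustrative placeholders.
from transformers import pipeline

# Step 1: translate intercepted text (here, Russian to English).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")

# Step 2: tag each translated message so analysts can triage quickly.
tagger = pipeline("zero-shot-classification",
                  model="facebook/bart-large-mnli")
tags = ["logistics", "troop movement", "equipment status", "other"]

intercepts = [
    "Колонна выдвигается на рассвете.",      # "The column moves out at dawn."
    "Нужно больше топлива на складе.",        # "More fuel needed at the depot."
]

for msg in intercepts:
    english = translator(msg)[0]["translation_text"]
    result = tagger(english, candidate_labels=tags)
    # Print the top tag and its score so an analyst can sort by priority.
    print(f"{english!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")
```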
But I think I threw a lot out there, so I’m happy to open it up for questions now and focus on anything in particular. I think that gave an overview of some of the things we’re seeing now. FASKIANOS: Absolutely. That was insightful and a little scary—(laughs)—and I look forward now to everybody’s questions. As a reminder, after two and a half years of doing this: you can click on the raise hand icon on your screen to ask a question, and on an iPad or tablet click the more button to access the raise hand feature. When you’re called upon, please accept the unmute prompt and state your name and affiliation. You can also submit a written question via the Q&A icon, and please include your affiliation there, and we are going to try to get through as many questions as we can. All right. The first raised hand comes from Michael Leong. Q: Hi. Is this working? FASKIANOS: It is. Please tell us your affiliation. Q: Hi. My name is Michael Leong. I’m an MPA student in public administration at the University of Arizona in Tucson. And I have a question about the frequent and successful use of drones in Ukraine: given how easily such accessible technology is being adapted to warfare, is there any concern domestically that it could be used maliciously, and what steps might be under consideration? Thanks. KAHN: Absolutely. That’s a great question. I think it’s broader than just drones, when you have this proliferation of commercial technology into the defense space and you have technologies that are not necessarily weapons. A good example is Boston Dynamics. They make a quadruped robot with four legs. It looks kind of like a dog; his name is Spot. And he’s being used in all sorts of commercial applications—helping local police forces, et cetera—for very benevolent uses. However, there’s been a lot of concern that someone will go and, essentially, duct-tape a gun to Spot, and what will that mean? And so it’s a similar question for some of these technologies: again, it depends on how you use them, and so it’s really up to the user. When you get things like commercial drones that individuals are using for reconnaissance, or using in combination with things like 3D printing to make weapons, it is going to be increasingly difficult to control the flow. Professor Michael Horowitz over at the University of Pennsylvania, who’s now in government, has done a lot of research on this, and you see that the diffusion of technologies happens a lot quicker when they’re commercially based than when they’re of military origin. So it’s definitely going to pose challenges, especially when you get to things like software and artificial intelligence, which are open source and can be used from anywhere. Controlling exports, and controlling after the fact how these are used, is going to be extremely difficult. A lot of that is currently falling to the companies producing them to self-regulate, since they have the best ability to limit access to certain technologies. Take, for example, OpenAI.
If any of you have played with DALL-E 2 or DALL-E Mini, the image-generating prompt sandbox tool, they have limited what the public can access—certain features, right—and are testing themselves to see how these are being used maliciously. I think a lot of them are testing how they’re being used for influence operations, for example, and making sure they can regulate the features that allow more malicious use. But it is going to be extremely hard, and the government will have to work hand in hand with a lot of these companies and private actors that are developing these capabilities in order to do that. It’s a great question, and not one that I have an easy answer to, but it is something I’ve been thinking about a lot. FASKIANOS: Thank you. I’m going to take the next question from Arnold Vela, an adjunct faculty member at Northwest Vista College. What is the potential value of AI for strategy—e.g., war planning—versus tactical uses? KAHN: Great. Honestly, I think with a lot of artificial intelligence the benefit is replacing repetitive, redundant tasks. So it’s not replacing the human; it’s making the human more efficient by reducing things like data entry and cleaning, and by being able to pull resources together. It’s actually already being used, for example, in war planning and war gaming: Germany and Israel have used AI to create sort of 3D battlefields where they can see all the different inputs of information and sensors. And I think that’s really where the value-add—the competitive advantage—of artificial intelligence is. Having an autonomous drone is very useful, but what will really be the game changer, so to speak, will be making forces more efficient so they have a better sense both of themselves and of their adversaries. So I’m more in the background with the nonsexy part—the data cleaning and all the numbers—which I think will be a lot more important than having a drone with onboard AI capabilities, even though those suck the oxygen out of the room a little bit, because they’re really exciting. It’s shiny. It’s Terminator. It’s I, Robot-esque, right? But a lot of it will be making linguists within the intelligence community able to process and translate documents at a much faster pace—making individuals’ lives easier. FASKIANOS: Great. Thank you. I’m going to go next to Dalton Goble. Please accept the unmute. Q: Thank you. FASKIANOS: There you go. Q: Hi. I’m Dalton. I’m from the University of Kentucky, at the Patterson School for Diplomacy and International Commerce. Thank you for having this talk. I wanted to ask about the technology divide between the developed and developing world, and I wanted to hear your comments about how the use of AI in warfare, and the proliferation of these technologies, can exacerbate that divide. KAHN: Absolutely. I’ve been focusing a lot on how the U.S. and China and Russia, in particular, have been adopting these technologies, because they’re the ones investing in them the most. Countries in Europe are as well, and Israel, et cetera, and Australia also.
But I still think we’re in the early stages, where a lot of countries—I think over a hundred—have national AI strategies right now. I don’t think it’s as far along yet in terms of military applications or applications for government. I will say that, more broadly, because these technologies are developed in the commercial sector and are a lot more reasonably priced, there’s actually a lot of space for countries in the developing world, so to speak, to adopt them. There are not as many barriers as there are when it’s a very expensive, highly specific military system. And so these technologies are actually diffusing rapidly, and pretty equally. I haven’t done extensive research into that—it’s a very good question—but my first gut reaction is that it can, not necessarily exacerbate the divide, but actually help close the gap a little bit. A colleague of mine works a lot on health care and health systems in developing countries, and she works specifically with them to develop a lot of these technologies and finds that they actually adopt them quicker, because they don’t have all of the existing preconceived notions about what the systems and organizations should look like and are a lot more open to using some of these tools. But I will say, again, they are just tools. No technology is a silver bullet. Being commercial-sector technologies, they will diffuse a lot more rapidly than other kinds of military technologies, but it is something to be cognizant of, for sure. FASKIANOS: Thank you. I’m going to go next to Alice Somogyi. She’s a master’s student in international relations at the Central European University. Could you tell us more about the implications of deep fakes within the military sector and as a defense strategy? KAHN: Absolutely. I think influence operations in general are going to be increasingly part of the game, so to speak. It was very visible in the case of Ukraine how important the information war was, especially in the early days of the conflict, and the United States did a very good job of releasing information early to allies and partners, et cetera, which made the global reaction time to the invasion so quick. That was very unexpected, and I think it has shown—not to overstate it—the power that individuals and propaganda can have. If you’ve studied the history of warfare, you can see the impact of propaganda; it’s always been an element at play. I will just say that AI is another tool in the toolkit to make propaganda more believable and more efficient, and what’s really interesting, again, is how a lot of these technologies are going to be combined to make it more believable. Take deep fakes: the technology isn’t there yet to make them super believable, at least on a large scale that a state would believe. But combining them with something like a cyberattack, to place them somewhere people would be more willing to believe them, will be increasingly important. And we’ll see these tools combined in other ways that I can’t even imagine.
And that goes back to one of the earlier questions about the proliferation of these technologies: they’re commercial, and you can’t contain their use, and that’s the hardest part—especially when it comes to software, where once you sell it, buyers can use it for whatever they want. There’s a kind of creativity at work, and you can’t protect against every possible situation you don’t know about, so responses have to be somewhat reactive. But there are measures that states and others can take to be a little more proactive. This isn’t specific to deep fakes but applies to artificial intelligence in general: there’s a space, I think, for confidence-building measures—informal agreements that states can come to in order to set norms and general rules of the road, expectations for artificial intelligence and other emerging technologies. These can be put in place before the technologies are used, so that when unexpected or unprecedented situations arise, there’s not a total absence of a game plan—there are processes to fall back on to guide how to handle the situation—without regulating so much so quickly that the rules become outdated. But as the technology develops, I think we’ll definitely be seeing a lot more deep fakes. FASKIANOS: Yes. So Nicholas Keeley, a Schwarzman Scholar at Tsinghua University, has a question along these lines. The Ukrainian government and Western social media platforms were pretty successful at preempting, removing, and counteracting the Zelensky deep fake. How did this happen? He asks about the cutting-edge prevention measures against AI-generated disinformation that you just touched upon. But can you talk about this specific case—what we’re seeing now in Ukraine? KAHN: Yeah. I think Ukraine has been very, very good at using these tools in a way that we haven’t seen before, and that’s largely why a lot of countries now are looking and watching and changing their tack when it comes to using them. Again, these tools seem kind of far off—what’s the benefit of using newer technologies when we have things that are known and work? But Ukraine, being the underdog in this situation and knowing since 2013 that this was an event that might happen, has been preparing. In particular, their digital ministry—I’m not sure of the exact title—was able to mobilize very quickly. It was originally set up to better digitize government platforms and provide access to individuals, I think through a phone app. But they had these experts who work on how digital tools can engage the public and the media, and when the invasion came, they militarized them, essentially. In the early days, a lot of people in that organization asked Facebook, Apple, et cetera, to put sanctions in place, to put guardrails up.
You know, a lot of the early actions—Twitter taking down media, et cetera—happened because this organization within Ukraine specifically made it its mission to do so and to work as the liaison with Silicon Valley, so to speak, engaging the commercial sector so it could self-regulate and help the government do these sorts of things, which, I think, inevitably led to them catching the deep fake really quickly. But also, if you look at it, it’s pretty clear that it’s computer generated. It’s not great. So I think that was part of it, and, again, it came in combination with a cyberattack, so you could notice that an attack had just occurred. While the cyberattack made the fake more realistic, it also carried risks, because Ukraine is practiced at identifying when a cyberattack has just occurred, more so than other things. FASKIANOS: Thank you. I’m going to go next to Andrés Morana, who’s raised his hand. Q: Hi. Good afternoon. I’m Andrés Morana, affiliated with the Johns Hopkins SAIS international relations master’s program. I wanted to ask you about AI, and maybe emerging technology as well. As artificial intelligence is applied to the defense sector, there seems to be a need to reform the acquisitions process in parallel, which is notorious for its slowness. As we think about AI, and where servers are hosted, a lot of commercial companies might come with some new shiny tech that could be great, but if their servers are hosted somewhere that’s easy to access, maybe that’s not great for the defense sector. So I don’t know if you have thoughts on the potential to reform, or the need to reform, the acquisitions process. Thank you. KAHN: Yeah, absolutely. This is some people’s favorite topic, because the process has become sort of a valley of death, right, where things go and they die. They don’t move. Of course, there are some bridges, but it is problematic for a reason. There have been a few efforts to create mechanisms to circumvent it; the Defense Innovation Unit has created some funding mechanisms to avoid it. But, overall, I do think it needs reform. I don’t know what that looks like—I’m not nearly the expert on the acquisitions process specifically that a lot of folks are—but reform would make things a lot easier. China, for example—people are talking about how it’s so far ahead on artificial intelligence, et cetera. I would argue that it’s not. It’s better at translating what it has in the civilian and academic sectors into the military sphere and integrating that—at overcoming that gap. It does so with civil-military fusion: they can say, we’re doing it this way, so it’s going to happen, whereas the United States doesn’t have that kind of ability. But I would say the United States has academia and an industry that lead on artificial intelligence. Stanford recently put out its 2022 AI Index, which has some really great charts and numbers on how much research is being done in the world on artificial intelligence, in which countries and regions, and specifically who’s funding it—governments, academia, or industry. And the United States is still leading in industry and academia.
It’s just that the government has a problem tapping into that, whereas in China, for example, government funding is a lot greater and there’s a lot more collaboration across government, academia, and industry. So I think that is the number-one barrier that I see right now. The second one, I’ll say, is accessing data and making sure you have all the bits and pieces you need to be able to use AI. What’s the use of having a giant model—an algorithm that could do a million things—if you don’t have all of the data set up for it? Those are the two organizational infrastructure problems that I’d say are really hindering the U.S. when it comes to adopting these technologies. Unfortunately, I do not have a solution; I would be super famous in this area if I did. FASKIANOS: Thank you. I’m going to take the next question from Will Carpenter, a lecturer at the University of Texas at Austin, which also got an upvote. What are the key milestones in AI development and quantum computing to watch for in the years ahead from a security perspective? And who is leading in the development of these technologies—large-cap technology companies such as Google and ByteDance, venture-capital-backed private companies, government-funded entities, et cetera? KAHN: Great question. I’ll say quantum is a little further down the line, since we do not yet have a really big quantum computer that can handle enough data. China is kind of leading in that area, so to speak, so it’s worth watching them. They’ve created their first quantum-encrypted communications line, I think, and done a few other things along those lines, so keeping an eye on that will be important. But, really, just getting a computer large enough that it’s reasonable to use quantum will be the next big milestone there, and that’s quite a few years down the line. When it comes to artificial intelligence, I’ll say that AI has had waves—peaks and dips in interest and research. They call them AI winters and AI springs: winter is when there’s not a lot of funding, and spring is when there is. Right now we’re in a spring, obviously, in large part because of breakthroughs in the 2010s in things like natural language processing and computer vision. And so continued milestones in those areas will be key. There’s a paper I’ve worked on—hopefully it will be out in the next few months—forecasting when AI and machine-learning experts think those milestones will be hit. Some have already been hit: having AI beat all the Atari games, for example, or having AI play Angry Birds. And there are mini milestones that represent bigger leaps than just improvements in the efficiency of these algorithms—things like artificial general intelligence, or the ability to create one algorithm that can play a lot of different games: chess and Atari and Tetris. But broadly speaking, that’s a little further down the line. For the next few months and the next few years, it will probably just be making some of these algorithms more efficient—making them better, making them leaner, making them use a lot less data.
But I think we’ve largely hit the big ones, and so we’ll see these shorter, smaller milestones being achieved in the next few years. And I think there was another part to the question—let me look at it again. Who’s developing these? FASKIANOS: Right. KAHN: I would say the large companies—Google, OpenAI, et cetera. But a lot of these models are open source, which means the models themselves are out there and available to anyone who wants to take them and use them. I mean, once you saw DALL-E Mini, you saw DALL-E 2 and the rest. They proliferate really quickly and they adapt, and that’s a large part of what’s driving the acceleration of artificial intelligence. It’s moving so quickly because there is this culture of collaboration and sharing that companies are incentivized to participate in, where they just take the models, train them against their own data, and, if the result works better, use that. And so those kinds of companies are all playing a part, so to speak. But I would say academia is still really pushing the forefront, which is really cool to see. So if a lot more blue-skies basic research is funded, we’ll see these advances continue. I’ll also say that when it comes to defense applications in particular, the challenge is that, more than is typical for artificial intelligence, these capabilities are being developed by niche, smaller startup companies that might not have the capabilities that, say, a Google or a Microsoft has when it comes to working and contracting with the U.S. government. So that’s also a challenge. The acquisitions process is challenging at best even for the big companies; for these smaller companies that really do have great, specific applications for AI, it’s a significant challenge. So it’s basically everybody—everyone’s working together, which is great. FASKIANOS: Great. I’m going to go next to DJ Patil. Q: Thanks, Irina. Good to see you. FASKIANOS: Likewise. Q: And thanks for this, Lauren. I’m DJ Patil, and I’m at the Harvard Kennedy School’s Belfer Center, as well as Devoted Health and Venrock Partners. So, Lauren, on the question you addressed a little bit on the procurement side: I’m curious what your advice to the secretary of defense would be around capabilities, specifically given the question of large language models, the efforts we’re seeing in industry, and how much separation in results we’re seeing between industry and academia. The breakthroughs being reported are so stunning. And if we look at the datasets those companies are building on, they’re basically open, or there are copyright issues in there. Defense applications have very small datasets, and also, as you mentioned, on the procurement side there’s a lack of access to these capabilities. So, looking across this from a policy perspective, what are the mechanisms for how we start tapping into those capabilities to ensure that we stay competitive as the next iterations of these technologies take place? KAHN: Absolutely. I think that’s a great question. I’ve done a little bit of work on this.
When they were creating the chief digital AI office, I think they had people brainstorming about what kinds of things we would like to see, and everyone agreed that they would love for it to have better access to data. If the defense secretary asks, can I have data on all the troop movements for X, Y, and Z, there are a lot of steps to go through to pull all that information together. The U.S. defense enterprise is great at collecting data from a variety of sources—from the intelligence community, analysts, et cetera. And, of course, there are natural challenges built in, with different levels of confidentiality and classification, et cetera. But being able to pull those sources together, and to clean and organize that data, will be a key first step, and that is a big infrastructure, systems, and software challenge. A lot of it is actually getting hardware in the defense enterprise up to date, and a lot of it is making sure you have the right people. That’s another huge one—the National Security Commission on AI, in its final report, said that the biggest hindrance to actually leveraging these capabilities is the lack of AI and STEM talent in the intelligence community and the Pentagon. There’s just a lack of people who have the background and the vision to say, OK, this is even a possible tool we can use, and to understand it—and then, once the tools are there, people trained to use them in these kinds of capacities. So that will be a huge one. There are efforts ongoing right now with the Joint Artificial Intelligence Center—the JAIC—to pilot AI educational programs for this reason, as a kind of AI crash course. But there needs to be a broader effort to encourage STEM graduates to go into government, and that can be done, again, by playing ball, so to speak, with this whole idea of open source. Of course, the Department of Defense can’t make all of its programs open and free to the public. But it can do a lot more to show that it’s a viable option for individuals working in these careers to address some of the same kinds of problems, with the most up-to-date tech and resources and data as well. And right now it’s not evident that that’s the case. The department might have a really interesting problem set, which has been shown to be attractive to AI PhD graduates and the like. But, again, it’s not really promoting that, making resources available, or setting up its experts in the best way, so to speak, to be able to use these capabilities. FASKIANOS: Thank you. I’m going to take the next question from Konstantin Tkachuk, who wrote a question but also raised his hand. So if you could just ask your question, that would be best. Q: Yes. I’m happy to say it out loud. My name is Konstantin. I’m half Russian, half Ukrainian, and I’m connecting from the Schwarzman Scholars program at Tsinghua University. My question concerns the industry as a whole and how it has to react to what’s happening with the technology the industry is developing.
Particularly, I am curious whether it’s the responsibility and in the interest of industry and policymakers to protect the technology from such misuse, and whether they actually have the control and responsibility to make these technology frameworks unusable for certain applications. Do you think this effort could be possible, given the resources and the amount of knowledge we have? And, more importantly, I would be curious about your perspective on whether countries have to collaborate for such an effort to be effective, or whether it should be based on incentive models inside countries that contribute to the whole community. KAHN: Awesome. I think all of the above. Right now, because there’s relatively little understanding of how these systems work, a lot of it is private companies self-regulating, which I think is a necessary component. But there are also now indications of efforts to work with governments on things like confidence-building measures and other mechanisms to better understand these systems and to develop transparency measures, testing and evaluation, and other guardrails against misuse. There are different layers to this, and all of them are correct and all of them are necessary. For the specific applications themselves, there needs to be an element of regulation. At some point there needs to be a user agreement as well, when companies are selling technologies and capabilities, where buyers agree to abide by the terms—you sign the terms of use, right? And then there are, of course, export controls that can be put on, where you allow the commercial side but make the system itself incompatible with other kinds of systems that would make it dangerous. But there’s also definitely room, and a necessary space, for interstate collaboration on some of these issues. For example, when you introduce artificial intelligence into military systems, you make them faster. You make the decision-making process a lot speedier, basically, so the individual has to make quicker decisions. And when you introduce artificial intelligence into increasingly complex systems, you create the possibility for accidents to snowball, where one little decision can have a huge impact and end up in a mistake. And—God forbid—imagine that in a battlefield context. Let’s say the adversary says, well, you intentionally shot down XYZ plane, and the individual says, no, it was an automated malfunction and we had an AI in charge of it. Who, in fact, is responsible now? If it was not an individual, the blame shifts up the pipeline. So you have problems like these—that’s just one example—where increasingly automated systems and artificial intelligence shift how dynamics play out, especially in accidents, which traditionally require a lot of visibility. And these technologies are not so visible, not so transparent. You don’t really get to see how they work or understand how they think in the way you can when you press a button and see the causality of that chain reaction.
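One way to picture the accountability problem described here—and the human-in-the-loop control that comes up next—is a gate where software may only recommend, a named operator must authorize, and every decision is logged so the causal chain stays visible after the fact. The sketch below is entirely hypothetical; the names, fields, and thresholds are illustrative, not drawn from any real system.

```python
# A hypothetical sketch of keeping a human decision in the causal chain:
# software recommends, a named operator authorizes, and the decision is
# logged so responsibility remains visible afterward.
import datetime
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    model_confidence: float  # 0.0 - 1.0

audit_log: list[dict] = []

def authorize(rec: Recommendation, operator: str) -> bool:
    """Block until the named human operator approves or declines."""
    answer = input(f"{operator}, approve '{rec.action}' "
                   f"(confidence {rec.model_confidence:.0%})? [y/N] ")
    approved = answer.strip().lower() == "y"
    # Record who decided what, and when, for later accountability.
    audit_log.append({
        "time": datetime.datetime.utcnow().isoformat(),
        "operator": operator,
        "action": rec.action,
        "approved": approved,
    })
    return approved

if __name__ == "__main__":
    rec = Recommendation(action="track unidentified object",
                         model_confidence=0.87)
    if authorize(rec, operator="Lt. Example"):
        print("Executing:", rec.action)
    else:
        print("Declined; no action taken.")
```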
And so I think there is very much a need, because of that, for even adversaries—not just allies—to agree on how certain weapons will be used, and that’s why there’s this space for confidence-building measures. A really simple example that everyone already agrees on is to have a human in the loop—human control—as we eventually use artificial intelligence and automated systems increasingly in the nuclear context, with nuclear weapons. I think everyone’s on board with that. And so those are the kinds of building-block agreements and norms that can be established and that need to take place now, before these technologies really start to be used. That will be essential to avoiding the worst-case scenarios in the future. FASKIANOS: Great. Thank you. I’m going to take the next question—a written question—from Alexander Beck, an undergraduate at UC Berkeley. In the context of the military innovation literature, what organizational characteristics or variables have the greatest effect on adoption and implementation, respectively? KAHN: Absolutely. I’m not an organizational expert. However, I’ll say, as before, I think that’s shifting, at least from the United States perspective. For example, when the Joint Artificial Intelligence Center was created, the best advice was to create separate organizations that had the capability to enact their own agendas, and to create separate programs for all of these technologies to best foster growth. And that worked for a while. The JAIC was really great at promoting artificial intelligence and raising it to a level of preeminence in the United States—a lot of early success in raising awareness, et cetera. But there was a little bit of confusion and concern over the summer when they established the Chief Digital and Artificial Intelligence Office—excuse me, a lot of acronyms—because it subsumed the JAIC. There was a lot of worry about that: they just established this great organization in 2019, and now they’re redoing it. And so I think they realized that as the technology develops, organizational structures need to develop and change as well. In the beginning, artificial intelligence was seen as its own kind of microcosm. But because it’s a general-purpose, enabling technology, it touches a lot more, and so it needs to be thought of more broadly, rather than just, OK, here’s our AI project. You need to integrate it and situate it next to necessary preconditions, like the food for AI, which is data. So they reorganized, ideally, to do that: they integrated it with research and engineering, which is the arm of the Defense Department that funds basic research, and brought in people who understand policy as well. So they have all of these different arms now within this broader organization. And so there are shifts in the literature, I think, and there are different best cases for different kinds of technologies, though I’m not as familiar with where the literature is going now. But the thinking has shifted, I think, even from 2018 to 2022. FASKIANOS: Thanks. We’re going to go next to Harold Schmitz. Q: Hey, guys. Great, great talk.
I wanted to get your thoughts on AlphaFold, RoseTTAFold, DeepMind, and biological warfare and synthetic biology—that sort of area. Thank you. KAHN: Of course. I— Q: And, by the way—sorry—I should say I’m with the University of California, Davis, School of Management, and also a general partner with the March Group. Thank you. KAHN: So I’m really not that familiar with the bio elements. I know it’s an area of increasing interest. But, at least in my research, taking a step back, it was hard enough to get people within the defense sector to acknowledge artificial intelligence. So I haven’t seen much of it in the debate recently, just because a lot of the defense innovation strategy, at least in the Biden administration, is focused directly on addressing the pacing challenge of China. They’ve mentioned biowarfare and biotechnology, as well as nanotechnology, et cetera, but not in as comprehensive a way as artificial intelligence and quantum, so I’m not really able to answer your question. I’m sorry. FASKIANOS: Thank you. I’ll go next to Alex, who has raised his hand—and you’ll have to give us your last name and identify yourself. Q: Hi. Yes. Thank you. I’m Alex Grigor. I just completed my PhD at the University of Cambridge. My research looks specifically at U.S. cyber-warfare and cybersecurity capabilities, and in my interviews with a lot of people in the defense industry, their number-one complaint, I suppose, was just not getting graduates applying to them the way they had hoped in the past. If we think back to ARPANET and all the amazing innovations that have come out of the internet and out of defense, do you see a return to that? Or do you see us now looking very much to procure from private industry, and what might that recruitment process look like? They cited security clearances as one big impediment, but what else do you think could be done differently there? KAHN: Yeah, absolutely. I think security clearances and all the bureaucratic things are a challenge, but even assuming an individual wants to work there: right now, if you’re working in STEM and you want to do research, having two years in government as a civilian—working in the Pentagon, for example—doesn’t necessarily allow you to jump back into the private sector and academia the way other jobs do. So a big challenge is making it possible, through various mechanisms, for government service to be a reasonable goal—not necessarily a whole career in government, but allowing people to come and go. I think that will be a significant challenge, and it’s connected to the ability to contribute to the research that we spoke about earlier. The National Security Commission has a whole strategy that it outlined on this. I’ve seen piecemeal efforts to overcome it, but no broad, sweeping reform as suggested by the report. I recommend reading it—it’s about five hundred pages long, but there’s a great section on the talent deficit. But, yeah, I think that will definitely be a challenge. I think cyber is facing that challenge.
I just think anything that touches STEM in general faces it, especially because the AI and particularly the machine-learning talent pool is global, and so states, interestingly, are fighting over this talent pool. I’ve done research previously at the University of Oxford that looked at the immigration preferences of researchers and where they move, and a lot of them are Chinese nationals studying in the United States—and they stay here, they move, et cetera. A lot of it is actually also immigration and visas. Other countries—China specifically—have created special visas for STEM graduates. Europe has done so as well. And so that will be another element at play; there are a lot of these efforts to attract more talent. Again, one of the steps that was tried was the Quad Fellowship established through the Indo-Pacific strategy, but that’s only going to be for a hundred students, and so there needs to be a broader effort to facilitate the flow of experts into government. To your other point—whether it’s going to look the way it does now, with the private sector driving the bus—I think it will for the time being, unless DARPA and the defense agencies’ research arms and DOD change the acquisitions process and, again, are able to get that talent. If something changes, then I think defense will again be able to contribute in the way it has in the past. I think that’s important, too. There were breakthroughs in cryptography, and, again, the internet all came from defense initially. I think it would be really sad if that were no longer the case, especially as right now we’re talking about being able to cross that bridge and work with the private sector, which will be necessary. I hope it doesn’t go so far that defense becomes entirely reliant on the private sector, because DOD will need to be self-sufficient—it’s another ecosystem for generating research and applications, and not all problems can be addressed by commercial applications. Defense and militaries face a very particular problem set. Right now there’s a bit of a push—OK, we need to work better with the private sector—but hopefully, overall, as things move forward, it will balance out again. FASKIANOS: Lauren, do you know how much money DOD is allocating toward this in the overall budget? KAHN: Off the top of my head, I don’t know. It’s a few billion—I’d have to look it up. In the 2023 budget request there was the highest amount ever requested for research and engineering and testing and evaluation. I think it was—oh, gosh—a couple hundred million dollars more, a huge increase from the last year. So it’s an increasing priority, but I don’t have the specific numbers. People talk about China funding more; I think it’s about the same. But it’s increasing steadily across the board. FASKIANOS: Great. So I’m going to give the final question to Darrin Frye, an associate professor at Joint Special Operations University in the Department of Strategic Intelligence and Emergent Technologies, and his is a practical question.
In managing this type of career, how do you structure your time between researching and learning about the intricacies of complex technologies, such as quantum entanglement or nano-neurotechnologies, and informing leadership and interested parties about the anticipated impact of emergent technologies on the future military operational environment? And maybe you can throw in why you went into this field and why you settled on it, too. KAHN: Yeah. I love this question. I have always been interested in the militarization of science and how wars are fought, because it allows you to study a lot of different elements, and it’s very interesting working at the intersection. Broadly speaking, a lot of the problems the world is going to face moving forward are large, transnational problems that will require academia, industry, and government to work on together—climate change, all of these emerging technologies, global health, as we’ve seen over the past few years. And so it’s a little bit of striking a balance. I came from a political science and international relations background, and I did want to talk about the big picture, and there are individuals working on these problems who recognize them. But in doing that, I noticed that I was speaking a lot about artificial intelligence and emerging technologies without coming from an engineering background. So I, personally, am doing a master’s in computer science right now at Penn to shore up those deficiencies and gaps in my knowledge. I can’t learn everything—I can’t be a quantum expert and an AI expert. But having the baseline understanding and taking a few of those courses regularly has meant that when a new technology shows up, I know how to learn about it, which has been very helpful—speaking both languages, so to speak. I don’t think anyone’s going to master one, let alone both. But it will be increasingly important to spend time learning about how these things work, and just getting a background in coding can’t hurt. It’s definitely something you need to balance. I would say I’m balanced more toward the broader implications, since talking at too technical a level doesn’t necessarily help people without that background get into the nitty-gritty—it can get jargony very quickly, as I’m sure you noticed even listening to me. So there’s a benefit to learning about the technology, but also to making sure you don’t get too far into the weeds. There’s a lot of space for people who understand both, who can bring in the real experts—on quantum entanglement or nanotechnology, for example—when they’re needed, so they can speak to people in a policy setting. So there definitely is room for intermediaries—policy experts who sit in between—and then, of course, for the highly specialized expertise, which is definitely important. It’s hard to balance, but it’s very fun as well, because you get to learn a lot of new things. FASKIANOS: Wonderful. Well, with that we are out of time.
I’m sorry that we couldn’t get to all the written questions and the raised hands. But, Lauren Kahn, thank you very much for this hour, and to all of you for your great questions and comments. You can follow Lauren on Twitter at @Lauren_A_Kahn and, of course, go to CFR.org for op-eds, blogs, and insight and analysis. The last academic webinar of this semester will be on Wednesday, November 16, at 1:00 p.m. (EST). We are going to be talking with Susan Hayward, who is at Harvard University, about religious literacy in international affairs. So, again, I hope you will all join us then. Lauren, thank you very much. And I want to encourage the students and professors on this call to look into our paid internships and fellowships; you can go to CFR.org/careers for information on both tracks. Follow us at @CFR_Academic and visit, again, CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. So thank you all, again. Thank you, Lauren. Have a great day. KAHN: Thank you so much. Take care. FASKIANOS: Take care.
  • Technology and Innovation
    The Robots Are Coming: AI Replaces Line Judges at the U.S. Open, With Global Implications for Jobs
    Sports reflect a societal trend of increasing automation. Policymakers should wrestle with the impact that autonomous technological development will have on the workforce, and ensure that marginalized groups are not left behind.
  • Technology and Innovation
    Artificial Intelligence and Democratic Values: Next Steps for the United States
China and the European Union have both moved to create comprehensive artificial intelligence policy. U.S. policymakers should move forward with the AI Bill of Rights to keep pace.
  • Technology and Innovation
    The Importance of International Norms in Artificial Intelligence Ethics
    Artificial intelligence has arrived as a multi-purpose tool. The United States and its allies need to do more to establish norms and ensure AI is used in a way that does not harm human rights.
  • Technology and Innovation
    Artificial Intelligence’s Environmental Costs and Promise
    Artificial intelligence has been cited as a potential tool for combatting climate change. Technology companies, however, need to do more to prevent AI from becoming a source of environmental degradation.
  • Cybersecurity
    Cyber Week in Review: June 24, 2022
    TikTok user data repeatedly accessed from China; China plans to increase social media censorship; U.S. law targeting forced labor goes into effect; Microsoft limits facial recognition sales; Cyberattack delays Putin's speech.
  • Technology and Innovation
    Stop the “Stop the Killer Robot” Debate: Why We Need Artificial Intelligence in Future Battlefields
    Efforts to ban military use of artificial intelligence internationally are built on erroneous assumptions and would have an adverse effect on the ability of law-abiding nations to defend themselves.
  • Technology and Innovation
    The Future of the Quad’s Technology Cooperation Hangs in the Balance
The United States is a major collaborator on artificial intelligence (AI) research with other members of the Quad, but, according to a new report, research collaboration on AI among the other members is lacking.