Artificial Intelligence (AI)

  • Artificial Intelligence (AI)
    The President’s Inbox Recap: AI’s Impact on the 2024 U.S. Elections
    Expect the weaponization of artificial intelligence to further stress American democracy.
  • Cybersecurity
    Cyber Week in Review: September 15, 2023
    FTC says X violated consent decree; Hackers shut down MGM systems; Senators host AI summit; U.S. appeals court invalidates most of disinformation injunction; Advisory group pushes back on Global Digital Compact.
  • Religion
    Social Justice Webinar: Religion and AI
    Josh Franklin, senior rabbi at the Jewish Center of the Hamptons, and Noreen Herzfeld, professor of theology and computer science at the College of Saint Benedict and Saint John’s University, discuss how AI is affecting religious communities and the relationship between science, technology, and religion. Johana Bhuiyan, senior tech reporter and editor for the Guardian, moderated.  Learn more about CFR's Religion and Foreign Policy Program. FASKIANOS: Welcome to the Council on Foreign Relations Social Justice Webinar Series, hosted by the Religion and Foreign Policy Program. This series explores social justice issues and how they shape policy at home and abroad through discourse with members of the faith community. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR. As a reminder, this webinar is on the record and the video and transcript will be available on CFR’s websites, CFR.org, and on the Apple podcast channel Religion and Foreign Policy. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Johana Bhuiyan with us to moderate today’s discussion on religion and AI. Johana Bhuiyan is the senior tech reporter and editor at the Guardian, where she focuses on the surveillance of disenfranchised groups. She has been reporting on tech and media since 2013 and previously worked at the L.A. Times, Vox Media, Buzzfeed News and Politico New York. And she attended Lehigh University where she studied journalism as well as global and religion studies. She’s going to introduce our panelists, have the discussion, and then we’re going to invite all of you to ask your questions and share your comments. So thank you, Johana. Over to you. BHUIYAN: Thank you so much, Irina. Thank you, everyone, for joining us. As Irina said, my name is Johana Bhuiyan, and I cover all the ways tech companies infringe on your civil liberties. And so today we’ll be talking about a topic that’s not completely unrelated to that but is a little bit of a tangent. But we’re talking about “Religion and AI.” And AI is unfortunately a term that suffers from both being loosely defined and often misused. And so I kind of want to be a little bit specific before we begin. For the most part my feeling is this conversation will focus on a lot of generative AI tools and the way that these play a role in religious communities and play a role for faith leaders, and some of the issues and concerns with that. That being said, if the conversation goes in that direction, I will take it there. I would love to also touch on sort of the religious communities’ roles in thinking about and combating the harms of other forms of AI as well. But again, we’ll be focusing largely on generative AI. And today with us we have two really wonderful panelists who come from various perspectives on this. Both are really well-versed in both theology, of course, as well as artificial intelligence and computer science. First, we have Rabbi Josh Franklin, who wrote a sermon with ChatGPT that you may have read in news articles, including one of mine. He is a senior rabbi at the Jewish Center of the Hamptons in East Hampton, and he co-writes a bimonthly column in Dan’s Papers called “Hamptons Soul,” which discusses issues of spirituality and justice in the Hamptons. 
He received his ordination at Hebrew Union College and was the recipient of the Daniel and Bonnie Tisch Fellowship, a rabbinical program exploring congregational studies, personal theology, and contemporary religion in North America. And we also have Noreen Herzfeld, who most recently published a book titled The Artifice of Intelligence: Divine and Human Relationship in a Robotic World. That was published by Fortress, so go out and get a copy. She is the Nicholas and Bernice Reuter professor of science and religion at St. John’s University and the College of St. Benedict, where she teaches courses on the intersection of religion and technology. Dr. Herzfeld holds degrees in computer science and mathematics from Pennsylvania State University and a PhD in theology from the Graduate Theological Union in Berkeley. Thank you both so much for having this conversation with me. FRANKLIN: Thank you for having us. BHUIYAN: I do want to set the stage a little bit. I don’t want to assume anyone has a very thorough knowledge of all the ways AI has sort of seeped into our religious communities. And, in particular people when they think of ChatGPT and other chatbots like that, they’re not necessarily thinking of, OK, well, how is it used in a sermon? And how is it used in a mosque? Or how is it used in this temple? So, we’ve had the one-off situations like, Rabbi Franklin, your sermon. But I think it’d be great to get an idea of how else you’ve been seeing chatbot and other—ChatGPT and other chatbots being used in both of your respective worlds and communities. One example I can give before I turn it over is that there was a very short-lived chatbot called HadithGPT, which purportedly would answer questions about Islam based on Hadiths, which are the life and sayings of the Prophet, peace be upon him. But immediately the community was like, one, this is really antithetical to the rich, scholarly tradition of Islam. Two, the questions that people might be asking can’t only be answered by Hadiths. And, three, chatbots are not very good at being accurate. And so the people behind it immediately shut it down. I want to turn it over to, Rabbi Franklin, you first. Is there a version of HadithGPT in the Jewish community? Are you still using ChatGPT to write sermons? Or what other use cases are you seeing? FRANKLIN: I actually did see a version of some kind of parallel within the Jewish world to HadithGPT. It was RabbiGPT, something along those lines. But actually, Google has done a great job already for years answering very trivial questions about Judaism. So if you want to know, where does this particular quote come from in the Torah, and you type it into Google, and you get the answer. And if you want to know how many times you shake the lulav, this traditional plant that we shake on Sukkot, you can find that on Google. ChatGPT, the same in terms of purveying information and actually generating trivial content or answering trivial questions, yeah. That far surpasses any rabbi’s ability, really. It’s a dictionary or encyclopedia of information. But religion goes far beyond answering simple questions. We’re asking major questions, ultimate questions about the nature of life, that I don’t think artificial intelligence is quite there yet. But when you get into the philosophical, the ethical, the moral, the emotional, that’s when you start to see the breakdown in terms of the capabilities of how far artificial intelligence can really answer these kinds of questions. BHUIYAN: Right.
And I do want to come back to that, but I first want to go to Noreen. I mentioned that the immediate reaction to HadithGPT was, OK, this is antithetical to the scholarly tradition within Islam. But is there actually a way that religious scholars and religious researchers, or people who are actually trying to advance their knowledge about a particular faith, are using ChatGPT and other chatbots to actually do that in a useful and maybe not scary and harmful way? (Laughs.) HERZFELD: Well, I’m in academia. And so, of course, ChatGPT has been a big issue among professors as we think about, are our students going to be using this to do their assignments? And there’s a lot of disagreement on whether it makes any sense to use it or not. I think right now, there’s some agreement that the programs can be helpful in the initial stages. So if you’re just brainstorming about a topic, whether you’re writing an academic paper, or writing a homily, or even preparing for, let’s say, a church youth group or something, it can be helpful if you say, give me some ideas about this topic, or give me some ideas for this meeting that we’re going to have. But when it comes to a more finished product, that’s the point where people are saying, wow, now you have to really be careful. Within the Christian tradition there are now generative AI programs that supposedly explicate certain verses or pericopes in the Bible. But they tend to go off on tangents. Because they work stochastically in just deciding what word or phrase should come next, they’ll attribute things to being in the Bible that aren’t there. And so, right now I think we have to warn people to be extremely careful. There have been earlier AIs. Like Germany had a robot called BlessU-2. And if someone asked it for a prayer about a particular situation, it would generate a prayer. If someone asked it for a Bible verse that might fit a particular setting, it actually would come out with a real Bible verse. But I think a lot of people—and this goes back to something Josh said, or something that you said about the Hadith—the Christian tradition is an extremely embodied tradition. When you go to mass, you eat bread, you drink wine, you smell incense, you bow down and stand up. The whole body is a part of the worship. And that’s an area that AI, as something that is disembodied, that’s only dealing with words, it can’t catch the fullness. I think one would find the same thing in Muslim tradition, where you’re prostrating yourself, you’re looking to the right and the left. It's all involving the whole person, not just the mental part. FRANKLIN: Yeah, I’d phrase some of that a little bit differently in terms of the biggest lacking thing about AI is definitely the sense of spirituality that AI can generate. And I think part of the reason that is, is that spirituality has to do with feeling more than it does data. Whereas AI can think rationally, can think in terms of data, and it can actually give you pseudo-conclusions that might sound spiritual, at the end of the day spirituality is something that is really about ineffability. That is, you can’t use words to describe it. So when you have a language model or generative language model that’s trying to describe something that’s really a feeling, that’s really emotional, that’s really a part of the human experience, even the best poets struggle with this. So maybe AI will get better at trying to describe something that, up until now, has very much been about emotion and feeling. 
But at the end of the day, I really don’t think that artificial intelligence can understand spirituality nor describe spirituality. And it definitely can’t understand it, because one of the things that AI lacks is the ability to feel. It can recognize emotion. And it can do a better job at recognizing emotion than, I think, humans can, especially in terms of cameras, being able to recognize facial expressions. Humans are notoriously bad at that. Artificial intelligence is very good at that. So it can understand what you might be feeling, but it can’t feel it with you. And that’s what genuine empathy is. That’s what religion is at its best, where it’s able to empathize with people within the community and be in sacred encounter and relationships with them. And although AI can synthesize a lot of these things that are extraordinarily meaningful for human encounter and experience, it’s not really doing the job of capturing the meat of it, of capturing really where religion and spirituality excel. BHUIYAN: Can I— HERZFELD: I’m sorry, but to underline the importance of emotion, when people talk about having a relationship with an AI, and especially expecting in the future to have close relationships with an AI, I often ask them: Well, would you like to have a relationship with a sociopath? And they’re like, well, no. And I said, but that’s what you’re going to get. Because the AI might do a good job of—you know, as Josh pointed out, it can recognize an emotion. And it can display an emotion if it’s a robot, or if there’s, let’s say, an avatar on a screen. But it doesn’t ever feel an emotion. And when we have people who don’t feel an emotion but might mentally think, oh, but what is the right thing to do in this situation, we often call those people sociopaths. Because they just don’t have the same empathetic circuit to feel your pain, to know what you’re going through. And coming back to embodiment, so often in that kind of situation what we need is a touch, or a hug, or just someone to sit with us. We don’t need words. And words are all the generative AI has. FRANKLIN: I would agree with you like 99.9 percent. There’s this great scene in Sherry Turkle’s book, Alone Together. I don’t know if you read it. HERZFELD: Yes. FRANKLIN: She talks about this nursing home where they have this experimental—some kind of a pet that would just kind of sit with you. It was a robotic pet that would just make certain sounds that would be comforting, that a pet would make. And that people found it so comforting. They felt like they had someone to listen to, that was responding to what they were saying, although it really wasn’t. It was synthetic. And Sherry Turkle, who’s this big person in the tech world, it automatically kind of transformed her whole perspective on what was going on in such an encounter. And she transformed her perspective on technology based on this one little scene that she saw in this nursing home. Because it was sociopathic, right? This doesn’t have actual emotion. It’s faking it, and you can’t be in legitimate relationship with something that isn’t able to reciprocate emotion. It might seem like it. And I know, Noreen, I asked you a question a little earlier—before we got started with this—about Martin Buber, who I do want to bring up. Martin Buber wrote this book exactly 100 years ago, I and Thou, which at the time really wasn’t all that influential, but became very influential in the field of philosophy. And Martin Buber talks about encounter that we have with other individuals.
He says most of our transactions that we have between two people are just that, transactional. You go to the store, you buy something, you give them cash, they give you money back, and you leave. But that’s an I-it encounter. That person is a means to an end. But when you’re really engaged with another human being in relationship, there’s something divine, something profound that’s happening. And he says, through that encounter, you experience God, or that spark that’s within that encounter, that’s God. And I have changed my tune during the age of COVID and being so much on Zoom, to say that, actually, I do believe you can have an encounter with another individual on Zoom. That was a stretch for me. I used to think no, no, you can’t do that, unless you have that touch, you have that presence, that physical presence, maybe even through some kind of being with another human being. But in terms of having encounter with artificial intelligence, no matter how much it might be able to synthesize the correct response, it can’t actually be present because it’s not conscious. And that’s a major limitation in terms of our ability to develop relationships or any kind of encounter with something that’s less than human. HERZFELD: Yeah. It seems to fake consciousness, but it doesn’t actually have the real thing. The Swiss theologian Karl Barth said that to have a truly authentic relationship you need four things. And those were to look the other in the eye, to speak to and hear the other, to aid the other, and to do it gladly. And the interesting thing about those four, I mean, to look the other in the eye, that doesn’t mean that a blind person cannot have an authentic relationship. But it is to recognize the other is fully other and to recognize them as fully present. To speak to and hear the other, well, you know, AI is actually pretty good at that. And to aid the other—computers aid us all the time. They do a lot of good things. But then you get to the last one, to do it gladly. And I think there is the real crux of the matter, because to do it gladly you need three things. You need consciousness, you need free will, and you need emotion. And those three things are the three things that AI really lacks. So far, we do not have a conscious AI. When it comes to free will, well, how free really is a computer to do what it’s programmed to do. And then can it do anything gladly? Well, we’ve already talked about it not having emotion. So it cannot fulfill that last category. FRANKLIN: Yeah, it does it almost so well. And I really say “almost.” We really do confuse intelligence and consciousness quite often. In fact, AI can accomplish a lot of the tasks that we accomplish emotionally through algorithms. Now it’s kind of like a submarine can go underwater without gills, but it’s not a fish. It’s accomplishing the same thing but it’s not really the same thing. It’s not living. It doesn’t have anything within it that enables us to be in relationship with it. And that is—yeah, I love that—those four criteria that you mentioned. Those are really great and helpful. HERZFELD: And you just mentioned that it’s not living. When you were talking about the pet in the nursing home, I was thinking, well, there are degrees of relationality. I can be soothed by a beautiful bouquet that somebody brings if I’m in the hospital, let’s say, just looking at the flowers. And certainly everyone knows now that we lower our blood pressure if we have a pet, a cat or a dog, that we can stroke. 
And yet, I feel like I have a certain degree of relationship with my dog that I certainly don’t have with the flowers in my garden, because the dog responds. And sometimes the dog doesn’t do what I tell her to. She has free will. There’s another story in that same book by Sherry Turkle where instead of giving the patient in the nursing home this robotic seal, they give them a very authentic-looking robotic baby. And what was really sad in that story was that one of the women so took to this robotic baby, and to cradling it and taking care of it, that she ignored her own grandchild who had come to visit her. And Sherry Turkle said at that point she felt like we had really failed. We had failed both the grandchild and the grandmother. And that’s where I think we fail. One of the questions that keeps bedeviling me is what are we really looking for when we look for AI? Are we looking for a tool or are we looking for a partner? In the Christian tradition, St. Augustine said, “Lord, you have made us for yourself and our hearts are restless until they rest in you.” I think that we are made to want to be in relationship, deep relationship, with someone other to ourselves, someone that is not human. But as we live in a society where we increasingly don’t believe in God, don’t believe in angels, don’t believe in the presence of the saints, we’re looking for a way to fill that gap. And I think for many people who are not religious, they’re looking towards AI to somehow fill this need to be in an authentic relationship with an other. BHUIYAN: And we’re talking a lot about sort of that human connection. And, Noreen, you said this in your book, that AI is an incomplete partner and a terrible surrogate for other humans. And it sounds like both of you agree that there is not a world where AI, in whatever form, could sufficiently replace—or even come close to replacing that human connection. But on a practical note Rabbi Franklin, you mentioned Rabbi Google. You know, a lot of faith practices are incredibly, to reuse the word, practice-centric, right? That that is the building block of the spirituality. Within the Muslim community, of course, right, the five daily prayers. There’s a version of this in many different faith practices. And so if people are seeking answers about the practical aspect of their spirituality from a tool even if they’re thinking, yeah, this is a tool. Trust, but verify. If they’re seeking those answers from this tool that has a tendency to hallucinate or make mistakes, is there a risk that they will over-rely on this particular tool, and then that tool can create sort of a friction between them and the community? Because, I’ll admit it, as someone who practices a faith and also is well-versed in the issues with Google and the misinformation that it can surface, I will still Google a couple—(inaudible). I will turn to Google and be, like: How do I do this particular prayer? I haven’t done it in a very, very long time. And of course, I’m looking through and trying to make sure that the sources are correct. But not everyone is doing that. Not everyone is going through with a fine-tooth comb. And ChatGPT, given how almost magical it feels to a lot of people, there is even less of a likelihood that they will be questioning it. And it is getting more and more sophisticated. So it’s harder to question. 
So is there a concern within religious communities that this tool will become something that will create even one more obstacle between a person and their faith leader, or their clergy, or their local scholars? FRANKLIN: I don’t seem that worried about it. I think what synagogues and faith-based communities do is something that’s really irreplicable by ChatGPT. We create community. We create shared meaningful experience with other people. And there is a sense that you need physical presence in order to be able to do that. Having said that, yeah, I use ChatGPT as a tool. I think other people will use it too. And it will help a lot with how do you get the information that you need in a very quick, accessible way? Sometimes it’s wrong. Sometimes it makes mistakes. I’ll give you an example of that. I was asking ChatGPT, can you give me some Jewish texts from Jewish literature on forgiveness? And it gives me this text about the prodigal son. And I typed right back in, and I said: That’s not a Jewish text. That’s from the Gospels. And it says, oh, you’re right. I made a mistake. It is from the Gospels. It’s not a Jewish text. I actually thought the most human thing that it did in that whole encounter was admit that it was wrong. Maybe that’s a lack of human—because human beings have an inability often to admit that we were wrong, but I actually love the fact that it admitted, oh, I made a mistake, and it didn’t double down on its mistake. It’s learning and it’s going to get better. I think if we measure artificial intelligence by its current form, we’re really selling it short for what it is going to be and how intelligent it actually is. And, by the way, I think it is extraordinarily intelligent, probably more intelligent than any of us. But we have human qualities that artificial intelligence can never really possess. And I think the main one, which we already touched on, is the idea of consciousness. And I think the experiences that you get within a faith-based community are those experiences that specifically relate to human consciousness and not relate to human—not developing intelligence. People don’t come to synagogue to get information. I hope they go to ChatGPT or Google for that. That’s fine. People come to synagogue to feel something more within life, something beyond the trivial, something that they can’t get by reading the newspaper, that they can’t get by going on Google. It’s a sense of community, a sense of relationship. And so I don’t think that there can be a way that artificial intelligence is going to distract from that. Yeah, I guess it’s possible, but I’m not too worried about it. BHUIYAN: And—go ahead, Noreen, yeah. HERZFELD: I was just going to say, I think you need to be a little careful when you say it’s more intelligent than we are. Because there are so many different kinds of intelligence. FRANKLIN: Yes. IQ intelligence, let me qualify. HERZFELD: If intelligence is just having immediate access to a lot of facts, great, yeah. It’s got access we don’t have. But if intelligence is having, first of all, emotional intelligence, which we’ve already discussed. But also just having models of the world. This is often where these large language models break down, that they don’t have an interior model of the world and the way things work in the world, whether that’s the physical world or the social world. And so they’re brittle around the edges. 
If something hasn’t been discussed in the texts that has been trained on, it can’t extrapolate from some kind of a basic model, mental model that—which is the way we do things when we encounter something brand new. So, in that sense, it’s also lacking something that we have. BHUIYAN: There’s a question from the audience that I think is a good one, because it sounds to me, and correct me if I’m wrong, that, Noreen, you in particular believe that the doomsday scenario that people are always talking about, where AI becomes sentient, takes over, is more—we become subservient to AI, is unlikely. And, OK. And so the question from the audience is that, it seems like most of the arguments are, we can tell the difference so AI won’t replace human connection. But what happens if and when AI does pass the Turing test? Is that something that you see as a realistic scenario? HERZFELD: Oh, in a sense we could say AI has already passed the Turing test. If you give a person who isn’t aware that they’re conversing with ChatGPT sometime to converse with it, they might be fooled. Eventually ChatGPT will probably give them a wrong answer. But then, like Josh said, it’ll apologize and say, oh yeah, I was wrong. Sorry. So we could say that, in a sense, the Turing test has already been passed. I am not worried about the superintelligent being that’ll decide that it doesn’t need human beings, or whatever. But I’m worried about other things. I mean, I think in a way that that’s a red herring that distracts us from some of the things we really should be worried about. And that is that AI is a powerful tool that is going to be used by human beings to exert power over other human beings. Whether it’s by advertently or inadvertently building our biases into this tool so that the tool treats people in a different fashion. I’m also worried about autonomous weapons. They don’t need to be superintelligent to be very destructive. And a third thing that I’m worried about is climate change. And you might say, well, what has that got to do with AI? But these programs, like the large language models, like ChatGPT, take a great deal of power to train them. They take a great deal of power to use them. If you ask a simple question of ChatGPT instead of asking Google, you’re using five to ten times the electricity, probably generated by fossil fuels, to answer that question. So as we scale these models up, and as more and more people start using them more and more of the time, we are going to be using more and more of our physical resources to power it. And most of us don’t realize this, because we think, well, it all happens in the cloud. It’s all very clean, you know. This is not heavy industry. But it’s not. It’s happening on huge banks of servers. And just for an example, one of Microsoft’s new server farms in Washington state is using more energy per day than the entire county that it’s located in. So we just are not thinking about the cost that underlies using AI. It’s fine if just a few people are using it, or just using it occasionally. But if we expect to scale this up and use it all the time, we don’t have the resources to do that. BHUIYAN: Yeah, and you mentioned electricity. A couple of my coworkers have done stories about the general environmental impact. But it’s also water. A lot of these training models use quite a bit of water to power these machines. HERZFELD: To cool the machines, yeah. 
BHUIYAN: And so yeah, I’m glad that you brought that up, because that is something that I think about quite a bit, covering surveillance, right? Religious communities are this sort of, incredibly strong communities that can have a really huge social impact. And we’ve had various versions of AI for a very, very long time that have harmed some religious communities, other marginalized groups. You mentioned a couple of them. Surveillance is one of them. There’s also things that feel a little bit more innocuous but there’s bias and discrimination built into them like hiring algorithms, mortgage lending algorithms, algorithms to decide whether someone should qualify for bail or not. And so my general question is, is there a role that religious communities can play in trying to combat those harms. How much education should we be doing within our communities to make sure people are aware that it’s not just the fun quirky tool that will answer your innocuous question. AI is also powering a lot more harmful and very damaging tools as well. FRANKLIN: I’d love for religious leaders to be a part of the ethics committees that sit at the top of how AI decides certain decisions that are going to be a part of everyday real life. So, for example, when your self-driving car is driving down the road and a child jumps out in the middle of the street your car has to either swerve into oncoming traffic, killing the driver, or hit the child. Who’s going to decide how the car behaves, how the artificial intelligence behaves? I think ethics are going to be a huge role that human beings need to take in terms of training AI and I think religious leaders as well as ethicists, philosophers, really need to be at the head, not the lay leadership programmers or the lay programmers. Not the lay but they’re not really trained in ethics and philosophy and spirituality, for that matter, and religion. I really think that we need to be taking more of an active role in making sure that the ethical discussions of the programming of artificial intelligence have some kind of strong ethical basis because I think the biggest danger is who’s sitting in the driver’s seat. Not in the car scenario but, really, who’s sitting in the driver’s seat of the programming. BHUIYAN: Noreen, do you have anything to add onto that? HERZFELD: No, I very much agree with that. I do think that if we leave things up to the corporations that are building these programs the bottom line is going to be what they ultimately consult. I know that at least one car company—I believe it’s Mercedes-Benz—has publicly said that in the scenario that Josh gave the car is going to protect the driver. No matter how many children jump in front of the car the car will protect the driver and the real reason is that they feel like, well, who’s going to buy a car that wouldn’t protect the driver in every situation. If you had a choice between a car that would always protect the driver and a car that sometimes would say, no, those three kids are more valuable— FRANKLIN: And that’s a decision made by money, not made by ethics. HERZFELD: Exactly. FRANKLIN: Yeah. BHUIYAN: Right. Rabbi Franklin, I have a question. There’s a good follow-up in the audience. Are there ethics committees that you know of right now that are dealing with this issue, and then the question from the audience from Don Frew is how do we get those religious leaders into those committees. FRANKLIN: We have to be asked, in short, in order to be on those committees. 
I don’t know if it’s on the radar even of these corporations who are training AI models. But I think there are going to be very practical implications coming up in the very near future where we do need to be involved in ethical discussions. But there are religious leaders who sit on all sorts of different ethics committees but as far as I know there’s nothing that’s set up specifically related to AI. That doesn’t mean there isn’t. I just don’t know of any. But, if you were to ask me, right now we’ve seen articles about the decline of humanities in college and universities. I would actually say that humanities is—if I had to make a prediction is probably going to make a comeback because these ethical, philosophical, spiritual questions are going to be more relevant than ever, and if you’re looking at programming and law and the medical industry and medicine those are actually things where AI is going to be more aggressive and playing a larger role in doing the things that humans are able to do. BHUIYAN: Right. I do want to bring the question or the conversation back to, you know, religion, literally. In your book, Noreen, you bring up a question that I thought was just so fascinating, whether we should be deifying AI and it sounds like the short answer is no. But my fascination with it is how realistic of a risk is that, and I know there’s one example that I just knew off the top of my head was the Church of AI, which has been shut down and was started by a former Google self-driving engineer who was later pardoned for stealing trade secrets. His name is Anthony Levandowski. So, yeah, take what he says with a grain of salt, I guess is what I’m saying. But the church was created to be dedicated to, quote, “The realization, acceptance, and worship of a godhead based on AI developed through computer hardware and software.” Is this a fluke? Is this a one off? Do you think there’s, like, a real risk of as AI gets more sophisticated people will be sort of treating it as, like, a kind of god like, I don’t know, figure, if that’s the right word, but some sort of god? FRANKLIN: It sounds like a gimmick to me. I mean, look, it’s definitely going to capture the media headlines for sure. You do something new and novel like that no matter how ridiculous it is people are going to write about it, and it’s not surprising that it failed because it didn’t really have a lot of substance. At least I hope the answer is no, that that’s not going to be a real threat or that’s not going to be a major concern. Who knows? I mean, I really think that human beings are bad at predicting the future. Maybe AI will be better at predicting the future than we are. But my sense, for what it’s worth, is that no, that’s not really a concern. HERZFELD: Well, I would be a little more hesitant to say it’s not any type of a concern. I do not think there are going to be suddenly a lot of churches like the one you mentioned springing up in which people deify AI with the same sorts of ways in which we’ve worshipped God. But, we worship a lot of stuff. We worship money all too often. We worship power. And we can easily worship AI if we give it too much credence. If we really believe that everything it says is true, that what it does is the pinnacle of what human beings do and this is what worries me is that if we say, well, it’s all about intelligence, I’ve often thought, well, we’re trying to make something in our own image and what we’re trying to give it is intelligence. But is that the most important thing that human beings do? 
I think in each of our religious traditions we would say the most important thing that human beings do is love and that this is something that it can’t do. So my worry is that—because in some ways we’re more flexible than machines are and as the machines start to surround us more, as we start to interact with them more we’re going to, in a sense, make ourselves over in their image and in that way we are sort of deifying it because when we think about—in the Christian tradition we talk about deification as the process of growing in the image and likeness of God, and if instead we grow in the image and likeness of the computer that’s another way of deifying the computer. BHUIYAN: I want to turn it over to audience questions; there are some hands raised. So I want to make sure that we get some of them in here as well. OPERATOR: Thank you. We will take the next question from Rabbi Joe Charnes. CHARNES: I appreciate that there are potential benefits from AI. That’s simply undeniable. The question I have is and the concern that I have that I think you certainly both share and I don’t know the way around it is as humans we do often relate to human beings. That’s our goal in life. That’s our purpose. But human relationships are often messy and it’s easier to relate to disembodied entities or objects, and I see people in the religious world relating now through Zoom. Through their Zoom sessions they have church so they’re relating to church and God through a screen, and when you speak of ethics and spirituality, Rabbi, of somehow imposing that or placing that into this AI model I don’t see how you can do that and I do fear we lean—if there’s a way out of human connection but modeling human connection to some extent I do fear we’re going to really go in that direction because it’s less painful. FRANKLIN: So I’ll try to address that. There’s a great book that’s going to sound like it’s completely unrelated to this topic. It’s by Johann Hari and the book is called Chasing the Scream. What he argues is that, generally, addiction is not about being the opposite of sobriety. Addiction is about being disconnected from other individuals and using the substance or a thing as a proxy for a relationship that we have with other people. Love that idea. I think there is a huge danger that artificial intelligence can be just that, the proxy for human relationship when we’re lonely, when we’re disconnected from others, and it’s going to be the thing that we are going to turn to. I would even echo Noreen’s fear that we end up turning to AI in very inappropriate ways and making it almost idolatrous, that when we say deifying it what we’re really doing is idol worshipping AI as something that really won’t actually give you the connection even though you think that it will. I think that’s a very legitimate fear. Having said that, I think that AI is going to be a great tool for the future if it’s used as a tool. Yes, there are tremendous amount of dangers with new technology and newness. Every single new innovation, every single revolutionary change technologically has come with huge dangers and AI is no different. I hope we’re going to be able to figure out how to really put the correct restrictions on it, how to really make sure that the ethics of AI has involvement from spiritual leaders and ethicists and philosophers. Am I confident that we’ll be able to do that? I don’t know. I think we’re still at the very beginning stages of things and we’ll see how it develops. 
HERZFELD: Two areas that I worry about because these are areas that people are particularly looking at AI are the development of sex bots, which is happening, and the use of AI as caregivers either for children or for the elderly. But particularly for the elderly this is an area that people are looking at very strongly. I think for religious leaders the best thing that you can do is to try to make sure that the people in your congregation—to do everything you can to foster the relationships among the people because as Josh was saying, we’ll use this as a substitute if we don’t have the real thing. But if we are in good and close and caring relationships with other human beings then the computer will not be enticing as a substitute and we might merely use it as a tool or just not bother with it at all. So I think what we really need to do is tend to the fostering of those relationships and particularly for those that are marginalized in some ways, whether it’s the elderly, whether it’s parents with children, particularly single parents who might be needing help, and whether it’s those that are infirm in some way. OPERATOR: We will take our next question from Ani Zonneveld of Muslims for Progressive Values. ZONNEVELD: Hi. Good morning. Good afternoon. You had raised that question, Johana, about what are the faith communities doing or can contribute to a better aggregated response on AI and I just wanted to share that members of our community has been creating images of, for example, women leading prayer in Muslim communities. So that those are some of the aggregated information that could be filtered up into the way AI is being used as a tool. So I think, at the end of the day, the AI system works as an aggregate of pulling in information that’s already out there and I think it’s important for us in the faith communities to create the content itself from which the AI can pull, and that also overcomes some of the biases, particularly the patriarchal interpretations of faith traditions, for example, right? The other thing I wanted to also share with everyone is that there’s a real interest in it at the United Nations. That is being led by an ethics professor from the university in Zurich. I taught a master’s ethics class there as a person of faith and so there’s this international database system agency that is being created at the UN level. Just thought I would share that with everyone. Thanks. FRANKLIN: Thank you. HERZFELD: And I would also share that the Vatican is working on this as well. I am part of a committee that’s part of the dicastery of culture and education and we’ve just put together a book on AI and the Pope is going to be using his address on January 1 on the Day of World Peace to address AI as a topic. FRANKLIN: I’m pretty sure rabbis across the country right now are going to be writing sermons for tomorrow, which begins Rosh Hashanah, our high holiday season, and many rabbis—most rabbis, perhaps—are going to be preaching about AI. OPERATOR: We will take our next question from Shaik Ubaid from the Muslim Peace Coalition. UBAID: Thank you for the opportunity. Can you hear me? BHUIYAN: Yes. UBAID: Overall, we are sort of sort of putting down AI because it does not have the human qualities of empathy. But if instead of that we focus on using it as a tool whether in educating the congregations or jurisprudence then we would be using it. When it comes to the human quality, another quality is courage. We may have the empathy, but many times we do not show the courage. 
For example, we see pogroms going on in India and an impending genocide. But whether it be the—a (inaudible) chief or the chief rabbi of Israel or the Vatican, they do not say a word to Modi, at least publicly, to put pressure, and same with the governments in the West. And sometimes their mouthpieces in the U.S. are even allowed to come and speak at respectable fora, including sometimes even in CFR. So instead of expecting too much from the AI we should use it with its limitations and sometimes the bias and the arrogance that we show thinking that we are humans, of course, we are superior to any machine. But many times we fail ourselves. So if the machines are failing us that should not be too much of a factor. Thank you. FRANKLIN: Very well said. HERZFELD: Yeah. BHUIYAN: There are other audience questions that sort of build on that. We’re talking about humans having bias and our own thoughts sort of being a limiting factor for us. But, obviously, these machines and tools are being built by humans who have biases that may be and putting them into the training models. And so one of the questions or one of the topics that Frances Flannery brought up is the ways in which AI is circumventing our critical thinking. We talked about over reliance on these tools within the faith practice but is there—beyond that, right? We talked about AI when it comes to very practical things like these practices that we do. I understand it doesn’t replace the community and it doesn’t replace these spaces where we’re seeking community. But people are asking questions that are much more complex and are not trivial and are not just the fundamentals of the religion. Is there a concern with people using chat bots in place of questioning particular things or trying to get more knowledge about more complex topics? FRANKLIN: I would actually just kind of respond by saying that I don’t think AI circumvents critical thinking. I actually think it focuses us to think more critically, and by getting rid of the trivial things and the trivial data points and rational kind of stuff that AI can actually do and piece together and solve even just complex IQ-related issues it focuses us to think about more critical issues in terms of philosophy, in terms of faith and spirituality and theology, all things that I think AI might be able to parrot. But it can’t actually think creatively and original thoughts. So I actually think that AI gets rid of the dirty work, the summaries of what other people have said, maybe even generating two ideas together. But really true creativity, I think, is in the human domain and it’s going to force us to think more creatively. Maybe I’m just an optimist on that but that’s my sense. HERZFELD: And I’ll give the more pessimistic side, which is not to say—I mean, I believe that everything that Josh just said is correct. My concern is that we might end up using AI as a way to evade responsibility or liability. In other words, if decisions are made—Johana, you were talking earlier about how we use AI to decide who gets bail, who gets certain medical treatments, these things, and if we simply say, well, the computer made a decision and we don’t think critically about whether that was the right decision or whether the computer took all things into account I think we need to think about the same thing when we look at autonomous weapons, which are really coming down the pike, and that is how autonomous do we really want them to be. 
We can then, in a way, put some of the responsibility for mistakes that might be made on the battlefield onto the computer. But in what sense can we say a computer is truly responsible? So I do fear that as long as we use it as a component in our decision-making, which I think is what Josh was saying, this can be a powerful tool. But when we let it simply make the decision—and I’ve talked to generals who are worried about the fact that if we automate warfare too much the decision—the pace of warfare may get to be so fast that it’s too fast for human decision-makers to actually get in there and make real decisions and that’s a point where we’ve then abdicated something that is fully our responsibility and given it to the machine. FRANKLIN: Let’s not forget, though, how strong human biases are. I mean, read Daniel Kahneman’s book Thinking, Fast and Slow and you’ll see all these different heuristics for human bias that are unbelievable. Going to the realm of bail, there was a study that showed that judges who haven’t had their lunch yet are much more likely to reject bail than those who just came out of their lunch break. I mean, talk about biases that exist in terms of the ways that we make decisions. I would say that ultimately although there are biases that we implant within these algorithms that will affect the way that outcomes actually come out probably artificial intelligence and these algorithms are going to do a better job than human beings alone. Having said that, to echo Noreen, when we use them in tandem with human decision-making I think we get the best of both worlds. BHUIYAN: Right. I mean, there are so many examples. Forget warfare and other places. I mean, in policing it happens all the time, right? There’s facial recognition tools that are intended to be used as sort of a lead generator or something that—a tool in an investigation. But we’ve seen time and again that it’s being used as the only tool, the only piece of evidence that then leads to the arrest and false incarceration of many, often black, people. And, again, to both of your points, it’s because of the human biases that these AI tools, particularly when used alone, are unable to—I mean, they’re just going to do what the human was going to do, too—the human with the bias was going to do as well. And I have seen in my reporting that there are a lot of situations where police departments or other law enforcement agencies will kind of use that as an excuse just like you said, Noreen, or sort of, like, well, the computer said, and they validated our data so it must be right. So I do think that there’s a little bit of the escape of liability and responsibility as well. We don’t have a ton more time and, Noreen, you talked a little bit about some of your major fears. Rabbi Franklin, you’re a little bit more optimistic about this than maybe Noreen or even I am. I would like to hear what your great fears of this tool are. FRANKLIN: My biggest fear is that it’s going to force me to change and, look, I think that’s a good thing, ultimately, but change is always really scary. I think I’m going to be a different rabbi five years from now, ten years from now than I am right now and I think AI is going to be one of the largest reasons for that. I think it’s going to force me to hone certain abilities that I have and really abandon and rely on artificial intelligence for other ones. 
And even going back to the original thought experiment that involved me in this conversation to begin with, which was using AI to write a sermon or ChatGPT to write a sermon at the very beginning of its infancy of ChatGPT, really, what a sermon looks like is going to be profoundly different. And it was part of one of the points that I was making when I actually delivered that original sermon. The only thing that was scripted was the part that was written by AI. Everything else was a conversation, back and forth questioning, engagement with the community who was there. I think sermons are going to look more like that, more like these kind of conversations than they will scripted, written, and delivered words that come from a paper and are just spoken by a human being. Rabbis, preachers, imams, pastors, priests, are not going to be able to get away with that kind of homiletical approach. We’re going to have to really radically adapt and get better at being rabbis and clergy with different skill sets than we currently have, and that’s scary. But at the same time it’s exciting. BHUIYAN: And, Noreen, to end on a positive note, is there anything that you see that ChatGPT or other forms of generative AI or AI, broadly, what are some of the most positive ways that you see these tools being used in the future? HERZFELD: Well, we haven’t even mentioned tools that work with images, like DALL-E or Midjourney. But I think that those tools have sparked a new type of creativity in people, and I think if there’s a theme that goes through everything that the three of us have said today it’s a great tool, bad surrogate—that as long as we use this as a tool it can be a very good tool. But it’s when we try to use it as a complete replacement for human decision-making, for human courage, for human critical thinking, for human taking of responsibility, that we realize that just as we are flawed creatures we’ve created a flawed creature. But in each of our religious traditions I think we hold dear that what we need to do is love God and love each other and that we as religious people keep raising that up in a society that views things instrumentally. BHUIYAN: Thank you both. I am just going to turn it over to Irina now. FASKIANOS: Yes. Thank you all. This was a really provocative and insightful discussion. We really appreciate it. We encourage you to follow Rabbi Josh Franklin’s work on rabbijoshfranklin.com. Noreen Herzfeld is at @NoreenHerzfeld and Johana is at @JMBooyah—it’s B-O-O-Y-A-H—so on X, formerly known as Twitter. And, obviously, you can follow Johana’s work in the Guardian. Please, I commend Noreen’s book to you. And please do follow us on Twitter at @CFR_religion for announcements and other information. And please feel free to email us at [email protected] with suggestions for future topics and feedback. We always look forward to hearing from you and soliciting your suggestions. So, again, thank you all for this great conversation. We appreciate your giving us your time today and we wish you a good rest of the day.
  • United States
    AI’s Impact on the 2024 U.S. Elections, With Jessica Brandt
    Podcast
    Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, where she is a fellow in the Strobe Talbott Center for Security, Strategy, and Technology, sits down with James M. Lindsay to discuss how artificial intelligence might affect the 2024 U.S. elections.
  • Cybersecurity
    Cyber Week in Review: September 8, 2023
    OGP Summit meets in Estonia; Mudge joins CISA; UK will not break end-to-end encryption; EU designates gatekeepers under Digital Markets Act; DEF CON Generative Red Team Challenge concludes.
  • China
    President Biden Has Banned Some U.S. Investment in China. Here’s What to Know.
    The Joe Biden administration says the restrictions are directed at protecting national security, not stifling economic competition.
  • United States
    A Conversation With Representative Adam Schiff
    Representative Adam Schiff discusses Russia’s war in Ukraine, U.S.-China relations, the proliferation of artificial intelligence technologies, and emerging threats to the democratic process, including misinformation and deepfakes.
  • Artificial Intelligence (AI)
    Higher Education Webinar: Implications of Artificial Intelligence in Higher Education
    Pablo Molina, associate vice president of information technology and chief information security officer at Drexel University and adjunct professor at Georgetown University, leads the conversation on the implications of artificial intelligence in higher education. FASKIANOS: Welcome to CFR’s Higher Education Webinar. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR. Thank you for joining us. Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Pablo Molina with us to discuss implications of artificial intelligence in higher education. Dr. Molina is chief information security officer and associate vice president at Drexel University. He is also an adjunct professor at Georgetown University. Dr. Molina is the founder and executive director of the International Applied Ethics in Technology Association, which aims to raise awareness on ethical issues in technology. He regularly comments on stories about privacy, the ethics of tech companies, and laws related to technology and information management. And he’s received numerous awards relating to technology and serves on the board of the Electronic Privacy Information Center and the Center for AI and Digital Policy. So Dr. P, welcome. Thank you very much for being with us today. Obviously, AI is on the top of everyone’s mind, with ChatGPT coming out and being in the news, and so many other stories about what AI is going to—how it’s going to change the world. So I thought you could focus in specifically on how artificial intelligence will change and is influencing higher education, and what you’re seeing, the trends in your community. MOLINA: Irina, thank you very much for the opportunity, to the Council on Foreign Relations, to be here and express my views. Thank you, everybody, for taking time out of your busy schedules to listen to this. And hopefully, I’ll have the opportunity to learn much from your questions and answer some of them to the best of my ability. Well, since I’m a professor too, I like to start by giving you homework. And the homework is this: I do not know how much people know about artificial intelligence. In my opinion, anybody who has ever used ChatGPT considers herself or himself an expert. To some extent, you are, because you have used one of the first publicly available artificial intelligence tools out there and you know more than those who haven’t. So if you have used ChatGPT, or Google Bard, or other services, you already have a leg up to understand at least one aspect of artificial intelligence, known as generative artificial intelligence. Now, if you want to learn more about this, there’s a big textbook about this big. I’m not endorsing it. All I’m saying, for those people who are very curious, there are two great academics, Russell and Norvig. They’re in their fourth edition of a wonderful book that covers every aspect of—technical aspect of artificial intelligence, called Artificial Intelligence: A Modern Approach. And if you’re really interested in how artificial intelligence can impact higher education, I recommend a report by the U.S. Department of Education that was released earlier this year in Washington, DC from the Office of Educational Technology. It’s called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
So if you do all these things and you read all these things, you will hopefully transition from being whatever expert you were before—a pandemic and Ukraine war expert—to an artificial intelligence expert. So how do I think artificial intelligence is going to affect all of these wonderful things? Well, as human beings, we tend to overestimate the impact of technology in the short run and really underestimate the impact of technology in the long run. And I believe this is also the case with artificial intelligence. We’re in a moment where there’s a lot of hype about artificial intelligence. It will solve every problem under the sky. But it will also create the most catastrophic future and dystopia that we can imagine. And possibly neither one of these two is true, particularly if we regulate and use these technologies and develop them following some standard guidelines that we have followed in the past, for better or worse. So how is artificial intelligence affecting higher education? Well, number one, there is a great lack of regulation and legislation. So, for example, OpenAI released ChatGPT. People started trying it. And all of a sudden there were concerns in places like here, where I’m speaking to you from, in Italy. I’m in Rome on vacation right now. And the Italian data protection agency said: Listen, we’re concerned about the privacy of this tool for citizens of Italy. So the company agreed to establish some rules, some guidelines and guardrails on the tool. And then it reopened to the Italian public, after being closed for a while. The same thing happened with the Canadian data protection authorities. In the United States, well, not much has happened, except that one of the organizations on whose board I serve, the Center for Artificial Intelligence and Digital Policy, earlier this year, in March of 2023, filed a sixty-four-page complaint with the Federal Trade Commission, in which we’re basically asking the Federal Trade Commission: You do have the authority to investigate how these tools can affect U.S. consumers. Please do so, because this is your purview, and this is your responsibility. And we’re still waiting on the agency to declare what the next steps are going to be. If you look at other bodies of legislation or regulation on artificial intelligence that can help us guide artificial intelligence, well, you can certainly pay attention to the U.S. Congress. And what is the U.S. Congress doing? Yeah, pretty much just that—not much, to be honest. They listened to Sam Altman, the CEO of OpenAI, the company behind ChatGPT, who recently testified before Congress, urging Congress to regulate artificial intelligence. Which is quite clever on his part. So it was on May 17 that he testified that we could be facing catastrophic damage ahead if artificial intelligence technology is not regulated in time. He also sounded the alarm about counterfeit humans, meaning that these machines could replace what we think a person is, at least virtually. And he also warned about the end of factual evidence, because with artificial intelligence anything can be fabricated. Not only that, but he pointed out that artificial intelligence could start wars and destroy democracy. Certainly very, very grim predictions. And before this, many of the companies were self-regulating for artificial intelligence. If you look at Google, Microsoft, or Facebook—now Meta—all of them have their own artificial intelligence self-guiding principles. Most of them were very aspirational. 
Those could help us in higher education because, at the very least, they can help us create our own policies and guidelines for our community members—faculty, staff, students, researchers, administrators, partners, vendors, alumni—anybody who happens to interact with our institutions of higher learning. Now, what else is happening out there? Well, we have tons and tons of laws and regulations that have to do with technology. Things like the Gramm-Leach-Bliley Act, Securities and Exchange Commission rules, Sarbanes-Oxley; federal regulations like FISMA and the Cybersecurity Maturity Model Certification; the Payment Card Industry standards; the Computer Fraud and Abuse Act; the Budapest Convention; and cybersecurity insurance providers who will tell us what to do and what not to do about technology. We have state laws and many privacy laws. But, to be honest, very few artificial intelligence laws. And it’s groundbreaking in Europe that the European parliamentarians have agreed to discuss the Artificial Intelligence Act, which could be the first law of its kind to be passed at this level in the world, after some efforts by China and other countries, and which, if adopted, could be a landmark change in the governance of artificial intelligence. In the United States, even though Congress is not doing much, the White House is trying to position itself in the realm of artificial intelligence. So there’s an executive order from February of 2023—which many of us in higher education read because, once again, we’re trying to find inspiration for our own rules and regulations—that tells federal agencies that they have to root out bias in the design and use of new technologies, including artificial intelligence, because they have to protect the public from algorithmic discrimination. And we all believe this. In higher education, we believe in being fair and transparent and accountable. I would be surprised if any of us is not concerned about making sure that our technology use—our artificial intelligence use—follows these particular principles as proposed by the Organization for Economic Cooperation and Development and many other bodies of ethics and expertise. Now, the White House also announced new research and development centers, with some new national artificial intelligence research institutes. Many of us will collaborate with those in our research projects. It issued a call for public assessments of existing generative artificial intelligence systems, like ChatGPT. And it is also enacting policies to ensure that the U.S. government—the executive branch—is leading by example in mitigating artificial intelligence risks and harnessing artificial intelligence opportunities. Because, in spite of all the concerns about this, it’s all about the opportunities that we hope to achieve with artificial intelligence. And when we look at how specifically we can benefit from artificial intelligence in higher education, well, certainly we can start with new and modified academic offerings. Certainly, we already have graduate degrees in artificial intelligence, machine learning, and many related fields. But I would be surprised if we don’t also add some bachelor’s degrees in this field, or significantly modify some of our existing academic offerings to incorporate artificial intelligence into various specialties, our courses, or components of the courses that we teach our students. 
We’re looking at amazing research opportunities, things that we’ll be able to do with artificial intelligence that we couldn’t even think about before, that are going to expand our ability to generate new knowledge to contribute to society, with federal funding, with private funding. We’re looking at improved knowledge management, something that librarians are always very concerned about: the preservation and distribution of knowledge. The idea would be that artificial intelligence will help us better find the things that we’re looking for, the things that we need in order to conduct our academic work. We’re certainly looking at new and modified pedagogical approaches, new ways of learning and teaching, including the promise of adaptive learning, something that really can tell students: Hey, you’re not getting this particular concept. Why don’t you go back and study it in a different way, with a different virtual avatar, using simulations or virtual assistants—in almost every discipline and academic endeavor. We’re also looking at efficiencies, because we’re concerned about offering, you know, good value for the money when it comes to education. So we’re hoping to achieve extreme efficiencies: better ways to run admissions, better ways to guide students through their academic careers, better ways to coach them into professional opportunities. And much of this will be possible thanks to artificial intelligence. And also, let’s not forget this, but we still have many underserved students, and they’re underserved because they either cannot afford education or maybe they have physical or cognitive disabilities. And artificial intelligence can really help us reach those students and offer them new opportunities to advance their education and fulfill their academic and professional goals. And I think this is a good introduction. And I’d love to talk about all the things that can go wrong. I’d love to talk about all the things that we should be doing so that things don’t go as wrong as predicted. But I think this is a good way to set the stage for the discussion. FASKIANOS: Fantastic. Thank you so much. So we’re going to go to all of you now for your questions and comments, and to share best practices. (Gives queuing instructions.) All right. So I’m going first to a written question from Gabriel Doncel, adjunct faculty at the University of Delaware: How do we incentivize students to approach generative AI tools like ChatGPT for text in ways that emphasize critical thinking and analysis? MOLINA: I always like to start with a difficult question, so thank you very much, Gabriel Doncel, for that particular question. And, as you know, there are several approaches to adopting tools like ChatGPT on campus by students. One of them is to say: No, over my dead body. If you use ChatGPT, you’re cheating. Even if you cite ChatGPT, we can consider you to be cheating. And not only that, but some institutions have invested in tools that can detect whether or not something was written with ChatGPT or similar tools. There are other faculty members and other academic institutions that are realizing these tools will be available when these students join the workforce. So our job is to help them do the best that they can by using these particular tools, and to make sure they avoid some of the mishaps that have already happened. There are a number of lawyers who have used ChatGPT to file legal briefs. 
And when the judges received those briefs, and read through them, and looked at the citations, they realized that some of the citations were completely made up, were not real cases. Hence, the lawyers faced professional disciplinary action because they used the tool without the professional review that is required. So hopefully we’re going to educate our students and we’re going to set policy and guideline boundaries for them to use these, as well as, sometimes, the necessary technical controls for those students who may not be that ethically inclined to follow our guidelines and policies. But I think that to hide our heads in the sand and pretend that these tools are not out there for students to use would be a disservice to our institutions, to our students, and to the mission that we have of training the next generation of knowledge workers. FASKIANOS: Thank you. I’m going to go next to Meena Bose, who has a raised hand. Meena, if you can unmute yourself and identify yourself. Q: Thank you, Irina. Thank you for this very important talk. And my question is a little—(laughs)—it’s formative, but really I have been thinking about what you were saying about the role of AI in academic life—particularly for undergraduates, for admissions, advisement, guidance on curriculum. And I don’t want to have my head in the sand about this, as you just said—(laughs)—but it seems to me that any kind of meaningful interaction with students, particularly students who have not had any exposure to college before, depends upon multiple rounds of feedback with faculty members and the development of mentors, to excel in college and to consider opportunities after. So I’m struggling a little bit to see how AI can be instructive for that part of college life, beyond kind of providing information, I guess. But I guess the web does that already. So welcome your thoughts. Thank you. FASKIANOS: And Meena’s at Hofstra University. MOLINA: Thank you. You know, it’s a great question. And the idea that the artificial intelligence companies are proposing, at least at first—we’ll see in the future because, you know, it depends on how it’s regulated—is that they’re not trying, or so they claim, to replace doctors, or architects, or professors, or mentors, or administrators. They’re trying to help precisely those people in those professions, and the people they serve, gain access to more information. And you’re right in a sense that that information is already on the web. But we’ve always had a problem finding that information reliably on the web. And you may remember that when Google came along, I mean, it swept through every other search engine out there—AltaVista, Yahoo, and many others—because, you know, it had a very good search algorithm. And now we’re going to the next level. The next level is where you ask ChatGPT in natural human language. You’re not trying to combine the three keywords that say, OK, is the economics class required? No, no, you’re telling ChatGPT: hey, listen, I’m in the master’s in business administration at Drexel University and I’m trying to take more economics classes. What recommendations do you have for me? And this is where you can get a preliminary answer, along with a caveat that most of these generative AI engines already include, that tells you: We’re not here to replace the experts. Make sure you discuss your questions with the experts. We will not give you medical advice. We will not give you educational advice. 
We’re just here, to some extent, for guiding purposes and, even now, for experimental and entertainment purposes. So I think you are absolutely right that we have to be very judicious about how we use these tools to support the students. Now, that said, I had the privilege of working for public universities in the state of Connecticut when I was the CIO. I also had the opportunity early in my career to attend a public university in Europe, in Spain, where we were hundreds of students in class. We couldn’t get any attention from the faculty. There were no mentors, there were no counselors, or anybody else. Is it better to have nobody to help you, or is it better to have at least some technology guidance that can help you find the information that otherwise is spread throughout many different systems that are like ivory towers—admissions on one side, economics on the other, academic advising on another, and everything else? So thank you for a wonderful question and reflection. FASKIANOS: I’m going to take the next question, written from Dr. Russell Thomas, a senior lecturer in the Department of International Relations and Diplomatic Studies at Cavendish University in Uganda: What are the skills and competencies that higher education students and faculty need to develop to think in an AI-driven world? MOLINA: So we could argue here that something very similar has happened already with many information technologies and communication technologies. At first, faculty members did not want to use email, or the web, or many other tools because they were too busy with their disciplines. And rightly so. They were brilliant economists, or philosophers, or biologists. They didn’t have enough time to learn all these new technologies to interact with the students. But eventually they did learn, because they realized that it was the only way to meet the students where they were and to communicate with them in efficient ways. Now, I have to be honest: when it comes to the use of technology—and this was part of my doctoral dissertation, where I expanded the technology adoption models that tell you about early adopters, and mainstream adopters, and late adopters, and laggards—I uncovered a new category for some of the institutions where I worked, called the over-my-dead-body adopters. And these were some of the faculty members who say: I will never switch word processors. I will never use this technology. It’s only forty years until I retire, probably eighty more until I die. I don’t have to do this. And, to be honest, we have a responsibility to understand that those artificial intelligence tools are out there, and to guide the students as to what is the acceptable use of those technologies within the disciplines and the courses that we teach them in. Because they will find those tools available in a very competitive labor market, and they can derive some benefit from them. But also, we don’t want to shortchange their educational attainment just because they go behind our backs to copy and paste from ChatGPT, learning nothing. Going back to the question by Gabriel Doncel: not learning to exercise critical thinking, using citations and material that are unverified, borrowed from the internet without any authority, without any attention to the different points of view. 
I mean, if you’ve used ChatGPT for a while—and I have personally, even to prepare some basic thank-you speeches, which are all very formal; even to contest a traffic ticket in Washington, DC, when I was speeding but didn’t want to pay the ticket anyway; even just for research purposes—you realize that most of the writing from ChatGPT has a very, very common style. Which is: oh, on the one hand people say this, on the other hand people say that. Well, critical thinking will tell you: sure, there are two different opinions, but this is what I think myself, and this is why I think it. And these are some of the skills, the critical thinking skills, that we must continue to teach the students, and not, you know, put blinders on them and say, oh, continue focusing only on the textbook and the website. No, no. Look at the other tools, but use them judiciously. FASKIANOS: Thank you. I’m going to go next to Clemente Abrokwaa. Raised hand, if you can identify yourself, please. Q: Hi. Thanks so much for your talk. I’m from Penn State University. And this is a very important topic, I think. And some of the earlier speakers have already asked the questions I was going to ask. (Laughs.) But one thing that I would like to say is that, as you said, we cannot bury our heads in the sand. No matter what we think, the technology is already here. So we cannot avoid it. My question, though, is what do you think about the use of artificial intelligence by, say, for example, graduate students to write dissertations? You did mention the lawyers who used it to write their briefs, and they were caught. But in dissertations and also in class—for example, you have about forty students. You give a written assignment. And when you start grading, you have grading fatigue. And so at some point you lose interest in actually checking. And so I’m kind of concerned about how it will affect the students’ desire to actually go and do research without resorting to the use of AI. MOLINA: Well, Clemente, fellow colleague from the state of Pennsylvania, thank you for that, once again, both a question and a reflection here. Listen, many of us wrote our doctoral dissertations—mine at Georgetown. At one point in time, I was so tired of writing about the same topics, following the wonderful advice, but also the whims, of my dissertation committee, that I was this close to outsourcing my thesis to China. I didn’t, but I thought about it. And now graduate students are thinking, OK, why am I going through the difficulty of writing this when ChatGPT can do it for me and the deadline is tomorrow? Well, this is what will distinguish the good students and the good professionals from the other ones. And the interesting part is, as you know, when we teach graduate students we’re teaching them critical thinking skills, but also teaching them how to express themselves, you know, either orally or in writing. And writing effectively is fundamental in the professions, but also absolutely critical in academic settings. And anybody who’s just copying and pasting from ChatGPT into these documents cannot do that level of writing. But you’re absolutely right. Let’s say that we have an adjunct faculty member who’s teaching a hundred students. Will that person go through every single essay to find out whether students were cheating with ChatGPT? Probably not. 
And this is why there are also enterprising people who are using artificial intelligence to find out and tell you whether a paper was written using artificial intelligence. So it’s a little bit like a fight between different tools—and a business opportunity for all of them. And we’ve done this. We’ve used antiplagiarism tools in the past because we knew that students were copying and pasting using Google Scholar and many other sources. And now oftentimes we run antiplagiarism tools—we didn’t write them ourselves—or we tell the students: you run it yourself and you give it to me. And make sure you are not accidentally failing to cite things, which could end up jeopardizing your ability to get a graduate degree because your work was not up to snuff with the requirements of our stringent academic programs. So I would argue that these antiplagiarism tools that we’re using will, more often than not and sooner than expected, incorporate the detection of artificial intelligence writeups. And also, the interesting part is to tell the students: well, if you do choose to use any of these tools, what are the rules of engagement? Can you ask it to write a paragraph and then you cite it, and you mention that ChatGPT wrote it? Not to mention, in addition to that, all the issues about artificial intelligence, which the courts are deciding now, regarding the intellectual property of those productions. If a song, a poem, a book is written by an artificial intelligence entity, who owns the intellectual property for those works produced by an artificial intelligence machine? FASKIANOS: Good question. We have a lot of written questions. And I’m sure you don’t want to just listen to my voice, so please do raise your hands. But we do have a question from one of your colleagues, Pablo, Pepe Barcega, who’s the IT director at Drexel: Considering the potential biases and limitations of AI models, like ChatGPT, do you think relying on such technology in the educational domain can perpetuate existing inequalities and reinforce systemic biases, particularly in terms of access, representation, and fair evaluation of students? And Pepe’s question got seven upvotes, so we advanced it to the top of the line. MOLINA: All right, well, first I have to wonder whether he used ChatGPT to write the question. But I’m going to leave it at that. Thank you. (Laughter.) It’s a wonderful question. One of the greatest concerns we have had—those of us who have been working on artificial intelligence and digital policy for years, not just this year when ChatGPT was released, but for years we’ve been thinking about this, and even before artificial intelligence, in general, with algorithmic transparency. And the idea is the following: two things are happening here. One is that we’re programming the algorithms using instructions, instructions created by programmers, with all their biases, and their misunderstandings, and their shortcomings, and their lack of context, and everything else. But with artificial intelligence we’re doing something even more concerning than that, which is we have some basic algorithms but then we’re feeding a lot of information, a corpus of information, to those algorithms. And the algorithms are fine-tuning the rules based on that. 
So it’s very, very difficult for experts to explain how an artificial intelligence system actually makes decisions, because we know the engine and we know the data that we fed to the engine, but we don’t really know how those decisions are being made through neural networks, through all of the different systems and methods that we have for artificial intelligence. Very, very few people understand how those work. And those people are so busy they don’t have time to explain how the algorithm works to others, including the regulators. Let’s remember some of the failed cases. Amazon tried this early on, using it to select employees for Amazon. And they fed it all the résumés. And guess what? It turned out that most of the recommendations were to hire young white people who had gone to Ivy League schools. Why? Because the profiles of their first employees fed those descriptions, and those employees had done extremely well at Amazon. Hence, by feeding in the information of past successful employees, only those profiles were represented. And that pushes away the diversity that we need for different academic institutions, large and small, public and private, from different countries, from different genders, from different ages, from different ethnicities. All those things went away because the algorithm was promoting one particular profile. Recently I had the opportunity to moderate a panel in Washington, DC, and we had representatives from the Equal Employment Opportunity Commission. And they told us how they investigated a hiring algorithm from a company that was disproportionately recommending that they hire people whose first name was Brian and who had played lacrosse in high school because, once again, a disproportionate number of people in that company had done that. And the algorithm concluded, oh, these must be important characteristics for hiring people at this company. Let’s not forget, for example, facial recognition and artificial intelligence with Amazon Rekognition, you know, the facial recognition software: the American Civil Liberties Union decided, OK, we’re going to submit the pictures of members of Congress to this particular facial recognition engine. And it turned out that it misidentified many of them, disproportionately African Americans, matching them to mugshots of people who had been arrested. So all these biases could have really, really bad consequences. Imagine that you’re using this to decide whom you admit to your universities, and the algorithm is wrong. You know, you are making really biased decisions that will affect the livelihood of many people, but also will transform society, possibly for the worse, if we don’t address this. So this is why the OECD, the European Union, even the White House—everybody is saying: We want this technology. We want to derive the benefits of this technology, while curtailing the abuses. And it’s fundamental that we achieve transparency and make sure that these algorithms are not biased against the people who use them. FASKIANOS: Thank you. So I’m going to go next to Emily Edmonds-Poli, who is a professor at the University of San Diego: We hear a lot about providing clear guidelines for students, but for those of us who have not had a lot of experience using ChatGPT it is difficult to know what clear guidelines look like. Can you recommend some sources we might consult as a starting point, or where we might find some sample language? MOLINA: Hmm. Well, certainly this is what we do in higher education. 
We compete for the best students and the best faculty members. And we sometimes compete a little bit to be first to produce groundbreaking research. But we tend to collaborate on everything else, particularly when it comes to policy, and guidance, and rules. So there are many institutions, like mine, that have already assembled—I’m sure that yours has done the same—assembled committees, because assembling committees and subcommittees is something we do very well in higher education, with faculty members, with administrators, even with student representation, to figure out: OK, what should we do about the use of artificial intelligence on our campus? I mentioned before that taking a look at the big aspirational declarations by Meta, and Google, and IBM, and Microsoft could be helpful for these communities. But also, I’m a very active member of an organization known as EDUCAUSE. And EDUCAUSE is for educators—predominantly higher education administrators, staff members, and faculty members—to think about the adoption of information technology. And EDUCAUSE has done good work on this front and continues to do good work on this front. So once again, EDUCAUSE and some of the institutions have already published their guidelines on how to use artificial intelligence and incorporate it within their academic lives. And now, that said, we also know that even though all higher education institutions are the same, they’re all different. We all have different values. We all believe in different uses of technology. We trust the students more or less. Hence, it’s very important that whatever inspiration you take, you work internally on campus—as you have done with many other issues in the past—to make sure it really reflects the values of your institution. FASKIANOS: So, Pablo, would you point to a specific college or university that has developed a code of ethics that addresses the use of AI for their academic community beyond your own, but that is publicly available? MOLINA: Yeah, I’m going to be honest, I don’t want to put anybody on the spot. FASKIANOS: OK. MOLINA: Because, once again, there are many reasons. But let me repeat a couple of resources. One of them is from the U.S. Department of Education, from the Office of Educational Technology. And the report is Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, published earlier this year. The other source really is educause.edu. And if you look at educause.edu on artificial intelligence, you’ll find links to articles, and you’ll find links to universities. It would be presumptuous of me to evaluate whose policies are better than others, but I would argue that the general principles of nonbias, transparency, accountability, and also integration of these tools within the academic life of the institution in a morally responsible way—with concepts like privacy by design, security by design, and responsible computing—all of those are good words to have in there. Now, the other problem with policies and guidelines is that, let’s be honest, many of those have no teeth in our institutions. You know, we promulgate them. They’re very nice. They look beautiful. They are beautifully written. But oftentimes when people don’t follow them, there’s not a big penalty. And this is why, in addition to having the policies, educating the campus community is important. But it’s difficult to do because we need to educate them about so many things. 
About cybersecurity threats, about sexual harassment, about nondiscriminatory policies, about responsible behavior on campus regarding drugs and alcohol, about crime. So many things that they have to learn about. It’s hard to get at another topic for them to spend their time on, instead of researching the core subject matter that they chose to pursue for their lives. FASKIANOS: Thank you. And we will be sending out a link to this video, the transcript, as well as the resources that you have mentioned. So if you didn’t get them, we’ll include them in the follow-up email. So I’m going to go to Dorian Brown Crosby who has a raised hand. Q: Yes. Thank you so much. I put one question in the chat but I have another question that I would like to go ahead and ask now. So thank you so much for this presentation. You mentioned algorithm biases with individuals. And I appreciate you pointing that out, especially when we talk about face recognition, also in terms of forced migration, which is my area of research. But I also wanted you to speak to, or could you talk about the challenges that some institutions in higher education would have in terms of support for some of the things that you mentioned in terms of potential curricula, or certificates, or other ways that AI would be woven into the new offerings of institutions of higher education. How would that look specifically for institutions that might be challenged to access those resources, such as Historically Black Colleges and Universities? Thank you. MOLINA: Well, very interesting question, and a really fascinating point of view. Because we all tend to look at things from our own perspective and perhaps not consider the perspective of others. Those who have much more money and resources than us, and those who have fewer resources and less funding available. So this is a very interesting line. What is it that we do in higher education when we have these problems? Well, as I mentioned before, we build committees and subcommittees. Usually we also do campus surveys. I don’t know why we love doing campus surveys and asking everybody what they think about this. Those are useful tools to discuss. And oftentimes the thing that we do also, that we’ve done for many other topics, well, we hire people and we create new offices—either academic or administrative offices. With all of those, you know, they have certain limitations to how useful and functional they can be. And they also continue to require resources. Resources that, in the end, are paid for by students with, you know, federal financing. But this is the truth of the matter. So if you start creating offices of artificial intelligence on our campuses, however important the work may be on their guidance and however much extra work can be assigned to them instead of distributed to every faculty and the staff members out there, the truth of the matter is that these are not perfect solutions. So what is it that we do? Oftentimes, we work with partners. And our partners love to take—(inaudible)—vendors. But the truth of the matter is that sometimes they have much more—they have much more expertise on some of these topics. 
So, for example, if you’re thinking about incorporating artificial intelligence into some of the academic materials that you use in class, well, I’m going to take a guess that if you already work with McGraw Hill in economics, or accounting, or some of the other books and websites that they publish that you recommend to your students or make mandatory for your students, you start discussing with them: hey, listen, are you going to use artificial intelligence? How? Are you going to tell me ahead of time? Because, as a faculty member, you may have a choice to decide: I want to work with this publisher and not that particular publisher because of the way they approach this. And let’s be honest, we’ve seen a number of these vendors with major information security problems. McGraw Hill recently left a repository of data misconfigured out there on the internet, and almost anybody could access it. But many others before them, like Chegg and others, were notorious for their information security breaches. Can we imagine that these people are going to adopt artificial intelligence and not do such a good job of securing the information, the privacy, and the nonbiased approaches that we hold dear for students? I think they require a lot of supervision. But in the end, these publishers have the economies of scale for you to recommend those educational materials instead of developing your own for every course, for every class, and for every institution. So perhaps we’re going to have to continue to work together, as we’ve done in higher education, in consortia, which could be local or regional, or based on institutions with the same interests or student populations, to try to do this. And, you know, hopefully we’ll get grants, grants from the federal government, that can be used to develop some of the materials and guidelines that are going to help us embrace this—not only to operate better as institutions and fulfill our mission, but also to make sure that our students are better prepared to join society and compete globally, which is what we have to do. FASKIANOS: So I’m going to combine questions. Dr. Lance Hunter, who is an associate professor at Augusta University: There’s been a lot of debate regarding whether plagiarism detection software tools like Turnitin can accurately detect AI-generated text. What is your opinion regarding the accuracy of these AI-text detection tools? And then Rama Lohani-Chase, at Union County College, wants recommendations on plagiarism checkers—or, you know, which plagiarism detection tools for AI you would recommend. MOLINA: Sure. So, number one, I’m not going to endorse any particular company because if I do that I would ask them for money, or the other way around—I’m not sure how it works. I could be seen as biased, particularly here. But there are many out there, and your institutions are using them. Sometimes they are integrated with your learning management system. And, as I mentioned, sometimes we ask the students to use them themselves and then either produce the plagiarism report for us or simply learn from it themselves. I’m going to be honest; when I teach ethics and technology, I tell the students about the antiplagiarism tools at the universities. But I also tell them, listen, if you’re cheating in an ethics and technology class, I have failed miserably. So please don’t. Take extra time if you have to, but—you know, and if you want, use the antiplagiarism tool yourself. 
But the question stands and is critical: right now those tools are trying to improve their recognition of text written by artificial intelligence, but they’re not as good as they could be. So, like every other technology and what I’m going to call antitechnology—used to control the damage of the first technology—this is an escalation, where we keep trying to get better at identifying this. And I think they will continue to do this, and they will be successful in doing this. There are people who have written ad hoc tools using ChatGPT to identify things written by ChatGPT. I tried them. They’re remarkably good for the handful of papers that I tried myself, but I haven’t conducted enough research myself to tell you if they’re really effective tools for this. So I would argue that for the time being you must assume that those tools, as we assume all the time, will not catch all of the cases, only some of the most obvious ones. FASKIANOS: So a question from John Dedie, who is an assistant professor at the Community College of Baltimore County: To combat AI issues, shouldn’t we rethink assignments? Instead of papers, have students do PowerPoints, ask students to offer their opinions and defend them? And then there was an interesting comment from Mark Habeeb at Georgetown University School of Foreign Service. Knowledge has been cheap for many years now because it is so readily available. With AI, we have a tool that can aggregate the knowledge and create written products. So, you know, what needs to be the focus now is critical thinking and assessing values. We need to teach our students how to assess and use that knowledge rather than how to find the knowledge and aggregate that knowledge. So maybe you could react to those two—the question and comment. MOLINA: So let me start with the Georgetown one, not only because he’s a colleague of mine—I also teach at Georgetown, and that is where I obtained my doctoral degree a number of years ago. I completely agree. I completely agree with the issue that we have to teach new skills. And one of the programs in which I teach at Georgetown is our master’s in analysis, which is basically for people who want to work in the intelligence community. And these people have to find the information, and they have to draw inferences, and try to figure out whether it is a nation-state that is threatening the United States, or another actor, or a corporation, or something like that. And they use all of that critical thinking, and intuition, and all the tools that we have developed in the intelligence community for many, many years. And if they suspend their judgment and only use artificial intelligence, they will miss very important information that is critical for national security. And the same is true for something like our flagship school, the School of Foreign Service at Georgetown, one of the best in the world in that particular field, where you want to train the diplomats, and the heads of state, and the great strategic thinkers on policy and politics in the international arena to think precisely not in the mechanical way that a machine can think, but to connect those dots. And, sure, they should be using those tools in order to, you know, get to the most favorable starting position. But they should also always use their critical thinking and their capabilities of analysis in order to produce good outcomes and good conclusions. Regarding redoing the assignments, absolutely true. But that is hard. It is a lot of work. 
We’re very busy faculty members. We have to grade. We have to be on committees. We have to do research. And now they ask us to redo our entire assessment strategy, with new assignments that we need to grade again and that account for artificial intelligence. And I don’t think that any provost out there is saying, you know what? You can take two semesters off to work on this and retool all your courses. That doesn’t happen in the institutions that I know of. If you get time off because you’re entitled to it, you want to devote that time to research, because that is really what you signed up for when you pursued an academic career, in many cases. I can tell you one thing: here in Europe, where oftentimes they look at these problems with fewer resources than we have in the United States, a lot of faculty members at the high school level and at the college level are moving to oral examinations, because it’s much harder to cheat with ChatGPT in an oral examination. Because they will ask you interactive, adaptive questions—like the ones we suffered through when we were defending our doctoral dissertations. And the faculty members will realize whether or not you know the material and understand the material. Now, imagine oral examinations for a class of one hundred, two hundred, four hundred. Do you do one for the entire semester, on one chosen topic? Or do you do several throughout the semester? Do you end up using a ChatGPT-style virtual assistant to conduct your oral examinations? I think these are complex questions. But certainly redoing our assignments, and redoing the way we teach and the way we evaluate our students, is perhaps a necessary consequence of the advent of artificial intelligence. FASKIANOS: So, next question from Damian Odunze, who is an assistant professor at Delta State University in Cleveland, Mississippi: Who should safeguard against ethical concerns and the misuse of AI by criminals? Should the onus fall on the creators and companies like Apple, Google, and Microsoft to ensure security and not pass it on to the end users of the product? And I think you mentioned at the top of your remarks, Pablo, how the CEO of OpenAI was urging Congress to put some regulation into place. What is the onus on the maker of ChatGPT to protect against some of this as well? MOLINA: Well, I’m going to recycle more of the material from my doctoral dissertation. In this case it was the Molina cycle of innovation and regulation. It goes like this: basically, there are engineers and scientists who create new information technologies. And then there are entrepreneurs and businesspeople and executives who figure out, OK, I know how to package this so that people are going to use it, buy it, subscribe to it, or look at it, so that I can sell the advertising to others. And, you know, once this begins, very, very soon the abuses start. And the abuses are that criminals are using these platforms for reasons that were not envisioned before. Even the executives, as we’ve seen with Google, and Facebook, and others, decide to invade the privacy of the people, because they only have to pay a big fine, but they make much more money than the fines, or they expect not to be caught. And what happens in this cycle is that eventually there is so much noise in the media, so many congressional hearings, that regulators step in and try to pass new laws, or the regulatory agencies try to investigate using the powers given to them. 
And then all of these new rules have to be tested in courts of law, which could take years, sometimes reaching all the way to the Supreme Court. Some of them are even knocked down on the way to the Supreme Court when the courts realize this is not constitutional, or it’s a conflict of laws, and things like that. Now, by the time we regulate these new technologies, not only have many years gone by, but the technologies have changed. The marketing products and services have changed, the abuses have changed, and the criminals have changed. So this is why we’re always living in a loosely regulated space when it comes to information technology. And this is an issue of accountability. We’re finding this, for example, with information security. If my phone is hacked, or my computer, or my email, is it the fault of Microsoft, and Apple, and Dell, and everybody else? Why am I the one paying the consequences and not any of these companies? Because it’s unregulated. So morally speaking, yes, these companies are accountable. Morally speaking, the users are also accountable, because we’re using these tools and incorporating them professionally. Legally speaking, so far, nobody is accountable except the lawyers who submitted briefs that were not correct in a court of law and were disciplined for that. But other than that, right now, it is a very gray space. So in my mind, it requires everybody. It takes a village to do the morally correct thing. It starts with the companies and the inventors. It involves the regulators, who should do their job and make sure that there’s no unnecessary harm created by these tools. But it also involves every company executive, every professional, every student, and every professor who decides to use these tools. FASKIANOS: OK. I’m going to combine a couple questions from Dorothy Marinucci and Venky Venkatachalam about the effect of AI on jobs. Dorothy—she’s from Fordham University—read something about Germany’s best-selling newspaper Bild reportedly adopting artificial intelligence to replace certain editorial roles in an effort to cut costs. Does this mean that the field of journalism and communication will change? And Venky’s question is: One of AI’s impacts is in the area of automation, leading to the elimination of certain types of jobs. Can you talk about both the elimination of jobs and what new types of jobs you think will be created as AI matures into the business world with more value-added applications? MOLINA: Well, what I like about predicting the future—and I’ve done this before in conferences and papers—is that, you know, when the future comes ten years from now, people will either not remember what I said, or, you know, maybe I was lucky and my prediction was correct. In the specific field of journalism, we’ve seen the journalism and communications field decimated because the money that it used to make from advertising has gone away. Certainly a big part of that money went to corporate profits, but much of it also went to hiring good journalists and funding investigative journalism—people who could spend six months writing a story, when right now they have six hours to write a story, because there are no resources. And all the advertising money went instead to Facebook, and Google, and many others, because they work very well for advertising. But now the lifeblood of journalism organizations has been really, you know, undermined. 
And there’s still good journalism in other places, in newspapers, but sadly there is a great temptation to replace some of the journalists with artificial intelligence, particularly on the least important pieces. I would argue that editorial pieces are the most important in newspapers—the ones requiring ideology, and critical thinking, and much else—whereas there are others, the ones that tell you about traffic changes or weather patterns, without offending any meteorologists, that maybe require a more mechanical approach. I would argue that a lot of professions are going to be transformed because, well, if ChatGPT can write real estate listings that work very well, well, you may need fewer people doing this. And yet, I think that what we’re going to find is the same thing we found when technology arrived. We all thought that the arrival of computers would mean that everybody would be without a job. Guess what? It meant something different. It meant that in order to do our jobs, we had to learn how to use computers. So I would argue that this is going to be the same case. To be a good doctor, to be a good lawyer, to be a good economist, to be a good knowledge worker, you’re also going to have to learn how to use whatever artificial intelligence tools are available out there, and use them professionally within the moral and deontological concerns that apply to your particular profession. Those are the kinds of jobs that I think are going to be very important. And, of course, all the technical jobs, as I mentioned. There are tons of people who consider themselves artificial intelligence experts. Only a few at the very top understand these systems. But there are many others in the pyramid who help with preparing these systems, with the support, the maintenance, the marketing, preparing the datasets that go into these particular models, working with regulators and legislators and compliance organizations to make sure that the algorithms and the tools are not running afoul of existing regulations. All of those, I think, are going to be interesting jobs that will be part of the arrival of artificial intelligence. FASKIANOS: Great. We have so many questions left and we just couldn’t get to them all. I’m just going to ask you to maybe reflect on how the use of artificial intelligence in higher education will affect U.S. foreign policy and international relations. I know you touched upon it a little bit in reacting to the comment from our Georgetown University colleague, but any additional thoughts you might want to add before we close? MOLINA: Well, let’s be honest, one particular issue that applies to education and to everything else is that there is a race—a worldwide race—for artificial intelligence progress. The big companies—you know, Google, Meta, Amazon, and many others—are really putting resources into that, trying to be first in this particular race. But it’s also a national race. For example, it’s very clear that there are executive orders from the United States as well as regulations and declarations from China that basically indicate these two big nations are trying to be first in dominating the use of artificial intelligence. And let’s be honest, in order to do well in artificial intelligence you need not only the scientists who are going to create those models and refine them, but you also need the bodies of data to feed these algorithms in order to have good algorithms. 
So the barriers to entry for other nations, and the barriers to entry for other technology companies, are going to be very, very high. It’s not going to be easy for any small company to say: Oh, now I’m a huge player in artificial intelligence. Because even if you have created an interesting new algorithmic procedure, you don’t have the datasets that the huge companies have been able to amass and work on for the longest time. Every time you submit a question to ChatGPT, the ChatGPT experts are using those questions to refine the tool. The same way that, when we use voice recognition from Apple or Android or other companies, they are using our voices and our accents and our mistakes in order to refine their voice recognition technologies. So this is the power. The early bird gets the worm: those who are investing, those who are aggressively going for it, and those who are also judiciously regulating this can really do very well in the international arena when it comes to artificial intelligence. And so will their universities, because they will be able to really train those knowledge workers, they’ll be able to get the money generated from artificial intelligence, and the two will be able to, you know, feed back into one another: the advances in the technology will result in more need for students, and more students graduating will propel the industry. And we’ll always have a fight for talent, where companies and countries will attract those people who really know about these wonderful things. Now, keep in mind that artificial intelligence was the core of this discussion, but there are so many other emerging issues in information technology. And some of them are critical to higher education. So, there’s still, you know, lots of hype, but we think that virtual reality will have an amazing impact on the way we teach and conduct research and train for certain skills. We think that quantum computing has the ability to revolutionize the way we conduct research, allowing us to do computations that are not even thinkable today. We’ll look at things like robotics. And if you ask me what is going to take many jobs away, I would say that robotics can take a lot of jobs away. Now, we thought that there would be no factory workers left because of robots, but that hasn’t happened. But keep adding robots with artificial intelligence to serve you a cappuccino, or your meal, or take care of your laundry, or many other things, or maybe clean your hotel room, and you realize, oh, there are lots of jobs out there that no longer will be there. Think about artificial intelligence for self-driving vehicles, boats, planes, cargo ships, commercial airplanes. Think about the thousands of taxi drivers and truck drivers who may end up being out of jobs because, listen, the machines drive more safely, and they don’t get tired, and they can be driving twenty-four by seven, and they don’t require health benefits or retirement. They don’t get depressed. They never miss. Think about many of the technologies out there that have an impact on what we do. But artificial intelligence is a multiplier for these technologies, a contributor to many other fields and many other technologies. And this is why we’re spending so much time and so much energy thinking about these particular issues. FASKIANOS: Well, thank you, Pablo Molina. We really appreciate it. 
Again, my apologies that we couldn’t get to all of the questions and comments in the chat, but we appreciate all of your questions and, of course, your insights were really terrific, Dr. P. So we will, again, be sending out the link to this video and transcript, as well as the resources that you mentioned during this discussion. I hope you all enjoy the Fourth of July. And I encourage you to follow @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Again, you can send us comments, feedback, and suggestions to [email protected]. And, again, thank you all for joining us. We look forward to your continued participation in CFR Academic programming. Have a great day. MOLINA: Adios. (END)
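As a concrete illustration of the ad hoc AI-text screening discussed in the transcript above, here is a minimal sketch of one common heuristic: scoring a passage's perplexity under a small language model and flagging unusually predictable text. This is not the approach of any particular vendor or of the tools Dr. Molina mentions; the model choice and threshold are illustrative assumptions, and a heuristic like this produces false positives and should never be the sole basis for an academic-integrity decision.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Load a small, publicly available language model (illustrative choice).
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Score how "predictable" the passage is to the language model.
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        # out.loss is the mean negative log-likelihood per token; exponentiate for perplexity.
        return float(torch.exp(out.loss))

    def flag_possible_ai_text(text: str, threshold: float = 30.0) -> bool:
        # Heuristic only: machine-generated prose often scores lower (more predictable)
        # than human writing. The threshold is an assumed, uncalibrated value, and a flag
        # here is a prompt for human review, never proof of misconduct.
        return perplexity(text) < threshold

As the discussion notes, detectors of this kind miss many cases and mislabel others, which is why institutions pair them with policy, disclosure rules, and human judgment.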
  • World Order
    Council of Councils Twelfth Annual Conference
    Sessions were held on the future of AI governance, accountability for war crimes in the invasion of Ukraine, reworking the Sustainable Development Goals and the global development model, revitalizing the World Trade Organization, and strengthening the global geopolitical order.
  • Artificial Intelligence (AI)
    AI Meets World, Part Two
    Podcast
    The rapid emergence of artificial intelligence (AI) has brought lawmakers and industry leaders to the same conclusion: regulation is necessary to ensure the technology changes the world for the better. The similarities could end there, as governments and industry clash on what those laws should do, and different governments take increasingly divergent approaches. What are the stakes of the debate over AI regulation?
  • Technology and Innovation
    Reporting on AI and the Future of Journalism
    Play
Dex Hunter-Torricke, head of global communications & marketing at Google DeepMind, discusses how AI technology could shape news reporting and the role of journalists, and Benjamin Pimentel, senior technology reporter at the San Francisco Examiner, discusses framing local stories on AI in media. The webinar is hosted by Carla Anne Robbins, senior fellow at CFR and former deputy editorial page editor at the New York Times.  TRANSCRIPT FASKIANOS: Thank you. Welcome to the Council on Foreign Relations Local Journalists Webinar. I am Irina Faskianos, vice president for the National Program and Outreach here at CFR. CFR is an independent and nonpartisan membership organization, think tank, publisher, and educational institution focusing on U.S. foreign policy. CFR is also the publisher of Foreign Affairs magazine. As always, CFR takes no institutional positions on matters of policy. This webinar is part of CFR’s Local Journalists Initiative, created to help you draw connections between the local issues you cover and national and international dynamics. Our program aims to put you in touch with CFR resources and expertise on international issues and provides a forum for sharing best practices. Again, today’s discussion is on the record. The video and transcript will be posted on our website after the fact at CFR.org/localjournalists, and we will share the content after this webinar. We are pleased to have Dex Hunter-Torricke, Benjamin Pimentel, and host Carla Anne Robbins to lead today’s discussion on “Reporting on AI and the Future of Journalism.” We’ve shared their bios with you, but I will highlight their credentials here. Dex Hunter-Torricke is the head of global communications and marketing at Google DeepMind. He previously worked in communications for SpaceX, Meta, and the United Nations. He’s a New York Times bestselling ghostwriter and frequent public commentator on the social, political, and organizational challenges of technology. Benjamin Pimentel is a senior technology reporter for the San Francisco Examiner covering Silicon Valley and the tech industry. He has previously written on technology for other outlets, including Protocol, Dow Jones MarketWatch, and Business Insider. He was also a metro news and technology reporter at the San Francisco Chronicle for fourteen years. And in 2022, he was named by Muck Rack as one of the top ten crypto journalists. And finally, Carla Anne Robbins, our host, is a senior fellow for CFR—at CFR, excuse me. She is the faculty director of the Master of International Affairs Program and clinical professor of national security studies at Baruch College’s Marxe School of Public and International Affairs. Previously, she was deputy editorial page editor at the New York Times and chief diplomatic correspondent at the Wall Street Journal. Welcome, all. Thank you for this timely discussion. I’m going to turn it now to Carla to start the conversation, and then we will turn to all of you for your questions and comments. So, Carla, take it away. ROBBINS: Thank you so much, Irina. And thank you so much to you and your staff for setting this up, and to Dex and to Ben for joining us today. You know, I am absolutely fascinated by this topic—fascinated as a journalist, fascinated as an academic. Yes, I spend a lot of time worrying whether my students are using AI to write their papers. So far, I don’t know. So, as Irina said, Dex, Ben, and I will chat for about twenty-five minutes and then throw it open to you all for questions. 
But if you have something that occurs to you along the way, don’t hold back—post it, and, you know, we will get to you. And we really do want this to be a conversation. So I’d like to start with Ben. I’m sure everyone here has already played with ChatGPT or Bard if they get off the waitlist. I’ve already needled Dex about this. You know, I asked ChatGPT, you know, what questions I should be asking you all today, and I found it sort of thin gruel but not a bad start. But, Ben, can you give us a quick summary of what’s new about this technology, generative AI, and why we need to be having this conversation today? PIMENTEL: Yes. And thank you for having me. AI has been around for a long time—since after the war, actually—but it’s only—you know, November 30, 2022, is a big day, an important date for this technology. That’s when ChatGPT was introduced. And it just exploded in terms of opening up new possibilities for the use of artificial intelligence and also a lot of business interest in it. For journalists, of course, quickly, there has been a debate on the use of ChatGPT for reporting and for running a news organization. And that’s become a more important debate given the revelations and the disclosures of organizations like the AP and CNET, and recently even insiders now saying that they’re going to be using AI for managing their paywall or in terms of deciding whether to offer a subscription to a reader or not. For me personally, I think the technology has a lot of important uses in terms of making newsgathering and reporting more efficient/faster. For instance, I come from a—I’m going to date myself, but when I started it was before—when I started my career in the U.S.—I’m from the Philippines—it was in June 1993. That was two months after the World Wide Web became public domain. That’s when the websites started appearing. And around that time, whenever I’m working nights to—you know, that was before websites and before Twitter. To get a sense of what’s going on in San Francisco, especially at night—and I’m working at night—I would have to call every police station, fire department, hospital from Mendocino down to Santa Cruz to get a sense of what’s going on. It’s boring. It’s a thankless job. But it actually helped me. But now you can do that with technology. I mean, you now have sites that can pull from the Twitter feed of the San Francisco Police Department or the San Francisco Fire Department to report, right, on what’s going on. And AI now creates a possibility of actually pulling that information and creating a news report that in the past I would have had to write myself—like a short 300-word report on, hey, Highway 80 is closed because of an accident. Now you can automate that. The problem that’s become more prominent recently is the use of AI without disclosing it. I was recently on a, you know, panel—on a panel where an editor was very high on the technology, but then also said, when we asked him, are you disclosing it on your site: Well, frankly, our readers don’t care. I disagree vehemently—if you’re going to use it, you have to disclose it. Like, if you are pulling information and creating reports on, you know, road conditions or a police action, you have to say that AI created it. And it’s definitely even more so for more—for bigger stories like features or, you know, New Yorker-type articles. You wouldn’t want—I wouldn’t want to read a New Yorker article and not know that it was done by an AI or by a chatbot. 
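The kind of automation Pimentel describes—pulling incident updates from an official feed and turning them into short, clearly labeled briefs—can be sketched in a few lines. This is a minimal, illustrative sketch only: the feed URL is a placeholder, it assumes a standard RSS/Atom feed parsed with the feedparser library, and anything it drafts would still need the human fact-checking and disclosure he calls for.

```python
# Minimal sketch, not production code. Assumes an RSS/Atom feed from an official
# source (the URL below is a placeholder) and the third-party "feedparser"
# library (pip install feedparser).
import feedparser

FEED_URL = "https://example.gov/incident-alerts.rss"  # hypothetical official feed

def draft_incident_briefs(feed_url: str, limit: int = 5) -> list[str]:
    """Turn the newest feed entries into short, clearly labeled briefs."""
    feed = feedparser.parse(feed_url)
    briefs = []
    for entry in feed.entries[:limit]:
        title = entry.get("title", "Untitled alert")
        summary = entry.get("summary", "").strip()
        published = entry.get("published", "time not given")
        # The disclosure line travels with the copy, per the discussion above.
        briefs.append(
            f"{title} ({published}). {summary} "
            "[This brief was generated automatically from an official feed "
            "and reviewed by an editor.]"
        )
    return briefs

if __name__ == "__main__":
    for brief in draft_incident_briefs(FEED_URL):
        print(brief, "\n")
```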
And then for me personally, I worry about what it means for young reporters, younger journalists, because they’re not going to go through what I went through, which in many ways is a good thing, right? You don’t have to call every police station in a region to get the information. You can pull that. You can use AI to do that. But when editors and writers talk about, oh, I can now write a headline better with AI, or write my lede and nut graf with AI, that’s worrisome because, for me, that’s not a problem for a journalist, right? Usually you go through that over and over again, and that’s how you get better. That’s how you become more critically minded. That’s how you become faster; I mean, even develop your own voice in writing a story. I’ll stop there. ROBBINS: I think you’ve raised a lot of important questions which we will delve into some more. But I want to go over to Dex. So, Dex, can you talk a little bit more about this technology and what makes it different from other artificial intelligence? I mean, it’s not like this is something where we suddenly just woke up one day and it was there. What makes generative AI different? HUNTER-TORRICKE: Yeah. I mean, I think the thing about generative AI which, you know, has really, you know, wowed people has been the ability to generate content that seems new. And, obviously, how generative AI works—and we can talk much more about that—a lot of what it’s creating is, obviously, based on things that exist out there in the world already. And you know, the knowledge that it’s presenting, the content that it’s creating is something that can seem very new and unique, but, obviously, you know, is built on training from a lot of previous data. I think when you experience a generative AI tool, you’re interacting with it in a very human kind of way—in a way that previous generations of technology haven’t necessarily—(audio break). You’re able to type in natural language prompts; and then you see on many generative AI tools, you know, the system thinking about how to answer that question; and then producing something very, very quickly. And it feels magical in a way that, you know, certainly—maybe I’m just very cynical having spent so long in the tech industry, but you know, certainly I don’t think lots of us feel about a lot of the tools that we take for granted. This feels qualitatively different from many of the current systems that we have. So I think because of that, you know, over the last year, as generative AI—(audio break)—starts to impact on a lot of different knowledge-type industries and professions. And of course, you know, the media industry is, you know, one of those professions. I think, you know, lots of reporters and media organizations are obviously thinking not just how can I use generative AI and other AI tools as part of my work today, but what does this really mean for the profession? What does this mean for the industry? What does this mean for the economics over the long term? And those are questions that, you know, I think we’re all still trying to figure out, to an extent. ROBBINS: So I want to ask you—you know, let’s talk about the good for a while, and then we’ll get into the bad. So, you know, I just read a piece in Nieman Reports, which we’ll share with everybody, that described how a Finnish newspaper, Yle, is using AI to translate stories into Ukrainian, because it’s now got tens of thousands of people displaced by the war. 
The bad news, at least for me, is Buzzfeed started out using AI to write its quizzes, which I personally didn’t care much about, and then said that’s all we’re going to use it for. But then it took a nanosecond and it moved on to travel stories. Now, as a journalist, I’m worried—I mean, as it is the business is really tight. Worried about displacement. And also about—you know, we hear all sorts of things. But we can get into the bad in a minute.  You know, if you were going to make a list of things that didn’t make you nervous, that, you know, Bard could do, that ChatGPT could do, that makes it—you know, that you look at generative AI and you say, well, it’s a calculator. You know, we all used to say, oh my God, you know, nobody’s ever going to be able to do a square root again. And now everybody uses a calculator, and nobody sits around worrying about that. So I—just a very quick list. You know, Ben, you’ve already talked about, you know, pulling the feed on traffic and all of that. You know, give us a few things that you really think—as long as we disclose—would really be good, particularly for, you know, cash-strapped newsrooms, so that we could free people up to do better work? And then, Dex, I’m going to ask you the same question. PIMENTEL: City council meetings. I mean, I started my career— ROBBINS: You’re going for the boring first. PIMENTEL: Right, right. School board meetings. Yeah, it’s boring, right? That’s where you start out. That’s where I started out. And, if—I mean, I’m sort of torn on this, because you can use ChatGPT or generative AI to maybe present the agenda, right? The agenda for the week’s meeting in a readable, more easily digestible manner, instead of having people go to the website and try to make sense of it. And even the minutes of the meeting, right, to present it in a way that here’s what happened. Here’s what they decided. I actually look back—you know, like you said, and like I said, it’s boring. But it’s valuable. For me, the experience of going through that process and figuring out, OK, what did they decide? Trying to reach out to the councilman, OK, what did you mean—I mean, to go deeper, right? But at the same time, given the budget cuts, I would allow—I would accept a newsroom that decides, OK, we’re going to use ChatGPT to do summaries of these things, but we’re going to disclose it. I think that’s perfectly fine—especially for local news, which has been battered since the rise of the web.  I mean, I know this because I worked for the Chronicle and I worked in bureaus in the past. So that’s one positive thing, aside from, you know, traffic hazard warnings—things that may take a human reporter more time. If you automate it, maybe it’s better. It’s a good service to the community.  ROBBINS: Dex, you have additions to the positive list? Because we’re going to go to the negative next.  HUNTER-TORRICKE: Yeah, absolutely. I mean, look, I think that category of stuff which, you know, Ben might talk about as boring, you know, but certainly, I would say, is useful data that just takes a bunch of time to analyze and to go through, that’s where AI could be really, really valuable. You know, providing, you know, analysis, surfacing that data. Providing much broader context for the kinds of stories that reporters are producing. Like, that’s where I see systems that are able to parse through a lot of data very quickly being incredibly valuable. 
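To make the meeting-summary idea concrete, here is a minimal sketch of a summarize-and-disclose step. It assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative placeholders rather than a recommendation of any particular vendor, and the output would still need an editor’s review before publication.

```python
# Minimal sketch, not production code. Assumes the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY environment variable; the model
# name and prompt below are placeholders.
from openai import OpenAI

client = OpenAI()

DISCLOSURE = "This summary was drafted with an AI tool and reviewed by an editor."

def summarize_minutes(minutes_text: str) -> str:
    """Ask a model for a short, plain-language summary of council minutes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model is available
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these city council minutes for a general audience "
                    "in five short bullet points. Note every vote and its outcome. "
                    "Do not add facts that are not in the text."
                ),
            },
            {"role": "user", "content": minutes_text},
        ],
    )
    summary = response.choices[0].message.content
    return f"{summary}\n\n{DISCLOSURE}"  # the disclosure line travels with the copy

# Usage, assuming the clerk's minutes have been saved to a text file:
# print(summarize_minutes(open("council-minutes.txt").read()))
```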
You know, that’s going to be something that’s incredibly useful for identifying local patterns, trends of interest that you can then explore further in more stories. So I think that’s all a really positive piece. You know, the other piece is just around, you know, exposing the content that local media is producing to a much wider audience. And there, you know, I could see potential applications where, you know, AI is, you know, able to better transcribe and translate local news. You know, you mentioned the Ukrainian example, but certainly I think there’s a lot of, you know, other examples where outlets are already using translation technology to expose their content to a much broader and global audience. I think that’s one piece. You know, also thinking about how do you make information more easily accessible so that, you know, this content then has higher online visibility. You know, every outlet is, you know, desperately trying to, you know, engage its readers and expose, you know, a new set of readers to their content. So I think there’s a bunch of, you know, angles there as well. ROBBINS: So let’s go on to the negative, and then we’re going to pass it over because I’m sure there’s lots of questions from the group. So, you know, we’ve all read about the concerns about AI and disinformation. There have been two recent reports, one by NewsGuard and another by ShadowDragon, that found AI-created sites and AI-created content filled with fabricated events, hoaxes, and dangerous medical advice. So you’ve got that on one hand. And there was already, you know, an enormous amount of disinformation and bias out there. You know, how does AI make this worse? And do we have any sense of how much worse? Is it just because it can shovel a lot more manure faster? Or is there something about it that makes this different? Ben? PIMENTEL: I mean, as Dex said, generative AI allows you to create content that looks real, like it was created by humans. That’s sort of the main thing that really changes everything. We’ve been living with AI for a number of years—Siri, and Cortana, and all that. But when you listen to them, you know that it’s not human, right? Eventually you will have technologies that will sound human, and you can be deceived by it. And that’s where the concern about disinformation comes up.  I mean, hallucinations is what they call it in terms of they’re going to present you—I don’t know if you ever search yourself on ChatGPT, and they spit out a profile that’s really inaccurate, right? You went to this university or whatnot. So that’s a problem. And the thing about that, though, is the more data it consumes, it’ll get better. That’s sort of the worrisome, but at the same time positive, thing. Eventually all these things will be fixed. But at the same time, you don’t know what kind of data they’re using for these different models. And that’s going to be a major concern.  In terms of the negative—I mean, like I said, I mentioned the training of journalists is a concern to me. I mean, I mentioned certain things that are boring, but I think—I also wonder, so what happens to journalists if they don’t go through that? If they already go to a certain level because, hey, ChatGPT can take care of that so you don’t have to cover a city council meeting? Which, for me, was a positive experience. I mean, I hated that I was doing it, but eventually looking back that was good. I learned how to talk to a city politician. I learned to pick up on whether he’s lying to me or not. 
And that enables me to create stories later on in my career that are more analytical, you know, more nuanced, more sensitive to the needs of my readership.  Another thing is in journalism we know there is no such thing as absolute neutrality, right? Even and especially in analytical stories, your point of view will come up. And that brings up the question, OK, what point of view are we presenting if you have ChatGPT write those stories? Especially the most analytical ones, like features, a longer piece that delves into a certain problem in the community and tries to explore it. I worry that you can’t let ChatGPT or an AI program do that without questioning, OK, what’s the data that is the basis of this analysis, of this perspective? I’ll stop there. ROBBINS: So, Dex, jump in anywhere on this, but I do have a very specific technical thing. Not that I want to get into this business but, you know, I’ve written a lot in the past about disinformation. And it’s one thing for hallucinations, where they’re just working with garbage in so you get garbage out, which is—and you certainly saw that in the beginning with Wikipedia, which has gotten better with crowdsourcing over time. But from my understanding of these reports from NewsGuard and ShadowDragon, there were people who were malevolently using AI to push out bad information. So is this—how is generative AI making that easier than what we just had before? HUNTER-TORRICKE: I mean, I think the main challenge here is around how compelling a lot of this content seems, compared to what came before, right? So, you know—you know, I think Ben spoke to this—you know, a lot of this stuff isn’t exactly news. AI itself has been around for a long time. And we then had manifestations of these challenges for quite a long time with the entire generation of social media technology. So like deepfakes, like that’s something we’ve been talking about for years. The thing about deepfakes which made it such an interesting debate is that for years every time we talked about deepfakes, everyone knew exactly what a deepfake was because they were so unconvincing. You know—(audio break)—exactly what was a deepfake and what wasn’t. Now, it’s very different because of the quality of the experience.  So, you know, a few weeks ago you may have seen there was a picture that was trending on Twitter of the pope wearing a Balenciaga jacket. And for about twenty-four hours, the internet was absolutely convinced that the pope was rocking this $5,000 jacket that was, like, perfectly color coordinated. And, you know, it was a sort of—you know, it was a funny moment. And of course, it was revealed that it had been generated using an AI. So no harm done, I guess. But, like, it was representative of how—(audio break)—are being shared. Potentially it could have very serious implications, you know, when they are used by bad actors, you know, as you described, you know, to do things that are much more nefarious than simply, you know, sharing a funny meme. One piece of research I saw recently, which I thought was interesting and spoke to what some of these challenges might look like over time—I believe this was from Lancaster University—compared how trustworthy AI-generated faces of people were rated relative to the faces of real humans. And it found that actually, amongst the folks they surveyed as part of this research, faces of AI-generated humans were rated 8 percent more trustworthy than actual humans. 
And, you know, I think, again, it’s a number, right, that, you know, I think a lot of people laugh at because, you know, we think oh, well, you know, that’s kind of funny and—(audio break)—of course, I can tell the difference between humans and AI-generated people. You know, I’m—(audio break)—were proved wrong when they actually tried to detect the differences themselves. So I do think there’s going to be an enormous number of challenges that we will face over the coming years. These are issues that, you know, certainly on the industry side, you know, I think lots of us are taking very seriously, certainly governments and regulators are looking at. Part of the solution will have to be other technologies that can help us parse the difference between AI-generated content and stuff that isn’t. And then part of that, I think, will be human solutions. And in fact, that may actually be the largest piece, because, of course, what is driving disinformation are a bunch of societal issues. And it’s not always going to be as simple as saying, oh, another piece of technology will fix that. ROBBINS: So I want to turn this over to the group. And I’ve got lots more questions, but I’m sure the group has—they’re journalists. They’ve got lots of questions. So the first question is from Phoebe Petrovic. Phoebe, can—would you like to ask your question yourself? Or I can read it, but I always love it when people ask their own questions. Q: Oh, OK. Hey, everyone. So, I was curious about how we might—just given all the reporting that’s been done about ChatGPT and other AI models hallucinating information, faking citations to Washington Post articles that don’t exist, making fake—totally make up research article citations that do not exist, how can we ethically or seriously recommend that we use generative AI for newsgathering purposes? It seems like you would just have to factcheck everything really closely, and then you might as well have done the job to begin with and not get into all these ethical implications of, like, using a software that is potentially going to put a lot of us out of business?  ROBBINS: And Phoebe, are you—you’re at Wisconsin Watch, right? Q: Mmm hmm. And we have a policy that we do not—at this point, that none of us are going to be using AI for any of our newsgathering purposes. And so that’s where we are right now. But I just wonder about the considerable hallucination aspect for newsgathering, when you’re supposed to be gathering the truth. ROBBINS: Dex, do you want to talk a little bit about hallucinations? HUNTER-TORRICKE: Yeah, absolutely. So I think, you know, Phoebe has hit the nail on the head, right? Like, that there are a bunch of, you know, issues right now with existing generative AI technology. You do have to fact-check and proof absolutely everything. So it is—it is something that—you know, it won’t necessarily save you lots of time if you’re looking to just generate, you know, content. I think there are two pieces here which, you know, I think I would focus on.  One is, obviously, the technology is advancing rapidly. So these are the kinds of issues which I expect with future iterations of the technology we will see addressed by more sophisticated models and tools. So absolutely today you’ve got all those challenges. That won’t necessarily be the case over the coming years. I think the second piece really is around thinking what’s the value of me experimenting with this technology now as a journalist and as an organization? 
It isn’t necessarily to think, oh, I can go and, you know, replace a bunch of fact-heavy lifting I have to do right now as a reporter. I think it’s more becoming fluent with the things that generative AI might conceivably be able to do that can be integrated into the kind of work that you’re doing.  And I expect a lot of what reporters and organizations generally will use generative AI for over the coming years will actually be some of the things that I talked about, and that Ben talked about. You know, it’s corralling data. It’s doing analysis. It’s being more of a researcher rather than a co-writer, or entirely taking over that writing. I really see it as something that’s additive and will really augment the kind of work that reporters and writers are doing, rather than replacing it. So if you do it from that context and, you know, obviously, you know, it does depend on you experimenting to see what are all the different applications in your work, then I think that might lead to very different outcomes. ROBBINS: So we have another question, and we’ll just move on to that. And of course, Ben, you can answer any question you want at any time. So— PIMENTEL: Can I add something on that? It’s almost like the way the web has changed reporting. In the past, like, I covered business. To find out how many employees a company has or when it was founded, I would have to call the PR department or the media rep. Now I can just go quickly to the website, where they have all the facts about the company. But even so, I still double-check whether that information is up to date. I even go to the SEC filings to make sure. So I see it as that kind of a tool, the way the web—or, like, when you see something on Wikipedia, you do not use that as a source, right? You use that as a starting point to find other sources. ROBBINS: So Charles Robinson from Maryland Public Television. Charles, do you want to ask your question? Q: Sure. First of all, gentlemen, appreciate this. I’m working on a radio show on ChatGPT and AI. And one of the questions that I’ve been watching in this process is the inability of AI and ChatGPT to get the local nuances of a subject matter, specifically reporting on minority communities. And, Ben, I know you being out in San Francisco, there are certain colloquialisms in Filipino culture that I wouldn’t get if I didn’t know them. Whereas, like, to give you an example, there’s been a move to kind of, like, homogenize everybody as opposed to getting the colloquialisms, the gestures, and all of that. And I can tell you, as a Black reporter, you know, it’s the reason why I go into the field because you can’t get it if all I do is read whatever someone has generated out there. Help me understand. Because, I’m going to tell you, I write a specific blog on Black politics. And I’m going to tell you, I’m hoping that ChatGPT is not watching me to try and figure out what Black politics is. ROBBINS: Ben. PIMENTEL: I mean, I agree. I mean, when I started my career, the best—and I still believe this—the best interviews are face-to-face interviews, for me. We get more information on how people react, how people talk, how they interact with their surroundings. Usually it’s harder to do that if you’re, you know, doing a lot of things. But whenever I have the opportunity to report on—I mean, I used to cover Asian American affairs in San Francisco. You can’t do that from a phone or a website. You have to go out into the community. 
And I cover business now, which is more—you know, I can do a lot of it by Zoom. But still, if I’m profiling a CEO, I’d rather—it’d be great if I could meet the person so that I can read his body language, he can react to me, and all that. In terms of the nuances, I agree totally. I mean, it’s possible that ChatGPT can—I mean, as we talked about—what’s impressive and troubling about this technology is it can evolve to a point where it can mimic a lot of these things. And for journalism, that’s an issue for us to think about because, again, how do you deal with a program that’s able to pretend that it’s, you know, writing as a Black person, or as a Filipino, or as an Asian American? Which, based on the technology, eventually it can. But do we want that kind of reporting and journalism that’s not based on more human interactions? ROBBINS: So thank you for that. So Justin Kerr who’s the publisher of the McKinley Park News—Justin, do you want to ask your question? Q: Yes. Yes. Thank you. Can folks hear me OK? ROBBINS: Absolutely. Q: OK. Great. So I publish the McKinley Park News, which is, I call it, a micro-local news outlet, focusing on a single neighborhood in Chicago. And it’s every beat in the neighborhood—crime, education, events, everything else. And it’s all original content. I mean, it’s really all stuff that you won’t find anywhere else on the internet, because it’s so local and, you know, there’s news deserts everywhere. A handful of weeks ago, I discovered through a third party that seemingly the entirety of my website had been scraped and included in these large language models that are used to power ChatGPT, all of these AI services, et cetera.  Now, this is in spite of the fact that I have a terms of service clearly linked up on every page of my website that expressly says: Here are the conditions that anyone is allowed to access and use this website—which is, you know, for news consumers, and no other purpose. And I also list a bunch of expressly prohibited things that, you know, you cannot access or use our website for. One of those things is to inform any large language model, algorithm, machine learning process, et cetera, et cetera, et cetera.  Despite this, everything that I have done has been taken from me and put into these large language models that are then used in interfaces that I see absolutely no benefit from—interfaces and services. So when someone interacts with the AI chat, they’re going to get—you know, maybe they ask something about the McKinley Park neighborhood of Chicago. They’re not—you know, we’re going to be the only source that they have for any sort of realistic or accurate answer. You know, and when someone interacts with a chat, I don’t get a link, I don’t get any attention, I don’t get a reference. I don’t get anything from that.  Not only that, these companies are licensing that capability to third parties. So any third party could go and use my expertise and content to create whatever they wanted, you know, leveraging what I do. As a local small news publisher, I have absolutely no motivation or reason to try to publish local news, because everything will be stolen from me and used in competing interfaces and services that I will never get a piece of. Not only that, this— ROBBINS: Justin, we get—we get the—we get the point. Q: I guess I’m mad because you guys sit up here and you’re using products and services, recommending products and services without the—without a single talk about provenance, where the information comes from. 
ChatGPT doesn’t have a license to my stuff. Neither do you. ROBBINS: OK. Q: So please stop stealing from me and other local news outlets. That’s—and how am I supposed to—my question is, how am I supposed to operate if everything is being stolen from me? Thank you very much. ROBBINS: And this is a—it’s an important question. And it’s an important question, obviously, for a very small publisher. But it’s also an important question for a big publisher. I mean, Robert Thomson from News Corp is raising this question as well. And we saw what—we saw what the internet did to the news business and how devastating it’s been. So, you know, it’s life and death—life and death for some—life and death for a very small publisher, but it’s very much life and death for big publishers as well. So, Dex, this goes over to you. HUNTER-TORRICKE: Yeah, sure. I mean, I think—you know, obviously I can’t comment on any, you know, specific website or, you know, terms and conditions on a website. You know, I think, you know, from the DeepMind perspective, I think we would say that, you know, we believe that training large language models using open web content, you know, creates huge value for users and the media industry. You know, it leads to the creation of more innovative technologies that will then end up getting used by the media, by users, you know, to connect with, you know, stories and content. So actually, I think I would sort of disagree with that premise. I think the other piece, right, is there is obviously a lot of debate, you know, between different, you know, interests and, you know, between different industries over what has been the impact of the internet, you know, on, you know, the news industry, on the economics of it. You know, I think, you know, we would say that, you know, access to things like Google News and Google Search has actually been incredibly powerful for, you know, the media industry. You know, there’s twenty-four, you know, billion visits to, you know, local news outlets happening every month through Google Search and Google News. You know, there’s billions of dollars in ad revenue being generated by the media industry, you know, through having access to those platforms. You know, I think access to AI technologies will create similar opportunities for growth and innovation, but it’s certainly something which I think, you know, we’re very, very sensitive to, you know, what will be the impacts on the industry. Google has been working very, very closely with a lot of local news outlets and news associations, you know, over the years. We really want to have a strong, sustainable news ecosystem. That’s in all of our interest. So it’s something that we’re going to be keeping a very close eye on as AI technology continues to evolve. ROBBINS: So is—other than setting up a paywall, how does—how do news organizations, you know, protect themselves? And I say this as someone who sat on the digital strategy committee at the New York Times that made this decision to put up a paywall, because that was the only way the paper was going to survive. So, you know, yes, Justin, I understand that paywalls or logins kill your advertising revenue potential. But I am—yes, and we had that debate as well. And I understand the difference between your life and the life of the New York Times. Nevertheless, Justin raises a very basic question there. Is there any other way to opt out of the system? I mean, that’s the question that he’s asking, Dex. Is there? 
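On the opt-out question, the main technical lever publishers have today is the robots exclusion protocol: listing a crawler’s user-agent in robots.txt and disallowing it, which only helps if that crawler chooses to honor the file. The sketch below uses Python’s standard urllib.robotparser to check a site’s policy; the site URL is a placeholder, and GPTBot and CCBot are simply examples of published crawler tokens, not an endorsement of any particular approach.

```python
# Sketch of checking whether a site's robots.txt disallows particular crawlers.
# Standard library only; the URL and crawler tokens are examples, and a
# Disallow rule matters only if the crawler actually honors robots.txt.
from urllib import robotparser

SITE = "https://www.example.com"       # placeholder for a publisher's site
CRAWLERS = ["GPTBot", "CCBot", "*"]    # example AI/archive crawler user-agents

# A publisher opting out would serve a robots.txt containing, for example:
#   User-agent: GPTBot
#   Disallow: /

def report_access(site: str, crawlers: list[str]) -> None:
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    for agent in crawlers:
        allowed = parser.can_fetch(agent, f"{site}/")
        print(f"{agent}: {'allowed' if allowed else 'disallowed'} to fetch {site}/")

if __name__ == "__main__":
    report_access(SITE, CRAWLERS)
```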
HUNTER-TORRICKE: Well, you know, I think what that system is, right, is still being determined. Generative AI is, you know, in its infancy. We obviously think it’s, you know, incredibly exciting, and it’s something that, you know, all of us—(audio break)—today to talk about it. But the technology is still evolving. What these models will look like, including what the regulatory model will look like in different jurisdictions, that is something that is shifting very, very quickly. And, you know, these are exactly the sorts of questions, you know, that we as an industry—(audio break)—is a piece which, you know, I’m sure the media industry will also have a point of view on these things.  But, in a way, it’s sort of a difficult one to answer. And I’m not deliberately trying to be evasive here with a whole set of reporters. You know, we don’t yet know what the full impacts really will be, with some of the AI technologies that have yet to be invented, for example. So this is something where it’s hard to say this is a definitively, like, model that is going to produce the greatest value either for publishers or for the industry or for society, because we need to actually figure out how that technology is going to evolve, and then have a conversation about this. And different, you know, communities, different markets around the world, will also have very different views on what’s the right way, you know, to protect the media industry, while also ensuring that we do continue to innovate? So that’s really how I’d answer at this stage. ROBBINS: So let’s move on to Amy Maxmen, who is the CFR Murrow fellow. Amy, would you like to ask your question? Q: Yeah. Hi. Can you hear me? ROBBINS: Yes. Q: OK, great. So I guess my question actually builds on, you know, what the discussion is so far. And part of my thought for a lot of the discussion here and everywhere else is about, like, how AI could be helpful or hurtful in journalism. And I kind of worry how much that discussion is a bit of a distraction. Because, I guess, I have to feel like the big use of AI for publishers is to save money. And that could be by cutting salaries further for journalists, and cutting full-time jobs that have benefits with them. Something that kind of stuck with me was that I heard another—I heard another talk, and the main use of AI in health care is in hospital billing departments to deny claims. At least, that’s what I heard. So it kind of reminds me that, you know, where is this going? This is going for a way for administrators and publishers to further cut costs.  So I guess my point is, knowing that we would lose a lot if we cut journalists and kind of just—you know, and cut editors, who really are needed to be able to make sure that the AI writing isn’t just super vague and unclear. So I would think the conversation might need to shift away from the good and the bad of AI, to actually, like, can we figure out how to fund journalists still, so that they use AI like a tool, and then also to make sure that publishers aren’t just using it to cut costs, which would be short-sighted. Can you figure out ways to make sure that, you know, journalists are actually maybe paid for their work, which actually is providing the raw material for AI? Basically, it’s more around kind of labor issues than around, like, is AI good or bad? HUNTER-TORRICKE: I think Amy actually raises, you know, a really important, you know, question about how we think conceptually about solving these issues, right? 
I actually really agree that it’s not really about whether AI is good or bad. That’s part of the conversation and, like, what are the impacts? But this is a conversation that’s about the future of journalism. You know, when social media came along, right, there were a lot of people who said, oh, obviously media organizations need to adapt to the arrival of social media platforms and algorithms by converting all of their content into stuff that’s really short form and designed to go viral.  And, you know, that’s where you had—I mean, without naming any outlets—you had a bunch of stuff that was kind of clickbaity. And what we actually saw is that, yeah, that engaged to a certain extent, but actually people got sick of that stuff, like, pretty quickly. And the pendulum swung enormously, and actually you saw there was a huge surge in people looking for quality, long-form, investigative reporting. And, you know, I think quality journalism has never been in so much demand. So actually, you know, even though you might have thought the technology incentivized and would guide the industry to one path, actually it was a very different set of outcomes that really were going to succeed in that world.  And so I think when we look at the possibilities presented by technology, it’s not as clear-cut as saying, like, this is the way the ecosystem’s going to go, or even that we want it to go that way. I think we need to talk about what exactly are the principles of good journalism at this stage, what kind of environment do we want to have, and then figure out how to make the technology support that. ROBBINS: So, Ben, what do you think in your newsroom? I mean, are the bosses, you know, threatening to replace a third of the—you know, a third of the staff with our robot overlords? I promised Dex I would only say that once. Do you have a guild that’s, you know, negotiating terms? Or you guys are—no guild? What’s the conversation like? And what are you—you know, what are the owners saying? PIMENTEL: I mean, we are so small. You know, the Examiner is more than 150 years old, but it’s being rebuilt. It’s essentially just a two-year-old organization. But I think the point is—what’s striking is the use of ChatGPT and generative AI has emerged at a time when the media is still figuring out the business model. Like I said, I lived through the shift from the pre-website world, World Wide Web world, to—and after, which devastated the newspaper industry. I mean, I started in ’93, the year that websites started to emerge. Within a decade, my newspaper back then was in trouble. And we’re still figuring it out. Dex mentioned the use of social media. That’s what led to the rise of Buzzfeed News, which is having problems now. And there are still efforts to figure out, OK, how do we—how do we make this a viable business model? The New York Times and more established newspapers have already figured out, OK, a paywall works. And that works for them because they’re established, they’re credible, and there are people who are willing to pay to get that information. So that’s an important point. But for others, the nonprofit model is also becoming a viable alternative in many cases. Like, in San Francisco there’s an outlet called Mission Local, actually founded by a professor of mine at Berkeley. Started out as a school project, and now it’s a nonprofit model, covering the Mission in a very good way. And you have other experiments. 
And what’s interesting is, of course, ChatGPT will definitely be used—you know, as you said—at a time when there are massive cuts in newsrooms; they’re already signaling that they’re going to use it. And I hope that they use it in a responsible way, the way I explained it earlier. There are—there are important uses for it, for information that’s very beneficial to the community that can be automated. But beyond that, that’s the problem. I think that’s the discussion that the industry is still having. ROBBINS: So, thank you. And we have a lot of questions. So I’m going to ask—I’m going to go through them quickly. Dan MacLeod from the Bangor Daily News—Dan, do you want to ask your question? And I think I want to turn it on you, which is why would you use it, you know, given how committed you are and your value proposition, indeed, is local and, you know, having a direct relationship between reporters and local people? Q: Hi. Yeah. Yeah, I mean, that’s really my question. We have not started using it here. And the big kind of question for us is that the thing that, you know, we pride ourselves on, the thing our audience tells us that it values about us, is that we understand the communities we serve, we’re in them, you know, people recognize the reporters, they have, like, a pretty close connection with us. But this also seems to be, like, one of those technologies that is going to do to journalism what the internet did twenty-five years ago. And it’s sort of, like, either figure it out or, you know, get swept up. Is there anything that local newsrooms can do to leverage it in a way that maintains its—this is a big question—but sort of maintains its sort of core values with its audience?  My second question is that a lot of what this seems to be able to do, from what I’ve seen so far, promises to cut time on minor tasks. But is there anything that it can do better than, like, what a reporter could do? You know, like a reporter can also back—like, you know, research background information. AI says, like, we can do it faster and it saves you that time. Is there anything it can do sort of better? ROBBINS: Either of you?  HUNTER-TORRICKE: Yeah, so—yeah, go ahead. Sorry, go ahead, Ben. PIMENTEL: Go ahead. Go ahead, please. HUNTER-TORRICKE: Sure. So one example, right? You know, I’ve seen—(audio break)—using AI to go and look through databases of sport league competitions. So, you know, one, you know, kind of simple example is looking at how sport teams have been doing in local communities, and then working out, by interpreting the data, what are interesting trends of sport team performance. So you find out there’s a local team that just, you know, won top of its league, and they’ve never won, you know, in thirty years. Suddenly, like, that’s an interesting nugget that can then be developed into a story. You’ve turned an AI into something that’s actually generating interesting angles for writing a story. It doesn’t replace the need for human reporters to go and do all of that work to turn it into something that actually is going to be interesting enough that people want to read it and share it, but it’s something where it is additive to the work of an existing human newsroom. And I honestly think, like, that is the piece that I’m particularly excited about. You know, I think coming from the AI industry and looking at where the technology is going, I don’t see this as something that’s here to replace all of the work that human reporters are doing, or even a large part of it. 
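Hunter-Torricke’s sports-league example comes down to scanning a results table for story-worthy anomalies. Here is a minimal sketch using pandas; the CSV layout, column names, and the thirty-year-drought rule are assumptions invented for illustration, and anything it flags is a lead to report out, not a finished story.

```python
# Illustrative sketch. Assumes a CSV of league standings with invented columns:
# season (int), team (str), finished_first (True/False). Requires pandas.
import pandas as pd

def first_title_in_decades(csv_path: str, current_season: int, drought: int = 30) -> list[str]:
    """Flag teams that topped their league this season after a long drought."""
    standings = pd.read_csv(csv_path)
    champs = standings[(standings.season == current_season) & standings.finished_first]
    leads = []
    for team in champs.team:
        past = standings[
            (standings.team == team)
            & standings.finished_first
            & (standings.season < current_season)
        ].season
        last_title = int(past.max()) if not past.empty else None
        if last_title is None or current_season - last_title >= drought:
            since = last_title if last_title is not None else "the club's records began"
            leads.append(f"{team}: first league title since {since}")
    return leads

# Usage: print(first_title_in_decades("local_league_standings.csv", current_season=2023))
```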
Because being a journalist and, you know, delivering the kind of value that a media organization delivers, is infinitely more complex, actually, than the stuff that AI can deliver today, and certainly for the foreseeable future. Journalists do something that’s really, really important, which is they build relationships with sources, they have a ton of expertise, and that local context and understanding of a community. Things that AI is, frankly, just not very good at doing right now. So I think the way to think about AI is as a tool to support and enhance the work that you’re doing, rather than, oh, this is something that can simply automate away a bunch of this. ROBBINS: So let’s—Lici Beveridge. Lici is with the Hattiesburg American. Lici, do you want to ask your question? Q: Sure. Hi. I am a full-time reporter and actually just started grad school. And the main focus of what I want to study is how to incorporate artificial intelligence into journalism and make it work for everybody, because it’s not going to go away. So we have to figure out how to use it responsibly. And I was just—this question is more for Benjamin. Is there any sort of—I guess, like a policy or kind of rules or something of how you guys approach the use of, like, ChatGPT, or whatever, in your reporting? I mean, do you have, like, a—we have to make sure we disclose that the information was gathered from this, or that sort of thing? Because I think, ethically, that is how we’re going to get to use this in a way that will be accepted not just by journalists, but by the communities—our communities. PIMENTEL: Yes. Definitely. I think that’s the basic policy that I would recommend and that’s been recommended by others. You disclose it. That is, if you’re using it in general, and maybe on specific stories. And just picking up on what Dex said, it can be useful for—we used to call it computer-assisted reporting, right? That’s what the web and computers made easier, right? Excel files, in terms of processing and crunching data, and all that, and looking for information.  What I worry about, and what I hope doesn’t happen, is—to follow up on Dex’s example—is, you know, you get a—it’s a sports event, and you want to get some historical perspective, and maybe you get the former record holders for a specific school, or whatever. And that’s good. ChatGPT or the web helps you find that out. And then instead of finding those people and maybe doing an interview for profiles or your perspective, you could just ask ChatGPT, can you find their Instagram feed or Twitter feed, and see what they’ve said? And let the reporting end there. I mean, I can imagine young reporters will be tempted to do that because it’s easier, right? Instead of—as Dex said, it’s a tool as a step towards getting more information. And the best information still comes from going face-to-face with sources, or people, or a community. Q: Yeah. Because I know, like, I was actually the digital editor when—for about fifteen years. And, you know, when social media was just starting to come out. And everything was just, you know, dive into this, dive into that, without thinking of the impact later on. And as we quickly discovered, you know, things like we live in a place where there’s a lot of hurricanes and tornadoes. So we have people creating fake pictures of hurricanes and tornadoes. And, you know, they were submitting them as, you know, user-generated content, which it wasn’t. It was all fake stuff. 
So, you know, we have to—I just kind of want to, like, be able to jump in, but do it with a lot of caution. PIMENTEL: Definitely, yes. ROBBINS: Well, you know, I thought Ben’s point about Wikipedia is a really interesting one, which is any reporter who would use Wikipedia as their sole source for a story, rather than using it as a lead source, you know, I’d fire them. But it is an interesting notion of—do you use this as a lead source, knowing that it makes errors, knowing that it’s lazy, knowing that it’s just a start, versus—and that is a—you know, that’s not even ethics. That’s just your basic sort of rule that we also have inside the newsroom, which then to me raises a question for Dex, which is do we have any sense of how often—you know, this term of hallucinations. I mean, how often does it make mistakes right now? Do you have a sense, with Bard, of how often it makes mistakes? Certainly everybody has stories of fake sources that have showed up, errors that have showed up. Do we have a sense of how reliable this is? And, like, my Wikipedia page has errors in it, and I’ve never even fixed it because I find it faintly bemusing, because they’re really minor errors.  HUNTER-TORRICKE: Right, yeah. I mean, I don’t have any data points to hand. Absolutely it is something that we’re aware of. I expect that this is something that future iterations of the technology will continue to tackle and to, you know, diminish that problem. But, you know, going back to this bigger point, right, which is at what point can you trust this, I think you can trust a lot of things you find there. But you do have to verify them. And certainly, you know, as journalists, as media organizations, I mean, there’s a much larger responsibility to do that than for folks, you know, who may be looking at these experimental tools right now and using them, you know, just to share for, you know, fun and amusement. You know, the kinds of things that you’re sharing are going to really have a huge societal impact. I do think when you look at the evolution of tools like Wikipedia, though, we will go through this trajectory where, you know, at the beginning people will—a lot of folks will think, oh, this is really, like, not that reputable, because it’s something that’s been generated in a very novel way. And there are other more established, you know, formats where you would expect there to be a greater level of fact checking, a greater level of verification. So, you know, obviously, like, the establishment incumbent example to compare against Wikipedia back in the day was something like Encyclopedia Britannica. And then a moment was reached, you know, several years into the development of Wikipedia, where then research was finding that on average Wikipedia had fewer errors in it than Encyclopedia Britannica.  So we will absolutely see a moment come when AI will get more sophisticated, and we will see the content generally being good enough and with more minor errors, which, you know, again, technology will continue to diminish over time. And at that point, I think then it will be a very, very different proposition than what we have today, where absolutely, you know, all of these tools are generally labeled with massive caveats and disclaimers warning that they’re experimental and that they’re not, you know, at the stage where you can simply trust everything that’s been put through them. ROBBINS: So Patrick McCloskey who is the editor-in-chief of the Dakota Digital Review—Patrick, would you like to ask your question? 
We only have a few minutes left. No, Patrick is—may not still be with us. So we actually only have three minutes left. So do you guys want to sum up? Because we actually have other questions, but they look long and complicated. So would you like to share any thoughts? Or maybe I will just ask you a really scary question, which is: We’re talking about this like it is Wikipedia or like it is a calculator. And that, yes, it’s going to have to be fixed, and we have to be careful, and we have to disclose, and we’re being very ethical about it. We’ve had major leaders of the tech industry put out a letter that said: Stop. Pause. Think about this before it destroys society. Is there some gap here that we need to be thinking about? I mean, this is—they are raising some really, really frightening notions. And are we perhaps missing a point here if we’re really just talking about this as, well, it’ll perfect itself. Dex, do you want to go first, and then we’ll have Ben finish up?  HUNTER-TORRICKE: Yeah. So, I mean, the CEO of Google DeepMind signed a letter recently, I think this might be one of the several letters that you referenced, you know, which called on folks to take the potential extinction risks associated with AI as seriously as other major global existential risks. So, for example, the threat of nuclear war, or a global pandemic. And that doesn’t mean at all that we think that that is the most likely scenario. You know, we absolutely believe in the positive value of AI for society, or we wouldn’t be building it.  If the technology continues to mature and evolve in the way that we expect it will, with our understanding of what is coming, it is something that we should certainly take seriously, though, even if it’s a very small possibility. With any technology that’s this powerful, we have to apply the proportionality principle and ensure that we’re mitigating that risk. If we only start preparing for those risks, you know, when they’re apparent, it will probably be too late at that point. So absolutely I think it’s important to contextualize this, and not to induce panic or to say this is something that we think is likely to happen. But it’s something that we absolutely are keeping an eye on amongst very, very long-term challenges that we do need to take seriously. ROBBINS: So, Ben, do you have a sense that—I mean, I have a sense, and I don’t cover this. I just read about it. But I have the sense that these industries are saying, yes, we’re conscious that the world could end, but, you know, we’d sort of like other people to make the decision for us. You know, regulate us, please. Tell us what to do while we continue to race and develop this technology. Is there something more? Are they—can we trust these industries to deal with this? PIMENTEL: I mean, the fact that they used the phrase “extinction risk” is really, I think, very important. That tells me that even the CEOs of Google DeepMind, and OpenAI, and Microsoft know—don’t know what’s up ahead. They don’t know how this technology is going to evolve. And of course, yes, there will be people who—in these companies, including Dex, who will try to ensure that we have guardrails, and policies, and all that. My problem is, it’s now a competitive landscape. It becomes part of the new competition in tech. And when you have that kind of competition, things get missed, or shortcuts are taken. We’ve seen that over and over again. And that’s where you can’t leave this to these companies, not even to the regulators. 
I mean, the communities have to be involved in the conversations. Like, one risk of AI—it goes beyond journalism—that I’ve heard of, which is for me partly one of the most troubling, is the use of AI for persuasion. And on people who don’t even know that they’re being—they’re communicating with an AI system. The use of AI to, in real time, figure out how to sell you something or convince you about a political campaign. And, in real time, figure out how you’re reacting and adjust, because they have the data, they know that if you say something or respond in a certain way, or you have a facial expression—a certain kind of facial expression, they know how to respond. That, for me, is even scarier. That’s why the European Union just passed what could become the law called the AI Act, which would ban that—the use of AI for emotion recognition and manipulation, in essence. The problem, again, is this has become a big wave in tech. Companies are scrambling. VCs are scrambling to fund the startups or even existing companies with mature programs for AI. And on the other hand, you have the regulators and the concerns and fears about what the impact will be. Who’s going to win? I mean, which thread is going to prevail? That’s the big question. ROBBINS: So this has been a fabulous conversation. And we will invite you back probably—you know, things are moving so fast—maybe in six months. Which is a lifetime in technology. I just really want to thank Dex Hunter-Torricke and Ben Pimentel. It’s a fabulous conversation. And everybody who asked questions. And sorry we didn’t get to all of them, but it shows you how fabulous it was. And we’ll do this again soon. I hope we can get you back. And over to Irina. FASKIANOS: Thank you for that. Thank you, Carla, Dex, and Ben. Just to—again, I’m sorry we couldn’t get to all your questions. We will send a link to this webinar. We will also send the link to the Nieman Reports piece that Carla referenced at the top of this. You can follow Dex Hunter-Torricke on Twitter at @dexbarton, and Benjamin Pimentel at @benpimentel. As always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest developments and analysis on international trends and how they are affecting the United States. And of course, do email us to share suggestions for future webinars. You can reach us at [email protected]. So, again, thank you all for being with us and to our speakers and moderator. Have a good day. ROBBINS: Thank you all so much. (END)
  • Artificial Intelligence (AI)
    AI Meets World, Part One
    Podcast
    After decades of seeming like another sci-fi catchphrase, artificial intelligence (AI) is having its moment. Some experts predict that AI will usher in an era of boundless productivity and techno-utopia; others see a new realm of great-power competition and the end of humanity. Nearly all agree that AI will change the world. But will it be for the better?
  • Artificial Intelligence (AI)
    How Artificial Intelligence Could Change the World
    Play
    Artificial Intelligence (AI) could transform economies, politics, and everyday life. Some experts believe this increasingly powerful technology could lead to amazing advances and prosperity. Yet, many tech and industry leaders are warning that AI poses substantial risks, and they are calling for a moratorium on AI research so that safety measures can be established. But amid mounting great-power competition, it’s unclear whether national governments will be able to coordinate on regulating this technology that offers so many economic and strategic opportunities.
  • Women and Women's Rights
    Artificial Intelligence Enters the Political Arena
    Politics is one of the latest industries shaken up by AI. The use of artificially generated content in campaigns could spell trouble for candidates and voters alike in the fight against mis- and disinformation.