  • Censorship and Freedom of Expression
    Press Freedom and Digital Safety
    Ela Stapley, digital security advisor at the International Women's Media Foundation, discusses strategies for the safety of journalists as they report on the 2024 election cycle. Tat Bellamy-Walker, communities reporter at the Seattle Times, discusses their experiences with online harassment and best practices for journalists on digital safety. The host of the webinar is Carla Anne Robbins, senior fellow at CFR and former deputy editorial page editor at the New York Times. A question-and-answer session follows their conversation. TRANSCRIPT FASKIANOS: Welcome to the Council on Foreign Relations Local Journalists Webinar. I’m Irina Faskianos, vice president for the National Program and Outreach here at CFR. CFR is an independent and nonpartisan membership organization, think tank, and publisher focused on U.S. foreign policy. CFR is also the publisher of Foreign Affairs magazine. As always, CFR takes no institutional positions on matters of policy. This webinar is part of CFR’s Local Journalists Initiative, created to help you draw connections between the local issues you cover and national and international dynamics. Our programming puts you in touch with CFR resources and expertise on international issues and provides a forum for sharing best practices. We are delighted to have over forty journalists from twenty-six states and U.S. territories with us today for this discussion on “Press Freedom and Digital Safety.” The webinar is on the record. The video and transcript will be posted on our website after the fact at CFR.org/localjournalists, and we will circulate it as well. We are pleased to have Ela Stapley, Tat Bellamy-Walker, and host Carla Anne Robbins with us for this discussion. I have shared their bios, but I’ll give you a few highlights. Ela Stapley is a digital security advisor working with the International Women’s Media Foundation. She is the coordinator of the course “Online Harassment: Strategies for Journalists’ Defense.” Ms. 
Stapley trains journalists around the world on digital security issues and provides one-on-one support for media workers in need of emergency assistance. Tat Bellamy-Walker is a communities reporter at the Seattle Times. Their work focuses on social justice, race, economics, and LGBTQIA+ issues in the Pacific Northwest. Tat also serves on the National Association of Hispanic Journalists LGBTQIA+ Task Force, as a member of the Seattle Times Committee on Diversity, Equity, and Inclusion. And Carla Anne Robbins is a senior fellow at CFR and co-host of the CFR podcast “The World Next Week.” She also serves as the faculty director of the Master of International Affairs Program and clinical professor of national security studies at Baruch College’s Marxe School of Public and International Affairs. And previously, she was deputy editorial page editor at the New York Times and chief diplomatic correspondent at the Wall Street Journal. Welcome, Ela, Tat, and Carla. Thank you very much for being with us today. And let me turn the conversation now over to Carla. ROBBINS: Irina, thank you so much. And, Ela and Tat, thank you so much for doing this. And thank you, everybody who’s here today. We’re going to chat among us just for about twenty, twenty-five minutes, and then questions. You’re all journalists; I’m sure you’re going to have a lot of questions. So, Ela, can we start with you by talking about the threat environment, as we national security people refer to it? The IWMF announcement on your safety training—and I want to talk about that—referred to, quote, a “spike in physical and digital violence” directed against U.S. newsrooms in particular. And it said, “This year alone, thirty journalists have been assaulted and eight have been arrested in the U.S., all following a surge of anti-media rhetoric.” And you also said that the U.S. 
currently ranks forty-fifth on the World Press Freedom Index, down from thirty-two just a decade ago; and also that this abuse disproportionately affects women and diverse journalists, who are often reluctant to speak out for fear of jeopardizing their careers. Can you talk a little bit about the threat environment here in the U.S., what’s driving it, and the different forms it’s taking—it’s taking? STAPLEY: Yeah, sure. So I’m Ela Stapley. I’m a digital security advisor. So when I look at the threat environment, I’m looking at it from a digital safety standpoint. What we do see in the U.S., and we have seen now for a number of years, is a massive uptick in online abuse—or online violence, as it’s now called, in order to get across the seriousness of the situation. So when we’re talking about online abuse/online violence, what we’re really saying there is attacks on journalists that are now so serious that it’s really limiting their ability to do their work. And it’s really having—I don’t say this lightly—an impact on democratic conversation. So one of the biggest issues that you see in the U.S. is this, along with common tactics that are used with online harassment and online violence. And that includes the publishing of journalists’ personal information online, known as doxing. This includes the home address or personal contact, such as a personal email or personal phone number for example, with an intent to do them some kind of harm. And we do see that being used against journalists in the U.S., especially if they’re covering particular beats. That includes so—kind of far-right or alt-right groups, for example, who—one of their tactics is doxing journalists online or people who talk about them in a way that they don’t agree with. So that is one of the biggest threats we’re seeing. And we’re in an election year. I do think we did see this during the last election. 
There will be an increase in online abuse and harassment during that time, and all the other threats that come with it, which include doxing but also things such as phishing attacks, for example; malware attacks; and possible hacking attacks of accounts, for example, could also be something that we see an uptick in. There are other threats that journalists are facing. If they’re going out and they’re covering the election—so some of these rallies or places where they’re going—the chance of a physical confrontation might be quite high. So you’re seeing there kind of damaged equipment, which sounds like a physical safety issue but is actually a digital security issue as well. Journalists are quite often carrying their personal devices instead of work devices; that’s very common, especially for freelancers. And you know, if you haven’t backed those devices up, the content on them; or if you’re detained and those devices are searched, for example; what about the content that you have on them? How safe is that content? Not very. And whether you have sensitive contacts on there, or content that could put you or your sources at risk, is something that journalists, you know, need to be thinking about, I would say. And we do see that across the U.S. Obviously, some areas, some states may be more complicated than others. ROBBINS: So I want to get to Tat and talk about your experiences and that of your colleagues, and then want to talk to both of you, because it seems like there’s this intrinsic tension here because—I mean, I’m going to really date myself—back in the day, when I started in the business, the idea that we would want to not share our emails or not share our phone numbers with people who we would want to be reaching out to us—we wouldn’t want to hide from people who could be potential sources.
So I understand there has to be, you know, a separation between the public and the private because the private can really be a vulnerability, but it's certainly a very different world from the world in which—when I started. And I will add I started writing with a typewriter back in the—Tat, there used to be typewriters. For you young’uns. OK. So, Tat, can you talk about your experience and that of your colleagues, in Seattle but also the people that you deal with in the groups that you work with? BELLAMY-WALKER: Yeah. So I’ll talk a little bit about, like, my, like, personal experience. So last year I was covering, like, the local response to, like, the national, like, uptick in anti-drag legislation, and I interviewed, like, several, like, trans drag performers about, you know, how that had an impact on them. Like, it severely, like, limited their, you know—in terms of, like, violence, they were experiencing violence, and it was, like, difficult for them to navigate this, like, increasingly, like, hostile climate where, like, anti-drag, like, legislation was just going through the U.S. So from me, like, writing that story, I started to get, like, a lot of, like, transphobic emails targeting me and my sources. And then from, you know, the, you know, transphobic emails and messages, later on, there ended up being, like, this conservative Facebook page that had also seen, like, the stories that I cover. And I’ve been covering, like, LGBTQ issues for a very long time, including writing, you know, personal essays about my experiences as, like, a trans person. And they—like, they wrote this whole—this whole Facebook post about, you know, like calling me, like, a girl, like—it was, like, this whole thing. And they included, like, you know, that I work at the—at the Seattle Times. Like, it was, like, this very intense situation. And it ended up escalating even more to the Blaze writing a story about me. And it just—like, it just escalated from there.
You know, I wrote the story. Then, you know, there was the conservative Facebook page. And then, you know—you know, it ended up in a story being written about me. And so, like, things like that are very—are very serious, and really do have, like, a negative impact on how, like, trans journalists, like, do our work. And for me, like, at that time, it did make me feel pretty, like, traumatized to see, like, how, you know, my story was, like, taken—like, it was—it just—it felt like it was just being used as this—like, this negative force, when I was trying to write about, like, why these drag performers were pushing for their craft, and why they—you know—felt so intensely about pushing for their craft, at a time of such hostility targeting drag performers. So for me, at that time, what was most important was to, like, assess, like, my online presence, and see how far was this going. Like, how far was this harassment going? So I made sure to, like, lock down my accounts. You know, that was very important for me to do. Also, having a friend document the abusive language that was coming up under the different posts was very helpful. And just kind of like logging what was happening to me on, like, a day-to-day basis. Yeah, so that's essentially what I experienced. And it made me want to—I guess, in some way it made me want to make sure that I'm very careful about the information that I put out there about myself. So I have since, like, removed, like, my email address, you know, from the Seattle Times website. I try to be pretty careful about what I put online about myself. Yeah, so that—I would say those—that is how that had, like, an impact on me and my role in journalism. ROBBINS: So before you wrote that story—because, of course, you were writing about people being harassed because of what they did—did you think about the fact that you were going to be harassed for what you did in writing about them?
BELLAMY-WALKER: At that time, I did not—I did not think about that, about how, like, writing about this story would have an impact on me. At that time, I did not think about that. But now, like, in hindsight, I know that it's important to be prepared for those, like, online attacks. And, like, vitriol and everything. But yeah, it just—like, I didn't realize, like, how far it would go. Because at that time I was also pretty vocal about, you know, the lack of diversity of trans journalists in—just in journalism and the industry in general. So that also caught fire online with folks, you know, targeting me for that as well. So, I feel like all of those situations started to make me, like, a very big target for—you know, for these—for these folks. But I know now in the future that it's important for me to prepare for these online attacks and everything. ROBBINS: So you're talking about you preparing. Ela, I want to go back to the training that IWMF does, and this handbook, that we're going to share with everybody, that you all have developed. Which has really, I think, absolutely fabulous worksheets. This is one which I have here, which is an online violence risk assessment, which talks about things like have you previously been targeted. You know, questions that you want to ask yourself, that newsrooms want to ask themselves before the work starts. Can you talk about the training that you all have done, and some of the—some of the things that take place in that training—preemptively, as well as once things have happened? Some of them are big changes—raising awareness in the newsroom—and some of them are actually technical changes. Like some of the things Tat’s talking about, training people about how to even reel back their information. When I read this I thought to myself, God, there’s so much information out there. Is it even possible to pull that back? STAPLEY: Yeah, so, unfortunately, Tat’s story is pretty familiar to me.
It’s a story I’ve heard many times. And what we used to see—so I’ll talk a little bit about how online harassment kind of came to be and where it is now, just very briefly. So it used to be in the newsroom, online violence or harassment was seen as, you know, something that happened to normally women journalists, so nobody really paid that much attention, if I’m honest with you. It’s only in the last few years that newsrooms have started more seriously to pay attention to online violence as an issue in terms of protecting their journalists. So online harassers were also seen as kind of just a guy in a hoodie in their basement attacking a person over and over. And that stereotype still exists. That person still exists. But now there’s a whole other layer, there’s a whole array of other actors involved, including state-sponsored actors, particular groups online who are hacking groups, but also other groups who feel very passionately about particular topics on the internet. And I use the word “passionate” there not in—not in a positive sense but also negative. So they have strong opinions about it. And they will target journalists that publish on these issues. And I think before we could predict who those journalists would be. So if you were covering particular beats you were more likely to get harassment. But now we’re seeing it as just a general attack against journalists, regardless of the beat. So if you’re a sports journalist, you’re likely to get attacked by sports fans. Equally, the journalists who are covering LGBTQ+ issues, anything to do with women, anything to do with race are disproportionately likely to face attacks. And if they are from that community themselves, even more so. There’s a lot of academic research that’s been done on this. So Tat’s situation, unfortunately—for me, in my position, I would see Tat and I would think: This is a story that Tat’s covering. The likelihood of Tat getting abuse is incredibly high.
Now, from our work with newsrooms, what we began to see is that newsrooms started to think about how they could better protect their staff. In some newsrooms, you know, that conversation needed to be had. But some newsrooms were reaching out to us proactively. And I have to say, the Seattle Times was one of those. And I have to give a big shoutout to the Seattle Times for their interest in the safety and security of their journalists. And we’ve worked very closely with the Seattle Times on this guide, actually. So part of the pre-emptive support is not only raising awareness with upper management, because if upper management are not on board it’s very difficult to implement changes, but also putting good practices in place. So the more you can do in advance of an online attack, the better it is for you. Because it’s very difficult to be putting best practices in place when you’re in the middle of a firestorm. So the more pre-emptive steps you can take, the better it is for you, as a newsroom but also as an individual journalist within that newsroom, especially if you fit into one of those categories that are more high risk. So we, at the IWMF, we've been working very closely with journalists. We started training journalists and newsrooms in data protection—how to best protect your data online. This is the kind of information Tat was talking about—your email address, your cellphone, your home address. But what we realized was the training wasn't enough, because after the training the journalist would go, well, now what? And the newsrooms would be, like, well, we don't have anything. So what we needed was policy. We needed best practices that journalists could access easily and, ideally, roll out fairly easily to staff. Now, I will say that a lot of content for this does exist. There are other organizations that have been working on this topic for an equally long time, and they do amazing work.
But what we were hearing from journalists was: There's a lot of information and we need short, simple one-pagers that will really help us protect ourselves. And also, editors were saying: We need it to help protect our staff. So they didn't want to read a fifty-page document. What they wanted was a one-page checklist, for example. So the guide that we created came out of a pilot that we ran with ten newsrooms in the U.S. and internationally—and the Seattle Times was one of those—where we worked with the newsroom very closely, with a particular person in that newsroom, to think: What do they need and how could we implement that for them? In some cases, as at the Seattle Times, they created their own guide for online harassment. In some cases, newsrooms could only really manage to have a checklist that would help them protect staff data as quickly as possible. So it really depends. Different newsrooms have different needs. There's really no one-size-fits-all when it comes to protecting staff. I can't say to this newsroom, you need to do this. I can say, what is your capacity? Because a lot of newsrooms are overstretched, both financially but also in terms of people. And how many cooks in the kitchen? Generally, the bigger the newsroom, the more difficult it is to roll out change quickly because you need more buy-in from different areas within the newsroom. And the most successful pre-emptive support we see is from newsrooms where there is, what we call, a champion in the newsroom. Someone who pushes for this. Someone who maintains that momentum and is also able to communicate with HR, for example. Because some support needs to come from HR. What do you do if you've got a journalist who needs time off, for example, because they've been getting death threats? Support from IT departments. Traditionally, IT departments in newsrooms are responsible for the website, for making sure your email is running.
They're not generally resourced and trained in how to deal with a journalist who's receiving thousands of death threats via their Twitter feed. So getting newsrooms to think about that, and also getting newsrooms to think about the fact that you have journalists who are using their personal social media for work-related content. And you request them to do this. But you are not responsible for protecting those accounts. And that’s a real gray area that leaves a lot of journalists very vulnerable. So their work email may have all the digital security measures in place and helped along by their IT team, but their personal Instagram account or their Facebook account has no security measures on it at all. And that is where they will be most vulnerable. Because online attackers, they don’t just look at the journalist in the newsroom. They look at the journalist, the whole picture. So the data that you have on the internet is really your calling card to the world. So when people Google you, what they see is who you are to them. So they make no distinction there. There’s no distinction for them in terms of work and personal. So at the IWMF what we’ve been doing is really working with newsrooms to help them roll out these best practices as best as possible—to put them together, to help them write them, and then to sit with them and try and figure out how they can roll it out. And some do it quicker than others, but there’s been a lot of interest. Especially now, during the election. ROBBINS: So, Tat, can you—what’s changed since your experience? What do you do differently now? BELLAMY-WALKER: Yeah. I would say maybe like one of the main things that I—that I do differently is, like, trying to prepare ahead of these potential attacks. So that includes, like, doxing myself and removing personal info about myself, like, online. So like signing up for, like, Delete Me, sending takedown requests to data broker sites, submitting info removal requests to Google. Sometimes that works.
Sometimes it doesn’t. But trying to, like, take away that, like, personal information about myself. I would also say, locking down my accounts and using more two-factor authentication. For, like, my passwords in the past, I have just used very simple, easy-to-remember passwords. But I have learned, like, since the training that it’s really important to have a password that’s way more secure. Even for me on the go, I just want something that’s easy to remember. So using, like, a password manager, like 1Password. So that has also been helpful for me. And also paying attention to my privacy settings. You know, on, like, Facebook or Twitter. You know, making sure that it’s only me that can look up, like, my phone or my personal, like, email address. So that is helpful. And just generally, like, using the resources from IWMF’s online violence response hub. That has been very helpful as well, and making sure that I have a good self-care practice. And having, like, a team of folks that I can process these different challenges with, because unfortunately, like, you know, this won’t probably be, like, you know, the last time that I experience threats like this, given the nature of my reporting. So it’s really important for me to also have, like, a self-care practice in place. ROBBINS: So maybe, Ela, you want to go through some of that a little bit more deeply, although Tat sounds like Tat’s really on top of it. So these online data brokers, can you just—do you have to pay them to delete yourself? Or are they legally—you know, do they have to respond to a request like that? STAPLEY: OK. So let me start by saying that the U.S. has some of the worst data privacy laws I’ve ever seen. ROBBINS: We’ve noticed that before. STAPLEY: So it’s very difficult for a journalist to protect their personal information, just because so much information in the U.S. has to exist in a public-facing database, which, for me, is quite astounding really.
If you buy a house—I don’t know if this is statewide or if it’s just in certain states— ROBBINS: Let me just say, as journalists, we are ambivalent about this, OK? On a certain level, we want to protect ourselves. But on another level, that’s really useful if a corrupt person is buying that house, OK? So we’re—you know, we’re not really crazy about the ability to erase yourself that exists in Europe. So we’re ambivalent about this. But please, go on. STAPLEY: Yeah, but I think from a personal safety standpoint it makes you very vulnerable. And the reason for this is that journalists are public-facing. But you don't have any of the protection that is normally offered to, kind of, public-facing people. If you work in government, for example, or if you're incredibly famous and have a lot of money, for example, you can hire people. A lot of journalists don't have that. So it makes them very vulnerable. And they're also reporting on things people have strong opinions about, or they don't want to hear. And they're also very—they're very visible. And this gives people something to focus on. And when they start digging, they start to find more and more information. And when I talk about journalists having information on the internet, I’m not saying that they shouldn’t have anything. Because a journalist has to exist on the internet in some form, otherwise they don’t exist and they can’t get work, right? So it’s more about the type of information that they have on the internet. So ideally, if I were to look a journalist up online, I would only find professional information about them, their professional work email, where they work, probably the town they live in. But I shouldn’t be finding, ideally, pictures of their family. I shouldn’t be finding pictures of their dog in their home. I shouldn’t be finding photos of them on holiday last year, ideally.
So it’s more about controlling the information and feeling that the journalist themselves is in control of the information that they have on the internet, rather than people putting information on the internet about you. So, data broker sites—you’re very familiar with them. As journalists, you use them to look up sources, I’m sure. But people are also using them to look up you. If I was a citizen, never mind just a journalist, in the United States, I would be signing up to a service. There are a number of them available. One of them is called Delete Me. And they will remove you from these data aggregate sites. Now you can remove yourself from these data aggregate sites, but they are basically scraping public data. So they just keep repopulating with the information. So it’s basically a constant wheel of you requesting the information to be taken down, and them taking it down, but six months later putting it back up. So companies will do this for you. And there’s a whole industry now in the U.S. around that. Now the information that they contain also is very personal. So it includes your home address, your phone number, your email. But also, people you live with, and family members, et cetera. And what we do see is people who harass online, if they can't find data on you they may well go after family members. I've had journalists where this has happened to them before. They've gone after parents, siblings. And so it's a bit about educating your family on what you're happy and not happy sharing online, especially if you have already experienced harassment. So that's a little bit about data broker sites. We don't really see this in any other country. It's very unique to the United States. With all the good and bad that they bring. But in terms of privacy of data for journalists' protection, they're not great. Other preemptive things that journalists can do: just Google yourself, and check other search engines.
Look yourself up regularly and just know what the internet says about you—whether it’s negative, whether it’s positive. Just have a reading of what the internet is saying about you. I would sign up to get Google Alerts for your name, and that will alert you if anything comes up—on Google only—about you. And when you look yourself up online, just map if there’s anything there that you’re slightly uncomfortable with. And that varies depending on the journalist. It could be that some are happier with certain information being out there and some are less happy. But that’s really a personal decision that the journalist makes themselves. And it really depends on, what we call in the industry, their risk profile. So what do I mean by that? That’s a little bit what I was talking about earlier, when I was talking about Tat’s case. The kind of beat you cover, whether you’ve experienced harassment previously, or any other digital threats previously, who those attackers may be. So it’s very different, the far right or alt-right, to a government, to, you know, a group on the internet of Taylor Swift fans, for example. So knowing who the threat is can be helpful because it helps you gauge how severe the harassment will be, and also other digital threats. Do they do hacking? Are they going to commit identity theft in your name? So getting a read on that is very important. Identity theft—a lot of groups like to attack in that way, take out credit cards in your name. So it’s quite good to do a credit check on yourself and put a block on your credit if you are at high risk for that. And you don’t need to have this all the time. It could just be during periods of high levels of harassment—for example, during an election period, where we often see a spike in online harassment. Once you have seen information about yourself online, you want to take it down. If you are the owner of that information—it’s on your social media, et cetera—you take it down.
The internet pulls through and removes it. Please bear in mind that once you have something on the internet it’s very difficult to guarantee it’s completely gone. The reason for that is people take screenshots, and there are also services such as the Internet Archive, services like the Wayback Machine. These types of services are very good at taking down data, actually, if you request it. You have to go and request that they remove your personal data. So you may have deleted information from Google or from your own personal Facebook, but maybe a copy of it exists in the Wayback Machine. And quite often, attackers will go there and search for that information and put it online. So if somebody has put information about you on what we call a third-party platform—they’ve written a horrible blog about you, or it exists in a public database—then it’s very difficult to get that data taken down. It will depend on laws and legislation, and that varies from state to state in the U.S., and can be quite complicated. I’ve had journalists who’ve been quite successful with kind of copyright claims, if people are using their image, instead of pursuing it through the law—there are very few laws in place to protect journalists from this, which is something else that’s an issue. If you do receive online harassment, who do you go to legally? Or maybe even it’s the authorities themselves that are harassing you, in certain states. So maybe you don’t want to go to the authorities. But there’s very little legal protection really there for you to get that data taken down and protected. So once you know what the internet says about you, then you just need to make sure you have good account security. What do I mean by that? That means having something called two-factor authentication turned on. Most people are familiar with this these days. They weren’t when I was doing this five years ago. Nobody had heard of it. Most people are using it now.
Most people are familiar with this through internet banking, where you log into your account and a text message comes to your phone or an email with a code. Most online services offer this now. Please, please turn on two-factor authentication. There are different types. Most people use SMS. If you are covering anything to do with the alt-right, the far right, or hacking groups—or if you’re covering foreign news, I don’t know if anyone here is, and you’re covering countries that like to hack a lot—you want to be looking at something a bit more secure, such as an app or a security key. And then, yeah, Tat mentioned a password manager. The most important thing about passwords is that they're long. They should be at least fifteen, one-five, characters. And they should be different for each account. Sorry, everyone. And the reason for that is if you are using the same password on many accounts, and one of those services that you have signed up for gets hacked—they've been keeping your password in an Excel sheet on their server instead of in encrypted form—then everyone will have your password for your Gmail account, your Instagram account, et cetera. That's why it's really important to have different passwords for different accounts. How can you do that? Using a password manager; or it is statistically safer to write them down and keep them safe in your home. If you feel safe in your home, if you're not at risk of arrest and detention and you don't cross borders, statistically it's much safer to write them down. Don't obviously stick them to your computer, but you can keep them somewhere safe in your home. Much safer than having passwords that are very short, or reusing the same password on many accounts or on any other account. That will prevent hacking, basically, which online abusers do like to do. So that's kind of a little bit of a very quick walkthrough on that.
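[Editor's note: Ela's two rules of thumb—passwords of at least fifteen characters, and a different password for every account—are exactly what a password manager's generator automates. As a rough illustration only, here is a minimal sketch of such a generator using Python's standard-library secrets module; the twenty-character default and the symbol set are illustrative assumptions, not IWMF guidance.]

```python
import secrets
import string


def random_password(length=20):
    """Generate a random password of the given length.

    Mirrors the advice in the transcript: at least fifteen
    characters, and a fresh, independent password per account.
    (The default length of 20 and the character set are
    illustrative choices, not official recommendations.)
    """
    if length < 15:
        raise ValueError("use at least 15 characters")
    # Letters, digits, and punctuation; secrets.choice draws from
    # the operating system's cryptographically secure randomness.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# Each call yields an independent password, so no two accounts
# ever need to share one.
pw_email = random_password()
pw_social = random_password()
```

Because each password is generated independently, a breach of one service's password database cannot expose any other account—the scenario Ela describes with the reused Gmail/Instagram password.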
And we do have resources that we can send out which will guide you through that. ROBBINS: So I want to turn it over to the group. I'm sure you guys have questions. You're journalists. So if you could raise your hands or put it in the Q&A, please. I'm sure you have many questions for our experts here. While you're doing that, I'm just looking at the participant list. If not, I'm going to start calling on people. It's something I do all the time. It's the professor side of me that does that. Well, while people decide what they're going to ask: Tat, since Ela said that your newsroom is actually one that's been trained, and that's actually quite good, how much support did they give you? And what sort of support? I mean, if something costs money, did they pay for it, for example? You know, have they—you know, have they given—paid for a password manager? Have they given you, you know? And what's the—what's the support they gave you, and what do you wish they gave you? BELLAMY-WALKER: That's a really good question. Well, I would say, maybe the first thing that they had—like, you know, they sent over the different, like, resources, and, you know, for, like, online harassment. And also, they recommended that I take out my, like, email address from the bio online. Since so many of my—since so many of the messages were coming to my email. But in terms of, like, money towards, you know, getting, like, a password manager or, you know, trying to delete some of this, you know, information about me from the internet, I was not provided, like, support with that. And I think just, like, in the future, I—you know, at the time of these stories I was very new to my position. And I think it's, like, you know, it would be great if, like, news organizations, like, give more trainings on online, like, risk. I think that would be very, like, helpful. Like, alongside having a guide, like a training as well, for, like, new employees. I think that would be very helpful.
ROBBINS: So sort of basic onboarding? I mean, this should be a required—a required part of—a required part of it. Ela, are there newsrooms that are doing that now? They've just sort of included this as part of the onboarding process. STAPLEY: Well, ideally, it would be included in the onboarding process. A lot of newsrooms we've worked with have included it within the onboarding manual. But obviously, training is money. Newsrooms are short on money these days. So it can be quite difficult. And also, if there's a high staff turnover, one of the issues we've noticed is you can create the best practice, you can train journalists, but journalists leave. New journalists come. Who's staying on top of that and managing that? And that's why it's important to get HR involved from the beginning, because maybe HR—in some newsrooms, HR is the editor and also the IT person. So it really depends on the size of the newsroom and how much support they can offer, financially as well. DeleteMe is expensive if you add it up for many journalists within your newsroom, as are other data broker removal services. 1Password actually does free accounts for journalists. So I would recommend that you have a look at that. They have 1Password for Journalism. And you can—and you can sign up for that. But obviously, it costs money. You know, and there are bigger issues newsrooms need to think about as well. So one of the things we encourage them to think about is how much support can you offer, and also to be honest about that support. So what you don't want is a journalist who's been doxed, their home address is all over the internet, they've had to move out, but they find out their newsroom can't pay for that. So where do they go? Do they still have to work during that period, for example?
So getting newsrooms to think through these issues in advance is really helpful for the newsroom, because then they can say, look, if this happens we are able to provide this for this amount of time, and after that, you know, we can do this, this, and this. Some newsrooms can't afford to pay for journalists to move out of their home because their budget is too small, but maybe they can offer time off, for example, paid time off, or mental health support through insurance. Maybe they can start to build community networks in the newsroom. This is increasingly more important as newsrooms—we were speaking about this earlier—are more remote. So people aren't coming into the office so much. So you're not connected to people as much. There's no kind of chatting to people around the water cooler like there used to be. So, you know, this kind of informal exchanging of information between journalists around, like, how to protect against issues, or which issues are causing more conflict, could be tricky. It may not be being picked up on, especially for younger journalists coming into the newsroom, because, you know, they're just starting out on their journalism career. They don't have years of experience behind them. And they can often be vulnerable to attacks. And, you know, I, on several occasions, spoke to editors at newsrooms, small local newsrooms, who had sent out, you know, like, a young reporter or a junior reporter to cover a protest which was actually a far-right or alt-right march. And then that journalist would be doxed. And the journalist was completely unprepared for that. The newsroom was completely unprepared for that. Because they hadn't assessed the risk. They hadn't seen what the risk was, and they wouldn't have known that doxing was a very common tactic used by these groups. So planning for that in advance is really important. That's why risk assessment can be really great—a great tool.
Getting newsrooms to think through risk assessment processes. ROBBINS: So we have two questions. One from someone named Theo. I'm not sure, I don't have a list in front of me. Do you recommend any apps for password managers? This person says: I went to a seminar that suggested LastPass, and then LastPass had its data stolen a few months later. This has always made me actually nervous about password managers. I sort of wondered how secure they are. It seems to me every time I get my snail mail I'm getting another warning that, like, something else of mine has been hacked. And we're going to give you a year of, you know, protection. Are there any of these apps—are they actually secure? STAPLEY: So, one of the things about digital security and safety that journalists really hate is that it's a changing environment. So, something that was safe, you know, yesterday, isn't safe today. And the reason for this is that tech changes and vulnerabilities open up. Hackers attack. Governments and other groups are always looking for ways to attack and find access. And people in my industry are always looking for ways to protect. So it's always in a kind of constant change, which is frustrating for journalists because they just want to say use this tool, it'll work forever, and it'll be fine. And I'm afraid digital safety is not like that. So nothing you use that is connected to the internet in any shape or form is 100 percent safe, nor is any device. And the reason for that is there is always a possibility that there is a vulnerability in some area that could be leveraged. So what you're looking for is really for journalists to stay up to date with the latest tech information. And you're all journalists. So this, you know, it's just research. So it should be pretty OK for you to do. The best way to do it is just to sign up to the tech section of a big newspaper, a national newspaper, and just get it coming into your inbox.
And you'll just stay up on, like, who's buying who, what data breaches there have been, who's been hacked, what hacking groups are out there. You don't have to investigate in depth. You just have to have a general read of what's happening in the global sphere around this issue. I think Elon Musk's buyout of Twitter, for example, is a very good example of, you know, what happens when a tech tool that we all depend on changes hands, right? I know journalists who built their entire careers on Twitter and are now just really floundering because it's so difficult to access audiences and get the information out. So to answer your question: no, nothing is 100 percent safe. But if you're looking to use something, there are certain things that you should look for. Like, who owns this tool? What are they doing with your data? And how are they storing that data? So in terms of password managers, for example, password managers are currently the industry best practice for passwords for the majority of people. There are certain groups within that who may be advised not to use them, mostly the more high-risk ones. Password managers are keeping your passwords in encrypted form on their servers. What does that mean? If someone hacks a password manager, they can't gain access to those passwords. In terms of LastPass, what we saw was security breaches but no actual passwords being accessed. But the fact that they'd had several security breaches made people very unsettled. And, you know, people have been migrating off LastPass, basically. It means their general security ethos may not be as strong as people want. So, you know, you have to move elsewhere. And that goes for any tech tool that you use. So now maybe people aren't using Twitter; they're moving over to LinkedIn. You may be using iMessage one day but may have to migrate over to WhatsApp another. So having many options in play is always—is always good as well.
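The point about encrypted storage can be made concrete. A password manager typically derives an encryption key from your master password, so the server holds only ciphertext that is useless without that password. Below is a minimal sketch of the key-derivation step using Python's standard `hashlib`; the iteration count and other parameters are illustrative assumptions, not any vendor's actual settings:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key from the master password via PBKDF2-HMAC-SHA256.

    The provider stores only data encrypted under this key (plus the salt),
    so a breach of its storage alone does not reveal readable passwords.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

salt = os.urandom(16)          # random per-user salt, stored alongside the vault
key = derive_vault_key("correct horse battery staple", salt)
# The same master password and salt always yield the same key;
# without the master password, the key cannot be recovered from stored data.
```

This is why the LastPass incidents described above, breaches of storage without passwords being read, were possible: the stolen vaults were ciphertext, though weak master passwords or low iteration counts can still put individual users at risk.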
So don't just rely on one thing and expect it to work forever in the world of tech. Generally, it doesn't. ROBBINS: We have—so, Theo, I'm just going to answer your question really quickly, because that's one that I actually know something about. Theo asked whether there are any suggestions—and Theo, I believe, is Theo Greenly, senior reporter at KUCB. Suggestions when finding/choosing a fixer on a reporting trip, especially abroad? Questions to ask or things to look for when initially assessing risk before a trip. I would just say, for finding a fixer, find somebody who's worked in that country already and ask their advice. That's the only way you can do it. It's just—the same way, if you're going down a road and wondering whether or not there are mines on that road, ask people who know. There's, like, no—you just have to rely on the kindness of people who've already worked in that environment. And it's just—that's what I did for years and years and years working abroad: I always relied on people who knew more. I can tell you, the first trip I had was in Haiti. The overthrow of Baby Doc. Yes, I'm that old. And I was flipping out. And I called my husband, a very experienced foreign correspondent. And he said to me, find Alfonso Chardy from the Miami Herald, and do everything that he's already doing. He was completely right. And that's how I learned how to do it. So that's—you know, there's no secret here. It's just find more experienced reporters. And they're usually really kind, and they're really, really helpful. So there's a question from—is it Steve Doyle? StDoyle. What suggestions do you have for journalists facing physical threats? How should journalists be prepared for that? Ela, Tat? I don't know if you—this is focused on digital, but do you guys—have you heard of any training? I know that when my reporters at the Journal went overseas, they had a lot of training on security, particularly the ones who went to Afghanistan and Iraq.
And we had to pay for it. We went to security companies that trained them. Have you heard anything about people being trained for physical protection in the United States? STAPLEY: Yeah, the IWMF is currently actually on their U.S. safety tour. So they're visiting states and training journalists in physical and digital safety. So you can go to the website and check that out. They also do the HEFAT training as well. I'm not a physical security expert, so I can't really speak to that. But, yes, there are organizations that offer this. Though there are a lot more that are paid for than are actually free. But, yes, there are organizations out there that do offer this type of training, press freedom organizations. ROBBINS: Tat, have you done any training on physical security? Because you're out and about in the community all the time. BELLAMY-WALKER: Hmm. Yeah. So I would also echo the IWMF's HEFAT training. During the training, like, we learned how to, like, you know, if we're in a protest and it gets extremely, like, hostile, we learned how to navigate ourselves, like, out of that situation. We learned how to navigate—if there's a mass shooting, like, what to do. If—you know, if we're, you know, getting kidnapped or something, we learned how to navigate that situation. So I would definitely recommend IWMF's HEFAT training; it has something for folks to use to learn how to navigate these different physical threats that can come up in the field. ROBBINS: Great. Well, we will share a link to that as well when we send out our follow-up emails. That's great to know, that that's available. Also, never go in the center of a crowd. Hug the buildings. You don't want to get trampled. It's another thing my husband taught me in the early days. These are all really useful things. Question: For a reporter who covers a remote minority community in a news desert, she must be visible on social media for sources to reach her. At the same time, she's getting harassed/doxed.
We provided Delete Me, but she still needs to be findable. Best practices? That was—I mean, it seems to me, sort of that’s the great paradox here. You know, how can you be visible so people can find you, but at the same time you don’t want to get people—the wrong people finding you? How do we balance that? STAPLEY: Yeah. And, like I said, it’s different for each journalist. Depends on the degree of harassment, and how comfortable, and who’s harassing you as well. So generally, if the people who live close to you are harassing you, the physical threat level is higher. So that’s something to be mindful of. So, you know, if you’re—some of the most challenging cases are journalists who report on the communities that they are living in, and those communities are hostile to them in some form. And it can be very, very difficult for them to stay safe, because they also know where you live. Because, you know, they know your aunt or whoever, like they live three doors down. But I think really it's then about putting best practices in place. So having a plan for what if this happens, what will we do as a newsroom to support this journalist? And maybe seeing—asking the journalist what they feel that they need. So when it comes to harassment on social media, I'm afraid—a lot of responsibility for managing that harassment should come from the platforms, but it doesn't. And there are very few practices now in place, especially, you know, what we've seen with X, or what was previously Twitter. You know, the security there is not as efficient as it once was. I think I could say that. So you can be reporting things, but nothing's happening. Or they say that it adheres to their community guidelines. Often we hear that from Facebook, for example, or Instagram. One thing you should know, if you’re reporting harassment, is you should read the community guidelines and see how that harassment—you need—you need to parrot the same language back to them. 
So you need to show them how the harassment is violating their community standards, and just use the same words in your—in your report. And document it. So keep a spreadsheet of who—what platform it happened on, take a screenshot of the abuse. Don’t just have the URL, because people delete it. So make sure you have the handle name, the date, the time, et cetera. And the harassment, the platform it happened on, whether you reported it, who you reported it to, have you heard back from them. Why would you document it? Well, it really depends. Maybe, you know, it’s just personal, so you can track it. Maybe it’s for you to show editors. Maybe it’s to take to the authorities. But that’s not always appropriate for everybody. You may or may not want to document—and you can’t document everything. So you’re just looking for threat to life there, I would say. And it can be helpful to get—I know Tat mentioned this—to have, like, a community of people who can help you with that. So in the case of this journalist, like, what’s their external support network like? Are there other journalists that journalists can be in contact with? What can you offer that journalist in terms of support? So does that journalist need time every week to kind of document this during work hours so she doesn’t—or, he—doesn’t have to spend their time doing it on the weekend? Do they need access to mental health provision? Do they need an IT team? So it sounds like it's a small outlet, you probably don't have—maybe have an IT team? Or, you know, the owner's probably the IT person. That's normally how that works. So what can you do there to make sure their accounts are secure, and make sure they know that they don't always have to be online? So one of the most important things for journalists is for people to contact them. 
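The documentation routine described above (platform, handle, date, time, screenshot, whether it was reported and to whom) amounts to a simple append-only log. Here is a minimal sketch using Python's standard `csv` module; the column names, file name, and sample entry are illustrative assumptions, not a prescribed format:

```python
import csv
from pathlib import Path

# Columns suggested in the discussion; adjust to your own needs.
FIELDS = ["date", "platform", "handle", "screenshot_file",
          "reported_to", "heard_back"]

def log_incident(path: Path, **incident: str) -> None:
    """Append one harassment incident to a CSV log, writing a header row if the file is new."""
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(incident)

# Hypothetical example entry; the handle and file names are made up.
log_incident(Path("harassment_log.csv"),
             date="2024-05-16", platform="X", handle="@example_account",
             screenshot_file="2024-05-16_threat.png",
             reported_to="platform trust & safety", heard_back="no")
```

A plain spreadsheet works just as well; the point is that each incident is recorded with a screenshot reference and enough detail to survive the original post being deleted.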
But if you're on a device all the time, and that device is just blowing up with hatred, it can be quite useful to have a different device, a different phone number that you use for personal use. And that, you know, maybe you don't work on the weekend, you switch your work phone off so you don't have to be reading all this abuse. I know switching the phone off for a journalist is like never going to happen, but in some cases it could be useful. If you’re in the middle of a sustained, like, vicious attack, you know, just having your phone explode with calls, messages, emails, all just coming at you 24/7, is really not great. And it really impedes your ability to do work as well. So, you know, putting a bit of separation there, and helping that journalist—letting that journalist know that you support that journalist doing that is really helpful. That’s a really good, important step for a newsroom to do, kind of giving them that support. ROBBINS: So one of the things that Ela said, and, Tat, I want to ask you about it. Ela said something about knowing something about who your attacker is, because then you might know more about whether they just—they’re just going to dox you—I don’t mean “just”—but if they’re going to focus on doxing, versus they maybe want to hack your personal accounts, or they want to go after your aunt, or they may actually come to your newsroom and physically threaten you. That people have patterns of their attacks. When you were getting attacked over the story you were doing about drag laws, did you have a sense—did you know who was attacking you? Did you research it? BELLAMY-WALKER: Yeah, I did. At first, it just seemed like it was just, like, random folks, you know, from, you know, the internet. But I started to see that there was definitely this, like, conservative Facebook page. Like, everyone from that conservative Facebook page. They were all definitely emailing me. 
You know, I’m definitely maybe not 100 percent sure about that, but it seemed like the Facebook page took the harassment to a whole different level, especially because they included, like, where I work. They, you know, had spoke about like a tweet that I had wrote about, like, the journalism industry in general, in terms of diversity. So many of the attacks started to heighten from the Facebook page, and then the article that was written about me. And so for me, it’s really important for me to, you know, check, you know, what is being, you know, written about me through either Google searches or I will search Facebook, and that’s how I came across this, you know, conservative Facebook page. I think they were called, like, the Whiskey Cowboys, or something like that. Yeah, yeah. So that’s how I look at—that’s how I came across them. It was after I had done, like, a search of my name in Facebook. And if I had not done that search of my name, I would not have realized, like, why it was becoming so intense. Because before then, I did—you know, definitely I get some emails here and there, but never something as targeted as it was. I’m like, whoa, like, these are getting, like, really, really personal. And then with the Facebook page, it was very, very personal attacks on me. ROBBINS: So, Ela, I think my final question to you is, sometimes a Facebook page isn’t necessarily who we think it is. I mean, it could be the Iranians. It could be somebody in New Jersey. It’s not—I mean, there’s Donald Trump, it’s some 300-pound guy in a basement in Newark, New Jersey. OK, well, that’s a story for another day. Do you guys or does someone else have—you know, has done more forensic research so that if we’re getting—we’re getting attacked we can say: That looks like X group, and we know that they tend to mainly focus on doxing, or you probably should be more aware that they’re going to go after your financial resources? 
Is there some sort of a guide for particular groups in the way they do their work? STAPLEY: Not a guide, as such. But, yes, there are journalists who’ve researched the people who have harassed them. And it also makes very good stories—I know journalists who have written good stories about that. And, obviously, there are tech professionals, IT professionals, who can also look into that. They can study things like IP addresses and things. And it helps build up a picture of who the attackers are. But I think here, the important thing is if you are writing on a particular story—on a particular topic or on a particular region of the world, knowing who’s active online with regards to that topic and regards to that region of the world, and what they can do in terms of their tech capacity, is important. Ideally, before anything happens, so that you can put steps in place. ROBBINS: But how would I, if I work at a medium-sized or small newspaper—you know, where would I turn for help for that sort of risk assessment, as I’m launching into that? You know, how would I know that if I’m going to go down this road that I might draw the ire of X, Y, or Z that has this capacity? Where would I look for that? STAPLEY: Yeah, speaking to other reporters who cover the same beat is very helpful, whether in your state or just, like, if you have reporters in other areas of the country or in other countries. You know, if you’re covering international news, like, speaking to them and finding out if they—what digital threats they’ve faced is a really useful step. So connecting to that network, like we talked about fixers in different countries. Like, getting a feel for it. But ideally, this should come from the newsroom themselves. So, you know, ideally, newsrooms should be proactive about doing risk assessments. And ideally, they should train managers. They should train editors on this. 
So a lot of responsibility does kind of fall to the editor, but a lot of them haven't been trained in how to, like, roll out a risk assessment appropriately. And so getting newsrooms to really be proactive about this, training their editors, and being—you know, looking at the risk assessments, putting them in front of people, and asking them to fill them out. Because the risk assessment really is about mitigating risk. It's getting you thinking: what are the risks? How can you reduce them in a way that makes it safer for you to go about your daily life, but also to continue reporting? Which, at the end of the day, is what all journalists want to do. ROBBINS: Has anybody—like Pew or anybody else—brought together sort of a compendium of, you know, significant online attacks that journalists have suffered, sort of organized by topic or something? That would be really useful. STAPLEY: Yeah, there's a number of organizations that have published on this. There's been a lot of academic research done. The ICFJ and UNESCO did one; it's called The Chilling. That was a global look at online violence against women journalists, and involved a lot of case studies. We have our online violence response hub—which Tat mentioned earlier, and which I'm very pleased to know that Tat was using—which is a one-stop shop for all things online harassment-related. And there you will find the latest research. So you can go there and search for academic research, but it also has, like, digital safety guides, guidance for newsrooms, as well as for journalists and for those who want to support journalists to better protect themselves. ROBBINS: That's great. Ela, Tat, thank you both for this. I'm going to turn it back to Irina. We're going to push out these resources. And this has just been—I'm fascinated. This has been a great conversation. Thank you so much, both of you. STAPLEY: Thank you. FASKIANOS: Yes. And I echo that.
Ela Stapley and Tat Bellamy-Walker, and, of course, Carla Anne Robbins, thank you very much for this conversation. We will send out the resources and the link to this webinar and transcript. As always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest developments and analysis on international trends and how they are affecting the United States. And of course, you can email us to share suggestions for future webinars by sending an email to [email protected]. So thank you for being with us today. And thanks to all of you for your time. We appreciate it. ROBBINS: Ela and Tat, thank you for the work you do. Thanks, Irina. (END)
  • Technology and Innovation
    The Largest Senate Judiciary Committee Audience Is on Capitol Hill—and at Home—Today
    Today's Senate hearing on big tech and the child exploitation crisis should remind the public of Section 230's provision on parental controls, and the real-world analogies to how social media platforms operate.
  • Human Rights
    Women This Week: French Athletes Prohibited from Playing in Hijab for 2024 Olympic Games
    Welcome to “Women Around the World: This Week,” a series that highlights noteworthy news related to women and U.S. foreign policy. This week’s post covers September 23 to September 29.
  • Technology and Innovation
    The TikTok Trap
    TikTok is an easy scapegoat, but the lack of tech regulation and data protection is the underlying cause of our collective anxiety in the digital age.
  • Nigeria
    The Nigerian Conundrum
    When an irresistible society meets an immovable state. 
  • India
    Women This Week: Gender Disparities Rise in India’s Workforce
    Welcome to “Women Around the World: This Week,” a series that highlights noteworthy news related to women and U.S. foreign policy. This week’s post covers April 8 to April 14. 
  • Education
    Academic Webinar: Media Literacy and Propaganda
    Renee Hobbs, professor of communication studies and founder and director of the Media Education Lab at the University of Rhode Island, leads the conversation on media literacy and propaganda. FASKIANOS: Thank you, and welcome to today’s session of the Winter/Spring 2023 CFR Academic Webinar series. I am Irina Faskianos, vice president of the National Program and Outreach here at CFR.   Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/academic. As always, CFR takes no institutional positions on matters of policy.   We are delighted to have Renee Hobbs with us to talk about media literacy and propaganda. Professor Hobbs is founder and director of the Media Education Lab and professor of communication studies at the University of Rhode Island. Through community and global service as a researcher, teacher, advocate, and media professional she has worked to advance the quality of digital and media literacy education in the United States and around the world. She is a founding coeditor of the Journal of Media Literacy Education, an open-access peer-reviewed journal that advances scholarship in the field. She’s authored twelve books on media literacy, published over a hundred-fifty articles in scholarly and professional journals, and she was awarded in 2018 the Research Excellence Award from the University of Rhode Island.   So, Renee, I can think of no one better to talk to us about this topic, very important topic, that you’ve been researching and advocating on for over thirty years. So, let’s start by defining media literacy and propaganda and why it is so critical for all of us to deepen our understanding of these topics.   HOBBS: So happy to be here, Irina. Thank you so much for the opportunity and the invitation to a dialogue.   I’ll take about—I’ll take about ten minutes and talk about media literacy defining and propaganda defining, and then we can have a robust and vigorous exchange of ideas. 
I’m looking forward to questions and comments from everyone who’s joining us today.   Why don’t we start with the phrase media literacy because media literacy is best described as an expanded conceptualization of literacy. So just as we think about literacy as reading and writing, speaking and listening, media literacy includes critical analysis of media and media production.   So to be an effective citizen in an information age, reading and writing and speaking and listening is no longer enough. One has to be skillful at critically analyzing all the different forms and formats and genres that messages now come to us in, and one has to be effective in communicating using media, using digital platforms.   So media literacy is literacy for the twenty-first century. Now, media literacy is sometimes taught in schools and often taught in the home but maybe not taught enough. The best evidence we have in the United States is that about one in three American students gets some exposure to media literacy in their elementary or secondary years, and because of deep investment in media literacy by the European Union, the European Commission, and quite a lot of research work happening in the twenty-eight member states, there is a robust and global community of media literacy educators and they come from very different backgrounds and fields.   They come from psychology, they come from sociology, they come from journalism, they come from education, they come from the humanities, even the fields of art and design. So to be media literate actually includes, if you think about it, a lot of different competencies, not just the ability to fact check, and media literacy isn’t just about news and information because we use media for all kinds of purposes, right, as media inform, entertain, and persuade us. And so media literacy considers media in all its complex functions as part of daily life.   OK. So how about the term “propaganda”?   
Irina, this is a much harder word to define and, actually, some people have quibbled with me about my definition of propaganda. But my definition of propaganda is rooted in a deep understanding of the way the term has been used over, well, 400 years now.   In its original formulation propaganda was spreading the Gospel, the good news, as the Catholic Church tried to spread its messages about faith to people around the world. In the twentieth century the term began to be understood as a way to unify people. Propaganda was a way to build consensus for decision making, especially in democratic societies.   And then, of course, during the middle of the twentieth century it took a darker turn as we recognized how Nazi propaganda was used to lead to genocide, right, to destroy—to attempt to destroy and to create mass murder. So the word propaganda is kind of loaded with that twentieth century history.   But, yet, when we lived through the pandemic—here you are. You lived through it, didn’t you? (Laughs.) You lived through the pandemic because you got exposed to what I would call beneficial propaganda—propaganda that told you to wear a mask, propaganda that told you to get vaccinated, propaganda that said use social distancing.   So to understand propaganda and all its complexities we could say propaganda is communication designed to influence behavior, attitudes, and values, and propaganda is a form of mass communication, right.   So it isn’t persuasion that just happens, you know, you and me deciding, you know, should we go for pizza or Chinese for dinner tonight, right. I’ll try to persuade you. You try to persuade me. When we do it to large numbers of people and we use mediated symbols we’re engaging in propaganda.   So propaganda is a really important concept. Its meaning is situational and contextual, which is why when I work with students I often talk about how our understanding of propaganda is inflected by our cultural histories.   
So, for instance, when I’m working with educators in Croatia, having had a long history of influence in the Soviet and, you know, in the communist era, their understanding of propaganda is inflected by the exposure to state-disseminated messages. And so the meaning of propaganda in your country and your cultural context might differ.   In Brazil, Irina, the word propaganda just means advertising, right, and advertising is a type of propaganda. Diplomacy can be a form of propaganda. The actions of government, politicians, can be a form of propaganda, but so can entertainment function as propaganda and so can education.   So propaganda is a really rich concept. Why is it important? Why is it important that we use media literacy skills like asking critical questions about media with propaganda?   Well, because propaganda tries to influence us by bypassing our critical thinking and the best way that propaganda has tried to change our behavior and influence our attitudes is by activating strong emotions, simplifying information, appealing to our deepest hopes, fears, and dreams, and attacking opponents, and these four mechanisms of propaganda can be used responsibly or irresponsibly.   So we are vulnerable to the terrible side of propaganda if we aren’t vigilant.   FASKIANOS: Fascinating. So in terms of the literacy how are you teaching this? Are you teaching students how to discern between the propaganda that is the good propaganda and, I mean, what—how do you make that distinction?   HOBBS: Got it. So students—propaganda—there’s a bunch of big ideas about propaganda that are really useful to understand. One is propaganda is in the eye of the beholder. So I don’t try—I don’t tell students what’s propaganda and what’s information, right. I encourage students to engage in a process of asking critical questions to come to their own conclusions about that.  And just want to show you one tool I use, Irina, in my teaching, I call it the media literacy smart phone. 
I’m going to show it to you a little bit so you can see it. The smart phone has some buttons on it that invite you to ask these questions, like this one. Reality check—what is accurate or inaccurate about this message? That’s a good question to ask when you’re trying to determine whether something is harmful or beneficial propaganda.

Or how about this one? Public gain or private good—who’s making money from this message? Answer that question and you can often gain insight on the difference between harmful propaganda and beneficial propaganda.

Or how about this one? What’s left out? You know, the best way to spot propaganda is to notice what’s missing, right, because all media messages have a point of view, right. All media messages are selective and incomplete. So to identify the point of view of a media message notice what’s missing, what’s not being said, what’s left out.

There’s the “values check” button, the “read between the lines” button, the “stereotype alert” button. Propaganda often uses stereotypes to create in groups and out groups. If you’re in the in group propaganda feels really good—(laughter)—and if you’re in the out group you are being painted as an enemy, a villain, a dangerous person. There’s the “solution’s too easy” button. And there’s “record—save for later”: you know, with the world we live in, where we’re constantly swiping, clicking, we’re devoting only a few seconds to media messages because we’re moving so fast through so many of them.

This button reminds us that we actually have to make choices about what to pay attention to, what to allocate our attention to, and that means we sometimes have to slow down, right. So learning to allocate your attention and decide which messages deserve your attention and which messages don’t, these are all media literacy competencies.

So we aren’t telling people what to think, right. We aren’t—we aren’t naming that’s misinformation, that’s malinformation. We don’t do any of that.
What we do is invite people to ask critical questions like who’s the author and what’s the purpose? What techniques are used to attract and hold your attention? How might different people interpret this message differently? What’s omitted? What are the values presented?

We want people to think for themselves because media literacy is a literacy practice, and when people have these habits of mind built in, when they use them automatically when they’re reading the news, when they’re being persuaded, when they’re being entertained, then this goes back to the Enlightenment, Irina. We trust that people can differentiate between quality and junk, right, when they put in the cognitive effort, when they’re effortful and strategic. And this kind of work can’t be done by yourself. It has to be done with others.

I mean, think about that question, how might different people interpret the message differently. This is why discussion and dialogue are so critically important to analyzing propaganda and to developing media literacy competencies.

FASKIANOS: Great. Fascinating.

Let’s go to questions from the group. We already have a few in the chat. You can also raise your hand and I will go back and forth between them. If you do write your question in the chat or the Q&A box, please tell us who you are.

So I’m going to go to the first question from Andrew Jones, who’s an assistant professor of communications at Davis & Elkins College in Virginia (sic; West Virginia). Would you draw a distinction between propaganda and public relations or do you see the two terms as interchangeable?

HOBBS: Ha ha, great question. Of course, I’ve had vigorous discussions about this with my students and with my colleagues. In 1928, Edward Bernays wrote a book called Propaganda and it became a classic in the field of communication. He very quickly recognized that the term propaganda was so negatively loaded that he changed the name to public relations.
So the grandfather of public relations understood that the terms propaganda and public relations are, I would say, kissing cousins. So I don’t generally differentiate, because I think there’s a lot of utility to using the word propaganda in its big tent meaning, right.

So what we don’t want to do is just have propaganda be used as a smear word. That’s the term Neil Postman talked about when he said, you know, that’s a shortcut to critical thinking, right. By labeling something propaganda, i.e., bad, now you don’t have to think about it. Now you don’t have to ask critical questions, right.

So we want to do whatever we can to make people think. So advertising can be a form of propaganda, right, and education can be a form of propaganda and entertainment can be a form of propaganda, and to determine whether you think it’s propaganda or not you really have to look very carefully at the form, the context, the audience, the purpose.

You have to really look at the whole rhetorical situation, make that determination yourself. What do you think? Are propaganda and public relations the same or are they different?

FASKIANOS: OK. People are not raising their hands but they’re writing their questions. So the next question is from Chip Pitts, who’s a lecturer at Stanford University, and it kind of follows on to what we were just talking about: how to distinguish between propaganda and truth or falsity, if that distinction is important.

HOBBS: Oh, that’s a great question. This goes back to the earliest definitions of the term propaganda. It has long been recognized, even at the very beginning of the First World War, when propaganda really was becoming a tool used by governments, that propaganda works best when it uses truth, right.

So propaganda can use truthful information, half-truths, or lies and, of course, Goebbels was famous for saying that the best propaganda is truthful, right. (Laughs.)
So propaganda can be truthful and very, very dangerous, right. Very harmful.

And so I think it’s important to recognize that propagandists can use truth, half-truths, or lies.

FASKIANOS: Yeah. So how do you, though, distinguish or have people, if you’re not telling people—you know, you’re teaching students how to think critically, which is so important. But as we saw with January 6 there is a subset of people who do not call it an insurrection.

HOBBS: Right.

FASKIANOS: You know, so we do have different groups that have—are using a different basis—set of facts. So what do you do in that case?

HOBBS: So media literacy is really rooted in this idea that we are co-learners in the search for truth, that none of us has a handle on it completely, and we all need each other to appraise the complexity of what’s going on in the world.

So dialogue and discussion become a really central pedagogy of media literacy, with this idea that we want to engage with each other from a position of intellectual humility. When I come into the classroom and I decide you can only call it an insurrection and if you call it a riot there’s something wrong with you, then I’ve created an in group and an out group, haven’t I? And I’ve set up a hierarchy that says if you agree with me you’re right and if you don’t agree with me you’re wrong. I can’t really have a discussion, can I?

That discussion is going to be false or artificial. It’s going to be stilted. Some people are going to be silenced in a discussion where I set the terms of what truth is, and that’s the very phenomenon we’re trying to fight against, right.

But if I come in with these critical questions and put you in the position of having to say how are they grabbing my attention, what is true, what seems accurate and inaccurate, how are stereotypes being used, right, then you have to engage in some genuine thinking.
And so teachers in that position choose not to take the role of an authority telling people what to think but, really, to act as a co-learner, guiding with critical questions so that students come to their own conclusions.

FASKIANOS: Mmm hmm. That’s great. All right.

So we do have a raised hand. I’m going to go to Beverly Lindsay. And, Beverly, if you could tell us who you are—I know who you are, but for the group.

Q: I’m Beverly Lindsay, University of California multi-campus.

I spent a number of years working in the Department of State, in particular the Bureau of Educational and Cultural Affairs, and I’m still doing some funded programs from them. And years afterwards I was able to speak with the late Secretary of State Dean Rusk. I wasn’t in the State Department when he was there, so we’re talking about a more recent period.

One of the statements that he made to me was that the best propaganda has no propagandistic values. Years later, when I was an international dean at a former university, the executive vice president and the provost said to me, because this was a university-wide program, that getting a Fulbright was simply propaganda in developing countries.

So you had two different views from two knowledgeable people. How would you think we might think about those types of responses now? He valued the—if you got a Fulbright to Oxford?

HOBBS: I really love this question, Beverly, and I actually do something on this with my students as we look at the Voice of America, right, and we look at, well, this is journalism, right, and it’s journalism that’s designed to bring diverse perspectives on world issues to people in countries where they may not have this kind of journalism and, at the same time, there is a distinctly American ideology to this kind of journalism, right.
And so there’s a very interesting way in which maybe both of those ideas, maybe both of those frames that you just presented to us, maybe both of them are true, right. And I feel like it’s quite liberating to acknowledge that there’s some truth in both of those ideas, right: that the best diplomacy doesn’t have a propaganda intent, and that soft power in whatever form it takes is strategic and intentional and is designed to accomplish a policy objective.

FASKIANOS: Great. So I’m going to take the next written question. Oh, Beverly has raised her hand. So I think there’s a follow-on before I go to the next one.

Beverly, do you want to follow up? You’re still muted.

Q: Sorry. If someone has a Fulbright to University College London or Oxford or one of the redbrick universities in the United Kingdom, why would that be propaganda in one country and not in another? Are we assuming that the people in England are more sophisticated?

HOBBS: Hmm. I can’t speak to the specifics of that situation, but I do think that one of the reasons why we say that propaganda is in the eye of the beholder is that meaning is not in texts. Meaning is in people, right. So as we humans try to use symbols to communicate and express ourselves, right, there’s slippage—(laughs)—right, between the meaning I’m encoding as I’m using language and words right now, right, and the meaning that you’re interpreting, because I’m making my choices based on my cultural context and you’re making meaning based on your cultural context.

So that humility, the humility of recognizing that we’re imperfect meaning makers, lets us be in a position where, again, both points of view might have validity. And one of the pedagogies that we try to emphasize in media literacy is that listening with genuine curiosity and asking good-faith questions is more generative of learning than using questioning as a mechanism of attack, right.
And so we can see in our public discourse right now that we all have learned very well, right, how to weaponize information, right—(laughs)—how to use it for powerful purposes. But when we’re talking about education we adopt this stance of being open to the multiple interpretations that exist in any given context. So that’s the only way I can respond to that question.

FASKIANOS: So I’m going to go next to Asha Rangappa, who’s a senior lecturer at Yale.

It seems that the question is source and intention, not truth. Russia can say something that is true, but if they do it by covering up that they are the source of that content—black propaganda—with the intention of causing division and chaos, that’s still propaganda. So can you talk about how Russia is using propaganda in their war with Ukraine?

HOBBS: Oh, absolutely. What a great question, and thank you so much for pointing that out. There are two really important ideas in your question that I want to just underline and amplify.

One is that to be critical thinkers about propaganda the first question we want to ask is who’s the author and what’s the purpose. So many propagandists try to disguise that authorship, right, and there are so many ways to do that.

It’s so easy to disguise your identity. You can use a technique called astroturfing: you can set up a nonprofit organization, give it a little bit of money, and it sends out the message, right, and you, the company or government, whatever you are, you have some distance from it.

There’s, of course, sponsored content. It looks like it’s news but it’s really funded. It’s really propaganda. It’s really an influence operation. So the first thing we want to try to do, whenever we can, is figure out who made the message and what is the purpose, and that’s why your second point is so, so important and I want to amplify this idea, this question about intentionality—what’s the author’s purpose.
But there’s something complicated about that, too, which is that intentionality is fundamentally unknowable. (Laughs.) I mean, we can make inferences about intentionality. But that’s what they are. They’re inferences.

Now, that being said, of course, we definitely see the very many and very creative ways that Russia has been active in creating and stoking and leveraging in groups and out groups to deepen divisiveness in this country and all around the world and in Ukraine, well before even the invasion of Crimea.

The Ukrainians were very much tuned into this, and some of the best work happening in media literacy education was happening in Ukraine even before Crimea, because they were so clearly aware of how propaganda was being used to create division between Ukrainians.

So this is partly why one of the things we want to help students recognize is how in group and out group identities can be amplified or weaponized through the power of language, right, the words we use to describe others, right, through the power of symbols and metaphors. And this goes all the way back to George Orwell in the 1930s, who wrote brilliantly about propaganda and said basically every time humans open their mouths they’re persuading, right—(laughs)—by the very word you choose, right.

Irina, you chose insurrection. I chose riot. In the very choice of language we’ve got a point of view there, right.

FASKIANOS: Mmm hmm.

HOBBS: As we have, like, heightened consciousness about that, then that really helps us recognize the very subtle forms that propaganda can take, and I think in the case of the Russian propaganda we see some brilliantly devious and terrible ways that propaganda was used to divide Americans and to polarize, and the polarization that we’re now experiencing in our country was created intentionally and strategically and is still being created intentionally and specifically by a whole bunch of different actors, not just foreign agents, I might add.
FASKIANOS: OK. So I am going to Holley Hansen, a teaching assistant professor and director of undergraduate studies at Oklahoma State University, who asks: even if people are able to teach media literacy techniques, how do you counter the impact of the algorithms in social media, especially when they seem to reward extremist messages?

HOBBS: Yeah. Great question, and this is absolutely huge. It’s why the media literacy community is really working hard on a concept we call algorithm literacy, right, which is understanding how, increasingly, the messages that are in your media environment are tailored in ways that reinforce your prejudices, reinforce your beliefs.

There are a lot of really cool activities that you can do with this. There are lesson plans, materials, and resources on our website at MediaEducationLab.com. But, you know, my Google is not your Google and my Facebook is not your Facebook.

So one activity that I always do at the beginning of every semester with my students is I have them—we have certain keywords that we might use. Like, we might put in country names like Finland, Slovenia, the Philippines, and now take a screenshot of what comes up on your Google. Within a group of thirty students my students will have different results on Google, and then they’ll be able to sort of unpack how their Google has been trained by them, right, algorithmically, to present them with some results and to deny them some other results.

This is a big a-ha for students, and I think for all of us it’s so easy not to be aware. Again, we tend not to notice what we don’t see, right. So we often aren’t even aware of how algorithmic bias is influencing our worldview. That’s another reason why media literacy educators insist on using dialogue and discussion and why, increasingly, educators are bringing people together using the power of Zoom technology from different regions of the country, different states.
So my colleague Wes Fryer in Oklahoma is working with middle school students in New Jersey to bring Oklahoma middle school students and New Jersey middle school students together to have dialogue and discussion because we—the algorithmic biases are not—they are not just limited to individuals. They also exist within community context and cultural milieus as well.   FASKIANOS: Fantastic. Let’s go to Serena Newberry, who’s raised—has a raised hand.   Q: Hello. I’m Serena Newberry. I’m a Schwarzman scholar at Tsinghua University in Beijing.  And somewhat building upon the previous question on Russia-Ukraine propaganda in addition to the critical thinking questions that you mentioned earlier, how would we go about separating propaganda attached to existing institutions, be it an organization or a country, when there is bias attached to that nation or institution?  For example, you mentioned that the author changed the name of the book Propaganda to Public Relations because rather than trying to convince people of using their critical thinking skills that word had so many negative connotations attached to it. So how do we go about that when trying to move forward in foreign relations and building bridges in other ways?   HOBBS: Yeah. That’s a really great question and I’ll tell you my China story.  I had the opportunity to go teach students media literacy in China on several occasions now and the word propaganda is very complicated in that country, right. (Laughs.)   And so we came to the conclusion that understanding media messages in all their many forms was something that required people to evaluate different levels of—different levels of trust and trustworthiness and that whether—what you called it was less important. What the label is was less important than the reasoning process that you use to make sense of it.   
In China it’s called moral education, right, and it’s done in schools and it’s a way to create patriotic values, to disseminate patriotic values, and in the United States when I got taught I pledge allegiance to the flag of the United States that was also a form of moral education, patriotic education as it were, right.  And so I wouldn’t call that propaganda but I could see how someone might. And so I think it doesn’t matter what we call it. It matters that—what reasoning process and what evidence we use, what critical thinking skills we activate, in a dialogue and discussion.   FASKIANOS: So, John Gentry, adjunct professor at Georgetown University, has a question that also got an up vote—two up votes.   As I’m sure you know, the Soviets and then the Russians developed sophisticated propaganda mechanisms by what they called disinformation and active measures. They developed a doctrine known as reflective control designed to induce targets to make ostensibly independent decisions consistent with their interests.   How do you propose that targets identify and defend against such devices?   HOBBS: Wow. Yeah. That’s really a hard question because what is powerful about that framing is the way in which it is systemic, right, and that framing actually is really useful in understanding why people don’t act in their own best interest—(laughs)—right, sometimes—why sometimes people don’t act in their own best interests, right.   So I—what I appreciate about that observation, and this is—you’re acknowledging the way that sociologists have recognized that when propaganda is used in that way, systemic—in that systemic way it becomes actually really difficult or maybe even impossible for individuals to kind of work their way out of it or through it.  I think Jacques Ellul he defined—his framing for that—he called it sociological propaganda because of his sense that you couldn’t see the forest for the trees. 
So I think both the Russian framing of active measures and the way in which a whole worldview can be cultivated, right, that creates reality for people. And I think that’s partly why we so much value freedom of speech and free markets as ways to protect us from the kinds of abuses of power that are possible in more totalitarian or autocratic societies.

I think that’s why we’re seeing countries, you know, sort of recognize and resist autocratic policies that allow one view of reality to be promulgated and all other interpretations of reality to be denied.

FASKIANOS: Mmm hmm. OK. So let’s go to Raj Bhala, who has a raised hand.

Q: Thank you, and thank you for this wonderful presentation. So thought provoking.

So I’m asking you as a friendly member of the tenured professoriate like you: are we agents of propaganda, too? I have a new book coming out on the Sino-American trade war and I’ve often wondered whether I have fallen victim, in researching and writing, to propaganda from both sides. And, more generally, as you probably know from, you know, our careers, in our scholarship, in our teaching, in who we promote for tenure, the way we review their articles, are we also propagandists, and even more so as universities get ever more corporatized with budget cuts?

HOBBS: Wow. What a—

FASKIANOS: And Raj is at the University of Kansas.

HOBBS: Raj, thank you for asking that really, really great question, and this is a great opportunity to acknowledge the important work on propaganda done by Noam Chomsky at MIT. In his book Manufacturing Consent (co-authored with Edward S. Herman), he said that the information elites, and by that he meant the 20 percent of us who are knowledge workers and work in knowledge industries, he said we’re the ones who are most deeply indoctrinated into a system, an ideological system where propagandists work their hardest on us, and they don’t bother with the others, because if they get us then they get the control. The control is embodied.
So I do think it’s very self-aware and reflective for all of us knowledge workers to be aware of how our own world view and understanding of the world has been shaped through communication—through communication and information—and the stance of intellectual humility is most urgent because—well, I think we’ve seen all around us the dangers of righteousness. What happens when you become too certain that your view of reality is the only view of reality, right? Well, bad things happen, right. (Laughs.) Bad things happen when you become too sure of yourself, too righteous, because you close yourself off to other ways of knowing and other sources of information and other points of view that may be mostly false but have a glimmer of truth in them and that’s the piece of truth you actually need to solve the puzzle, moving forward.   So the problem of righteousness, the danger of righteousness, is something that everyone working in the knowledge industries needs to be aware of and the stance of intellectual humility is so hard because we’re experts, right.  So it’s one of those things that we have to call each other out on and call each other into, right. Come into a place where we can accept that we might have a piece of—we might understand a piece of this complex problem but not all of it.   And my guess is, Raj, that in your writing and in your scholarship you adopt that stance of intellectual humility and that helps your readers recognize you’re offering them something but you’re aware that you don’t have the whole story, because that’s what we do, right, and that’s how we help each other to come closer to the truth.   FASKIANOS: So I’m going to take a written question from Skyler Ruderman, who’s at University of California Santa Cruz.   
How do we start investigating internal propaganda when it is so thoroughly and casually disseminated throughout American mass culture and media, for example, the Department of Defense having oversight and script rewriting authority on movie production if the producers want to use military equipment or the ways twenty years ago consent was heavily manufactured with bipartisan support for the Iraq war in the major news outlets?   These things are easily written off, much like reciting the Pledge of Allegiance, as patriotic or nationalistic. So where do we start?   HOBBS: Wow. What a great—what a great question. Can I share my screen, Irina? Is that possible?  FASKIANOS: You should be able to. We’ll turn—  HOBBS: I should be able to share my screen. Let’s see.   FASKIANOS: There you go.   HOBBS: Can you see my screen right now?   FASKIANOS: We can.   HOBBS: I want to show you two resources that I think are really helpful for broadening our understanding of propaganda in just the ways that your question proposes.   One is: go explore my online learning modules on propaganda and check out propaganda in entertainment, right, or memes as propaganda, election propaganda, conspiracy theories, algorithmic personalization, and even art and activism as propaganda.   And then—let’s see if I can go back up here—and then go check out the Mind Over Media gallery. When I first started teaching about propaganda I was aware that my students live in a different media world than I do, right. I encounter some kinds of media and my students encounter different kinds of media because of what we talked about before—algorithmic personalization and this just gigantic flood of content that we get exposed to as creators and consumers.   
So what I did was I created a tool that makes it possible for anyone anywhere in the world to upload examples of contemporary propaganda or what people think is examples of contemporary propaganda, and because I got some funding from the European Commission to do this work I have propaganda from a bunch of different countries and right now at the top of the list are these kinds of examples of different kinds of propaganda and, you know, some of them are really weird.   Like, for instance, this one, right. The person who uploaded this meme—the meme reads, for those of you who are not seeing the screen, remember when politics attracted the brightest and most intelligent—what the hell happened, right, and it’s got some pictures of politicians.   This person thinks this is propaganda because it attacks opponents and it attacks people who are Republican and it shows that Kennedy, Abraham Lincoln, and George Washington were good. However, it shows Trump as one who’s not very intellectual. And so some student uploaded this and I’m invited to rate this example, do I think this is beneficial or harmful. I think this is probably a little bit—no, I’m not sure how I feel. I’m going to be right in the middle here.   But take a look at the results, Irina. Twenty-seven percent of the people who’ve been to the website say they thought this propaganda was beneficial, 14 percent thought it was harmful, and then most of us are in the middle here. So it turns out that, in some ways, there is an opportunity to examine the stories we tell of the past and how they shape our understanding of the present day.   I’ve been doing that through recovering how propaganda used to be taught in the 1920s and ’30s in the years leading up to World War II as American educators began to be concerned about demagogues like Father McCoughlin (sic; Coughlin) on the radio, right, and the way in which the power of the voice coming—when the voice came into your living room it was a very powerful experience. 
It was so intimate. It was so personal. It had such an emotional power. And we realized that every generation has to address the emotional power of propaganda, because the propaganda that you carry on your digital device, right, has got its own unique ways of bypassing your critical thinking and activating your emotions in ways that can be really, really dangerous.

FASKIANOS: So what would you say about TikTok?

HOBBS: Well, I’ve been fascinated. We’ve been using TikTok a lot in our education outreach initiatives, and the project that I’m working on right now is called Courageous Rhode Island. It’s a federally funded project from the Department of Homeland Security and we’re using media literacy as a violence prevention tool to address the issues of domestic extremism, right.

And so we’ve been looking at TikTok videos that on the surface seem, well, quite entertaining. But then when you spend time actually watching it—you watch it twice, right—and you start asking those critical questions that I shared with you earlier, then you really discover, it’s, like, oh my gosh, this thing actually has a white nationalism agenda or an anti-trans agenda or a misogynistic worldview or an anti-Semitic worldview. But at first viewing it just looked like fun.

So we think it’s really important to help slow down our encounter with TikTok, and when adults do that with teenagers and when teenagers do that with each other and when young adults do that with people of different ages, it can be a mind-blowing learning experience.

And participants who are here on this call can join us on this journey. Every two weeks we have what we call courageous conversations. The next one’s coming up on April 4 and it’s called “High Conflict.” We’re talking about the media messages that put us into conflict with each other and what we can do about them. So TikTok’s one of those media that can incite high conflict.
FASKIANOS: I’m going to take the next question from Pyonhong Yin (ph) at the University of Illinois at Urbana-Champaign.

Q: Hi. Can you hear me?

HOBBS: Yeah.

Q: Oh, OK. Yeah. Thanks for your very interesting talk.

So I am a Ph.D. student in political science at the University of Illinois, and I’m currently working on a paper about propaganda during international conflict, and I just got a question from two professors in a different field of political science, and they asked me: do you think propaganda is costly?

Because I think during a conflict the leaders—you know, the people—they usually make some very aggressive statements, right, and sometimes they might make some empty threats. So, to me, I think it’s costly, because if they do not follow their words then, you know, the public—the majority of the people—do not trust the leaders. But, yeah, so this is just the question. So do you think propaganda is costly?

HOBBS: So that’s interesting, how you’re using the phrase “costly,” right. You’re asking, in a way, what are the consequences of the use of propaganda, right, and I think it’s a really important question because, remember, propaganda can be used to unify, right. So propaganda can be a vehicle that people use to create consensus in a group, right, and coming to consensus is part of the democratic process, right.

That’s how we come to consensus, because it’s an essential way of solving problems nonviolently. But as you’re using the term “costly,” you’re imagining a person, a propagandist, who says one thing in one context for one audience and one goal, and maybe has to walk that back in a different context or at a different time period, and then that may have a cost because trust might be lowered, and I think that’s actually, like, a very important calculus that politicians have to consider in their use of propaganda.
So I really appreciate the idea of the almost mathematical or financial metaphor that’s behind your question. There is a cost, because trust can be increased or reduced, right, and from a politician’s point of view that’s currency, right. That has real value.

But we often focus on propaganda that diminishes trust. I want to make sure that we don’t forget that propaganda can increase trust, right. So it works both ways—the cost and the costliness. And you can learn more about this in my book. I’m putting up a link to my book in the chat, Mind Over Media: Propaganda Education in a Digital Age.

I think one way to interrogate the cost issue is to look at different agents of propaganda. Look, for example, at how activists use propaganda. For instance, Greta Thunberg, the world’s youngest and most important environmental propagandist, right. She’s been very skillful in using her language, her imagery, her messaging to increase her credibility, right, and she’s very aware of how at certain times certain messages might have a cost, and we can go back and look at the history of her speeches and see when she’s made some mistakes, right—when her messages had a cost, right, that weakened her credibility.

And so being strategic—looking at propagandists’ choices and the cost or the consequences or the potential impacts—is a very interesting strategy. So, great question. A very thought-provoking question around that metaphor. Thank you.

FASKIANOS: So I’m going to take a question from Oshin Bista, who’s a graduate student at Columbia University: What are your thoughts on the tensions/overlaps between approaching information with generous curiosity and the inaccessibility of the languages of media? How do we make this form of literacy accessible?

HOBBS: Great, great question. You know, the reason why that’s such a good question is because there is a vocabulary that has to be learned, right.
To critically analyze news as propaganda there’s a whole lot of words you need to know—(laughs)—right. There’s a whole lot of genres that you need to know, right, and that includes, for instance, knowledge about the economics of news. To understand propaganda as it exists in journalism you have to understand the business model of journalism, right—why likes and clicks and subscriptions and popularity are a form of currency in the business, right.

So how to make that more accessible? I think actually journalists and media professionals can go a long way, and one of the groups that I’m paying special attention to are the YouTube influencers who are doing this work through messages that are entertaining and informational and persuasive.

For example, check out Tiffany Ferguson and her internet education series. She’s a twenty-three-year-old recent college graduate who’s been helping her audience—mostly teenage girls, I would say—learn to critically analyze all different aspects of internet culture, right.

That is a great example of somebody who’s using their power as a communicator to help their audience be better informed and make better choices, and I feel like a lot of media professionals can play that role in society.

In fact, another good example of that is Hank and John Green, the quintessential YouTubers, right. So I think media professionals are really well poised to bring media literacy knowledge and concepts to mass audiences, and that’s why they’re a vital part of the media literacy movement globally—not just here in the United States but all over the world.
FASKIANOS: So we’re seeing Congress taking on TikTok and wanting to ban it, and Chip Pitts of Stanford has a follow-up question: Beyond education for media literacy, what laws, regulations, and norms can our government and others deploy to help control the worst harms—required content moderation, applying international human rights standards versus U.S.-style free speech, et cetera? So what is your feeling on that?

HOBBS: Yeah. Great question. Of course, we’re often asked—some people think that media literacy is a substitute for government regulation. But we’re always very attentive to say, well, our interest is in focusing on what media consumers need to know and be able to do.

That doesn’t mean that there isn’t a role for regulation, and, for example, I think one of the easy-to-document positive impacts of media regulation is the GDPR regulation, right, that Germany (sic; the European Union) enacted. That actually benefited the entire world, right.

And so the questions about content moderation and Section 230 and the appropriate ways to regulate social media—these are complex issues that we can’t solve in two seconds, and we certainly can’t solve them globally.

But we can think about how different countries around the world, as they implement social media regulation, become like little laboratories. So as countries pass laws about social media, let’s see what happens, right. Let’s see what the results are culturally and politically. Let’s see what the benefits of that regulation are, and let’s see what some of the unintended consequences might be.

So that’s the only way that we’ll design regulation that accomplishes its beneficial goals without unintended consequences. So I’m kind of happy that states like California are regulating social media now, right. That’s awesome to see—little laboratories of experimentation.
But I’m not prepared to tell you what I think the best approach to regulation is. I think we just need to be attentive to the fact that regulation will be part of the solution in minimizing the harms of communication in the public sphere.

FASKIANOS: Well, unfortunately, we have to end here because we’re out of time, and we have so many more questions and comments. I’m sorry that we could not get to you all.

We will send out the link to this webinar so you can watch it again, as well as links to Renee’s book and to her community conversations. I see it, “Courageous.” I have it up on my screen now for the “High Conflict” event on April 4, and anything else, Renee, that you think. I especially love the questions that you showed us on your phone. I want to get those so I can share them with my family.

So thank you for being with us and for all of your great questions and comments. Appreciate it.

The last Academic Webinar of the semester will be on Wednesday, April 12, at 1:00 p.m. Eastern time. So please do join us for that. We’ll send out the invitation under separate cover.

And I just want to flag for you all that we have CFR-paid internships for students and fellowships for professors. If you go to CFR.org/careers you can find the information there, and you do not have to be in New York or DC. You can be remote, virtual. They’re great opportunities for students even if you are not in one of these two cities.

Please follow us at @CFR_Academic and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues.

Again, Renee Hobbs, thank you so much for this conversation, your research. We really appreciate it and look forward to continuing to follow the really tremendous work that you’re doing.

HOBBS: Thank you so much for the opportunity, Irina. I really enjoyed talking with everybody today. Bye now.

FASKIANOS: Bye-bye.

(END)
  • Women and Women's Rights
    Gender and Power in an Age of Disinformation: A Conversation With Mary Anne Franks
    For women in the public eye, cultivating an online presence is often necessary and far too often dangerous. What can be done to make online spaces safer for women?
  • United States
    Fake News, Then and Now
    The problem of fake news has been with us from the beginning of the Republic, and American democracy was even worse at dealing with it then than it is now.
  • United States
    Bored Apes Play to Type, This Time Online
    The connection between the rise of social media and the decline of American politics and society seems obvious. But what if what everybody knows is wrong? 
  • China
    Where Is the Red Line on China's Internet?
    From reporting on a bombed city in Ukraine to exposing the plight of a chained woman in a shabby room, some of China's social commentators are showing surprising resilience in covering controversial issues.
  • Sub-Saharan Africa
    What’s Happening to Democracy in Africa?
    The pandemic is exacerbating a decline of democracy across sub-Saharan Africa. To combat the trend, the United States and other partners should commit to the painstaking work of bolstering democratic institutions.
  • Digital Policy
    Social Media and Online Speech: How Should Countries Regulate Tech Giants?
    Social media has been blamed for spreading disinformation and contributing to violence around the world. What are companies and governments doing about it?