Tow Center

The Role of Journalism, Law, and Trust & Safety in an AI Dominated World

Anika uses her career journey to argue that the work remains the same even as the terms, technologies, and times change

 

The conversation below has been adapted from the keynote and Q&A given at the Technology, Media, & Privacy Law Conference at the University of Florida in March 2024. We discussed how “AI” is the current label for trendy emerging technology and how “Trust & Safety” is the latest phrase that simply describes the traditional accountability work of journalism and law.

Anika uses her career journey to argue that the work remains the same even as the terms, technologies, and times change. We also highlight and discuss four recommendations for how people whose brains are dedicated to technology, media, privacy, law, and journalism can begin to create a better future, including: 

  1. Creating more training in the field of journalism to prepare future practitioners to join the frontline work of Trust & Safety

  2. A full court press from every sector to come together and create comprehensive and practical external regulatory structures for AI and emerging technology because the purely self-regulatory model of internal Trust & Safety does not work  

  3. Researchers, academics, journalists, and regulators working on AI, technology, and Trust & Safety needing to talk to the humans who have done, and are doing, the work inside of companies 

  4. Trust & Safety workers and whistleblowers who have left the industry creating our own lane as external watchdogs and advocates for change but needing help from academia, civil society, and philanthropy in making this alternative pathway a viable career


Anika: It is truly such an honor to be back at the University of Florida, the place where it all began for me.

As I’ve thought about what I wanted to use my time to say at the Technology, Media, & Privacy Law Conference about Trust & Safety in an AI Dominated World, I’ve been thinking about all those words: large language models, Trust & Safety, AI, technology policy, media law, privacy, and journalism.

Since I left this campus, my career has given me the opportunity to put most of these terms on my resume, which is probably part of the reason why Jasmine invited me to speak. But coming back home to UF has made me think about how it all started for me, what all these words mean, how they all fit together in practice, and what they mean for our future.

Like some of you, I was born on this planet before the internet. During my formative years, all of that changed. And by the time I received my acceptance packet in the mail from UF in the early 2000s, the very first thing I did with my .edu email account was sign up for this hot new website called The Facebook.

When I arrived on campus as a freshman in the College of Journalism and Communications, I had already used the site to virtually meet a network of people who would soon become my social circle in real life. I walked around campus with a hot pink Motorola Razr flip phone clipped to my hip, and I used the desktop computers in the computer lab to type out my reporting assignments.

Back then, I had a love for words. And I had big dreams of making the world a different and hopefully better place for the next generation. Though I wasn’t exactly sure how.

The summer after my first year of undergrad, I got a job as a copy clerk at my local hometown newspaper, the then St. Petersburg Times. The job was menial: I got the mail and made copies. But like many times along my journey, I seemed to be at the right place at the right time.

That summer, the largest newspaper in the state of Florida was going digital. I watched the design and layout floor that was once covered with easels and X-Acto knives be replaced by people sitting at MacBooks using InDesign. By the end of the summer I also got a new job: pushing “publish” on a website and bringing the Times’ reporting to the world wide web.

I could feel that something big was happening in that newsroom that summer. I began to wonder what it all meant. I started questioning how laws based on the printing press were going to adapt to regulate new and emerging technologies. I wanted to know how I could be a part of figuring it all out.

Then, my senior year, I walked into a class required to graduate, and I found my intellectual home. The class was called “media law.” It covered the laws that outline our core American rights and liberties, like freedom of expression and freedom of the press. In it, I saw a place where I could interrogate the impact of new technologies like the internet and social media on those rights.

I became determined to be an expert in “media law.” At the time, there were only three schools in the nation, including UF, that had programs focused on the field or that combined their journalism and law programs. I left UF and set out to explore the other two. And to stay connected with my community after graduation, I also signed up for the newest social media site on the block called Twitter.

I went next to the University of North Carolina School of Law. There, I devoured constitutional law and First Amendment classes. I got my first Blackberry, then my first iPhone, and my first MacBook. I became a research assistant for the Center for Civil Rights and the Center for Media Law and Policy. I dove head first into what was called “cyber law” and worked on a final class assignment to write terms of service for a fake website.

On Twitter, I watched as the Arab Spring unfolded and the new technology started to be credited with the power to shift politics and regimes. I wondered who was making the decisions and rules about the world-changing expression that was happening on the platform.

By the time I was graduating from UNC, I realized that I wanted to “do media policy,” and somehow be a part of it all. I still had no idea what that really meant. But that calling carried me to Columbia Journalism School where the streets of New York City were still vibrating from the Occupy Wall Street movement.

Once again, I could feel that something big was happening in the world around me. So I began to write my thesis about how traditional constitutional principles like freedom of expression were expanding into social media. I titled it, “The Revolution Will Be Tweeted.”

At the time, when I told most people that I believed social media and Twitter were completely transforming our political reality, they laughed at me or looked at me like I was crazy. They doubted that a website that constantly experienced outages and revolved around breakfast opinions could produce a revolution.

I continued to believe differently. But when I graduated, the field that would later be known as “Trust & Safety” didn’t really exist. As hard as I tried, I couldn’t find a job that allowed me to work on policy about new media technologies. So instead, I spent a few years working in law firms while constantly checking job boards with search terms like “media law and policy” and “civil rights and technology” and “technology policy.”

Then one day I saw a posting for a job at a new think tank called Data & Society Research Institute. The work was dedicated to understanding the impacts of new technology on the society and culture forming around us. And the role was specifically focused on what was called back then “big data” and “civil rights.” The job description read like I had conjured my career dreams into reality. I applied, interviewed, and accepted the job. I’d also eventually get the opportunity to meet Jasmine.

This was the beginning of 2015, and the Black Lives Matter Movement was growing both on social media and in protests in the streets of cities across the nation. Against this backdrop, policing technologies boomed. And my work at Data & Society became exploring the policy and legal implications of a wide range of technologies: body-worn cameras, social media surveillance, biometric collection and facial recognition software, predictive policing, predictive algorithms, and risk assessment tools.

Back then, making statements like “technology is not neutral” or “there is bias baked into artificial intelligence” or “no, it’s not like the movie Minority Report,” were radical and controversial. But I was determined to beat the drum and lay the groundwork.

By summer of 2016, I began leading new work that was called “platform accountability” at the civil rights organization Color of Change. I wrote a policy framework and campaign playbook that I hoped could begin to rein in the power of technology and social media companies. Soon, I was working with Airbnb on the first civil rights audit of a tech company.

By the end of 2016, I found myself in Twitter’s headquarters speaking with the head of its growing Trust & Safety team. We were debating how the platform’s policies seemed to be insufficient at the dawn of what would become the Trump era. Then she asked me what I would do to fix the policies that I was complaining about.

At that moment, I realized that I didn’t really know. I had no idea what levers to pull or what language to write that would make a meaningful impact. I didn’t know how the Twitter machine worked. It was something I couldn’t stop thinking about when I left the Twitter offices, especially as reporting and research uncovered the vital role that social media played in the 2016 election and as social media was used to incite genocide in Myanmar.

By 2019, I decided to join the Trust & Safety team at Twitter. Looking back, I can say that despite my background, I honestly had no idea what I was getting myself into.

My job as a senior policy official was to write and enforce the rules for what people could and couldn’t say on Twitter. My team was responsible for several policy areas, like abuse, harassment, privacy, hate speech, violence, sensitive media, non-consensual nudity, and misinformation. We were the highest level of content moderation for the platform.

And it was a baptism by fire. The first week of 2020, my boss came to me with a simple request: could you make sure that World War III doesn’t start on the platform? Somehow, over the next 48 hours, my team and I wrote policy recommendations for how information should safely flow on Twitter in the midst of a geopolitical crisis to avoid offline violence.

Throughout 2020, my team and I would repeat this responsibility as protests broke out around the world in response to COVID-19 mask mandates and stay-at-home orders. The pandemic also brought a new wave of misinformation and radicalized politics. I watched as fringe ideas like the violent overthrow of the United States government or a second civil war began to gain traction on Twitter.

By the summer of 2020, protests were spreading around the world and the tenor of the conversation on Twitter was changing. More people were beginning to talk about the need for a new American Revolution.

It all reached a fever pitch during the 2020 election.

Now, here is the part of my story where I usually start talking about coded language and writing policies. Then I get into the part about how I tried for months to warn Twitter’s leadership that people were openly planning for violence on January 6th. Next, there’s usually the part about being asked to spring into action on January 6th. And then there’s the arguments I wrote and made on January 8th that led to the president being permanently suspended. And then the whole whistleblowing to Congress thing usually comes next.

Thankfully, that’s not what I’m here to talk about today.

Instead, I want to talk about something I’ve never spoken publicly about before: how I felt on January 9, 2021. The only way I’ve ever been able to describe the feeling is “bar exam brain.” Maybe you know it. It’s that feeling of drain after having taken everything you’ve ever read and learned and rapidly applied the logic and lessons to a fast-moving set of facts. And then written it all down as fast as you can.

After doing just that at Twitter from January 6–8, 2021, my brain literally hurt in a way I’d only ever experienced the day after I took the New York bar exam.

But that feeling also told me something important: in the moments when it counted the most, and everything was on the line, my training that began at UF in a media law class prepared me.

In the years since those fateful days, that training remained my foundation as I continued my work in the field called Trust & Safety as a senior policy official at Twitch and by conducting first-of-its-kind research as a fellow at Stanford University. And now, as a Senior Fellow at the Tow Center for Digital Journalism at Columbia University, I continue to write about the field and share my opinions on how it can improve.

Now, you may have noticed that I haven’t tried to give you my definition for the term “Trust & Safety” today. That’s intentional. Because, I hope that my narration of my career journey has shown that “Trust & Safety” is just the latest phrase in a long line of terms that simply describes the traditional accountability work of law and journalism.

This is the work that we have all dedicated our practices to. And the work that we are here to discuss.

Joining us for this discussion is my niece. Earlier this week, she told me that she and her elementary school classmates have debates at lunch about whether AI is going to take over the world. So, believe it or not, Trust & Safety in our increasingly AI Dominated world is also a pretty popular topic among 11 year olds.

It’s clear to me, and to the elementary kids, that we are once again in another one of those moments where new technologies are changing our world. But like I told my niece, I also want to remind us all: it is the job of humans to write the rules that will decide what AI is allowed to do.

Currently, the Trust & Safety format for accountability and self-regulation has been carbon copied from the departments of the social media companies I worked in and placed into the new companies developing large language models, other forms of generative AI, and the next iterations of web technologies.

But the same people doing the exact same things will not produce different results. And we cannot afford to simply repeat the same mistakes from technology’s past.

So how do we begin to create a better future? I really believe that it starts with the people whose brains are dedicated to technology, media law, and journalism. And I have four recommendations for where we can begin.

In the future of Trust & Safety and AI, I first hope that we create more training to prepare future practitioners to join the frontline work. We no longer live in an era where only three schools recognize this work. I believe our disciplines should provide a paved pathway into the technology industry for others, as they did for me. It’s hard to imagine, but when I worked in Trust & Safety, I never worked with anyone else who came from my fields of study. Yet, as my story shows, my training perfectly positioned me to participate in some of the most historic moments in media law history. And I can say from experience that it would probably have been less stressful to workshop a World War III scenario in a classroom than to have had to figure it out on the fly, in real time, in real life. We owe it to ourselves to teach and train the next generation of practitioners.

Second, the reign of social media companies and my journey also show us that even when it was at its best and most resourced, the purely self-regulatory model of internal Trust & Safety does not work. Technology companies have already shown that they have far too much power and that they are not responsible with how they wield it. Now, with the existential threat of AI that worries even pre-teens, we need a full court press from every sector to come together and create comprehensive and practical external regulatory structures for AI and emerging technology. This will require people from industry, academia, civil society, government, and every other sector to work together and share knowledge.

That sharing of knowledge is really key. Third, I also believe that researchers, academics, and journalists working on AI, technology, and Trust & Safety need to talk to the humans who have done, and are doing, the work inside of companies. When I did my own research, I realized that many Trust & Safety workers were sharing experiences with me that they said they had never spoken about before to anyone. When I asked why, one of them told me, “no one has ever asked.” Asking these questions about how the past unfolded is the only way to provide nuanced solutions for a better future.

Lastly, I encourage former Trust & Safety workers to join those of us who have left the industry and are creating our own lane as external watchdogs and advocates for change. I have been thrilled to see so many of my former colleagues writing and publishing terrific articles and essays that have provided new depth to conversations about technology accountability and regulation. But we need help from academia, civil society, and philanthropy in making this alternative pathway a viable career.

But none of these recommendations is without risk. And I must acknowledge those risks. If the past is prologue, we can anticipate a backlash.

While it may be expanding to new technologies, the reality is also that the work of holding power to account through Trust & Safety is under attack. Technology leaders like Elon Musk have publicly harassed and mocked Trust & Safety employees. Musk also led the way in eliminating Trust & Safety roles within social media by firing workers and instituting massive content moderation changes. Companies like Twitch and YouTube have followed suit. And tech investors like Marc Andreessen have written manifestos that labeled Trust & Safety “the enemy.”

This language and strategy mirrors the attacks that are also currently directed toward academics, journalists, poll workers, judges, and others who do the vital work of accountability that undergirds democracy. And it has been discouraging to watch.

But, I can honestly say that for the first time in a while, I am not without hope.

As I’ve alluded to, this trip has been a full circle moment for me in many ways. Returning to UF has made me think of the past version of myself that walked onto this campus nearly 20 years ago. And I’ve wondered what she would think if she were here today. I know she wouldn’t have understood most of the words we are discussing, and she probably would have seen “LLM” and assumed that we were talking about tax law.

And as I wrote this talk about all of these words, I also thought about my niece. She is the literal embodiment of the next generation that I dreamed of when I first came to this campus. The world she is living in, and will go to college in, is exceedingly different from the one I encountered.

But my journey has taught me that the work remains the same even though the terms, technologies, and times change.

So my hope for myself, my niece, and for us all as I leave this campus again is still the same. May the continued work of law and journalism provide the foundation to make our world a different and hopefully better place.


Q & A:

Jasmine: What is the cost of failing to implement adequate policy for social media sites with the emergence of LLMs and other generative AI?

Anika: The failure to implement adequate policy for social media sites, both internally and with external regulation, has already led to deadly results, like January 6th. And that happened before the emergence of LLMs. Now, I fear that the convergence of unregulated social media and the prevalence of generative AI could lead to a “November surprise” in this year’s elections.

Jasmine: You offered four recommendations for creating a better future at the intersection of law, journalism, and technology; one of these focuses on training for future practitioners. What might this training look like, and how should administrators consider implementing such training?

Anika: This is something I am actually trying to build and envision. I’d love to develop a class that uses a mix of theory and practice to recreate a much lower-stakes environment of a T&S team inside of a technology company. I think that it’s important to get a solid foundation in the literature of the field as well as to apply it by workshopping through previous real-world examples. Theory breaks down quickly in practice, and I think it’s also vital for the next generation of practitioners to spend time wrestling with how to fill those gaps. It’s something that I wish I would have had. I’d love to come together with administrators and other former practitioners to create this.

Jasmine: What is the value of having training in journalism for students and others who might want to go into tech?

Anika: One of the most useful skills that I’ve ever exercised during my career is the ability to write on deadline. This, of course, gets drilled into you with journalism training. But being able to write well, and write quickly makes all the difference and sets you apart at work. In the world of tech policy, filling a blank document with facts and logical arguments within a certain amount of time is the daily job.

Journalism training is also the only place where students are required to take a media law class and grapple with the legal and ethical considerations that govern technology. That foundation in free expression philosophies and understanding of the basics of the First Amendment are exceedingly helpful, and I think necessary, when your job is to govern speech.

Jasmine: Why is there such a disconnect between researchers, journalists, and the whistleblowers and other trust and safety workers in tech? Why are issues in the T&S space so slow to be studied?

Anika: As I mentioned, when I did my own research with T&S workers I quickly found that people were telling me stories that they said they’d never shared with anyone. I think it’s really important for researchers and journalists to start asking T&S workers, especially whistleblowers, about their experiences. I’m convinced that the only way we are going to effectively regulate the tech industry or fully understand how it works is by talking to the people who have done the work inside of the companies.

Historically, T&S workers have been hesitant to talk about their work for very good reasons, including threats, abuse, harassment and retaliation. I’ve found that recently, given the current state of the world and the importance of the work, people from within T&S, or who have recently left T&S, have become more willing to take the risk of speaking up.

Jasmine: What might an external watchdog look like for T&S for tech?

Anika: I argue that Congress should create a new independent body to regulate technology companies. I think the airline industry is a great example of where we can begin. In that field, the National Transportation Safety Board sets basic safety thresholds that companies have to abide by. And when something goes horribly wrong, it has the authority to go investigate and collect the black box to determine what happened. I think society would greatly benefit if the technology industry had a regulatory system like that.

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
