Navigating Generative AI and its Impact on the Future of Public Discourse

Columbia Engineering and the Knight First Amendment Institute recently convened multidisciplinary experts to discuss the impact of artificial intelligence on public discourse, free speech, and democracy.

Feb. 27, 2024 | By Hangyu Fan | Photo Credit: David Dini/Columbia Engineering

Panelists for “Legal and Philosophical Questions: Information Integrity, Trustworthiness, and the First Amendment,” moderated by Katy Glenn Bass of the Knight First Amendment Institute

Generative AI tools such as ChatGPT are at the forefront of today’s technological innovation. However, these advancements also come with challenges such as disinformation and potential threats to democracy. As generative AI becomes more accessible and powerful, the interplay between its capabilities and public discourse requires careful consideration and action.

To address these pressing issues, Columbia Engineering and the Knight First Amendment Institute at Columbia co-hosted a symposium, “Generative AI, Free Speech, and Public Discourse,” held Feb. 20 at The Forum. The event featured keynote speeches, including a surprise appearance by Columbia University President Minouche Shafik, and panel discussions with experts from fields including law, journalism, computer science, and social science. The talks centered on the critical need for a comprehensive understanding of generative AI’s societal implications.

Columbia Engineering Dean Shih-Fu Chang welcomed a packed audience (and even more attendees who joined via the livestream) and emphasized the significance of the School’s new partnership with the Knight First Amendment Institute. The Institute is “a leading defender of free speech in the digital age,” he said. “We are proud to join with them to bring our expertise in artificial intelligence, machine learning, and in data science from engineering as well as many other schools and disciplines from Columbia for this joint effort.”

Academia’s role

Anyone who has explored AI tools has witnessed their creative potential. Just one week prior to the symposium, OpenAI introduced Sora, an AI model that generates videos from text instructions. As with the introduction of other generative AI tools to the wider public, Sora’s astonishing capabilities are plain to see, but the technology’s social, political, and economic impact remains uncertain.

Beyond questions of governance, experts are addressing ethical issues and potential disruptions tied to the use of generative AI. Columbia President Shafik emphasized the crucial role of educational institutions in guiding the future of this technology. "There needs to be a voice for the public good. And I think universities can play a very big role," she said in her remarks.

In considering AI’s impact on society, which is one of Shafik’s priorities as president, she added, “It’s hard to imagine any field of inquiry which will not be affected by these technologies, and I think for Columbia as a university, we have a huge comparative advantage in thinking about those impacts.” 

The symposium was a part of Columbia’s Dialogue Across Difference, a new initiative led by the Office of the Provost to foster an inclusive community that brings together diverse perspectives in conversation with empathy, respect, and trust. 

Trust and ethical frameworks are key to the adoption and regulation of AI

Garud Iyengar (left), senior vice dean of research at Columbia Engineering, in conversation with keynote speaker Tatsunori B. Hashimoto, assistant professor of computer science at Stanford

The symposium featured three keynote speeches, which focused on large language models and information integrity. Tatsunori B. Hashimoto, assistant professor of computer science at Stanford University, delivered the first keynote speech. “There is a big gap between the capabilities that these systems bring and the trustworthiness: how much can we rely upon these systems as building blocks for things like writing assistance or as part of our public discourse, as part of our society,” said Hashimoto.

In her keynote speech, Dilek Hakkani-Tür, professor of computer science at the University of Illinois Urbana-Champaign, focused on integrating diverse knowledge sources and enhancing the safety and factual accuracy of dialogue systems. “Research in the dialogue field has attracted so much attention since the large language models and their success in generating natural-sounding responses,” said Hakkani-Tür. She described the current era as “exciting times” for computer science and related fields.

Guest speaker Bruce Schneier, a renowned security technologist and lecturer at Harvard Kennedy School, discussed the distinction between interpersonal trust and social trust, and how our interactions with AI might blur these lines, leading to potential manipulation by the corporations behind these systems. “It’s the role of the government to create trust in society. And therefore it is the role of the government to create an environment for trustworthy AI,” said Schneier. He advocated for a specialized government AI agency to regulate AI systems and the humans behind them.

Deluge of content in the age of AI

Carl Vondrick, associate professor of computer science at Columbia Engineering and panelist on “Empirical and Technological Questions: Current Landscape, Challenges, and Opportunities”

The panel discussions delved deeper into the present and future challenges of generative AI and its impact on society as a whole.

Moderated by Dean Chang, the panel “Empirical and Technological Questions: Current Landscape, Challenges, and Opportunities” featured insights from leading figures who discussed the technical challenges and social implications of generative AI. The panelists stressed the importance of building systems capable of identifying AI-generated content through technical means, as well as teaching ordinary people how to spot fake content. Many panelists agreed that people who grow up with these technologies will be better equipped to recognize AI-generated content.

“I think the younger generation is going to learn not to just trust a photograph,” said Carl Vondrick, associate professor of computer science at Columbia Engineering. “The first commodity camera came out about 100 years ago. You could use it for photographic proof. Not anymore.” 

In the second panel, “Legal and Philosophical Questions: Information Integrity, Trustworthiness, and the First Amendment,” moderated by Katy Glenn Bass, research director at the Knight Institute, panelists discussed the ethical and legal implications of artificial intelligence for information integrity and public discourse. They explored lessons learned from past emerging technologies that were considered threatening or disruptive, and they discussed what is needed now to establish meaningful oversight of generative AI tools.

The panelists, whose expertise ranged from communications and digital law to public policy and cybersecurity, also touched on whether people can strike a delicate balance between trusting content and institutions and treating them with “extreme skepticism,” especially during an election year. One topic, which panelists flagged as worth tracking, centered on the handful of existing large language models that appear poised to dominate generative AI outputs.

“We don’t want a mono-culture of content that’s created by a handful of [language] models,” said panelist Camille François, lecturer at the School of International and Public Affairs and cybersecurity expert. 

“There is something to be said for making sure that we have a diversity in the set of tools that are going to bring about this new set of technologies [to] the public. That's personally why I’m interested by open-source AI and some models that are training on different datasets that are prioritizing other types of cultural and linguistic inputs and how they go about building up those models.”

What matters: interdisciplinary collaborations

From left to right: Columbia University President Minouche Shafik, Columbia Engineering Dean Shih-Fu Chang, Knight First Amendment Institute Executive Director Jameel Jaffer, and Columbia Interim Provost Dennis Mitchell

With all the challenges brought about by the rapid rise of generative AI, the collaboration between Columbia Engineering and the Knight First Amendment Institute serves as a timely initiative. 

Algorithms have long shaped our interaction with the world by curating content on social media. Today, AI goes further by creating bots that disrupt discussions. It’s not just about text anymore; AI is now generating fake images, mimicking human voices and faces, and even producing videos. The challenge transcends the technology itself: generative AI is a new frontier that tests the boundaries of law, policy, and public trust.

"We focus on the ways in which new technologies are reshaping democracy. And so our work is interdisciplinary by necessity," said Jameel Jaffer, executive director at the Knight First Amendment Institute. “We know very well that the kinds of questions we address can’t be answered by legal scholars or lawyers alone. Answering these questions will require many different kinds of knowledge and many different kinds of expertise.”

The day’s program, Jaffer noted, is just the beginning of an exciting collaboration between the Engineering School and the Knight Institute. Over the coming months, Columbia Engineering and the Knight Institute plan to work with other schools to co-sponsor research projects focused on the issues explored at the symposium. Columbia Engineering has been active in the AI space: the School has launched a number of new research centers specializing in AI foundations, including fairness, causal inference, explanation, and other emerging areas, to leverage science and engineering as forces for good that benefit society.

“This symposium and our partnership with [the Knight Institute],” said Dean Chang, “underscores our commitment to the responsible and fair use of AI.”
