A Signal through the Noise
How CSSLab’s Media Bias Detector is cutting through our cluttered media landscape
“Explorer” isn’t one of Duncan Watts’s many titles, but perhaps it should be.
As the Stevens University Professor and the twenty-third Penn Integrates Knowledge Professor, Watts holds appointments in the Annenberg School for Communication, Penn Engineering, and Wharton. He is also the founder and director of the Computational Social Science (CSS) Lab, and it’s in this role that he’s charting new territory in the social sciences.
“Superficially, computational social science takes methods from computer science and applies them to social issues. But at a deeper level, computational social science can also mean advancing our understanding of the world by solving practical problems,” he reflects. “I started this lab to embody what computational social science can be.”
Tracking Media Bias in Real Time
Together with Managing Director Jeanne Ruane, Watts and his team of 23 students and researchers are exploring how people behave, how media works, how society functions, and how the human mind operates. For Watts, the key to understanding lies in our modern moment: as new technologies and data sources emerge, their applications hold new possibilities. And there are few better examples of this symbiosis than CSSLab’s Media Bias Detector.
“We’ve all experienced the divisiveness pervading popular media,” he notes. “Our hypothesis is that this is exacerbated not by lies or ‘fake news’ but by bias. It’s a timely question, and in our particular historical moment, CSSLab has these incredible tools—massive new data sets and large language models, in particular—to find an answer. So we threw our energy into investigating.”

An outgrowth of the Lab’s Penn Media Accountability Project (PennMAP), the resulting Media Bias Detector uses large language models (LLMs) to identify media bias in headline news. Unlike similar tools, the Media Bias Detector doesn’t rely on the reputation of publishers but rather on the construction and language of the articles themselves. “We can analyze stories in real time, and our tool is better able to capture the heterogeneity within different news organizations over time,” says Watts. “For example, over President Trump’s first 100 days in office, the Media Bias Detector was able to show what the media covered most heavily and also what was ignored—tariffs and protests, for example—by certain publishers.”
Powered by Philanthropy
One part of the Media Bias Detector’s advantage is the academic rigor underpinning its processes—something many organizations simply do not have the expertise or resources to achieve. “We are very fortunate to have some philanthropic support. It allows us to really focus on quality, and additional support will be critical to the long-term success of our Lab’s initiatives,” notes Watts. This rigor and focus inform everything from how Watts and his team collect data to how they define, identify, and quantify bias on an algorithmic level.
It’s a reputation that earns them points not just with the average consumer who turns to the Media Bias Detector for information, but with industry leaders as well. “We have a number of data partnerships, and we’re forming more,” says Ruane. “Having a breadth of data is crucial for the quality of our research, and it benefits not just our own work and that of our partners, but researchers around the globe.”
For inquiries about giving opportunities, contact Vanessa White at Penn Engineering, Lisa Millman at Wharton, or Eliza Walmsley at Annenberg.
This story has been adapted from a longer profile of CSSLab’s work, which will appear in the Summer 2026 issue of Inspiring Impact magazine.
