
George Mason Neuroscientist Part of Global Research Collaboration Suggesting Framework to Examine Trust in Artificial Intelligence (AI)


AI is transforming our world, but can we trust it? A transdisciplinary research team’s TrustNet Framework addresses the role of trust between people, systems, and institutions in guiding how AI is built and used across a variety of domains.

As AI becomes more embedded in our daily lives, from healthcare to hiring and from the workplace to warfare, the question ‘Can we put our trust in AI?’ has never been more urgent. A new open-access article in Nature’s Humanities and Social Sciences Communications, led by Frank Krueger and René Riedl, builds on insights from an international gathering of scientists at a Transdisciplinary Research Union for the Study of Trust (TRUST) workshop in Vienna, Austria, where these ideas and the framework were first envisioned.

“Our global collaboration dives into the psychology, ethics, and societal impact of trust in AI, proposing a transdisciplinary TrustNet Framework to understand and bolster trust in AI to address grand challenges in areas as broad and urgent as misinformation, discrimination, and warfare,” said Frank Krueger, Professor of Systems Social Neuroscience in the School of Systems Biology at George Mason University. 

People naturally seek out those they trust, enabling collective achievements, such as democratic governance and economic welfare, that no individual could accomplish alone. As the development and deployment of AI tools continue at a rapid pace, the researchers considered what it will take for people to trust AI and become interdependent with it across all areas of life.

AI has the potential to enhance our lives in meaningful ways. For example, AI companions can offer emotional support in elder care, while AI tools can generate content and automate tasks that boost productivity. Yet risks persist. The researchers considered various scenarios: algorithms used in hiring may carry hidden biases, just as humans do. And when it comes to misinformation, how can we tell whether an AI is better than we are at distinguishing fact from fiction?

As AI increasingly mediates high-stakes decisions, trust and accountability must become central concerns. Ultimately, what’s at stake is trust, not just in AI systems but in the people and institutions designing, deploying, and overseeing them. To shape the proposed framework, the team analyzed 34,459 multi-, inter-, and transdisciplinary trust research articles.

Figure 1: At the core of the TrustNet framework, new, connectable knowledge is developed and implemented across five key elements of trust: Trustworthiness, Risk, User, Sphere, and Terrain. The user is the central focus of the framework, playing a key role in the discourses on both societal and scientific knowledge.

The TrustNet Framework encourages research teams to consider three components: problem transformation, which connects the grand challenge description to scientific knowledge; production of new connectable knowledge, which includes clarifying roles and designing an integration concept; and transdisciplinary integration, which assesses results to generate outputs for society and science.

“Future trust frameworks must consider not only how humans trust AI, but also how AI systems might evaluate and respond to human reliability, and how they establish forms of AI-to-AI trust in networked and automated environments,” explained Professor René Riedl of the University of Applied Sciences Upper Austria & Johannes Kepler University Linz, Austria, who heads the master’s program ‘Digital Business Management’ and was also instrumental in convening the initial group of researchers at the TRUST workshop.

Members of the research team included representatives from the following organizations: George Mason University, Fairfax, VA; University of Applied Sciences Upper Austria & Johannes Kepler University Linz, Austria; McGill University, Montreal, Canada; Stanford University, Stanford, CA; Drexel University, Philadelphia, PA; University of Central Florida, Orlando, FL; University of Texas, Austin, TX; Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Veterans Administration Medical Center, Washington, DC; NC State University, Raleigh, NC; American University, Washington, DC; Graz University of Technology, Graz, Austria; University of Oxford, Oxford, UK; and Tamagawa University, Machida, Japan.

“Those working in AI, policy, ethics, or tech design would find this review article a valuable read to understand and combat the emerging societal AI trust challenges utilizing a common framework,” Krueger added. “Trust is the foundation of all healthy relationships—between people and technologies. AI will reshape society, but trust—between people, systems, and institutions—ultimately must guide how we build and use it.”
