Chinese scientists have found that artificial intelligence, rather than human experts, may hold the key to securing public trust in controversial policies — a finding with profound implications for governance and communication in the age of AI.
In an era when trust in traditional authorities is waning, researchers have uncovered a surprising new agent of persuasion: the large language model (LLM). A study published in the Policy Studies Journal reveals that policy endorsements from LLMs — particularly Chinese-developed ones — can significantly boost citizens’ willingness to comply with controversial policies, while expert endorsements show no measurable effect. The research, based on two pre-registered survey experiments in China, signals a potential paradigm shift in how governments communicate public policy.
The study found that when an LLM endorsed a contested policy, public support increased notably. This effect was strongest for Chinese-developed models, suggesting that cultural and linguistic alignment plays a critical role in AI-mediated persuasion. The mechanism, the researchers argue, may lie in an improved public perception of the scientific rigor behind policymaking, a finding that hints at a deeper societal need for perceived objectivity and data-driven reasoning, even when the source is a machine.
This development carries particular weight for China, where the government is actively exploring AI applications in governance and public administration. The ability of LLMs to bridge the gap between policy intent and public acceptance could reshape everything from public health campaigns to economic reforms. For global professionals, the message is equally clear: AI is not merely a tool for automation or analytics but is emerging as a powerful medium for shaping human behavior and trust at scale. As AI systems become more embedded in daily life, understanding their persuasive power — and its limits — will be essential for anyone operating at the intersection of technology, policy, and society.
Why it matters:
The research suggests that AI can fill a credibility vacuum left by declining trust in human experts, especially in politically sensitive contexts. For global policymakers, technology executives, and communications strategists, this study offers early evidence of a new tool — and a new challenge — in the art of public persuasion: the machine that speaks more convincingly than the expert.
ScientificChina — tracking what’s happening in Chinese science, technology, research, and industrial innovation in a way global professionals can actually use.