
Trust, Attitude, and Use of Artificial Intelligence: 2024 Global Study

The University of Melbourne and KPMG jointly released a report, based on a global sample of 48,340 respondents across 47 countries and regions, providing an in-depth analysis of public, employee, and student trust in AI, adoption patterns, risk perceptions, and regulatory expectations.

Detail

Published

22/12/2025

Chapter List

  1. Public Attitudes Towards AI
  2. Employee Attitudes Towards AI in the Workplace
  3. Student Attitudes Towards AI in Education
  4. Conclusions and Implications
  5. Research Methodology and Statistical Notes
  6. Sample Demographic Characteristics
  7. Key Indicators by Country
  8. Trends in Key Indicators Across 17 Countries
  9. Public Usage and Understanding of AI Systems
  10. Public Trust and Acceptance of AI Systems
  11. How the Public Views and Experiences the Benefits and Risks of AI
  12. Public Expectations for AI Regulation and Governance

Introduction

This research was jointly conducted by the University of Melbourne and KPMG, aiming to provide an evidence-based understanding of global public trust, attitudes, usage, and governance expectations towards artificial intelligence. The report is based on 48,340 valid questionnaires collected from 47 countries and regions between November 2024 and January 2025, covering all major geographical areas, with the sample being nationally representative in terms of age, gender, and regional distribution. This is the fourth iteration of this series of studies and, for the first time, provides a trend comparison with data from 17 countries in 2022 (before the release of ChatGPT), offering a unique perspective on the evolution of public attitudes following the proliferation of generative AI.

The report is divided into three core empirical sections and a conclusion. The first section, Public Attitudes Towards AI, systematically examines societal-level adoption, understanding, trust, sentiment, risk and benefit perceptions, and regulatory expectations regarding AI. Using structural equation modeling, the study identified four complementary pathways shaping AI trust and acceptance: the knowledge pathway (AI literacy and training), the motivation pathway (perceived benefits), the uncertainty pathway (perceived risks), and the institutional pathway (regulatory adequacy and confidence in governing entities), of which the institutional pathway has the strongest effect. The study found that although AI adoption is high (66% of the public regularly and intentionally use AI), trust remains a major challenge, with over half (54%) wary of trusting AI. The public places greater trust in AI's technical capabilities than in its safety, security, and social impact. Emerging economies significantly lead developed economies in AI adoption, trust, acceptance, and AI literacy.

The second section, Employee Attitudes Towards AI in the Workplace, delves into the integration of AI in the workplace. The data show that the era of AI at work has arrived: 58% of employees regularly use AI in their jobs, with generative AI tools the most prevalent. Although AI delivers significant performance benefits (such as improved efficiency, innovation, and decision quality), employees also widely self-report misuse, complacent use, and non-transparent use, which increase organizational risk. At the same time, employees report complex effects of AI on workload, stress, interpersonal collaboration, and monitoring. The study found that organizational support and governance for responsible AI use lag behind the pace of adoption, especially in developed economies.

The third section, Student Attitudes Towards AI in Education, reveals the widespread use of AI in education and its impacts. 83% of students regularly use AI in their studies, reporting benefits such as improved efficiency and personalized learning. However, misuse, over-reliance, and complacent use are even more prevalent among students, with mixed effects on critical thinking, collaboration, and assessment fairness. The study notes that educational institutions lag significantly in providing policy guidance and in supporting students' responsible use of AI.

The conclusion synthesizes the research findings, noting that alongside rapid AI adoption, the public, employees, and students all exhibit marked ambivalence, anticipating AI's benefits while worrying about its risks and negative impacts. This tension highlights the central challenge of responsible AI integration at the individual, organizational, societal, and international levels. The report offers concrete action pathways for policymakers, organizational leaders, educational institutions, and individuals, emphasizing that investing in AI literacy, establishing robust governance and regulatory frameworks, promoting international collaboration, and centering AI development and use on human well-being are key to achieving trustworthy and sustainable AI adoption.