GLOBAL GOVERNANCE OF ARTIFICIAL INTELLIGENCE AND DISINFORMATION


1222500054
DEPARTMENT OF MANAGEMENT & INNOVATION SYSTEMS
EQF7
GLOBAL STUDIES AND EU
2024/2025

COMPULSORY
YEAR OF COURSE 1
YEAR OF DIDACTIC SYSTEM 2018
SPRING SEMESTER
CFU: 6
HOURS: 42
ACTIVITY: LESSONS
Objectives
GENERAL OBJECTIVE

THE COURSE AIMS TO PROVIDE STUDENTS WITH THE KNOWLEDGE, SKILLS, AND CONCEPTUAL FRAMEWORKS NECESSARY TO ADDRESS THE ETHICAL AND POLITICAL DIMENSIONS OF THE DEVELOPMENT AND IMPLEMENTATION OF ARTIFICIAL INTELLIGENCE, BOTH THEORETICALLY AND OPERATIONALLY, WITH A SPECIAL FOCUS ON DISINFORMATION. THE COURSE WILL ANALYZE HOW AI SYSTEMS WORK AND THEIR ETHICAL, POLITICAL, AND SOCIAL IMPLICATIONS, WITH PARTICULAR ATTENTION TO THE ROLE ARTIFICIAL INTELLIGENCE PLAYS IN MODERN DISINFORMATION CAMPAIGNS. STUDENTS WILL THEN BE INTRODUCED TO THE TECHNICAL, ORGANIZATIONAL, AND REGULATORY TOOLS DEVELOPED TO GOVERN AI AND COUNTER DISINFORMATION, WITH PARTICULAR ATTENTION TO THE EU'S EFFORTS TO BUILD ITS OWN MODEL BASED ON RESPECT FOR HUMAN RIGHTS AND THE PROTECTION OF DEMOCRATIC PRINCIPLES.

KNOWLEDGE AND UNDERSTANDING

UPON COMPLETION OF THE COURSE, STUDENTS WILL BE ABLE TO:

1) UNDERSTAND THE FUNCTIONING OF AI AND ASSESS THE SOCIAL, LEGAL, AND POLITICAL IMPLICATIONS OF AI APPLICATIONS, THEIR ARCHITECTURES, AND THEIR TECHNICAL SPECIFICATIONS.
2) NAVIGATE THE PRINCIPLES AND GOVERNANCE DIMENSIONS RELATED TO THE DEVELOPMENT AND IMPLEMENTATION OF AI.
3) UNDERSTAND THE ROLE OF TECHNOLOGY, PARTICULARLY ARTIFICIAL INTELLIGENCE AND SOCIAL MEDIA PLATFORMS, IN THE SPREAD OF DISINFORMATION AND THEIR INFLUENCE ON GLOBAL POLITICS.
4) ACQUIRE IN-DEPTH KNOWLEDGE OF THE REGULATORY AND OPERATIONAL TOOLS FOR MANAGING THE SOCIAL, LEGAL, AND POLITICAL IMPLICATIONS OF AI, WITH SPECIAL REFERENCE TO ITS USE IN THE FIELD OF DISINFORMATION.

ABILITY TO APPLY KNOWLEDGE AND UNDERSTANDING

UPON COMPLETION OF THE COURSE, STUDENTS WILL BE ABLE TO:

1) APPLY THE KNOWLEDGE ACQUIRED TO CONDUCT IMPACT OR RISK ASSESSMENTS OF CONCRETE AI APPLICATIONS.
2) CRITICALLY EVALUATE THE ADEQUACY AND EFFECTIVENESS OF THE MEASURES ADOPTED BY GOVERNMENTS, ORGANIZATIONS, AND INDIVIDUALS TO COUNTER DISINFORMATION, INCLUDING AUTOMATED CONTENT GOVERNANCE, FACT-CHECKING, AND MEDIA LITERACY.
3) DISCUSS THE ROLE OF TECHNOLOGY, PARTICULARLY ARTIFICIAL INTELLIGENCE AND SOCIAL MEDIA PLATFORMS, IN THE SPREAD OF DISINFORMATION AND THEIR INFLUENCE ON GLOBAL POLITICS.
4) CRITICALLY ANALYZE THE REGULATORY APPROACHES DEVELOPED TO ADDRESS DISINFORMATION, UNDERSTANDING AND EVALUATING THEIR LOGIC, POTENTIAL EFFECTIVENESS, AND LIMITATIONS, WITH A SPECIAL FOCUS ON THE EUROPEAN CONTEXT.

AUTONOMY OF JUDGMENT

THE STUDENT WILL BE ABLE TO:

1) CHOOSE THE MOST APPROPRIATE THEORETICAL APPROACHES FOR THE INTERPRETATION AND RESOLUTION OF SPECIFIC CASES.
2) CRITICALLY ANALYZE DATA AND CASE STUDIES TO ARRIVE AT INDEPENDENT JUDGMENTS AND REFLECTIONS ON SOCIAL, SCIENTIFIC, AND ETHICAL ISSUES RELATED TO AI AND DISINFORMATION.
3) APPLY ANALYSIS METHODOLOGIES FOR AN IN-DEPTH AND CRITICAL UNDERSTANDING OF DIGITAL REALITY.

COMMUNICATION SKILLS

THE STUDENT WILL BE ABLE TO:

1) USE APPROPRIATE VOCABULARY WHEN DISCUSSING ISSUES AND SPECIFIC CASES RELATED TO DISINFORMATION AND AI.
2) COMMUNICATE INFORMATION, IDEAS, PROBLEMS, AND SOLUTIONS TO BOTH SPECIALIST AND NON-SPECIALIST INTERLOCUTORS.
3) CLEARLY COMMUNICATE THEIR CONCLUSIONS AND THE KNOWLEDGE UNDERPINNING THEM TO BOTH SPECIALIST AND NON-SPECIALIST INTERLOCUTORS.

LEARNING ABILITY

THE STUDENT WILL BE ABLE TO:

1) DEVELOP INDEPENDENT STUDY SKILLS.
2) DEVELOP THE ABILITY TO ASK QUESTIONS ABOUT ONGOING DIGITIZATION PROCESSES.
3) DEMONSTRATE THE ABILITY TO CONDUCT RESEARCH USING TRADITIONAL BIBLIOGRAPHIC TOOLS AND COMPUTER-BASED ANALYSIS AND STORAGE RESOURCES.
4) PROMOTE, IN ACADEMIC AND PROFESSIONAL CONTEXTS, TECHNOLOGICAL, SOCIAL, OR CULTURAL ADVANCEMENT IN SOCIETY BASED ON KNOWLEDGE OF ISSUES RELATED TO DISINFORMATION AND AI.
Prerequisites
ENGLISH LANGUAGE SKILLS
Contents
1) INTRODUCTION TO AI: FUNCTIONING AND FUNDAMENTAL CONCEPTS (2H LECTURE)
2) THE SOCIAL, LEGAL, AND POLITICAL IMPLICATIONS OF AI: TRUSTWORTHY, RESPONSIBLE, AND HUMAN-CENTRIC AI (2H LECTURE, 2H EXERCISE)
3) PRINCIPLES OF AI GOVERNANCE (TRANSPARENCY, ACCOUNTABILITY, FAIRNESS, PRIVACY, SECURITY, HUMAN CONTROL) (4H LECTURE, 2H EXERCISE)
4) GLOBAL GOVERNANCE OF AI: ACTORS, APPROACHES, MODELS (2H LECTURE)
5) EUROPEAN AI STRATEGY (2H LECTURE)
6) THE ARTIFICIAL INTELLIGENCE ACT (2H LECTURE)
7) HUMAN RIGHTS IMPACT ASSESSMENT AND RISK MANAGEMENT (2H LECTURE, 2H EXERCISE)
8) DISINFORMATION: INTRODUCTION AND FUNDAMENTAL CONCEPTS (2H LECTURE)
9) DISINFORMATION CAMPAIGNS: PHASES, TOOLS, STRATEGIES (2H LECTURE)
10) DISINFORMATION CAMPAIGNS, DIGITAL TECHNOLOGIES, AND AI (2H LECTURE, 2H EXERCISE)
11) COUNTERING DISINFORMATION AND ITS AUTOMATION: POTENTIALS AND RISKS (2H LECTURE)
12) IMPACT OF DISINFORMATION ON DEMOCRATIC SYSTEMS (2H LECTURE)
13) GLOBAL APPROACHES TO DISINFORMATION (2H LECTURE)
14) EUROPEAN STRATEGY ON DISINFORMATION (2H LECTURE)
15) THE DIGITAL SERVICES ACT (2H LECTURE)
16) CASE STUDIES: CAMBRIDGE ANALYTICA, US PRESIDENTIAL ELECTIONS, BREXIT, COVID, WAR IN UKRAINE (2H LECTURE, 2H EXERCISE)


Teaching Methods
LECTURES (32 HOURS), EXERCISES, AND GROUP WORK (10 HOURS). THROUGHOUT THE MODULE, STUDENTS WILL ENGAGE IN A VARIETY OF ACTIVITIES, SUCH AS CASE STUDIES, GROUP DISCUSSIONS, AND DEBATES, TO STIMULATE CRITICAL THINKING AND ACTIVE PARTICIPATION. THESE ACTIVITIES WILL PROVIDE STUDENTS WITH THE OPPORTUNITY TO APPLY THE CONCEPTS LEARNED IN CLASS TO REAL-WORLD SCENARIOS AND LEARN FROM INTERACTION WITH THEIR PEERS.
Verification of learning
THE EXAM CONSISTS OF AN ORAL INTERVIEW STRUCTURED IN TWO PHASES.

IN THE FIRST PHASE, THE STUDENT MUST DEMONSTRATE ACQUISITION OF THE FUNDAMENTAL KNOWLEDGE REGARDING THE GOVERNANCE OF THE SOCIAL, LEGAL, AND POLITICAL IMPLICATIONS OF AI, WITH PARTICULAR REFERENCE TO ITS USES IN THE FIELD OF DISINFORMATION.
IN THE SECOND PHASE, THE STUDENT PRESENTS A SPECIFIC CASE OF THEIR CHOICE, DEMONSTRATING THE ABILITY TO IDENTIFY THE RELEVANT REGULATORY ASPECTS AND THE TOOLS AND OPERATIONAL PROCESSES THAT SHOULD BE IMPLEMENTED FOR OPTIMAL GOVERNANCE.

THE FINAL GRADE IS EXPRESSED ON A SCALE OF THIRTY AND IS THE AVERAGE OF THE MARKS OBTAINED IN THE TWO PHASES. THE EXAM IS PASSED WHEN THE MARK FOR EACH OF THE TWO PHASES IS GREATER THAN 15/30 AND THE OVERALL GRADE IS GREATER THAN OR EQUAL TO 18/30.
THE MINIMUM MARK (15) IS ASSIGNED WHEN THE STUDENT HAS FRAGMENTARY KNOWLEDGE OF THE THEORETICAL CONTENT OR SHOWS LIMITED ABILITY TO CONNECT LEGISLATIVE REFERENCES TO THE STUDY CONTEXT.
THE MAXIMUM MARK (30) IS ASSIGNED WHEN THE STUDENT DEMONSTRATES COMPLETE AND THOROUGH KNOWLEDGE OF THE THEORETICAL CONTENT AND SHOWS A REMARKABLE ABILITY TO CONNECT LEGISLATIVE REFERENCES TO THE STUDY CONTEXT. HONORS ARE AWARDED WHEN THE CANDIDATE DEMONSTRATES SIGNIFICANT MASTERY OF THE THEORETICAL AND OPERATIONAL CONTENT AND PRESENTS ARGUMENTS WITH REMARKABLE LANGUAGE SKILLS AND INDEPENDENT REASONING, EVEN IN CONTEXTS DIFFERENT FROM THOSE PROPOSED BY THE INSTRUCTOR.
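As an illustration only, the pass rule stated above (each phase mark greater than 15/30, overall average at least 18/30) can be sketched as follows; the function name and return shape are illustrative, not part of the official regulation.

```python
def exam_outcome(phase1: int, phase2: int) -> tuple[bool, float]:
    """Apply the stated pass rule: each phase mark must exceed 15/30
    and the average of the two marks must be at least 18/30."""
    average = (phase1 + phase2) / 2
    passed = phase1 > 15 and phase2 > 15 and average >= 18
    return passed, average

# Example: marks of 20/30 and 24/30 average to 22/30, a pass.
print(exam_outcome(20, 24))  # (True, 22.0)
# A high average alone is not enough: a phase mark of 15 fails.
print(exam_outcome(15, 30))  # (False, 22.5)
```

Note that both conditions are checked independently, so a strong second phase cannot compensate for a first-phase mark at or below the 15/30 threshold.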
Texts
READINGS:

DIGNUM, V. (2019). RESPONSIBLE AI. SPRINGER (CHAPTERS 2, 4, 6).
TURNER, J. (2018). ROBOT RULES: REGULATING ARTIFICIAL INTELLIGENCE. SPRINGER (INTRODUCTION, PP. 1-36).
FJELD, J., ACHTEN, N., HILLIGOSS, H., NAGY, A., & SRIKUMAR, M. (2020). PRINCIPLED ARTIFICIAL INTELLIGENCE: MAPPING CONSENSUS IN ETHICAL AND RIGHTS-BASED APPROACHES TO PRINCIPLES FOR AI. BERKMAN KLEIN CENTER RESEARCH PUBLICATION, (2020-1).
PALLADINO, N. (2023). A DIGITAL CONSTITUTIONALISM FRAMEWORK FOR AI: INSIGHTS FROM THE EUROPEAN STRATEGY. DIGITAL POLITICS, 3/2023.
BENNETT, W., & LIVINGSTON, S. (2020). THE DISINFORMATION AGE. CAMBRIDGE UNIVERSITY PRESS (CHAPTERS 1, 2, 3).
SEDOVA, K., MCNEILL, C., JOHNSON, A., JOSHI, A., & WULKAN, I. (2021). AI AND THE FUTURE OF DISINFORMATION CAMPAIGNS. PART I: THE RICHDATA FRAMEWORK. CENTER FOR SECURITY AND EMERGING TECHNOLOGY, DECEMBER 2021.
BONTRIDDER, N., & POULLET, Y. (2021). THE ROLE OF ARTIFICIAL INTELLIGENCE IN DISINFORMATION. DATA & POLICY, 3, E32.
KERTYSOVA, K. (2018). ARTIFICIAL INTELLIGENCE AND DISINFORMATION: HOW AI CHANGES THE WAY DISINFORMATION IS PRODUCED, DISSEMINATED, AND CAN BE COUNTERED. SECURITY AND HUMAN RIGHTS, 29(1-4), 55-81.
COLOMINA, C., MARGALEF, H. S., YOUNGS, R., & JONES, K. (2021). THE IMPACT OF DISINFORMATION ON DEMOCRATIC PROCESSES AND HUMAN RIGHTS IN THE WORLD. BRUSSELS: EUROPEAN PARLIAMENT.
More Information
ADDITIONAL TEACHING MATERIAL AND SUPPORT WILL BE PROVIDED BY THE INSTRUCTOR DURING THE COURSE.