Election information?

Disinformation is false information deliberately spread to deceive people. In contrast, misinformation refers to inaccuracies that stem from inadvertent error. Disinformation is an organized activity in which actors deploy strategic deception and media-manipulation tactics to advance political, military, or commercial goals.

UNESCO and Ipsos conducted a survey (August–September 2023) across 16 countries, asking, among other things, whether respondents were concerned that disinformation would impact their next election campaign. The survey produced the following key findings.

  • 56% of internet users in the 16 countries surveyed frequently use social media as their primary source of news, surpassing television at 44%.
  • Trust in traditional media remains high, with 66% of those surveyed trusting television news, 63% trusting radio news, and 61% trusting print media news.
  • Internet users experience a high prevalence of disinformation on social media, with 68% indicating that disinformation is most widespread there.
  • 87% of respondents expressed concern about the impact of disinformation on upcoming elections in their country, with 47% being “very concerned”.
  • 67% of internet users have encountered hate speech online, with a majority believing it’s most prevalent on Facebook (58%).
  • There’s a strong call for regulation: 88% believe that governments and regulatory bodies should address disinformation and hate speech, and 90% believe social media platforms should do so.
  • A majority of internet users (89%) support the idea that governments and regulators should enforce trust and safety measures on social media platforms during election campaigns.
  • Only 48% of surveyed citizens have reported online content related to disinformation in the context of an election campaign.

In the United States, there is no government oversight of the social media platforms, and social media companies have actually rolled back self-policing of content for disinformation because moderation works against their business model. That model is built on engagement by entertaining you, not informing you. As a result, disinformation is actually amplified by the social media platforms.

The following image illustrates a two-phase framework conceptualizing the dissemination of disinformation on social media. The first phase is “seeding,” in which actors strategically insert deceptions by masquerading or obfuscating statements as legitimate. The second phase, “echoing,” represents how disinformation circulates in echo chambers through conflicts with opponents in culture wars and other identity-driven controversies. Participants use falsehoods, selective truths, beliefs, value judgments, and all available controversies to rhetorically enact and exploit their identity in opposition to their perceived opponents.1 Diaz Ruiz, C., & Nilsson, T. (2023). Disinformation and Echo Chambers: How Disinformation Circulates on Social Media Through Identity-Driven Controversies. Journal of Public Policy & Marketing, 42(1), 18-35. https://doi.org/10.1177/07439156221103852
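The two phases can be illustrated with a toy simulation. This is a hypothetical sketch for intuition only, not a model from the cited paper: a few seeded accounts plant a false claim ("seeding"), and then each round every believer exposes one random member of the same echo chamber, who adopts the claim with some probability ("echoing"). All parameter names and values here are illustrative assumptions.

```python
import random

def simulate_spread(chamber_size=100, seeds=3, share_prob=0.5,
                    rounds=5, rng_seed=42):
    """Toy model of the seeding/echoing framework (illustrative only).

    Phase 1, seeding: `seeds` accounts plant the false claim.
    Phase 2, echoing: each round, every current believer exposes one
    random account in the chamber, which adopts the claim with
    probability `share_prob`. Returns the final number of believers.
    """
    rng = random.Random(rng_seed)
    believers = set(range(seeds))  # the seeded accounts
    for _ in range(rounds):
        newly_convinced = set()
        for _ in believers:
            target = rng.randrange(chamber_size)
            if target not in believers and rng.random() < share_prob:
                newly_convinced.add(target)
        believers |= newly_convinced  # the claim echoes outward
    return len(believers)

# With no echoing rounds, only the seeded accounts believe the claim;
# each additional round can only grow the believing population.
print(simulate_spread(rounds=0))
print(simulate_spread(rounds=5))
```

Even this crude sketch shows the qualitative point of the framework: a small seed inside a closed chamber can compound round after round, because every convert becomes another amplifier.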

Disinformation and echo chamber graphic

Citizens in a democracy need a source of information that is not driven by the profit motive of social media platforms and is dedicated to presenting factual information so people can make good decisions. Donating to this organization will help fund research on, and implementation of, a platform to inform and amplify your voice. Donate today, before our democracy dies!

2023 Denver Democracy Summit

The Josef Korbel School of International Studies is pleased to announce that the 3rd Annual Denver Democracy Summit will be held October 25-26, 2023 on the University of Denver campus with proceedings streamed to a worldwide virtual audience. The Denver Democracy Summit will serve as a platform for the DU community and the world’s leading thought leaders on democracy to evaluate ongoing efforts to strengthen democratic norms, values, and institutions. The Summit will include discussions on democratic backsliding, political communication and misinformation, polarization and civil discourse, technology and democracy, and other topics.

More information is available at the Denver Democracy Summit website.

Update 11/2/2023

If you did not have a chance to attend the Denver Democracy Summit, you can watch the presentations on their YouTube channel playlist.

Artificial Intelligence Regulation

Artificial intelligence (AI) is everywhere these days. AI is used in applications such as self-driving cars (e.g., Waymo, Nuro, and Zoox), some web search engines (e.g., Google Search and Bing), recommendation systems (used by YouTube, Amazon, and Netflix), speech recognition (such as Siri and Alexa), generative or creative tools (ChatGPT, Midjourney, and DALL-E), game playing (such as chess and Go), and even writing U.S. legislation!

AI has added tremendous functionality to applications in most industries, but regulation is lagging behind the advances. Governments are slow to legislate, whether for fear of stifling the industry or out of ignorance of the dangers.

I decided to ask ChatGPT what it recommends for AI regulation by asking, “How should artificial intelligence services, like ChatGPT, be regulated?” Here is its response:

The regulation of artificial intelligence services, including advanced models like ChatGPT, is a complex and evolving challenge that requires a balanced approach to ensure both innovation and safety. Here are some key considerations and principles that can guide the regulation of AI services:

1. Transparency and Accountability:
– Require AI developers and service providers to be transparent about the capabilities and limitations of their systems. This includes disclosing when users are interacting with AI rather than humans.
– Establish mechanisms for accountability, making it clear who is responsible for the actions and decisions of AI systems.

2. Data Privacy and Security:
– Enforce strict data privacy regulations to protect users’ personal information and prevent misuse of data.
– Implement security measures to safeguard AI systems from cyberattacks and data breaches.

3. Ethical Guidelines:
– Develop and promote ethical guidelines for AI development and usage, including avoiding biases and discriminatory behavior.
– Encourage AI developers to design systems that prioritize user well-being and safety.

4. Fairness and Bias Mitigation:
– Require AI developers to address and mitigate biases in their systems, ensuring that AI services are fair and do not discriminate against specific groups.
– Implement auditing and testing procedures to detect and correct bias in AI algorithms.

5. User Consent and Control:
– Ensure that users are informed about how AI services collect and use their data.
– Give users control over the data they share and how AI services use that data.

6. Certification and Testing:
– Establish certification processes for AI systems, especially those used in critical applications like healthcare, autonomous vehicles, and finance.
– Mandate rigorous testing and validation of AI systems to ensure their reliability and safety.

7. Liability Framework:
– Define liability frameworks that specify who is responsible if AI systems cause harm or make erroneous decisions.
– Consider establishing insurance requirements for AI developers to cover potential liabilities.

8. Continuous Monitoring and Reporting:
– Implement ongoing monitoring and reporting requirements for AI systems in use, especially those in critical domains.
– Create mechanisms for reporting and addressing AI-related incidents and failures.

9. International Cooperation:
– Promote international collaboration and standardization efforts to harmonize AI regulations across borders, as AI is a global technology.

10. Flexibility and Adaptability:
– Recognize that AI technology evolves rapidly, and regulations should be flexible and adaptable to accommodate innovation while maintaining safety.

11. Public Input:
– Solicit input from experts, stakeholders, and the public when formulating AI regulations to ensure they reflect diverse perspectives and needs.

12. Education and Awareness:
– Invest in education and awareness programs to help the public understand AI and its implications, fostering responsible use and informed decision-making.

Regulating AI services is a multifaceted challenge that requires a careful balance between encouraging innovation and ensuring public safety. Regulations should be informed by ongoing research, collaboration among stakeholders, and a commitment to ethical AI development and deployment. Furthermore, it’s essential to strike a balance between regulatory oversight and not stifling the positive potential of AI technology.

Not a bad response for an AI system! ChatGPT cannot vote, but it probably has a louder voice in the U.S. Congress. Maybe we humans will just have to get busy before AI replaces Congress. Hmm, an idea worth considering.