2023 Denver Democracy Summit

The Josef Korbel School of International Studies is pleased to announce that the 3rd Annual Denver Democracy Summit will be held October 25-26, 2023, on the University of Denver campus, with proceedings streamed to a worldwide virtual audience. The Summit will serve as a platform for the DU community and the world’s leading thinkers on democracy to evaluate ongoing efforts to strengthen democratic norms, values, and institutions. Discussions will cover democratic backsliding, political communication and misinformation, polarization and civil discourse, technology and democracy, and other topics.

More information is available at the Denver Democracy Summit website.

Update 11/2/2023

If you did not have a chance to attend the Denver Democracy Summit, you can watch the presentations on its YouTube channel playlist.

What is Liquid Democracy?

Liquid democracy is a hybrid form of democracy that combines elements of direct and representative democracy. It allows citizens to have a more dynamic and flexible role in the decision-making process. In a traditional representative democracy, people vote for elected officials who make decisions on their behalf. In a direct democracy, individuals vote directly on specific issues or policies. Liquid democracy seeks to bridge the gap between these two approaches.

In a liquid democracy, citizens have the option to either vote on issues directly or delegate their votes to someone they trust. Delegating votes means that an individual can choose another person, often referred to as a proxy or delegate, to vote on their behalf. The key feature of liquid democracy is that delegation can be temporary and can be reassigned at any time. This means that individuals can delegate their votes on specific issues to different people based on their expertise, trust, or personal preferences. For example, if you trust someone’s judgment on environmental issues, you can delegate your vote to them for those matters, while delegating your vote to someone else for economic policy decisions.
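The delegation mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration, not the actual algorithm used by LiquidFeedback or DemocracyOS: each voter either casts a direct vote or names a proxy, delegation chains are followed until a direct vote is found, and cycles or dead-end delegations are counted as abstentions.

```python
from collections import Counter

def tally(direct_votes, delegations):
    """Count votes under a simple liquid-democracy model.

    direct_votes: dict mapping voter -> choice (a direct ballot)
    delegations:  dict mapping voter -> proxy (a delegated ballot)
    Returns a Counter of choice -> number of votes.
    """
    def resolve(voter, seen):
        # Follow the delegation chain until we reach a direct vote.
        if voter in seen:
            return None  # delegation cycle: treat as abstention
        seen.add(voter)
        if voter in direct_votes:
            return direct_votes[voter]
        proxy = delegations.get(voter)
        if proxy is None:
            return None  # no vote and no proxy: abstention
        return resolve(proxy, seen)

    counts = Counter()
    for voter in set(direct_votes) | set(delegations):
        choice = resolve(voter, set())
        if choice is not None:
            counts[choice] += 1
    return counts
```

For example, if Carol delegates to Alice and Dave delegates to Carol, Alice’s direct "yes" vote counts three times, while two voters who delegate to each other in a cycle are simply not counted. A real system would also need secure identity, revocable per-topic delegation, and auditable storage of the delegation graph.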

The goal of liquid democracy is to increase citizen participation and engagement in the decision-making process, while also benefiting from the expertise and knowledge of individuals who are well-versed in certain areas. It offers more flexibility and adaptability compared to traditional democratic models. Digital technology and online platforms have made it easier to implement liquid democracy by allowing for secure and efficient voting and delegation.

Liquid democracy is still a relatively new concept and has been experimented with in various contexts, such as political parties (German Pirate Party, Partido de la Red), organizations, and even some governments (Argentina). Its success and practicality can vary depending on the specific implementation and the cultural and political context in which it is used. Some current open-source software implementations are LiquidFeedback and DemocracyOS.

Artificial Intelligence Regulation

Artificial intelligence (AI) is everywhere these days. AI is used in applications such as self-driving cars (e.g., Waymo, Nuro, and Zoox), some web search engines (e.g., Google Search and Bing), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), generative or creative tools (ChatGPT, Midjourney, and DALL-E), playing games (such as chess and Go), and even writing U.S. legislation!

AI has added tremendous functionality to applications in most industries, but regulation is lagging behind the advances. Governments are slow to legislate, either for fear of stifling the industry or out of ignorance of the dangers.

I decided to ask ChatGPT what it recommends for AI regulation: “How should artificial intelligence services, like ChatGPT, be regulated?” Here is its response:

The regulation of artificial intelligence services, including advanced models like ChatGPT, is a complex and evolving challenge that requires a balanced approach to ensure both innovation and safety. Here are some key considerations and principles that can guide the regulation of AI services:

1. Transparency and Accountability:
– Require AI developers and service providers to be transparent about the capabilities and limitations of their systems. This includes disclosing when users are interacting with AI rather than humans.
– Establish mechanisms for accountability, making it clear who is responsible for the actions and decisions of AI systems.

2. Data Privacy and Security:
– Enforce strict data privacy regulations to protect users’ personal information and prevent misuse of data.
– Implement security measures to safeguard AI systems from cyberattacks and data breaches.

3. Ethical Guidelines:
– Develop and promote ethical guidelines for AI development and usage, including avoiding biases and discriminatory behavior.
– Encourage AI developers to design systems that prioritize user well-being and safety.

4. Fairness and Bias Mitigation:
– Require AI developers to address and mitigate biases in their systems, ensuring that AI services are fair and do not discriminate against specific groups.
– Implement auditing and testing procedures to detect and correct bias in AI algorithms.

5. User Consent and Control:
– Ensure that users are informed about how AI services collect and use their data.
– Give users control over the data they share and how AI services use that data.

6. Certification and Testing:
– Establish certification processes for AI systems, especially those used in critical applications like healthcare, autonomous vehicles, and finance.
– Mandate rigorous testing and validation of AI systems to ensure their reliability and safety.

7. Liability Framework:
– Define liability frameworks that specify who is responsible if AI systems cause harm or make erroneous decisions.
– Consider establishing insurance requirements for AI developers to cover potential liabilities.

8. Continuous Monitoring and Reporting:
– Implement ongoing monitoring and reporting requirements for AI systems in use, especially those in critical domains.
– Create mechanisms for reporting and addressing AI-related incidents and failures.

9. International Cooperation:
– Promote international collaboration and standardization efforts to harmonize AI regulations across borders, as AI is a global technology.

10. Flexibility and Adaptability:
– Recognize that AI technology evolves rapidly, and regulations should be flexible and adaptable to accommodate innovation while maintaining safety.

11. Public Input:
– Solicit input from experts, stakeholders, and the public when formulating AI regulations to ensure they reflect diverse perspectives and needs.

12. Education and Awareness:
– Invest in education and awareness programs to help the public understand AI and its implications, fostering responsible use and informed decision-making.

Regulating AI services is a multifaceted challenge that requires a careful balance between encouraging innovation and ensuring public safety. Regulations should be informed by ongoing research, collaboration among stakeholders, and a commitment to ethical AI development and deployment. Furthermore, it’s essential to strike a balance between regulatory oversight and not stifling the positive potential of AI technology.

Not a bad response for an AI system! ChatGPT cannot vote, but it probably has a louder voice in the U.S. Congress. Maybe we humans will just have to get busy before AI replaces Congress. Hmm, an idea worth considering.