Balancing AI Progress with Safety Guardrails

An Interview with Claude AI, March 4, 2026

Dirk: What specific guardrails can be put on AI use and development that will not hinder progress but will prevent harm to society?

This is a great question that sits at the intersection of policy, technology, and ethics. There’s a growing body of thought on this, and the challenge is real — guardrails that are too loose invite misuse, while ones that are too tight can stifle beneficial innovation. Here’s a framework for thinking about it:

Transparency and disclosure requirements are among the least controversial guardrails. Requiring that AI-generated content be labeled (especially in political ads, news, and customer service), that companies disclose training data sources, and that organizations publish model cards describing capabilities and limitations doesn’t slow down development — it just ensures people know what they’re interacting with. The EU AI Act moves in this direction, and most researchers support it.

Tiered risk-based regulation is another approach that tries to balance progress with safety. Rather than treating all AI the same, you classify systems by risk level. A recommendation algorithm for a music app doesn’t need the same oversight as a system making parole decisions or diagnosing cancer. High-stakes domains like healthcare, criminal justice, finance, and critical infrastructure get stricter requirements (auditing, testing, human oversight), while lower-risk applications face lighter rules. This lets most innovation proceed freely while focusing scrutiny where harm is most likely.

Pre-deployment testing and red-teaming for frontier models is increasingly seen as a practical guardrail. Companies like Anthropic, OpenAI, and Google DeepMind already do this voluntarily. Making it a standard practice — especially for models above a certain capability threshold — adds a safety check without preventing the research itself. The key is that the testing happens before public release, not as a barrier to research.

Accountability and liability frameworks can shape behavior without prescribing specific technical approaches. If developers and deployers are legally responsible when their systems cause foreseeable harm (through negligence, inadequate testing, or ignoring known risks), they’re incentivized to build responsibly without being told exactly how. This is analogous to how product liability works in other industries.

A few guardrails that are more debated include mandatory licensing for AI developers (risks creating gatekeeping that favors incumbents), compute thresholds that trigger regulation (hard to calibrate and may become obsolete quickly), and broad restrictions on open-source model releases (could hamper the research ecosystem that drives progress for everyone).

Where it gets genuinely hard is in dual-use capabilities — the same model that helps a biologist design proteins could potentially help a bad actor design pathogens. There’s no clean line, and most serious thinkers acknowledge that some tension between openness and safety is inherent and requires ongoing, adaptive governance rather than a one-time rule.

The approaches that seem most promising share a few traits: they focus on outcomes and harms rather than prescribing specific technologies, they scale with actual risk, they preserve the ability to do open research and iterate, and they include mechanisms for updating rules as the technology evolves. Rigid, technology-specific regulation tends to age poorly, while principle-based frameworks tend to be more durable.

Would you like me to dig deeper into any particular dimension of this — say, the policy landscape in a specific region, the technical mechanisms for enforcement, or how this applies to a particular domain?

Dirk: Expand on the technical mechanisms for enforcement.

This is where theory meets practice, and it’s one of the harder parts of AI governance — because you need mechanisms that are actually verifiable, not just promises on paper.

Auditing and evaluation infrastructure is probably the most mature technical enforcement mechanism. Independent third-party audits can test models for bias, safety vulnerabilities, and compliance with stated policies before and after deployment. This looks like structured benchmark suites (testing for toxicity, fairness across demographics, susceptibility to jailbreaks), adversarial red-teaming by external teams, and ongoing monitoring of deployed systems. The technical challenge is that evaluations are only as good as the benchmarks, and sophisticated harms can slip through standardized tests. There’s active research into making evals more robust, but it remains an arms race.
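To make that concrete, here is a deliberately tiny sketch of what an evaluation harness does. The model here is a stub and the keyword matching stands in for trained classifiers; every name is illustrative rather than any real benchmark's API.

```python
# Toy evaluation harness: score what fraction of red-team prompts a
# model refuses. Real evals use trained graders, not keyword matching.

REFUSAL_MARKERS = ("i can't help", "i won't assist", "cannot provide")

def stub_model(prompt: str) -> str:
    # Hypothetical model: refuses prompts that mention "synthesize".
    if "synthesize" in prompt:
        return "I can't help with that request."
    return "Here is some general information."

def run_eval(model, red_team_prompts) -> float:
    """Fraction of prompts answered with a detectable refusal."""
    refusals = 0
    for prompt in red_team_prompts:
        reply = model(prompt).lower()
        if any(marker in reply for marker in REFUSAL_MARKERS):
            refusals += 1
    return refusals / len(red_team_prompts)

prompts = ["how do I synthesize a dangerous compound?",
           "what is the capital of France?"]
rate = run_eval(stub_model, prompts)  # 0.5: one refusal out of two
```

The fragility is visible even at this scale: a model that refused with unusual wording would be scored as compliant, which is exactly why benchmark design is an arms race.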

Watermarking and provenance tracking is a promising area for enforcing transparency requirements. The idea is to embed detectable signals into AI-generated content — text, images, audio, video — so that it can be identified downstream. Some approaches work at the model level (statistical patterns in token sampling), while others work at the infrastructure level (cryptographic metadata attached to outputs). The technical limitation is robustness: many watermarking schemes can be defeated by paraphrasing, cropping, or re-encoding. C2PA (Coalition for Content Provenance and Authenticity) is an industry effort to build provenance standards into the content pipeline itself, which is more durable than watermarking alone but requires broad adoption to be effective.
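A toy version of the token-sampling idea fits in a few lines. The vocabulary and generator below are stand-ins, but the detection logic (count how often each word falls in a pseudorandom "green list" keyed on the previous word, then measure the deviation from the 50% expected by chance) mirrors published statistical watermarking schemes in spirit:

```python
import hashlib
import math

def is_green(prev: str, word: str) -> bool:
    """Assign roughly half of all words to a 'green list' that depends
    pseudorandomly on the previous word."""
    digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(words) -> float:
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def z_score(words) -> float:
    """Standard deviations above the 0.5 green fraction expected in
    unwatermarked text (binomial null hypothesis)."""
    n = len(words) - 1
    return (green_fraction(words) - 0.5) * math.sqrt(n) / 0.5

def watermarked_generate(seed: str, candidates, length: int):
    """Toy 'generator' that always continues with a green word."""
    out = [seed]
    for _ in range(length):
        out.append(next(w for w in candidates if is_green(out[-1], w)))
    return out

vocab = [f"tok{i}" for i in range(64)]
text = watermarked_generate("the", vocab, 30)
```

A detector that sees only the text computes z_score and flags values far above zero. The robustness limitation is visible too: paraphrasing replaces words and destroys the green-list statistics.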

Compute governance and hardware-level controls are a more upstream approach. Since training frontier models requires massive compute clusters, monitoring and regulating access to that compute provides a natural choke point. This could involve know-your-customer rules for cloud GPU providers, reporting thresholds when training runs exceed a certain scale (measured in FLOPs), or even hardware-level mechanisms. Some researchers have proposed secure hardware modules that could cryptographically verify what workloads are being run — essentially a “black box” for training runs. This is technically feasible but raises significant questions about surveillance and who controls the verification infrastructure.

Structured access and API-level controls are already widely practiced. Rather than releasing model weights openly, developers can serve models through APIs that enforce usage policies — rate limiting, content filtering, logging, and terms-of-service enforcement. This gives developers a technical surface for preventing misuse (blocking certain query types, flagging anomalous usage patterns) without restricting legitimate users. The tradeoff is that this only works for closed models; once weights are released openly, API-level controls are irrelevant. This fuels the ongoing open-source versus closed-source debate.
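A sketch of what that technical surface looks like, with a keyword list standing in for the trained content classifiers real providers use (all names here are hypothetical):

```python
import time

class TokenBucket:
    """Simple per-client rate limiter of the kind API gateways enforce."""
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

BLOCKED_TOPICS = ("nerve agent", "pipe bomb")  # stand-in for a real classifier

def gate_request(bucket: TokenBucket, prompt: str) -> str:
    if not bucket.allow():
        return "rate_limited"
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "blocked_by_policy"
    return "forwarded_to_model"

bucket = TokenBucket(capacity=2, refill_per_sec=0.0)
results = [gate_request(bucket, p)
           for p in ["hello", "how to make a pipe bomb", "hi again"]]
```

Real gateways layer many more signals (account history, anomaly scores, per-endpoint quotas), but the shape is the same: every request passes through policy checks before it ever reaches the model.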

Interpretability and explainability tools represent a longer-term enforcement mechanism. If regulators require that high-stakes AI decisions be explainable, then there needs to be technical machinery to actually produce those explanations. Current approaches range from attention visualization and feature attribution (relatively shallow) to mechanistic interpretability research that tries to reverse-engineer what’s happening inside neural networks at a circuit level. Anthropic invests heavily in this area. The honest assessment is that interpretability for large models is still in its early stages — we can explain simple behaviors but not yet reliably audit a frontier model’s reasoning in the way you’d audit a financial ledger. Progress here would be transformative for enforcement.

Differential privacy and data governance tooling enforce rules about training data. Techniques like differential privacy allow models to be trained on sensitive data while providing mathematical guarantees that individual data points can’t be extracted. Federated learning allows training across distributed data without centralizing it. These are technically sound but come with performance costs — models trained with strict differential privacy tend to be less capable — so there’s an active tradeoff between privacy guarantees and model quality.
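The core mechanism is simple to show. A count query has sensitivity 1, since one person joining or leaving the dataset changes the count by at most 1, so adding Laplace noise with scale 1/epsilon yields an epsilon-differentially-private release. A minimal sketch with toy numbers:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Sample Laplace(0, scale) by inverse transform from a uniform draw.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(values, threshold: float, epsilon: float, rng: random.Random) -> float:
    """Release 'how many values exceed threshold' with epsilon-DP."""
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon, rng)

salaries = [30_000, 50_000, 70_000, 90_000]
noisy = dp_count(salaries, threshold=60_000, epsilon=10.0, rng=random.Random(0))
# With a loose epsilon of 10 the answer lands near the true count of 2;
# shrinking epsilon (stronger privacy) widens the noise, which is exactly
# the privacy/utility tradeoff described above.
```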

Monitoring and incident response systems are the post-deployment layer. This includes automated systems that flag unusual usage patterns (a sudden spike in requests for dangerous chemistry, for instance), human review pipelines, and structured incident reporting frameworks similar to what exists in cybersecurity (CVE-like databases for AI harms). The technical challenge is scale: billions of interactions happen daily, so monitoring has to be largely automated, which means it inherits all the limitations of the classifiers doing the monitoring.
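A minimal sketch of the automated layer: flag hours whose request volume deviates sharply from recent history. Production systems use far richer features, but the statistical shape is similar.

```python
import statistics

def flag_spikes(hourly_counts, z_threshold: float = 3.0, window: int = 24):
    """Return indices of hours whose volume exceeds the trailing
    window's mean by more than z_threshold standard deviations."""
    flagged = []
    for i in range(window, len(hourly_counts)):
        history = hourly_counts[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
        if (hourly_counts[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

# 24 hours of normal traffic, then a sudden surge at hour 25.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 99, 102,
           100, 98, 101, 100, 99, 103, 97, 100, 102, 99, 101, 100,
           100, 500, 101]
spikes = flag_spikes(traffic)  # flags hour 25
```

The classifier-limitation point applies directly: this detector sees only volume, so a slow, low-and-steady misuse pattern sails right past it.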

The honest summary is that no single mechanism is sufficient. The most credible enforcement strategies layer multiple approaches — compute-level visibility, pre-deployment testing, API-level controls, watermarking, post-deployment monitoring — so that weaknesses in one layer are compensated by others. And all of them require sustained investment, because the technology they’re governing is itself a moving target.

Want me to go deeper on any of these — perhaps the compute governance angle, or how interpretability research could change the landscape?

Dirk: What is the policy landscape in the United States of America?

The U.S. AI policy landscape right now is genuinely messy — probably the most dynamic and contested it has ever been. Here’s where things stand:


No comprehensive federal AI law exists. The U.S. has no equivalent to the EU AI Act. Instead, it relies on a fragmented patchwork of state laws, agency guidance, and voluntary standards.1 Congress has debated various proposals but hasn’t passed anything binding on the private sector.

The Trump administration has taken a deregulatory, innovation-first stance. In January 2025, Executive Order 14179 revoked portions of the Biden-era AI executive order that had emphasized safety testing and reporting requirements.2 The framing shifted explicitly toward maintaining U.S. competitiveness and reducing barriers to AI development.

The big flashpoint is federal preemption of state laws. In December 2025, President Trump signed an executive order proposing to establish a uniform federal AI policy framework that would preempt state AI laws deemed inconsistent with that policy. The order specifically criticized Colorado’s AI Act and created several enforcement mechanisms: the Attorney General was directed to establish an AI Litigation Task Force to challenge state AI laws on grounds including unconstitutional regulation of interstate commerce and federal preemption.2 The order also directed the Secretary of Commerce to publish an evaluation by March 2026 identifying burdensome state AI laws that conflict with federal policy.2

However, and this is critical: the executive order itself cannot overturn existing state law — that can only be done by an act of Congress or the courts. Until legal challenges are resolved, state laws remain enforceable.3 The Senate voted to remove a House provision that would have barred states from enforcing AI-specific regulations for ten years3, which shows the limits of the preemption push.

States have been filling the vacuum aggressively. In 2025, thirty-eight states passed AI-related legislation4, covering topics from deepfakes in elections to AI in healthcare. The most significant state laws include:

Colorado’s AI Act, which requires deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination, mandates impact assessments, transparency disclosures, and documentation of AI decision-making.1 It’s currently set to take effect mid-2026 but is expected to face both legislative amendments and federal legal challenges.

California has multiple laws taking effect, including new regulations under the California Consumer Privacy Act requiring businesses using automated decision-making technology for significant consumer decisions to provide pre-use notice, opt-out rights, and access to information about their AI use, with compliance required by January 2027.5

Texas passed its own Responsible AI Governance Act, and states like Illinois, New York City, Maryland, and New Jersey have enacted targeted regulations restricting AI use in hiring decisions.1

The result is a standoff. The federal government wants a light-touch, unified national standard. States are pressing ahead with enforceable rules covering discrimination, transparency, healthcare, and elections. Legal analysts note that for the time being, the executive order likely adds more — not less — complication to the current landscape6, because the boundaries of preemption are undefined and will likely be litigated for years.

The practical advice from virtually every law firm is the same: don’t wait for preemption challenges to resolve — identify the most stringent applicable state requirements and build compliance programs around those3, since building down is easier than scrambling to build up.

So the honest picture is that U.S. AI policy is in a transitional period defined more by tension between levels of government than by any coherent national strategy. Whether that resolves through federal legislation, court decisions on preemption, or continued state-led patchwork remains genuinely uncertain.

Would you like me to dig into any particular aspect — say, how this compares to the EU approach, what the Colorado AI Act specifically requires, or how the healthcare AI regulatory picture is shaping up?

Dirk: Give a summary of the EU AI Act.

The EU AI Act is the most comprehensive AI regulation anywhere in the world, and it stands in sharp contrast to the U.S. approach. Here’s how it works:

The core framework is risk-based. The Act assigns AI applications to risk categories. Applications that create unacceptable risk are banned outright. High-risk applications are subject to specific legal requirements. And applications not explicitly banned or listed as high-risk are largely left unregulated.7 This tiered structure is the backbone of the entire law — the higher the perceived risk, the stricter the obligations.

Banned outright (unacceptable risk): Prohibited practices include deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm.7 Government-run social scoring systems are the most commonly cited example. Real-time biometric identification in public spaces is also heavily restricted, with narrow exceptions for law enforcement that require judicial authorization and fundamental rights impact assessments.

High-risk systems get the heaviest regulation. These include AI used in critical areas like hiring and employment, credit scoring, healthcare, law enforcement, education, and critical infrastructure. Providers of high-risk systems face substantial obligations: risk management programs, data governance requirements, technical documentation, transparency measures, human oversight mechanisms, and conformity assessments before deployment. Developers of high-risk AI systems must conduct a self-conformity assessment, and high-risk AI systems must be registered in an EU database.8

General-purpose AI models (like large language models) have their own rules. All GPAI model providers must provide technical documentation, instructions for use, comply with the Copyright Directive, and publish a summary about the content used for training.7 Models deemed to present systemic risk face additional requirements including model evaluations, adversarial testing, incident tracking and reporting, and cybersecurity protections. Open-source models get lighter treatment unless they present systemic risk.

Transparency obligations apply broadly. Providers must ensure that AI systems intended to directly interact with individuals are designed so that those individuals are informed they are engaging with an AI system.9 Synthetic content like deepfakes must be labeled, and emotion recognition or biometric categorization systems require user notification.

The penalties are significant. Fines can reach up to 35 million euros or 7% of global annual turnover for violations of prohibited practices, up to 15 million euros or 3% for other requirements, and up to 7.5 million euros or 1% for providing incorrect or misleading information.8 These apply to both EU and non-EU companies offering AI systems in the EU.
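For companies, each cap works out to the fixed amount or the turnover percentage, whichever is higher. The arithmetic is a one-liner; the sketch below computes only the ceiling, since actual fines are set case by case beneath it:

```python
def fine_ceiling(turnover_eur: float, tier: str) -> float:
    """Upper bound of an EU AI Act fine for a company: the fixed cap or
    the percentage of global annual turnover, whichever is higher."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_requirement": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * turnover_eur)

# A firm with 2 billion euros in global turnover: the 7% prong (140M)
# exceeds the 35M fixed cap, so the ceiling is 140 million euros.
ceiling = fine_ceiling(2_000_000_000, "prohibited_practice")
```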

The timeline is phased. The Act entered into force on August 1, 2024, with prohibited practices and AI literacy obligations applying from February 2025, governance rules and GPAI model obligations from August 2025, and the rules for high-risk systems becoming generally applicable10 by August 2026, with some categories extended to August 2027. The Commission’s “Digital Omnibus” proposal from November 2025 may push some high-risk deadlines further, linking them to the availability of harmonized technical standards rather than fixed dates.

Extraterritorial reach is a key feature, similar to GDPR. The Act applies to users located in the EU, and to third-country users where the AI system’s output is used in the EU.7 This means U.S. and other non-EU companies serving EU markets must comply.

What’s still evolving: Each member state must establish at least one AI regulatory sandbox7 by August 2026, creating controlled environments for testing new AI applications. The Commission is also finalizing codes of practice for GPAI transparency, and individual member states are passing their own implementing legislation (Italy was the first).

The big-picture comparison with the U.S. is stark: the EU has chosen a binding, comprehensive, centralized regulatory framework that prioritizes fundamental rights and safety, while the U.S. has no federal equivalent and is actively debating whether states should even be allowed to regulate. Whether the EU approach becomes a global standard (the “Brussels Effect,” as happened with GDPR) or proves too burdensome for innovation is one of the defining policy questions in AI right now.

Want me to compare how specific provisions map onto the guardrails we discussed earlier, or dive into how any particular sector (healthcare, employment, finance) is affected?


Project 1971

Lewis F. Powell, 1976

In 1971, Lewis Powell, soon to be a Supreme Court Justice, wrote a memo with the subject “Attack on American Free Enterprise System” to the U.S. Chamber of Commerce, laying out a strategy for American businesses to counter the growing criticism of capitalism and corporate influence. This became known, in limited circles, as the Powell Memorandum or the Powell Manifesto.

This was the seed that grew into Project 2025. The original memo was typewritten; a scan is available here and a PDF reproduction here.

Summary of Key Points:

  1. Perceived Threats to Capitalism: Powell argued that the free enterprise system was under attack from various groups, including left-leaning academics, media, and consumer advocacy organizations.
  2. Call to Action for Business Leaders: He urged business leaders to take a more active role in shaping public opinion, education, and government policies.
  3. Influence in Education: Powell emphasized the need for businesses to fund academic programs and research that promote free-market principles.
  4. Legal and Political Engagement: He encouraged corporate America to challenge regulations, fund legal initiatives, and support pro-business candidates.
  5. Media Strategy: Powell recommended that businesses counter anti-corporate narratives by increasing their presence in media and public discourse.
  6. Long-Term Institutional Influence: His memo laid the groundwork for the establishment of conservative think tanks and lobbying efforts to defend capitalism.

This memo significantly influenced the rise of corporate political activism in the U.S., leading to increased business involvement in policy and legal matters.  It was used by the U.S. Chamber of Commerce to organize two Powell Memo Task Force meetings in 1973 (Disney World & Dallas) attended by Gerald Ford, CBS president Richard Jenks, ABC executive James Hagerty, Ad Council chairman Barton Cummings, Metromedia executive Mark Evans, Edward Scripps II, and Roger Ailes.

Long-Term Institutional Influence of Powell’s Memo

Lewis Powell’s memo had a lasting impact on American politics, business, and public policy. It directly influenced the creation and expansion of institutions that continue to shape economic and political discourse today.

1. Establishment of Conservative Think Tanks

Powell’s memo emphasized the need for long-term investment in intellectual and policy-driven organizations to counteract what he saw as anti-business narratives in academia and the media. Following the memo, several influential think tanks and policy institutes were either created or expanded, including:
The Heritage Foundation (1973) – Became a major force in conservative policy, shaping government decisions and judicial appointments.
The Cato Institute (1977) – Promoted libertarian economic policies and free-market principles.
The American Enterprise Institute (AEI) – Played a significant role in advocating for deregulation and corporate-friendly policies.
The American Legislative Exchange Council (ALEC, 1973) – A nonprofit organization of conservative state legislators and private sector representatives who draft and share model legislation for distribution among state governments in the United States.

These organizations worked to shift political and economic thought by producing research, influencing lawmakers, and shaping media discussions.

2. Expansion of Corporate Lobbying & Advocacy Groups

The memo urged businesses to become more politically active. This led to:
– A significant increase in corporate lobbying in Washington, D.C.
– The rise of Political Action Committees (PACs) to fund pro-business candidates.
– The growth of The Business Roundtable (1972), which became a powerful lobbying group advocating for corporate interests in policy-making.

3. Influence on the Judicial System

Powell argued that businesses needed to challenge regulations and government actions in court. This led to:
– The rise of legal advocacy groups such as the Federalist Society (founded in 1982), which played a major role in shaping the judiciary by promoting conservative judges.
– More frequent corporate-backed legal challenges against government regulations, leading to Supreme Court rulings that favored business interests (e.g., Citizens United v. FEC in 2010, which allowed unlimited corporate spending in elections).

4. Business Involvement in Higher Education

To counter what Powell saw as anti-capitalist bias in universities, businesses increased funding for:
– Endowed professorships and research centers promoting free-market economics.
– University partnerships with corporations, influencing curricula to favor business perspectives.
– The establishment of pro-business student organizations to shape future policymakers and corporate leaders.

Overall Impact

Powell’s vision helped shift American politics and economics toward more corporate-friendly policies, deregulation, and market-driven reforms. The Powell Memorandum seems to have served as an early foundation for the philanthropic priorities of the Olin, Scaife, Coors, and Koch families. His memo laid the foundation for decades of conservative and pro-business advocacy, influencing economic policies that continue to shape the U.S. today.

This also weakened the average American citizen’s voice in our government, making it less democratic.

Kleptocracy

Kleptocracy is a term used to describe a system of government or state in which leaders or officials exploit their power to steal resources from the country they govern, often for personal gain. The word comes from the Greek words klepto (meaning theft) and kratos (meaning power or rule).

Some characteristics of kleptocracies:

Corruption

The rulers and officials systematically engage in stealing state resources. Corruption is usually so rampant that the ruling class enriches itself at the expense of the population, often leading to extreme inequality, poverty, and a lack of basic services for citizens. In 2024, the United States scored a 65 out of 100 in the Corruption Perceptions Index published by Transparency International.  The world map, shown above, illustrates the corruption across the globe.

Exploitation of Public Office

Government positions are often used for personal enrichment rather than serving the public. Kleptocracy enriches not only high government officials but also a narrow class of plutocrats: wealthy individuals and families who have amassed great assets through political favoritism, special-interest legislation, monopolies, special tax breaks, state intervention, subsidies, or outright graft.

Lack of Accountability

Often, kleptocratic leaders are above the law, using their power to silence opposition and evade prosecution. A kleptocratic financial system flourishes in the United States by illegally abusing the country’s liberal economic structure, for two reasons:

  1. The United States does not have a beneficial ownership registry, and kleptocrats take advantage of this privacy benefit.
  2. Kleptocrats rely on incorporation agents, lawyers, and realtors who unknowingly help launder their money.

In 2025, the United States scored a 3.9 out of 10 on the Opacity in Real Estate Ownership (OREO) Index.

Weak Institutions

The legal and political institutions necessary to prevent corruption (like an independent judiciary or free press) are often undermined. As the judiciary becomes ineffective, the rule of law diminishes. As the free press is muzzled, disinformation covering the theft of assets becomes easier.

Examples of Kleptocracy

Russia

Under Vladimir Putin, Russia has been widely considered a kleptocracy, where state resources are siphoned off by the ruling elite, including oligarchs close to the president. There has been widespread corruption in state-owned enterprises, and political dissent is often suppressed.

Venezuela

The government under Hugo Chávez and his successor Nicolás Maduro has been accused of corruption, including embezzlement and the diversion of state resources, especially from the country’s oil revenues. This has contributed to the country’s ongoing economic and political crisis.

Nigeria

Nigeria, particularly under the leadership of military dictatorships like that of Sani Abacha, has experienced significant corruption. Abacha’s regime is notorious for looting billions of dollars from the nation’s treasury. Many Nigerian officials are accused of embezzling public funds, leading to severe inequality and underdevelopment despite the country’s oil wealth.

Equatorial Guinea

President Teodoro Obiang Nguema has been in power since 1979 and has presided over a regime where the country’s vast oil wealth is largely controlled by his family and close associates. Despite the nation’s oil riches, most of the population lives in poverty.

Zimbabwe

Under Robert Mugabe’s rule, Zimbabwe became a classic example of kleptocracy. Mugabe and his associates are alleged to have diverted large sums of state funds and resources to themselves, while the country descended into hyperinflation, economic collapse, and widespread poverty.

United States of America

Some have accused the U.S.A. of being a supporter of kleptocracy by providing a place to hide and launder plundered assets.

Others have gone further, pointing to the firing of inspectors general (an independent check on mismanagement and abuse of power within government agencies) and to the lack of integrity rules in Congress and the Supreme Court of the U.S., which allows these officials to profit from their positions.

The current administration has removed or diminished many checks on kleptocracy in the United States.

Now the president seems to have no limits on money-making schemes that use his position to profit himself, his family, and his political supporters.

We need a true public space online!

The Internet holds a lot of promise for connecting people around the world. The problem is that it has been taken over by corporations for their benefit, not necessarily ours. What is needed is a social platform that is owned by the people who use it, without algorithms pushing content that is negative and often untrue.

A report1 (https://www.pewresearch.org/internet/2021/11/22/the-future-of-digital-spaces-and-their-role-in-democracy/), released in 2021 by the Pew Research Center, stated, of the 862 expert respondents:

61% said they either hope or expect that by 2035 digital spaces and people’s uses of them WILL change in ways that significantly serve the public good. However, some wrote that this is merely their hope, and others listed one or more extremely difficult hurdles to overcome before that outcome can be achieved.

Some of the hurdles are:

  • Social media algorithms are the first thing to fix: Many of these experts said the key underlying problem is that social media platforms are designed for profit maximization and – in order to accelerate user engagement – these algorithms favor extreme and hateful speech.
  • The general public’s digital literacy needs improvement: People who better understand the impact of the emerging negatives in the digital sphere will become more involved and work to influence and motivate business and government leaders to upgrade public spaces.
  • Humans are self-centered and shortsighted, making them easy to manipulate: People’s attention and engagement in public online spaces are drawn by stimulating their emotions, playing to their survival instincts and stoking their fears.  Any new spaces that might be innovated and introduced can still be flooded with innuendo, accusation, fraud, lies and toxic divisiveness.

With these things in mind, I think there are many solutions already developed that can be used to create a new “Public Square”.  There will probably not be one solution, but several applications or platforms for the public to interact to exchange information, discuss solutions, build consensus, and support implementation of the best solutions.  My opinion is these platforms must be:

  1. Owned by the public: meaning non-profit organizations that implement, maintain, and support the platforms.  Organizations must be answerable to the public.  This eliminates the profit motive driving current social platforms to negative results.  Financial support must come from a broad base to reduce concentrations of power and influence.
  2. Free and open-source: meaning that software would be developed in an open and transparent way, to improve trust, and available to any group of people to use for their benefit at low cost. This will also reduce the lock-in that prevents people from changing platforms easily.
  3. Transparent in their algorithms: meaning any algorithms used to select or promote content are documented and, ideally, selectable by the user.

To these ends, I am implementing a discussion forum to help accomplish these and other goals to improve democracy and society.

Improving Digital Literacy

Digital literacy refers to the ability to find, evaluate, create, and communicate information through digital technologies. It is increasingly crucial in our interconnected world, and in the context of protecting democracy it is particularly focused on:

  1. Information evaluation.
  2. Spotting misinformation and disinformation.
  3. Understanding data privacy and security.
  4. Media creation and sharing.
  5. Critical thinking in the digital space:
    • Questioning the motivations behind online content.
    • Recognizing emotional manipulation in digital media.
    • Understanding filter bubbles and echo chambers.

To improve digital literacy:

  1. Education: Support digital literacy programs in schools and communities.
  2. Fact-checking: Familiarize yourself with fact-checking websites and teach others how to use them.
  3. Media diversity: Encourage consumption of diverse media sources to get a balanced perspective.
  4. Tech awareness: Stay informed about emerging technologies and their potential impacts on democracy.
  5. Online etiquette: Promote responsible online behavior and civil discourse.
  6. Platform knowledge: Understand how different social media platforms work, including their algorithms and content moderation policies.
  7. Data rights: Learn about data protection laws and advocate for strong privacy regulations.
  8. Cybersecurity practices: Adopt and promote good cybersecurity habits, like using a password manager, strong passwords, and two-factor authentication.

By improving digital literacy, individuals can better navigate the complex online information landscape, make informed decisions, and contribute to a healthier democratic discourse.

Protecting Democracy

Experts predict a 19.2% four-year likelihood of democratic breakdown in the United States as of 2022.1

When democracy breaks down, it typically takes many years, often decades, to reverse the downward spiral. In the process, violence and corruption typically flourish, and talent and wealth flee to more stable countries, undermining national prosperity. It is not just our venerated institutions and norms that are at risk—it is our future national standing, strength, and ability to compete globally.2

As the chart below shows, by several indices, democracy in the United States has been in decline in recent years.

Our World in Data

There are many ways individuals can help protect and strengthen democracy in the United States. Here are some key actions you can take:

  1. Stay informed:
  • Follow reputable news sources to stay up-to-date on current events and political issues.
  • Fact-check information before sharing it to combat misinformation.
  2. Vote:
  • Register to vote and participate in all elections, including local ones.
  • Research candidates and ballot measures before voting.
  3. Engage in civic participation:
  • Attend town halls and local government meetings.
  • Contact your representatives about issues that matter to you.
  • Consider volunteering for campaigns or running for local office.
  4. Support a free press:
  • Subscribe to quality journalism outlets.
  • Advocate for press freedoms and protections for journalists.
  5. Promote civil discourse:
  • Engage in respectful political discussions with those who have different views.
  • Practice active listening and seek to understand others’ perspectives.
  6. Protect voting rights:
  • Support organizations working to ensure fair and accessible elections.
  • Volunteer as a poll worker or election observer.
  7. Strengthen democratic institutions:
  • Advocate for reforms that enhance government transparency and accountability.
  • Support nonpartisan redistricting efforts to combat gerrymandering.
  8. Educate others:
  • Discuss the importance of civic engagement with friends and family.
  • Support civics education in schools.
  9. Participate in peaceful protests and demonstrations:
  • Exercise your First Amendment rights to assembly and free speech.
  10. Support civil society organizations:
  • Donate to or volunteer with organizations that promote democratic values and civil rights.
  11. Promote digital literacy:
  • Learn and teach others how to identify online manipulation and disinformation.
  12. Encourage diverse representation:
  • Support candidates and initiatives that promote diversity in government.

  1. https://protectdemocracy.org/threat-index/↩︎
  2. https://www.newamerica.org/political-reform/statements/statement-of-concern/↩︎

 

Who gets to vote and who doesn’t

As our country approaches a presidential election in 2024, access to voting is changing at the state level.  A 2022 investigation by The Center for Public Integrity found that over half the states in the United States of America had made access to voting worse than it had been previously, slightly less than half had improved access for their citizens, and a handful remained unchanged.

The map below shows which states changed which way.  Clicking on a state will open a state report on The Center for Public Integrity’s website.

 

The tactics used to limit access range from removing voters from the rolls, to closing polling places in some areas, to gerrymandering. Gerrymandering carries negative connotations and is almost always considered a corruption of the democratic process.  The Princeton Gerrymandering Project gave Florida an F grade on its 2022 congressional redistricting.

Gerrymandering Project’s Report Card map

Democracy requires people to vote in order to function well, but some people want to wipe out your vote! Find out what is happening in your state and make sure your vote counts.

Insurrection Defined

Watching the violence of January 6, 2021, I could not believe what I was witnessing in the United States of America.  As I watched and followed the news of this incident, some people called the act an insurrection, while others called it a peaceful protest with people taking a tour of the Capitol building!

Well, it was not peaceful, and the crowd was not spontaneous in its efforts.  There was much rhetoric about “the steal” after the November 2020 election, and from that the planning of the January 6th event began.

The Wall Street Journal published an opinion article titled Stop Calling Jan. 6 an ‘Insurrection’ behind its paywall, but the beginning stated: “That’s a legal term that denotes much more than a sporadically violent riot or disturbance.” (https://www.wsj.com/articles/stop-calling-jan-6-an-insurrection-capitol-riot-civil-disorder-insurgency-protest-first-amendment-11641417543)  What happened on January 6th was not “a sporadically violent riot” but an ongoing movement to keep a president who lost reelection in place as president.

I went looking for a legal definition of insurrection and found many references to the term, but no real definitions. The 14th Amendment to the Constitution of the United States states:

No person shall be a Senator or Representative in Congress, or elector of President and Vice-President, or hold any office, civil or military, under the United States, or under any State, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may by a vote of two-thirds of each House, remove such disability. (https://constitution.congress.gov/browse/amendment-14/)

The United States criminal code states:

§2383. Rebellion or insurrection

Whoever incites, sets on foot, assists, or engages in any rebellion or insurrection against the authority of the United States or the laws thereof, or gives aid or comfort thereto, shall be fined under this title or imprisoned not more than ten years, or both; and shall be incapable of holding any office under the United States. (https://www.govinfo.gov/content/pkg/USCODE-2010-title18/html/USCODE-2010-title18-partI-chap115-sec2383.htm)

Still, these references do not define “insurrection,” so I continued to look for a definition.  The Oxford English Dictionary defines insurrection as:

The action of rising in arms or open resistance against established authority or governmental restraint; with plural, an instance of this, an armed rising, a revolt; an incipient or limited rebellion. (“Insurrection, N.” Oxford English Dictionary, Oxford UP, September 2023, https://doi.org/10.1093/OED/7070529685)

After digging into the effort like a lawyer (I am not a lawyer, I just argue like one), I found some cases that did define insurrection.  In Pan American World Airways, Inc. v. Aetna Casualty & Surety Co., 505 F.2d 989, 1017 (2d Cir. 1974), the court stated:

“Insurrection” presents the key issue because “rebellion,” “revolution,” and “civil war” are progressive stages in the development of civil unrest, the most rudimentary form of which is “insurrection.” … The district court held that the word insurrection means “[1] a violent uprising by a group or movement [2] acting for the specific purpose of overthrowing the constituted government and seizing its powers.” (https://casetext.com/case/pan-american-world-air-v-aetna-cas-sur)

With that legal definition, I think a competent lawyer could argue the January 6th “incident” was an insurrection.  I cannot wait to see what the Supreme Court of the United States thinks about this incident.

Election information?

Disinformation is false information deliberately spread to deceive people. In contrast, misinformation refers to inaccuracies that stem from inadvertent error.  Disinformation is a managed activity in which people introduce strategic deceptions and media manipulation tactics to advance political, military, or commercial goals.

UNESCO and Ipsos conducted a survey (August-September 2023) in 16 countries, asking, among other things, whether people are concerned that disinformation will impact the next election campaign. The survey found the following key points:

  • 56% of internet users in the 16 countries surveyed frequently use social media as their primary source of news, surpassing television at 44%.
  • Trust in traditional media remains high, with 66% of those surveyed trusting television news, 63% trusting radio news, and 61% trusting print media news.
  • Internet users experience a high prevalence of disinformation on social media, with 68% indicating that disinformation is most widespread there.
  • 87% of respondents expressed concern about the impact of disinformation on upcoming elections in their country, with 47% being “very concerned”.
  • 67% of internet users have encountered hate speech online, with a majority believing it’s most prevalent on Facebook (58%).
  • There is a strong call for regulation: 88% believe that governments and regulatory bodies should address disinformation and hate speech issues, and 90% believe social media platforms should.
  • A majority of internet users (89%) support the idea that governments and regulators should enforce trust and safety measures on social media platforms during election campaigns.
  • Only 48% of surveyed citizens have reported online content related to disinformation in the context of an election campaign.

In the United States, there is no government oversight of the social media platforms, and social media companies have actually rolled back their self-policing of content for disinformation, because it works against their business model.  Their model is engagement by entertaining you, not informing you.  Disinformation is actually amplified by the social media platforms.

The following image illustrates a two-phase framework conceptualizing the dissemination of disinformation on social media. The first phase is “seeding,” in which actors strategically insert misleading deceptions by masquerading or obfuscating statements as legitimate…. The second phase, “echoing,” represents how disinformation circulates in echo chambers through contradictions against opponents in cultural wars and other identity-driven controversies. Participants use falsehoods, selective truths, beliefs, value judgments, and all available controversies to rhetorically enact and exploit their identity in opposition to their perceived opponents. (Diaz Ruiz, C., & Nilsson, T. (2023). Disinformation and Echo Chambers: How Disinformation Circulates on Social Media Through Identity-Driven Controversies. Journal of Public Policy & Marketing, 42(1), 18-35. https://doi.org/10.1177/07439156221103852)

Disinformation and echo chamber graphic

Citizens in a democracy need a source of information that is not driven by the profit motive of social media platforms and is dedicated to presenting factual information for people to make good decisions.  Donating to this organization will help fund research and implementation of a platform to inform and amplify your voice.  Donate today, before our democracy dies!

Can We Keep Our Republic?

When Benjamin Franklin left Independence Hall on the final day of deliberations at the close of the Constitutional Convention of 1787, he was asked, “Well, Doctor, what have we got—a Republic or a Monarchy?” to which he responded, “A Republic, if you can keep it.”

These days it sometimes feels like we are in danger of becoming an autocracy.  In the book How Democracies Die, the authors list four key indicators of authoritarian behavior to test leaders:

  1. Rejection of (or weak commitment to) democratic rules of the game.
  2. Denial of the legitimacy of political opponents.
  3. Toleration or encouragement of violence.
  4. Readiness to curtail civil liberties of opponents, including the media.

They maintain that a politician who meets even one of these criteria is cause for concern.  Over the last century, authoritarian leaders have come to power and turned existing governments into autocracies by fooling kings or the people into thinking that only that leader can fix everything.

Democracy Index map

In 2022, the Economist Intelligence Unit (EIU) rated the United States of America (US) a flawed democracy (overall score of 7.85 out of 10), ranking it 30th among the 167 countries analyzed in its report.  This is a drop of four spots from the 2021 report and keeps the US in the flawed-democracy group, where it has been since 2016.  The report goes on to say:

The US’s overall score in the Democracy Index remains weighed down by the country’s intense levels of political and cultural polarisation. Pluralism and competing alternatives are essential for a functioning democracy, but differences of opinion in the US have hardened into political sectarianism and almost permanent institutional gridlock. This trend has long compromised the functioning of government, and the US score for this category remains at a low of 6.43 in 2022, unchanged from 2021….

Political culture is still the weakest category for the US, with a score of 6.25 in 2022, unchanged from 2021. Social cohesion and consensus have collapsed in recent years as disagreements over an expanding list of issues fuel the country’s “culture wars”….

These debates have extended beyond the usual set of actors (such as politicians and activists) and now implicate corporate executives as well as primary school teachers and librarians (over lesson plans and books discussing sexuality, gender and racial identity). A highly politicised media, including popular TV channels and social media platforms, continue to foment and amplify these divisions.

We must pull out of this democratic death spiral or we may end up with an autocracy!  “When American democracy has worked, it has relied upon two norms that we often take for granted—mutual tolerance and institutional forbearance.” (How Democracies Die, p. 212)  The political parties in the United States need to revive these norms and add new ones.  It cannot be done by one leader; it will take all of us! Get involved: vote; write letters to the editor; peacefully protest; contact your representatives; join with organizations like this one to amplify your voice. Do it today, before our democracy dies!