We need a true public space online!

The Internet holds a lot of promise for connecting people around the world.  The problem is that it has been taken over by corporations for their benefit, not necessarily ours.  What is needed is a social platform that is owned by the people that use it, without algorithms pushing content that is negative and often untrue.

A report (https://www.pewresearch.org/internet/2021/11/22/the-future-of-digital-spaces-and-their-role-in-democracy/), released in 2021 by the Pew Research Center, stated, of the 862 expert respondents:

61% said they either hope or expect that by 2035 digital spaces and people’s uses of them WILL change in ways that significantly serve the public good. However, some wrote that this is merely their hope, and others listed one or more extremely difficult hurdles that must be overcome before that outcome can be achieved.

Some of the hurdles are:

  • Social media algorithms are the first thing to fix: Many of these experts said the key underlying problem is that social media platforms are designed for profit maximization, and – in order to accelerate user engagement – their algorithms favor extreme and hateful speech.
  • The general public’s digital literacy needs improvement: People who better understand the impact of the emerging negatives in the digital sphere will become more involved and work to influence and motivate business and government leaders to upgrade public spaces.
  • Humans are self-centered and shortsighted, making them easy to manipulate: People’s attention and engagement in public online spaces are drawn by stimulating their emotions, playing to their survival instincts and stoking their fears.  Any new spaces that might be innovated and introduced can still be flooded with innuendo, accusation, fraud, lies and toxic divisiveness.

With these things in mind, I think many solutions have already been developed that can be used to create a new “Public Square”.  There will probably not be one solution, but several applications or platforms where the public can exchange information, discuss solutions, build consensus, and support implementation of the best ones.  My opinion is that these platforms must be:

  1. Owned by the public: meaning non-profit organizations implement, maintain, and support the platforms, and those organizations are answerable to the public.  This eliminates the profit motive that drives current social platforms to negative results.  Financial support must come from a broad base to reduce concentrations of power and influence.
  2. Free and open-source: meaning the software would be developed in an open and transparent way to improve trust, and would be available to any group of people to use for their benefit at low cost.  This also reduces the lock-in that prevents people from changing platforms easily.
  3. Transparent: any algorithms used to select or promote content must be documented and, ideally, selectable by the user.
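To illustrate point 3, a platform could expose its ranking logic as small, documented, user-selectable functions.  This is a hypothetical sketch (the Post fields, ranker names, and half-life parameter are all invented for illustration), not the code of any existing platform:

```python
from dataclasses import dataclass
import time

@dataclass
class Post:
    title: str
    upvotes: int
    created_at: float  # Unix timestamp, in seconds

def rank_newest(posts):
    """Chronological feed: newest first, no engagement signals at all."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def rank_top(posts, now=None, half_life_hours=12.0):
    """Popularity feed: upvotes decayed by age.

    score = upvotes * 2 ** (-age_in_hours / half_life_hours),
    so a post loses half its score every `half_life_hours` hours.
    """
    now = time.time() if now is None else now
    def score(p):
        age_hours = (now - p.created_at) / 3600.0
        return p.upvotes * 2 ** (-age_hours / half_life_hours)
    return sorted(posts, key=score, reverse=True)

# The documented catalog of feeds the user picks from in their settings.
RANKERS = {"newest": rank_newest, "top": rank_top}
```

Because each ranker is a plain, readable function and the catalog is public, users and auditors can see exactly why a post appears where it does, and can switch feeds at will.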

To these ends, I am implementing a discussion forum to help accomplish these and other goals to improve democracy and society.

Improving Digital Literacy

Digital literacy refers to the ability to find, evaluate, create, and communicate information through digital technologies. It is increasingly crucial in our interconnected world. In the context of protecting democracy, it is particularly focused on:

  1. Information evaluation
  2. Spotting misinformation and disinformation
  3. Understanding data privacy and security
  4. Media creation and sharing
  5. Critical thinking in the digital space:
    • Questioning the motivations behind online content.
    • Recognizing emotional manipulation in digital media.
    • Understanding filter bubbles and echo chambers.

To improve digital literacy:

  1. Education: Support digital literacy programs in schools and communities.
  2. Fact-checking: Familiarize yourself with fact-checking websites and teach others how to use them.
  3. Media diversity: Encourage consumption of diverse media sources to get a balanced perspective.
  4. Tech awareness: Stay informed about emerging technologies and their potential impacts on democracy.
  5. Online etiquette: Promote responsible online behavior and civil discourse.
  6. Platform knowledge: Understand how different social media platforms work, including their algorithms and content moderation policies.
  7. Data rights: Learn about data protection laws and advocate for strong privacy regulations.
  8. Cybersecurity practices: Adopt and promote good cybersecurity habits, like using a password manager, strong passwords, and two-factor authentication.

By improving digital literacy, individuals can better navigate the complex online information landscape, make informed decisions, and contribute to a healthier democratic discourse.

2023 Denver Democracy Summit

The Josef Korbel School of International Studies is pleased to announce that the 3rd Annual Denver Democracy Summit will be held October 25-26, 2023 on the University of Denver campus with proceedings streamed to a worldwide virtual audience. The Denver Democracy Summit will serve as a platform for the DU community and the world’s leading thought leaders on democracy to evaluate ongoing efforts to strengthen democratic norms, values, and institutions. The Summit will include discussions on democratic backsliding, political communication and misinformation, polarization and civil discourse, technology and democracy, and other topics.

More information is available at the Denver Democracy Summit website.

Update 11/2/2023

If you did not have a chance to attend the Denver Democracy Summit, you can watch the presentations on their YouTube channel playlist.

What is Liquid Democracy?

Liquid democracy is a hybrid form of democracy that combines elements of direct and representative democracy. It allows citizens to have a more dynamic and flexible role in the decision-making process. In a traditional representative democracy, people vote for elected officials who make decisions on their behalf. In a direct democracy, individuals vote directly on specific issues or policies. Liquid democracy seeks to bridge the gap between these two approaches.

In a liquid democracy, citizens have the option to either vote on issues directly or delegate their votes to someone they trust. Delegating votes means that an individual can choose another person, often referred to as a proxy or delegate, to vote on their behalf. The key feature of liquid democracy is that delegation can be temporary and can be reassigned at any time. This means that individuals can delegate their votes on specific issues to different people based on their expertise, trust, or personal preferences. For example, if you trust someone’s judgment on environmental issues, you can delegate your vote to them for those matters, while delegating your vote to someone else for economic policy decisions.
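The delegation mechanism described above can be sketched in a few lines of Python.  This is a minimal illustration with invented names, not the algorithm of any real system; implementations such as LiquidFeedback handle many more cases (per-topic delegation, revocation rules, tie-breaking):

```python
def tally(direct_votes, delegations):
    """Count votes under liquid-democracy delegation.

    direct_votes: {voter: choice} for voters who voted themselves.
    delegations:  {voter: proxy} for voters who delegated instead.
    Delegation is transitive; a chain that ends in a cycle or in a
    non-voter contributes nothing.
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        seen = set()
        current = voter
        # Follow the proxy chain until we reach someone who voted directly.
        while current is not None and current not in direct_votes:
            if current in seen:          # cycle detected: vote is lost
                current = None
                break
            seen.add(current)
            current = delegations.get(current)  # None if the chain dead-ends
        if current is not None:
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
    return counts
```

If carol delegates to alice, alice’s direct “yes” effectively counts twice; two voters who name each other form a cycle and cast no vote at all, which is one of the practical hurdles real implementations must address.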

The goal of liquid democracy is to increase citizen participation and engagement in the decision-making process, while also benefiting from the expertise and knowledge of individuals who are well-versed in certain areas. It offers more flexibility and adaptability compared to traditional democratic models. Digital technology and online platforms have made it easier to implement liquid democracy by allowing for secure and efficient voting and delegation.

Liquid democracy is still a relatively new concept and has been experimented with in various contexts, such as political parties (German Pirate Party, Partido de la Red), organizations, and even some governments (Argentina). Its success and practicality can vary depending on the specific implementation and the cultural and political context in which it is used. Some current open-source software implementations are LiquidFeedback and DemocracyOS.

Artificial Intelligence Regulation

Artificial intelligence (AI) is everywhere these days.  AI is used in applications such as self-driving cars (e.g., Waymo, Nuro, and Zoox), some web search engines (e.g., Google Search and Bing), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), generative or creative tools (ChatGPT, Midjourney, and DALL-E), playing games (such as chess and Go), and even writing U.S. legislation!

AI has added tremendous functionality to applications in most industries, but regulation is lagging behind the advances.  Governments are slow to create legislation, for fear of stifling the industry or out of ignorance of the dangers.

I decided to ask ChatGPT what it recommends for AI regulation by asking “How should artificial intelligence services, like ChatGPT, be regulated?” Here is its response:

The regulation of artificial intelligence services, including advanced models like ChatGPT, is a complex and evolving challenge that requires a balanced approach to ensure both innovation and safety. Here are some key considerations and principles that can guide the regulation of AI services:

1. Transparency and Accountability:
– Require AI developers and service providers to be transparent about the capabilities and limitations of their systems. This includes disclosing when users are interacting with AI rather than humans.
– Establish mechanisms for accountability, making it clear who is responsible for the actions and decisions of AI systems.

2. Data Privacy and Security:
– Enforce strict data privacy regulations to protect users’ personal information and prevent misuse of data.
– Implement security measures to safeguard AI systems from cyberattacks and data breaches.

3. Ethical Guidelines:
– Develop and promote ethical guidelines for AI development and usage, including avoiding biases and discriminatory behavior.
– Encourage AI developers to design systems that prioritize user well-being and safety.

4. Fairness and Bias Mitigation:
– Require AI developers to address and mitigate biases in their systems, ensuring that AI services are fair and do not discriminate against specific groups.
– Implement auditing and testing procedures to detect and correct bias in AI algorithms.

5. User Consent and Control:
– Ensure that users are informed about how AI services collect and use their data.
– Give users control over the data they share and how AI services use that data.

6. Certification and Testing:
– Establish certification processes for AI systems, especially those used in critical applications like healthcare, autonomous vehicles, and finance.
– Mandate rigorous testing and validation of AI systems to ensure their reliability and safety.

7. Liability Framework:
– Define liability frameworks that specify who is responsible if AI systems cause harm or make erroneous decisions.
– Consider establishing insurance requirements for AI developers to cover potential liabilities.

8. Continuous Monitoring and Reporting:
– Implement ongoing monitoring and reporting requirements for AI systems in use, especially those in critical domains.
– Create mechanisms for reporting and addressing AI-related incidents and failures.

9. International Cooperation:
– Promote international collaboration and standardization efforts to harmonize AI regulations across borders, as AI is a global technology.

10. Flexibility and Adaptability:
– Recognize that AI technology evolves rapidly, and regulations should be flexible and adaptable to accommodate innovation while maintaining safety.

11. Public Input:
– Solicit input from experts, stakeholders, and the public when formulating AI regulations to ensure they reflect diverse perspectives and needs.

12. Education and Awareness:
– Invest in education and awareness programs to help the public understand AI and its implications, fostering responsible use and informed decision-making.

Regulating AI services is a multifaceted challenge that requires a careful balance between encouraging innovation and ensuring public safety. Regulations should be informed by ongoing research, collaboration among stakeholders, and a commitment to ethical AI development and deployment. Furthermore, it’s essential to strike a balance between regulatory oversight and not stifling the positive potential of AI technology.

Not a bad response for an AI system!  ChatGPT cannot vote, but it probably has a louder voice in the U.S. Congress than most citizens.  Maybe we humans will just have to get busy before AI replaces Congress.  Hmm, an idea worth considering.