Commentary |
How AI could take over elections – and undermine democracy

Gerd Altmann/Pixabay

Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

How Clogger would work

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state of the art in algorithmic behavior manipulation. First, its language model would generate messages — texts, social media posts and emails, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Illustration: Amber/Pixabay
Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a trial-and-error machine-learning approach in which the computer takes actions and gets feedback about which work better, in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
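The trial-and-error loop described above can be sketched with a minimal "multi-armed bandit," one of the simplest forms of reinforcement learning. Everything in this sketch – the variant names, the response rates, the epsilon-greedy rule – is an illustrative assumption, not a description of any real campaign system:

```python
import random

# Minimal epsilon-greedy bandit: trial-and-error choice among a few
# hypothetical message variants, with simulated feedback. The variant
# names and "true" response rates are invented for illustration.
VARIANTS = ["message_a", "message_b", "message_c"]
TRUE_RESPONSE_RATE = {"message_a": 0.02, "message_b": 0.05, "message_c": 0.03}

def run_bandit(steps=50_000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    counts = {v: 0 for v in VARIANTS}    # times each variant was sent
    values = {v: 0.0 for v in VARIANTS}  # running mean of observed reward
    for _ in range(steps):
        if rng.random() < epsilon:       # explore: try a random variant
            choice = rng.choice(VARIANTS)
        else:                            # exploit: send the best so far
            choice = max(VARIANTS, key=values.get)
        # Simulated feedback: 1 if the recipient "responds", else 0.
        reward = 1.0 if rng.random() < TRUE_RESPONSE_RATE[choice] else 0.0
        counts[choice] += 1
        values[choice] += (reward - values[choice]) / counts[choice]
    return values, counts

values, counts = run_bandit()
```

Over many steps, the estimated values converge toward the true response rates, so the machine learns which message "works" without any understanding of the message's content – which is exactly the property the article highlights.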

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.

The nature of AI

Three more features – or bugs – are worth noting.

First, the messages that Clogger sends may or may not be political in content. The machine’s only goal is to maximize vote share, and it would likely devise strategies for achieving this goal that no human campaigner would have thought of.

One possibility is sending likely opponent voters information about nonpolitical passions that they have in sports or entertainment to bury the political messaging they receive. Another possibility is sending off-putting messages – for example incontinence advertisements – timed to coincide with opponents’ messaging. And another is manipulating voters’ social media friend groups to give the sense that their social circles support its candidate.

Second, Clogger has no regard for truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine because its objective is to change your vote, not to provide accurate information.

Third, because it is a black box type of artificial intelligence, people would have no way to know what strategies it uses.

Clogocracy

If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought that these machines were effective, the presidential contest might well come down to Clogger vs. Dogger, and the winner would be the client of the more effective machine.

Photo: Kp Yamu Jayanath/Pixabay
In the future, politicians may win office simply because they have access to the best artificial intelligence technology.

Political scientists and pundits would have much to say about why one or the other AI prevailed, but likely no one would really know. The president would have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.

In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all of the ordinary activities of democracy – the speeches, the ads, the messages, the voting and the counting of votes – would have occurred.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party ideas may have had little to do with why people voted the way that they did – Clogger and Dogger don’t care about policy views – the president’s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power. The president’s actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests or even the president’s own ideology.

Avoiding Clogocracy

It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents could hardly be expected to resist by disarming unilaterally.

Enhanced privacy protection would help. Clogger would depend on access to vast amounts of personal data in order to target individuals, craft messages tailored to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of that information that companies or policymakers deny the machine would make it less effective.

Strong data privacy laws could help steer AI away from being manipulative.

Another solution lies with elections commissions. They could try to ban or severely regulate these machines. There’s a fierce debate about whether such “replicant” speech, even if it’s political in nature, can be regulated. The U.S.’s extreme free speech tradition leads many prominent academics to say it cannot.

But there is no reason to automatically extend the First Amendment’s protection to the product of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in the challenges of today, not the misplaced assumption that James Madison’s views in 1789 were intended to apply to AI.

European Union regulators are moving in this direction. Policymakers revised the European Parliament’s draft of its Artificial Intelligence Act to designate “AI systems to influence voters in campaigns” as “high risk” and subject to regulatory scrutiny.

One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people. For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.

This would be like the advertising disclaimer requirements – “Paid for by the Sam Jones for Congress Committee” – but modified to reflect its AI origin: “This AI-generated ad was paid for by the Sam Jones for Congress Committee.” A stronger version could require: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.” At the very least, we believe voters deserve to know when it is a bot speaking to them, and they should know why, as well.
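The disclosure requirement described above amounts to a simple labeling step. The sketch below is purely hypothetical – the function name and the committee and predicted-lift figures merely echo the article's own illustrative examples, not any real regulatory format:

```python
# Hypothetical sketch of the disclosure requirement discussed above.
# The committee name and predicted-lift figure echo the article's
# examples; nothing here represents an actual legal standard.
def add_ai_disclosure(message, committee, predicted_lift=None):
    tag = f"This AI-generated ad was paid for by {committee}."
    if predicted_lift is not None:
        # Stronger version: disclose why the machine sent this message.
        tag += (f" It was sent because a model predicted it would increase"
                f" your chance of voting for the candidate by {predicted_lift:.4%}.")
    return f"{message}\n\n{tag}"

labeled = add_ai_disclosure(
    "Vote on Tuesday!",
    "the Sam Jones for Congress Committee",
    predicted_lift=0.000002,  # 0.0002%, as in the article's example
)
```

The design point is that the disclosure travels with the message itself, so a voter sees it regardless of which platform delivers the ad.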

The possibility of a system like Clogger shows that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people’s many buttons.


Learn what you need to know about artificial intelligence by signing up for our newsletter series of four emails delivered over the course of a week. You can read all our stories on generative AI at TheConversation.com.

About the author:

Archon Fung, Professor of Citizenship and Self-Government, Harvard Kennedy School and Lawrence Lessig, Professor of Law and Leadership, Harvard University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Can an AI therapist help you through the day?

by Ben Rein

Would you let an AI chatbot be your therapist?

A recent study put this question to the test. Researchers posed about 200 questions from the “Ask the Doctors” page on Reddit to an AI chatbot, placed its answers next to responses from real human doctors, and asked healthcare providers to judge which was better, without knowing which one came from the AI. What do you think happened? The evaluators rated the chatbot’s answers as better 78% of the time and found that they were more likely to be empathetic.
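The evaluation logic in that study boils down to a blinded pairwise preference rate. This sketch uses invented judgments that merely mirror the reported 78% figure, not the study's actual data:

```python
# Sketch of a blinded pairwise evaluation: for each question, a rater
# picks the better of two unlabeled answers (one human, one AI). The
# preference rate is the share of pairs the AI answer won. The sample
# below is invented to mirror the reported figure, not real study data.
def ai_preference_rate(judgments):
    """judgments: iterable of 'ai' or 'human', each the winner of one blinded pair."""
    judgments = list(judgments)
    if not judgments:
        raise ValueError("no judgments to score")
    return sum(1 for j in judgments if j == "ai") / len(judgments)

sample = ["ai"] * 156 + ["human"] * 44  # 156 of 200 pairs = 78%
rate = ai_preference_rate(sample)
```

Blinding matters here: raters who knew which answer was machine-generated might judge it differently, which is exactly the acceptance problem the next paragraphs discuss.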

But this raises a key point: empathy.

Everybody knows that ChatGPT can’t feel emotions. Therefore, it’s not capable of empathy, because it can’t really understand what you feel. And some scientists think that this is where AI loses: Chatbots will never work as therapists because humans won’t accept or appreciate empathy from a robot.


When a real company, Koko, tried using chatbots, it didn’t work because people knew they were chatbots. The patients didn’t care when the chatbot said, “I understand how you’re feeling” because they knew it was an empty, emotionless statement.

But it makes me wonder, if chatbots continue gaining in use and acceptance, and we come to respect them more, this could change. And I’m curious how you’d feel about that.

If 100 years from now, AI chatbots are considered trained psychiatrists, would this be good or bad for society? It might seem ridiculous, but it’s real life. Right now, we essentially hold that decision in our hands. We are the first humans to coexist with these large language models, and we actively vote as consumers—with our clicks and our wallets—to determine the future of AI. In what capacity will we come to embrace AI? Where do we draw the line? It’s something to think about as we navigate this new virtual world. Thank you for your interest, and please follow for more science.


This story originally appeared on OpenMind, a digital magazine tackling science controversies and deceptions.

Building an app can help you grow your business, here's how to do it right

Photo: relexahotels/Pixabay

StatePoint - In today’s world, apps are crucial for business growth and customer experience. They enable shopping, appointment setting and customer service interactions. In fact, around three-quarters of U.S. adults say they buy things online using a smartphone, according to Pew Research, which means if you don’t have an app for your business, you’re leaving money on the table. However, if building one sounds daunting, experts say there is good news: artificial intelligence can help.

“AI enhances app development through code generation, chatbots, process optimization, content creation, user stories and prototype generation. Anyone, even with little to no experience, can quickly and cost-effectively develop an app using AI,” says Sachin Dev Duggal, founder and chief wizard at Builder.ai, an AI-powered composable software platform that allows every business and entrepreneur to become digitally powered.

Despite the relative ease of developing an app harnessing today’s AI tech, it’s nevertheless important to get your app right. With over 77% of users uninstalling an app within the first three days after download, according to WifiTalents, you’ll want to ensure your app provides your users with real value.

So, before building your app, first consider how it will help customers, and how it will help you solve your short- and long-term business objectives. Asking yourself these questions can give you clarity on the type of app you need, how you will fund and maintain your app, and how it will function.

When you are ready to begin development, here are the benefits you can anticipate by using AI to meet your objectives:

  • Rapid development: AI-driven platforms significantly reduce development time.
  • Unlimited customizations: AI app development platforms offer pre-built, customizable modules.
  • High performance: AI creates high-performance apps with fast load times and smooth user experiences.
  • Cost efficiency: AI reduces the need for extensive developer hiring, lowering costs.
  • Error reduction: Human error is a primary cause of failure in software projects – around 66% of which fail – and AI-assisted development can reduce it.
  • Seamless articulation: New AI technology allows you to speak directly with the development platform, enabling you to convey your ideas and instructions effortlessly, making app development more intuitive and efficient.

So, how do you actually use AI to build your app? In the case of Builder.ai, it’s as simple as following these steps:

1. Choose and customize a base template.
2. Review and finalize features.
3. Identify the app’s platform (Android, iOS, desktop) and build a timeline.
4. Establish a payment plan.
5. Match with a product expert for guidance.
6. Review and monitor the app’s progress.
7. Launch your app.
8. Leverage data from your app to optimize business.

To learn more about developing your app with Builder.ai, visit https://www.builder.ai/.

“AI automates repetitive tasks, code generation, bug detection and testing, resulting in shorter development cycles and reduced costs while maintaining high quality. By giving everyone, regardless of their tech knowledge, the power to build applications, we’re removing the barriers that have traditionally stopped individuals and business owners from unlocking their potential,” says Duggal.

League of Women Voters of Illinois hosting lecture on AI and misinformation

CHICAGO – Diane Chang will give a Zoom talk on strategies to protect and secure democracy against threats from social media and AI at a virtual meeting of the League of Women Voters of Illinois (LWVIL) on Wednesday, April 17.

Addressing the rise of misinformation and disinformation — and its impact on our elections — the League of Women Voters of Illinois formed the Mis/Disinformation Task Force in January 2024 with a mission to educate the general public on mis/disinformation.

Diane Chang
Chang, Entrepreneur-in-Residence at the Brown Institute for Media Innovation at Columbia Journalism School and the former head of Election Integrity and Product Strategy at Meta, will discuss her experience building artificial intelligence and consumer technology products that connect people to information, safety, and sustainability. She led Meta’s election integrity and product strategy from 2021 to 2023.

In her current position at the Brown Institute, Ms. Chang is an advisor and consultant to nonprofits in the U.S. and abroad on technology and elections. She has a master’s degree in public policy from the Harvard Kennedy School at Harvard University in Cambridge, Mass.

Organized by LWVIL’s Misinformation and Disinformation Task Force, the event is the second in a series of presentations in which noted authorities discuss topics that inform and educate voters. The webinar begins at 7 p.m. and is free and open to the public. All programs are recorded and made available on the LWVIL website.

Visit lwvil.org/misdis-info for more information or to register.


Some things to keep in mind when you need a law firm

StatePoint Media - Let’s face it, no person or business gets a thrill out of hiring a law firm. Fortunately, peer-reviewed rankings have simplified the process.

Best Lawyers, which has been tracking trends and innovations in the legal industry for more than four decades, serves as a trusted resource for identifying what it takes to be a preeminent law firm in the United States. Their recently released 14th annual rankings of Best Law Firms, found at bestlawfirms.com, provide keen insight not only into the most successful law firms, but also into the key factors to watch for when undertaking the reliably trying task of retaining counsel.

Here is some of Best Lawyers’ advice:

1. Does the firm use the latest technology?

Right now, even the legal profession is abuzz about generative Artificial Intelligence (gen AI) tools. With its ability to parse information more quickly, gen AI offers the immediate potential to automate routine tasks such as research; summarizing long, complex content; and writing first drafts of simple documents such as NDAs. All of which can save both time and money.

And smart firms are closely watching regulations and any risks that this new technology may bring, all while using it for the benefit of the firm and its clients.

2. What do other legal experts think about the way they do business?

There are better options available than just word of mouth when choosing legal representation. After all, hiring a law firm isn’t like choosing which novel to download next. Through Best Lawyers’ research process, a firm’s performance is assessed by its peers, ultimately helping consumers make better-informed decisions.

Why is this important? At its heart, a robust peer-review process like Best Lawyers’ asks legal professionals to answer this key question: “If you had a legal issue and could not represent yourself, what firm would you hire?” This peer-review method is critical, and offers a straightforward way to help identify the most trusted firms.

3. Does the law firm embrace diversity?

Today’s leading law firms know that to be successful, the makeup of their staff should represent the communities they serve. Inclusion is a necessary element of well-rounded representation because a team with different backgrounds and experiences will bring diverse points of view to solving clients’ unique and complex challenges.

Fortunately, in recent years there has been an uptick in law firm diversity. In 2023, 21.6% of attorneys were members of traditionally underrepresented ethnic groups, according to an American Lawyer survey. That’s up more than 20% from the same survey just three years prior.

As a consumer, consider asking a law firm about its diversity track record. In fact, the best law firms will not only expect the question but welcome it.



