by Paul Arnote (parnote)
A "new genie" has escaped the bottle. A new prince occupies the castle. Elvis has left the building. The barn door has been bolted after the cows and horses have left.
No matter how you frame it, talk of "regulating" (read: legislating) the new, emerging and proliferating artificial intelligence (AI) technology might just be too little, too late.
I'll talk mostly about efforts in the U.S. to "regulate" the explosion of AI across the computing landscape, simply because this is where I am, and where the information I'm exposed to is most focused. But, we'll also talk a bit about the E.U., since they offer the best head-to-head comparison with U.S. efforts.
But first, we need to look at a bit of recent history.
A History Of Non-Action
"Social media" popped up, seemingly overnight, around a quarter century ago. Some of those early sites are nothing more than a blip on the history of the internet (think MySpace, followed by Google+ and many others).
Almost as fast, social media sites became the de facto "town hall" for the internet. Just as you might expect, that came with both good and bad. Although it allowed users from all around the globe to connect and share ideas and common interests, it also provided a fertile breeding ground for the tribal mentality that besieges and divides society to this day. Users would gather with those who shared their views, excluding or running off those who didn't. So, the "town hall" became a town square, with each corner hosting a different group of users, each with differing views. The tribal mentality effectively eliminated any healthy discussion among the "warring tribes," and shut down any chance of these groups ever finding common ground. Instead, they just stand in their corners and shout at one another, with neither side listening to the other.
Of course, signing up for and using these social media sites is free. But little in this world is truly "free." There's always a price, albeit often a hidden one. In the case of social media sites, the cost is well hidden. They sell advertising space on their sites, all while vacuuming up every last morsel of your personally identifiable private information to sell to advertisers, who then use that information to target you with advertising tailored to your interests. Your personally identifiable private information is the currency that fuels social media.
In the U.S., legislators and others who oversee regulations did NOTHING to regulate either social media or the collection of your personally identifiable private information. Their inaction was louder than the song of crickets while camping in the remotest parts of nowhere.
Meanwhile, across the Atlantic, the E.U. had the cojones and foresight to enact the General Data Protection Regulation, a.k.a. the GDPR. The law, which went into effect five years ago on May 25, 2018, effectively put control of personally identifiable private information back into the hands of its owner, and put limits on data collection and how long that data can be retained. Even though the E.U. dragged its feet on addressing the issue, the eventual passage of the GDPR puts it light years ahead of the U.S., where data and privacy protections are pretty much non-existent.
Oh, sure, some states have tried to address data collection and privacy concerns. One such state is California. But, for the most part, such laws don't carry the weight of federal laws, nor sufficient penalties to make them effective, rendering them little more than lip service and public spectacle. Mostly, it's just political showmanship so those elected officials can stay in office, collect votes for re-election, and say "look at what I've done for you!"
It's one thing to pass laws. It's entirely another thing to enforce them. Fining corporations that make multiple billions of dollars every month a few million dollars in "penalties" is merely chump change. Often, it's cheaper for the corporations to pay the fines than it is to institute changes that would ultimately and significantly lower their monthly profits.
Plus, you have to decide who will enforce the laws. Will you create a new enforcement entity, or (as is most likely the case) will you saddle an already under-staffed, under-budgeted existing entity with the task of enforcing the new laws? It's easy to figure out which path most states take. The second path typically results in under-enforcement of the data collection and privacy laws, with only the most egregious violators prosecuted as an example to other violators or would-be violators.
And then you have the companies affected by such laws helping to write them, while pouring a ton of money into lobbying against any legislation that "goes against" their path to riches. Now, if you were going to help write a law affecting your line of work, of course you'd want to make sure it's as advantageous to your profits and bottom line as possible. That is exactly what happens in every single instance where those affected help write the laws that regulate their behavior.
Now, The Emergence Of AI
Without a doubt, AI is having its moment in the spotlight. Stories about it are in the "news" everywhere. I can hardly go a day without reading yet another story about how much of an asset AI is going to be, or how it will lead to the destruction of society, and everything in between. There seem to be as many opinions about the impact of AI as there are people writing articles about it. In the U.S., AI's moment in the spotlight is way ahead of any attempts to regulate it. I mean, data collection and privacy haven't even been addressed, and here is yet another new technology that demands attention.
It's not that legislators haven't tried or are opposed to regulating AI. It's more like they're all revved up in their sand buggy, but all they're doing is spinning their wheels.
Sure, Congress has held hearings. The White House has issued "policy statements." But little to nothing has been done to put up the "guardrails" that so many are calling for. One of the ideas explored is "mandatory" disclosure when information has been generated by AI. That is a good idea, just so long as the information originates within the U.S.'s jurisdiction, and the "rules" are followed. A quick drive down the highway should show you how well people follow rules. What happens when that information is generated in one of the countries responsible for abuses of technology, like North Korea, Russia, or the People's Republic of China? All three countries are "safe havens" for threat actors, and many of those threat actors are state sponsored.
Three prominent AI experts have testified before the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law (click on the following links to read their testimony). One was Gary Marcus, professor emeritus at New York University, who has long been involved with AI and has helped start AI companies. His testimony highlighted the worries and risks that come with AI, and the need for companies and governments to work together to minimize those risks while keeping AI accessible to all people.
Another was Christina Montgomery, Chief Privacy & Trust Officer for IBM. Keep in mind that it was an earlier IBM AI, known as Watson, that won the TV game show Jeopardy! She spoke at the hearings to express how she, as co-chair of IBM's AI Ethics Board, sees the necessary AI guardrails working.
Probably the most high-profile person to give testimony at the hearing was Sam Altman, CEO and co-founder of OpenAI. It is OpenAI that has produced the latest AI products that have the computing world all excited, ChatGPT and DALL-E 2. He discussed the lengths to which OpenAI goes to ensure that its AI products are safe and appropriately restrained.
Other tech leaders, among them Elon Musk and Steve Wozniak, have called in an open letter for a six-month "pause" on further AI development, to give industry, regulators and legislators a chance to catch up with constraints and guardrails for AI. The letter had nearly 30,000 signatories at the time of this writing. Still other tech leaders want to continue AI development full steam ahead, and call the concerns of Musk et al. unfounded. Former Google CEO Eric Schmidt is one tech leader who doesn't support a six month A.I. pause 'because it will simply benefit China,' according to an article on Fortune.
Meanwhile, the E.U. is considering far-reaching legislation on artificial intelligence (AI), according to an article on the World Economic Forum website. The fact that they are even considering legislation, which has already been drafted, puts the E.U. years ahead of the U.S. This is especially so when you factor in the protections already in place and afforded by the GDPR.
On the day that Elon Musk et al. called for a pause in the "dangerous race" to develop ever more powerful AI, the UK government published its long-awaited "Pro-innovation approach to AI regulation" white paper, according to an article on Lexology.
AI is the topic du jour in most of the major countries with a reliance on tech. India is proposing its own legislation, in addition to calling for an international approach to regulating AI. China is looking at a broad approach that would affect any AI company with the ability to reach Chinese internet users.
Why Regulating AI Is So Difficult
THAT is the $25,000,000 question. As you can imagine, there isn't any single answer.
Even with the CEOs of AI companies themselves asking for regulation, it isn't that clear cut. The problem is how to regulate AI without stifling development, deployment or access.
Congress (I use that term collectively, for the House and the Senate) is largely ignorant about what AI is, what it's used for, how it works, and so on. Only ONE member of Congress has a master's degree in artificial intelligence, and that is Representative Jay Obernolte of California. To address this knowledge gap, House Speaker Kevin McCarthy has arranged a "class" for all members of Congress, regardless of party affiliation, according to an article from Fox News. Lawmakers who attend will have the chance to hear from two AI experts from MIT.
But this is also the same group that really has no idea how the internet works, how email works, or how any other modern tech works, for that matter. These lawmakers are dinosaurs when it comes to tech issues. So, how can they effectively write legislation to guard against abuses of tech? This is probably the number one reason that a transparent partnership between lawmakers and the tech industry is (or should be) the ONLY path forward.
According to an article on Fox News, in 2022 the House Energy and Commerce Committee passed the American Data Privacy and Protection Act (ADPPA), a bill aimed at boosting data privacy rights that would also play a big role in regulating emerging AI systems. The ADPPA won almost unanimous support from both parties last year, and continues to be supported by companies that are eager to build trust in their AI products and believe a federal regulatory structure will help them get there. BSA/Software Alliance represents dozens of companies, including Microsoft, Okta, Salesforce and others, that build software and AI tools that companies use to run their businesses. BSA is working closely with the committee to get a version of the bill passed this year, one that it hopes can be approved in a full House vote.
Then there's the concern about how much lobbyists and special interest groups might influence any potential legislation. The lobbyists and special interest groups will work tirelessly to water down any potential penalties, as well as any meaningful constraints. That. Is. Their. Job. The end result is typically legislation and regulations that favor the corporations, and the people they're intended to protect be damned. The promise of corporate/industry campaign donations for re-election coffers goes a long way towards getting the attention of a lawmaker who is ultimately concerned about re-election and lengthening his or her stay in office.
Another concern about AI is eliminating bias in its use and responses. That might be a bit difficult, since AI is likely to possess the same biases as those who program it. In other areas, AI has been caught making stuff up (lying), and it's a bit difficult to discern what is factual and what is fiction.
There is also the fear that AI could be harnessed to perform character assassinations on individuals. Without full disclosure, it would be almost impossible to discern whether something actually happened or not. Given today's societal and political divisions, and with a major presidential election just around the corner, AI claiming that something happened, or that someone said something, when neither actually occurred, is a HUGE concern. Without proper and adequate safeguards, AI could contribute to even greater divisions at a time when it's already extremely difficult to separate information from disinformation.
Summary
In last month's issue of The PCLinuxOS Magazine, we already talked about how frighteningly real images created by DALL-E appear, and how easy it is to wipe out an image's EXIF info to help blur that line between reality and fantasy.
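To illustrate just how trivial that is, here's a minimal sketch in Python (assuming the Pillow imaging library is installed; the file names are made up for the example) that re-saves a photo with only its pixel data, leaving every scrap of EXIF metadata behind:

# Minimal sketch: strip EXIF metadata by copying only the pixel data.
# Assumes the Pillow library is installed; "original.jpg" is a placeholder name.
from PIL import Image

img = Image.open("original.jpg")        # open the source photo
clean = Image.new(img.mode, img.size)   # brand-new image, no metadata attached
clean.putdata(list(img.getdata()))      # copy pixels only, not the EXIF block
clean.save("no_exif.jpg")               # the saved copy carries no camera info

A handful of lines, or a single run of a command-line tool, and the camera model, timestamp, and GPS coordinates that might help verify an image's origin are simply gone.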
Because of their inaction on other tech trends, I personally don't hold out much hope for U.S. lawmakers to "get it right." Gridlock and inaction seem to be their ultimate goal, at least judging by their actions (or inaction). Only time will tell. We should all hope that they act before AI is used catastrophically or causes irreparable harm.
The whole issue about regulating AI is evolving at a lightning pace. We're all going to have to stay tuned and pay close attention. I suspect I'll be writing more about this in the very near future.
All images by geralt, on Pixabay.