The group says AI companies must prove their AI is safe

The Zero Trust AI Framework: Toward a Common Understanding Among AI Companies, Politicians, and the US Government

Accountable Tech suggested several bright-line rules, or policies that are clearly defined and leave no room for subjectivity, as lawmakers meet with IBM and other artificial intelligence companies.

The group sent the framework to politicians and government agencies mainly in the US this month, asking them to consider it while crafting new laws and regulations around AI.

The Zero Trust AI framework also seeks to redefine the limits of liability shield laws like Section 230 so that generative AI companies are held liable if their models spit out false or dangerous information.

“We wanted to get the framework out now because the technology is evolving quickly, but new laws can’t move at that speed,” Jesse Lehrich, co-founder of Accountable Tech, tells The Verge.

Lehrich pointed to the Federal Trade Commission’s investigation into OpenAI as an example of existing rules being used to discover potential consumer harm. Other government agencies have also warned AI companies that they will be closely monitoring the use of AI in their specific sectors.

Researchers have warned about discrimination and bias in artificial intelligence for years. A recent Rolling Stone article charted how well-known experts such as Timnit Gebru sounded the alarm on these issues for years only to be ignored by the companies that employed them.

AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group

The idea behind Section 230 makes sense, according to Lehrich, but there is a difference between a bad review posted on a website and a libelous statement generated by an AI model. Section 230 was adopted in part to shield online services from liability for defamatory content posted by users, but there is little precedent on how it applies to false statements produced by a company’s own AI model.

The bright-line rules include prohibiting AI use for emotion recognition, predictive policing, facial recognition for mass surveillance in public places, social scoring, and fully automated hiring, firing, and HR management. The group also wants to prohibit collecting unnecessary sensitive data for a given service, as well as collecting such data in fields such as education and hiring.

Accountable Tech also urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial AI services, to limit the impact of Big Tech companies on the AI ecosystem. Microsoft has invested heavily in OpenAI, the best-known generative AI company, and Google released its large language model Bard and is developing other AI models for commercial use.

The group proposes a method similar to one used in the pharmaceutical industry, where companies submit to regulation before deploying an AI model to the public and to ongoing monitoring after commercial release.

The nonprofits don’t call for a single regulatory body; splitting up the rules this way could make them more flexible, or it could make them more cumbersome to enforce.

Lehrich says it’s understandable that smaller companies might balk at the amount of regulation the group seeks, but he believes there is room to tailor policies to company size.

Source: AI companies must prove their AI is safe, says nonprofit group

Self-Determination and Artificial Intelligence in the European Union: A WIRED Opinion on AI Ethics, Privacy, and Content Selection

The opinion piece argues that the artificial intelligence supply chain has distinct stages, and that each phase calls for its own design requirements.

TikTok recently announced that its users in the European Union will soon be able to switch off its infamously engaging content-selection algorithm. The EU’s Digital Services Act (DSA) is driving this change as part of the region’s broader effort to regulate AI and digital services in accordance with human rights and values.

Nita Farahany, the Robinson O. Everett Professor of Law and Philosophy at Duke University, is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.

Tax incentives and funding could also fuel innovation in business practices and products that bolster cognitive liberty. Leading AI ethics researchers emphasize that an organizational culture prioritizing safety is essential to counter the many risks posed by large language models. Governments can encourage this by offering tax breaks and funding opportunities to companies that collaborate with educational institutions to create AI safety programs, and by using similar incentives to support research and innovation in the field of artificial intelligence.

Technology companies should also adopt design principles embodying cognitive liberty. Options like the settings on the TikTok platform and greater control over notifications on Apple devices are steps in the right direction. Other features that enable self-determination, including labeling content with “badges” that specify whether it is human- or machine-generated, or asking users to engage critically with an article before resharing it, should become the norm across digital platforms.


A nonprofit group called Accountable Tech has proposed a Zero Trust AI framework to politicians and government agencies in the US. It said the framework seeks “a common understanding” among artificial intelligence companies, politicians, and the US government. It also called for large cloud providers to be prevented from owning or having a beneficial interest in large commercial AI services.