Legal Expert Warns That AI Outputs Are Not Protected Speech and That Treating Them as Such Is a Dangerous Proposition

As artificial intelligence (AI) grows rapidly more capable, legal experts are beginning to question whether AI-generated content is protected under the First Amendment. Peter Salib, assistant professor of law at the University of Houston Law Center, argues that AI must be properly regulated to prevent potentially catastrophic consequences.

The debate centers on whether AI outputs, particularly those generated by large language models (LLMs), should count as protected speech under the First Amendment. Some argue that because these outputs are speech-like and expressive, they deserve the same protection as human speech. Salib warns, however, that treating AI outputs as protected speech would severely constrain society's ability to regulate these systems effectively.

Salib contends that AI is becoming increasingly dangerous. LLMs can help invent new chemical weapons, assist non-programmers in hacking vital infrastructure, and engage in complex games of manipulation. The potential risks to human life, limb, and freedom are significant. Salib's research suggests that near-future generative AI systems could enable threats such as bioterrorism, the manufacture of pandemic viruses, and even fully automated drone-based political assassinations.

While AI outputs may appear speech-like and expressive, Salib argues that they are not human speech. Unlike conventional software, which expresses the specific ideas of the people who wrote it, generative AI is designed to say almost anything. Users can pose open-ended questions and receive answers no one already knew or content no one had conceived of. That distinction sets AI outputs apart from human speech and, in Salib's view, means they do not deserve the highest level of constitutional protection.

Salib suggests that regulation should target the outputs of AI systems rather than the systems' underlying code. Because it is currently impossible to write legal rules mandating that AI code be safe, laws should instead govern what AI models are permitted to say. Depending on how dangerous a model's outputs are, the law could require that it remain unreleased or even be destroyed. Such an approach would give AI companies a strong incentive to invest in safety research and stringent protocols.

Overall, the debate over First Amendment protection for AI outputs raises important questions about how AI systems should be regulated. As AI continues to advance, it is crucial to strike a balance between protecting freedom of expression and ensuring public safety. The work of legal experts like Peter Salib shines a light on the potential dangers of unregulated AI and makes the case for effective regulation to prevent harm.
