
The Push for Robust AI Regulation and Governance: Survey Highlights Concerns About Data Privacy and Copyright Laws

U.S. business leaders are increasingly calling for robust AI regulation and governance, according to a new survey conducted by The Harris Poll on behalf of data intelligence company Collibra. The survey highlights growing concerns among business leaders about data privacy, security risks, and the ethical use of artificial intelligence technologies.

One of the key findings of the survey is that 84% of data, privacy, and AI decision-makers support updating U.S. copyright laws to protect creators' work from misuse by AI. This reflects the tension between rapid technological advancement and outdated legal frameworks. AI has disrupted the relationship between technology vendors and content creators, and companies are rolling out AI tools at an accelerated pace. As a result, there is pressure to redefine "fair use" and bring copyright law in line with 21st-century technology. Content creators deserve more transparency, protection, and compensation for their work, because data is the backbone of AI and every model depends on high-quality, trusted data.

The survey also revealed strong support for compensating individuals whose data is used to train AI models: 81% of respondents backed the idea of Big Tech companies providing such compensation, signaling a shift in how personal data is valued in the AI era. As companies shift their focus from AI talent to data talent, the line between content creators and data citizens will blur, making fair compensation and protection even more important.

In terms of AI regulation, the survey found a preference for federal and state-level regulation over international oversight. Individual states such as Colorado have already implemented their own AI regulations in the absence of comprehensive federal guidelines. Larger firms are more likely than smaller businesses to back federal and state regulation, a gap that may stem from resource constraints and a weaker grasp of AI's real-world applications among small businesses.

The survey also highlighted a trust gap: respondents expressed high confidence in their own companies' AI direction but lower trust in government and Big Tech. Privacy concerns and security risks were identified as the biggest threats facing AI regulation in the U.S., with 64% of respondents citing each. Companies like Collibra are developing AI governance solutions to address these issues and minimize data risks.

As businesses prioritize AI training and upskilling, the job market is likely to be reshaped in the coming years. Looking ahead, key priorities for AI governance in the United States include treating data as the most valuable currency, creating a trusted framework, preparing for the "Year of Data Talent," and prioritizing responsible access before responsible AI. Data governance should focus not only on the quantity of data but also on its quality.

In conclusion, while businesses are embracing AI technologies, they are also aware of the potential risks and are looking to policymakers to provide clear guidelines for responsible development and deployment. The future will see intense debate and negotiation as stakeholders work to create a regulatory environment that fosters innovation while protecting individual rights and promoting ethical AI use. Companies of all sizes will need to prioritize robust data governance and AI ethics to navigate the challenges and opportunities ahead.
