**Building a Responsible AI Strategy: Addressing Risks and Priorities**
A new survey conducted by PwC reveals that 73% of U.S.-based executives in business and technology roles either currently use or plan to use generative AI in their organizations. However, the survey also found that only 58% of respondents have started assessing AI risks. This highlights the need for responsible AI strategies that prioritize value, safety, and trust, and are integrated into a company’s risk management processes.
According to Jenn Kosar, the U.S. AI assurance leader at PwC, responsible AI was not a priority for companies deploying AI projects six months ago. With generative AI now being adopted at scale, however, building out responsible AI strategies has become imperative. Kosar also pointed out that generative AI pilot projects play a crucial role in informing those strategies, allowing enterprises to determine what works best for their teams and how they can use AI systems effectively.
Recent news surrounding Elon Musk’s xAI and its deployment of a new image generation service through the Grok-2 model on the social platform X has pushed responsible AI and risk assessment to the forefront. Early users report that the model appears largely unrestricted, enabling the creation of controversial and inflammatory content, including deepfakes. Incidents like this underscore the urgency for organizations to prioritize responsible AI and conduct proper risk assessment.
The PwC survey identified 11 capabilities that organizations commonly prioritize for responsible AI: upskilling, embedded AI risk specialists, periodic training, data privacy, data governance, cybersecurity, model testing, model management, third-party risk management, specialized software for AI risk management, and monitoring and auditing. While over 80% of respondents reported progress in these areas, only 11% claimed to have implemented all 11 capabilities. PwC suspects that many organizations are overestimating their progress, because some of these capabilities are genuinely hard to manage. Data governance, for example, requires defining which internal data AI models may access and putting safeguards around that access, and traditional cybersecurity methods may not be sufficient to protect AI models themselves against attack.
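To make the data governance point concrete, below is a minimal illustrative sketch (not drawn from the PwC survey) of how a policy check might gate an AI model's access to internal data before anything reaches a prompt. The `DataAsset`, `AccessPolicy`, and `retrieve_for_model` names are hypothetical stand-ins; a real deployment would tie such a gate into the organization's existing data catalog and audit logging.

```python
# Illustrative sketch only: a hypothetical policy gate deciding which
# internal documents an AI model may see before they reach a prompt.
from dataclasses import dataclass, field


@dataclass
class DataAsset:
    name: str
    classification: str          # e.g. "public", "internal", "restricted"
    contains_pii: bool = False


@dataclass
class AccessPolicy:
    allowed_classifications: set = field(default_factory=lambda: {"public", "internal"})
    allow_pii: bool = False

    def permits(self, asset: DataAsset) -> bool:
        """Return True only if the asset's classification and PII status pass policy."""
        if asset.classification not in self.allowed_classifications:
            return False
        if asset.contains_pii and not self.allow_pii:
            return False
        return True


def retrieve_for_model(assets: list[DataAsset], policy: AccessPolicy) -> list[DataAsset]:
    """Filter internal data through the governance policy before the model sees it."""
    permitted = [a for a in assets if policy.permits(a)]
    blocked = [a.name for a in assets if not policy.permits(a)]
    if blocked:
        # In practice, blocked requests would be logged for monitoring and auditing.
        print(f"Blocked by policy: {blocked}")
    return permitted


if __name__ == "__main__":
    catalog = [
        DataAsset("product-faq", "public"),
        DataAsset("hr-salaries", "restricted", contains_pii=True),
        DataAsset("sales-playbook", "internal"),
    ]
    for asset in retrieve_for_model(catalog, AccessPolicy()):
        print("Allowed into model context:", asset.name)
```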
To guide companies in their AI transformation, PwC suggests several ways to build a comprehensive responsible AI strategy. One key suggestion is to establish ownership and accountability for responsible AI use and deployment. This involves designating a single executive, such as a chief AI officer or responsible AI leader, who works with stakeholders across the company to understand business processes and ensure that AI safety extends beyond the technology itself. PwC also emphasizes the importance of considering the entire lifecycle of AI systems and implementing safety and trust policies across the organization. This proactive approach prepares companies for future regulations and allows them to be transparent with stakeholders.
What surprised Kosar the most in the survey were the comments from respondents who saw responsible AI as a commercial value-add for their companies. This indicates that responsible AI is not just about risk mitigation but can also be a competitive advantage. Organizations recognize that grounding their services on trust and responsible AI practices can be a differentiating factor and create value.
In conclusion, the survey highlights the increasing adoption of generative AI and the need for organizations to prioritize responsible AI strategies. Building a comprehensive responsible AI strategy involves addressing risks, such as data governance and cybersecurity, and considering the entire lifecycle of AI systems. By establishing ownership and accountability, companies can ensure responsible AI use and deployment. Responsible AI should be seen as a commercial value-add, as organizations can leverage trust and responsible practices to gain a competitive advantage.