Auditing AI Models: Addressing Bias, Performance, and Ethical Standards
In today’s rapidly evolving technological landscape, organizations face a pressing question: How can they effectively audit their AI models to ensure they are free from bias, perform optimally, and adhere to ethical standards? To shed light on this issue, VentureBeat recently hosted the VB AI Impact Tour in New York City, featuring industry leaders such as Verizon Communications’ Michael Raj, Patronus AI’s Rebecca Qian, and FirstMark’s Matt Turck. The event concluded with a conversation between VB CEO Matt Marshall and Justin Greenberger, SVP of client success at UiPath, on strategies for achieving audit success and on where to begin.
The Changing Risk Landscape and the Need for Regular Evaluation
Greenberger emphasized that the risk landscape can no longer be evaluated on an annual basis. In today’s fast-paced world, organizations must reassess their risks almost monthly to keep up with rapidly evolving AI technology. He highlighted the importance of understanding the risks, the controls in place to mitigate them, and how to evaluate them effectively. Greenberger cited the Institute of Internal Auditors’ updated AI framework as a valuable resource, but stressed the need for organizations to go beyond the basics. He pointed to monitoring key performance indicators (KPIs), ensuring transparency in data sources, and establishing accountability as the parts of the evaluation cycle that most need to be tightened.
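To make the idea of a tightened evaluation cycle concrete, here is a minimal sketch of what a recurring KPI check might look like. The metric choice (demographic parity difference as a bias KPI), the threshold values, and the data are all illustrative assumptions, not part of any framework mentioned in the article.

```python
# Hypothetical monthly audit check: compare model KPIs against risk thresholds.
# Metric names, thresholds, and data below are illustrative assumptions only.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates across groups (a common bias KPI)."""
    rates = {}
    for g in set(groups):
        picks = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    values = sorted(rates.values())
    return values[-1] - values[0]

def evaluate_risk(predictions, groups, accuracy, thresholds):
    """Return a list of KPI violations to flag in the evaluation cycle."""
    findings = []
    dpd = demographic_parity_difference(predictions, groups)
    if dpd > thresholds["max_parity_gap"]:
        findings.append(f"bias: parity gap {dpd:.2f} exceeds {thresholds['max_parity_gap']}")
    if accuracy < thresholds["min_accuracy"]:
        findings.append(f"performance: accuracy {accuracy:.2f} below {thresholds['min_accuracy']}")
    return findings

# Example run: predictions (1 = approved) for applicants in groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(evaluate_risk(preds, groups, accuracy=0.91,
                    thresholds={"max_parity_gap": 0.2, "min_accuracy": 0.85}))
```

Running such a check monthly, and logging which data sources fed each evaluation, is one way to operationalize the transparency and accountability Greenberger describes.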
Generative AI: A Global Evolution
Greenberger drew attention to the impact of regulations on the development of generative AI. While some viewed regulations like GDPR as overbearing at first, they ultimately laid the foundation for data security in many companies today. Interestingly, generative AI adoption has not shown the usual lag in countries with stricter regulations: markets worldwide are evolving at a similar pace, which levels the competitive field and prompts organizations to weigh their risk tolerance across every aspect of the technology.
Challenges in Pilots and Proof of Concepts
Although true enterprise-wide transformation is still in its early stages, numerous companies have initiated AI projects to test the waters. However, several challenges persist. One common hurdle is finding subject matter experts with the contextual understanding and critical thinking skills needed to establish use case parameters. Enabling and engaging employees through education is also crucial, particularly as technologies like deepfakes gain traction. Organizations must likewise adapt to the componentized rollout of generative AI, integrating it into existing workflows rather than overhauling processes wholesale. Audits, in turn, will need to evolve to monitor the use of private data in sensitive applications, such as medical use cases.
The Evolving Role of Humans
Humans continue to play a significant role in AI processes as risks and controls evolve alongside the technology. Currently, users query the AI system, which provides the data employees need to do their jobs. For instance, a logistics provider may receive an AI-generated job quote that an employee then accepts and offers to the customer. However, as audit controls improve and spot checks become more robust, the decision-making responsibilities of humans may decrease. Greenberger predicts that humans will shift toward the more creative and emotional aspects of their roles while AI handles decision-making. In his view, this shift is only a matter of time, which makes developing creative and interpersonal skills a priority for managers and executives now.
In conclusion, auditing AI models for bias, performance, and ethical standards requires organizations to stay ahead of an evolving risk landscape. Regular evaluation, attention to regulatory frameworks like GDPR, realistic handling of pilot-project challenges, and an understanding of the changing role of humans are all crucial to successful AI audits. By embracing these strategies, organizations can better ensure the ethical use and optimal performance of their AI models.