The AI industry is at a critical juncture: concerns over data privacy, intellectual property rights, and the pace of AI advancement have reached a fever pitch. In response, Matt Calkins, CEO of Appian, has unveiled a new set of guidelines aimed at promoting responsible AI development and building trust between AI providers and their customers.
Calkins believes the current approach to AI regulation falls short on critical issues such as data provenance and fair use. He criticizes big tech for avoiding discussion of these topics, leaving the industry in a gray zone that holds back AI's potential. Appian’s proposed guidelines tackle these issues head-on with four key principles: disclosure of data sources; use of private data only with consent and compensation; anonymization of, and permission to use, personally identifiable data; and consent and compensation for copyrighted material.
According to Calkins, the next phase of AI is a race to trust. He argues that earning users' trust will let AI systems access more personal and pertinent data, unlocking greater value than the current model of indiscriminate data consumption. That trust, however, can only be earned if AI providers prioritize responsible development practices, user privacy, and consent.
Appian is well positioned to benefit from this shift toward trustworthy AI. As a leading provider of low-code automation solutions, it offers a platform that lets organizations build and deploy AI-powered applications while maintaining strict control over data privacy and security. By committing to responsible AI development, Appian could gain a significant competitive advantage as more enterprises seek out AI solutions that prioritize user trust.
Calkins’ proposed guidelines arrive as the AI industry faces increasing scrutiny from regulators, lawmakers, and the general public. By addressing concerns about job displacement, algorithmic bias, and misuse of AI by bad actors, Calkins aims to position Appian as a leader in the responsible AI movement.
While launch partners for the guidelines have not yet been secured, Calkins remains optimistic about their potential impact. He hopes to gather support from other industry players who share his vision for responsible AI development.
The stakes are high for the AI industry, and Calkins believes that trust will define the next phase of AI. Companies that can build trust with users and demonstrate a commitment to responsible AI development will thrive in this new era. Appian’s proposed guidelines offer a roadmap for navigating this transition by prioritizing transparency, user consent, and respect for intellectual property.
In conclusion, Calkins’ bold vision and commitment to responsible development have put Appian at the forefront of the responsible AI movement. The rest of the industry would be wise to take note and follow suit to succeed in the race for trustworthy AI.