Advertising

Enkrypt Secures Seed Funding to Develop a ‘Control Layer’ for Ensuring Generative AI Safety

blankEnkrypt AI, a Boston-based startup, has recently secured $2.35 million in seed funding to develop a ‘control layer’ for ensuring the safety of generative AI. While this amount may not be as significant as the funding raised by other AI companies, Enkrypt’s product is unique and addresses important challenges related to the secure deployment of generative AI models.

Generative AI is rapidly being adopted by companies to improve workflows and increase efficiency. However, implementing and fine-tuning these models can come with safety hurdles. It is crucial to ensure data privacy, security, reliability, and compliance throughout the deployment process. Currently, most companies handle these challenges manually, which can be time-consuming and result in significant delays for AI projects.

Enkrypt aims to bridge this gap with its comprehensive solution called Sentry. Sentry acts as a secure enterprise gateway between users and models, providing model access controls, data privacy, and model security. The solution routes all interactions through proprietary guardrails, ensuring data privacy, security, and compliance with regulations.

One of the key features of Sentry is its ability to prevent prompt injection attacks and jailbreaking, which enhances security. It can also sanitize model data, anonymize personal information, test generative AI APIs for corner cases, and run continuous moderation to filter out harmful content. This level of granularity and security allows Chief Information Security Officers (CISOs) and product leaders to have complete visibility and control over generative AI projects, reducing regulatory, financial, and reputation risks.

While Enkrypt is still in the pre-revenue stage, it has successfully tested Sentry with mid to large-sized enterprises in regulated industries such as finance and life sciences. For example, one Fortune 500 enterprise saw a significant reduction in jailbreak vulnerabilities from 6% to 0.6% when using Meta’s Llama2-7B model with Sentry. This enabled faster adoption of generative AI models and expanded its use across different departments.

Enkrypt’s next step is to further develop and demonstrate the capabilities of Sentry across various models and environments. The startup acknowledges its competition, such as Protect AI, which plans to build a comprehensive security and compliance product following its acquisition of Laiyer AI. However, Enkrypt believes its comprehensive solution sets it apart in the market.

The importance of AI safety has been recognized not only by Enkrypt but also by the U.S. government’s NIST standards body. They recently established an AI safety consortium with over 200 firms to focus on establishing the foundations for measuring AI safety.

Enkrypt’s seed funding will support the company’s efforts to bring its solution to more enterprises and solidify its position in the market. With the increasing necessity for safety in generative AI development and deployment, Enkrypt’s unique offering holds promise for the future of AI technology.