December 22, 2024

As intelligent automation providers integrate generative AI into their products and businesses increasingly turn to those products to optimize their operations, a new class of security threats is emerging. A Boston-based startup developing security technology for generative AI systems to address those threats announced it has secured $2.35 million in seed funding to advance its product.

Enkrypt AI's product, Sentry, is a security and compliance layer that sits between generative AI systems based on large language models and end users. The company says the technology will help businesses detect and mitigate LLM risks such as jailbreak attacks and hallucinations, and prevent sensitive data leaks.

“Businesses are really excited about using LLMs, but they’re also worried about how trustworthy they are and the uncertain regulatory landscape,” said Sahil Agarwal, co-founder and CEO of Enkrypt AI. “Based on our conversations with CIOs, CISOs, and CTOs, we are convinced that for LLMs to be widely adopted, they must be built on a foundation of security, privacy, and compliance. With Sentry, we are merging visibility and security to ultimately align with and support adherence to regulatory frameworks like the White House Executive Order on AI, the EU AI Act, and other AI-centric regulations, laying the groundwork for safe and compliant AI integration.”

The seed round was led by Boldcap, with additional participation from Arka VC, Berkeley, Builders Fund, Kubera VC, and Veredas Partners.