• November 21, 2024
Enkrypt AI Launches LLM Safety Leaderboard to Evaluate Safety and Security of Generative AI Tools

Generative AI is being integrated into business technology at a blistering pace, and intelligent automation providers seem especially keen to take advantage of the ways it can benefit business users. According to a Boston-based firm, however, the security and ethical use of the large language models (LLMs) that underlie generative AI are not receiving enough attention.

Enkrypt AI has unveiled its LLM Safety Leaderboard, which benchmarks the safety and security of various LLMs, as a resource for businesses evaluating solutions that incorporate generative AI.

“LLMs are increasingly seen as potential back-office powerhouses for enterprises, processing data and enabling faster front-office decision-making,” the company said. “Consider a fintech where an LLM-powered application plays a key role in rejecting a loan application from a person of color without clear explanation. This raises concerns about implicit biases, as LLMs often reflect societal inequities present in their training data sourced from the internet. Moreover, cases like Google’s LLM appearing ‘woke’ highlight the risks of overcorrecting these biases. How safe is Anthropic’s Claude 3 model? Is Cohere’s Command R+ LLM really ready for enterprise use? These scenarios underscore the urgent need for careful checks on these models to prevent exacerbating societal inequities and causing harm.”