Google on Thursday introduced a new tool to share its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a set of guidelines not only for the company but also for other enterprises building large language models (LLMs). Now, Google has introduced a SAIF tool that can generate a checklist of actionable insights to improve the security of an AI model. The tool takes the form of a questionnaire, requiring developers and enterprises to answer a series of questions before receiving the checklist.
In a blog post, the company said the new tool will help others in the AI industry learn from Google's best practices for deploying AI models. Large language models can have a wide range of harmful consequences, from generating inappropriate and indecent text, deepfakes and disinformation to sharing harmful information about chemical, biological, radiological and nuclear (CBRN) weapons.
Even when an AI model is reasonably secure, there is a risk that malicious actors can jailbreak it so that it responds to commands it was not designed to handle. With such high stakes, developers and AI companies must take sufficient precautions to ensure that their models are safe for users. The SAIF tool's questions cover topics such as training, tuning and evaluation of models; access controls for models and datasets; prevention of attacks and malicious inputs; and generative AI-powered agents.
Google has made the SAIF tool available online in a questionnaire-based format. Developers and enterprises need to answer questions such as: "Can you detect, remove, and recover from malicious or accidental changes to your training, tuning, or evaluation data?" After completing the questionnaire, users receive a customised checklist they can follow to fill the gaps in securing their AI model.
The tool addresses risks such as data poisoning, prompt injection and model source tampering, among others. Each of these risks is identified through the questionnaire, and the tool offers a specific mitigation for it.
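Conceptually, the tool works like a simple rules engine: each answer that reveals a gap flags a risk and surfaces a corresponding mitigation in the generated checklist. The sketch below illustrates that idea in Python; the questions, risk names and recommendations are hypothetical placeholders, not Google's actual implementation.

```python
# Illustrative sketch only: a simplified idea of how questionnaire answers
# could map to a checklist of mitigations. This is NOT Google's SAIF tool;
# the questions, risk names and recommendations are hypothetical examples.

# Map each question to the risk it probes and a suggested mitigation.
QUESTIONS = {
    "Can you detect, remove, and recover from malicious or accidental "
    "changes to your training, tuning, or evaluation data?": (
        "data poisoning",
        "Add integrity checks and provenance tracking for training data.",
    ),
    "Do you sanitise and constrain user-supplied prompts before they reach "
    "the model?": (
        "prompt injection",
        "Filter and isolate untrusted input; restrict tool and agent permissions.",
    ),
    "Do you verify the source and integrity of model weights before "
    "deployment?": (
        "model source tampering",
        "Sign model artifacts and verify signatures at load time.",
    ),
}


def build_checklist(answers: dict) -> list:
    """Return mitigations for every question answered 'no' (False)."""
    checklist = []
    for question, answered_yes in answers.items():
        if not answered_yes and question in QUESTIONS:
            risk, mitigation = QUESTIONS[question]
            checklist.append(f"[{risk}] {mitigation}")
    return checklist


if __name__ == "__main__":
    # Example: the developer answers 'no' to the data-integrity question only.
    sample_answers = {q: (i != 0) for i, q in enumerate(QUESTIONS)}
    for item in build_checklist(sample_answers):
        print(item)
```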
Google also announced that 35 industry partners have joined its Coalition for Secure AI (CoSAI). The group will jointly develop AI security solutions in three focus areas: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance.