Microsoft says that “responsible AI” is now the biggest priority – but what does this look like?
- Microsoft has released its 2025 Responsible AI Transparency Report
- It outlines its plans to build and maintain responsible AI models
- New regulations on the use of AI are coming into force, and Microsoft wants to be ready
With AI and large language models (LLMs) increasingly being used in many parts of modern life, the reliability and security of these models has become an important consideration for companies such as Microsoft.
The company has set out its approach in its 2025 Responsible AI Transparency Report, explaining how it sees the technology evolving in the coming years.
Just as AI adoption has widened across businesses, we have also seen a wave of regulations around the world aimed at ensuring the safe and responsible use of AI tools and at establishing AI governance policies that help companies manage the risks of AI usage.
A hands-on approach
In the report, its second following the first released in May 2024, Microsoft explains the considerable investments it has made in responsible AI tools, policies and practices.
These include extending risk management and mitigation to “modalities that go beyond text – such as images, audio and video – and additional support for agentic systems”, as well as a “proactive, layered approach” to new regulations such as the EU’s AI Act, providing customers with materials and resources to help them prepare for and meet incoming requirements.
Consistent risk management, oversight, review and red-teaming of AI and generative AI releases sit alongside ongoing research and development to “inform our understanding of sociotechnical issues related to the latest advancements in AI”, with the company’s AI lab helping Microsoft “push the frontier of what AI systems can do in terms of capability, efficiency and safety.”
As AI advances, Microsoft says it plans to build more adaptable tools and practices and to invest in risk management systems that offer “tools and practices for the most common risks across deployment scenarios”.
That is not all, however, as Microsoft also plans to deepen its work on incoming regulations by supporting effective governance across the AI supply chain.
It also says it is working internally and externally to “clarify roles and expectations”, and will continue to investigate “AI risk measurement and evaluation and the tooling to operationalize it at scale”, sharing its progress with the broader ecosystem to support safer norms and standards.
“Our report highlights new developments in how we build and deploy AI systems responsibly, how we support our customers and the wider ecosystem, and how we learn and evolve,” said Teresa Hutson, CVP, Trusted Technology Group, and Natasha Crampton, Chief Responsible AI Officer.
“We look forward to hearing your feedback on the progress we have made and the opportunities to collaborate on all that is still left to do. Together, we can advance AI governance efficiently and effectively, fostering confidence in AI systems at a pace that matches the opportunities ahead of us.”