Everyone uses AI at work, but only a few companies have rules to govern it
- AI use is exploding, but most European companies still operate without clear rules or policy
- Organizations celebrate the productivity gains while ignoring the rising security threats from deepfakes and AI abuse
- Employees use generative AI every day, but few know when, where, or how they should
As generative AI takes hold in workplaces across Europe, many organizations are embracing its capabilities without a formal policy to guide its use.
According to ISACA, 83% of IT and business professionals believe AI is already being used by staff within their organizations, but only 31% report having a comprehensive internal AI policy.
The use of AI in the workplace has a number of advantages. Fifty-six percent of respondents say AI has already improved productivity, 71% cite efficiency gains and time savings, and 62% are optimistic that AI will further improve their organizations in the coming year.
Productivity gains without structure are a ticking time bomb
However, AI applications are not universally positive, and the gains observed so far come with caveats.
“The British government has made it clear through its AI action plan that responsible AI adoption is a national priority,” says Chris Dimitriadis, chief global strategy officer at ISACA.
“AI threats are evolving rapidly, from deepfakes to phishing, and without sufficient training, investment, and internal policy, companies will struggle to keep up. Bridging this risk-preparedness gap is essential if the UK is to lead in innovation and digital trust.”
This dissonance between enthusiasm and regulation presents notable challenges.
Concern about AI abuse is high: 64% of respondents are extremely or very worried that generative AI will be turned against them.
Yet only 18% of organizations invest in tools to detect deepfakes, even though 71% anticipate their proliferation in the near future.
These figures reflect a clear readiness gap: awareness of threats does not translate into meaningful protective measures.
The situation is further complicated by a lack of role-specific guidelines. Without them, employees are left to determine for themselves when and how to use AI, which increases the risk of unsafe or inappropriate use.
“Without guidance, rules, or training on the extent to which AI can be used at work, employees may continue to use it in the wrong context or in an unsafe way. Likewise, they may fail to recognize misinformation or deepfakes unless they are equipped with the right knowledge and tools.”
This absence of structure is not only a security risk, but also a missed opportunity for professional development.
Some 42% of respondents believe they need to improve their AI knowledge within six months to remain competitive in their role.
This marks an increase of 8 percentage points compared to the previous year and reflects a growing awareness that skills development is crucial.
Within two years, 89% expect to need to upskill in AI, which underlines the urgency of formal training.
That said, companies that want to adopt the best AI tools, including LLMs for coding and AI writing assistants, must also take into account the responsibilities that come with them.