The Department of Homeland Security is embracing AI

The Department of Homeland Security has seen firsthand the opportunities and risks of artificial intelligence. Years later, a human trafficking victim was found with the help of an AI tool that conjured up an image of the child ten years older. But the department has also been tricked in studies by deepfake images created by AI.

Now the department is set to become the first federal agency to embrace the technology, with a plan to integrate generative AI models across a wide range of divisions. In partnership with OpenAI, Anthropic and Meta, it will launch pilot programs that use chatbots and other tools to help combat drug and human trafficking crimes, train immigration officials and prepare for emergency management across the country.

The rush to roll out the as-yet unproven technology is part of a larger struggle to keep up with the changes brought about by generative AI, which can create hyper-realistic images and videos and imitate human speech.

“You can’t ignore it,” Alejandro Mayorkas, secretary of the Department of Homeland Security, said in an interview. “And if you are not forward-leaning in recognizing and preparing for the possibilities for good and the possibilities for harm, it will be too late, and that is why we are moving quickly.”

The plan to integrate generative AI across the agency is the latest demonstration of how new technology like OpenAI’s ChatGPT is forcing even the most staid industries to reevaluate the way they do their work. Still, government agencies such as DHS are likely to face some of the harshest criticism over the way they use the technology, which has sparked rancorous debate as it has sometimes proven unreliable and discriminatory.

Agencies across the federal government have rushed to develop plans in response to President Biden’s executive order, issued late last year, which mandates the creation of AI safety standards and their adoption across the federal government.

The DHS, which employs 260,000 people, was created after the September 11 terrorist attacks and is charged with protecting Americans within the country’s borders, including monitoring human and drug trafficking, protecting critical infrastructure, providing disaster relief and patrolling the border.

As part of the plan, the agency will hire 50 AI experts to work on solutions to protect the country’s critical infrastructure from AI-generated attacks and to combat the use of the technology to generate child sexual abuse material and create biological weapons.

In the pilot programs, on which it will spend $5 million, the agency will use AI models such as ChatGPT to support investigations into child abuse material and human and drug trafficking. It will also work with the companies to sift through troves of text-based data and find patterns that can help investigators. For example, a detective looking for a suspect driving a blue pickup truck would, for the first time, be able to search across homeland security investigations for the same type of vehicle.

DHS will use chatbots to train immigration officials, who until now have practiced with other employees and contractors posing as refugees and asylum seekers. The AI tools will let officials get more training through mock interviews. The chatbots will also collect information about communities across the country to help them create disaster response plans.

The agency will report results from its pilot programs by the end of the year, said Eric Hysen, the department’s chief information officer and head of AI.

The agency has chosen OpenAI, Anthropic and Meta to experiment with a variety of tools and will use cloud providers Microsoft, Google and Amazon in its pilot programs. “We can’t do this alone,” he said. “We need to work with the private sector to help define what constitutes responsible use of generative AI.”
