UK warns of artificial intelligence: it may leak secret government documents or create biological weapons

Deadly biological weapons, automated cybersecurity attacks, and powerful AI models that break free from human control: these are just some of the potential threats posed by AI, according to a new UK Government report published to help set the agenda for the AI Safety Summit, an international summit on the safety of the technology that will be held in the country next week. The document was produced with input from leading AI companies, such as Google’s DeepMind unit, and several British government departments, including intelligence agencies.

The summit provides an opportunity to bring countries and leading AI companies together to better understand the risks the technology poses, says Joe White, the UK’s technology envoy to the US. Managing the potential risks of algorithms will require old-fashioned organic collaboration, says White, who helped plan the meeting. “These are not machine-to-human challenges. They are human-to-human problems,” White stresses.

Rishi Sunak, the UK Prime Minister, will deliver a speech tomorrow arguing that while artificial intelligence opens up opportunities for humanity’s progress, it is important to be honest about the new risks it poses for future generations.

The dangers of artificial intelligence to humanity

The UK AI Safety Summit will take place from 1-2 November. It will focus primarily on the ways in which people can misuse or lose control of advanced forms of artificial intelligence. Some British AI experts and officials have criticized the focus of the event, saying the government should prioritize short-term issues, such as helping the UK compete with global AI leaders like the United States and China.


Some AI experts have warned that the recent surge in debate over far-fetched AI scenarios, including the possibility of human extinction, may distract regulators and the public from more pressing problems, such as discriminatory algorithms or technology that empowers already dominant companies.

The report released today examines the national security implications of large language models (LLMs), the AI technology behind ChatGPT. White explains that British intelligence agencies are working with the Frontier AI Task Force, a British government think tank, to explore scenarios such as what could happen if bad actors combined an LLM with secret government documents. A pessimistic possibility analyzed in the document is that an LLM capable of accelerating scientific discovery could also boost research projects aimed at creating biological weapons.

Last July, Dario Amodei, CEO of the startup Anthropic, told a US Senate hearing that within the next two or three years it may be possible for a language model to suggest how to carry out large-scale attacks with biological weapons. But White stresses that the report is a high-level document and is not intended to be “a shopping list of all the negative actions that could be taken.”
