Big Tech Wigs: Curbing AI Implementations


More than 1100 tech luminaries, executives, and scientists have issued a warning against labs conducting large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a serious threat to humanity.

In an open letter published by The Future of Life Institute, a nonprofit organization that aims to reduce global existential and catastrophic risks to humanity, Apple co-founder Steve Wozniak, SpaceX and Tesla CEO Elon Musk , and the president of the MIT Future of Life Institute, Max Tegmark. joined other signatories in saying that AI poses "profound risks to society and humanity, as demonstrated by extensive and renowned research by top AI laboratories."

The signatories called for a six-month pause on the implementation of AI training systems more powerful than GPT-4, which is the large language model (LLM) that powers the popular natural language processing chatbot ChatGPT. The letter, in part, described a dystopian future reminiscent of those created by artificial neural networks in science fiction films, such as The Terminator and The Matrix. The letter pointedly wonders whether advanced AI could lead to a "loss of control of our civilization."

The letter also warns of political disruption "especially to democracy" from AI: chatbots acting like humans could flood social media and other networks with propaganda and falsehoods. And he warned that AI could "automate all work, including compliance."

The group called on civic leaders, not the tech community, to take ownership of decisions about the scope of AI deployments.

Policymakers should work with the AI ​​community to dramatically accelerate the development of robust AI governance systems that, at a minimum, include new AI regulators, oversight and monitoring systems, high-performance AI, and large computing power pools. . The letter also suggested using provenance and watermarking systems to help distinguish real content from synthetic content and to track model leaks, as well as a robust auditing and certification ecosystem.

“Contemporary AI systems are now becoming competitive with humans in general tasks,” the letter says. “Should we develop non-human minds that could eventually outnumber, outsmart, outdate, and replace us? Should we risk losing control of our civilization? Such decisions should not be delegated to unelected technology leaders.

(The UK government today published a white paper outlining general-purpose AI regulatory plans, saying it would "avoid strict legislation that could stifle innovation" and instead build on existing laws.)

The Future of Life Institute argued that AI labs are locked in an uncontrollable race to develop and deploy "increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict or control."

The signatories included scientists from DeepMind Technologies, a British AI research lab and a subsidiary of Google's parent company Alphabet. Google recently announced Bard, an AI-based conversational chatbot that it developed using the LaMDA family of LLMs.

LLMs are deep learning algorithms (natural language processing computer programs) that can produce human-like responses to queries. Generative AI technology can also produce computer code, images, video, and sound.

Microsoft, which has invested more than €10 billion in ChatGPT and the creator of GPT-4 OpenAI, did not respond to a request for comment on today's letter. OpenAI and Google also did not immediately respond to a request for comment.

Andrzej Arendt, CEO of IT consultancy Cyber ​​Geeks, said that while generative AI tools alone cannot yet deliver the highest quality software as a final product, "their assistance in generating code snippets, system configurations or unit tests can help significantly." speed up the programmer's work.

“Will this make developers redundant? Not necessarily, partly because the results such tools provide cannot be used reliably; verification of the programmer is necessary,” Arendt continued. “In fact, changes in work methods have accompanied programmers since the beginning of the profession. The job of developers will shift to interacting with AI systems to some extent. »

The biggest changes will come with the introduction of large-scale artificial intelligence systems, Arendt said, which can be compared to the XNUMXth century industrial revolution that replaced an economy based on crafts, agriculture and manufacturing.

“With AI, the technological leap could be just as big, if not bigger. At this time, we cannot foresee all the consequences,” he said.

Vlad Tushkanov, a senior data scientist at Moscow-based cybersecurity firm Kaspersky, said integrating LLM algorithms into more services may lead to new threats. In fact, LLM technologists are already investigating attacks, such as rapid injection, that can be used against LLMs and the services they power.

“Since the situation is changing rapidly, it is difficult to estimate what will happen next and whether these peculiarities of the LLM turn out to be the side effect of its immaturity or whether they are its inherent vulnerability,” Tushkanov said. "However, enterprises may want to include them in their threat models when considering integrating LLM into consumer applications."

That said, LLMs and AI technologies are useful and already automate a lot of “painful work” that is necessary but not enjoyable or interesting for people. Chatbots, for example, can scan millions of alerts, emails, likely phishing web pages, and potentially malicious executables daily.

"This volume of work would be impossible without automation," Tushkanov said. "...Despite all the advances and cutting-edge technologies, there is still a huge shortage of cybersecurity talent. It is estimated that the industry needs millions more professionals, and in this highly creative field, we cannot stop wasting the people we have." in monotonous and repetitive tasks.

Generative artificial intelligence and machine learning won't replace all IT jobs, including people fighting cybersecurity threats, Tushkanov said. Solutions to these threats are developed in a harsh environment, where cybercriminals work against organizations to evade detection.

“Therefore, it is very difficult to automate them, because cybercriminals adapt to each new tool and approach,” Tushkanov said. "In addition, with cybersecurity, accuracy and quality are very important, and at the moment large language models are, for example, prone to hallucinations (as our tests show, cybersecurity tasks are no exception)" .

The Future of Life Institute said in its letter that with safeguards, humanity can enjoy a prosperous future with AI.

“Design these systems for the benefit of all and give society the opportunity to adapt,” the letter says. "Society has stopped at other technologies with potentially catastrophic effects on society. We can do that here. Let's enjoy a long summer of AI, don't rush to fall unprepared."

Copyright © 2023 IDG Communications, Inc.