Budapest Post

Cum Deo pro Patria et Libertate
Budapest, Europe and world news

Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’

More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems.
More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present “profound risks to society and humanity.”

A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which the nonprofit Future of Life Institute released on Wednesday.

Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.

“These things are shaping our world,” said Gary Marcus, an entrepreneur and academic who has long complained of flaws in A.I. systems, in an interview. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”

A.I. powers chatbots like ChatGPT, Microsoft’s Bing and Google’s Bard, which can carry on humanlike conversations, write essays on an endless variety of topics and perform more complex tasks, like writing computer code.

The push to develop more powerful chatbots has led to a race that could determine the next leaders of the tech industry. But these tools have been criticized for getting details wrong and for spreading misinformation.

The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.

Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

“Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.”

Sam Altman, the chief executive of OpenAI, did not sign the letter.

Mr. Marcus and others believe that persuading the wider tech community to agree to a moratorium would be difficult. Swift government action is also unlikely, because lawmakers have done little to regulate artificial intelligence.

Politicians in the United States don’t have much of an understanding of the technology, Representative Jay Obernolte, a California Republican, recently told The New York Times. In 2021, European Union policymakers proposed a law designed to regulate A.I. technologies that might create harm, including facial recognition systems.

Expected to be passed as soon as this year, the measure would require companies to conduct risk assessments of A.I. technologies to determine how their applications could affect health, safety and individual rights.

GPT-4 is what A.I. researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians.
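In outline, the idea can be illustrated with a single artificial neuron trained in plain Python. This is only a conceptual sketch of "learning skills by analyzing data," not how GPT-4, Siri or any production system is actually built; the example task and every name in it are invented for illustration.

```python
import math

# A single artificial neuron, the building block of a neural network:
# it maps inputs to an output through weights, and improves those
# weights by analyzing examples. Real systems stack billions of such
# units; this toy merely learns the logical OR function.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias, initially untrained
lr = 0.5                    # learning rate: size of each adjustment

def predict(x1, x2):
    """Weighted sum of the inputs, squashed to a value between 0 and 1."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(1000):               # repeatedly analyze the data
    for (x1, x2), target in examples:
        err = predict(x1, x2) - target   # how wrong the current guess is
        w1 -= lr * err * x1              # nudge each weight to shrink the error
        w2 -= lr * err * x2
        b -= lr * err

for (x1, x2), target in examples:
    print(x1, x2, "->", round(predict(x1, x2)), "(expected", target, ")")
```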

Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or L.L.M.s.

By pinpointing billions of patterns in all that text, the L.L.M.s learn to generate text on their own, including tweets, term papers and computer programs. They can even carry on a conversation. Over the years, OpenAI and other companies have built L.L.M.s that learn from more and more data.
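The pattern-finding at the heart of this can be sketched with a deliberately tiny stand-in: a bigram model that counts which word follows which in a scrap of text, then generates new text one token at a time. Real L.L.M.s replace the counting with a neural network trained on billions of patterns; the corpus and function names below are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# "Training": record how often each word follows each other word in a
# tiny corpus. Generation then runs the same predict-append-repeat loop
# that real language models use, minus the neural network.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly predicting a likely next token."""
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it was observed.
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```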

This has improved their abilities, but the systems still make mistakes. They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong.

Experts are worried that these systems could be misused to spread disinformation with more speed and efficiency than was possible in the past. They believe that the systems could even be used to manipulate people’s behavior across the internet.

Before GPT-4 was released, OpenAI asked outside researchers to test the system for dangerous uses. The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describing ways to make dangerous substances from household items and writing Facebook posts designed to convince women that abortion is unsafe.

They also found that the system was able to use TaskRabbit to hire a human over the internet to defeat a Captcha, a test widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person.

After changes by OpenAI, GPT-4 no longer does these things.

For years, many A.I. researchers, academics and tech executives, including Mr. Musk, have worried that A.I. systems could cause even greater harm. Some are part of a vast online community called rationalists or effective altruists who believe that A.I. could eventually destroy humanity.

The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia.

Though some who signed the letter are known for repeatedly expressing concerns that A.I. could destroy humanity, others, including Mr. Marcus, are more concerned about its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.

The letter “shows how many people are deeply worried about what is going on,” said Mr. Marcus, who signed it. He believes the letter will be a turning point. “I think it is a really important moment in the history of A.I. — and maybe humanity,” he said.

He acknowledged, however, that those who had signed the letter might find it difficult to persuade the wider community of companies and researchers to put a moratorium in place. “The letter is not perfect,” he said. “But the spirit is exactly right.”
