Will AI turn humans into 'waste product'?

A tech guru warns that robots that think for themselves may take over the world and use weapons of mass destruction to wipe out mankind. Is it time to worry?
This year, the BBC’s prestigious Reith Lectures will be delivered for the first time by a computer scientist. UK-born Stuart Russell, professor of computer science at the University of California, Berkeley, will look at ‘Living with Artificial Intelligence’ in a series of weekly broadcasts during December.

In a trailer for the lecture series, Russell was interviewed on BBC Radio 4’s Today on Monday. As confirmation of the old journalistic adage that “if it bleeds, it leads,” the conversation was dominated by gloomy prognostications about what AI might be doing to our society and even more nightmarish possibilities for the future. Never mind developing machines that can learn – we need to learn from the history of new technologies to treat both hype and horror with equal scepticism.

Artificial intelligence is already in everyday use. Computers can guess what we would like to watch next on YouTube and what products we might want to buy on Amazon, and Google shows us adverts based on our previous searches. More usefully, perhaps, machines can learn to identify cancerous growths on medical scans with great speed and accuracy, and to flag up potentially fraudulent transactions – no small thing when banks and other institutions process astonishing volumes of trades around the clock.

Russell believes that AI is “not working necessarily to our benefit and the revelations we’ve seen recently from Facebook suggest media companies know it is ripping societies apart. These are very simple algorithms, so the question I’ll be asking in the lectures is what happens when the algorithms become much more intelligent than they are right now.”

This is an odd way of looking at things – that algorithms rather than human politics are the problem in society right now. Of course, dumb algorithms that push social-media posts to you on the basis that “if you liked that one, you might like this one,” probably don't help in getting people out of their “echo chambers.” But people sticking with their own ‘tribe’ when it comes to politics is mostly about personal choice and unwillingness to accept that people with a different view might have a point, not the work of evil computer algorithms.

Russell’s real concern is what happens when AI moves beyond task-specific applications towards general-purpose AI. Instead of setting computers up to do particular things – like churning through vast amounts of data with a particular goal and learning how to do it better and faster than humans – general-purpose AI systems would be able to take on a wide variety of tasks and make decisions for themselves.

In particular, Russell worries about autonomous weapons that “can find targets, decide which targets to attack and then go ahead and attack them, all without any human being in the loop.” He fears that such AI-driven weapons of mass destruction could destroy whole cities or regions, or wipe out an entire ethnic group.

Russell collaborated on Slaughterbots, a startling Black Mirror-style film released in 2017, which presents a particularly gloomy vision of tiny, bee-like drones selecting and assassinating anyone who dares to disagree with the authorities.

But while some degree of learning and autonomy is already in use – for example, to take humans out of the dangerous business of clearing minefields – the combination of recognising individuals or groups accurately and deciding whom to attack, and how, is way beyond current capabilities. As a US drone strike in Afghanistan in August – which killed 10 people, including seven children – showed, hi-tech, intelligence-led attacks can still go horribly wrong. Moreover, if political and military leaders have few qualms about killing the innocent, why wait for fantasy AI-powered autonomous weapons when you can simply carpet-bomb whole areas, as happened to Dresden in the Second World War and Cambodia in the Seventies?

The all-conquering power of AI is, as things stand, just hype. Take driverless cars. Just a few years ago, they were the Next Big Thing, and Google, Apple, Tesla and others poured billions into trying to develop them. Now they are on the back burner because the difficulties have proved too great: a year ago, Uber – which once dreamed of fleets of robotaxis – sold off its autonomous-vehicles division. As for robots and AI taking over our jobs, at best they will be tools that improve the productivity of humans. Using computers to do bits of our jobs could be useful, but actually replacing teachers, lawyers or drivers is a whole different ball game.

Silicon Valley seems to have a schizophrenic attitude to its own technology. On the one hand, the importance of artificial intelligence is exaggerated. On the other hand, we have doom-mongering speculations about AI systems gradually taking control of society, leaving human beings, in Russell’s words, as so much “waste product.” In truth, AI keeps confirming that it is both extremely useful for doing specific tasks and also pretty dumb at anything beyond that.

According to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, we suffer from multiple misunderstandings about AI. First, task-specific AI and general-purpose AI pose completely different levels of difficulty. For example, getting computers to translate between languages has involved an enormous amount of work, but text- and voice-based systems are now getting pretty good. Getting two AI machines to hold a conversation, on the other hand, is much harder.

Second, many things that humans find easy are really difficult to automate. For example, we have evolved to scan the world quickly, pick out distinct things and figure out what is important right now. Computers find this extremely hard. Third, humans have a rich experience of the physical world through our senses that, researchers are finding, has a significant impact on how we think. Fourth, Mitchell argues, human beings develop common sense, built on experience and practice. AI systems can chuck ever-greater amounts of processing power at problems, but they struggle to replicate that. Elon Musk failed in his attempt to fully automate his Tesla factories – humans were simply irreplaceable for some tasks.

If we could cut out the boosterism about AI, we could see a useful group of technologies that can help us out in specific ways to make our lives easier. Equally, it would burst the bubble of all those catastrophists who think AI systems will take over the world. Ultimately, we’re still in control of the machines and they’re not about to replace us any time soon. With a bit of historical perspective, we can see that the fretting about AI is just the latest in a seemingly endless series of fearful spasms about new technology.