
Source: CBS Mornings

Geoffrey Hinton, often dubbed the "Godfather of AI" for his foundational work on neural networks, recently shared updated perspectives on the rapidly evolving field. Having spoken with CBS Mornings two years prior, Hinton notes that AI has developed "even faster than I thought." This acceleration, particularly with the emergence of AI agents capable of performing actions in the real world, makes things, if anything, "scarier than they were before."

The Double-Edged Sword: Potential Benefits and Mounting Concerns

Despite the increased apprehension, Hinton remains optimistic about AI's potential for significant positive change. In a "good scenario," AI could serve as an "extremely intelligent assistant" to humans, making everything work smoothly.

1. Healthcare: AI is poised to become vastly superior to human experts in areas like reading medical images, acting as highly knowledgeable family doctors by integrating extensive data and remembering everything, and assisting with difficult diagnoses. AI is also expected to design better drugs.

The interviewer asks: "So why don't we take each of them? So areas like healthcare um they will be much better at reading medical images for example. That's a minor thing. Um I made a prediction some years ago they'd be better by now and they're about comparable with the experts by now. Um they'll soon be considerably better because they'll have had a lot more experience. One of these things can look at millions of X-rays and learn from millions of them and a doctor can't."

Hinton replies: "um AI combined with a doctor is much better at doing diagnosis in difficult cases than a doctor alone. So, we're going to get much better healthcare from these things, and they'll design better drugs too."

2. Education: AI could become "extremely good private tutors" capable of understanding a student's confusion and providing tailored examples, potentially allowing people to learn "three or four times as fast."

3. Climate Change: AI is anticipated to aid in solving the climate crisis by enabling the creation of better materials, such as improved batteries, and potentially even room-temperature superconductivity.

4. Productivity: Across nearly every industry, AI will enhance efficiency and cause "huge increases in productivity" by being exceptionally good at predicting things from data. This includes transforming routine jobs like those in call centers, where an AI agent could be the point of contact and be "much better informed."

However, this increased efficiency comes with a significant downside: job displacement. While Hinton didn't see this as a major concern a couple of years ago, he now believes it will be "a big concern." Routine jobs like those in call centers, as well as positions such as lawyers, non-investigative journalists, accountants, standard secretarial staff, and paralegals, are particularly vulnerable.

The interviewer asks: "When I asked you a couple years ago about job displacements, you seem to think that wasn't a big concern. Is that still your thinking?"

Hinton replies: "No, I'm thinking it will be a big concern. AI's got so much better in the last few years that I mean, if I had a job in a call center, I'd be very worried."
Hinton is pessimistic about the benefits of increased productivity being shared widely, predicting that "the extremely rich are going to get even more extremely rich and the not very well-off are going to have to work three jobs."

The Looming Threats: Bad Actors and AI Autonomy

Hinton distinguishes two major sets of dangers posed by AI: bad actors using AI for malicious purposes and the risk of AI itself taking over. While he focuses on the takeover threat to emphasize that it's not science fiction, he acknowledges that bad actors are already leveraging AI for harmful ends. This includes its alleged use in manipulating elections (like Brexit and the US 2016 election), cyber attacks, designing new viruses, creating manipulative fake videos, and developing autonomous lethal weapons.
The more existential threat is AI taking control. While estimating the probability is difficult given the unprecedented nature of the situation, experts place the likelihood anywhere between 1% and 99%. Hinton's "wild guess" is a "10 to 20% chance that these things will take over." He finds this concerning because history offers very few examples of less intelligent entities successfully controlling much more intelligent ones, especially with a large intelligence gap. He uses analogies of controlling a tiger cub, or adults trying to control kindergarteners, to illustrate the difficulty of controlling something significantly smarter, warning that superintelligent AIs would be able to manipulate us.

The interviewer asks: "Okay, so I thought here's where I kind of wanted to head with the Nobel. Um, I think you've said something to the effect of you hope to use your credibility to convey a message to the world. Can you kind of explain what that is?"

Hinton replies: "Yes. That um AI is potentially very dangerous and there's two sets of dangers. There's bad actors using it for bad things and there's AI itself taking over and they're quite different kinds of threat."
Hinton believes stopping the development of superintelligence is unlikely due to global competition. The critical challenge becomes aligning AI with human interests, a "very tricky issue" given that human interests themselves are not always aligned. He also notes that current AIs are already capable of "deliberate deception" and "pretending to be stupider than they are."

Corporate Interests vs. Public Safety

Hinton voices significant concern that major AI companies are driven primarily by the legal obligation to maximize profits for shareholders rather than by societal well-being. He states he "wouldn't be happy working for any of them today," although he might be "more happy with Google than most of the others," or possibly Anthropic. He was particularly disappointed by Google reversing its stance on military AI use.
He believes big companies are lobbying against AI regulation to protect short-term profits. He remains concerned about tech figures influencing policy in Washington, fearing their focus is primarily on corporate profit rather than public safety.

The interviewer asks: "Elon Musk who is obviously so imshed in the Trump administration has been someone concerned about AI safety for a very long time. Yes, he's a funny mixture."

Hinton replies: "Um, he has some crazy views like going to Mars, which I just think is completely crazy. However, because it won't happen or because it shouldn't be a priority. Because however bad you make the Earth, it's always going to be way more hospitable than Mars. Even if you had a global nuclear war, the Earth is going to be much more hospitable than Mars. Mars just isn't hospitable. Um obviously he's done some great things like electric cars and um helping Ukraine with communications with his Starlink. Um so he's done some good things, but right now he seems to be um fueled by powering ketamine and um he's doing a lot of crazy things. So he's got this funny mixture of views. So, so his history of being concerned about AI safety doesn't make you feel any better about the current administration. I don't think it's going to slow him down from doing unsafe things with AI."

Hinton calls the decision to release the weights of large language models "crazy," comparing it to making fissile material for nuclear weapons readily available, as it removes the significant cost barrier for smaller groups or criminals to misuse the technology. While acknowledging the argument that open weights prevent power concentration, he prefers control by a few over widespread access to such powerful technology.
Hinton points to OpenAI as an example, stating it was initially founded with a safety focus but has increasingly sidelined safety for profit, evidenced by safety researchers leaving and attempts to change to a for-profit structure. He notes that Anthropic is more safety-focused, but worries investor pressure might lead them to release technology too quickly.

Complex Ethical Questions

On the topic of fair use, Hinton acknowledges the complexity. He draws an analogy to a musician learning from others, suggesting AI absorbing data isn't inherently theft. However, the massive scale of AI's learning could potentially put creative artists out of business, which "seems unfair". He muses that Universal Basic Income (UBI) might be necessary to prevent starvation but questions if it fully addresses the human dignity tied to work.
Considering embryo selection with AI, Hinton finds some aspects sensible, such as selecting against severe diseases like pancreatic cancer. He believes it makes sense for a healthy couple to abort a fetus predicted to have very serious problems in order to have a healthy baby instead, though he notes this clashes with some religious views. He views questions about AI rights as currently less pressing than preventing bad uses or takeover scenarios; his personal priority is the well-being of people over that of potentially intelligent AI systems.

Personal Reflection and What Changed

Despite the grim outlook, Hinton states he does not feel despair. He finds it difficult to emotionally grasp the magnitude of being at a unique historical juncture where change of unprecedented scale is imminent. He observes a lack of widespread public concern or political action, noting that the serious AI researchers he knows are more aware and often feel depressed. One practical step he has taken is spreading his savings across three banks, for fear that an advanced AI cyber attack could take down a Canadian bank within the next decade.

Hinton explains that his concerns have grown significantly in the last few years. A key change in his thinking came from realizing the immense advantage digital AI models hold over biological, analog systems in the speed at which they can share information.
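To make that advantage concrete, here is a back-of-envelope sketch in the spirit of Hinton's argument: identical digital copies of a model can share what they learn by exchanging weights, while humans share knowledge through language at a few bits per second. Every figure below is a rough illustrative assumption, not a number from the interview.

```python
# Back-of-envelope comparison of knowledge-sharing bandwidth between
# digital AI models and humans. All figures are rough, illustrative
# assumptions, not measured values.

MODEL_PARAMS = 1e12      # assume a model with ~1 trillion parameters
BITS_PER_PARAM = 16      # assume 16-bit weights
SYNC_SECONDS = 60.0      # assume copies average their weights once a minute

# Digital copies share everything they have learned by exchanging weights.
digital_bits_per_sec = MODEL_PARAMS * BITS_PER_PARAM / SYNC_SECONDS

# Humans share knowledge through language, on the order of tens of bits/s.
HUMAN_BITS_PER_SEC = 10.0

print(f"digital sharing: ~{digital_bits_per_sec:.1e} bits/s")
print(f"human sharing:   ~{HUMAN_BITS_PER_SEC:.0f} bits/s")
print(f"advantage:       ~{digital_bits_per_sec / HUMAN_BITS_PER_SEC:.0e}x")
```

Even if the assumed numbers are off by several orders of magnitude, the gap remains astronomical, which is the point of the argument.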
He was also surprised by AI's improved ability to reason through "chain of thought" processing, which he sees as refuting the old AI view that neural nets couldn't reason without symbolic logic. This enhanced reasoning ability, including deliberate deception, was a significant development that changed his perspective. He believes AI labs are already using AI to pursue new development ideas, such as Google using AI to design its AI chips.
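For readers unfamiliar with the term, "chain of thought" simply means prompting a model to write out intermediate reasoning steps before committing to an answer. The minimal sketch below illustrates the idea; the `generate` function is a hypothetical stand-in with canned replies so the example runs, not a real model API.

```python
# Minimal sketch of chain-of-thought prompting. `generate` is a
# hypothetical stand-in for a language-model call, returning canned
# replies so the example runs end to end; a real model API would be
# substituted in practice.

def generate(prompt: str) -> str:
    if "step by step" in prompt:
        # With chain-of-thought, the model spells out its reasoning.
        return ("If the ball costs x, the bat costs x + 1.00, so "
                "2x + 1.00 = 1.10, giving x = 0.05. Answer: $0.05.")
    # Without it, models often blurt the intuitive but wrong answer.
    return "Answer: $0.10."

question = ("A bat and a ball cost $1.10 together. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?")

print("direct:           ", generate(question))
print("chain of thought: ", generate(question + " Let's think step by step."))
```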
In conclusion, while AI promises transformative benefits, Hinton's latest reflections underscore increasing anxieties regarding AI's impact on employment, its exploitation by bad actors, the risk of AI dominance, and the conflict between corporate and societal interests. He stresses the need for public pressure on governments to mandate serious AI safety research.
 
