Andrew Ng, a prominent figure in artificial intelligence, offers a grounded perspective on the technology's rapid evolution. While acknowledging the transformative power of AI, he cautions against both unchecked hype and stifling fear, emphasizing that the technology is simultaneously amazing and highly limited.
With a background that includes co-founding Google Brain and leading AI initiatives at Baidu, Ng provides insights into the current investment climate, the future of work, and the realistic path toward more advanced systems. His views suggest a future shaped not by human replacement, but by human augmentation through more accessible and powerful tools.
Key Takeaways
- Andrew Ng believes parts of the AI industry, particularly the initial training of models, show signs of an investment bubble.
- He argues that AI will make coding more accessible, and advises that more people should learn to code, not fewer.
- Ng is skeptical that Artificial General Intelligence (AGI) is imminent, citing the complex and manual nature of current AI development.
- He advocates for transparency-focused regulations rather than broad restrictions that could hinder innovation and its benefits.
- Future growth areas in AI include voice-related technologies and autonomous systems known as "agentic AI."
The State of AI Investment
The artificial intelligence sector has seen an unprecedented influx of capital, with billions of dollars pouring into companies developing the next generation of generative AI. This has led to widespread discussion about a potential investment bubble, a concern Ng shares, but with a specific distinction.
He separates the AI development process into two main stages: training and inference. The training phase, where foundational models are built using massive datasets and computational power, is where he sees potential for overvaluation.
"When will the payoff for all of the capital expenses going into this training, when will they pay off? Whatever happens, it will be good for the industry, but certain businesses might do poorly."
Ng suggests that the enormous costs associated with pre-training large models may not yield returns for every investor or company involved. He implies that a market correction could impact businesses focused solely on this capital-intensive stage.
The Strength of Application
In contrast, Ng is highly confident about the demand for AI's second stage, known as inference. This is the phase where trained models are actively used to answer queries, generate content, or perform tasks for end-users.
"Inference demand is massive, and I’m very confident inference demand will continue to grow," Ng stated. This growing demand for practical AI applications necessitates a significant expansion of infrastructure.
Data Center Expansion
According to Ng, the sustained growth in AI applications means a significant build-out of data centers is required. This expansion is necessary to support the computational power needed for billions of daily user interactions with AI systems.
This outlook highlights a shift from speculative investment in model creation to tangible value derived from real-world AI use. The long-term health of the industry, in his view, rests on the utility and adoption of these systems by the public and businesses alike.
Coding in the Age of AI
A common narrative suggests that as AI becomes capable of writing code, the need for human coders will diminish. Ng strongly refutes this idea, arguing that advice to stop learning to code is fundamentally misguided.
"We’ll look back on that as some of the worst career advice ever given," he said. "Because as coding becomes easier, as it has for decades, as technology has improved, more people should code, not fewer."
A Historical Parallel
The evolution of coding tools has consistently lowered the barrier to entry. From machine language to high-level programming languages and now AI assistants, each step has empowered more people to build software. Ng sees AI as the next logical step in this democratization process.
He believes that AI tools will not replace developers but will instead make them more productive. This increased efficiency will enable individuals in various roles, from recruiters to marketers, to leverage coding for their specific needs without requiring a deep, traditional computer science background.
"People that use AI to write code will just be more productive, and I think have more fun than people that don’t. There will be a big societal shift towards people who code."
His perspective reframes coding as a skill not reserved for specialized technicians but available to anyone who can use logic and prompts to solve problems with technology. For example, he notes that his best recruiters now use prompts or write simple code to screen résumés, a task previously done manually.
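As an illustration of the kind of "simple code" Ng describes, here is a minimal, hypothetical sketch of what a recruiter's screening script might look like. The folder name, skill list, and scoring rule are invented for the example; in practice the résumé text might instead be handed to an AI assistant through a prompt.

```python
from pathlib import Path

# Hypothetical screening criteria a recruiter might care about.
REQUIRED_SKILLS = ["python", "sql", "machine learning"]

def screen_resume(text: str) -> tuple[int, list[str]]:
    """Return a simple score: how many required skills the résumé mentions."""
    text_lower = text.lower()
    matched = [skill for skill in REQUIRED_SKILLS if skill in text_lower]
    return len(matched), matched

# Scan a folder of plain-text résumés (the path is illustrative).
for resume_path in sorted(Path("resumes").glob("*.txt")):
    score, matched = screen_resume(resume_path.read_text(errors="ignore"))
    print(f"{resume_path.name}: {score}/{len(REQUIRED_SKILLS)} skills -> {matched}")
```

The point is less the specific script than the shift it represents: a few lines of logic, or an equivalent prompt, now automate work that once required either manual effort or a dedicated engineer.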
A Pragmatic View on AGI and AI Risks
While some industry leaders predict the arrival of Artificial General Intelligence (AGI)—AI that matches or exceeds human performance on all tasks—within the next few years, Ng remains skeptical about its near-term feasibility.
"I look at how complex the training recipes are and how manual AI training and development is today, and there’s no way this is going to take us all the way to AGI just by itself," Ng explained. He emphasizes that the process of preparing data and training models is far more labor-intensive than is widely appreciated.
This view provides a crucial reality check, suggesting that the path to human-level intelligence in machines is not a simple matter of scaling up current methods. Significant conceptual breakthroughs are still needed.
Balancing Benefits and Harms
As AI systems become more integrated into daily life, concerns about their potential for harm have grown. Ng acknowledges these risks but urges a balanced approach to regulation.
He argues that the benefits of AI, such as providing mental health support to those who might otherwise have none, often outweigh the potential harms. He cautions against creating restrictive laws based on isolated negative incidents.
"I am nervous about one or two anecdotes leading to stifling regulations. That means it doesn’t help save 10 lives, right?" he remarked, highlighting the complex calculus regulators face.
Instead of broad prohibitions, Ng advocates for laws that mandate transparency. He believes that requiring large AI platforms to be open about their operations would allow regulators and the public to identify and address problems more effectively.
The Next Frontiers for AI
Looking ahead, Ng identified several areas where he anticipates significant progress and commercial value. One of the most underestimated fields, in his opinion, is voice-related AI.
"I think people underestimate how big voice AI will get," he said. "If you look at “Star Trek” movies, no one envisioned everyone typing on the keyboard, right?" This suggests a future where interaction with technology becomes more natural and conversational, moving beyond screens and keyboards.
Another critical area is what he terms "agentic AI." These are AI systems designed to perform sequences of actions autonomously to achieve a goal. While the term experienced a surge of hype, Ng is confident in its underlying value.
"I’m very confident that the field of agentic AI will keep on growing and rising in value," he stated. He predicts that the actual commercial applications of these autonomous systems will continue to rise rapidly, regardless of market hype cycles. These systems could one day handle complex tasks like planning travel, conducting research, or managing logistics with minimal human input.