
AI Expert Explains How to Start Using ChatGPT Safely

A Carnegie Mellon AI expert and OpenAI board member explains how anyone can start using ChatGPT for simple tasks and discusses the technology's future risks.

By Clara Holloway

Clara Holloway is a technology and society correspondent for Neurozzio, focusing on the psychological and societal impacts of artificial intelligence. She reports on AI ethics, human-computer interaction, and the real-world consequences of emerging technologies.


Zico Kolter, an artificial intelligence expert at Carnegie Mellon University who chairs OpenAI's safety and security committee, is encouraging the public to become familiar with artificial intelligence tools like ChatGPT. He advises starting with simple, everyday tasks to demystify the technology, while also addressing the significant safety and security challenges that AI presents.

Kolter, who co-founded the AI safety company Gray Swan AI, provides practical advice for beginners and discusses the profound long-term implications of AI, including risks that were once considered science fiction.

Key Takeaways

  • AI expert Kolter recommends that beginners start using ChatGPT for fun, simple tasks like writing stories or planning meals to build familiarity.
  • While AI is a powerful tool for productivity, including coding and research, it can hinder deep learning by removing the necessary struggle for comprehension.
  • Kolter highlights significant safety concerns, such as the potential for AI to provide instructions for creating bioweapons, and emphasizes the need for secure deployment.
  • The long-term risk of AI systems improving themselves, once dismissed as science fiction, is now a plausible scenario that requires serious academic and industry attention.

Getting Started with Artificial Intelligence

Since OpenAI released ChatGPT in November 2022, artificial intelligence has become widely accessible. The platform now serves more than 700 million users weekly, according to an OpenAI report. Despite its popularity, many people find the technology intimidating.

Kolter, who directs the Machine Learning Department at Carnegie Mellon University, suggests that the best way to overcome this hesitation is through direct interaction. He believes everyone should experiment with these tools to understand their capabilities and limitations.

"What everyone should be doing… is just using these tools, not feeling like there’s some unknown, scary thing that is inscrutable and hard for the average person to get a handle on," Kolter stated.

He recommends starting with enjoyable or practical activities. "Some of the first things I did with ChatGPT — it was very good at things like writing bedtime stories for my kids," he explained. "Just experiment with it, ask it to plan your meals for the week, whatever sort of normal things that are part of your everyday routine."

Practical Applications for Everyday Life

Kolter has integrated AI into his daily personal and professional routines, demonstrating its versatility. He noted that for him, ChatGPT has largely replaced traditional search engines like Google for information retrieval.

Daily AI Usage

Kolter estimates he makes anywhere from 20 to 100 ChatGPT queries a day, ranging from simple questions to complex problem-solving for work and home projects.

At home, he uses it for DIY projects by taking a picture of a broken item and asking for repair instructions. "I’ve done more DIYs around the house, because I’m able to just ask ChatGPT," he said. This approach eliminates the need to search for product manuals.

In his professional work, Kolter uses AI to accelerate research. He prompts it to summarize academic papers to quickly determine if they are relevant for a deeper reading. However, he identifies coding as one of the most transformative applications.

"The biggest thing that I am extremely impressed with right now is coding with AI systems," he said. "If you have these systems that can now write code, there’s this potential for fundamentally transforming how you interact with computers."
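For readers who want to see what a workflow like the paper-summarization one looks like in code, here is a minimal, illustrative sketch written against OpenAI's Python SDK. The article does not say which tool or model Kolter uses; the model name, prompt wording, and placeholder abstract below are assumptions for illustration only.

    # Illustrative sketch: ask a chat model to summarize a paper abstract and flag
    # whether it merits a deeper read. Assumes the `openai` package is installed
    # and the OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()

    abstract = "Paste the paper's abstract (or full text) here."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You summarize academic papers for a busy researcher."},
            {"role": "user",
             "content": "Summarize this paper in three sentences and say whether "
                        "it is worth a deeper read:\n\n" + abstract},
        ],
    )

    print(response.choices[0].message.content)

The same pattern, with a different prompt, covers many of the everyday uses described above; only the instructions sent to the model change.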

AI in Education and Learning

While AI can be a powerful assistant, Kolter urges caution when using it for educational purposes. He argues that AI is most effective when a user has some existing knowledge of a subject. For a complete novice, learning a skill like coding solely through AI might be difficult.

The primary concern is that AI can remove the productive struggle essential for deep learning. "The way humans learn... is you learn through struggling with hard problems," he explained. "The danger of AI education is that it short-circuits this process."

The 'Productive Struggle' in Learning

Educational psychology emphasizes that grappling with difficult concepts without immediate answers helps build stronger, long-term understanding. By providing instant solutions, AI tools can prevent this crucial cognitive process from occurring, potentially leading to superficial knowledge.

Kolter acknowledges that AI can be structured to act as a tutor that facilitates this struggle, but its most common use is simply to provide answers. "If you just use it to answer your questions, there’s a lot of sense in which you’re not really doing the learning yourself," he warned.

Understanding the Technology and Its Risks

Kolter demystifies the inner workings of large language models (LLMs) like ChatGPT, describing the underlying code as surprisingly simple, roughly 100 lines in total. The complexity lies not in the code but in the AI "model," which contains billions of numerical values learned from vast amounts of training data.

When a user inputs a question, the system converts it into numbers, processes them through the model, and predicts the most likely next word. This process repeats to generate a full response. Kolter calls this achievement "one of the most impressive discoveries in all of science."
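To make that loop concrete, here is a toy sketch in Python. The tiny vocabulary and random "weights" below are invented purely for illustration and bear no relation to ChatGPT's actual model, but the loop is the one described above: convert text to numbers, score every possible next word, append the most likely one, and repeat.

    # Toy illustration of next-word prediction (not a real language model).
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "a", "mat", "."]   # stand-in vocabulary
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(len(vocab), len(vocab)))    # stand-in for billions of learned values

    def predict_next(token_id):
        scores = weights[token_id]                         # score each candidate next word
        probs = np.exp(scores) / np.exp(scores).sum()      # softmax: scores -> probabilities
        return int(np.argmax(probs))                       # choose the most likely next word

    tokens = [vocab.index("the")]                          # the prompt, converted to numbers
    for _ in range(6):                                     # repeat to build up a response
        tokens.append(predict_next(tokens[-1]))

    print(" ".join(vocab[t] for t in tokens))

In a real system the weights are learned from enormous amounts of text, which is why the predictions form coherent answers rather than the word salad this toy produces, and the model conditions on the entire conversation so far rather than only the previous word.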

Alongside his enthusiasm for the technology's potential, Kolter is deeply involved in addressing its risks. As chair of OpenAI's safety and security committee and co-founder of Gray Swan AI, he focuses on how to deploy these powerful tools safely.

"It’s not at all far-fetched that in the very near future, AI systems will be able to basically tell a novice how to do things like create a bioweapon," he stated, highlighting a major security concern. He confirmed that major labs are actively working to mitigate such risks.

The Future of AI and Existential Questions

Looking ahead, Kolter expresses both optimism and serious concern. His hope is that AI will unlock a new era of human creativity by offloading time-consuming tasks. "Everyone will only be limited by our imagination, in terms of what you can build and be empowered to do," he projected.

However, he also validates concerns about more distant, science-fiction-like scenarios. One significant possibility is that AI systems will soon become capable of conducting AI research themselves, leading to a rapid, recursive cycle of self-improvement.

Kolter reflected on how the academic community's stance has shifted. "For a very long time, as faculty working in AI, our job was to reassure the public that the Terminator was not real," he said. "I just don’t know if we can confidently make that assertion anymore."

He concluded that while he doesn't know if these extreme scenarios will become a problem, it is no longer unreasonable to consider them. He believes it is now the responsibility of academics to contribute seriously to the discussion on these profound issues.