Shawn Lim
May 3, 2024

Why OpenAI believes artificial general intelligence can align with humanity

OpenAI's CTO Mira Murati stresses that the company is doing all it can to achieve artificial general intelligence (AGI) while ensuring the technology does not become uncontrollable or replace humans.

Mira Murati, the chief technology officer at OpenAI.

OpenAI announced last month that it would open a Japan office—its first in Asia—to tap into new revenue streams and foster long-term partnerships with local businesses, government, and research institutions in Japan.

While OpenAI has always believed in scaling large language models (LLMs) with vast amounts of compute and data, Mira Murati, the chief technology officer at OpenAI, said during a fireside chat at Qualtrics' X4 Summit 2024 that watching the models perform across various domains has been nothing short of astonishing for the company.

For instance, these models excel in specialised tests such as those in biology and mathematics, which are traditionally used in academic settings.

Murati, known for her work in leading the development and strategy of AI technologies like GPT-3 and ChatGPT, noted that these models' statistical performance translates into solving complex, real-world problems, which “sometimes seems almost magical”.

“Today, sophisticated models like GPT-3.5 can be used for minimal or no cost, starkly contrasting previous technological adoptions that took much longer to penetrate society,” said Murati.

“This swift adoption has significantly influenced economic dynamics, especially with the introduction of large language models into the workforce. The speed at which these technologies have entered public consciousness and influenced the regulatory landscape is noteworthy.”

She added: “It is encouraging to see numerous governments establishing AI safety institutes and engaging with civil society, policymakers, and regulators worldwide. These efforts aim to create a broader infrastructure to ensure the technology is helpful and safe for global society.”

The promise of AI

LLMs like GPT have a tremendous opportunity to transform our relationship with knowledge and creativity across every domain.

Murati, whose role involves not just technological development but also addressing the broader implications of AI on society, with a focus on safety and ethical issues, said she is particularly excited about the potential in education and healthcare because the possibilities to improve the quality of life worldwide are vast.

She explained that OpenAI is driven by a grand vision to enhance global living standards, enabling most people to access free, high-quality healthcare and education.

For example, the company is moving towards more personalised learning experiences in education, with institutions like Khan Academy and Carnegie Mellon already using OpenAI’s models to develop curricula.

This personalisation shifts the traditional model from one teacher per 30 students to AI tutors that can provide bespoke motivation and education, enhancing individual learning and creativity.

Similarly, Arizona State University and edX, a US for-profit online education platform founded by Harvard and MIT, are leveraging AI to streamline administrative tasks and provide live support during lessons.

“In healthcare, the adoption of AI is still emerging but shows significant promise. Companies like Moderna and Unlearn AI are using AI to refine and accelerate clinical trials, making them more robust and efficient,” said Murati.

"Additionally, companies like Soma Health are utilising AI to reduce the administrative burden on physicians, allowing them to focus more on patient care. Overall, we are just beginning to explore the vast potential of these technologies, and there is an incredible opportunity for further innovations that could fundamentally enhance how we live and learn."

When asked by moderator Gurdeep Pall, president of AI Strategy at Qualtrics, how far away the world is from a reality where bots and assistants can thoroughly plan and book a trip, Murati said OpenAI envisions a gradual progression in which people delegate increasingly complex tasks to AI systems over extended periods.

Murati explained that as AI systems gain access to more tools and OpenAI develops new user interfaces, the company anticipates much more seamless and enhanced collaboration.

“Soon, interactions with AI systems will start with specific, less ambiguous directions. However, as these systems' underlying capabilities improve and the tools at their disposal become more powerful and diverse, these agents will be able to handle more complex tasks and operate over longer timelines,” said Murati.

“We are already observing the beginnings of this trend with systems like GPT-3 and others. Over the next few months, we expect significant advancements in this area.”

Safety and ethical measures as AI models become more powerful

OpenAI’s mission is to develop artificial general intelligence (AGI), a highly advanced AI system that the company plans to deploy globally in ways that benefit all of humanity. However, the path to achieving that is controversial.

In March 2024, OpenAI co-founder Elon Musk initiated legal action against OpenAI, asserting that the organisation and its executives violated the company's original charter. According to the lawsuit, this charter mandated that OpenAI should advance AGI to serve the interests of humanity universally.

Musk's legal complaint targeted OpenAI's shift to a commercial business model and its collaboration with Microsoft, as well as the launch of the GPT-4 model in March 2023. He claimed these actions directly contravened the foundational agreement, especially as he believes the GPT-4 model has already achieved the capabilities of AGI.

As the company progresses towards AGI, Murati stressed that OpenAI focuses not just on enhancing these models' capabilities, robustness, and utility. She explained the focus is equally on ensuring their safety, aligning them with human values, and guaranteeing they function as intended without causing harm or becoming uncontrollably powerful.

OpenAI deploys these models when they are less powerful and in highly controlled environments. The company aims to understand specific use cases and industries, assessing capabilities and associated risks.

This approach, which OpenAI calls iterative deployment, involves integrating safety measures at every development stage, from model creation through to post-deployment, when the company monitors usage and enforces its policies.

“Part of our strategy involves engaging with industry, government, regulators, and society—not as passive onlookers but as active participants. This helps everyone understand what these technologies are capable of and how they can advance their interests securely and beneficially,” explained Murati.

“We also focus on potential misuse and risks. While eliminating risk is impossible, our dedicated team works tirelessly with various stakeholders to minimise possible harm. This team's efforts include tackling long-term or catastrophic risks that might arise from creating models beyond our control, a field known as alignment research. This involves aligning these models with user behaviours, guidance, and values, a challenge that is complex technically, socially, and from a governance standpoint.”

Murati continued: “Our preparedness strategy involves a scientific and data-driven approach to understanding risks from frontier AI systems. This encompasses predicting the capabilities of these models before they are fully developed, tracking risks, and preparing for deployment. It also includes identifying key risks across various categories, like cybersecurity and biological threats, and collaborating with external experts to test these models' limits to ensure they can be safeguarded against potential misuse.”

Ultimately, OpenAI aims to create AGI that benefits humanity by developing this technology in partnership with others through iterative deployment, introducing it to the public when the stakes are manageable, and collaborating with industry, governments, and civil society to enhance the technology's robustness for broader application.

The company implements this strategy through its API platform and by having its ChatGPT team engage directly with users for feedback. This feedback helps the company align and refine its models, making them more valuable and robust.

According to Murati, gathering feedback occurs daily, especially in the early stages of technology development as OpenAI understands the model's capabilities but may not have expertise in specific domains like healthcare.

She explained that OpenAI relies heavily on deep customer partnerships to advance the models and develop the product. This product development method starts with the technology but involves working with partners and users to leverage its potential, making it entirely user-friendly and discoverable.

“Our GPT-3.5 API made the importance of the user interface evident. It was a game-changer when we introduced it through the ChatGPT interface, allowing easier dialogue interaction,” said Murati.

“I recall showing the ChatGPT interface to a customer during a meeting about something else. They were incredibly impressed with it, even though the underlying model was less potent than other technologies I had shown them. This experience, although not the only factor, was one of the considerations for releasing ChatGPT as a research preview.”

According to Murati, developing the Sora video generation model with artists and creators involves rigorous testing to understand potential misuse and develop necessary guardrails.

However, she explained that OpenAI has not released the beta version of this model because it is still gathering feedback to ensure the model adds value and enhances creativity rather than detracting from it.

“This approach to product development and deployment is unique, as it involves real-time adjustments and feedback collection from a wide range of users,” said Murati.

On the API side, Murati said OpenAI aims to empower as many companies as possible to use these models by providing tools that make them affordable and user-friendly, eliminating the need for in-depth machine-learning expertise when integrating them into businesses.

“We plan to continue investing in our platform to enable as many businesses as possible to benefit from these models. We anticipate dealing with challenges in materials and consumer paths and aim to educate enterprises further,” explained Murati. 

“Additionally, I foresee a proliferation of smaller, open-source models that anyone can use, contributing to an increasingly rich ecosystem.”
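
To give a concrete sense of the low integration barrier Murati describes, the sketch below shows roughly what a single call to OpenAI's chat API looks like in Python using the official openai client; the model name, prompts, and support scenario are illustrative assumptions rather than details from the talk.

```python
# Illustrative sketch only: a minimal chat completion call with the official
# openai Python client (v1.x). The model name, prompts, and scenario are
# assumptions for the example, not details from Murati's remarks.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model the account can access
    messages=[
        {"role": "system", "content": "You are a concise customer-support assistant."},
        {"role": "user", "content": "Summarise this ticket: the invoice total looks wrong."},
    ],
)

# The reply text sits on the first returned choice.
print(response.choices[0].message.content)
```

For most businesses, a handful of lines like these is the extent of the machine-learning-specific code; the remaining effort is ordinary application development around the response rather than model training.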

The path to achieving AGI

OpenAI expects increased multi-modality, incorporating speech, video, and images into its AI models to broaden their understanding of the world in a way that mirrors human perception and interaction, going beyond language alone.

Another development area is reinforcement learning, where models like ChatGPT interact directly with humans, receiving feedback that helps them become more helpful and valuable. This approach will make the technology more adaptable and applicable in real-world settings.

“Addressing the alignment problem becomes crucial as these models grow in power. We must ensure they align with user intentions, behaviours, and societal values,” said Murati. “Over the next few years, we anticipate significant progress across these four vectors, driving the evolution of our technology to serve better and enhance human capabilities.”

When Pall asked Murati what it means for humanity if, by 2045, models become significantly more capable than humans, Murati did not answer directly but said predicting the future over such a long stretch is challenging.

However, Murati said that if OpenAI continues making significant progress towards its goals, the future could well be very positive.

“Technology will enhance our knowledge and creativity even further. It's likely to change the nature of work and how we interact with information, each other, and ourselves,” said Murati.

“I'm optimistic we can achieve an incredible future with the right approach. We still have not solved many challenges, such as climate change and providing universal access to quality education and healthcare. This technology holds the potential to help us address these critical issues.”


Campaign's media and technology editor Shawn Lim is reporting from Qualtrics' X4 Summit in Salt Lake City this week.

Source:
Campaign Asia
