Is it possible to prioritise AI safety as much as innovation in a for-profit industry?

Tensions within OpenAI’s board spotlight the challenge of commercialising AI and establishing competitive advantage while also setting up guardrails.

OpenAI CEO Sam Altman was ousted by the company's board on Friday. (Photo: Getty Images)

The tension between quickly developing AI products and approaching the technology with caution to mitigate harm was reported as a key factor in the firing of OpenAI CEO Sam Altman by the company’s board on Friday.

According to reports, Altman wanted to rapidly accelerate AI’s development by seeking funding and commercialising products, while board members favoured moving more slowly to mitigate AI’s potential risks.

In a Friday blog post, OpenAI wrote that Altman’s departure “follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

In for-profit companies, slowing down the pace of innovation to do the right thing can be a tricky sell.

Even U.S. President Joe Biden’s AI Executive Order, released last month, recognised the need to balance promoting innovation with protecting consumers.

It seems Altman’s leadership was favoured by OpenAI staff. On Monday morning, most of the company’s more than 700 employees signed a letter to the board warning they would resign unless Altman and OpenAI’s former president Greg Brockman were reinstated and all current board members stepped down.

The employees said they would move to the new subsidiary at Microsoft that Altman and Brockman announced they had joined on Monday.

OpenAI hired Twitch co-founder Emmett Shear as interim CEO on Monday, but he may not hold the position for long. Altman, who initially appeared uninterested in returning, said his and Brockman’s move to Microsoft isn’t finalised, and that they’re willing to return should the board members who fired him step aside.

As OpenAI contends with growing competition from the likes of Google, Microsoft and Meta, the ad industry is in a similar race to develop AI products that outpace competitors and help clients do the same. That race can sideline safety, according to advertising executives and analysts we asked for comment.

As AI becomes one of the most competitive technologies in the world, Campaign asked experts within the ad industry whether it is possible or realistic to balance the pace of innovation with implementing safety guardrails. Their full responses are below.

Craig Elimeliah, chief creative officer, /Prompt

The tension between rapidly developing AI and approaching it cautiously to mitigate harm, highlighted by Sam Altman’s firing, is absolutely a central challenge in the AI and ad industries. 

Balancing the pace of innovation with safety guardrails is not only possible, but necessary. It requires a concerted effort to integrate ethical considerations into the development process, ensuring AI is both innovative and responsible.

In a for-profit industry, this tension often manifests as a push-pull between market demands and ethical imperatives. 

The competitive drive can accelerate innovation, but without careful oversight, it risks overlooking crucial safety and ethical standards. 

I expect companies to increasingly adopt a dual focus, pursuing technological and creative advancements at speed while also establishing robust ethical frameworks and transparency in AI development. The outcome will likely be a new industry standard where safety and innovation are not mutually exclusive, but are integrated into the core of AI development strategies.

This will involve collaborative efforts across agencies, clients and sectors, including regulatory bodies, to create guidelines that foster both innovation and public trust in AI technologies.

Elav Horwitz, global head of applied innovation, McCann Worldgroup

If we cut through the noise, tweets and dynamics between boards and founders, startups and corporates, the central issue remains: aligning this powerful technology with humanity’s goals. Generative AI has the power to advance the world and also to mirror some of its less pleasant biases; it’s up to us to make sure we focus it on the former. That is certainly an achievable objective, but it requires awareness and action on the part of the humans directing and safeguarding the machine.

The challenge of generative AI has been apparent from day one, but now we seem closer than ever to having a machine more capable than any human (we wish OpenAI would disclose more on this). Regardless of how this unfolds, our industry urgently needs governance and policy to adopt this technology responsibly. The genie is out of the bottle; we know it's going to transform our industry and redefine creativity. More than ever, it's crucial to protect human talent, guard against misinformation and balance the quest for speed and efficiency against long-term implications.

Nicole Greene, VP analyst, Gartner

Generative AI is unique in that it is equally a technology and a people solution. It’s changing our relationship with technology. We need to think through the idea that if we can define how generative AI should help society, then we can set the right constraints and the right goals, but these need to be detailed.

Responsible use and risk mitigation are vital to success with generative AI—Gartner research shows that 56% of marketing technology leaders believe the benefits of AI outweigh the risks. In contrast, 82% of U.S. consumers are concerned there will be no net benefit to society. That’s a huge gap, and one that advertisers need to help bridge.

We need a proactive approach to responsible use and risk mitigation—transparency, privacy and security, governance, and a focus on ethical use—both in the constraints and goals we set within these models and in addressing the bias that is inherent in the technology and can be amplified by it.

Max Lenderman, chief experience officer, GMR Marketing

Altman always had to thread the needle between OpenAI’s explicitly non-profit mission to usher in the era of AI in a way that is intentional, methodical and responsible (the reason Musk put his money behind it at its inception) and the commercial pressure that followed: $13 billion from Microsoft, and billions more from VCs, to productise and scale ChatGPT.

The result is the first indication that, no, safety is not as much of a consideration in a for-profit capacity. I am not a doomsdayer; I think the arc of humanity bends toward goodness. But profit motives make good people do bad things. Just look at the safety issues plaguing X, which has nothing to do with AI (for the moment; that’s a whole other story, innit?). We are in a post-safe world, and businesses will have an entirely new way of gauging safety from now on.

Michael Dobell, chief innovation officer, MediaMonks

It’s a high-stakes game being played at a moment when the AI firms are flush with VC cash but have yet to demonstrate anything remotely close to the ARR required to be viable. Hence the push for speed and features, and, hopefully with them, an adequate paid user base.

On the other hand, we know that ChatGPT was essentially launched as an MVP, and there’s a lot of fundamental research and development needed to make it and models like it run faster, with lower energy demand and with the built-in ethics and safety systems to help them scale safely.

There is also the question of giving the economy, culture and governance time to adapt and make a responsible transition.

Adding to the tension is the flywheel effect in play, where being fast means being first, but with that speed comes unclear risk. Is there an AI alignment risk? I can’t begin to speculate.

What is obvious is that there’s a balancing act going on that’s familiar to any organisation that values innovation and growth. Getting it right is usually as much about leaders inside the organisation getting the communication and culture right as it is about the technology and business underpinnings. Set the culture right and, like getting the pH in the garden right, good things grow.

I think what’s really clear is that Sam Altman had the pH right: 710 of OpenAI’s 770 employees signed the petition for his return on Monday.

Slavi Samardzija, CEO, Annalect

Recent developments in AI, particularly generative AI, are tremendously exciting—and the opportunities for the global business community are seemingly endless. But, as with many new and emerging technologies, there are risks that must be anticipated and mitigated. 

As new technologies continue to emerge, we, as an industry, should prioritise balancing rapid prototype development with due diligence across employee training, tools and infrastructure, legal, privacy and data ethics. Doing so will ensure that we are adequately mitigating risks and protecting the interests of our own organisations and the clients we serve.

Source:
Campaign US
