The Sam Altman saga reveals the need for AI transparency - Democratic Voice USA

It’s been a roller-coaster week for Sam Altman, the past, now present and hopefully future CEO of artificial-intelligence giant OpenAI.

Last weekend, the company’s board shocked the technology world by firing Altman — for no apparent reason.

The move left Microsoft, OpenAI’s largest investor, reeling.

After failing to have him reinstated, Microsoft CEO Satya Nadella announced that Altman and his co-founder Greg Brockman were jumping ship to lead Microsoft’s new AI research arm.

Next up was a near company-wide revolt, as most of OpenAI’s 800 employees made clear they wanted Altman back — or were ready to follow him to Microsoft.

And so by midweek Altman had been reinstated at OpenAI, accompanied by a new board of directors, which includes former Harvard President Lawrence Summers.

The entire affair unfolded with startling speed and more than a hint of subterfuge.

What were the real reasons for dismissing Altman — a hugely capable leader who, among other things, was spearheading a funding round valuing OpenAI at $87 billion?

That’s probably a question for ChatGPT.

Altman began to raise the ire of OpenAI’s board even as he was steering the firm toward a massive funding round. AP

OpenAI began life as a non-profit tasked with advancing responsible AI research but has more recently morphed into a typical high-growth tech company.

Some on the board, including the company’s Chief Scientist and an AI ethicist, worried that Altman was breaking away from the company’s founding principles of altruism.

They feared Altman’s bottom-line focus — and new AI products reaching near-sentient status — could put humanity at risk. 

The Altman-OpenAI saga has left many industry observers with a Silicon Valley-style case of whiplash.

There’s also a fair measure of uncertainty around this next-gen OpenAI – both in terms of its ongoing stability and its approach to the future growth of AI as a whole. 

Microsoft CEO Satya Nadella told Altman to come work for him after his initial ouster from OpenAI. AP

Will this week’s backroom machinations further entrench existing tech giants, like Microsoft?

Or will fast-moving start-ups like OpenAI remain the stewards of AI’s future?

Will governments throttle AI’s growth through onerous new rules?

Or will so-called “doomer” AI skeptics turn the public against AI before it even gets fully going? 

The truth is that none of these outcomes addresses AI’s biggest problem: the murkiness over how to train, build, and ship new AI products responsibly.

And fixing this begins with doubling down on openness and transparency.

Indeed, Microsoft’s Nadella called the naming of a new OpenAI board a key first step toward “well-informed and effective governance.”

Microsoft is the largest investor in OpenAI, having poured some $10 billion into the AI leader. REUTERS

For AI to reach its potential safely at scale, we need transparency improvements at every step.

We need to decentralize AI’s existing framework so that it’s governed by many rather than a few.

Embracing decentralized decision-making reduces any single point of failure — such as a disgruntled board, a charismatic CEO or an authoritarian regime.

As Walter Isaacson wrote, “Innovation occurs when ripe seeds fall on fertile ground.”

In other words, the AI technology stack is fertile; to cultivate it, we must plant new — and more inclusive — ideas.

Let’s start at the bottom of that stack, with hardware.

Today, three companies — Amazon Web Services, Microsoft, and Google — control roughly three-fourths of the cloud-computing market that stores all that AI data.

One company, NVIDIA, designs and sells most of the chips.

Decentralization would allow smaller, user-owned networks to offset this hegemony, while adding much-needed capacity to the industry.

Altman was in the Middle East raising money for a new hardware venture that would rival NVIDIA when he was fired.

To dislodge the big players entirely, he should embrace a decentralized model instead.

Next up are so-called “foundation models,” the AI “brains” that generate language, make art and write code (and lame jokes).

OpenAI’s 800 employees threatened to join Altman at Microsoft before he was rehired by their board. AFP via Getty Images

Companies guard these models with little oversight or transparency.

OpenAI’s models, for instance, are closed to public scrutiny. User-owned networks with multi-stakeholder input would be better than Microsoft or OpenAI having complete foundational control — which is where we are headed.

Equally important is actual data.

To “train” an AI foundation model, we need lots of data.

Companies like Microsoft and Amazon have grown rich and powerful amassing mountains of user data; that’s one reason OpenAI partnered with Microsoft to begin with.

Yet users don’t know how these AI firms are exploiting their personal data to train their models.

Decentralized data marketplaces such as Ocean Protocol allow individuals and organizations to securely (and accurately) share their data with AI developers.

The data silos of tech giants become less important.

Finally, at the top of the stack are applications.

Imagine a chatbot for K–12 students that acts as their personal tutor, fitness instructor and guidance counselor.

Embracing a range of open-source data systems is crucial if we are to decentralize control over AI from the hands of a few to the hands of many. Sergey Nivens – stock.adobe.com

We want transparency from AI products that talk to our children — and everyone else.

We also want some say in what these apps collect and store about us, how they use and monetize this information, and when they destroy it. 

OpenAI currently offers little of this.

AI could alter humanity’s fate profoundly. But so far, just a select few — Altman and Nadella among them — are determining its future behind closed doors.

They claim to represent the interests of all of humanity, but no one really knows. 

Neither do we know why OpenAI initially sent Altman packing last week.

But a lack of “consistent candidness” — a.k.a. transparency — was cited by his detractors.

Back where it all began, Altman will likely emerge stronger than ever.

Now he must use that strength to advance the core “openness” OpenAI has always claimed to hold dear.

Alex Tapscott is the author of “Web3: Charting the Internet’s Next Economic and Cultural Frontier” (Harper Collins, out now) and a portfolio manager at Ninepoint Partners.

Source link: https://nypost.com/2023/11/25/opinion/the-sam-altman-saga-reveals-the-need-for-ai-transparency/
