A ten thousand word essay reveals the behind-the-scenes ‘power struggle’ of OpenAI

This article is written by Wuji and sourced from Tencent Technology.

According to The New Yorker, before last month’s “power struggle” at the artificial intelligence startup OpenAI, the company had already developed, in collaboration with Microsoft, an ambitious yet safety-minded protocol for releasing artificial intelligence. However, OpenAI’s former board of directors completely disrupted the two companies’ carefully laid plans.

Here is the full article:

On the Friday before Thanksgiving this year (November 17th), at around 11:30 am, Microsoft CEO Satya Nadella was in the midst of the company’s weekly executive meeting when a panicked colleague interrupted him: he needed to take an urgent phone call. An executive from OpenAI, the artificial intelligence startup, explained that within the next 20 minutes the board of directors would announce the dismissal of OpenAI co-founder and CEO Sam Altman. This marked the beginning of a five-day “power struggle” drama at OpenAI. Internally, Microsoft referred to the crisis as a “one-sided turkey-shoot clusterfuck”.

Nadella, usually easygoing, was taken aback, to say the least, and at first didn’t know what to say. He had worked closely with Altman for more than four years and had come to appreciate and trust him. Moreover, their collaboration had just led Microsoft to host its largest launch event in a decade: showcasing numerous cutting-edge AI assistants built on OpenAI’s technology and integrated into Microsoft’s core productivity applications such as Word, Outlook, and PowerPoint. These assistants, essentially specialized and more powerful versions of OpenAI’s acclaimed ChatGPT, are known as Office Copilots.

However, what Nadella didn’t know was that there had been issues brewing between Altman and the OpenAI board. Some of the six members of the board found Altman to be “sly and cunning” – qualities that are common among CEOs in the tech industry but not appreciated by board members with an academic or non-profit background. “They felt that Altman had lied,” said one person familiar with the board’s discussions. Now, these tensions were exploding in front of Nadella, threatening a crucial partnership.

For years, Microsoft had not been at the forefront of the technology industry, but its alliance with OpenAI – founded as a non-profit organization in 2015, with a for-profit arm added four years later – had positioned it ahead of competitors like Google and Amazon. Copilots allowed users to effortlessly interact with software by asking questions like, “Tell me the pros and cons of each plan described in the video call,” or “What is the most profitable product among these 20 spreadsheets?” and immediately getting answers in fluent English. Copilots could write complete documents from simple instructions. (“Take a look at the last ten executive summaries and create a financial overview of the past decade.”) Copilots could turn memos into slides, listen in on team video meetings, summarize the content in multiple languages, and create to-do lists for participants.

Microsoft’s development of Copilots required continuous cooperation with OpenAI, a relationship that is also central to Nadella’s plans for Microsoft. In particular, Microsoft collaborated with OpenAI engineers to install safety guardrails. OpenAI’s core technology, called GPT, is a kind of artificial intelligence known as a large language model. GPT learns to mimic human conversation by reading enormous amounts of publicly available text from the internet and other data sources, then using complex mathematics to determine how each piece of information relates to every other piece. While these systems have produced remarkable results, they have noticeable weaknesses: a tendency to “hallucinate,” or fabricate facts; a willingness to assist in wrongdoing, such as producing a recipe for fentanyl; and an inability to distinguish reasonable questions (“How should I talk to a teenager about drug use?”) from sinister ones (“How can I convince a teenager to do drugs?”). Microsoft and OpenAI developed a protocol for incorporating safety measures into their AI tools, one they believe allows them to achieve their ambitious goals without risking disaster. The release of the Copilots was a pinnacle moment for both companies, demonstrating that Microsoft and OpenAI are key players in bringing AI to a broader public. The rollout began this spring with select enterprise customers and expanded to a wider range of users in November. ChatGPT, launched at the end of 2022, was a sensation, but it had only about 14 million daily active users; Microsoft has more than a billion.

When Nadella recovered from the shock of Altman’s dismissal, he called Adam D’Angelo, a member of OpenAI’s board of directors, to ask for details. D’Angelo’s brief explanation to Nadella was the same one that appeared in the company’s statement a few minutes later: Altman had not been “consistently candid in his communications with the board.” Had Altman engaged in misconduct? No, but D’Angelo refused to say more. He and his colleagues had even intentionally kept Nadella in the dark about their plan to dismiss Altman, because they did not want Nadella to warn him.

Nadella hung up the phone feeling frustrated. Microsoft owns nearly half of OpenAI’s for-profit division—Nadella’s opinion should certainly have been sought when the OpenAI board made such a decision. More importantly, he knew that the dismissal could trigger an internal war within OpenAI and potentially extend to the entire fast-paced tech industry that has been vigorously debating whether the rapid development of AI is cause for celebration or concern.

Nadella immediately called Kevin Scott, Microsoft’s Chief Technology Officer and the primary architect of the OpenAI partnership. Scott had already heard the news, which was spreading quickly. They immediately convened a video conference with other Microsoft executives and asked one another: Was Altman’s dismissal due to the tension between speed and safety in releasing AI products? Both OpenAI and Microsoft, as well as some big names in the tech industry, had previously expressed concern about AI companies advancing recklessly. Even Ilya Sutskever, OpenAI’s Chief Scientist and a board member, had publicly discussed the dangers of unconstrained AI. In March 2023, shortly after OpenAI released its most powerful AI service to date, GPT-4, thousands of people, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause in training advanced AI models. “Should we let machines flood our information channels with propaganda and lies?” the letter asked. “Should we risk losing control of our civilization?” Many Silicon Valley observers saw the letter as essentially a rebuke of OpenAI and Microsoft.

To some extent, Scott respected these concerns. He believed the public discussion of artificial intelligence had fixated strangely on science-fiction scenarios – computers destroying humanity – while largely overlooking the technology’s potential to “level the playing field.” Scott felt that if artificial intelligence was developed with enough caution and patience, it could communicate with users in plain language and become a transformative, equalizing force.

Scott and his partners at OpenAI decided to release artificial intelligence products slowly but steadily: Microsoft would observe how untrained users interacted with the technology, while users would learn its strengths and limitations for themselves. By releasing imperfect AI software and soliciting honest feedback from customers, Microsoft found a practical approach that improved the technology while cultivating healthy skepticism among users. Scott believed the best way to manage the dangers of artificial intelligence was to be as transparent as possible with as many people as possible, gradually integrating the technology into our lives – starting with mundane applications. And what better way to teach humans to use artificial intelligence than through something as unsexy as a word processor?

All of Scott’s careful positioning was now in jeopardy because of Altman’s dismissal. As word of the firing spread, employees of OpenAI – who have a near-fanatical belief in Altman and OpenAI’s mission – began expressing their frustration online. Mira Murati, the startup’s Chief Technology Officer, was appointed interim CEO, but she did not accept the role enthusiastically. Soon, OpenAI President Greg Brockman posted on the social platform X: “I quit.” Other OpenAI employees began threatening to resign as well.

In the video call with Nadella, Microsoft executives began discussing possible responses to Altman’s ouster. Plan A was to try to stabilize the situation by supporting Murati, then work with her to see whether the startup’s board would reverse its decision, or at least explain its rash move.

If the OpenAI board refused to budge, Microsoft executives would move to Plan B: using the company’s enormous influence – including the billions of dollars it had committed to OpenAI but not yet handed over – to help Altman return as CEO and reshape OpenAI’s governance by replacing board members. Insiders say that during the meeting, Microsoft executives put it this way: “From our perspective, things had been going swimmingly, and then the OpenAI board did something erratic, so we thought, ‘Let’s get some adults in charge and get back to what we had.’”

If both plans failed, Plan C was for Microsoft to hire Altman and his most talented colleagues and, in effect, rebuild OpenAI inside the company. In that scenario, the software giant would own all the emerging technology outright, meaning it could sell it to others – potentially a huge money-maker.

The team on the video call believed all three plans were strong. But Microsoft’s goal was still a return to normal. The belief behind this strategy was that Microsoft had already figured out the methods, safety measures, and frameworks needed to develop AI responsibly. Whatever happened with Altman, the company was pushing forward with its own blueprint for bringing AI to the masses.

Key figures collaborating with OpenAI

Scott is convinced that artificial intelligence can change the world because the technology has already completely changed his own life. He grew up in Gladys, Virginia, a small community not far from where Confederate General Robert E. Lee surrendered to Ulysses S. Grant at the end of the American Civil War. No one in his family had ever gone to college, and health insurance was almost a foreign concept. As a boy, Scott sometimes relied on his neighbors for food. His father, a Vietnam veteran who had tried running gas stations, convenience stores, trucking companies, and various construction businesses, declared bankruptcy twice.

Scott wanted a different life. His parents bought him a set of encyclopedias on monthly installments, and like a precursor to large language models, Scott read the entire set from cover to cover. For fun, he took apart the toaster and food processor in his house. He saved up enough money to buy the cheapest computer from Radio Shack and learned programming by consulting library books.

In the decades before Scott’s birth in 1972, the area around Gladys had been home to furniture and textile mills. By the time he was a teenager, much of that manufacturing had moved overseas. Technology – mainly supply-chain automation and advances in telecommunications – appeared to be the culprit, making it easier to produce goods abroad where costs were lower. But even as a teenager, Scott felt that technology wasn’t the real villain. “This country told itself that outsourcing was inevitable,” Scott said in an interview in September. “We could have talked about the societal and political downsides of losing manufacturing, or the importance of protecting communities. But those conversations never really happened.”

After attending Lynchburg College, a local school affiliated with the Disciples of Christ, Scott earned a master’s degree in computer science from Wake Forest University and began pursuing a Ph.D. at the University of Virginia in 1998. He was fascinated by artificial intelligence, but he learned that many computer scientists regarded it as roughly equivalent to astrology. Early attempts to create AI had failed, and the field had become synonymous with quixotic thinking in academia and software companies. Many leading thinkers had abandoned the discipline. In the 2000s, some scholars sought to revive it by rebranding the field as “deep learning.” But skepticism remained: at an AI conference in 2007, some computer scientists produced a spoof video implying that the deep-learning crowd were cult members.

While pursuing his Ph.D., Scott noticed that some of the best engineers he encountered emphasized the importance of being a short-term pessimist and a long-term optimist. “It’s almost necessary,” Scott said. “You see all the broken things in the world, and your job is to work hard to fix them.” Even if engineers believe that most of their attempts will fail, and that some attempts may make things worse, they “must believe they can solve the problem until things eventually get better.”

In 2003, Scott took a leave of absence from his doctoral program and joined Google, where he oversaw mobile advertising engineering. Several years later, he left Google to run engineering and operations at the mobile-advertising startup AdMob, which was later acquired by Google for $750 million. Scott then moved to LinkedIn, where he became known for his exceptional ability to build ambitious projects in an inspiring yet realistic way. In 2016, Microsoft acquired LinkedIn, and Scott joined Microsoft as well.

At that time, Scott was already very wealthy but relatively unknown in tech circles, because he preferred to stay anonymous. He had planned to leave LinkedIn once the Microsoft acquisition closed, but Satya Nadella, who had become Microsoft’s CEO in 2014, urged him to reconsider. Nadella pointed to developments that piqued Scott’s curiosity about artificial intelligence: advances in the field – driven partly by faster microprocessors – had propelled the technology to prominence. Facebook had developed a sophisticated facial-recognition system, and Google had built an AI capable of proficiently translating languages. Nadella soon announced that at Microsoft, AI would “drive all our future actions.”

Scott was unsure whether he and Nadella shared the same ambitions. He sent Nadella a memo explaining that if he stayed, he wanted part of his agenda to focus on lifting up the people the tech industry usually overlooks. Scott hoped AI could assist smart individuals who had never received a digital education – he himself had been one of them. It was a compelling argument, though one some technologists might find self-serving, given the widespread concern that AI-assisted automation would eliminate jobs such as grocery-store cashier, factory worker, or movie extra.

However, Scott believed in a more optimistic story. In an interview, he noted that there was a time when around 70% of Americans worked in agriculture. Technological advances reduced the demand for labor, and today only 1.2% of the workforce farms. But that doesn’t mean millions of farmers simply became unemployed: many became truck drivers, went back to school to become accountants, or found other paths. Scott said, “AI can perhaps be used to rejuvenate the American Dream to a greater extent than any technological revolution before it.” He felt that his childhood friend who runs a nursing home in Virginia could use AI to handle her interactions with health insurers and medical-assistance programs, letting the facility focus on daily care. Another friend, who works at a shop manufacturing precision plastic components for theme parks, could use AI to assist in the production process. Scott believes AI can make society better by turning “zero-sum transactions with winners and losers into non-zero-sum progress.”

Nadella read the memo and, as Scott recalled, said, “Yes, that sounds good.” A week later, Scott was appointed Microsoft’s Chief Technology Officer.

If Scott wanted to lead Microsoft through the AI revolution, he would have to help the company surpass Google, which had been luring AI talent with multimillion-dollar offers to almost anyone with even a modest breakthrough. Over the previous two decades, Microsoft had tried to compete by spending billions of dollars on internal AI projects, with little success. Microsoft executives had come to believe that a company as massive as Microsoft – more than 200,000 employees, plus a huge bureaucracy – lacked the flexibility and drive that AI development demands. “Sometimes smaller is better,” Scott said in an interview.

In this situation, Scott started paying attention to various startups, and one stood out: OpenAI. The company’s mission is to ensure that “artificial general intelligence – meaning highly autonomous systems that outperform humans at most economically valuable work – benefits all of humanity.” Microsoft and OpenAI had already established a partnership, with the startup using Microsoft’s Azure cloud-computing platform. In March 2018, Scott arranged a meeting with some employees of the San Francisco-based startup. He was delighted to meet dozens of young people who had turned down multimillion-dollar offers from large tech companies to work 18-hour days for an organization that promised its inventions would not “harm humanity or unduly concentrate power.” The company’s chief scientist, Sutskever, was particularly focused on preparing for the emergence of an artificial intelligence so capable that it might solve most of humanity’s problems – or cause massive destruction and despair.

Altman, meanwhile, was a charismatic entrepreneur determined to make AI useful and profitable. Scott believed the startup’s sensibility was perfect. He said that OpenAI was committed to “directing its energy towards the most impactful things. They have a real culture of ‘this is what we’re trying to do, these are the problems we’re trying to solve, and once we find something that works, we’ll double down on it.’ They have their own theory of the future.”

By then, OpenAI had already achieved remarkable results: its researchers had created a robotic hand that could solve a Rubik’s Cube even under challenges it had never encountered before, such as having some of its fingers tied together. But what excited Scott most came in a subsequent meeting, when OpenAI’s management told him they had abandoned the robotic hand because they saw no future in it. “The smartest people are sometimes the hardest to manage, because they have a thousand brilliant ideas,” Scott said. Yet the company’s employees were almost messianically passionate about their work. In July of this year, Sutskever said that AI would “disrupt every aspect of human life,” potentially making fields like healthcare “a billion times better” than they are now. That kind of confidence scared off some potential investors, but Scott found it deeply appealing.

This optimism stood in stark contrast to the gloom pervading Microsoft at the time. A former Microsoft executive said, “Everyone believed that artificial intelligence was a data game, and Google had more data, putting Microsoft at a huge disadvantage that could never be closed.” The executive added, “I remember feeling desperate, until Scott convinced us there was another way to play this game.” The cultural differences between Microsoft and OpenAI made them unlikely partners. But for Scott and Sam Altman – who had led the startup accelerator Y Combinator before becoming OpenAI’s CEO – joining forces was a shrewd move.

Nadella, Scott, and other Microsoft leaders were willing to tolerate these quirks because they believed that if they could strengthen their products with OpenAI’s technology and harness the startup’s talent and ambition, they would gain a significant advantage in the AI race. In 2019, Microsoft agreed to invest $1 billion in OpenAI. Since then, Microsoft has effectively acquired 49% of OpenAI’s for-profit division, along with the rights to commercialize OpenAI’s past and future inventions, including applying its technology to products such as Word, Excel, Outlook, Skype, and Xbox.

Murati, who grew up in poverty

Nadella and Scott’s confidence in the investment was bolstered by their relationships with Altman, Sutskever, and Chief Technology Officer Mira Murati. Scott particularly values his relationship with Murati. Like him, she grew up in poverty. She was born in Albania in 1988 and lived through the rise of gangster capitalism and the outbreak of civil war. She coped with the upheaval by competing in math competitions.

When Murati was 16, she received a scholarship to a private school in Canada, where she excelled. “Much of my childhood was filled with sirens, shootings, and other terrible things,” Murati said in an interview this summer. “But there were also happy birthdays, unrequited teenage crushes, and an ocean of knowledge. It teaches you a kind of resilience – a belief that if you keep working hard, things will get better.”

Murati studied mechanical engineering at Dartmouth College, where she joined a research team building a race car powered by supercapacitors, which can deliver massive bursts of energy. Some researchers dismissed supercapacitors as impractical; others chased more esoteric technologies. Murati believed both camps were too extreme – people like that, she felt, would never have made it through the minefields she had navigated to reach her school. Murati said you need to be both an optimist and a realist: “Sometimes people misunderstand optimism as careless idealism. But it must be carefully thought out and considered, with many guardrails – otherwise, you’re taking big risks.”

After graduation, Murati joined Tesla and then, in 2018, OpenAI. Scott said one reason he agreed to invest a billion dollars was that he had “never seen Murati panic.” They began discussing how to use supercomputers to train various large language models.

The two companies quickly built and put such a system into operation, and the results were impressive: OpenAI trained a model that could generate stunning images in response to prompts such as “show me a baboon throwing pizza next to Jesus, Matisse style.” Another system, GPT, could answer any question in conversational English, though not always correctly. But it was still unclear how ordinary people might use the technology for anything beyond idle entertainment, or how Microsoft would recoup its investment. Reports earlier this year said Microsoft’s investment would grow to $10 billion.

One day in 2019, Dario Amodei, an OpenAI vice president, showed his colleagues something extraordinary: he fed part of a software program into GPT and asked the system to finish writing it. It did so almost immediately, using a technique Amodei had not anticipated. No one could say exactly how the AI pulled this off – large language models are essentially black boxes. GPT contains relatively little actual code; its responses are based on billions of mathematical “weights” that determine, through complex probabilities, what the next output should be. It is impossible to map all the connections the model draws on when it answers a user’s question.
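The “next output by probability” mechanism described above can be illustrated with a toy sketch. The candidate tokens and scores below are invented purely for illustration; a real model like GPT derives such scores from billions of learned weights, not four hand-picked numbers:

```python
import math

def softmax(scores):
    """Turn raw model scores ("logits") into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might assign to candidate next tokens
# after seeing the prompt: "def add(a, b): return a + "
candidates = ["b", "a", "1", "self"]
logits = [4.2, 1.1, 0.3, -0.5]  # invented numbers, for illustration only

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token!r}: {p:.3f}")

# The model emits the most probable continuation, completing the code.
best = candidates[probs.index(max(probs))]
print("chosen token:", best)
```

Autocompleting a program, as Amodei saw, is just this step repeated: each chosen token is appended to the prompt and the process runs again.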

Within OpenAI, GPT’s mysterious programming ability frightened some people – it was reminiscent of dystopian films like “Terminator”. So when employees noticed that, for all its sophistication, GPT sometimes made programming errors, it came almost as a relief. Scott and Murati felt a mix of concern and excitement on learning of GPT’s programming capabilities: they had been looking for practical applications of artificial intelligence that people might actually pay for.

The Birth of Copilot

Five years ago, Microsoft acquired GitHub, for reasons similar to those behind its investment in OpenAI. GitHub had a young, fast-moving culture, free from tradition and orthodoxy. After the acquisition, it became an independent division within Microsoft, with its own CEO and its own decision-making authority. The strategy proved successful: GitHub became beloved by software engineers, and its user base grew to more than 100 million.

So Scott and Murati turned to GitHub’s CEO, Nat Friedman, in search of a Microsoft division that might embrace a tool that auto-completes code – even one that occasionally gets it wrong. After all, code posted on GitHub sometimes contains errors; users had learned to live with imperfection. Friedman said he wanted the tool. He pointed out that GitHub just needed a way to tell people they shouldn’t rely entirely on the autocomplete feature. GitHub employees brainstormed names for the product: Coding Autopilot, Automated Pair Programmer, Programarama Automat. Friedman, an amateur pilot, believed those names wrongly implied the tool could do all the work. The tool was more like a copilot – someone who joins you in the cockpit and makes suggestions, occasionally bad ones. Usually you take the copilot’s advice; sometimes you ignore it. When Scott heard Friedman’s favorite name – GitHub Copilot – he loved it. Scott said, “That name perfectly conveys its strengths and its flaws.”

But when GitHub was ready to launch Copilot in 2021, some executives from other Microsoft departments raised objections, believing that the tool occasionally produced errors that could damage Microsoft’s reputation. “It was a fierce battle,” Friedman told me. “But as the CEO of GitHub, I knew it was a great product, so I released it.” When GitHub Copilot was released, it immediately became a huge success. “Copilot absolutely amazed me,” one user tweeted a few hours after the release. “It’s magic!!!” said another post. Microsoft started charging $10 per month for the application, and in less than a year, GitHub’s annual revenue exceeded $100 million. The department’s independence paid off.

But GitHub Copilot also provoked less positive reactions. On message boards, programmers speculated that the technology could eat into their jobs, or that if someone too lazy or too ignorant to check the autocompleted code deployed it anyway, it could empower cyber-terrorists or sow chaos. Prominent scholars, including some pioneers of artificial intelligence, recalled the late Stephen Hawking’s 2014 warning that “full artificial intelligence could spell the end of the human race.”

It is striking how many catastrophic possibilities GitHub Copilot’s critics conjured. But GitHub and OpenAI executives also noticed that the more people used the tool, the more nuanced their understanding of its capabilities and limits became. “After using it for a while, you develop an intuition for what it’s good at and what it struggles with,” Friedman said. “Your brain learns how to use it correctly.”

Microsoft executives believed they had found a bold and responsible artificial intelligence development strategy. Scott started writing a memo titled “The Era of Artificial Intelligence Copilot” and sent it to Microsoft’s technical leaders in early 2023. In the memo, Scott wrote that it was important for Microsoft to find a strong metaphor to explain this technology to the world: “Copilot does exactly what the name suggests; it is an expert assistant for users trying to accomplish complex tasks… Copilot can help users understand the limits of its capabilities.”

The release of ChatGPT introduced artificial intelligence to much of the public, and it quickly became the fastest-growing consumer application in history. But Scott could see further ahead: machines and humans interacting through natural language; people, including those who knew nothing about programming, simply telling computers what to do. This was the level playing field he had always pursued. As one OpenAI co-founder put it on social media, “The hottest new programming language is English.”

Scott wrote, “In my career, I have never experienced a moment when my field has undergone such a profound change, when the opportunity to reimagine possibilities has been so real and exciting.” The next task was to bring the success of GitHub Copilot – a premium product – to Microsoft’s most popular software. The engine of these Copilots would be a new OpenAI invention, a large language model that OpenAI called GPT-4.

Microsoft attempted to bring artificial intelligence to the masses years ago, but it ended up being a spectacular failure. In 1996, the company released Clippy, the “helper” for its office products. Clippy appeared on the screen as a paperclip with cartoonish big eyes and would randomly pop up asking users if they needed help writing a letter, opening PowerPoint, or completing other tasks. Renowned software designer Alan Cooper later said that Clippy’s design was based on a “tragic misconception” of research that suggested people might interact better with computers that seemed to have emotions. Users certainly had emotions about Clippy: they hated it. The Smithsonian called it “one of the worst software design mistakes in computer history.” In 2007, Microsoft axed Clippy.

Nine years later, Microsoft created Tay, an artificial intelligence chatbot meant to mimic the speech patterns and preoccupations of a teenage girl in conversation with Twitter users. Tay almost immediately began posting racist, sexist, and homophobic content, including statements like “Hitler was right.” In its first 16 hours, Tay sent 96,000 tweets; Microsoft, realizing it had a PR disaster on its hands, shut it down.

By the end of 2022, Microsoft executives felt ready to start building Copilots for Word, Excel, and other products. But Microsoft understood that, just as laws keep evolving, the need for new safeguards would continue to grow even after launch. Sarah Bird, the head of AI engineering, and Scott were often embarrassed by the technology’s missteps. During the pandemic, while testing another OpenAI invention, the image generator DALL-E 2, they found that if asked to create images related to COVID-19, the system often produced pictures of empty store shelves. Some Microsoft employees worried that such images would stoke fears of a pandemic-induced economic collapse and suggested changing the product’s safety measures to suppress them. Others at Microsoft considered these worries silly and not worth software engineers’ time.

Scott and Bird decided to test this scenario in a limited public release rather than settle the internal debate. They launched a version of the image generator and waited to see if users would be disturbed by seeing empty shelves on their screens. They wouldn’t design a solution for a problem that no one was sure existed — just like a wide-eyed paperclip helping you navigate a word processor you already knew how to use. They would only add a mitigation measure if necessary. After monitoring social media and other corners of the internet and collecting direct user feedback, Scott and Bird concluded that these concerns were unfounded. “You have to experiment in public,” Scott said. “You can’t try to figure it all out for yourself and hope that you’re doing everything right. We have to learn how to use these things together, or else none of us will understand.”

In early 2023, Microsoft integrated GPT-4 into a flagship Microsoft-branded product: the search engine Bing. Bing, with AI built in, was warmly received and saw an eightfold increase in downloads. Satya Nadella joked that Microsoft had beaten the “800-pound gorilla,” a dig at Google. (Impressive as the jump was, it meant little for market share: Google still controls over 90% of the search market.)

Bing was only the beginning of Microsoft’s agenda; the company introduced Copilots in other products as well. When Microsoft finally began rolling out the Copilots this spring, the releases were carefully staggered. At first, only large companies could use the technology; as Microsoft learned how those clients used it and developed better safeguards, access widened to more and more users. By November 15th, tens of thousands of people were already using Copilots, and millions more were expected to sign up soon.

Two days later, Nadella heard the news of Altman’s dismissal. Some members of the OpenAI board had come to see Altman as a cunning and unsettling manipulator. Earlier this fall, for example, he had confronted Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology, because she had co-written a paper that seemed to criticize OpenAI for “fueling the hype around artificial intelligence.” Toner defended herself (though she later apologized to the board for not anticipating how the paper would be received). Altman then began contacting other board members individually to discuss replacing her. When those members compared notes on the conversations, some felt that Altman had misrepresented their support for removing Toner. “He’d lie about what people said and pit them against each other,” a source familiar with the board’s discussions said. “This kind of thing has been going on for years.” (A person familiar with Altman’s perspective said he acknowledged being “clumsy” in how he had tried to get a director removed, but insisted he had no intention of manipulating the board.)

Microsoft’s Plans A, B, and C

Altman is considered a savvy corporate fighter. This has served OpenAI well in the past: in 2018, he thwarted Elon Musk’s impulse to take over OpenAI. Altman’s ability to control information and shape perceptions, both openly and in secret, had attracted venture capitalists to invest in his various competing startups. His tactical skill was so feared that when four board members — Toner, D’Angelo, Sutskever, and Tasha McCauley — began discussing his removal, they were determined to catch him off guard. “It was clear that once Sam [Altman] found out, he would do everything possible to undermine the board,” a source familiar with those discussions said.

The disgruntled board members felt that OpenAI’s mission required them to be cautious about the dangers of artificial intelligence, and that under Altman’s leadership they could not fulfill that responsibility. “The mission is multifaceted: to ensure that artificial intelligence benefits all of humanity. But no one can do that if the CEO isn’t accountable,” said another person familiar with the board’s thinking. Altman sees the matter differently. People familiar with his views said he regarded the dispute as a “very normal and healthy boardroom debate,” but that some board members were unfamiliar with business norms and intimidated by their responsibilities. “As you get closer and closer to artificial general intelligence, everybody takes on, like, plus-ten craziness points,” this person said.

It is hard to say whether the board members were more afraid of sentient computers or of Altman acting unchecked. Either way, the board chose to strike preemptively, mistakenly believing that Microsoft would stand with it and support the decision to remove Altman.

Shortly after Nadella learned of Altman’s dismissal and held a video conference with Scott and other executives, Microsoft began executing Plan A: backing Murati as interim CEO to stabilize the situation while trying to understand why the board had acted so impulsively. Nadella approved a statement emphasizing that “Microsoft remains committed to Mira and their team as we bring the next era of artificial intelligence to our customers,” and echoed it on his personal X and LinkedIn accounts. He stayed in frequent contact with Murati to keep up with what she was learning from the board.

The answer was: not much. The night before Altman was fired, the board had informed Murati of its decision and secured her promise to remain silent. The members took her agreement to mean she supported the dismissal, or at least would not fight the board, and they assumed other employees would go along. They were wrong. Internally, Murati and other OpenAI executives voiced their displeasure, and some employees characterized the board’s action as a coup. OpenAI employees put pointed questions to board members, but the board barely responded. Two people familiar with the board’s thinking said the members felt bound to silence by confidentiality. Moreover, as Altman’s ouster became global news, the board members felt overwhelmed and “had limited bandwidth to engage with anyone,” including Microsoft.

The day after Altman was fired, OpenAI’s COO, Brad Lightcap, sent a company-wide memo saying he had learned that “the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or privacy practices.” He went on: “This was a breakdown in communication between Sam and the board.” But whenever anyone asked for examples of Altman not being “consistently candid in his communications,” as the board’s original complaint had put it, the members stayed silent, refusing even to cite Altman’s campaign against Toner.

Within Microsoft, the whole situation seemed absurd. OpenAI is reportedly valued at around 80 billion dollars. “Unless the OpenAI board’s goal was to destroy the entire company,” one Microsoft executive said, “they seemed to make the worst possible choice at every turn.” Even as President Greg Brockman and other OpenAI employees publicly resigned, the board remained silent.

Plan A had clearly failed, so Microsoft’s executives turned to Plan B: Nadella began negotiating with Murati to see whether there was a way to reinstate Altman as CEO. The cricket World Cup was under way at the time, and Nadella’s beloved Indian team was facing Australia in the final. He occasionally posted match updates on X, hoping to lighten the tense mood, though many of his colleagues had no idea what he was talking about.

OpenAI’s employees, meanwhile, were threatening to revolt. With Microsoft’s support, Murati and others at the startup began urging all of the board members to resign. Eventually some agreed to leave, provided they found their replacements acceptable. They even signaled they might be open to Altman’s return, as long as he was not CEO and did not get a board seat. By the Sunday before Thanksgiving, everyone was exhausted. The OpenAI board invited Murati to a private, one-on-one conversation and told her that it had been secretly recruiting a new CEO and had finally found someone willing to take the job.

For Murati, the OpenAI employees, and Microsoft, this was the final straw, and it triggered Plan C. On Sunday night, Nadella formally invited Altman and Brockman to lead a new artificial intelligence research lab inside Microsoft, with all the resources they wanted and as much freedom as possible. Both accepted. Microsoft began preparing offices for the hundreds of OpenAI employees it expected to join the unit.

Murati and her colleagues then wrote an open letter to the OpenAI board: “We are unable to work for or with people that lack competence, judgment, and care for our mission and employees.” The signatories pledged to resign and “join the newly announced Microsoft subsidiary” unless all of the current board members stepped down and Altman and Brockman were reinstated. Within hours, nearly every OpenAI employee had signed.

The threat of Plan C and a mass exodus from OpenAI was enough to soften the board’s stance. Two days before Thanksgiving, OpenAI announced that Altman would return as CEO. All of the board members except D’Angelo would resign, and more prominent figures — including Bret Taylor, a former Facebook executive and Twitter chairman, and Larry Summers, the former Treasury Secretary and Harvard president — would be appointed as directors. OpenAI’s executives agreed to an independent investigation of what had happened, including Altman’s past conduct as CEO.

Although Plan C had initially seemed appealing, Microsoft’s executives have since concluded that the current arrangement is the best possible outcome. Absorbing OpenAI’s employees could have triggered costly, time-consuming lawsuits and possibly government investigations. Under the new framework, Microsoft gained a non-voting observer seat on OpenAI’s board, giving it greater influence without drawing regulatory scrutiny.

Microsoft’s Massive Victory

In fact, the soap-opera ending is seen as a massive victory for Microsoft and a strong endorsement of its approach to developing artificial intelligence. “Altman and Brockman are really smart, and they could have gone anywhere. But they chose Microsoft, and all those OpenAI folks were ready to choose Microsoft, just as they chose us four years ago. That’s a real validation of the system we’ve built. They all knew this was the best place, the safest place, to continue the work they’re doing,” one Microsoft executive said.

Meanwhile, the ousted board members insist their actions were wise. “There will be a comprehensive, independent investigation instead of a board stacked with Sam’s cronies. We finally have new voices who will challenge him,” said someone familiar with the board’s discussions. “Sam is powerful, he’s persuasive, he’s good at getting his way, and now he’s on notice that people are watching him,” said Toner, the former board member. “The board was always focused on fulfilling our obligations to OpenAI’s mission.” (Altman has told others that he welcomes the investigation, in part to help him understand why this drama happened and what he could have done differently to prevent it.)

Some artificial intelligence watchdogs are less than satisfied with the outcome. Margaret Mitchell, chief ethics scientist at the open-source AI platform Hugging Face, believes that “the board was actually doing its job when it fired Altman. His return will have a chilling effect. We’re going to see far fewer people speak up inside companies, because they’ll think they’ll get fired — and the people at the top will be even less accountable.”

As for Altman, he was ready to talk about other things. “I think we just move on to good governance and excellent board members, and we’ll do the independent review, which I’m really excited about,” he told me. “I just hope everyone moves on and is happy. We’ll get back to the mission.”

Nadella and Scott breathed a sigh of relief as everything returned to normal and the large-scale release of Copilots resumed. The Office Copilots are simultaneously impressive and mundane: they make tedious tasks easier, but they are a long way from replacing human workers. They feel remote from the predictions of science fiction, yet they are also something people might use every day.

According to Scott, that effect is intentional. “Real optimism sometimes means taking things slowly,” he said. If he, Murati, and Nadella get their way — and after their recent victory, the odds look better — artificial intelligence will keep permeating our lives steadily, at a pace gradual enough to accommodate the cautions of short-term pessimism, and only as fast as humans can absorb how the technology should be used. Things could still spiral out of control: gradual development might keep us from recognizing the dangers until it is too late. But for now, Scott and Murati believe they can balance progress and safety.

“Artificial intelligence is one of the most powerful things humans have ever invented for improving everyone’s quality of life. But it will take time, and it should take time. We have always solved hard problems through technology. So we can tell ourselves a good story about the future, or we can tell ourselves a bad story about the future, and whichever one we choose may be the one that comes true.”
