OpenAI, which wants to change everything, is being changed.
Just before the holiday, The Wall Street Journal cited insiders saying that OpenAI is in talks with investors about a share sale that could value the company at $80–90 billion.
If the deal goes through, OpenAI will become one of the most highly valued private companies in the world.
GPT, a name already synonymous with the new wave of AI, is expected to bring the company more than $1 billion in revenue soon.
But OpenAI is still far from what it ultimately wants.
All they want to do, they say, is build computers smart and safe enough to end history as we know it and usher humanity into an era of abundance we can scarcely imagine.
So writes veteran tech journalist Steven Levy, who has followed the company since its inception and recently traveled across several countries with Sam Altman on his speaking tour.
In his recent feature, Levy traces OpenAI's development, takes us inside many of its pivotal decisions, and raises a question: is the OpenAI that aspires to change everything with AGI (artificial general intelligence) still the company it was at the beginning?
“Believers” Gather
AGI is something like a faith.
No one knows exactly what it will look like, but those who believe it can be built are convinced it will solve humanity's problems better than humans themselves can.
When it appears, human society will be permanently changed. Sam Altman believes:
AGI will only be successfully built once.
Following what looks like the standard trajectory of a technology prodigy, Altman moved from studying technology to founding a startup to investing, and by 28 he was president of the famed incubator Y Combinator.
In his view, investing in startups is not about getting high returns, but about investing in and promoting innovations that have the potential to disrupt everything.
The year Altman took charge of Y Combinator was also a year of AI breakthroughs: deep learning and neural networks were letting computers "understand" images and translate text far better, and DeepMind, acquired by Google, would two years later upend people's assumptions about AI with AlphaGo.
It was time.
Altman wanted to establish a new kind of organization that could stay ahead of the big companies in the pursuit of AGI and ensure the technology would be used responsibly and safely.
He raised the initial startup capital from Musk, LinkedIn co-founder Reid Hoffman, PayPal co-founder Peter Thiel, and Y Combinator founding partner Jessica Livingston, and began looking for talent for OpenAI.
Altman only wants people who truly believe in AGI:
In 2015, when we were hiring, if an AI researcher said they really took AGI seriously, it could basically ruin their career. But those were exactly the people I wanted.
Greg Brockman, who was the CTO at payment company Stripe at the time, was such a person. In addition, Andrej Karpathy of Google Brain also became a co-founder of OpenAI.
But the person Altman wanted most was the researcher Ilya Sutskever, who would later be among the first to glimpse the dawn of this new era of AI.
Sutskever was a star student of Geoffrey Hinton, often called the "godfather of AI," and in 2015 was one of the core scientists at Google Brain.
Altman’s way of recruiting is quite special.
He first wrote an email inviting Sutskever to have dinner together, accompanied by Musk, Brockman, and others. At the dinner table, no one explicitly invited Sutskever to join OpenAI, but they were discussing “AI and the future of AGI”.
Sutskever took the bait.
He went home and wrote an email to Altman expressing his willingness to join. However, the email accidentally got stuck in the drafts folder. It wasn’t until Altman started reaching out proactively that Sutskever finally joined OpenAI.
In December 2015, OpenAI was officially established.
The organization believed AGI would eventually arrive, and it wanted to make AI accessible to everyone through open source.
Since then, OpenAI's team has grown from a handful of people to more than 500, and "believing that AGI is coming" remains a consensus among its members, at least in the eyes of the company's executives:
Why would someone who doesn’t believe want to work here?
Ivory Tower
The idealistic origin makes OpenAI like an ivory tower.
There is nothing happier than working together with like-minded and talented colleagues towards the same goal, right?
So, how exactly are we supposed to do this?
That was the confusion Altman voiced in early meetings with CTO Brockman and the small founding team.
The reality is that OpenAI was also confused for quite some time.
Over a year after its establishment, Brockman met with Levy and remained tight-lipped about the company’s progress, only saying, “Our goal is to create systems that can do things that humans couldn’t do before.”
From the outside, OpenAI looks more like a laboratory publishing various research papers.
Brockman now admits that at that time, “nothing was working.” The researchers were trying out various algorithms, experimenting with game-solving techniques on one hand, and investing a lot of effort in studying robotics on the other. Altman said:
We know what we want to do. We know why we want to do it. But we have absolutely no idea how to do it.
Sutskever had his own idea:
The general idea was: don't bet against deep learning.
In this relatively pure environment, researchers ran their own experiments and trials.
Alec Radford, a researcher who joined OpenAI in 2016, once described working at OpenAI as “joining a graduate program” – an open and low-pressure space for researching AI.
The office of OpenAI also has this feeling.
In the headquarters in San Francisco, Altman set up a university-style cafeteria, self-service bar, and a library.
This library combines the appearance of Altman’s favorite bookstore in Paris and the Bender Room study room at Stanford.
At that time, the then-unknown Radford was interested in a project that involved studying neural networks and human dialogue. Initially, he tried to train a language model using 2 billion Reddit comments, but the results were not ideal.
However, Radford was only 23 years old at the time, and at OpenAI people had room to fail and keep trying. Brockman recalled:
Our attitude at the time was basically that Alec is great, let him do his own thing.
With OpenAI's computing resources limited, Radford's next project turned to a smaller, more focused dataset: about 100 million Amazon product reviews.
The task Radford set for the language model was simple: predict the next word. Yet the resulting model could even write positive or negative reviews on demand:
"This is completely unexpected," Radford said; he never thought the system would come to grasp positive and negative sentiment, a genuinely complex semantic property.
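To make the objective concrete, here is a toy sketch of what "predict the next word" means as a training signal. This is not Radford's model; it is only a bigram counter over a few invented review fragments, while real language models learn the same objective with neural networks over vastly more data:

```python
# Toy illustration of the next-word-prediction objective (not Radford's model).
# A bigram counter over a few invented review fragments.
from collections import Counter, defaultdict

reviews = [
    "this product is great and works perfectly",
    "this product is terrible and broke quickly",
    "the battery life is great",
    "the battery life is terrible",
]

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for review in reviews:
    words = review.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word after `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("battery"))  # -> 'life'
print(predict_next("is"))       # -> 'great' (ties with 'terrible'; ties break by first occurrence)
```

A model trained only on this objective has no explicit notion of sentiment; whatever it learns about positive and negative reviews has to emerge from the statistics of the text, which is why Radford's result was surprising.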
Sutskever encouraged Radford to continue exploring this experiment and see if it could be applied in different fields.
In 2017, the paper that would change everything appeared – “Attention Is All You Need.”
The real “aha moment” was when Ilya saw the release of the transformer paper. He said, “This is what we’ve been waiting for.”
Brockman recalled. In his view, this process also reflected OpenAI’s working philosophy:
This has always been our strategy – to try our best to solve problems and then believe that either we or someone else in the field will eventually figure out what we are missing.
Radford began to incorporate the Transformer architecture into his experiments: "I made more progress in two weeks than in the past two years." He realized that the key to the new model was scale: give it as much data as possible.
Radford and his collaborators named their new model “generatively pretrained transformer” or “GPT-1”.
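For readers curious about what the Transformer paper actually contributed, the sketch below implements its core operation, scaled dot-product self-attention, in a few lines of NumPy. It is a schematic of the published formula rather than OpenAI's code, and the shapes and random inputs are purely illustrative; GPT-style models additionally apply a causal mask so each position can only attend to earlier tokens, which is omitted here for brevity.

```python
# Scaled dot-product self-attention from "Attention Is All You Need"
# (illustrative sketch only; shapes and inputs are arbitrary).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: learned projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v        # queries, keys, values
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)            # similarity of every token to every other
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ v                         # each output is a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (5, 8)
```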
GPT made its debut, and OpenAI began to undergo changes.
Parting ways
They have a different opinion on the best way to achieve safe AGI.
That was Altman's comment to The Wall Street Journal on research director Dario Amodei, who left the team and went on to found Anthropic in 2021.
In 2021, Rewon Child, a key technical lead on the GPT-2 and GPT-3 projects, also left OpenAI; he is now at Inflection AI, the company founded by DeepMind co-founder Mustafa Suleyman.
Of course, one of the most famous “former partners” of OpenAI is Musk.
Back in 2018, OpenAI had already begun concentrating on large language models, pulling resources back from its earlier, more scattered research.
However, Musk still felt that everything was too inefficient, or that OpenAI needed better leadership. Later, he said in an interview that he believed safety should be given a more important position.
Regardless of the situation, the solution he proposed at the time was straightforward – let him take over OpenAI.
As we all know, Musk also left OpenAI and withdrew the funding.
As a transition, Reid Hoffman, who also sponsored OpenAI in the early days, agreed to temporarily provide financial support for company expenses.
Although Altman is known as the best-connected of Silicon Valley's millennials, it is not easy to raise money for a non-profit that needs enormous sums to buy the hardware for AI training.
In March 2019, OpenAI announced a new company structure.
OpenAI will keep its non-profit parent but also establish a "capped-profit" for-profit company, OpenAI LP.
Once the profits OpenAI generates for investors reach the cap (the specific figures have not been disclosed), all additional profits flow back into the non-profit's research.
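As a rough illustration of how such a cap works (the real cap multiple and terms are not public, so the numbers below are entirely hypothetical):

```python
# Hypothetical illustration of a capped-profit split.
# The actual cap multiple and terms are not disclosed; these figures are invented.
def split_returns(investment, total_return, cap_multiple):
    """Pay the investor up to the cap; anything above it goes to the non-profit."""
    cap = investment * cap_multiple
    to_investor = min(total_return, cap)
    to_nonprofit = max(total_return - cap, 0.0)
    return to_investor, to_nonprofit

# Example: $10M invested, a made-up 100x cap, $2B eventually attributable to that stake.
investor, nonprofit = split_returns(10e6, 2e9, cap_multiple=100)
print(f"investor: ${investor:,.0f}, non-profit: ${nonprofit:,.0f}")
# investor: $1,000,000,000, non-profit: $1,000,000,000
```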
This complex structure has raised many questions. Adam D’Angelo, a member of the OpenAI board of directors, said that he and the board’s mission is to ensure that OpenAI continues on its original path:
We have a legal disclaimer to clarify that as an investor, you may lose all your money. Our existence is not to make money for you. Our goal is to achieve a technological objective, which is the most important thing. By the way, we don’t know what money will be like in the “post AGI” era.
What D'Angelo said at the end sounds like a joke, but it is an actual clause: if the company succeeds in creating AGI, all financial arrangements will have to be renegotiated.
Yes, OpenAI is indeed considering the world after AGI is born.
In 2019, OpenAI completed three rounds of financing and raised over $1 billion, but that is a pittance compared with the cost of training large models.
Each iteration of GPT puts higher demands on computing power. Altman said:
It seems that we didn’t know how big of a ship we needed at that time.
What happened next, everyone knows.
Microsoft and OpenAI reached an exclusive partnership.
Microsoft initially invested $1 billion, mainly by providing Azure cloud computing services; then further deepened the cooperation and has now invested $13 billion.
Being at the forefront can be a very expensive thing.
Microsoft CTO Kevin Scott said.
This investment is clearly worth it.
Boosted in part by its OpenAI partnership, Microsoft's market value rose from $1.79 trillion in 2022 to $2.48 trillion in August 2023, an all-time high.
Bing, supported by GPT, has made Microsoft even more confident. Microsoft CEO Satya Nadella, seeing Google catching up, said in an interview:
“I want people to know we made them dance.”
Is it still OpenAI?
No matter how often OpenAI insists it has not started work on GPT-5, rumors and leaks keep circling it.
Perhaps that is because we are all hoping for a repeat of the stunning moment when GPT-4 arrived, and wondering how much further the technology can go.
In reality, we may not even need GPT-5: OpenAI keeps generating excitement without it.
GPT-4 has been built into Windows, ChatGPT can browse the web again, ChatGPT can now see images, ChatGPT can hear and speak…
Every update is sensational.
These activities have also raised questions about OpenAI’s “original intentions”. A senior executive in the AI field commented:
“If you really think about it, (OpenAI) is actually doing five businesses. First, there is the product itself, then the collaboration with Microsoft, the developer ecosystem, and there is also an app store. Oh, right, there is obviously also an AGI research task. Of course, they are also running an investment fund.”
He was referring to the $175 million fund OpenAI runs to back startups building on its technology.
All of these have different cultures, and in fact they conflict with the research mission.
In his interviews, Levy repeatedly asked executives how deeply OpenAI's culture would change now that it has become a "product company."
All these executives insist unequivocally that even if the company’s structure changes and even if it continues to face competition from rivals such as Google and Meta, AGI will always be the core of the company.
However, Levy also noticed the changes that have taken place in OpenAI.
Remember how, at the beginning, we noted that everyone at OpenAI was an avowed "AGI believer"?
But as OpenAI has developed to its current stage, the company has added a large number of support teams, such as lawyers, marketers, policy experts, UI designers, product managers, and countless faceless content reviewers.
Levy once interviewed Tom Rubin, a copyright lawyer who joined OpenAI in March of this year, about the copyright infringement cases that OpenAI faces.
Rubin is optimistic about the legal challenges that OpenAI faces, but when asked if he believes that AGI will eventually be developed, he became somewhat hesitant.
After a moment of pause, he said:
I can’t answer this question.
Later, he clarified that as an intellectual property lawyer, promoting the development of artificial intelligence is not his job, but he also “looks forward to its arrival.”
Regarding the twists and turns they are experiencing, Altman said:
What I want to emphasize is that we don't have a clear master plan. It's more like shining a flashlight around each corner as we reach it. We are willing to walk through this maze until we reach the end.