One of the nice things about OpenAI is that it was built on distrust. It began as a nonprofit research lab because its founders didn’t think artificial intelligence should be pioneered by commercial firms, which are driven overwhelmingly by the profit motive.
As it evolved, OpenAI turned into what you might call a fruitful contradiction: a for-profit company overseen by a nonprofit board with a corporate culture somewhere in between.
Many of the people at the company seem simultaneously motivated by the scientist’s desire to discover, the capitalist’s desire to ship product and the do-gooder’s desire to do this all safely.
The events of the past week — Sam Altman’s firing, all the drama, his rehiring — revolve around one central question: Is this fruitful contradiction sustainable?
Can one organization, or one person, maintain the brain of a scientist, the drive of a capitalist and the cautious heart of a regulatory agency? Or, as Charlie Warzel wrote in The Atlantic, will the money always win out?
AI’s research lineage
It’s important to remember that AI is quite different from other parts of the tech world. It is (or at least was) more academic. AI is a field with a research lineage stretching back decades. Even today, many of the giants of the field are primarily researchers, not entrepreneurs — people like Yann LeCun and Geoffrey Hinton, who won the Turing Award (the Nobel Prize of computing) together in 2018 and now disagree about where AI is taking us.
It’s only in the last several years that academic researchers have been leaving the university aeries and flocking to industry. Researchers at places like Alphabet, the parent company of Google; Microsoft; OpenAI; and Meta, which owns Facebook, still communicate with one another by publishing research papers, the way professors do.
But the field also has the intensity and the audacity of the hottest of all startup sectors. While talking with AI researchers over the past year or so, I have often felt I was on one of those airport moving walkways going 3 mph, and they were on walkways going 4,000 mph. The researchers kept telling me that this phase of AI’s history is so exhilarating precisely because nobody can predict what will happen next. “The point of being an AI researcher is you should understand what’s going on. We’re constantly being surprised,” Stanford doctoral candidate Rishi Bommasani said.
The people in AI seem to be experiencing radically different brain states all at once. I’ve found it incredibly hard to write about AI because it is literally unknowable whether this technology is leading us to heaven or hell, and so my attitude about it shifts with my mood.
Podcaster and Massachusetts Institute of Technology scientist Lex Fridman, who has emerged as the father confessor of the tech world, expressed the rapid-fire range of emotions I encountered again and again: “You sit back, both proud, like a parent, but almost like proud and scared that this thing will be much smarter than me. Like both pride and sadness, almost like a melancholy feeling, but ultimately joy.”
Eye of a hurricane
When I visited the OpenAI headquarters in May, I found the culture quite impressive. Many of the people I interviewed had arrived when OpenAI was a nonprofit research lab, before the ChatGPT hullabaloo — when most of us had never heard of the company. “My parents didn’t really know what OpenAI did,” said Joanne Jang, a product manager, “and they were like, ‘You’re leaving Google?’” Mark Chen, a researcher who was involved in creating the visual tool DALL-E 2, had a similar experience. “Before ChatGPT, my mom would call me, like, every week, and she’d be like, ‘Hey, you know you can stop like bumming around and go work at Google or something.’” These people are not primarily driven by the money.
Even after ChatGPT made headlines, being at OpenAI was like being in the eye of a hurricane. “It just feels a lot calmer than the rest of the world,” Jang said. “From, like, the early days, it did feel more like a research lab, because mainly we were only hiring for researchers,” said Elena Chatziathanasiadou, a recruiter. “And then, as we grew, it started becoming apparent to everyone that progress would come from both engineering and research.”
I didn’t meet any tech bros there, or even people who had the kind of “we are changing the world” bravado I would probably have if I were pioneering this technology. Diane Yoon, whose job title is vice president of people, said, “The word I would use for this workforce is earnest … earnestness.”
Usually, when I visit a tech company as a journalist, I get to meet very few executives, and those I do interview are remorselessly on message. OpenAI just put out a sign-up sheet and had people come talk to me.
Torn over safety
As impressive as they all were, I remember telling myself, “This isn’t going to last.” There was too much money floating around. These people may be earnest researchers, but whether they know it or not, they are still in a race to put out products, generate revenue and be first.
It was also clear that people were torn over safety. On the one hand, safety was on everybody’s mind. For example, I asked Mark Chen about his emotions the day DALL-E 2 was released. “A lot of it was just this feeling of apprehension. Like, did we get the safety?” On the other hand, everybody I spoke to was dedicated to OpenAI’s core mission — to create artificial general intelligence, a technology capable of matching or surpassing human intelligence across a broad range of tasks.
AI is a field in which brilliant people paint wildly diverging yet persuasive portraits of where this is all going. Venture capital investor Marc Andreessen insists it will change the world vastly for the better. Cognitive scientist Gary Marcus depicts an equally persuasive scenario of how it all could go wrong.
Nobody really knows who is right, but the researchers just keep plowing ahead. Their behavior reminds me of something Alan Turing wrote in 1950: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”
I had hoped that OpenAI could navigate the tensions, though even then, there were worries. As Brad Lightcap, OpenAI’s chief operating officer, said, “The big thing is, is really just maintaining the culture and the mission orientation as we grow. The thing that actually keeps me up, if you’re asking honestly, is, how do you maintain that focus at scale?”
Those words were prescient. Organizational culture is not easily built but is easy to destroy. The literal safety of the world is wrapped up in the question: Will a newly unleashed Altman preserve the fruitful contradiction, or will he succumb to the pressures of go-go-go?
David Brooks is a New York Times columnist.