A strange question hangs over today's frontier AI labs: are they actually trying to make money, or is something else at play? The question has taken on new urgency as we watch a unique moment in the history of AI companies, particularly those building their own foundation models.
The scene features a diverse cast: industry veterans, once household names at major tech companies, now striking out on their own; legendary researchers, brimming with experience but with ambiguous commercial goals. It's a scenario ripe for controversy.
Some of these new labs could well become the next OpenAI. Yet there's a real possibility that some will choose a different path, one focused on research without the pressure of commercialization.
This split raises an awkward question: how do we gauge the true intentions of these companies? Is it even possible to tell who's genuinely trying to make money?
To simplify this complex issue, let's introduce a sliding scale, a five-level framework to assess the ambitions of companies building foundation models. Here's how it breaks down:
- Level 5: The big players already earning millions of dollars a day, like OpenAI, Anthropic, and Google DeepMind.
- Level 4: Companies with a clear, multi-stage plan to capture a piece of the AI market and become very rich doing it.
- Level 3: This level is occupied by those with promising product ideas, but the specifics are still under wraps.
- Level 2: At this stage, there's a concept, a plan in its infancy.
- Level 1: An intriguing perspective - true wealth is found in self-love and not in monetary gains.
The big names are firmly at Level 5, but the new generation of labs, with their ambitious dreams, adds an intriguing layer of complexity.
The beauty of this scale is its flexibility. Founders can, in effect, choose their own level, and in the current AI boom, no one is going to interrogate their business plan. Even if a lab is primarily a research project, investors are happy to be a part of it.
Now, let's apply this scale to some of the biggest AI labs of our time.
Humans&: This week's AI sensation, Humans& has an intriguing pitch for the next generation of AI models, centered on communication and coordination tools. Despite the media hype, however, the company has been vague about how this translates into tangible, monetizable products. The team seems keen on building products but hesitant to commit to specifics. Their most concrete statement? They're building an AI workplace tool to replace Slack, Jira, and Google Docs, but with a fundamental redefinition of how those tools operate. A confusing proposition, but specific enough to place them at Level 3.
Thinking Machines Lab (TML): A tough one to rate. With OpenAI's former CTO, who oversaw the ChatGPT project, at the helm, and a $2 billion seed round, one would assume a clear roadmap. But recent events have cast doubt. The departure of CTO and co-founder Barret Zoph, along with at least five other employees, has raised concerns about the lab's direction; nearly half of the executives on TML's founding team have left within a year. It's as if they realized their Level 4 aspirations were really more like Level 2 or 3. There isn't enough evidence for a downgrade yet, but it's a close call.
World Labs: Led by Fei-Fei Li, one of AI research's most respected figures, World Labs has an impressive pedigree. Li created the ImageNet dataset and challenge, which kickstarted the modern deep learning era. When Li announced $230 million in funding for World Labs, a spatial AI company, in 2024, one might have assumed a Level 2 or lower operation. But a lot can change in the fast-paced world of AI. Since then, World Labs has released a full world-generating model and a commercial product, and demand for world-modeling is growing in the video game and special effects industries, where no major lab currently competes. This trajectory suggests a Level 4 company, perhaps soon to graduate to Level 5.
Safe Superintelligence (SSI): Founded by former OpenAI chief scientist Ilya Sutskever, SSI looks like a classic Level 1 startup. Sutskever has gone to great lengths to insulate SSI from commercial pressures, even turning down an acquisition offer from Meta. There are no product cycles, and aside from the foundation model itself, no products seem imminent. Yet Sutskever's recent appearance on the Dwarkesh podcast hinted at two potential pivots: if research timelines stretch out, or if having the best AI proves necessary to shape how it lands in the world. In other words, SSI could jump levels quickly if research takes an unexpected turn.
This scale offers one way to read the intentions and ambitions of AI companies, and it highlights how varied the motivations in this industry really are. Where do you think these labs stand? And what does that say about the future of AI and its relationship with commercialization? Share your thoughts in the comments!