GPT-4.5 Review

Yesterday, OpenAI unveiled GPT-4.5, the most expensive AI model ever produced. But instead of revolutionizing the industry, it landed with a thud. Despite massive investment and sky-high expectations, the model doesn’t crush any benchmarks, win any awards, or introduce any groundbreaking capabilities.
Its main selling point? “Vibes.” Supposedly, GPT-4.5 chats in a more natural, human-like manner. And while that might sound good, it’s an entirely subjective measure. The AI singularity that some feared—or hoped for—now looks more like a sigmoid of sorrow than an exponential explosion.
A Lackluster Release
Sam Altman, OpenAI’s CEO, didn’t even bother to attend the product launch. Instead, a group of interns presented the model, signaling a stark contrast to the dramatic AI safety warnings issued in 2023, when tech leaders—including Altman—signed petitions urging a pause in AI development.
The reality? GPT-4.5 isn’t the threat people once imagined. It’s not self-aware, doesn’t even know what GPT-4.5 is, and still makes silly mistakes. Its training data cuts off in October 2023, which is hardly an improvement over its predecessor. Sure, it correctly counted the “R’s” in strawberry, but it failed to count the “L’s” in Lollapalooza. Hardly the superintelligence we were promised.
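For the record, the ground truth is trivial to check; a plain case-insensitive string count gives the answers the model fumbled:

```python
# Ground-truth letter counts for the two examples above.
for word, letter in [("strawberry", "r"), ("Lollapalooza", "l")]:
    print(word, letter, word.lower().count(letter))
# strawberry r 3
# Lollapalooza l 4
```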
The Cost Problem
If you thought Claude was expensive at $15 per million tokens, brace yourself: GPT-4.5 is five times pricier. The breakdown, with some quick cost math below:
- $75 per million input tokens
- $150 per million output tokens
- Only available to $200/month Pro users
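To get a feel for what those rates mean per request, here is a rough back-of-the-envelope sketch. The only real numbers are the per-million-token prices from the list above; the request sizes and daily volume are made-up illustrations.

```python
# Back-of-the-envelope cost for one GPT-4.5 API call at list prices.
# Prices come from the breakdown above; the token counts are hypothetical.

INPUT_RATE = 75 / 1_000_000     # dollars per input token
OUTPUT_RATE = 150 / 1_000_000   # dollars per output token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of a single request at GPT-4.5 list prices."""
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt that produces a 1,000-token reply
print(f"${request_cost(10_000, 1_000):.2f}")  # $0.90 per request
# At 1,000 such requests per day, that's roughly $900/day (~$27k/month).
```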
These numbers are absurd. And for what? A marginally better chatbot experience? This raises a fundamental question: Is AI actually improving, or are we just paying more for diminishing returns?
For reference, OpenAI claims it introduced a new “Vibes Benchmark” to measure creative thinking. But can vibes justify these costs? More details on pricing here.
Performance vs. Expectations
When it comes to hard metrics, GPT-4.5 isn’t just underwhelming; it’s losing ground. It still hallucinates, still struggles with complex reasoning, and scores worse than DeepSeek on the Aider Polyglot coding benchmark. Worse yet, it’s hundreds of times more expensive than some competitors.
Even in AI-driven programming, OpenAI’s model underperforms compared to more specialized alternatives. That’s a serious problem for an industry that expected each new iteration to be exponentially better. Check out a comparative analysis here.
The Grok Factor & OpenAI’s Future
If you dislike Elon Musk, you might need a deep inhale of Copium, because xAI’s Grok is currently the best AI model in the world. That’s not just an opinion; it’s the consensus of the betting markets.
Of course, OpenAI is still the favorite to hold the top spot by the end of 2025, but its odds are slipping. That’s a serious issue, given the company’s need to secure billions in funding as it transitions into a for-profit entity.
Altman insists that there’s “no wall” to AI scaling. He envisions infinitely larger models fueled by trillions of dollars from SoftBank and Saudi investors. But what if scale alone isn’t enough? What if GPT-5 is just a glorified router that picks the best model for a given prompt, rather than a true leap forward?
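For what it’s worth, a “router” in that sense isn’t technically exotic. Here’s a minimal sketch of the concept; the model names and the keyword heuristic are entirely hypothetical, not anything OpenAI has described:

```python
# Hypothetical sketch of a "model router": classify the prompt, then
# forward it to whichever existing model seems best suited.
# Model names and routing rules are illustrative only.

def route(prompt: str) -> str:
    """Pick a backend model for a given prompt (toy heuristic)."""
    text = prompt.lower()
    if any(kw in text for kw in ("prove", "step by step", "debug", "algorithm")):
        return "reasoning-model"      # slower, deliberate reasoning model
    if len(prompt) > 20_000:
        return "long-context-model"   # large context window, higher cost
    return "chat-model"               # cheap default for casual conversation

def answer(prompt: str) -> str:
    model = route(prompt)
    # A real system would call the chosen model's API here; this just reports the choice.
    return f"[{model}] would handle: {prompt[:40]}..."

print(answer("Debug this function and explain the fix step by step."))
```

The point is that routing between existing models is an engineering optimization, not a capability leap.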
If that’s the case, then the AI plateau is real. And for programmers, that’s actually good news. AI coding tools remain valuable, but only for those who already know what they’re doing. Human expertise is still essential, and that’s unlikely to change anytime soon.
Conclusion
We expected to be battling robots and roasting rats over trash-can fires by now. Instead, we’re stuck in a dystopia where artificial superintelligence never comes, and nothing ever happens.
So, where does AI go from here? That’s the real question. The hype train is still moving—but for how much longer?