There would need to be another paradigm shift if they wanna keep inflating AI usage.
We went from simple chatbots to thinking models which massively exploded token utilization.
We then went from simple thinking models to tool calls and agents. Agents, and particularly long-horizon agents, burn truly insane numbers of tokens, blowing thinking models well out of the water.
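The escalation in token burn across those paradigms can be made concrete with some back-of-envelope arithmetic. Every number below is an illustrative assumption, not a measurement from any real model:

```python
# Rough comparison of tokens consumed per user request across the three
# paradigms described above. All figures are illustrative assumptions.

chatbot_tokens = 500 + 500            # one prompt + one reply

thinking_tokens = 500 + 8_000 + 500   # prompt + hidden reasoning + reply

# A long-horizon agent loops: each step re-sends growing context
# (conversation history, file contents) plus fresh tool output.
steps = 40
context_per_step = 20_000
output_per_step = 2_000
agent_tokens = sum(context_per_step + output_per_step for _ in range(steps))

print(f"chatbot:  {chatbot_tokens:>9,}")
print(f"thinking: {thinking_tokens:>9,}")
print(f"agent:    {agent_tokens:>9,}")
print(f"agent / chatbot: {agent_tokens / chatbot_tokens:.0f}x")
```

Even under these made-up numbers, a single agentic task lands orders of magnitude above a chatbot exchange, which is the point: the multiplier comes from re-sending context every step, not from longer replies.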
People are trying agentic swarms as the next step, but I don't think those make sense right now: they're just too expensive and not that useful.
Plus the models just aren't good at it yet. It's like early agents when they first started making tool calls.
Agents are really quite bad at using subagents. They don't internalize how to deploy them, and they don't use them in the ways that make sense (produce planning documents, create verifiable artifacts, break tasks down in ways that minimize risk, recognize model limitations in instruction following, iterate on results, etc.).
So there needs to be a new paradigm shift every few months? Because I remember people hailing AI reaching a new level of capability less than half a year ago, saying it'd still be well worth it even at ten times the price. And that has already lost momentum? If so, then AI companies are hugely overvalued. These contrasts are just wild to me.
Your last paragraph is also striking in that it exemplifies how far away from general intelligence they still are.
Most of everything tends to suck. Most projects go nowhere, most companies fail, most scientific papers are garbage.
> how far away from general intelligence they still are
Economically, the real question is to what extent these systems can replace or augment human labour. And I think right now the extent is pretty shocking, even if not currently very well integrated.
Scientifically the fact they are bad at using subagents is sort of expected. How to use agents effectively is still a bit of an open question. A human from mid 2025 would be bad at it. Why should a model trained on data from 2025 be good at it?
If these things are to be generally intelligent, they need feedback and retraining, which presumably the labs will do once these sorts of questions start having good answers and we can create good benchmarks and measures for meta-orchestration.
Claude uses up its 6-hour (or whatever) quota in a couple of coding prompts. I bought extra credits costing the same as a monthly subscription, and they were used up in 3 hours.
Kimi gives me about double what Claude does per window but uses up its entire weekly quota in the same time, for the same price as Claude. And I get worse results.
Gemini worked OK for a day or two and now is running one tool every 30 minutes and getting nothing done; apparently they've been in constant outage status for nearly a month: https://aistudio.google.com/status
I haven't tried ChatGPT because of ethical issues but well, I'm not sure that makes any sense.
Four prompts a day isn't something where I go, wow, this has revolutionized my programming. I might well be getting more done if I weren't fighting constant CLI bugs, with work left half-finished for anywhere from 3 hours to 5 days while my quota is used up.
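To make the economics of that complaint explicit, here is a rough cost-per-prompt calculation. The plan price and prompt counts are assumptions chosen to match the "four prompts a day" figure above, not real pricing data:

```python
# Illustrative cost-per-useful-prompt arithmetic; all figures assumed.
monthly_price = 20.00      # entry-tier subscription price (assumption)
windows_per_day = 2        # quota windows actually usable per day
prompts_per_window = 2     # heavy coding prompts before quota exhausts

prompts_per_month = windows_per_day * prompts_per_window * 30
cost_per_prompt = monthly_price / prompts_per_month
print(f"{prompts_per_month} prompts/month -> ${cost_per_prompt:.3f} each")
```

Under these assumptions the per-prompt cost looks cheap, which suggests the real bottleneck isn't price but throughput: the quota, not the dollars, caps how much work gets done.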
OpenAI was already projected to burn money at a ludicrous rate and now the news is they are going to burn it even faster. I just don’t see how this ends well for anyone. Regardless of what you think of AI it is apparently entirely too expensive to run. Maybe costs will come down dramatically, but is that at the cost of stagnation in the models? And if that is the case is AI a failure to investors?
At some point in the next few years investors are going to want their returns. The only way I see that happening is through an IPO, and then… I don't know if they have a sustainable business model or one in sight.
It certainly won’t help. And I’m not just saying the headline “US bails out OpenAI for $850B” will cost them support. The programs they cut to fund said purchase will cost them support. And that’s assuming they can even pull this off by November, because if the Democrats take back the House and Senate, it’s game over. They won’t have time to fund the purchase with corporate tax breaks. At any time the Dems may be able to force a shutdown, just like they did over DHS funding.
GM was bailed out because of the potential loss of US manufacturing jobs and the fact that auto factories can be converted to build other things in times of war. There was also no readily available buyer for GM as Ford was also hurting and there wasn't anyone else in a position to buy such a large company who was interested, though perhaps someone would have emerged over time.
OpenAI is a young company, and if it collapses, that's an indictment that AI is perhaps not really all that valuable. Further, the technology and brand can be sold to a US company with plenty of AI expertise of its own. The loss of jobs, relative to the larger economy, is minuscule. Google, Microsoft, Amazon, or Apple would all be happy to buy at a discount.
I disagree that they would be bailed out. The contagion and impact on the larger economy would be somewhat limited. They would more than likely just be sold to the highest bidder. The US would certainly dictate that the buyer is a US company, but I don’t see a bail out.
Anthropic is capturing exploding enterprise demand via their agentic tools, OpenAI is failing (relatively) to do so. They’re stuck trying to squeeze more $$ out of consumer chatbots that have reached the second knee of the S-curve.
Umm, what's your point? We aren't spending $1.4T on other shitty things that are tipped to fail.
Which doesn't mean an end to the AI race, since China is unlikely to care whether US companies secure financing.
Also, if this happens, OpenAI will probably be bailed out with taxpayer money.
As to why: it's corruption. They would be bailed out as an act of corruption, using the public machine for private gains.