Not the first to notice this I'm sure, but it feels like there's an insane amount of pressure pushing capital towards anything with a hint of AI legitimacy. It's as if asset owners across the planet have come to a consensus that the only industry that will matter going forward is this one (fair enough I guess), but this intense systemic pressure squeezes insane amounts of money toward literally any AI-shaped outlet that opens up. It's just starting to feel like "scared and desperate" money more than "smart money".
Is it not a case of many funders not wanting to risk missing out on the next big thing? And losing a few billion now is better than losing many billions, and control of the future, down the line?
Of course the motivation makes sense on the surface. What I'm getting at is that the supply of capital vs the supply of potential "control of the future" plays feels incredibly imbalanced. Money seems so desperate to move into AI that it's lost all prudence (the particular people and company mentioned in the OP notwithstanding; maybe they do deserve $1B).
"not wanting to risk missing out" is essentially just FOMO right? "Smart" money has feels more like FOMO money these days. We literally have shoe companies savying they're going to pivot to AI and having their market cap increase in multiples as reward.
I don't think Silicon Valley has been smart money for a decade plus. Quantum computing is becoming the exact same with academic and government funding, with a lot of cash being spent on long shots or no-hopers.
AlphaZero worked because chess and Go have terminal rewards and positions you can prove are right or wrong. General intelligence has neither, and the leap from self-play in a well-defined game to self-play in arbitrary environments is the hard part Silver isn't really demoing. Sara Hooker's work on scaling laws lines up here (1).
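To make the terminal-reward point concrete, here's a toy self-play loop in Python (my own sketch on one-pile Nim, nothing to do with Silver's actual setup). Every line of the credit assignment leans on the terminal win/loss being provably defined; drop that and there's nothing to propagate back:

    import random
    from collections import defaultdict

    ACTIONS = (1, 2, 3)                # stones a player may remove per turn
    Q = defaultdict(float)             # Q[(stones_left, action)] value table
    ALPHA, EPS, EPISODES = 0.5, 0.2, 20000

    def legal(stones):
        return [a for a in ACTIONS if a <= stones]

    def pick(stones):
        acts = legal(stones)
        if random.random() < EPS:                       # explore
            return random.choice(acts)
        return max(acts, key=lambda a: Q[(stones, a)])  # exploit

    for _ in range(EPISODES):
        stones, history = random.randint(1, 20), []
        while stones > 0:
            a = pick(stones)
            history.append((stones, a))
            stones -= a
        # Terminal reward: whoever took the last stone won. Propagate
        # +1/-1 back through the alternating moves -- this provable
        # signal is exactly what an open-ended environment doesn't give.
        r = 1.0
        for s, a in reversed(history):
            Q[(s, a)] += ALPHA * (r - Q[(s, a)])
            r = -r

    # Greedy play typically recovers the known theory: from n stones,
    # take n % 4 so the opponent faces a multiple of 4.
    print(max(legal(7), key=lambda a: Q[(7, a)]))       # expect 3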
Geoffrey Hinton said that the breakthrough the AlphaGo team had was getting it to play against itself and improve that way, since it could then go beyond the human training data it had learned on. He said that an equivalent form of self-training on general information would let a superintelligence take off (this is from my memory, not an exact quote).
The TechCrunch article doesn't specify how, or on what kind of data, a recursive general AI could achieve such a thing. If it's possible, that's exciting. It seems like a real philosophical question to answer: how could a general AI self-train?
Why wouldn't it be exciting? It would learn through logic and reason as opposed to our faulty human artifacts, and it wouldn't be limited to what we currently know. A good test would be whether it could rediscover mathematics or relativity.
Yesterday I watched a video saying that evolutionary algorithms are becoming relevant in machine learning again.
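For anyone who hasn't run into them: the core loop is tiny. A bare-bones Python sketch (my own toy on a made-up objective, not from any particular video): mutate a population of parameter vectors, keep the fittest, repeat:

    import random

    def fitness(params):
        # Toy objective: maximize -sum(x^2); optimum is the zero vector.
        return -sum(x * x for x in params)

    POP, DIM, GENS, SIGMA = 50, 10, 200, 0.1
    population = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(POP)]

    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 5]          # truncation selection
        # Refill the population with mutated copies of the survivors.
        population = parents + [
            [x + random.gauss(0, SIGMA) for x in random.choice(parents)]
            for _ in range(POP - len(parents))
        ]

    print(fitness(max(population, key=fitness)))  # climbs toward 0

No gradients anywhere, which is why the technique keeps resurfacing for non-differentiable or black-box problems.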
But if you think about our brain: when you learn something new, you play with it, recall it, and challenge the new information. Perhaps we can build something similar: a model adjusting itself until it's perfect.
Silver has a couple of recent papers that probably give an idea of what they are up to:
>...Here we show that it is possible for machines to discover a state-of-the-art RL rule that outperforms manually designed rules. This was achieved by meta-learning from the cumulative experiences of a population of agents across a large number of complex environments... https://www.nature.com/articles/s41586-025-09761-x
Housing, healthcare, and food production all spring to mind as industries that matter waaaay more than AI! (≧ᗜ≦)
"not wanting to risk missing out" is essentially just FOMO right? "Smart" money has feels more like FOMO money these days. We literally have shoe companies savying they're going to pivot to AI and having their market cap increase in multiples as reward.
(1) https://philippdubach.com/posts/the-most-expensive-assumptio...
So "pre-money" in this case is their valuation before the new investment is even added in.
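For concreteness, with made-up numbers: post-money = pre-money + new money. So a $1B pre-money valuation plus a $200M raise gives a $1.2B post-money, and the new investors own 200/1200 ≈ 17%.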
Would it be exciting though? I mean it would certainly excite some things, but I don't know that it would be something to rejoice over.
Their website doesn’t even have a hint of what the approach is.
>...Here we show that it is possible for machines to discover a state-of-the-art RL rule that outperforms manually designed rules. This was achieved by meta-learning from the cumulative experiences of a population of agents across a large number of complex environments... https://www.nature.com/articles/s41586-025-09761-x
and
>A new generation of agents will acquire superhuman capabilities by learning predominantly from experience. This note explores the key characteristics that will define this upcoming era. https://storage.googleapis.com/deepmind-media/Era-of-Experie...
Sorta sums up the whole industry.