I would very much like not to have to download 22 GB for some inference capability that is way worse than API calls both in terms of quality and speed.
I would rather pay money than watch this thing run in my browser, printing only 5 tps on high-end consumer hardware.
Fair, but actually you'd surely want your choice of those three, right?
And what's being discussed here is what the better implementation of option 3 is.
My point is that if you're going with one of the possible implementations of option 3, then 22GB per browser is objectively a lot better than 22GB per website.
More like "you need to sign up for our website and pay for a subscription", and I'd much rather do that if it's actually providing value. I am absolutely not going to run model locally which slowly churns out words at 5 tps while making the computer hot to touch.
It works; I've shipped this as a "local inference"/poor person's ollama for low-end LLM tasks like search. The main win is that it's free and privacy-preserving, and (mostly) transparent to users in that they don't have to do anything, which is great for giving non-technical users local inference without making them do scary native things.
But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back. That's unfixable until operating systems start reliably shipping their own prebaked models that an API like this could plug into.
> It works, I've shipped this as a "local inference"/poor person's ollama for low-end llm tasks like search
fantastic!
> the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back
sure, but does this mean the model is lazily downloaded? that is, if I used this and mine were the first call to the model, would the user be left waiting until the model finished downloading at that point?
that sounds like a horrible user experience - maybe Chrome reduces the confusion by showing a download status dialog or similar?
> `> Storage: At least 22 GB of free space on the volume that contains your Chrome profile.`
Yes, I can read and comprehend English and you should assume I read the page. Because of the "At least" wording, I was curious what a person who has actually used the feature has noticed, aka, learning from people who have actually done it already.
The model download is lazy and cached, so it's a one-time cost presumably across all origins (I assume so since the alternative would be a trivial DoS waiting to happen).
So it's once per browser, not once per site.
You can track the download state yourself and pop whatever UI you want.
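For anyone wanting to wire that up, here is a minimal sketch, assuming the current API surface as documented at the time of writing (a global LanguageModel object with availability() and a downloadprogress monitor); updateProgressBar is a hypothetical helper for whatever UI you choose:

    // Sketch: check whether the shared model is already on disk, and surface
    // download progress to the user before the first prompt.
    const status = await LanguageModel.availability();
    // expected values: "unavailable" | "downloadable" | "downloading" | "available"

    if (status !== "unavailable") {
      const session = await LanguageModel.create({
        monitor(m) {
          m.addEventListener("downloadprogress", (e) => {
            // e.loaded is reported as a fraction of the download (0..1)
            updateProgressBar(e.loaded); // hypothetical UI hook
          });
        },
      });
      console.log(await session.prompt("Reply with the single word: ready"));
    }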
chrome://on-device-internals reports "Model Name: v3Nano Version: 2025.06.30.1229 Folder size: 4,072.13 MiB" on a random Windows machine I just checked.
Thank you, stranger! I would have assumed the size would vary based on whether your hardware supports the high-quality GPU backend (4 GB) or defaults to a smaller CPU-compatible version (3 GB), but the 22GB note on that page is really confusing. Even if it were including the model server, where is the remaining 18GB going?
I'd imagine that the 22GB was decided by modelling various scenarios. For a start, it's not just a single 4GB model: it's 2x4GB, so the model can be updated without a window in which the computer has no model at all. That's up to 8GB.
Then it's possible the model you get will scale with the CPU/GPU/RAM available, so if you have a 12GB GPU you probably get a better model; perhaps that's a 10-11GB model, and at 2x that's 22GB.
Then consider that a machine is not static, GPUs/hardware come and go, VRAM allocation in integrated graphics changes, etc. You end up with just needing to pick a number and not confuse users.
This is part of it, and also we just didn't want to use up the last of the user's disk space! It's disrespectful to use up 3 GB if the user only has 4 GB left; it's sketchy if the user only has 10 GB. At 22 GB, we felt there was more room to breathe.
One could argue that users should have more agency and transparency into these decisions, and for power users I agree... some kind of neato model management UI in chrome://settings would have been cool. But 99% of users would never see that, so I don't think it ever got built.
Would it be less dystopian for Operating Systems to ship with their own browser that ships with their own models? Or do you find the current situation where Operating Systems ship with browsers dystopian?
I find the very idea of this AI thing making its way, like a virus, onto our computer systems (either in centralised form, as it mostly happens now, or installed locally, as this article describes) quite dystopian. On the other hand, I do not find the idea of the Internet browser dystopian (even though a data and corporate behemoth like Alphabet being the entity behind Chrome is indeed dystopian, I agree on that).
On second thought, you're on to something: maybe if we hadn't let the Internet browser take over our computer-lives as much as it did in the last 20 or so years, then Chrome (in its current manifestation, that is) wouldn't have happened. At least we are now aware of the dangers awaiting us when it comes to AI.
> But keep in mind the actual experience for users is not great; the model download is orders of magnitude greater than downloading the browser itself, and something that needs to happen before you get your first token back.
With MoE models, you could fetch expert layers from the network on demand by issuing HTTP range queries for the corresponding offset, similar to how bittorrent downloads file chunks from multiple hosts. You'd still have to download shared layers, but time to first token would now be proportional to active-size rather than total-size. Of course this wouldn't be totally "offline" inference anymore, but for a web browser feature that's not a key consideration.
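To make the range-request idea concrete, here is a hypothetical sketch (not an existing API): it assumes the weights file is hosted on a server that honors Range headers, and that you have an index mapping each expert to its byte offset and length.

    // Fetch only the bytes holding one expert's weights via an HTTP Range request.
    async function fetchExpertWeights(weightsUrl, expertIndex, expertId) {
      const { offset, length } = expertIndex[expertId];
      const res = await fetch(weightsUrl, {
        headers: { Range: `bytes=${offset}-${offset + length - 1}` },
      });
      if (res.status !== 206) {
        throw new Error("server did not return a partial (206) response");
      }
      return new Uint8Array(await res.arrayBuffer()); // raw tensor bytes for this expert
    }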
> With MoE models, you could fetch expert layers from the network on demand
This is a common misconception, probably due to the unfortunate naming. Expert layers are not "expert" at any particular subject, and active-size only refers to the activated layers per token. You'd still need all (or most of all) the layers for any particular query, even if some layers have a very low chance of being activated.
All in all, you'd be better off with lazy loading the entire model, at least you'd know you have the capability to run inference from then on.
Ultimately it would amount to lazy-loading the model, but the parameters themselves would be fetched from the network as needed, which still decreases time-to-first-token. It's true that "expert" choices will span most of the model, regardless of any particular "subject" or "topic" choice, but if we simply care about time-to-first-token it's still a viable strategy.
Perhaps you could generate a few tokens before the entire model is downloaded, but since every token takes a potentially different "path" through an MoE model, you'd still need to wait for the entire download before getting deeper than a handful of tokens... which is not really a UX improvement imo.
Even at its worst, it's a minor UX improvement compared to having to download everything prior to getting to the first token. Ultimately we will complete the download, but we can still pick the best priority so that the first handful of tokens goes through.
Is it actually privacy preserving? Chrome mostly exists to extract all the information from a user it can without immediately getting a lawsuit of greater penalty than what is gained through ads, military contracts, etc. Android isn't too far off either. I would welcome any alternative to this. I can see applications for this being things like "while the device is at rest and charging, summarize all of the user's recent text communications" or whatever else as a legal loophole for wiretap laws.
>I can see applications for this being things like "while the device is at rest and charging, summarize all of the user's recent text communications" or whatever else as a legal loophole for wiretap laws
This just exposes an API for sites to use. If they wanted to do the types of spying you're cynically suggesting, they could just add it without an API and you'd be none the wiser. Chrome contains closed source components so you wouldn't even know.
It's a lot easier to hide the language they need in a EULA for a feature like this than it would be elsewhere.
I appreciate you feel this is a cynical take. But have you seen the class action lawsuits against Google over the last 5 years? They exceed a billion dollars as far as I can remember and they are for more blatant things than this.
So they don't get sued when the LLM inevitably instructs someone to drink bleach, hurt a stranger, delete the production db (I guess vibe coders say this is a good thing now though?), etc.
Who says they'll be using the 20GB model? You'd hardly need frontier-level intelligence to detect CSAM or ad keywords. Moreover, it's downloaded on first use, not bundled with the browser, so you won't really notice unless you're checking the Chrome user data directory, and since that also contains caches and other site data you'd likely chalk it up to random sites.
Hey, I'm the Chrome PM for the built-in AI APIs. I wanted to jump in on the privacy concern mentioned here.
It’s a totally valid question, and transparency is the only way this can work. On-device processing is an important core design goal of these APIs.
There are NO logs of the input / output interactions sent to any server, not even for training purposes. The only metrics we have are on performance, stability, and other generic API usage signals like any other APIs. These are all controlled by existing user preferences in Chrome.
It's a one-time download shared by all websites that use the Prompt API.
What's a bigger issue is that the models on most standard PCs are both tiny and slow. I was going to try using the Prompt API to change the text of (infocom) text adventures on the fly. But for many PCs, this will currently be too slow to be feasible.
Seems like a good way for a rogue JS script to offload token generation to a bunch of unsuspecting visitors
It would actually be pretty interesting to see if it's possible to decentralize the compute: generate something useful from a larger prompt by breaking it down and sending it to a bunch of browsers using a subagent pattern or something like RLM, each working on a smaller part of the prompt.
This feels like a lot of work for low reward; the technical/business infrastructure would be wild. And if anyone wants to offload their prompts to users' browsers, they might as well just use the Chrome API correctly? How many server side prompts would realistically be useful to offload to a low end model like this?
Plus even if you really wanted to do that, WebGPU exists and has for a while right?
> How many server side prompts would realistically be useful to offload to a low end model like this?
There's a lot of ways this API could go, e.g. more powerful models eventually, or perhaps integration with cloud models. For example, I could see Google trying to make Gemini the default model for users signed into Chrome.
I think we’ll get more powerful models when they become reasonable to run on regular people’s computers, in which case the compute costs would hopefully fall enough that people don’t need to resort to this kind of weird stuff.
As for cloud models, that would be interesting, although I guess then the fraud would be easier in spoofing whatever parameters (ip address? domain name? some Chrome install identifier?) to get around whatever rate limiting they come up with, rather than actually using people’s computers.
Anyways I’m sure if it ends up being abused, they can throw a permissions dialog in front of it. Just need to figure out a way to make normal people understand.
Low per-device reward combined with a high user count - either by large legitimate players or by botnets - has been the monetisation strategy of most online enterprises.
Imagine a Vendor API that adds a way to link from the page straight into a device purchase workflow. As a trial of the API in Chrome you can order a new Google Pixel 9b directly from any page with the word Android in it!
Or a LocalNet API that integrates with trusted hardware devices on your local network. As a trial (Chrome beta programme — strictly limited but here’s 3x signup links to share with your friends) you can adjust your Google Next Mini underfloor heating directly from Chrome!
Or a DirectCast API that lets you stream <video> elements to a device of your choice even over a VPN. As a Chrome trial, you can use your Google Cloud account to stream directly from YouTube Premium to any linked Google Chromecast devices you own!
This API seems perfect for an idea I've had for a while: a de-snarkifier for social media.
Social media can be intellectually stimulating and educational, but it's also easy to get sucked into ideological sniping and flamewars, even if you didn't go looking for it. The emotional and intellectual energy spent flaming strangers on the Internet is a complete waste of human capital.
With an API like this, I assume you could have a browser extension that could de-snarkify content before showing it to you. You could ask the LLM to preserve all factual content from the post, but to de-claw any aggressive or snarky language. If you really wanted to have fun, you could ask it to turn anything written in an aggressive tone into something that sounds absurd or incompetent, so that the more aggressive the post, the more it would make the author look silly.
This could have a double benefit. For the reader, it insulates them from the personal attacks of random strangers on the Internet. Don't get me wrong, there is a time and a place for real, charged arguments about important issues that affect us all. But there is little to be gained from having those fights with strangers; on the contrary, I think it poisons the body politic when strangers are screaming at each other.
For the writer, it takes away any incentive to be snarky or rude. If other people filter their content this way, there's no point in trying to be mean to them, and no "race to the bottom" for who can be more nasty.
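As a rough sketch of how the extension could look (assuming the Prompt API's LanguageModel global is reachable from the content script, and using a made-up ".comment-text" selector for wherever the comments live):

    // De-snarkifier sketch: rewrite each comment in place, keeping facts, dropping venom.
    const session = await LanguageModel.create({
      initialPrompts: [{
        role: "system",
        content: "Rewrite the user's text. Preserve every factual claim, remove " +
                 "snark, insults and aggressive tone. Reply with the rewritten text only.",
      }],
    });

    for (const el of document.querySelectorAll(".comment-text")) {
      el.textContent = await session.prompt(el.textContent);
    }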
Consuming things like comments gives my brain a false sense of social participation. It uses up my limited "social participation budget", with nothing to show for it. Often I reach for comments to see if an article is worth reading, has obvious false information, or see what the "consensus" is, and instead I end up wasting time on anything but that. It's not good for my mind to marinate in contextless opinions of random people and, increasingly, bots with an agenda. Sorting through all of that in my head uses up energy that could be better spent with real people. If I can simply see a summary of something potentially useful in under a minute, then my brain will get its dopamine hit (or alleviate FOMO) and be uninterested in sinking hours on something detrimental to my life. My experience suggests that, out of all the countless hours I've spent on the internet reading things, less than 1% of it has been of any use to me. It's been a net negative.
How often do I feel the need to eavesdrop on a group of people I don't know, discussing something in real life? Almost never. Why would I want to do that online, then? Also, it's mostly kids online. Why would I want to eavesdrop on what a bunch of kids are talking about? And yet it's difficult to avoid due to the nature of aggregation platforms. If it were up to me, I'd filter out any and all content generated by or aimed at people under 25 (or even 30).
Imagine surfing the web without ever hearing anything about or adjacent to US politics, celebrities, Musk or AI. I'll seek out that information as and when I need to.
Yes, I can just not use certain websites out of sheer will. I've made progress there, but it can be better still.
That is unironically exactly what I want from social media.
I want the option to engage with the substance of new developments in the world, technology, etc. without the drama. I don't want to be drawn into the drama of strangers (who could, for all I know, just be bots or ragebaiting AIs).
If I want drama, there's plenty of it on TV, or I could talk to my friends about what is going on with people I actually know.
The anti-pattern, in my mind, is logging on to engage with substantive content and to be inadvertently drawn into flamewars with strangers.
I would really just like the quirkier internet of old.
Flamewars these days are just created by shit-stirrers in another country who are pumping out rage bait from a massive array of smartphones. It's not even an impassioned flamewar; it simply exists to aggravate.
Using AI to forcefully disengage by simply suppressing that content would be nice and also have the secondary effect of depriving various internet resources of ad revenue.
I'd argue the issue is people have figured out that "shit stirring" can make actual meaningful differences to reality, be they foreign or local.
When the limit of effect a flamewar would have is if Star Trek or Star Wars got the top billing, or Vim was recommended to new programmers instead of Emacs, it was a fun novelty.
But now that there's real money and power resulting from this shit-stirring, of course people will use it as a means to an end. They've optimised professional shit-stirring because it's so valuable now.
Are humans supposed to enjoy the "flavor" of diarrhea, as the result of giving every village idiot a microphone so they can spew shit from their mouths?
Sure, you might say this sort of thing is boiling flavor out of your food, but... boiling the bacteria out of what you consume isn't a bad thing.
I worry that "boiling" is still optimistic, since it isn't as simple or foolproof. It's more like a complex fermentation process, where it's possible for a malicious input to hijack how it works and generate something more dangerous than what you put in.
Even if the output is only shown to a human, imagine a comment in a thread that tricks an LLM into "summarizing" a false account where other innocent people said terrible ban-worthy things.
But... It's the type of idea that is unpredictable as it comes into contact with reality. If it works, it probably works very differently from the initial idea of how it will work.
Yeah, I 100% agree with the caution in this comment.
I see the merit in such a proposal. It's the linguistic equivalent to boiling the food you consume, instead of eating it raw with all the associated bad stuff.
The problem is, as you said, that this plan is unlikely to be as rosy as it's portrayed and probably has a lot of drawbacks in real life.
I wasn't even talking about drawbacks, though that applies too.
I mean... you would be basically taking a complex thing, transforming and reconstructing it. What we want out of social media isn't a simple, legible function. The positives. You'd have to discover them.
If someone starts building with the initial idea above, my guess is that they'd end up with some sort of custom feed that draws inspiration and inputs from social media... but isn't social media. It's something else that you can scroll, read and whatnot.
That is exactly what I want. A boring but factual summary of useful nuggets from the mountain of shite that is ALL of social media. For example, on any given day, reddit/X/Bluesky/HN only has a couple of paragraphs worth of stuff that I care to know about. I want to train my brain to equate the internet with something boring that's only worth visiting when I need to look up information. I want this tech to reduce my (and hopefully others') use of the internet by 98%.
I want to go to news.ycombinator.com/reddit.com/etc on any given day and just see a couple of paragraphs and maybe a few reference links to follow if I so choose. Spend a few minutes reading that and close it.
All of that in the hope of diverting my limited time/energy on Earth to endeavours in real life with real people.
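The built-in Summarizer API (a sibling of the Prompt API, mentioned further down this thread) seems like the natural fit for that. A sketch, with option names as I understand them today and a hypothetical collectFrontPageText() scraper:

    // Turn a front page into a short, boring digest.
    const summarizer = await Summarizer.create({
      type: "key-points",
      format: "plain-text",
      length: "short",
    });
    const digest = await summarizer.summarize(collectFrontPageText()); // hypothetical helper
    document.body.textContent = digest; // replace the feed with the digest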
The boiling - cooking - is the bias here. Winograd's Understanding Computers and Cognition is the most excellent resource in 2026, written more than three decades ago by now.
Don't you think it's better to just curate your social media and follow communities where the default is not toxicity? This is basically a distortion layer for reality and will just encourage more echo chambers.
Also what is toxic to one person is not toxic to another depending on their subjective choices. How will you solve for this without everyone just seeing what they want to see even if reality is not like that? I feel that will just enhance the problems of social media rather than reduce them.
It kind of falls apart when you start to think of edge cases rather than the "hey, this tool will keep morons off my feed!" mentality.
I'm inclined to think that this will actually decrease the power of echo chambers. Echo chambers become that way by policing dissent, either through moderation or through aggressive attacks on dissenters. A de-snarkifier would de-fang the latter.
I agree that what is toxic to one person is not toxic to another, but think that this is largely because many people enjoy seeing their perceived enemies attacked. In other words, it comes down to a viewpoint bias: attacking my group/viewpoint is toxic, while attacking other groups/viewpoints is good and noble.
My ideal is that a de-snarkifier would be strongly instructed to be viewpoint neutral; to filter based on whether the comment is being respectful, without regard to the views being expressed.
My idea would backfire if other people program their filter to reinforce their own biases by favoring content that they agree with and creating or amplifying personal attacks on their perceived enemies. That would be unfortunate, but ultimately we can only control what we do; each person gets to make their own decision.
For YouTube, this already exists and I'm using it. The extension is called DeArrow and aims to reduce sensationalism via crowdsourcing, though I wouldn't be surprised if top contributors are bots using LLMs.
Man, that before-after slider on the home page makes me so sad... YouTube used to just be random people sharing cool stuff, and those de-sensationalized titles really brought me back to that time for a second! Cool stuff.
For people like me who tried it in the past and found it annoying, note that it now has a 'casual' mode where it only changes the truly useless titles and leaves reasonable ones alone.
Kinda looking forward to something like this, as it has the potential to remove empty junk calories from the internet, hopefully leading to SIGNIFICANTLY less use of today's popular platforms.
My wish list:
- Eliminate ALL clickbait titles and ads. I only want to see a dry factual title.
- For any given topic, I only care about the main article (with the option to only see a summary, unless it's a high quality blog) and a couple of substantive comments; the rest is junk I don't want to see.
The current state of popular social media sites has meant that I don't use them at all (except HN, which is trending in the same direction due to saturation with AI), but every other week or so I end up wasting a few hours, which I'd like to avoid entirely.
Ideally this would lead to 98% of content being filtered/summarised out, and over time I'd only use the internet for looking things up with intention. I want this to remove the majority of "entertainment" value from the internet (by default) so that time/energy can be refocused on real life and high-quality sources (books) only.
I actually have built myself a personal AI agent that does this for the main news headlines and for a summary of my personal email (sadly I can’t run it on work email yet). It can extract any actions required from a mail and make them into tasks, and also has a killer feature - a “sort out my email” button that archives all the emails it classifies as FYI, spam, mailing list or moot (it has classifiers for this), first producing a one-pager markdown summary of the whole lot in one shot, leaving all emails marked “action required” or “Urgent”. Email summaries are deliberately dry and factual with all advertising false urgency removed.
I can manually “hold” emails so they don’t go in the “sort out my email” woodchipper. It’s been life-changing.
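For what it's worth, the classification half of that could be done with the Prompt API and a constrained reply. A sketch, where the label list, getInbox() and archive() are assumptions rather than any real mail API:

    // Classify each email into one of a fixed set of labels, then archive the noise.
    const session = await LanguageModel.create({
      initialPrompts: [{ role: "system", content: "Classify the email. Reply with exactly one label." }],
    });
    const labels = ["action-required", "urgent", "fyi", "mailing-list", "spam", "moot"];

    for (const mail of getInbox()) {
      const label = await session.prompt(`${mail.subject}\n\n${mail.body}`, {
        responseConstraint: { type: "string", enum: labels },
      });
      if (!["action-required", "urgent"].includes(label)) archive(mail);
    }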
Or just ignore it. Or say you will not engage under [conditions]. Ultimately it will be you who looks foolish when the AI rewrites something incorrectly and you engage with something that wasn't actually said.
I would love an app like this. I am a frequent user of https://www.boringreport.org/ for news, which does something like what you’re describing but for news articles.
I was literally just thinking that I’d like something like this for HN, which has become an incredibly bitter, cynical, and depressing place in the last decade. On virtually any story, most of the top comments are negative. Every major company is a greedy monster trying to destroy your life, every CEO is a sociopath, everything is terrible, all the time. I wonder how most HN users even get out of bed every day.
I've thought about this for HN which, now that it's become so big, just has a lot of aggressive negativity and snark. You'd probably run into the same problem as Usenet Killfiles: the folks that use Killfiles would see random orphaned conversations or would just miss large parts of threads while the people that don't have Killfiles would see a mess of toxicity that would make them want to leave. Likewise if you prompt filter your experience, you'll be separating your experience from everyone else's.
I love this "de-snarkifier" idea and it seems to have broad interest. I couldn't resist hacking (well, vibe coding[1]) a "Snarknada" prototype to explore the viability, including patterns for low-latency and accuracy.
You’ve hit on exactly why we think on-device is the right move for this class of use cases. If you tried to "de-snark" an entire infinite-scrolling feed via a cloud API, the token costs would be astronomical for a developer. Plus, people (rightly) don't want to send their private social feeds or DMs to a third-party server just to clean up the tone.
Moving this to the device should make high-frequency "Semantic Mutation" financially and technically viable for the first time. If you (or anyone else) starts building this more seriously than my PM vibe coded toy, and hits specific friction points, I’d love to hear about them: it helps us prioritize the roadmap.
[1]: If you're using a coding agent (Cursor, Claude Code, etc.), I recommend pointing it to https://www.npmjs.com/package/built-in-ai-skills-md-agent-md. Most models were trained on the now-obsolete window.ai namespace, and this skill file helps them use the current APIs correctly.
been cranking on this too, not just for snark but for spam/scam heuristics as well.
it's something I feel is finally viable to combat at zero cost to the user.
This plus WebMCP would allow it to serve as a form of automod too on websites that you authenticate with (imagine a world where your social media profile has an automod of its own, powered locally; it can use this to steer your feed or to mute/block/moderate as need be). Even without WebMCP I have been working on making it autodetect HTML elements and extract UGC (comments/threads, etc.) automatically to moderate, since my initial tests with a small group found that some websites with frequent UI changes would break if hardcoded or if they did a lot of A/B testing.
Even better, the concept would allow you to also use it to hide certain spoilers (imagine sports or new movies that just came out and you want to not have to hide away from all socials).
didn't find any contacts on your new HN account, but in a few weeks I will be able to reach out to you with it fleshed out. :)
We have a community of nearly 14k that we will distribute this to
I think it's a step toward a future with a proper Model API.
But it's just a small step.
It reminds me of Apple's Foundation Models [1]
While many AI integrations are focused on text communication / chat-style interaction, a lot of software benefits from non-text interfaces.
I believe at some point OSes and browsers should provide an API to manage models so you'll have access to on-device/remote ones with a simplified interface for the app.
Making something standardized that is cross-platform would be fantastic. It also needs to be on mobile devices, so the players that can easily make it happen are mostly Apple and Google.
(Meta will follow or vice-versa I guess)
Key-point: it shouldn't be exclusive to promoted models.
The better part of this is having a local-first AI, particularly because it has tool-calling and structured output built in.
I haven't pushed out a full version[1] which uses ducklake-wasm + this to make a completely local SQL answering machine, but for now all it does is retype prompts in the browser.
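For the structured-output part, a minimal sketch of what I mean, assuming the Prompt API's responseConstraint option (a JSON schema) and a made-up table; the resulting SQL would then be handed to the local WASM engine:

    // Ask for SQL as structured JSON so it can be executed locally without parsing prose.
    const session = await LanguageModel.create();
    const schema = {
      type: "object",
      properties: { sql: { type: "string" }, explanation: { type: "string" } },
      required: ["sql"],
    };
    const raw = await session.prompt(
      "Table sales(day, region, amount). Write SQL for total amount per region.",
      { responseConstraint: schema },
    );
    const { sql } = JSON.parse(raw); // run against the local ducklake/DuckDB-WASM instance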
How do you envision short term and long term target usage of it?
And do you guys communicate between other browsers when doing something like this to try to settle on something common? I don't mean W3C but practically, it's a small world after all.
I can't speak for "you guys" anymore, as I'm retired, but from my personal perspective/recollection:
The target usage for the prompt API is anything that would benefit from the general capabilities of a language model, and can't be encompassed by the more-specific APIs for summarization/writing/rewriting. Realistic use cases currently are things like sentiment analysis, keyword extraction, etc. I have a number of ideas on how to integrate it into my current retirement project around Japanese flashcards, e.g. generating example sentences. If the small (~10 GiB) model class keeps getting smarter, the class of things possible on-device in this way gets larger and larger over time.
But overall, yeah, the goal with the prompt API, as with all web APIs, is to put something out there for discussion as early as possible, and get input from the broad community, especially including other browsers, to see if it's something that they are interested in collaborating on. https://www.chromium.org/blink/guidelines/web-platform-chang... (which I also wrote) goes into how the Chromium project thinks about such collaboration in general.
It's a tiny script that looks up the rss feed and uses the content to generate summaries; quite a nice fit with our static site. Sometime I'd like to extend it to ask different questions about the content.
This looks like it uses Gemini Nano under the hood. But the latest Gemma4 E2B and E4B models appear to be much better, so you'd probably be better off deploying quantized versions through an extension for now.
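A sketch of that extension route with transformers.js; the model id, device and dtype below are illustrative, so substitute whichever small ONNX-converted, quantized checkpoint you actually want to ship:

    import { pipeline } from "@huggingface/transformers";

    // Load a small quantized instruct model and run it on WebGPU where available.
    const generate = await pipeline(
      "text-generation",
      "onnx-community/Qwen2.5-0.5B-Instruct", // illustrative model id
      { device: "webgpu", dtype: "q4" },
    );
    const out = await generate("Summarize in one sentence: ...", { max_new_tokens: 64 });
    console.log(out[0].generated_text);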
I no longer have any inside knowledge, but from my time on this team they were very quick about getting the latest small (Google) models into Chrome. I expect that if Gemma 4 (or its equivalent Gemini Nano) isn't already in Chrome, then it will be soon.
Note that the article here was last updated 2025-09-21, and as of that time it was already on Gemini Nano 3.
The options are:
1. 22GB per website
2. 22GB per browser
3. 0GB / No AI capabilities
By having this in Chrome they are simply ensuring that option 2 replaces option 1. You still have option 3.
1. No AI
2. AI that works and is actually useful
3. AI that is slow, crappy and hallucinates all the time
I choose 1 and 2.
https://chromestatus.com/feature/6325545693478912
The parameters are not part of this initial release but can be added back with the origin trial you discovered.
Maybe the next big thing will be some software subscription premium offers with a bunch of 5090s as an extra.
also, any idea what the on disk impact is?
If it turns out useful enough, I'm sure browsers will just start including it as a (perhaps optional) part of installation.
Here's to hoping that that dystopia will never happen.
Why would adding a ML API or library require an EULA change?
Edit: a simple example is a spam bot.
If you want to do anything interesting you need transformers.js and a decent model. Qwen 0.9B is where things start working usefully.
Interesting to think about and explore, though.
DeArrow works for YouTube at least. uBlock Origin or the Brave browser works for ads. Not sure why you'd need an AI to remove ads...
I love this "de-snarkifier" idea and it seems to have broad interest. I couldn't resist hacking (well, vibe coding[1]) a "Snarknada" prototype to explore the viability, including patterns for low-latency and accuracy.
You’ve hit on exactly why we think on-device is the right move for this class of use cases. If you tried to "de-snark" an entire infinite-scrolling feed via a cloud API, the token costs would be astronomical for a developer. Plus, people (rightly) don't want to send their private social feeds or DMs to a third-party server just to clean up the tone.
Moving this to the device should make high-frequency "Semantic Mutation" financially and technically viable for the first time. If you (or anyone else) starts building this more seriously than my PM vibe coded toy, and hits specific friction points, I’d love to hear about them: it helps us prioritize the roadmap.
[1]: If you're using a coding agent (Cursor, Claude Code, etc.), I recommend pointing it to https://www.npmjs.com/package/built-in-ai-skills-md-agent-md. Most models were trained on the now-obsolete window.ai namespace, and this skill file helps them use the current APIs correctly.
it's something I feel is finally viable to combat at zero cost to the user.
This plus webmcp would allow it to serve as a form of automod too on websites that you authenticate with (imagine a world where your social media profile has an automod of its own powered locally. can use this to steer your feed or to mute/block/moderate as needbe). Even without WebMCP I have been working on making it autodetect html elements and extract UGC (comments/threads..etc) automatically to moderate (since my initial tests with a small group found some websites with frequent UI changes would break if hardcoded or if they did a lot of AB testing)
Even better, the concept would allow you to also use it to hide certain spoilers (imagine sports or new movies that just came out and you want to not have to hide away from all socials).
didn't find any contacts on your new HN account, but in a few weeks will be able to reach out to you with it fleshed out. :)
We have a community of nearly 14k that we will distribute this to
While many AI integrations are focused on text communication / chat style. A lot of software benefits from non-text interfaces.
I believe at some point OSes and browsers should provide an API to manage models so you'll have access to on-device/remote ones with a simplified interface for the app. Making something standardized that is cross-platform would be fantastic. It also needs to be on mobile devices, so the players that can easily make it happen are mostly Apple and Google. (Meta will follow or vice-versa I guess)
Key-point: it shouldn't be exclusive to promoted models.
[1] https://developer.apple.com/documentation/foundationmodels So the app would be able to query and get the right model(s).
[1] - https://notmysock.org/code/voice-gemini-prompt.html
We definitely communicated with other browsers. There were the standing WebML Community Group meetings at the W3C every few weeks. There were async discussions like https://github.com/mozilla/standards-positions/issues/1213 and https://github.com/WebKit/standards-positions/issues/495 . (Side note, I love the contrast between Mozilla's helpful in-depth feedback and WebKit's... less helpful feedback.) There was also a bit of a debacle where the W3C Technical Architecture Group tried to give "feedback" but the feedback ended up being AI-generated slop... https://github.com/w3ctag/design-reviews/issues/1093 .
Wow, yeah, the feedback is actually useless AI-generated slop. Has that happened before?
- Gemini Nano-1: 46% MMLU, 1.8B
- Gemini Nano-2: 56% MMLU, 3.25B
- Gemma4 E2B: 60.0% MMLU, 2.3B
- Gemma4 E4B: 69.4% MMLU, 4.5B
Sources:
- https://huggingface.co/google/gemma-4-E2B-it
- https://android-developers.googleblog.com/2024/10/gemini-nan...
https://android-developers.googleblog.com/2026/04/AI-Core-De...
Yes; "With the Prompt API, you can send natural language requests to Gemini Nano in the browser."