Show HN: Atomic – Local-first, AI-augmented personal knowledge base (atomicapp.ai)
Hey HN - I first posted my knowledge base product, Atomic, here around a month ago; since then, a viral tweet by Karpathy has produced a torrent of AI-powered knowledge base projects. Meanwhile, I've been shipping like crazy. Here are some of the new features from the last month:
- Rebuilt the iOS app, with an Android app on the way
- Greatly expanded both the MCP and internal agent chat toolkits
- A custom CodeMirror 6-based Markdown editor with Obsidian-style rendering
- A dashboard view with a summary of atoms created or updated in the last day
And many bug fixes and improvements across the board. Atomic is MIT licensed. You can download the desktop app, but the true power is unlocked by self-hosting an Atomic server, which any client (web, mobile, or desktop) can connect to from anywhere. You can add content to your knowledge base directly, or via RSS feed, web clipper, mobile share capture, Obsidian sync, or the REST API.
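Since the thread doesn't document Atomic's actual REST routes, here's a hypothetical sketch of what pushing a note to a self-hosted server could look like - the `/api/atoms` path and JSON payload are illustrative assumptions, not the real API:

```python
import json
import urllib.request

def create_atom(server_url: str, title: str, body: str) -> urllib.request.Request:
    """Build a POST request for a self-hosted server.

    NOTE: the route and payload shape below are assumptions for
    illustration; check the Atomic docs for the real endpoint.
    """
    payload = json.dumps({"title": title, "body": body}).encode()
    return urllib.request.Request(
        f"{server_url}/api/atoms",  # assumed route
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_atom("http://localhost:8080", "Hello", "My first note")
print(req.get_method(), req.full_url)  # POST http://localhost:8080/api/atoms
```

The point is that any client (or script) that can make HTTP requests can feed the knowledge base, which is what makes the self-hosted server the hub.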
It's the second LLM wiki on the front page today!
I wish the scene were more collaborative, instead of everyone writing their own. But I guess this is the LLM curse - too easy to start. I'm afraid it will all go the LangChain direction: VC-funded designs that aren't yet ready, solidifying choices that would normally be superseded.
The reviews are done automatically - here are the instructions: https://github.com/zby/commonplace/blob/main/kb/agent-memory...
I am open to changing these instructions - it cannot be about just making your system look better - but I'll try to incorporate genuine ideas for how to improve these reviews.
...except my Android phone LOL
Am I the only one who feels a bit betrayed after reading LLM text? I'm not even willing to try out the app once I notice… which is a shame.
At least polishing the obvious parts would help a lot and is not that much work.
I still refuse to self-censor to avoid having my actual writing get flagged by someone as LLM written.
OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to make it easier to select models from those specific providers.
https://atomicapp.ai/getting-started/ai-providers/
> OpenAI-compatible is indeed one of the provider options for Atomic. Ollama and OpenRouter are separate options to allow for easier selection of models from these specific providers.
Why is this necessary over just presenting the result of `/v1/models`?
You can say it's just the ordering of a dropdown, but to me it seems pretty clear that this thing is developed with the idea that you'll most likely use a SaaS provider.
"Local-first, your data never leaves the computer! Except once to go to the biggest information hoarders on the Internet."
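For context, any OpenAI-compatible endpoint (Ollama serves one at `http://localhost:11434/v1`, for example) answers `/v1/models` with the standard OpenAI list shape, so populating a model dropdown only takes parsing out the IDs - a minimal sketch:

```python
import json

def list_model_ids(models_response: str) -> list[str]:
    """Parse the JSON an OpenAI-compatible /v1/models endpoint returns
    ({"object": "list", "data": [{"id": ...}, ...]}) and pull out the
    model IDs - all a provider dropdown actually needs."""
    data = json.loads(models_response)
    return [m["id"] for m in data.get("data", [])]

# A typical response body, trimmed to the fields that matter here:
sample = '{"object": "list", "data": [{"id": "llama3", "object": "model"}]}'
print(list_model_ids(sample))  # ['llama3']
```

Which is the commenter's point: one generic provider entry plus a base URL would cover Ollama, OpenRouter, and everything else speaking that API.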
Fair point - the app makes requests to load fonts. We'll fix that in the next release.