Why? What makes Spacemacs so different or special that it requires some kind of distinct opinion that would be extremely valuable? Spacemacs is the same old Emacs with some out-of-the-box customizations on top - there's nothing fundamentally different about it.
Spacemacs is not a "batteries-included" version of Emacs. Say that and people may get confused. It's not a "different version" of Emacs - it's not Emacs at all. It's an Emacs config you can configure - a meta-config. It is more like a collection of recipes you can run on Emacs. That is an important distinction.
Hence my question: what could Wellons (who's a seasoned Emacs veteran) ever say about Spacemacs (or Doom - which in this context makes no difference)? What kind of views would one be interested to hear? Using the Space key as the leader key, or something about the local-leader key; or vim navigation/Evil in general; or the modules/layers architecture of the Emacs config? He said in that post you shared that he believed he'd eventually end up using Evil - he doesn't need Spacemacs for that.
Spacemacs is great for beginners - for people who don't want to deal with learning Emacs's native bindings, which are legitimately confusing. For someone like Chris it makes little sense; they'd probably just add modal-editing packages to their existing config. Spacemacs and Doom are still valuable, though - one can find many interesting gems there.
Also, these projects may give you good discipline for structuring your keys mnemonically - everything file-related would be under "SPC f", search stuff under "SPC s", etc.
The author is the developer of the RSS reader Elfeed, which a lot of Emacs users use several times a day. Though the article talks about a vibe-coded wxWidgets-based GUI application called Elfeed2 that he wrote as a replacement, Emacs aficionados would be loath to leave their Emacs environment and switch to that. Hopefully the Emacs elfeed finds a new maintainer.
I tried Elfeed2 immediately after the announcement; well, it's nowhere near the experience of elfeed in Emacs. Elfeed2 doesn't load content for most of my feeds; elfeed does. I also integrated elfeed-tube, which shows previews of videos and their transcripts, making it a no-brainer to get a summary without watching the whole video.
A big loss for the Emacs community! emacs-aio is great!
Cool to see you in the wild! For me it does work out of the box; however, some sites break or have overly complex navigation, especially with iframes, and I have to swap to a mouse, which is a bummer. I understand that's an inherent limitation of the tech, since today's web isn't built for it.
FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL loop with a built-in editor, not the other way around. When you give an LLM a closed loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.
The LLM that I run inside Emacs can fully control the active Emacs instance; I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs - and not just run it, but actually play it without losing. It was insane.
Also, Emacs is all about plain text - you can easily extract text from anything: the browser, terminal, CLI apps, Slack, Jira, etc. - and you can do that on your own terms: context can appear in a buffer, land in your clipboard, become a file or a series of API requests. That is really hard to beat.
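To make the closed-loop idea concrete, here is a minimal sketch in Python rather than Elisp (the `eval_and_observe` tool name and its result shape are illustrative assumptions, not any particular agent's API): the model submits a snippet, the tool evaluates it in a live environment, and the observed result or error goes back into the context verbatim.

```python
# Hypothetical tool an agent could call: evaluate a snippet in a live
# environment and return the observed result (or error) verbatim, so
# the model can reason from real outcomes instead of guessing.
def eval_and_observe(snippet, env=None):
    env = {} if env is None else env
    try:
        value = eval(snippet, env)  # evaluate in the shared live environment
        return {"ok": True, "value": repr(value)}
    except Exception as exc:
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

# The environment persists between calls, like a REPL session.
session = {}
eval_and_observe("squares = [x * x for x in range(4)]; squares", session)
eval_and_observe("undefined_name", session)  # an observed NameError, not a guess
```

The key property is that failures come back as data the model can react to, which is exactly what a live Emacs REPL provides for free.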
I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.
I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.
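As a rough sketch of the emacsclient wiring (the `emacsclient --eval` flag is real; the helper names and error handling are my own assumptions, not pi's actual implementation), an agent-side tool can be little more than a subprocess call into the user's running Emacs server:

```python
import subprocess

def emacsclient_command(form):
    # Build the argv for evaluating an Elisp form in the user's
    # running Emacs session via the Emacs server.
    return ["emacsclient", "--eval", form]

def run_in_emacs(form):
    # Run the command and hand the printed result (or error text) back
    # to the agent; requires (server-start) in the live Emacs.
    proc = subprocess.run(emacsclient_command(form),
                          capture_output=True, text=True)
    return proc.stdout.strip() if proc.returncode == 0 else proc.stderr.strip()

# e.g. run_in_emacs("(magit-status)") or run_in_emacs("(buffer-name)")
```

Because the form is evaluated inside the live session, the agent sees the same state you do - open buffers, ediff windows, test output - which is what makes Emacs work as a shared medium.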
Big same. I have been doing a lot of Clojure development, and hooking my app up to a live REPL has given the LLM an absolutely fantastic feedback loop. I don't think a lot of people understand what they're missing.
Does anyone else not understand what people mean when they refer to the "friction" supposedly inherent to these power user tools? Almost none of the configs/scripts/etc I use for my heavily-customized and terminal-heavy setup get changed for years at a time.
If you frequently have to use other computers, a heavily customized setup has much more friction: either you set up every machine the way you want, or you have to remember how to do things without all the customization (if you can't customize it, or it isn't worth the time).
When I graduated college I used Dvorak and Emacs on Linux. Six months of having to use shared Windows lab computers extensively beat me down to surrender all of those points - my brain just couldn't handle switching, so I conformed my desktop to match work. Then later I switched jobs to a group that was all Unix, but of many varieties most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.
> With my newly-acquired superpowers I could knock out the last two pieces in a few days’ work
From the linked post:[0]
> I left an employer that is years behind adopting AI to one actively supporting and encouraging it. As of March, in my professional capacity I no longer write code myself. My current situation was unimaginable to me only a year ago. Like it or not, this is the future of software engineering. Turns out I like it, and having tasted the future I don’t want to go back to the old ways.
It's deeply distressing to watch people fall into AI psychosis. Being smart, accomplished, or experienced is no defence.
After the bubble pops and the industry realises the damage these tools can do to people, folks like the author will have to confront that they were taken in by a lie. Many won't be able to confront that.
> Being smart, accomplished, or experienced is no defence.
Perhaps you're confusing "not using AI" with "not being dependent on AI", those are very different things.
The edge isn't from avoidance, it's from using AI as leverage on top of real skill. A strong developer + AI beats a strong developer alone, and massively beats a weak developer + AI. The edge doesn't come from avoiding a tool - it comes from being the kind of person who doesn't need it but uses it anyway. That's leverage. Refusing to use it is just leaving leverage on the table to make a philosophical point.
> After the bubble pops
People like Chris (who is an enormously capable engineer) would just move on to different tools, different techniques and paradigms. That is the essence of being a software developer - many of us choose this path specifically because it forces you to learn something new, every single day. That is (I suspect) also another reason why Wellons decided to migrate away from Emacs - he has learned it so deeply that perhaps it's no longer giving him the satisfaction of learning. Which, to be honest, is hard to believe - Emacs is a boundless playground, and there's always something new to learn there.
> It's deeply distressing to watch people fall into AI psychosis.
It's unclear what you're saying here... Yes, AI-induced psychosis is a real problem and the frontier labs' mitigations are ineffective, to put it mildly. But using AI as a coding tool doesn't have anything to do with psychosis.
It's not AI psychosis; you're taking what he said to an extreme.
Anyone who has actual corporate team-lead or management experience understands AI as effectively a junior dev who doesn't have great persistent memory. These devs using AI are reviewing, guiding, and validating the work AI gives them, just as they would with a junior dev.
The inverse of your statement is more apt; it's distressing to see people so angsty about AI usage. There are going to be skilless vibecoders and then there are going to be experienced devs (like OP) who figured out their AI workflow to multiply their productivity 2-5x.
What the future holds for AI model pricing - that is a valid concern. But I don't think that's what you intended.
No. AI is a must for software development. It's non-negotiable. The productivity gains are too great. The era of 100% human-written code is over. People will still do it as an idle curiosity, for personal projects only they intend to use. But even those open source projects with significant user bases that forbid the use of AI (like, afaik, NetBSD) will be eclipsed by those that support it in terms of features, capability, and security. And the commercial world? Forget it. You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.
Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.
LLMs may be a must for programming, but not for engineering. Writing code is the easy part once you figure out what actually needs to be built in the first place.
Indeed. But figuring out what actually needs to be built is the systems analyst's job, not the programmer's. It takes people skills and holistic thought, something programmers are generally poor at (and AI certainly is no good at, at least not today).
I know how to do things by hand, man. But the writing is on the wall: that skill is going the way of writing programs on punchcards. And there's little we can do about it because the economics in favor of LLMs are like laws of physics.
Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?
> "No. AI is a must for software development. It's non-negotiable."
Absolutist rubbish.
> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."
As is this. If a language model is relevant to a project, open source or otherwise, is of course heavily dependent on its nature (ethics, use case, deployment, working environment/culture, et cetera).
I just wonder how jobs like that won't end with the employees being replaced. Seems too good to last. In a few years OpenAI will just sell $1,000-per-month Human-free Agent Coding to businesses.
Saying they have psychosis is a rude exaggeration.
AI psychosis is having a toxic relationship with a chatbot as if it were a real person. It has nothing to do with engineering. You're muddying your own point by conflating all LLM use with some kind of delusion. There is a lot of nuance in this space, and you're not doing yourself any favors by ignoring it if you're an engineer. No bubble pop, short of a straight-up apocalypse, is going to put this genie back in the bottle. Models are trained. Tools are built. There isn't a single industry that cares about artistry more than efficiency. It's here to stay, it's getting better, and if you don't know how to use it, you're going to have trouble finding work.
Not writing code isn't the same as vibe-coding. You can stay on top of AI, make it rewrite the things that look bad, make it refactor until you're happy with how things look, etc....
Maybe a lot of people who are doing that aren't admitting that they've stopped writing code, but when all you're ever doing is manually fixing a few lines, moving blocks of code to more sensible places, fixing jumbled parameters in a call, and such, you're not really writing code anymore. You're now a chef in a kitchen yelling at assistants, touching things yourself only when communicating a correction to one of those dimwits is more frustrating than just doing it yourself.
You still have to be a cook to be a chef, though. But the reason I say that AI is dumb is because I tell it to do things, it does them in a dumb way, and I complain at it and tell it to write it in a sensible way. It screws that up, and I tell it to do it again, and not to screw it up. I'm still not coding. If it goes into a loop of nonsense, I touch things with the intention of doing just enough to knock it out of that loop (or rather keep the new context from falling into it.)
"After the bubble pops" we might see that a lot of new chefs can't actually afford assistants. But just as likely, the overbuilt (government-subsidized directly and through policy) capacity might end up getting written off, and at the cost of electricity and maintenance costs could stay reasonably good. Or algos improve. Or training methods improve.
this is just like being promoted from developer to manager. some people like it, some don't. with AI there is another dimension: some people like managing machines instead of people, some don't.
it's not for me. i don't want to stop writing code. i don't mind managing people, but i don't want to manage machines (at least not with as imprecise an interface and outcomes as AI provides). consequently, AI may be fine for this person, but it is not for me.
I've been retired from Emacs for several years now, but I'm still looking for a magit replacement that is independent of my editor. VS Code's magit extension is really good, but I split my time between IntelliJ and VS Code.
#!/bin/sh
# Open a minimal terminal Emacs that loads only Magit, but only when
# run inside a git repository (stderr silenced outside of one).
if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" = "true" ]; then
    exec emacs -nw -q --no-splash -l "/path/to/magit-init.el"
fi
It worked well for me because I can reuse all my keybindings (evil + leader keys with `general`) and my workflow is fully in the terminal. (I have since moved on to Jujutsu, and `jjui` is filling this gap for me right now, but it's not quite a magit-for-jj).
Toss them, because the level of damage they have done is astounding. Tons of companies are still fixing the losses from vibe coding.
What we need is better code analyzers, lexers, and the like. And LLMs are practically the opposite, because they can never, ever give a concise answer, by design. Worse, they rot over time.
You're right, Spacemacs is essentially a batteries-included version of Emacs.
[1] https://nullprogram.com/blog/2017/04/01/
I see the author is spring cleaning:
> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.
https://github.com/skeeto/dotfiles/commit/df275005769b654618...
> I am no longer using Mutt nor running my own mail server. In general less terminal stuff for me.
https://github.com/skeeto/dotfiles/commit/e331e367c75f66aaa9...
LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible with adopting new tech, which involves freeing myself of previous choices.
For me, the friction always comes when I try to use the internet without it.
solid extension, big fan
This is what gives me the most pause.
My .emacs config has improved, and I wrote my own Emacs-based coding agent: https://github.com/mark-watson/coding-agent
This exactly. Most people can't set it up that well.
[0]: https://nullprogram.com/blog/2026/03/29/
> You are not using the tools correctly.
Stop being deluded, man.
When this crap collapses in on itself, you will be in tears, asking for the knowledge you failed to get without the fancy Clippys.
Now drop that fancy MegaHAL chatbot and learn to do things by hand.
Anyone know of something like this?
https://flathub.org/en/apps/io.github.aganzha.Stage
The IntelliJ git client is my favorite by far; I'm curious, what do you not like about it?
For you, perhaps.
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-...
This sounds like unsubstantiated hyperbole - can we keep HN grounded in reality, please?
My alternative hypothesis - you don't like agentic coding or maybe LLMs in general. Not helpful for the group.