In 2026, the number of mobile applications in the App Store and Google Play increased by 60% year over year, largely because entry into the market has become much easier thanks to AI.
Yes, but they are mostly paying little or nothing. How much did you spend on phone apps this year? And ads pay a pittance, unless you have massive scale.
Already having my credit card is an overwhelming advantage for the Apple App Store and for Steam. I won’t say it is impossible to overcome, but I think I could count on my fingers the number of instances where I, like, typed my card into a website to buy anything in the last decade.
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly, it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should be smart, judging by their job titles and salaries, got the causation completely backwards!
Hey, I notice this kind of thing all the time. People use "data" to tell the story they want to tell -- similar to how humans seem to make a decision subconsciously and then weave a rational justification for it afterwards.
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
Review the methodology, if you can, and form your own conclusions. Don't bother trying to change people's minds. It rarely works, and often causes conflict, even in the case of people who say they're data-driven.
Realizing that I could have frickin mined enough bitcoins overnight back then to be set for life now (maybe for multiple generations) is one of my biggest life regrets. I assume it’s a regret shared with all the other people who were into tech back then but dismissed bitcoin as stupid, as I did.
You simply can't get hung up on what could have been. Same applies to trying to time the stock market... should have bought, should have sold. Best thing is to know there's nothing that can be done about the past and move along and deal with what you can do now instead.
You're right. What gets me though is that unlike the stock market, bitcoin was an incredibly rare occurrence where anyone could have gotten extraordinarily rich without even incurring any risk! (besides a couple evenings spent learning how to use it.) Whereas to have $10MM today in GOOG stock, I would have had to invest over $300k in 2010.
That's not true at all; any number of things could have killed bitcoin in its infancy. The stakes were just low. Somewhere out there is a lost collection of wallets of mine, collectively holding ~100btc ($1000 at the time). If regulators had cracked down hard, that 100btc would have become just as worthless, and either way I'd be out $1000.
"Risk" is an epistemic claim about the future taking the worse path. Obviously looking back it looks like risk-free money. That's not how it looked at the time. The "currency of the future" thing was always niche, especially after the crash in 2013, until a much larger cultural shift happened around 2015-ish.
Plenty of people will chime in with early bitcoin stories, and how they always believed it was going to go to the moon, etc. I always find it curious, because my memory of the time period is that it was a means to an end, and that's how the overwhelming majority saw it and treated it. Funnily enough, it was that overwhelming majority that led to it being worth anything at all. If it had been just a bunch of yahoos clamoring about the "currency of the future" thing, it probably would have gone absolutely fucking nowhere. The irony that the yahoos ended up becoming the majority is, I think, underappreciated.
You could have bought bored apes (and any of the other scammy NFTs) and ended up losing your shirt in the end. Overall, who cares if you miss the boat on something, it's not good for the mind to dwell on it.
Every year since around 2014, friends and family would ask whether they should buy Bitcoin, and every year I told them that I had looked into Bitcoin, I fully understood what Bitcoin was and how it worked, and I recommended that they not invest in Bitcoin because it was stupid. And every year, my advice has been disastrously wrong. Who knows, maybe 2026 will be the first time I'm right.
On the other hand I spent 25 years selling desktop software and never once had an annual review. I never had to submit an application for time off. I never had to ask permission for a dentist appointment. If the weather was good I could take the day off and go for a bike ride. I didn’t attend any scrum meetings nor did I have to argue about what framework to use with a PM who couldn’t code FizzBuzz.
No, "grass always looks greener on the other side" is a perspective thing. If you stand on your own grass then you look down onto it and see the dirt, but if you look over to the other side you see the gras from the side which makes it look more dense and hides the dirt. But it's the same boring grass everywhere. :)
At first, I thought "this is missing the point of the phrase" and moved on, but now I'm back to say it's stuck in my head, and it's an intuitive, pretty neat way to think about it.
Almost all of Patrick's points are great if your software development goal is to make a buck. They don't seem to matter if you're writing open source, and I'd argue that desktop apps are still relevant and wonderful in the open source world. I just started a new hobby project, and am doing it as a cross-platform, non-Electron, desktop app because that's what I like to develop.
The onboarding funnel: Only a concern if you're trying to grow your user base and make sales.
Conversion: Only a concern if you're charging money.
Adwords: Only a concern if, in his words, you're trying to "trounce my competitors".
Support: If you're selling your software, you kind of have to support it. Minor concern for free and open source.
Piracy: Commercial software concern only.
Analytics and Per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
The only point where I can agree with him that web development is better is the shorter development cycle. But I would argue that this is only a "developer convenience" and doesn't really matter to users (in fact, shorter development cycles can be worse for users, as their software changes rapidly like quicksand out from under them). To me, in my open source projects, my "development cycle" ends when I push to git, and that can be done as often as I want.
> Analytics and Per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
KDE has analytics; they're just disabled by default (and I always turn them on in the hopes of convincing KDE to switch the defaults to the ones I like).
For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose web 95% of the time.
Counterpoint: is the web browser not already fulfilling the "universal app engine" need? It can already run on most end-user devices, where people do most other things. IoT/edge devices don't count here, but these days most of their data is just sent back to a server that is accessible via some web interface.
Ignoring the fragmentation of course; although that seems to be getting less and less each year (so long as you ignore Safari).
Yes. But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language).
The impact on people's time, money, and the environment is proportional.
> But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language)
Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?
Yes. I can run entire 3D games... ten of them, in the memory footprint of your average browser. Even fairly decent-looking ones, not just your Doom or Quake!
And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit, as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave that off.
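If anyone wants to check this kind of claim on their own machine, here's a rough sketch (assumes psutil is installed; the process names are just examples, and shared pages mean RSS overstates the true cost, so treat it as a ballpark only):

    # Sum resident memory across all processes belonging to an app,
    # e.g. every browser process vs. a single native app process.
    import psutil

    def total_rss_mb(name_fragment: str) -> float:
        # Sum RSS (in MB) over processes whose name contains name_fragment.
        total = 0
        for p in psutil.process_iter(["name", "memory_info"]):
            try:
                if name_fragment in (p.info["name"] or ""):
                    total += p.info["memory_info"].rss
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        return total / (1024 * 1024)

    print(f"browser: {total_rss_mb('firefox'):.1f} MB")  # example name
    print(f"native : {total_rss_mb('gedit'):.1f} MB")    # example name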
> Yes. I can run entire 3D games... ten of them, in the memory footprint of your average browser.
What about in QML, which uses Web technologies like CSS, JS and even basic HTML? The whole KDE Plasma 6 desktop is built around these technologies now and I (and many others) consider it light and high-performance.
If you saddle those technologies with the full browser everything then it will get larger, yes, but nothing requires you to do this, just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed.
Plain Javascript can be very fast and still come at relatively low resource demands, and the same is true of HTML and CSS. Many "plain desktop-native" applications often end up reinventing their own variants of HTML and CSS in the course of designing the UI anyway.
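If you haven't tried QML, here's a minimal sketch of the flavor (assumes PySide6 is installed via pip; this is not how Plasma itself is built, just a taste of the declarative style with inline JavaScript and CSS-style color names):

    import sys
    from PySide6.QtGui import QGuiApplication
    from PySide6.QtQml import QQmlApplicationEngine

    # A tiny QML scene embedded as bytes.
    QML = b"""
    import QtQuick

    Window {
        width: 320; height: 120; visible: true
        title: "QML sketch"
        Text {
            anchors.centerIn: parent
            text: "1 + 1 = " + (1 + 1)   // inline JavaScript expression
            color: "steelblue"           // CSS-style color name
            font.pixelSize: 24
        }
    }
    """

    app = QGuiApplication(sys.argv)
    engine = QQmlApplicationEngine()
    engine.loadData(QML)
    sys.exit(app.exec())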
It's better, but it's still quite bloated, to be honest. Linux is generally more memory-hungry than Windows because of how modular it is, and having no Win32 equivalent really hurts. Although they've started doing UI in React Native over there too...
Qt is much lighter than your Chromium-based stacks but all the waste kind of adds up.
"just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed"
Containers are hungrier than running stuff on bare metal...
Yeah, React Native is apparently how Claude Code operates (even on terminal) so it wouldn't surprise me to see it being useful in a native GUI context as well, if we can get more bindings than Skia.
> Containers are hungrier than running stuff on bare metal...
Containers are tremendously lightweight compared to VMs. You might as well point out that running a full multiuser, security-protected OS like Linux is hungrier than running on bare metal with DOS too. It's just as true, and even proportionally as true.
In any event a full Fedora container with all packages installed is going to be tremendously larger than a distroless hello-world "built" around Alpine, for instance, even though they both use container technologies. Same applies to Web technologies, you can certainly go and easily add a lot of waste using them but they are not themselves inherently wasteful.
I believe Firefox uses separate processes per tab, and most of them are over 100MB per page. That's understandable when you know that each page is the equivalent of a game engine with its own attached editor.
A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.
>Counterpoint: is the web browser not already fulfilling the "universal app engine" need?
Counter-counterpoint: Maybe it's time to require professional engineer certification before a software product can be shipped in a way that can be monetized. That would filter out of the industry the devs who look at browsers today and go "Yeah, this is a good universal app engine."
I think a browser is an inverted universal engine. The underlying tech is solid, but on top of it sits the DOM and scripting, and then apps have to build on top of that mess. In my opinion, it would be much better for web apps and the DOM to be sibling implementations using the same engine, not hierarchically related. You wouldn’t use Excel as a foundation to make software, even though you could.
Maybe useful higher-level elements like layout, typography, etc. could be shared as frameworks.
You are thinking along the same lines as me. The fact that the first thing to be standardized was HTML made it a fait accompli that everything had to be built on top of it, since that "guaranteed" <insert grain of salt> cross vendor compatibility.
There are many alternate histories where a different base application layer (app engine) could have been designed for the web (the platform).
"The Browser" has turned out to be a pretty terrible application API, IMO. First, which browser? They are all (and have been) slightly different in infuriating ways going all the way back to IE6 and prior. Also, a lot of compromises were made while organically evolving what was supposed to be "a system for displaying and linking between text pages" into a cross-platform application and system API. The web's HTML/CSS roots are a heavy ball and chain for applications to carry around.
It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.
It didn't win. It just survived long enough. The web is a terrible platform. I haven't ever shipped a line of "web code" for money and I plan to keep it that way until I retire. What a miserable way to make a living.
Perhaps you're taking the npm/react/vercel world to be the entire web? I agree that that stuff is a scourge. But you can still just write html and Javascript and serve it from a static site, I wrote an outline in https://incoherency.co.uk/blog/stories/web-programs.html which I frequently link to coding agents when they are going astray.
When I was a kid I was running websites with active forums and a real domain name, and I did it with vBulletin and my brain. Someone bought the domain name and website off of me, haven't touched web tech since. I did use Wt at an old job once, but the "website" was local to 1 machine and there were no security concerns.
I envy your pure soul. I am one of many who have, at times, been coerced through financial strain into writing some front end code. All I ask is that, when the time comes, you try to remember me for who I was and not the thing I became.
Look at caniuse: if you see green boxes on all the current-version browsers, then you are good to go. If not, wait until the feature is more widely supported.
Remember Flash? The big tech companies saw it as a threat to their walled gardens. They formed an unholy alliance to stamp out Flash, with a sprinkle of fake news labeling it a security threat.
Remember Livescript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross-platform system. The web and Javascript just BARELY escaped a big tech smackdown. They stroked the ego of big tech by renaming it to Javascript to honor Java. Licked some boots, promised a very mediocre, non-threatening UI experience in the browser, and big tech allowed it to exist. Then the whole world started using the web and Javascript. It caught fire before big tech could extinguish it. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens, but that's another story.
You may not like browsers but they are the ONLY thing big tech can't extinguish, due to ubiquity. Achieving ubiquity is not easy, maybe not even possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross-platform GUI.
Web browser UI available on all devices is not a failure, it's a miracle.
To top it all off, HTML/CSS/Javascript is a pretty good system. The box model of CSS is great for cross-platform design. Things need to work on a massive TV or a small-screen phone. The open, text-based nature is great for catering to screen readers to help the visually impaired.
The latest whizbang GPU-powered UI framework probably forgot about the blind. The latest whizbang is probably stuck in the days of absolute positioning and non-declarative layouts, with x,y(,z) coords. It may be great for the next-gen 4-D video game, but it sucks for general purpose use.
Flash had quite a lot of quite severe CVEs; how many of those do you suppose were "fake news" connived by conspiracy (paranoid style in politics, much?), as opposed to Flash being a pile of rusted dongs as far as security goes? A lot of software from that era was a pile of rusted dongs, bloated browsers included. Flash was also the first broken website I ever came across, for some restaurant I never ended up going to. If they can't show their menu in text, oh well.
As I recall, Flash and Java weren't so much security issues themselves; rather, the poorly designed gaping hole they used to enter the browser sandbox was impossible to lock down. If something like WASM had existed at the time to let them run fully inside the sandbox, I bet they'd still be around today. People really did like Macromedia/Adobe tools for web dev, and the death of Flash could only overcome its popularity because of just how bad those security holes were. I miss Flash, but I really don't miss drive-by toolbar and adware installation, which went away when those holes were closed.
We have failed to design a universal app engine…except for the one that dwarfs every other kind of software development for every kind of device in the world.
But that's the thing: if I'm doing audio and buying 128GB of RAM for the sake of making music with my sample libraries, loading hundreds of parallel tracks and being able to scrub through them without lag or audio clicks, I absolutely want to be able to load them to play with them.
Via Electron I’m sure it could. In the main browser it’s probably best to cap usage to avoid having buggy pages consume everything. Anything heavy like a video editor you’d rather install as an Electron app, for deeper system access and such.
Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.
I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).
Platforms and app engines are orthogonal concerns. I agree that platform guidelines are worth preserving, and the web as a platform solves it by hijacking the rectangle that the native platform yields to it. Any app engine could do the same thing.
No DNS, no DDOS, no network plane, no kubernetes, no required data egress, no cryptographic vulnerabilities, no surveillance of activity... It's almost like the push for everything to go through the web was like a psyop so everything we did and when was logged somewhere. No, no, that's not right.
> or offers some competitive UX advantage (although this reason is shrinking all the time).
As a user, a properly implemented desktop interface will always beat the web. By properly, I mean obeying the shortcut keys and conventions of the desktop world: alt+letter assignments for controls and functions, Tab moving between elements, pressing PageUp/PageDown while in the text entry area of a chat window scrolling the chat history above rather than the text entry area (looking at you, SimpleX), etc.
Sorry, not sorry. Web interface is interface-smell, and I avoid it as much as possible. Give me a TUI before a webpage.
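Honoring that convention is a few lines in any toolkit, by the way. A stdlib Tkinter sketch of the PageUp/PageDown behavior I mean (a toy, not a real chat client):

    import tkinter as tk

    root = tk.Tk()
    root.title("chat sketch")

    # Read-only history pane above, entry box below.
    history = tk.Text(root, height=12, state="disabled")
    history.pack(fill="both", expand=True)
    entry = tk.Entry(root)
    entry.pack(fill="x")
    entry.focus_set()

    def scroll_history(pages):
        history.yview_scroll(pages, "pages")
        return "break"  # stop Tk from also handling the key in the entry

    entry.bind("<Prior>", lambda e: scroll_history(-1))  # PageUp
    entry.bind("<Next>", lambda e: scroll_history(1))    # PageDown

    root.mainloop()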
Going further, if you're a hobbyist, you're probably instinctively prioritizing the aspects of the hobby that you enjoy. My first app was a shareware offering in the 1980s, written in Turbo Pascal, that was easy to package and only had to run on one platform. Because expectations were low, my app looked just as good as commercial apps.
Today, even the minimal steps of creating a desktop app have lost their appeal, but I like showing how I solved a problem, so my "apps" are Jupyter notebooks.
I have spent a good deal of my life writing software to put food on the table. I didn't interpret any of what he wrote in the way you describe. Perhaps you could explain why you did.
Both can be true: we can have different preferences about what we're doing to put food on the table and what we're doing when we build something on our own for other reasons.
To me it's not the same. I earn money doing software work for my employer, but I'd never think about creating a paid application myself. Feels icky to me.
I see a lot of this sentiment amongst developer friends, but I never could relate. It's not that I'm against it or something; it just doesn't move me personally.
Most things I create in my free time are for my and my family's consumption, and they typically benefit immensely from the write-once, run-everywhere nature of the web.
You can launch a small toy app on your intranet and run it from everywhere instantly. And typically these things are also much easier to interconnect.
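The whole "deploy" step for that kind of LAN toy can be stdlib Python (the port is arbitrary; drop an index.html next to the script and every device on the network can open http://<your-ip>:8000):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the current directory (index.html, app.js, ...) to the LAN.
    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("serving current directory on port 8000 ...")
    server.serve_forever()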
God damn that drives me up a wall! Mozilla is a terrible offender in this regard, but there are myriad others too!
The user interface is your contract with your users: don't break muscle memory! I would ditch FF-derivatives, but I'm held hostage by them because the good privacy browsers are based on FF.
Stop following fads! Be like craigslist: never change, or if you do then think long and hard about not moving things around! Also if you're a web/mobile developer, learn desktopisms! Things don't need to be spaced out like everything is a touch interface. Be dense like IRC and Briar, don't be sparse like default Discord or SimpleX! Also treat your interfaces like a language for interaction, or a sandbox with tools; don't make interfaces that only corral and guide idiots, because a non-idiot may want to use it someday.
I really wish Stallman could be technology czar, with the power to [massively] tax noncompliance to his computing philosophy.
Attitudes like these are why non-developers don't want to use open source software.
These concerns may not matter to you, the developer, but they absolutely matter to end-users.
If your prospective user can't find the setup.exe they just downloaded, they won't be able to use your software. If your conversion and onboarding sucks, they'll get confused and try the commercial offering instead. If you don't gather analytics and A/B test, you won't even know this is happening. If you're not the first result on Google, they'll try the commercial app first.
Users want apps that work consistently on all their devices and look the same on both desktop and mobile, keep their data when they spill coffee on the laptop, and let them share content on Slack with people who don't have the app installed. Open source doesn't have good answers to these problems, so let's not shoot ourselves in the foot even further.
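For what it's worth, the mechanics being argued over here are tiny. A generic sketch of deterministic A/B assignment (not any particular product's telemetry): hash a stable ID so each user always sees the same variant, with no tracking database required.

    import hashlib

    def ab_variant(user_id, experiment, variants=("A", "B")):
        # Deterministically map a user to one variant of an experiment.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
        return variants[digest[0] % len(variants)]

    print(ab_variant("user-1234", "onboarding-copy"))  # stable across runs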
I'm a seasoned developer and I frequently come across OSS projects where I spend half an hour or more in "how the fuck do I actually use this"-land. A lot of developers need to take the mindset of writing the documentation for their non-tech grandma from the ground up.
Don't presume that. People release OSS for all sorts of reasons, and you cannot assume anything. You also are not owed or entitled anything. If a maintainer wants to do something, they will. If they don't, then they won't, even if that thing might net them more users. It's not for you to decide, or even gripe about.
This presupposes that the OSS creator even wants users in the first place, which might not always be the case, as it could be personal software; and that these users actually want these features, when many do not want analytics, ads, and A/B tests in their apps.
I guess in the same way that one might presuppose a boat wants water?
If a piece of software doesn’t have users and the developers don’t care about the papercuts they are delivering, I would argue what they have created is more of an art project than a utility.
Science research without obvious practical application can still be important and valuable.
Art works without popular appeal can become highly treasured by some.
Open source software doesn't have to be ambitious to be worthwhile and useful. It can be artful, utilitarian, or an artifact of play. Commercial standards shouldn't be the only measure of good software.
It's more like building your own boat, and then someone else coming along and saying it'll never compete with a cruise ship because it doesn't have a water slide and an endless buffet; sometimes, things in the same category can serve wholly different purposes.
If my user cannot install software on their own computer then I do not want their money. They have issues they need to work out on their own, and they might be better off saving their money.
They're also ubiquitous for creative work, i.e. the sort of thing a small set of people spend much time on but most people never use. Examples:
- CAD / ECAD
- Art/photo software
- Musician software: composing, DAWs, etc.
- Scientific software of all domains: drug design, etc.
Adobe Photoshop, the most used tool for professional digital art, especially raster graphics editing, was the first example of a perfectly fine commercial desktop application converted to a cloud application with a single purpose: increased profit for Adobe.
Master Collection CS6 still works excellently, and is now (relatively) small enough to live comfortably in virtuo. Newer file formats can be handled with ffmpeg and a bit of terminal-fu.
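One flavor of that terminal-fu, wrapped in Python for repeatability (the filenames are hypothetical; requires ffmpeg on PATH): transcode a modern HEVC clip into ProRes so an older suite like CS6 can open it.

    import subprocess

    # HEVC in, ProRes 422 HQ video + uncompressed audio out.
    subprocess.run(
        ["ffmpeg", "-i", "clip_hevc.mp4",
         "-c:v", "prores_ks", "-profile:v", "3",
         "-c:a", "pcm_s16le",
         "clip_for_cs6.mov"],
        check=True,
    )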
Slicers for people doing 3D printing too (don't know if webapp slicers are more common than desktop app slicers though).
Desktop publishing.
Brokerage apps (some are webapps but many ship an actual desktop app).
And yet, to me, something changed: I still "install apps locally", but "locally" as in "only on my LAN", and they can be webapps too. I run them in containers (and the containers are in VMs).
I don't care much as to whether something is a desktop app, a GUI or a TUI, a webapp or not...
But what I do care about is being in control.
Say I'm using "I'm Mich" (immich) to view family pictures: it's shipped software (it's open source), and I run it locally. It'll never be "less good" than it is today: if it ever is, I can simply keep running the version I have now.
It's not open to the outside world: it's to use on our LAN only.
So it's a "local" app, even if the interface is through a webapp.
In a way this entire "desktop app vs webapp" debate is a false dichotomy, especially when you can have a "webapp (really in a browser) that you self-host on a LAN" on one side and a "desktop app that's really a webapp (say, wrapped in Electron) that only works if there's an Internet connection" on the other.
Agreed, desktop frameworks have been getting really good these days: Flutter, Rust's GPUI (which the popular editor Zed, notably a competitor to webview-based apps in the form of Electron, is written with), egui, Slint, and so on. Not to mention the ability to render your desktop app to the web via WASM if you still want to share a link.
Times have changed quite a bit from nearly 20 years ago.
I quit all social media, cancelled Spotify and whatnot, and I am hella thankful for the Strawberry media player as a desktop app, as it allows me to play all the music I actually own. I love desktop apps.
You should probably accept the fact that browsers are indeed application platforms. I'm not saying they should be, or that they are good at that role, but they absolutely are, at this point in time.
I prefer desktop apps because I KNOW when I've upgraded - either it said "upgrade now?" and did it, or, in the olden days, I had to track the update down, or I installed an updated version of a distro, which included updated apps, so I expected some updates.
There are some things that NATURALLY lend themselves to a website - like doctor's appointments, bank balance, etc - but it's still a pain when, on logging in to "quickly check that one thing" that I finally got the muscle memory down for because I don't do it that often, I get a "take a quick tour of our great new overhauled features" where now that one thing I wanted is buried 7 levels deep or something, or just plain unfindable.
For something like Audacity (the audio program), how the heck does it make sense to put that on a website (I'm just giving a random example, I don't think they've actually done this), where you first have to upload your source file (privacy issues), manipulate it in a graphically/widget-limited browser - do they have a powerful enough machine on the backend for your big project? - then download the result? It's WAY, WAY better to be able to run the code on your own machine, etc. AND to be stable, so that once you start a project, it won't break halfway through because they changed/removed that one feature you relied upon (no, not thinking of AI at all, why do you ask? :-)
Of course, I'm also an old-school hacker (typed my first BASIC program ~45 years ago), so I have a desktop mentality. None of this newfangled 17-pound-portable stuff for me :-) And phones are at best a tertiary computing mechanism: first, desktop, then laptop, then phone. So yes, I'm clearly biased. Not trying to hide that.
Nowadays, it seems that mobile apps have the "best metrics" for B2C software. I'd be interested to read a contemporary version of this article.
https://www.successfulsoftware.net
I'd wager there are more people paying for software for their smart phone than any other platform they use.
Your employer most likely has.
To save you a click: It's harder to monetize desktop apps than webapps.
Lol. LMAO, even.
I guess remote work is the best of both worlds.
ok, now do this analysis for mobile apps...
You've reminded me of the XKCD comic about standards: https://xkcd.com/927/
That's a terrible solution that preserves nothing. Try using a screen reader with an app rendered onto a rectangle.
Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.
And his point about randomly moving buttons to see if people like it better?
No fucking thanks. The last thing I need is an app made of quicksand.
Good! It's not for them! They can stay paypigs on subscription because they can't git gud!