Presumably because they think agents will become the dominant primary users of tools like Blender, and want a seat at the architectural table to help accelerate that & create useful synergies with Anthropic products and models?
The press release calls out the Blender Python API, specifically, which makes sense for agentic use.
> This support will be dedicated towards Blender core development, to maintain and continuously improve foundational features like the Blender Python API
Pretty much spells it out. They have an interest in extending/supporting the ability for Claude/CC to use and interact with Blender. There may be gaps in endpoints that Anthropic needs to enable certain patterns of automated usage.
Oh yes, I think this part is easy to see and perhaps even logical. But I also don't think this part is the problem. After all, generating 3D models is primarily a technical consideration. I don't think the technical prowess of the software is the issue here.
Aha - so that decision was not a consensus or community-made one? I really don't know right now; no clue about the internals at Blender. But that would be interesting ... I can see the headlines: "Blender community sold to Anthropic. Forks starting in 3 ... 2 ... 1 ..."
Sponsorship decisions are not community polls, never have been.
And the worries about "blender just being sold to xyz..." have been around forever. Always wrong. People with AMD cards were screaming when Nvidia became sponsor, and other way round.
I have mixed feelings about this. I guess they can use the money ... but still. Data goes to Anthropic here. It will also buy influence in some ways, I am sure of that. We could see this with rubygems.org - when Shopify threatened to cut funding some months ago, chaos suddenly erupted. Money buys influence; it is easy to see how.
And Blender tries to get funding from many different donors so that no single one can have any sway over them. Anthropic, as disgusting as they are, are just one more donor. Epic, Nvidia, Google, CoreWeave are also patrons. I don't worry about that donation.
Yep. Anthropic's motives are obviously self-interested (Claude <-> Blender integration), but I'm not donating to Blender, are you? That's the problem: we all want Blender to be able to pick and choose donations, but when all OSS is cash-strapped, that is easier said than done.
I'd prefer Blender get some additional funding out of this AI bubble at least.
Or it might allow proficient blender users to become more productive, resulting in higher detailed scenes for the same budget.
We'll see how it shakes out. As a non-proficient Blender user, I'm kinda keen on this since I have had a lot of ideas that I haven't been able to realize in Blender.
So reducing budgets and suppressing wages then - what a great deal for the workers who have specialised in this field, and whose work and effort the LLM has been trained on to replace them!
I don't think they can tell Blender what to do. As such it's just more money for Blender! Yes, Anthropic can use the Python API to do their AI BS, but an improved Python API is also good for anyone else. This doesn't mean that Blender themselves are integrating any gen AI (if you don't already count the denoise filters). Do you really think Blender should have denied the donation?
Shame that we have to choose between better financing of Blender for features we already want (Python API quality) and placating imo overly dramatic artists.
I think the worries of artists over gen AI are valid. I guess all the better that some of the money of those "not yet" profitable AI companies goes to a good open source project and not to some of their usual practices.
Artists mad about AI art ought to welcome this. This is about making art tools better, instead of replacing them entirely. The alternative to this is AI just generating art directly and making tools like Blender obsolete.
Art generators need to come a long way to completely replace art tools. I dabble, but if I were doing real work with it, there have been times it would have been faster to composite in a 3D model rather than keep trying to prompt an image generator into fixing something.
This is what I do. It’s been really helpful for taking existing FBX files and handing them off to the agent + Python Blender API to analyze the geometry, convert to GLBs, etc.
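A minimal sketch of the FBX-to-GLB handoff described above, assuming Blender's bundled Python (`bpy` only exists inside Blender or via the pip `bpy` wheel, e.g. run with `blender --background --python convert.py`); the import is guarded so the pure-Python part also works standalone. Function and file names here are made up for illustration.

```python
import os

try:
    import bpy  # only available inside Blender or via the "bpy" wheel
except ImportError:
    bpy = None

def glb_path_for(fbx_path: str) -> str:
    """Derive the output .glb path from an input .fbx path."""
    return os.path.splitext(fbx_path)[0] + ".glb"

def convert_fbx_to_glb(fbx_path: str) -> str:
    """Import an FBX, report basic geometry stats, export as GLB."""
    assert bpy is not None, "run this inside Blender"
    # Start from an empty scene so only the imported objects are exported.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.import_scene.fbx(filepath=fbx_path)
    for obj in bpy.context.scene.objects:
        if obj.type == 'MESH':
            print(obj.name, len(obj.data.vertices), "verts")
    out = glb_path_for(fbx_path)
    bpy.ops.export_scene.gltf(filepath=out, export_format='GLB')
    return out
```

An agent can drive exactly this kind of script headlessly, reading the printed stats to decide on follow-up edits.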
What model are you using? With codex and gpt-5.4 set to xhigh (and now gpt-5.5), it seems to have zero issues helping me with rigging and fixing glb/fbx models - works like a charm. One time I instructed it to iterate together with screenshots because it was a gnarly task, but usually it figures everything out even when headless.
Honestly, I think this is a stepping stone towards replacing industry CAD modeling tools.
AI _can_ work with 3D models already, but it's really bad at it. CAD requires an extra level of control and I think this is where I could see AI companies wanting to get a foot in the door.
e.g. "Let's build an adapter between 2in BSP Male and 3/4in NPT Female threads with a third Hose Barb outlet with the following properties..."
This might actually be quite nice - the Blender Python API is currently very useful and very touchy. Lots of differences in behavior in headless mode which are hard to debug (because you can't open the GUI to see what's happening, because that changes the behavior).
Yes the blender API feels like it sits on top of the GUI rather than the GUI on top of the API. When you are writing scripts in the blender api you basically mechanically describe the steps you would take in the UI. It can be a little fragile at times.
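The fragility described above mostly comes from `bpy.ops`, which replays UI actions and depends on the current context/selection (and so can behave differently headless); the lower-level `bpy.data` API builds datablocks directly. A sketch contrasting the two, with the `bpy` import guarded so the pure-geometry helper also runs outside Blender:

```python
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def cube_mesh_data(size=1.0):
    """Pure-Python cube geometry: 8 vertices and 6 quad faces."""
    s = size / 2.0
    verts = [(x, y, z) for x in (-s, s) for y in (-s, s) for z in (-s, s)]
    faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
    return verts, faces

if bpy is not None:
    # Operator style: mimics a UI click, depends on the active context --
    # the "touchy" path that can misbehave in background mode.
    bpy.ops.mesh.primitive_cube_add(size=1.0)

    # Data-API style: builds the mesh datablock directly, no UI context.
    verts, faces = cube_mesh_data()
    mesh = bpy.data.meshes.new("Cube2")
    mesh.from_pydata(verts, [], faces)
    obj = bpy.data.objects.new("Cube2", mesh)
    bpy.context.scene.collection.objects.link(obj)
```

Preferring the data API where possible is a common workaround for headless scripts, though some features are only reachable through operators.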
I've used Claude to write some Blender scripts and it's an excellent use case. I look forward to even better Claude/Blender interaction based on this announcement.
I've also used genAI to write scripts. It works splendidly up to a point, then there is absolutely no way to move the needle further. And it's not even close to renders I would ever publish.
That being said, it's about the same for the code it produces for non purely creative things, but for artistic work, I doubt an LLM in between gives any gain. After all, we do have an interface. A human interface.
Not everything is abstract art. Sometimes I want my subsurf modifier to only target certain vertex groups, and if I can use AI to make that happen in a few seconds, that's a huge win for me.
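As a caveat to the wish above: the Subdivision Surface modifier itself doesn't expose a vertex-group mask, but many modifiers (Smooth, Displace, etc.) do, and the vertex-group assignment step is the same either way. A sketch of that pattern via the Python API — the group name "upper" and the z-threshold rule are made up for the example, and `bpy` is guarded so the pure helper runs standalone:

```python
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def indices_above(coords, z_min):
    """Pure helper: indices of vertices whose z is at or above z_min."""
    return [i for i, (x, y, z) in enumerate(coords) if z >= z_min]

if bpy is not None:
    obj = bpy.context.active_object
    # Assign the upper half of the mesh to a new vertex group at weight 1.0.
    group = obj.vertex_groups.new(name="upper")
    coords = [tuple(v.co) for v in obj.data.vertices]
    group.add(indices_above(coords, 0.0), 1.0, 'REPLACE')
    # Point a group-aware modifier (Smooth here) at that vertex group.
    mod = obj.modifiers.new(name="SmoothUpper", type='SMOOTH')
    mod.vertex_group = "upper"
```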
Blender (and CAD programs as well) get in the way of creativity.
I know what I want, no idea how to tool my way there.
I spent two months going through YT tutorials, mucking about in Blender, in order to figure out how to put together the model I have in my head [1].
(A year later, a new project idea—and it's back to YouTube because the learning is not only a steep curve but also sometimes so esoteric that it's fleeting.)
I disagree that anyone should need LLMs for Blender, for example, because Blender is designed by people to be understood and used by people, even if it requires a learning curve. It sounds a bit dangerous to build new things we don't understand, or worse, reduce our understanding of what we currently use because (only after studying our use of the same technology) an LLM appears able to replicate it, mostly.
I'm reminded of Sam Altman's performative helplessness on Jimmy Kimmel, when he described being unable to care for a baby without ChatGPT. That's something I believe humanity has been capable of doing for a good portion of its existence, and not something we should hand over to a yet-unproven, yet-unprofitable technology.
Surely there's a middle ground where improved APIs can be leveraged by both people and LLMs alike while keeping those APIs approachable? Why is it necessary that changing the python APIs would lead to "need[ing] LLMs for Blender"? I'm nowhere close to an AI maximalist but this criticism seems grounded in execution concerns. I'm definitely not saying that they won't mess this up and make the APIs overly complex, I just don't think that's necessarily going to be the case.
Frankly, I love the idea of an automation engine printing out tangible works. I actually build spritesheets that way! Load a bunch of individual gimp files as layers, set them offset by a given parameter, and boom, done!
Would be rad to incorporate some statistical procedurally generated designs based on my own aparatus.
What I do not want to see is this realm of LLMs hijacking decades of hard work and consideration for integration channels to more tailor towards their LLMs, not for the diligent engineer.
If they want to extend their tentacles as far as they can while making products more difficult to work with for innovation of a different color, they are making enemies - of me, at least.
It's because LLMs will soon start building real-world objects via CAD. This is the first step. Look at things like the Adam plugin for Onshape. Works great with Opus. It built a toy car for me with one prompt.
All the CAD and modeling tools have their own scripting languages that LLMs can write to, so you can just use that directly without any built-in LLM support. There will probably be someone doing a pelican-on-a-bicycle for CAD.
If this is the case, they’ll want to improve the NURBS support within Blender. You can get some amazing results with subd, but digital twins require accuracy and you get that with NURBS geometry. Fortunately, Blender supports it already, it just needs some attention to tooling.
Maya has extensive NURBS tools, which means it can import and export CAD data natively. While Blender does support basic NURBS geometry, it lacks the tooling to fully support it.
If the idea is to support Blender for use with “Digital Twins” or “World Models” then the first step is to start with accurate geometry. Anything less is slop.
I once thought the same about all the copyrighted works on which LLMs are currently trained. Surely they can't just hoover everything up? Haha, silly me.
I understand that creating an LLM itself is transformative, but an LLM trained on copyrighted works remains capable of generating derivative works, which eventually will result in successful copyright lawsuits against LLM users who redistribute those derivative works.
In advance of that day, the great race is to build a licensed corpus as aggressively as possible (see GitHub's latest decision to opt in Copilot usage). Even if Blender doesn't send your data on every save, various options can be developed, such as publishing to a Blender-controlled public channel.
They already have corporate sponsorships from Google, Meta, Nvidia, and other big companies. Anthropic is just joining the list. This is actually good for Blender.
Yes, Claude is the AI doing my denoising. I keep running out of tokens with my 4k renders.
AI is a nebulous term. AI denoisers are not the same thing as an LLM or image gen model, the ire is directed at LLMs and not AI denoisers because they are completely different things.
Wasn't this one called "machine learning" for denoising and upscaling? That's completely different from an LLM replacing your job (after being trained on your work without permission).
The press watching side of me only has questions. Why was this published by Blender and not Anthropic? What does this actually mean? That the blender team gets free claude code max subscriptions?
What it means is here[1]. Anthropic is paying €240k a year and in return they get some marketing in the form of a press release and a website mention, as well as someone to talk to.
Can you imagine going to a football match and second-guessing which of the players who look human are, skin-deep, actually androids made at a factory? This is what it feels like with music and literature right now with so much AI. There are some pockets where you still can say "that's human-made", like 3D-rendered feature films with some particular artistic direction. That, it seems, AI companies also want to go the way of the dodo.
> like 3D-rendered feature films with some particular artistic direction.
This is a really interesting example. Why do you foresee artistic direction going away as a result of AI? More importantly: why didn't we lose that with the transitions through the years of special effects - i.e., from practical to 3D-rendered?
It's not an uncommon opinion that we did lose artistic direction and aesthetics by moving to vfx - the ability to edit more and more things in post to change the direction or plot of a film personally seems like it's enabled more design by committee in marvel films, etc
The pearl clutching over the pedigree of art is getting tiring. No one has really ever cared. Most mainstream music is written by corporate teams. Elvis didn't write his own music. Frank Sinatra didn't write his own music. Nearly all pop artists don't. But suddenly, people are now clamoring for art, but they never gave a shit to begin with. Most people can't tell AI written music from anything else if a human performer played it. Most of it is better than any local bands anyway. Tired of people pretending they care.
It’s subjective, because it’s art. There’s no right answer.
If you like listening to AI generated content, then that’s fine! I’m glad you found something you enjoy.
For me, I consume art because I want to understand other people. For example, when I go to an art museum I want to emotionally connect with the artist: to feel what they were feeling, or understand an idea they're conveying. I have little desire to emotionally connect with stochastic token sampling. It seems a vapid way to spend time.
You still assume the artist in those examples is real. It could be a team, a ghost artist, etc - yea it's less likely than music, but still. The connection itself is quite difficult too, given the ease in which someone could plagiarize others work - sure they have mechanical skill, but did they really invest in the painting or was it ripped off from others ideas?
I suspect your connection to real artists won't be impacted. This, like the music example, just highlights our assumptions.
I'm not defending this AI garbage fwiw, i just don't think it's as interesting as most people make it out to be. I adore music, and i connect with songs i connect with. I don't typically think about the possible ghost writers, teams of writers, ghost players, etc. The music either speaks to me or it doesn't.
Though i'm not trying to connect to the musician as a person. However, as i was illustrating - if i really wanted to connect to musicians at face value, that ship sailed many, many years ago. Far before AI.
There are ways to mitigate this, but that balance will always be there - it was before AI, and it will be after. It's an evolution. Not an enjoyable one perhaps, but it is nonetheless.
I arrange gigs with real bands playing music. At least that will take quite a while to replace with AI. I am curious to see if we will get a backlash eventually around the content. It will probably be a mix of everything.
Storytelling didn't go away when theatre was invented. Theatre didn't go away when cinema arrived. Cinema wasn't replaced when radio arrived, and that wasn't completely replaced by TV, etc. It is a mix of things these days and it will probably remain that way.
If Frank Sinatra had had AI, he wouldn't have had to perform any of that slop by Cole Porter, Irving Berlin, Kurt Weill, Rodgers & Hammerstein, and other composers no one cares about.
Yesterday I saw a clip that went "viral" of a few hogs chased by a humanoid robot somewhere in Poland. I had to watch it a few times to figure out if it was real or generated. I still wasn't 100% sure. Asked around in a group, and apparently it's been widely reported on regular news, so I guess it's real? But we're slowly getting to the point where you won't be able to tell, especially from a short clip on a phone.
Yes, and thanks for sharing the experience of the hog video - it was recommended to me too, and I chose not to click, as I did not want the frustration of seeing another "tech run amok" example, of tech disrupting YET ANOTHER norm.
Relatedly, IMO "trust" as a word / concept is deserving of being reevaluated nowadays.
E.g. I don't know that you, NitpickLawyer, are a real person. And when I go through the mental exercise of inventing the details, proofs, and evidence I'd need in order to satisfy my doubt, I never succeed until I reach the physical-contact-with-NitpickLawyer condition.
So I think we need to evaluate what is necessary for oneself to operate in society, separate from these untrustable things .. such as media / news reports, and all the other things I just don't want to worry about, right now. :-(
No-one cares dude. People like good enough, convenient things that serve their entertainment needs, which is shaped by said entertainment, so there is not really an issue here.
Since they are up against an insurmountable mountain of capital which will commoditize and optimize whatever it wants, they are kind of in for a pointless fight with an inevitable end. They could save themselves a lot of despair if they saw the writing on the wall and pivoted to something that still has value, or accepted the new reality instead of throwing a fit.
That is too difficult as the concept (of trusting one's perception) is, I believe, intertwined deeply with other aspects of being human, for many people.
It's not reasonable to require that those people be mentally organized in a manner that already mistrusts reality, in a healthy manner.
I care deeply. It is not single-handedly going to destroy humanity. However, we are clearly on a course where people are more isolated, less challenged, less social, and very very very unhappy. Music is one of those things that can really bring people together. If we flood the zone with AI music (or any other art form) we will slowly edge out the humans who are doing that. That is less new music. Less chances to come together. Less chances to dance together. It's a death by a thousand cuts. I, and many others, think it's worth fighting for because we want others to have the amazing experiences we're having.
Every generation has a new baseline. The younger generation will not be able to imagine having anything other than doctors and psychologists in the phone, and they are content with it because it's all they know. Social media might be all the social connection they have, and that will be the best thing where they will have the best experiences, they won't know another baseline. Eventually maybe the best experiences will be had with digital companions, etc.
The only losers here are old or bitter people who have tied up their worldview into their own time and cannot see or comprehend that the world has moved on with a different bound for the experiences and expectations.
> Eventually maybe the best experiences will be had with digital companions, etc.
Obviously I can't speak for all of Gen Z (and I realize we're no longer "the younger generation"), but my friends and I don't want any part of this, and feel optimistic rather than bitter that things won't go the way you're describing. I seldom meet anyone in my age group that isn't talking about moving away from social media, cancelling software subscriptions, all of the things that millennials and Gen X seem to be so excited to continue building and promoting.
Even at my workplace the "older" people are the ones that are excited about stuff like AI jazz remixes of rap songs and AI generated short films, while literally everyone else under 30 finds it pretty cringe and makes fun of them in DMs.
So all that to say, I disagree with your outlook, but I guess time will tell.
Talking about and doing something are different things. What are the social and market structures around your friends that let them avoid having a smartphone, cancel subscriptions, and uninstall everything? Do you see this getting better with media consolidation from Substack (Andreessen), Twitter (Musk), and YouTube channels by the hyperscalers/billionaires, and questionable mergers like Paramount and Warner Bros?
When the social culture is based around platforms and content that has subscriptions, and when media and what you see is consolidated, you can't just exit without losing a big part of the social context because the people around you are eating the same thing.
I dislike slop as much as anyone else. I think it puts a higher burden on the receiver of information to filter the signal in a pile of trash. I just don't really see an actual way out if you look at it from a societal level with the existing structures and incentives.
I've been using Claude with OpenSCAD to generate some simple models with repetitive geometry (a set of d8 dice with braille on them for a scrabble-like game for blind children). It's really good, though often I have to send a screenshot to Claude or describe a geometry issue.
Having more native integration into Blender, which I'm already much more familiar with, will be fantastic.
This[0] is the original game. I downloaded the dice, made a list of letters for each die (I can't remember if I did this manually; I don't see a published list, so I must have), and then I fiddled until I got something that looks decent and also printable. Each die face has the braille letter as well as a small English letter. Here's[1] my repo; I wasn't intending to make it public yet, so it still has the original creator's files in there and the README is autogenerated.
The biggest challenge at this point is figuring out how to make the dice print consistently. With each die face only having a few points of contact, they keep unsticking. What I'm trying now is cutting the dice in half, printing the halves, and then sticking them together with dowels.
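For a sense of the kind of OpenSCAD such a pipeline can emit, here is a hypothetical sketch: raised hemispheres for one letter's braille dot pattern on a die face. Dot numbering follows standard braille (dots 1-3 down the left column, 4-6 down the right); `pitch` and the dot radius are made-up parameters, and the letter table is a tiny excerpt.

```python
# Standard braille cells for a few letters (dot numbers that are raised).
BRAILLE = {"a": (1,), "b": (1, 2), "c": (1, 4)}  # tiny excerpt

# Dot number -> (column, row) within the 2x3 braille cell, row 2 at the top.
DOT_XY = {1: (0, 2), 2: (0, 1), 3: (0, 0),
          4: (1, 2), 5: (1, 1), 6: (1, 0)}

def braille_scad(letter, pitch=2.5, r=0.75):
    """Emit OpenSCAD spheres for one letter's raised dots."""
    lines = []
    for dot in BRAILLE[letter]:
        x, y = DOT_XY[dot]
        lines.append(
            "translate([%g, %g, 0]) sphere(r=%g);" % (x * pitch, y * pitch, r))
    return "\n".join(lines)
```

The emitted snippet would be unioned onto a die face in the surrounding .scad file; generating it from a letter table keeps the dot layout consistent across all faces.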
Similarly, I made an agent that lets Claude puppet OpenSCAD, generate screen shots, change the camera angle, etc. In general Claude seems to have a pretty good vision model that can create usable designs. It's also fun to let it make up new models of its own and then try to 3D print them.
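The screenshot step of such an agent can be done headlessly: OpenSCAD's CLI renders a PNG with `-o`, and `--camera`/`--imgsize` control the viewpoint. A minimal sketch assuming the `openscad` binary is on PATH (file names are made up):

```python
import subprocess

def render_cmd(scad_file, png_file, camera=(0, 0, 0, 55, 0, 25, 140),
               size=(800, 600)):
    """Build the openscad CLI invocation for a headless PNG render.

    camera uses OpenSCAD's translate_x,y,z,rot_x,y,z,distance form,
    so the agent can orbit the model by varying the rotation values.
    """
    return [
        "openscad",
        "-o", png_file,
        "--camera=" + ",".join(str(v) for v in camera),
        "--imgsize=%d,%d" % size,
        scad_file,
    ]

def render(scad_file, png_file, **kw):
    subprocess.run(render_cmd(scad_file, png_file, **kw), check=True)
```

An agent loop then becomes: write the .scad, render a few angles, feed the PNGs back to the vision model, and repeat.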
Chances are they were expecting the agent to spoon-feed hundreds of influencers.
I wonder if Ton was involved in that decision, or if it was only Francesco. It could turn out to be a very unlucky start to the leadership role.
It is more about the signal sent, in this case.
For everyone who is interested, here is the Mastodon thread: https://mastodon.social/@Blender/116482997785333001 (it is just what you'd expect, though)
You know, for now...
Exactly right. Everyone online is all too happy to proclaim which hill other people should die on, but is rarely willing to go up there themselves.
it seems pretty active, albeit small donations at a time.
Some of them, like the illustrious MrDoob (behind Threejs), love AI and are all-in on it.
The VFX folks at Corridor Crew [1] have been leaning into AI for years now and showing a healthy attitude and path forward to using AI in workflows.
[1] https://www.youtube.com/@CorridorCrew
Money is good. But not antagonizing your community (as an open source project) is better.
So they want claude to be able to talk to blender
Not sure if this one was the one I saw, but Google gave me this one. You could use Claude Code to build things with Blender.
https://blender-mcp.com/
https://youtu.be/LZMWsZbZU5w
It felt weak at it, like the corpus wasn't strong with Blender/Python work to look through, but it got going fairly fast with some coaxing.
As an amateur this is really exciting - but not sure about folks that are real pros at this stuff.
"Some software" is approaching levels of complexity where, perhaps, a human is barely able to even use it.
At the same time (brave new world) LLM assisted software opens up the possibility of levels of complexity we would not have considered before.
Art should demand more of the creator than the person experiencing it.
The alternative is 9 billion who-cares slop things.
[1] https://github.com/EngineersNeedArt/Space-Tug_3DModel
Well, it's just a "you" problem. Some folks have better skills, knowledge, and comfort with difficult subjects. And that's fine.
Further, I'm suggesting "designed by people to be understood and used by people" might be a hurdle for some future software we might envision.
(Altman's performance is orthogonal as I'm suggesting a new level of software that has not yet been written/conceived.)
MuBlE: MuJoCo and Blender simulation Environment and Benchmark for Task Planning in Robot Manipulation: https://arxiv.org/abs/2503.02834
There already are LLM plugins for Blender, and prompt integration for model generation, rigging and co.
Blender already has a ton of other Corporate Patron-level sponsors, such as Netflix, Meta, Intel, BMW, Adobe and others.
[1]: https://fund.blender.org/corporate-memberships/
If Blender doesn't grow AI capabilities, its utility in the future will be severely degraded.
If you haven't seen 3D mesh, texturing, PBR, and retopo tools, they're getting extremely good.
[0] https://www.youtube.com/watch?v=LZMWsZbZU5w
[1] https://www.youtube.com/watch?v=Gen8rG40ntA
This is unsurprising as a general development, other than that Anthropic doesn't have a 3D model generation framework.
I don’t think this is to create MCP servers necessarily but rather to improve the blender pipeline further.
If not, doesn't your argument entirely miss the point?
- https://donnybenet.bandcamp.com/album/il-basso
Totally not written by Google.
[0] https://www.printables.com/model/821177-octobabble-a-word-ba...
[1] https://github.com/PeterFajner/braille_octobabble/
“We love art :P”