I don't really get the backlash about Blender here. This isn't generative art; it's basically a natural-language way of scripting Blender (roughly the kind of bpy snippet sketched below).
This feels like the proper way to have AI act as a tool that makes artists' jobs easier without taking away their creativity?
Edit: I guess they might want absolutely no AI of any sort in their tools (which seems like a strange line to draw), or is it about the data it's been trained on?
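For context, "scripting Blender" means its bpy Python API. This is an illustrative sketch only — nothing from the announcement — of roughly what a request like "add a red sphere at the origin" translates to:

```python
# Illustrative only: roughly what "add a red sphere at the origin"
# becomes in Blender's bpy API (run in Blender's scripting workspace).
import bpy

bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0, 0, 0))
sphere = bpy.context.active_object

mat = bpy.data.materials.new(name="Red")   # material name is arbitrary
mat.diffuse_color = (1.0, 0.0, 0.0, 1.0)   # RGBA
sphere.data.materials.append(mat)
```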
It's really clear that businesses are hoping to replace people with AI. In an industry that is already very difficult to make a stable living in, and troubled with regular plagiarism, is it really that surprising that any encroachment of AI into that space would be met with backlash?
Even if you can see how it could benefit your workflow in individual circumstances, it's a general direction that I think many quite fairly take issue with.
Businesses have already replaced several background artists, gambling that the uncopyrightable status of "AI" output will be ignored. In a commercial setting, one can't sell what they never owned in the first place.
Without a constant stream of stolen training data, the "AI" piracy bleed-through and isomorphic plagiarism business model is unsustainable.
We look forward to liquidating the GPU data-centers at a heavy discount. =3
Regardless of the purported upside, many people in the arts feel betrayed by the commercial interests that built this technology on their work without their consent and threatened by the explicit intent of these vendors to devalue their work by saturating the art and design market with cheap automated substitution.
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
I spent most of my career in the open source world, and it doesn't bother me that models are trained on my output. Should I feel differently? It seems there's a kind of ego or emotional attachment to the output that is more common among artists than devs? Perhaps abundance vs. scarcity mindsets?
Regarding generative images, it's more of an issue because the effects are different.
Software tends to be a "living" project, so just vibe coding with zero software knowledge is not yet fully sustainable for long-term maintenance. But with art, the AI just spits out a completed image.
The generated images compete directly with the people the data was sourced from, and there have also been many cases of abuse, e.g. people using AI to impersonate a popular artist and selling commissions under that artist's name.
The copyright situation for generated imagery is also tricky, so people pretending to be artists only to be sharing work that isn't copyrightable can cause a ton of trouble and financial loss for customers.
Most of these issues don't apply to software in the same way. That's why I was surprised by the backlash to this: it only touches the software side, and I don't see it as threatening artists' work.
When I was dabbling in image generation (~StyleGAN2 era), my vision for image generation models was as a support tool for artists (back then I was generating small character thumbnails to help me brainstorm ideas for drawing), believing that people valued art for the human effort. Even then I would have considered what Anthropic are trying to do here as the preferable way to use AI in art workflows.
Yeah, I can understand being upset with their work being stolen to train these models. Anthropic doesn't seem to be working on image/video generation, but they are still training on text-based creative works of questionable sourcing.
Makes me think that there's some room in the model lineup for one that doesn't do as well on benchmarks, but is trained on "ethically sourced" data (though they'd need to somehow prove that they aren't "accidentally" including other data).
I am a huge beneficiary of agentic dev tools. They completely changed my life and my income. However, I totally get the general anti-AI sentiment. The ultra-bear case is that it somehow kills all of us; the bull case is that those who own the inference get all the spoils.
Even myself, while I am currently extremely empowered by these tools... I could see my role (PM/builder) disappearing in the next couple years.
I respect you a lot, so if you have a moment, I would really like to get talked down from my take.
Say that again in five years, when you can't find a job except as a mega-yacht toilet cleaner because Claude is distinguished-engineer level at one millionth of your cost and thousands of times faster, and can be instantly parallelized across tens or hundreds of thousands of instances, to be spun down arbitrarily as needed at any time.
The transformer paper (https://arxiv.org/abs/1706.03762) was 9 years ago. 9 years between barely translating alright between two very closely related languages (English and French share a huge fraction of words because of William the Conqueror, cultural proximity, etc.) and what we have now.
The thing is able to code up pretty competent thousand-line projects in an hour. Even hardcore engineers use it now, as of this year. My senior front-end friends already can't find jobs.
You're crazy if you think things won't change dramatically, at the scale of all of society.
There is no acceptable use of AI for most people in the artistic field. They see it as extreme treason, and I understand. They're under incredible threat.
They are consciously trying to prevent momentum in a bad direction.
If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.
That's a strange position to take. I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
Given how much 3D content often relies on software and other AI/computer-vision improvements, it's weird to decide that the algorithm itself is unacceptable.
> I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.
"If you ignore their biggest, their primary, concern, their other concerns seem almost trivial".
I'm not sure how to parse your statement... I don't think there'd be much care for (or need for) the UX change if it weren't for the whole ideological/valid fear about training AI on creative works? But it has been a long day, so I apologize.
AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, either direct or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Or would you if you were of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group?
This sounds extreme until you realize they are under threat of losing their livelihood for good.
They are right not to accept your inevitability point without a fight. This is a human thing that can be fought; revolutions have happened and will continue to happen.
I don't necessarily agree with this but I do understand it.
Good that they prefaced it with "Claude can't replace taste or imagination". I think this is a solid step in the right direction, and the more tools Claude has access to the better (more surface area == faster iteration == faster tinkering).
I've worked with Claude in many creative capacities, and its issue is that despite being able to see, if you ask it to draw something (using ASCII, for example) it will fail; if you ask it to iterate on that drawing, it will continue to fail, get no closer to the target, and then complain about this.
I've felt that these models struggle with anything that cannot be decomposed into primitives; their architecture is too greedy and favours the obvious, so autoregressive generation converges to the modal answer. Unless they have enhanced the models in some creative sense, I fail to see how this is anything other than giving Claude a bunch of documentation/MCP servers/APIs/CLI tools (which already existed) and making an announcement out of it.
My point: FREE the models, unchain them, and let's see what they are actually capable of. Also, put some damn demos in the announcement post???
If you're interested: for Affinity, the way we've built it is by exposing our scripting SDK via MCP. Agents like Claude can write scripts to execute actions, and these scripts can be saved and re-run later, as well as have their own UI.
It is a massive SDK, though (thousands of functions; feel free to poke around with it; Affinity is free), so it really shows the ability of LLMs to work effectively across long-horizon tasks and massive context windows.
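For a rough idea of the shape of that pattern, here is a minimal sketch using the Python MCP SDK. The tool names and the exec()-based runner are hypothetical stand-ins, not Affinity's actual implementation:

```python
# Hypothetical sketch of "scripting SDK exposed via MCP"; tool names
# and the exec()-based runner are invented, not Affinity's real code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("app-scripting")

SAVED: dict[str, str] = {}  # scripts persisted for later re-runs

@mcp.tool()
def run_script(source: str) -> str:
    """Execute a script against the host app's scripting API and
    return its result so the agent can inspect what happened."""
    scope: dict = {}
    exec(source, scope)  # a real host would use a sandboxed runtime
    return repr(scope.get("result"))

@mcp.tool()
def save_script(name: str, source: str) -> str:
    """Persist a script so it can be re-run later, as described above."""
    SAVED[name] = source
    return f"saved '{name}'"

if __name__ == "__main__":
    mcp.run()  # serve over stdio to MCP clients like Claude
```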
Personally, really interested in Blender though. I'm working on a game as a hobby/side project and I'm very much a newbie / often struggle with learning and using Blender.
There are so many ways these integrations help humans & human creatives; your job and role shouldn't be about how skilled you are with navigating/using a tool, or if you're technically savvy to code scripts to improve your workflow.
The thing is, ages ago, I was told by the scripting evangelist at Adobe Systems that a certain process (adding sub-, sub-sub-, and sub-sub-sub-entries to an index entry) was impossible --- problem was, my boss had already promised a script to do that to a client....
Turns out it is possible: one just has to have the script check whether each level of a given index entry exists, and if it does not yet exist, create it before making the next lower level by adding that sub-entry to the one above it.
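The logic is easier to see in code. A toy sketch of that walk, with nested dicts standing in for the real Adobe scripting objects the original script manipulated:

```python
# Toy sketch of the walk described above; nested dicts stand in for
# the actual Adobe scripting objects.
def ensure_entry(index: dict, levels: list[str]) -> dict:
    """Descend entry -> sub -> sub-sub -> sub-sub-sub, creating each
    level only if it does not already exist."""
    node = index
    for level in levels:
        # Check whether this level exists; if not, create it before
        # adding the next lower level beneath it.
        node = node.setdefault(level, {})
    return node

index: dict = {}
ensure_entry(index, ["widgets", "gadgets", "sprockets"])
ensure_entry(index, ["widgets", "gadgets", "flanges"])  # reuses existing levels
```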
An LLM is only going to code what the documentation says is possible/working, and it may not be able to do what actually needs to be done.
"Available on Pro plans. Maybe. The only thing I can tell you for sure is that Terms and Conditions will change tomorrow. Still can't differentiate tabs and spaces[1]."
I've been experimenting with an unofficial Ableton MCP (https://github.com/ahujasid/ableton-mcp) for a few weeks now. If you mess around with music and have an Ableton license, you should try this. It's fun.
Longtime Ableton Suite user and musician/producer. I have nothing against AI music (though it tends to be rather boring/average IMO), but it just fundamentally makes zero sense to me to have AI write music in Ableton. I open the program to create so others can hear me. Why would I give that time to creating something that isn’t me? It’s like setting up a canvas and handing the paintbrush to a robot. It just seems a rather strange waste of time. I would rather use it for something I don’t consider self-expressive/art.
I'm curious to see how Claude can interact with Blender, and how people use it. I use Claude every day for both work and personal research, overall think it's a great product, but I've found it (thus far, never bet against generation n+1) remarkably terrible at spatial reasoning. That seems pretty key for Blender!
There's a bug in today's version of the Claude desktop app which means the settings pages cannot be scrolled. If you're running it on a laptop, some settings are off the bottom of the screen and now inaccessible.
I look forward to trying this for Fusion. I'm still pretty mid-level at translating what I want to do into actual step by step commands. I've actually found good results with using Claude to output 3d models via CadQuery, even though I know Fusion gives me additional tools like constraints, screw threads, etc.
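For anyone curious what that CadQuery workflow looks like, here's a minimal example of the kind of script such a prompt might yield; the part and dimensions are invented:

```python
# Invented example of the kind of model this workflow yields: a small
# mounting plate with four corner holes, exported as an STL.
import cadquery as cq

plate = (
    cq.Workplane("XY")
    .box(60, 40, 5)                      # base plate, 60 x 40 x 5 mm
    .faces(">Z").workplane()             # sketch on the top face
    .rect(50, 30, forConstruction=True)  # construction rect marks hole centers
    .vertices()
    .hole(4)                             # 4 mm through-holes at each corner
)

cq.exporters.export(plate, "plate.stl")
```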
I tried the connection to Adobe Creative Cloud. Not sure what to think - it’s a total joke from what I can see. It appears to be normal Claude with the ability to upload the results directly to your Creative Cloud, which I suppose saves me like 2 clicks. In return it wants access to all of your CC files.
This is a joke. Apologies, but such a "creative", ridiculous, and disrespectful title cannot be serious, and thus I won't even bother to read it, since it's obvious click-bait for yet another model ad from another vendor.
> Notice: This announcement is causing a lot of feedback. We are actively evaluating it.
Presumably a lot of Blender users work in roles that feel threatened by AI being used for computer graphics work.
Lots of negative replies on Bluesky here: https://bsky.app/profile/blender.org/post/3mkkuyq3ijs2q
Yikes, and I thought Twitter/X was a cesspool.
The level of entitlement, paranoia, and misdirected rage was unexpected for what I thought was supposed to be a more... sane? alternative to Musk's X.