This actually looks pretty good. The key takeaway I got was that they know their business depends on intellectual property rights, and that generative AI in final outputs or production work undermines the foundation of their future success by discounting or dismissing IP law and rights.
That’s likely to be the middle ground going forward for the smarter creative companies, and I’m personally all for it. Sure, use it for a pitch, or a demo, or a test - but once there’s money on the line (copyright in particular), get that shit outta there because we can’t own something we stole from someone else.
Or they can do like Call of Duty, which just makes skins "heavily inspired" by franchises they don't own. The week Borderlands 4 came out, they put out a few cel-shaded skins that heavily resemble the look of that game's characters; there's one skin, called "Vibrant Serpent", that is pretty much Reptile from Mortal Kombat; they got a bit of heat in May of this year for releasing a skin that looked too much like one from another game, High On Life; and the list goes on. It reminds me a lot of the knockoff disguises Spirit Halloween sells every October.
And yes, I know they do legal, agreed-upon partnerships, like with the Predator franchise or the Beavis and Butt-Head franchise (yes, they exist in CoD now...), but those account for only a tiny number of the premium skins.
The Call of Duty series makes me so sad. I remember when CoD 4 came out it felt like a genuinely groundbreaking and innovative thing, and I was so pumped to see what IW did next. And then Activision took all of that talent that was genuinely exploring new ground in game development and stuck them in the yearly-rerelease-of-the-same-damn-game mill until everyone got burnt out and left.
I thought they were on a biyearly schedule, swapping with Treyarch?
CoD 4 was in some ways the beginning of the end for a lot that we took for granted in gaming up to that point. I remember when it released and a couple of us went to my friend's house to play it. Boy were we in for a shock when there was no co-op multiplayer like Halo 3 had.
For the record, Arc Raiders (just released) makes me feel like I'm back playing MW2 in the golden days. Just in the sense of playing an awesome game and riding the wave of popularity with everyone else.
Not me. The mix of parkour and multiplayer shooting on beautiful, highly detailed maps is something I like a lot; nothing even compares in that regard. I know the game is a shameless skin store, but I do appreciate the former. Although I also hate how small a lot of the maps are (glances at Nuketown).
I hate how parkour has infested the FPS genre. There's this whole movement meta now that I don't care about at all, yet you have to learn it if you don't want to go 3 and 12, and it's in most games now.
Totally. Titanfall 2 is one of my favorite games ever, but by the time I discovered it, the multiplayer was pretty much dead: no players and no recent updates.
Other than that, just a bit of common sense tells you all you need to know about where the data comes from (datasets never released, outputs of the LLMs suspiciously close to original copyrighted content, AI founders openly saying that paying for copyrighted content is too costly, etc.)
It's partly about Netflix getting sued by someone claiming infringement, but also partly (maybe mostly) about Netflix maintaining their right to sue others for infringement.
The scenario looks like this:
* Be Netflix. Own some movie or series where the main elements (plot, characters, setting) were GenAI-created.
* See someone else using your plot/characters/setting in their own for-profit works.
* Try suing that someone else for copyright infringement.
* Get laughed out of court because the US Copyright Office has already said that GenAI output is not copyrightable. [1]
I know that people get very up in arms about AI in creative industries - but I feel like people don't necessarily understand that even in creative industries there is a LOT of monotonous, exploitative grunt work.
For every person who gets to make a creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by a massive team of illustrators for whom "creative liberty" is a liability to their career.
(And this example only covers the creative aspects of film-making. There is a lot of ordinary corporate and logistical work that never even affects what you see.)
That's not to say I'm looking forward to the wave of lazy AI-infused slop that is heading our way. But I also don't necessarily agree with the grandstanding that AI is inherently anti-creative or only destructive. I reserve the right to be open-minded.
The irony is that movies and TV themselves represented a cheaper, industrialized and commoditized alternative to theater. And theater is still around and just as good as it ever was.
>For every person who gets to make a creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by a massive team of illustrators for whom "creative liberty" is a liability to their career.
This is vastly oversimplified and misleading. Key animators have a highly creative role. The small decisions in the movements, the timings, the shapes, even scene layouts (Miyazaki didn't draw every layout in The Boy and the Heron), are creative decisions, and they are the basis on which Miyazaki handpicked his staff. Miyazaki conceived of the opening scene [0] in that film with Shinya Ohira as the animator in mind [1]. Even in his early films, when he was known to exert more control, animator Yoshinori Kanada's signature style is evident in the movements and effects [2].
> For every person who gets to make a creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions.
Yes, but at least those decisions come from a person (or a few people), not just an algorithm.
Shouldn’t be particularly surprising Netflix is leaning in here - they’ve been pretty open about viewing themselves as “second screen”/background content for people doing other things. Their primary need these days is for a large volume of somewhat passable content, especially content they can get for cheap. Spotify’s in a similar boat and has been filling the recommended playlists up with low-royalty elevator music.
"Generated material is temporary and not part of the final deliverables" sounds like they are not looking to generative AI for content that they will air to the public.
Later on they do have a note suggesting that the following might be OK if you use judgement and get their approval: "Using GenAI to generate background elements (e.g., signage, posters) that appear on camera"
"If you can confidently say "yes" to all the above, socializing the intended use with your Netflix contact may be sufficient. If you answer “no” or “unsure” to any of these principles, escalate to your Netflix contact for more guidance before proceeding, as written approval may be required."
They do want to save money by cheaply generating content, but it's only cheap if no expensive lawsuits result. Hence the need for clear boundaries and legal review of uses that may be risky from a copyright perspective.
They also mention reputation/image in there. If I can’t tell something is generated by AI (some background image in a small part of a scene), it’s just CGI. But if it’s an uncanny-valley view of a person/animal/thing that is clearly AI-generated, that shows laziness.
Yeah, I read the "Talent" section and it's very balanced. I can't see much, if anything, to complain about, so thank goodness for SAG-AFTRA. The strike a couple of years ago was well judged.
I think it would be very, very difficult - almost impossible - to create a dataset to train an image generator that doesn't contain any copyrighted material you don't have the rights to. The obvious stuff like Mickey Mouse or Superman you can filter out by running some other tool over the data, but there are so many ridiculous things that can be copyrighted (depictions of buildings, tattoos), plus things like crowd shots and pictures of cities with ads in the background, that I don't know how you could do it. I'm sure even Adobe's stock library has a lot of violations like that.
If you take a model trained on Getty and ask it for Indiana Jones or Harry Potter, what does it give you? These things are popular enough that it's likely to be present in any large set of training data, either erroneously or because some specific works incorporated them in a way that was licensed or fair use for those particular works even if it isn't in general.
And then when it conjures something like that by description rather than by name, how are you any better off than something trained on random social media? It's not like you get to make unlicensed AI Indiana Jones derivatives just because Getty has a photo of Harrison Ford.
I work in this space. In traditional diffusion-based regimes (paired image and text), one can absolutely check the text to remove all occurrences of Indiana Jones. Likewise, Adobe Stock has content moderation that ensures (up to the limits of human moderation) no dirty content. To the model, it is a world without Indiana Jones.
Yeah, there's no way Indiana Jones was not in the training data that created that image. To even say it's not in there is James Clapper in front of Congress level lying.
> one can absolutely check the text to remove all occurrences of Indiana Jones
How do you handle this kind of prompt:
“Generate an image of a daring, whip-wielding archaeologist and adventurer, wearing a fedora hat and leather jacket. Here's some back-story about him: With a sharp wit and a knack for languages, he travels the globe in search of ancient artifacts, often racing against rival treasure hunters and battling supernatural forces. His adventures are filled with narrow escapes, booby traps, and encounters with historical and mythical relics. He’s equally at home in a university lecture hall as he is in a jungle temple or a desert ruin, blending academic expertise with fearless action. His journey is as much about uncovering history’s secrets as it is about confronting his own fears and personal demons.”
Try copy-pasting it into any image generation model. The result looks awfully like Indiana Jones in all my attempts, yet I've not referenced Indiana Jones even once!
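A minimal sketch of the name-based caption filtering being debated here (the blocklist and captions are invented for illustration) shows exactly why a purely descriptive prompt slips through:

```python
# Toy name-based training-caption filter. Real pipelines are far more
# elaborate; this only illustrates the failure mode discussed above.
BLOCKLIST = {"indiana jones", "harrison ford", "harry potter"}

def is_clean(caption: str) -> bool:
    """Reject captions that mention a blocklisted name outright."""
    lowered = caption.lower()
    return not any(name in lowered for name in BLOCKLIST)

captions = [
    "Indiana Jones cracks his whip",
    "a whip-wielding archaeologist in a fedora and leather jacket",
    "a boy wizard with a lightning-bolt scar",
]
clean = [c for c in captions if is_clean(c)]
# The literal name is dropped, but the two descriptive captions survive,
# which is the loophole the prompt above exploits.
```

Name matching removes the first caption but keeps the descriptive ones, so a model trained on the "clean" set can still reproduce the look by description.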
It comes down to who is liable for the edge cases, I suspect. Adobe will compensate the end user if they get sued for using a Firefly-generated image (probably up to some limit).
Getting sued occasionally is a cost of doing business in some industries. It’s about risk mitigation rather than risk elimination.
Whistleblowers, corporate leaks, output resembling copyrighted content etc.
Basically, it feels the same as companies that unlawfully use licensed code as their own (e.g., without respecting the GPL).
Consumers have long wanted a single place to access all content. Netflix was probably the closest it ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to Netflix, and Netflix is now arguably just another face in the crowd.
Now they want to go and leverage AI to produce more content and bam, stung by the same bee. No one is going to license their content for training, if the results of that training will be used in perpetuity. They will want a permanent cut. Which means they either need to support fair use, or more likely, they will all put up a big wall and suck eggs.
One of the issues with using LLMs in content generation is that instruction tuning causes mode collapse. For example, if you ask an instruction-tuned LLM to generate a random number between 1 and 10, it might pick 7 as much as 80% of the time. Base models do not exhibit the same behavior.
“Creative Output” has an entirely different meaning once you start to think about how these models actually work.
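The "picks 7" effect above can be made concrete by comparing the entropy of sampled outputs. The draws below are invented to mimic base versus instruction-tuned behavior, not real model output:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of an empirical sample distribution."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical draws of "a random number between 1 and 10":
# a base model spreads its picks; a tuned model fixates on 7.
base_model = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 10   # near-uniform: ~3.32 bits
tuned_model = [7] * 80 + [3, 4, 8, 9, 10] * 4        # collapsed onto 7: ~1.19 bits

assert entropy(base_model) > entropy(tuned_model)
```

Lower entropy is exactly what "mode collapse" means here: the tuned distribution concentrates its mass on a single answer.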
>GenAI is not used to replace or generate new talent performances
This is 100% a lie.
Studios will use this to replace humans. In fact, the idea is for the technology – AI in general – to be so good you don't need humans anywhere in the pipeline. Like, the best thing a human could produce would only be as good as the average output of their model, except the model would be far cheaper and faster.
And... that's okay, honestly. I mean, it's a capitalism problem. I believe with all my strength that this automation is fundamentally different from the ones from back in the day. There won't be new jobs.
The issue isn't whether they said that thing or not; companies say a lot of things that are fundamentally lies, things to keep up appearances, which are oftentimes not enforced. It's like companies arguing they believe in fair pay while using Chinese sweatshops or whatever.
In this case, for instance, Netflix still has a relationship with their partners that they don't want to damage at this moment, and we are not at the point of AI being able to generate a whole feature-length film indistinguishable from a traditional one. Also, they might be apprehensive about legal risks and copyrightability at this exact moment; big companies' lawyers are usually pretty conservative about taking any "risks," so they probably want to wait for the dust to settle as far as legal precedents and the like.
Anyway, the issue here is:
"Does that statement actually reflect what Netflix truly thinks? Do they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.
I’m inclined to agree. The goalposts will move once the time is right. I’ve already personally witnessed it happening; a company sells their AI-whatever strictly along the lines of staff augmentation and a force multiplier for employees. Not a year later and the marketing has shifted to cost optimization, efficiency, and better “uptime” over real employees.
The truth is that Netflix, Amazon, or any other company, honestly, would fire 99% of their workforce if it were possible, because they only care about profit – hell, they are companies, that's why they exist. At the same time, brands have to pretend they care about society, people having jobs, the climate, whatever, so they can't simply say: "Yeah, we exist to make money and we totally want to fire you guys as soon as possible." As you said, it's all masked as staff augmentation and other technical mumbo jumbo.
>GenAI is not used to replace or generate new talent performances
>> This is 100% a lie.
We’ve had CGI for decades and generally don’t mind. However, at the point where AI usage becomes a negative (e.g., the content appears low quality because of it), I’d expect some backlash and pulling back in the industry.
In film and TV, customers have so much choice. If a film or show is low effort, it’s likely going to get low ratings.
Every business and industry is obviously incentivized to cut costs, but if those cost cuts directly affect the reputation and image of your final product, you probably want to choose wisely which things you cut.
I think you're right, in general - certainly AI will replace background actors, though that's already been happening for years without AI generation. I'm also pretty sure that if/when AI can generate whole films, then that'll happen, too.
However, this statement is a hell of a lot better than I expected to see, and suggests to me that the actors' strike a few years ago was necessary and successful. It may, as you say, only be holding back the "capitalism problem" dike, but... At least it's doing that?
I would somewhat disagree that this statement is a sign the strike was a success because, like, AI is not at the point of generating a whole movie at human quality today, so Netflix issuing a statement like this now, in November 2025, costs them literally nothing, and it feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse course, and this statement won't have delayed Netflix embracing AI films much, if at all.
> I would somewhat disagree that this statement is a sign the strike was a success because, like, AI is not at the point of generating a whole movie at human quality today, so Netflix issuing a statement like this now, in November 2025, costs them literally nothing, and it feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
>
> When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse course, and this statement won't have delayed Netflix embracing AI films much, if at all.
There’s no guarantee AI will get good enough to replace anyone. We’ve pretty much run out of training data at this point. I’m a little annoyed that people speak about future progress like it’s an inevitability.
I am thinking of building an association of AI consumers so we can organize to praise or boycott whatever we collectively find acceptable or not. I'll spend some time reading this in detail later on, but whatever it states or implies, positive or negative, it's not for businesses to set the rules as if they owned the place. Consumer associations are powerful and can't be fired for striking, since the customer is always right.
This reads like a reasonable policy. More broadly speaking re: AI content: Sure, boomers scrolling facebook will continue to enjoy their AI slop baby and animal videos, but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.
Each time I scroll LinkedIn and I see some obviously AI produced images, with garbled text, etc. it immediately turns me off to whatever the content was associated with the image.
I'd be very disappointed to see the arts, including film making, shift away from the core of human expression.
“You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” - Joanna Maciejewska
> but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.
Is that just because we are at the very early stages of the technology, though? If it just keeps getting better, will the bias against AI-generated content remain? I know people like to talk as if AI will always have the quality issues it has now, but I wouldn't count on that.
Is it going to get better? Because people have been saying that for years now, and while AI output is somewhat improved, many of the issues with it have not changed.
It's that not everyone has the talent to produce something of quality.
If you give a passionate professional chef the same ingredients for a full meal as your average home cook, the results will NOT be the same, by a far stretch.
Much of "AI slop" is to content what McDonald's is to food. It's technically edible, but not high quality.
That’s an interesting way to put it, and it raises a bigger question (perhaps):
Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
The people doing so don't have the talent they desire, nor did they do anything to upskill themselves. It's a shortcut to an illusion of competency.
> Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
Change the statement to: Do we want a society where everyone can masquerade as a “photographer”, flooding society with low-quality photos using cell phones, never having to learn to develop film, or use focus, or understand lenses...
Do we want a society where everyone can masquerade as a “painter”, flooding society with low-quality paintings because acrylics are cheap? The old masters made their own paint, after all...
Why does it matter how it was created? It wasn't Bob Ross's "Joy of Making Incredible Art", it was simply the "Joy of Painting".
And people do enjoy content that, for lack of a better word, is disposable. Look at the "short dramas" or "vertical dramas" industry that is making money hand over fist. The content isn't highbrow, but people enjoy it all the same.
> AI trained on the work product of actual artists?
Should we teach people how to play guitar without using the songs of other artists? Should those artists be compensated for inspiring others?
Some of this is an artifact of our ability to sell reproductions (and I would argue that the economics were all around distribution).
There is a long (possibly decades-long) conversation that we're going to have on this topic.
I suspect that if GenAI starts to make content which can grab people's attention, and do it cheaply, then Netflix will become far more accommodating very quickly.
Netflix is basically strangling the creative potential of GenAI before it can even breathe. Their new “guidelines” read like a corporate legal panic document, not a policy for innovation. Every use case needs escalation, approval, or a lawyer’s blessing. That’s not how creativity works.
The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they’re recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.
Thanks, I'd heard whispers but hadn't jumped in yet. I will need to check this out.
(platinum rating on protondb too woohoo)
They stole all the parkour stuff from Titanfall, which was made by the original IW founders when they left and founded Respawn ;)
(I use "stole" in a non derogatory way here - 90% of good game design is cribbing together stuff that worked elsewhere in a slightly new form)
> get that shit outta there because we can’t own something we stole from someone else
How does anyone prove it though? You can say "does that matter?" but once everybody starts doing it, it becomes a different story.
Are you kidding me? Everyone knows it's pirated content (aka stealing); there's plenty of evidence here and there:
- https://arstechnica.com/tech-policy/2025/02/meta-torrented-o... - https://news.bloomberglaw.com/ip-law/openai-risks-billions-a...
[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
This scenario only plays out if it is known what was or wasn't made with GenAI.
It would become known during discovery.
Anyone with a brain knows it is not stolen, but the fact that people will claim so is nevertheless a risk.
[0]: https://www.sakugabooru.com/post/show/260429
[1]: https://fullfrontal.moe/takeshi-honda-the-boy-and-the-heron-...
[2]: Search for "Kanada animated many sequences of the movie, but let’s just focus on the most famous one, the air battle scene." in https://animetudes.com/2021/05/15/directing-kanada/
As a software engineer, you still make the hard decisions and let Claude type them out for you. Isn't it similar?
I mean, yeah. No matter how you feel about AI and creativity, having AI make the creative choices is dumb and backwards.
What happens to the illustrators now?
Yeah, that's a fair assessment. The specific mention of "union-covered work" plays to that interpretation as well:
> GenAI is not used to replace or generate new talent performances or union-covered work without consent.
Yup. Everything will be muzak in the end.
But what word should we coin as buzzword for “Netflix-Muzak”?
And when we're saturated with it all, we'll start buying DVDs (or other future media) again.
> Using unowned training data (e.g., celebrity faces, copyrighted art)
How would one ever know that the GenAI output is not influenced by or based on copyrighted content?
I think it would be very, very difficult - almost impossible - to create a training dataset for an image generator that doesn't contain any copyrighted material you lack the rights to. For the obvious stuff like Mickey Mouse or Superman, you can run some other tool over the data to filter it out, but so many ridiculous things can be copyrighted (depictions of buildings, tattoos) - crowd shots, pictures of cities with ads in the background - that I don't know how you could do it. I'm sure even Adobe's stock library has a lot of violations like that.
Getty and Adobe offer models that were trained only on images that they have the rights to. Those models might meet Netflix’s standards?
I kind of wonder if that even works.
If you take a model trained on Getty and ask it for Indiana Jones or Harry Potter, what does it give you? These things are popular enough that it's likely to be present in any large set of training data, either erroneously or because some specific works incorporated them in a way that was licensed or fair use for those particular works even if it isn't in general.
And then when it conjures something like that by description rather than by name, how are you any better off than with something trained on random social media? It's not like you get to make unlicensed AI Indiana Jones derivatives just because Getty has a photo of Harrison Ford.
I work in this space. In traditional diffusion-based regimes (paired image and text), one can absolutely check the text to remove all occurrences of "Indiana Jones." Likewise, Adobe Stock has content moderation that ensures (up to the limits of human moderation) no dirty content. To the model, it is a world without Indiana Jones.
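The caption-level filtering described above is easy to sketch. The blocklist, file names, and captions below are illustrative assumptions, not any vendor's actual pipeline:

```python
# Minimal sketch of caption filtering for a paired image-text dataset.
# BLOCKLIST and the sample pairs are made up for illustration.
BLOCKLIST = {"indiana jones", "harrison ford", "mickey mouse"}

def is_clean(caption: str) -> bool:
    """Return True if the caption mentions no blocklisted term."""
    text = caption.lower()
    return not any(term in text for term in BLOCKLIST)

pairs = [
    ("img_001.jpg", "Indiana Jones swinging on a whip"),
    ("img_002.jpg", "An adventurer in a fedora exploring ruins"),
]
kept = [(path, cap) for path, cap in pairs if is_clean(cap)]
# Only img_002.jpg survives the filter.
```

Of course, this removes only the name, not the visual concept: a caption that merely describes the character slips straight through.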
If you ask the Adobe stock image generation for "Adventurer with a whip and hat portrait view , Brown leather hat, jacket, close-up"
It gives you an image of Harrison Ford dressed like Indiana Jones.
https://stock.adobe.com/ca/images/adventurer-with-a-whip-and...
Yeah, there's no way Indiana Jones was not in the training data that created that image. To even say it's not in there is James Clapper in front of Congress level lying.
> one can absolutely check the text to remove all occurrences of Indiana Jones
How do you handle this kind of prompt:
“Generate an image of a daring, whip-wielding archaeologist and adventurer, wearing a fedora hat and leather jacket. Here's some back-story about him: With a sharp wit and a knack for languages, he travels the globe in search of ancient artifacts, often racing against rival treasure hunters and battling supernatural forces. His adventures are filled with narrow escapes, booby traps, and encounters with historical and mythical relics. He’s equally at home in a university lecture hall as he is in a jungle temple or a desert ruin, blending academic expertise with fearless action. His journey is as much about uncovering history’s secrets as it is about confronting his own fears and personal demons.”
Try copy-pasting it in any image generation model. It looks awfully like Indiana Jones for all my attempts, yet I've not referenced Indiana Jones even once!
It comes down to who is liable for the edge cases, I suspect. Adobe will compensate the end user if they get sued for using a Firefly-generated image (probably up to some limit).
Getting sued occasionally is a cost of doing business in some industries. It’s about risk mitigation rather than risk elimination.
All the indemnities I’ve read have clauses, though, saying that if you intentionally use it to make something copyrighted, they won’t protect you.
So if you put obviously copyrighted things in the prompt you’ll still be on your own.
Adobe Firefly absolutely has a Spider-Man problem.
Whistleblowers, corporate leaks, output resembling copyrighted content, etc. Basically, it feels the same as with companies that unlawfully use licensed code as their own (e.g. without respecting the GPL).
Netflix could also use or provide their own TV/movie productions as training data.
Lionsgate tried that and found that even their entire archive wasn't nearly enough to produce a useful model: https://www.thewrap.com/lionsgate-runway-ai-deal-ip-model-co... and https://futurism.com/artificial-intelligence/lionsgate-movie...
This amuses me.
Consumers have long wanted a single place to access all content. Netflix was probably the closest we ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to Netflix, and Netflix is now arguably just another face in the crowd.
Now they want to go and leverage AI to produce more content and bam, stung by the same bee. No one is going to license their content for training, if the results of that training will be used in perpetuity. They will want a permanent cut. Which means they either need to support fair use, or more likely, they will all put up a big wall and suck eggs.
Maybe now all that product placement is finally coming back to haunt them.
Having spent some time in post-production, this reads more like a “please don’t get us sued” memo.
One of the issues with using LLMs in content generation is that instruction tuning causes mode collapse. For example, if you ask an LLM to generate a random number between 1 and 10, it might pick something like 7 80% of the time. Base models do not exhibit the same behavior.
“Creative Output” has an entirely different meaning when you start to think about them in the way they actually work.
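The mode collapse described above can be illustrated with a toy simulation. The two samplers below are stand-ins for an instruction-tuned model and a base model, not real LLM calls, and the 80% bias toward 7 is simply the figure assumed in the example:

```python
import random
from collections import Counter

def tuned_model(rng):
    # Toy stand-in for an instruction-tuned model: heavily favors 7,
    # mimicking the mode collapse described above (assumed 80% bias).
    return 7 if rng.random() < 0.8 else rng.randint(1, 10)

def base_model(rng):
    # Toy stand-in for a base model sampling roughly uniformly.
    return rng.randint(1, 10)

def mode_share(model, n=10_000, seed=0):
    """Fraction of samples taken by the single most common value."""
    rng = random.Random(seed)
    counts = Counter(model(rng) for _ in range(n))
    return counts.most_common(1)[0][1] / n
```

Under these assumptions, `mode_share(tuned_model)` lands above 0.8 while `mode_share(base_model)` stays near the uniform 0.1, which is the collapsed-vs-diverse contrast the comment is pointing at.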
>GenAI is not used to replace or generate new talent performances
This is 100% a lie.
Studios will use this to replace humans. In fact, the idea is for the technology – AI in general – to be so good you don't need humans anywhere in the pipeline. Like, the best thing a human could produce would only be as good as the average output of their model, except the model would be far cheaper and faster.
And... that's okay, honestly. I mean, it's a capitalism problem. I believe with all my strength that this automation is fundamentally different from the ones from back in the day. There won't be new jobs.
But the solution was never to ban technology
The part you quote is part of the list of conditions for an if-statement, so how could it be a lie?
The issue wasn't whether they said that thing or not; companies say a lot of things that are fundamentally a lie, things to keep up appearances - which are oftentimes not enforced. It's like companies claiming they believe in fair pay while using Chinese sweatshops or whatever.
In this case, for instance, Netflix still has a relationship with their partners that they don't want to damage at this moment, and we are not at the point of AI being able to generate a whole feature-length film indistinguishable from a traditional one. Also, they might be apprehensive about legal risks and copyrightability at this exact moment; big companies' lawyers are usually pretty conservative about taking any "risks," so they probably want to wait for the dust to settle as far as legal precedents and the like.
Anyway, the issue here is:
"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.
I’m inclined to agree. The goalposts will move once the time is right. I’ve already personally witnessed it happening; a company sells their AI-whatever strictly along the lines of staff augmentation and a force multiplier for employees. Not a year later and the marketing has shifted to cost optimization, efficiency, and better “uptime” over real employees.
The truth is that Netflix, Amazon, or any other company, honestly, would fire 99% of their workforce if it were possible, because they only care about profit – hell, they are companies, that's why they exist. At the same time, brands have to pretend they care about society, people having jobs, the climate, whatever, so they can't simply say: "Yeah, we exist to make money and we totally want to fire you guys as soon as possible." As you said, it's all masked as staff augmentation and other technical mumbo jumbo.
>GenAI is not used to replace or generate new talent performances
>> This is 100% a lie.
We’ve had CGI for decades and generally don’t mind. However, the point at which AI usage becomes a negative (eg: the content appears low quality) because of its usage, I’d expect some backlash and pulling back in the industry.
In film and tv, customers have so much choice. If a film or tv is low effort, it’s likely going to get low ratings.
Every business and industry is obviously incentivized to cut costs, but, if those cost cuts directly affect the reputation and imagery of your final product, you probably want to choose wisely which things you cut.
I think you're right, in general - certainly AI will replace background actors, though that's already been happening for years without AI generation. I'm also pretty sure that if/when AI can generate whole films, then that'll happen, too.
However, this statement is a hell of a lot better than I expected to see, and suggests to me that the actors' strike a few years ago was necessary and successful. It may, as you say, only be holding back the "capitalism problem" dike, but... At least it's doing that?
I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.
> I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
>
> When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.
There’s no guarantee AI will get good enough to replace anyone. We’ve pretty much run out of training data at this point. I’m a little annoyed that people speak about future progress like it’s an inevitability.
You’re saying their statement about what is happening is a lie because of what you predict will happen…
I am thinking of building an association of AI consumers so we can organize to praise or boycott whatever we collectively find acceptable or not. I'll spend some time reading this in detail later on, but whatever it states or implies, positive or negative, it's not for businesses to set the rules as if they owned the place. Consumer associations are powerful and can't be fired when striking, since the customer is always right.
>I am thinking of building an association of AI consumers
The Gooner Association?
> it's not for businesses to set the rules as if they owned the place.
This is for studios and companies that are producing content for Netflix.
If you want to sell to Netflix, you have to play by Netflix's rules.
Netflix has all kinds of rules and guidelines, including which camera bodies and lenses are allowed [1].
[1] https://partnerhelp.netflixstudios.com/hc/en-us/articles/360...
This reads like a reasonable policy. More broadly speaking re: AI content: Sure, boomers scrolling facebook will continue to enjoy their AI slop baby and animal videos, but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.
Each time I scroll LinkedIn and I see some obviously AI produced images, with garbled text, etc. it immediately turns me off to whatever the content was associated with the image.
I'd be very disappointed to see the arts, including film making, shift away from the core of human expression.
“You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” - Joanna Maciejewska
> but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.
Is that just because we are at the very beginning stages of the technology, though? It is just going to keep getting better, will the bias against AI generated content remain? I know people like to talk as if AI will always have the quality issues it has now, but I wouldn't count on that.
Is it going to get better? Because people have been saying that for years now, and while AI output is somewhat improved, many of the issues with it have not changed.
The problem with AI slop isn't the AI part.
It's that not everyone has the talent to produce something of quality.
If you give a passionate professional chef the same ingredients for a full meal as your average home cook, the results will NOT be the same, by a far stretch.
Much of "AI slop" is to content what McDonald's is to food. It's technically edible but not high quality.
That’s an interesting way to put it, which raises the bigger question (perhaps?):
Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
The people doing so do not have the talent they desire, nor did they do anything to upskill themselves. It's a shortcut to an illusion of competency.
> Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
Change the statement to: Do we want a society where everyone can masquerade as a “photographer”, flooding society with low-quality photos using cell phones, never having to learn to develop film, or use focus, or understand lenses...
Do we want a society where everyone can masquerade as a “painter”, flooding society with low-quality paintings because acrylics are cheap, the old masters made their own paint after all...
Why does it matter how it was created? It wasn't Bob Ross's "Joy of Making Incredible Art", it was simply the "Joy of Painting".
And people do enjoy content that, for lack of a better word, is disposable. Look at the "short dramas" or "vertical dramas" industry that is making money hand over fist. The content isn't highbrow, but people enjoy it all the same.
> AI trained on the work product of actual artists?
Should we teach people how to play guitar without using the songs of other artists? Should those artists be compensated for inspiring others?
Some of this is an artifact of our ability to sell reproductions (and I would argue that the economics were all around distribution).
There is a long (possibly decades-long) conversation that we're going to have on this topic.
I think that's unfair to McDonald's.
Netflix joins everyone else jumping on the "rules for thee, but not for me" train.
I suspect that if GenAI starts to make content which can grab people's attention, and do it cheaply, then Netflix will become far more accommodating very quickly.
They do not want to be disrupted.
Netflix is basically strangling the creative potential of GenAI before it can even breathe. Their new “guidelines” read like a corporate legal panic document, not a policy for innovation. Every use case needs escalation, approval, or a lawyer’s blessing. That’s not how creativity works.
The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they’re recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.
Thankfully GenAI has no creative potential so we aren’t losing much.
I do agree Netflix wants to crush creators.