Post 7

AI image generation is honestly wild. It’s fast, it’s fun, and once you start playing with it, it’s kind of hard to stop. But these models are trained on billions of images scraped from the internet: your photos, artists’ portfolios, news footage, usually without anyone’s permission. It’s not just that AI can copy a style; it’s that it can swallow someone’s face, their work, and their memory without asking, then spit it back out as something “new.” That’s not innovation, that’s extraction.


And it gets way worse when people use this tech to generate images of someone who’s dead, like Kobe Bryant. Honestly, it’s kind of a shame that this became a trend in China. I totally understand that people miss him, but Kobe can’t consent to having his likeness dragged out of old footage and turned into a digital being, because he’s gone. It’s super disrespectful.


So how can we actually fix this? I think there needs to be regulation of the data used to train AI, as we’ve discussed a lot. First, these models need to be trained on opt-in data, not just everything scraped off the web. Second, every AI image needs a permanent watermark or label that can’t be cropped out, and platforms need to actually enforce it. And third, there should be a hard rule: you don’t generate identifiable images of real dead people without explicit family consent. We need to stop treating “we can do it” as the same thing as “we should do it.”
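The watermark point is worth unpacking. A label stored as file metadata can be stripped just by re-saving the image, which is why people push for marks embedded in the pixels themselves. Here’s a toy sketch of pixel-level embedding using least-significant-bit steganography; this is purely illustrative (the pixel values and "signature" bits are made up), and a naive scheme like this one would still not survive cropping or compression, real robust watermarks are far more sophisticated:

```python
# Toy sketch: hide an "AI-generated" flag in the least significant
# bits of pixel values, so the label travels with the image data
# itself instead of living in strippable metadata.

def embed_flag(pixels, flag_bits):
    """Write each flag bit into the LSB of one pixel value."""
    marked = list(pixels)
    for i, bit in enumerate(flag_bits):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set it
    return marked

def read_flag(pixels, n_bits):
    """Recover the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

FLAG = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical "AI-generated" signature

image = [200, 37, 84, 255, 12, 90, 61, 145, 7, 33]  # fake grayscale pixels
marked = embed_flag(image, FLAG)
assert read_flag(marked, len(FLAG)) == FLAG
```

Even this toy version shows why enforcement matters: the mark is invisible to a viewer (each pixel changes by at most 1), but nothing stops a platform from ignoring it unless checking is mandatory.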
