First of all, I’m so sorry that you have been exposed to such horrors. I hope you can cope with that, or find help to do so.
I don’t have a solution, I’d just like to share some thoughts.
-
Some people suggested that AIs could detect this kind of content. I would be reluctant to use such tools, because many AI projects exploit unprotected workers in poor countries for data labeling.
-
A zero-image policy could be an effective solution, but it would badly impact @[email protected], @[email protected] and @[email protected].
-
Correct me if I’m wrong, but on the fediverse, when a picture is posted on one instance, is it duplicated on all federated instances? If so, that means even if Beehaw found a way to totally prevent CSAM posting, you could still end up with duplicated CSAM on your server (with consequences for your mental health, and possibly legal risks for hosting such pictures).
Thank you so much for all these explanations! I didn’t know the communities/users were so important in the system.
I thought that a duplicate of each post on an instance was automatically sent to all federated instances, and I wondered how the servers avoided being overloaded by the global activity.