recently_folded: Image from HLV Appledore scene (Default)
[personal profile] recently_folded posting in [community profile] post_tumblr_fandom
And here is the elephant in the room. We get some starting thoughts in [personal profile] greywash's master thread and its comments, but this one will require a lot of thought, because it's the issue that can bite us all in the ass no matter how well or creatively any site is designed.

Many of us are leaving tumblr because we just don't see the world in binary, sanitized terms, with sexuality and sexual conduct as the evil mirror presence to be kept locked in the closet. Adult content is fundamental to shipping, which is a huge component of fandom across a variety of media. Right now, that content is being pushed away in different directions, and fandom is fracturing with it. If we're going to have a common fandom home and protect our fandom legacy in all media, we've got to bring it right back to the table.

Nonetheless, child porn exists, and porn purveyors will invade every toehold they can find in search of profit. Those of us who saw PornHub's tweeted invitation to tumblr refugees laughed, I think, but that's roughly the side our allies may turn out to be on in this. So drawing difficult lines between adult content and exploitative porn is going to have to be part of our planning.

Dreamwidth has a policy; Pillowfort has a policy (although the issue of their domain not permitting "porn" bit them despite their good intentions). AO3 has a strong presence in this game, and it's also been a controversial one that has been used as an attack vector.

What are we going to do about this?

Note to other mods: do we need to raise the age restriction on this thread? Please edit in and fix if indicated.

Date: 2018-12-06 03:05 pm (UTC)
greywash: A lounging pinup girl, holding a cocktail. (Default)
From: [personal profile] greywash
I would like to keep the age restriction on this comm set to "everyone," because I think that there are underage fans who have a vested interest in how fandom handles adult content, and it's important for them to have a voice in the meta-conversation.

Just to be clear: I'm not saying either that I want fandom to let underage fans freely access adult content, or that underage fans should be able to dictate whether or not fandom (as a large, sprawling, multi-faceted space) contains adult content. All of fandom does not need to be a "safe space" for underage fans (this is essentially the anti/purity wank argument, and it's garbage). However, I do think we need to talk about:

  1. how adult fans producing material for other adult fans can block access to fans who are underage;
  2. how underage fans (and fans who just don't want to see adult content) can protect themselves from accidentally encountering explicit material;
  3. what our overall policy is on the system's interaction with adult content;
  4. what our overall policy is for content warnings; and
  5. how we are going to deal with the system's abuse by people trying to host/link to illegal content.

These are really five different (though related) issues, and discussing those issues actually isn't something we need to keep away from the tender eyes of children, even if the material we're discussing is.

I personally think that the AO3 did this right, and—whether or not purity wankers attack them for it—we should do our best to model whatever we create on what they're doing, because of the way their system handles these issues.

I genuinely do not see a better general approach to these issues for fannish material. That said, the technical issues we're facing are going to be different, and as we solidify what we want to do technically, we are going to need to be flexible about how we implement those same principles.
Edited Date: 2018-12-06 03:07 pm (UTC)

Date: 2018-12-08 10:53 pm (UTC)
cesperanza: (Default)
From: [personal profile] cesperanza
I have to say, IMO, the way to get around this is to make it very like OTW: a network that only hosts fanworks. I am willing to be super broad about fanworks - we can tag for underage illustrations, for instance - but that framework would exclude things like: pirated movies, erotic images that have nothing to do with fandom, photographic pornography, etc. I think if it's a fandom social network that excludes things broadly speaking outside of fandom, we will be ok.

Date: 2018-12-09 01:48 am (UTC)
glymr: (Default)
From: [personal profile] glymr
I can see ways this could be abused, though. For example, what about erotic cosplay? Adults (or even children) in sexualized fannish costumes?

Date: 2018-12-09 02:14 am (UTC)
cesperanza: (Default)
From: [personal profile] cesperanza
Sure, I guess, but keep in mind--there's no money to be made here. It would literally be like the mashers of old. But yeah, good to have a TOS that says, you know, not a fanwork.

Date: 2018-12-09 02:33 am (UTC)
glymr: (Default)
From: [personal profile] glymr
Just to be clear, I wasn't trying to be glib - I literally follow a guy (an adult) on Patreon who dresses up in sexy costumes and cosplays popular characters. He advertises on tumblr, instagram and twitter with his less risqué photos. I could see something similar happening with CP if we weren't vigilant about this scenario. Along the lines of, "Check out this cute, sexy Sailor Moon cosplay! Click here for more!" The pic on the site would be nothing illegal, just suggestive, but it would lead to a pay child porn site.

I'm really loath to block links to patreons, ko-fis or paid sites in general, but that might be one way of controlling such things. Cut off the ability to make money by disallowing such links on the site and you reduce the incentive to use the site (though you also increase the incentive to find ways around the ban).

Date: 2018-12-09 03:03 am (UTC)
cesperanza: (Default)
From: [personal profile] cesperanza
Well, fwiw, I am myself in favor of banning Patreon, ko-fi, paid sites etc. from fandom, though I have many friends who use them; also, a lot of these distributed systems I'm looking at have tip jars and such, which I think would be attractive to some fan-artists. But noncommerciality bolsters the case for legality - and in any case, if it's not a big profit system, I do think that those guys won't be as interested in it. There are no clicks to amass, etc. The other thing is that some distributed systems allow you to form a web of trust, much like invite systems do. That's for peer-to-peer; with hubs, fandom's own hubs could have fandom's own rules but interoperate with other hubs/platforms (Hubzilla interoperates with Mastodon and other ActivityPub sites), so then you're no worse off than you are now in terms of running into creeps. Hubzilla has very, very granular privacy controls, fwiw.
Edited Date: 2018-12-09 03:03 am (UTC)

Date: 2018-12-09 03:09 am (UTC)
glymr: (Default)
From: [personal profile] glymr
Yes, to be fair, there is a huge difference between being privately commissioned to create a fanwork and being paid once, and writing a derivative work and profiting off every sale. Copyright holders are far more concerned about (and have far more of a case against) the latter situation than the former. But that's getting into a whole other discussion.

The peer to peer concept is intriguing! I'm not clear on how the web of trust works, though - people have to "vouch" for newcomers?

Date: 2018-12-09 07:34 am (UTC)
snowgrouse: (Torsten/Laura choke)
From: [personal profile] snowgrouse
Even if I know what you're getting at, "photographic pornography" and "erotic images that have nothing to do with fandom" are still used by fans a great deal. I make explicit photomanips, for a start (my icon is an example of a BDSM porn pic manipped to depict a scene from one of my fics), and it's not uncommon to use erotic gifs in RP blogs to illustrate what character X is doing to character Y if you want to be really specific about what type of caress you're talking about, etc. And then you have people using erotic images as one would use stock photos, to illustrate what their OCs look like, stuff like that. What I'd focus on there is the context--that it really has to be tied to something fannish. But how you draw that line is, I agree, tricky. We wouldn't want a hypothetical pedo posting pics of 14-year-old naked girls from CP sites saying "this is how I imagine Hermione getting shafted." But I wouldn't want us to stomp on, say, female artists who deliberately reappropriate porn photos to add characters and persons and emotional context to them. So, yeah, that's... not going to be an easy thing to mod. But that's just my two cents as a fanartist who works with explicit photos and video.

Date: 2018-12-09 09:41 pm (UTC)
cesperanza: (Default)
From: [personal profile] cesperanza
To me, they're all still clearly fannish, but I take your point. What I would say is that federation would allow some servers/hubs to permit those fanworks and others not - and it also allows individuals to set their own ratings level and control what kinds of things they filter/see. One of the best things about federation and decentralization is that it allows users a lot of control - really granular control - over their experience.

Date: 2018-12-10 12:54 pm (UTC)
snowgrouse: The fucketh? (Avon WTF)
From: [personal profile] snowgrouse
Off-topic, but reading this on my phone the very first thing in the morning, as a bleary-eyed Blake's 7 fan, confused me greatly because I thought it was *that* Federation that was being talked about, what with us all being concerned about repressive regimes. :D

Date: 2018-12-09 04:02 am (UTC)
impertinence: (Default)
From: [personal profile] impertinence
Regarding child abuse images, I think the biggest issue any fannish site is likely to run into is enforcement/modding related.

Tumblr and Facebook wind up hosting child abuse images because they're too big to moderate effectively (millions upon millions of posts = imperfect machine learning and sub-sub-sub-contractors deciding what stays and what is banned). Because they're so huge and like 99% automated, criminals are able to create distribution networks and fly under the radar, and people who really want to access that content follow links into areas of the internet built specifically for it.

So, you'd want a framework with robust blocking and some kind of report mechanism - even if it's just "DM a mod" - and then you'd want community norms of banning on sight for anything leading to child abuse images. (And for a federated solution, banning/blacklisting servers that tolerate it.) These people succeed on places like Tumblr because of the lack of consistent banning. The biggest difference-maker is having active, involved mods, and networks small enough to support individual attention.

(As a sidebar, I know some people report others for "pedophilia" when it's nothing of the sort; moderators can also enforce a zero-tolerance policy for spite reporting. I don't think it'd be completely simple but active moderation is VERY important in a social context/anywhere that child abuse images could be hosted!)
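The comment above boils down to a few moderation primitives: per-instance user bans, a human-reviewed report channel, and instance-level blacklisting for a federated setup. A toy sketch of those primitives, with all class and method names hypothetical (not from any real federation software):

```python
class Instance:
    """One federated server ("instance"/"hub") with its own mod team."""

    def __init__(self, domain: str):
        self.domain = domain
        self.banned_users: set[str] = set()
        self.blocked_instances: set[str] = set()

    def report(self, user: str, reason: str) -> None:
        # Stand-in for "DM a mod": every report gets human review, and the
        # community norm is ban on sight for anything leading to child abuse images.
        if reason == "csam":
            self.banned_users.add(user)

    def defederate(self, other: "Instance") -> None:
        # Blacklist whole servers that tolerate illegal content.
        self.blocked_instances.add(other.domain)

    def accepts(self, user: str, from_domain: str) -> bool:
        # Content is visible only if neither the user nor their home server is blocked.
        return user not in self.banned_users and from_domain not in self.blocked_instances
```

The point of the sketch is only that both levers (user ban and server blacklist) sit at the edge of each small instance, which is what makes "active, involved mods" feasible.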

Date: 2018-12-12 09:36 pm (UTC)
alyndra: (circular reasoning)
From: [personal profile] alyndra
"enforce a zero-tolerance policy for spite reporting" This doesn't even have to be as extreme as banning, just give people a one-time warning: "This is not bannable content, please review our clear policies here. You may ask clarifying questions but if you continue to report things that are not bannable, please be advised that we will route all future reports from your account directly to spam."

...maybe check with a lawyer about this option first, come to think of it. But it would be effective as hell.

Date: 2018-12-12 11:33 pm (UTC)
alyndra: (Default)
From: [personal profile] alyndra

If they want to make new accounts for every two reports they make, that may or may not be against the TOS but hopefully they’ll have fun with their time doing it, and anyway they’d be doing the exact same thing if they were actually banned, so hey. I don’t expect the volume of complaints you’d get this way would be too much for a mod to keep routing to spam/deleting/ignoring.


Date: 2018-12-17 09:39 pm (UTC)
impertinence: (Default)
From: [personal profile] impertinence
FWIW, I would personally classify that as an enforcement mechanism issue - because that pattern of behavior is possible for any person who's banned because of abuse of the site. There are various ways to mitigate that (IP banning in the case of a consistent/static IP, which is becoming less and less common; ban-on-sight practices; x-day freezes on new accounts; etc). But in addition, to my knowledge, a lot of the people doing false reporting of fanworks right now are people who very much thrive on having a consistent platform with a consistent number of followers, and the reward for harassment diminishes with a diminished audience. Which banning does accomplish even if they come back under a different name. It's one of the reasons federated solutions are so gung-ho about smaller instances, because that kind of personalized moderation is possible/sustainable and so repeat offenders can be proactively removed even if they recreate. Whereas someone using Tumblr to coordinate harassing an ao3 user won't be banned from Tumblr for such behavior, even if people submitted copious evidence of that harassment to Tumblr.

Date: 2018-12-15 04:35 am (UTC)
oulfis: A teacup next to a plate of scones with clotted cream and preserves. (Default)
From: [personal profile] oulfis
I saw a twitter thread the other day which seemed to propose a workable policy:
Short version: let users flag content, have a paid team to evaluate flags, have a hidden "reporter karma value" flag that bumps reports from people who reliably flag truly bad stuff to the top of the human moderation queue.

If you must have a system that automatically removes some posts/bans some users, have it only trigger on reports from people who have built up good "rep." If a user regularly flags inappropriately to harass, have a setting that deprioritizes their reports.

Implement several grades of opt-in filtering. Allow people to self-flag for nsfw, tasteful/artistic nudity, violence, etc. Accounts that regularly and properly self-flag get more accommodations when they miscategorize. Accounts like porn bots who game the system get a bigger hammer.

Unfortunately, you must slightly obfuscate your system to prevent bad actors gaming it, but never lie about it. The algorithms and systems for finding trusted reporters and getting good results are subtle, shifting, and require real human iteration and a few humans as safeguards.

All content moderation systems must acknowledge the history of flagging as harassment and the reality of mobs disproportionately targeting marginalized people. There are pretty easy ways to recognize, defang, and disincentivize these behaviors, but they take intuition and work.

The internet and game industries are constantly forgetting their history. Here's an article from 2009 about some of the pitfalls of public-facing semi-automated flagging systems, including a hilarious and terrifying example from the Sims Online in 2003.
-- This seems to me like a good place to start, and not too unfeasible.
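The karma-weighted queue the quoted thread describes could be sketched roughly like this. The scale, the increments, and every name here are hypothetical illustrations, not anything the thread specifies:

```python
from dataclasses import dataclass


@dataclass
class Reporter:
    # Hidden "reporter karma value"; starts neutral on an arbitrary scale.
    karma: float = 1.0


@dataclass
class Report:
    reporter: Reporter
    post_id: str


class ModQueue:
    """Human moderation queue that surfaces trusted reporters' flags first."""

    def __init__(self):
        self.reports: list[Report] = []

    def file(self, report: Report) -> None:
        self.reports.append(report)

    def next(self) -> Report:
        # Reports from high-karma reporters bump to the top of the queue.
        self.reports.sort(key=lambda r: r.reporter.karma, reverse=True)
        return self.reports.pop(0)

    def resolve(self, report: Report, was_valid: bool) -> None:
        # The moderator's verdict feeds back into reporter karma, so habitual
        # spite-reporters sink toward the bottom instead of being auto-trusted.
        if was_valid:
            report.reporter.karma += 0.5
        else:
            report.reporter.karma = max(0.1, report.reporter.karma - 0.5)
```

Keeping the karma floor above zero is one (hypothetical) way to address the credit-score worry raised below: first-time reporters are deprioritized, not silenced.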

Date: 2018-12-15 04:37 am (UTC)
oulfis: A teacup next to a plate of scones with clotted cream and preserves. (Default)
From: [personal profile] oulfis
Though, looking at this again, it's very definitely a "centralized" approach! I'm not sure how it would be best adapted to a more federated system.

Date: 2018-12-17 09:41 pm (UTC)
impertinence: (Default)
From: [personal profile] impertinence
Yeah, reading through this - I don't disagree with the thought process, but it'd play out differently in smaller communities! I think it does kind of hit the nail on the head as far as moderation though - which is that you do need proactive moderation that actually cares, and systems that are automated enough to carry out the moderation, or (implied) small enough for that moderation to be manual or semi-manual. Thank you for the link!

Date: 2018-12-16 03:39 am (UTC)
isaacsapphire: Black haired anime style boy (Default)
From: [personal profile] isaacsapphire
I like the availability of "chose to not tag". After the tumblr debacle, with them apparently using several self-flagging tags to delete content, I'm not sure I want to tag in the future, for strategic reasons.

Date: 2018-12-16 08:57 pm (UTC)
isaacsapphire: Black haired anime style boy (Default)
From: [personal profile] isaacsapphire
Tag as "chose not to tag" and then put whatever warnings or synopsis is necessary at the top. Make the hypothetical future purge actually work for it.

Date: 2018-12-18 08:18 pm (UTC)
ascerain: Screenshot of the words "Warnin! You ave bin warned!" (warning)
From: [personal profile] ascerain
I consider "chose not to tag" an adequate tag for warning purposes; once you take away the option to not tag, that opens up the door to tagging "no warnings apply" when some of the warnings really do apply, because the creator is scared of harassment or doesn't want their content to be found by someone who is searching the warning tag. I find that tagging "no warnings apply" when they do apply is worse than tagging "chose not to tag". With "chose not to tag," the reader/content consumer can decide whether or not they want to take the risk. It's like buying a box of surprise books from a thrift store. Of course, the view that "chose not to tag" is a valid warning in and of itself isn't common, I guess.

"Chose not to tag" is also useful for situations where, say, the content in the warning tag is mentioned but not depicted.
Edited Date: 2018-12-18 08:19 pm (UTC)

Date: 2018-12-23 03:36 am (UTC)
thisaintbc: Uncle Boris (Balto), snow shenanigans, default icon (Default)
From: [personal profile] thisaintbc
My only quibble with this is that it seems it would deprioritize infrequent or first-time reporters (in basically the same way that RL credit scores deprioritize first-time borrowers). While not ideal (imo infrequent reporters seem far more reliable than frequent reporters), it wouldn't be the end of the world if that was the system we ended up going with as long as there's a conscious decision that the benefits of doing so outweigh the drawbacks.

Date: 2018-12-15 06:46 pm (UTC)
From: [personal profile] powerful_dusk_88623
Reposting this here at the recommendation of oulfis (sorry, not sure how to do a mention on mobile here).

The Electronic Frontier Foundation gave me the contact info for four attorneys who expressed interest in my inquiry about legal advice; I've chosen to reach out to one who's an instructor at a university legal clinic because they seem the most likely to understand fandom issues (the others are in private practice and seem to do corporate internet law). I'm drafting a list of questions here; mods and others are welcome to comment or add to them.

