[personal profile] recently_folded posting in [community profile] post_tumblr_fandom
As discussed in [personal profile] greywash's master thread and its comments, this is a major concern for fans.

Tumblr, while convenient in many ways, kind of died on this hill for many fans. It's certainly been a major issue in the devaluing of predominantly-female fandoms in the eyes of outsiders, and it's gutted more than one fandom (Sherlock BBC, I'm looking at you). How many of us have burned old fandom identities behind us to shake toxic attention, or orphaned works on AO3 to avoid old associations?

Among existing sites, Pillowfort is trying to take a strong stand against this and has some interesting measures in place to combat it, though they can be confusing to new users coming from Tumblr; so do Dreamwidth and AO3, and perhaps other sites I'm not sure about.

What do we need to keep fans safe from abuse? What kind of measures do we expect from a site that wants fans?

Date: 2018-12-10 11:46 pm (UTC)
From: [personal profile] laurakaye
It's a complex and gnarly problem, to be sure. I think one of the best ways to approach it is to look at how online harassment happens--what tools get used to commit it--and how it can be combatted.

The largest non-fannish (or, well, non-fan-fic-ish) recent example that springs to mind is GamerGate and its ilk, where there is an organized campaign of many bad actors.

Recently, in hockey RPF fandom, a single person (apparently) managed to drive nearly everyone who wrote Penguins fic underground with a campaign of concerted attacks.

Techniques I see used against people, that our solution should be able to handle:
-harassing comments (public, on a person's work) and messages (private, to a person), including threats
-threats to a person's RL identity (doxxing, swatting, contacting/threatening to contact family/friends/employers)
-weaponized false reports

Obviously there are more, because people can be terrible.

But I think that we can extrapolate from this that we need:
-users able to control their own privacy settings
-ability to easily take things private if they need to in a hurry
-strong abuse policies that are enforced (so that reporting actually does something)
-ability for a user to block people from interacting with their own content (so you can't go to someone's story and fill the comments with abuse, for instance)
-consistent identities (to encourage people to own their behavior) but no forced real-name policies or anything similar
-ability to orphan content if needed

and lots more, but those were my first, top-of-mind thoughts...
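To make the blocking item concrete, here's a rough sketch of how a site might enforce it server-side. All the names here are invented for illustration; this isn't any existing site's code.

```python
# Toy model: a comment is rejected when the target content's owner has
# blocked the would-be commenter.

class User:
    def __init__(self, username):
        self.username = username
        self.blocked = set()  # usernames this user has blocked

    def block(self, other):
        self.blocked.add(other.username)

def post_comment(commenter, content_owner, text):
    """Reject the comment outright if the owner has blocked its author."""
    if commenter.username in content_owner.blocked:
        raise PermissionError("you can't interact with this user's content")
    return {"author": commenter.username, "text": text}

alice, troll = User("alice"), User("troll")
alice.block(troll)
# post_comment(troll, alice, "abuse") would now raise PermissionError
```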

Date: 2018-12-14 12:05 pm (UTC)
From: [personal profile] alyndra
On DW you can edit a comment only until it gets its first reply. It’s a nice compromise between being able to fix typos and not being able to abuse comment editing to make someone else look nuts.
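A toy version of that rule, just to show how little machinery it needs (this is illustrative only, not Dreamwidth's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    replies: list = field(default_factory=list)

def edit_comment(comment, editor, new_text):
    if editor != comment.author:
        raise PermissionError("only the author can edit a comment")
    if comment.replies:
        # Once anyone has replied, the comment freezes, so you can't
        # rewrite it after the fact to make the replies look unhinged.
        raise PermissionError("comments can't be edited after a reply")
    comment.text = new_text
```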

Date: 2018-12-16 01:26 pm (UTC)
From: [personal profile] glitterary
I had a short chat about this with a Twitter user who insisted that going through a verification service and allowing people to block unverified users was the only way to absolutely prevent harassment. From my POV, that just means creating a hierarchy of users as well as excluding people who are vital to fandom--kids, people who create erotic fanworks and could lose their jobs if exposed, kinksters for the same reason, etc.

However, while a huge part of fandom and online life's appeal is its anonymity, I think it might still be possible to enforce accountability without demanding things like verification.

The option I've thought about most is requiring a greater investment of effort to create an account:

- require a unique email address to open an account, and perhaps create a short waiting period (up to a day, but more likely just a few hours) between registering a username and receiving your "click to confirm your address" email to prevent people from being able to quickly create burner accounts
- similarly, require that people make some (low) number of posts, X, on their own blog, spaced at least an hour apart, before being able to post comments on someone else's post (unless that blog's owner has allowed brand-new users to comment in their settings)
- anonymous posting (e.g. for kink memes) is done by being logged in to a real account and ticking a "post anonymously" box, so that other users can't see who posted a comment but it can be tracked back to a registered user if it's abusive

None of these cost money or demand that people use their real name or verify themselves in any way, and they are effectively "start-up effort costs"--they're fairly minimal for people using the network in good faith, who will only need to do it once, but they'd be extremely off-putting for anyone looking to create sockpuppets because doing it more than once or twice becomes an enormous time investment.
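Here's a sketch of what those checks could look like at comment time. The concrete numbers (three posts, one-hour spacing) and all the names are placeholders I've picked for illustration, not a real platform's policy:

```python
from dataclasses import dataclass, field
from datetime import timedelta

MIN_POSTS = 3                      # the "X (low) number of posts"
MIN_SPACING = timedelta(hours=1)   # posts must be at least this far apart

@dataclass
class Account:
    email_confirmed: bool = False
    post_times: list = field(default_factory=list)  # datetimes of own posts

def spaced_post_count(times, spacing=MIN_SPACING):
    """Count posts, crediting one only if it came at least `spacing` after
    the previous credited post (rapid-fire posting counts as one)."""
    count, last = 0, None
    for t in sorted(times):
        if last is None or t - last >= spacing:
            count, last = count + 1, t
    return count

def may_comment(account, blog_allows_new_users=False):
    if blog_allows_new_users:        # owner opted in to brand-new commenters
        return True
    if not account.email_confirmed:  # unique, confirmed email required
        return False
    return spaced_post_count(account.post_times) >= MIN_POSTS
```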

Another thing this means is that if a blog is found to be repeatedly posting harassing or abusive comments or posts (anonymously or logged in), it can be frozen, which removes the user's ability to encourage an existing entourage to dogpile on another user, and forces them to make a new blog if they want to try again and play nicely this time. The weakness here is that it's vulnerable to false reports, and may require mods/site admins to make judgement calls on what constitutes harassment. For that reason, we'd need a strong harassment policy which also dovetails with policies on e.g. illegal content and Nazis.
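Putting the anonymous-posting idea together with freezing, the plumbing might look something like this (all names invented; the point is just that an "anonymous" comment still carries a real account ID internally):

```python
def display_author(comment, viewer_is_admin=False):
    """Readers see 'Anonymous'; admins can see who actually posted it."""
    if comment["anonymous"] and not viewer_is_admin:
        return "Anonymous"
    return comment["account_id"]

def freeze_after_upheld_report(accounts, comment):
    """Once a moderator upholds a report, freeze the account behind the
    comment, whether or not it was posted anonymously."""
    accounts[comment["account_id"]]["frozen"] = True

accounts = {"troll123": {"frozen": False}}
comment = {"account_id": "troll123", "anonymous": True, "text": "..."}
print(display_author(comment))               # Anonymous
freeze_after_upheld_report(accounts, comment)
```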

I also agree strongly with [personal profile] laurakaye above. I think having a lot of granularity in offering the ability, for example, to...

- block individual users
- filter posts to specific groups
- toggle on and off the ability of specific people, or groups of people, to post comments/send messages
- remove the ability of individual/all other users to tag them
- quickly and reversibly "go dark" so that previously-public posts can only be seen by logged-in users or friends
- potentially even hide posts of theirs that have been reblogged by others, if the platform uses that format

...would give users a lot of power to take immediate emergency action on their own blogs, or to define their own comfort and privacy level. All of this also makes harassing someone more time- and effort-intensive and less "rewarding" for anyone who gets a kick out of that. It also gives people tools to manage online interactions which are unpleasant but not necessarily harassment (e.g. someone complaining that they don't like a particular pairing every time an author posts a fic with it).
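Of those, the "go dark" switch is the one I can most easily picture mechanically: one flag that temporarily caps how publicly anything displays, without touching any post's own setting, so it's instantly reversible. A sketch, with hypothetical names and levels:

```python
from enum import IntEnum

class Visibility(IntEnum):  # higher value = more restricted
    PUBLIC = 0
    LOGGED_IN = 1
    FRIENDS_ONLY = 2

class BlogSettings:
    def __init__(self):
        self.go_dark = False
        self.dark_floor = Visibility.FRIENDS_ONLY  # minimum while dark

def effective_visibility(post_visibility, settings):
    """While go_dark is on, nothing displays more publicly than the floor;
    flipping it off restores every post's own setting, untouched."""
    if settings.go_dark:
        return max(post_visibility, settings.dark_floor)
    return post_visibility
```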

I'm lucky enough never to have been on the receiving end of fandom harassment, so I don't know all the insidious ways bullies try to circumvent harassment policies--please do let me know if I've missed anything, or if any of these raises red flags for some reason!

Date: 2018-12-17 02:49 pm (UTC)
sarahthecoat: which I made (Default)
From: [personal profile] sarahthecoat
I was hanging up laundry this morning and had this thought, probably not new, but: I am imagining that part of the "immune system" in an invitation-based site like AO3 is that you can track who invited an abuser onto the site. I am thinking here more of systemic abuse than interpersonal harassment, but it could apply. IIRC, AO3 went through a rough patch with users posting illegal video feeds of sports events, or something, way outside AO3's intended use, not a "fanwork" in any sense, and it was challenging to stamp them out. If these users were inviting each other onto the site, and the chain of invitations could be tracked, then the people inviting the abusers could be shut out as well as the abusers? For all I know, that is what AO3 did.
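The mechanism would be simple enough: if every account records who invited it, you can walk the chain upward from a banned abuser and review (or suspend) the inviters too. A toy illustration, not how AO3 actually works:

```python
# Each account records its inviter; None means the public invite queue.
invited_by = {
    "abuser2": "abuser1",
    "abuser1": "recruiter",
    "recruiter": "legit_fan",
    "legit_fan": None,
}

def invite_chain(username):
    """Return the inviters above a user, nearest first."""
    chain = []
    inviter = invited_by.get(username)
    while inviter is not None:
        chain.append(inviter)
        inviter = invited_by.get(inviter)
    return chain

print(invite_chain("abuser2"))  # ['abuser1', 'recruiter', 'legit_fan']
```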
