Give them a read, think about the underlying pattern, decide which of those 'censorings' you think are reasonable, and which are not... and why you hold those views.
I'd like to skip over the incredibly stupid deep-dive into a) whether or not "cancel culture" is a good name for the phenomenon under discussion and b) whether or not this phenomenon is new. I don't give a rat's ass if people agree with the nomenclature, or if it's been around for a million years - I haven't given a crap about that since page 1.
I'd also really like to avoid the shallow slings of partisan politics insisting that it's all one side or the other doing the cancelling, or that it's all one side or the other complaining about it. It's neither of those things; it's a mix that affects all political viewpoints.
I'd much rather discuss whether or not this type of socially-prompted censorship and deplatforming is ethical or reasonable. I'd also like to discuss whether there are any feasible ways to make sure it only happens in reasonable ways, and what the risk is of it getting out of hand and turning into a mob-based style of McCarthyism, enforcing moral conformity.
Those are my concerns and the things I'm interested in discussing.
Fair enough, I'm definitely guilty of wallowing a bit too much in the semantics in this thread and I'm happy to talk about the meat.
On the first article: the issue of YouTube putting restrictions on livestreaming.
Here's my background framing.
YouTube, like many social media platforms, has been used to do a lot of unsavory things. Social media has even been cited as a major force in organizing the Rohingya genocide. YouTube was certainly used to spread a lot of political misinformation around US elections.
I think social media companies are right to be concerned about being used to organize violence or spread misinformation, and they're right to take steps to curb those sorts of misuse. I fully acknowledge their motivations are probably not all puppy dogs and patriotism. But whether it's fear of their brand being tainted, fear that government may step in if they can't show they're regulating themselves, or, on the off chance, genuine ethical concern, those motivations don't matter too much, because they all converge on "Do what you can to at least try to minimize the dangerous stuff."
Social media services are in a weird liminal space that we as a society haven't really fully understood yet, and I think we fail when we try to shove them in categories that don't fit.
They're not really like a publisher in the traditional sense. They don't and can't scrutinize every piece of content.
But they're not really like a neutral service like UPS either.
If you send something weird through UPS, that's a communication between two individuals. What people post on YT becomes part of their ecosystem of search results, recommendations, communities, etc.
So we can't say the content is none of their business, the way it is for a package shipping company, because the content spills out in a lot of ways. That's what makes it "social". And we can't say they need to individually approve every video with minute discretion. The volume is so huge that it would be impossible, and that high volume is part of their social model. And even if they could make millions and millions of judgement calls every day, people probably wouldn't like that any better.
So the only tools available are going to be broader guidelines.
I wasn't inside the YT meetings where they decided that banning livestreams showing guns was a rule they were going to enforce. And I'm not going to say it's a perfect rule.
But I think however they set the rules to try to manage the real ugly stuff, there's bound to be collateral damage. And that's a shame, but I can't see a great alternative.
I do value independent journalism. But I think if an independent media outlet of ANY kind relies on one other business for everything, it's going to be vulnerable to policy decisions that have nothing to do with it.
There was similar anger when both YT and FB changed their algorithms in ways that made things much harder for some independent creators who had been relying on the two services for their livelihood. A couple of creators I really enjoyed had to stop making content because it just wasn't profitable anymore. And that sucks. But I don't see retaining the old model as something YT owed them, either morally or legally.
I recognize that YT is a profit-driven company and they're going to act in their own best interest.
In all, I don't see this as part of a cultural push to enforce values and punish any kind of wrong thinking. I see a crude, practical attempt to limit truly harmful content. And unfortunately, crude might be the only option right now.