# MSC3215: Aristotle: Moderation in all things

On large public channels (e.g. Matrix HQ), we have too many users abusing the room:

- Spammers
- Bullies
- Invite spammers
- ...

The Matrix community doesn't have enough moderators to handle all of this, in particular during
weekends/outside of office hours.

In an ideal world, we should not need to rely upon human moderators being awake to react to
such abuse, as many users tend to report these types of abuse very quickly. One could imagine,
for instance, that if 25 long-standing users of Matrix HQ report the same message of a new
user as spam, said user will be banned temporarily or permanently from the room and/or the
server as a spammer.

This proposal does NOT include a specific policy for kicking/banning. Rather, it redesigns the abuse
reporting mechanism to:

- decentralize it;
- produce formatted data that can be consumed by bots to decide whether action should be taken
against a user.

This proposal redesigns how abuse reports are posted, routed and treated to make it possible to
use bots to react to simple cases.

The expectation is that this will allow the Matrix community to experiment with bots that deal
with abuse reports intelligently.

## Proposal

Matrix specs offer a mechanism to report abuse. In this mechanism:

1. a user posts an abuse report for an event;
2. hopefully, the homeserver administrator for the user's homeserver will handle the abuse report.

In its current state, this mechanism is insufficient:

1. If the abuse report concerns an event in an encrypted room, the homeserver administrator typically
does not have access to that room (while a room moderator would), hence cannot act upon the report.
2. Many homeserver administrators do not wish to be moderators, especially in rooms in which they
do not participate themselves.
3. As the mechanism does not expose an API for reading the abuse reports, it is difficult to experiment
with bots that could help moderators.
4. As the mechanism is per-homeserver, reports from two users of the same room who happen to have accounts
on distinct homeservers cannot be collated.
5. There is no good mechanism to route a report by a user to a moderator, especially if they live on different
homeservers.

This proposal redesigns the abuse report spec and suggested behavior as follows:

- Any room can opt in for moderation.
- Rooms that opt in for moderation have a moderation room (specified as a state event). These moderation
rooms may be shared between several rooms and there may be a default moderation room for a homeserver.
- Posting an abuse report on a specific event from a room with moderation sends a data message to the
moderation room.
- As there may still be a need to report entire rooms, the current abuse report API remains in place for
reporting entire rooms, although it is expected that further MSCs will eventually deprecate this API.

> **Review comment:** Reporting an entire room to a room moderator probably doesn't make too much sense. I suppose reporting these to admins of homeservers that are in the room is the better route?
>
> **Author:** The idea is that, for the time being, we keep the reporting API, reporting only to our homeserver admin, and we'll work on improving this in future MSCs.

While this is not part of the MSC, the expectation is that the community may start experimenting with bots that
can be invited to moderation rooms to act upon abuse reports:

- a bot could pick these data messages and turn them into human-readable reports including context
and buttons to let moderators easily ignore/kick/ban/redact;
- a bot could collate reports, ignore those from recently registered users, and decide to kick/ban
reported users if some threshold is exceeded;
- ...

### Invariants

- Each room MAY have a state event `m.room.moderated_by`. If specified, this is the room ID towards which
abuse reports MUST be sent. As rooms may be deleted, `m.room.moderated_by` MAY be an invalid room ID.
A room that has a state event `m.room.moderated_by` supports moderation.

```jsonc
{
  "state_key": "m.room.moderated_by",
  "type": "m.room.moderated_by",
  "content": {
    "room_id": XXX, // The room picked for moderation.
    "user_id": XXX  // The bot in charge of forwarding reports to `room_id`.
  }
  // ... usual fields
}
```

> **Review comment:** I fail to see the purpose of this room. IIUC users never actually use this room, they likely don't even have access.
>
> **Review comment:** Probably to couple with ban lists as rooms.
>
> **Author:** The entire room, or specifying the room ID in the state event? If the former, well, we need to send abuse reports somewhere. The current abuse API has them sent to a proprietary admin API; we replace this with a standard room. At this stage, it's up to users and tooling to decide what they do with it. If the latter, the client needs a way to find where to post the abuse reports.
>
> **Reviewer:** Here I am talking about the room ID in the state event. Please elaborate, I thought they just talked to the bot.
>
> **Author:** The bot is just a delivery mechanism to send a message to the Moderation Room. The same bot may be used by several Moderation Rooms, so we need both the user ID of the bot (to talk to it) and the room ID of the Moderation Room (to tell it where to send the message).
>
> **Reviewer:** Why doesn't the bot have a mapping from source room to destination?
>
> **Author:** Yes, this could work too, if we require the bot to be stateful. I believe that the best way to do it, though, is to keep rooms themselves the source of truth, rather than some bot memory.
>
> **Reviewer:** I see. Does this mean that the bot has to peek into the "community room" to see where it should send the report to? Or is the bot expected to be part of that room already?
>
> **Author:** In the current status of the MSC, the user's client copies this value.
>
> **Reviewer:** Ah, I missed that. It still feels weird to me that we need to expose this to the user, but I'll consider this resolved until I have a better idea what to do here.

> **Review comment:** Why enforce that there is a bot? This seems over-complicated. Why not just provide a list of users that should be invited to a report room? This could be a bot, or it could just be the room admin(s), so the use of a bot is not mandated. This has a number of advantages: maybe one of the admins is causing the issue, and the user could choose to exclude that user (this benefit is lost in the case of a bot, but at least it is allowed in some cases). Furthermore, the user can use E2EE if they know the admin's keys, which removes an extra set of keys that need to be dealt with. It is also much simpler, especially for a small team of admins that manage a room or two: they don't need to set up rooms, bots or anything, yet they are still prepared if a (rare) abuse report comes in. In fact, I would even consider that clients recommend reporting to the users of highest power level in the room if this event is not present, so that there is some reasonable route for reporting abuse even if the room moderators haven't considered it. (People generally don't think about these issues until they happen.)
>
> **Author:** The main reason for there being a bot is that Matrix does not offer a built-in mechanism for users who are not members of a room (in this case, the Moderation Room) to post events towards that room. The bot is the simplest routing mechanism that I can think of. If I read your counter-proposal correctly, it does not address this (rather fundamental) issue. Also, what's the "reporting room"? If it's what I call the Community Room, receiving abuse reports in the same room as they were sent leads to immediate deanonymization of the reporters.
>
> **Reviewer:** It does address it, but it does effectively sidestep it: it allows you to use a bot which forwards to a room, or it lets you just use the room that the user created to send the report. It puts the choice to the moderation team. (By "reporting room" I mean the room the reporter created with the bot.) Your flow is completely reasonable, but it seems overly specified. There are many other valid moderation workflows that don't need or want this complexity: in a smallish community abuse reports will be rare, so a dedicated mod room for discussion is probably not necessary; mods may want to keep separate reports in different rooms for organization; and for many (probably most) communities the "mod team" is one person, so copying the abuse report to a mod room just lets that one mod discuss with themselves. What I am suggesting is that we drop everything about the bot and the moderation room from this MSC. It can still be implemented, but each mod team is left free to implement their own workflow, and there is room for different bots that work differently (and for no bot at all). This way we have a standardized method for reporting abuse, but the actual workflow for handling it stays flexible. TL;DR: I don't see the benefit of mandating the bot and its behaviour.
>
> **Author:** I believe that the true difference between your proposal and mine is that in yours, the abuse endpoint is a user (which may optionally connect to a Moderation Room, etc.) while in mine, the abuse endpoint is a room (which may optionally connect to users, etc.). I understand the point of minimizing; if I were to go this way, though, I'd probably come back with an MSC for the bot pretty soon :)
>
> **Reviewer:** I see a lot of value in specifying how the client reports abuse, but I'm not so convinced that the "report management" workflow you have proposed is sufficient for all use cases, and I don't see as much value in standardizing it. So I think it makes sense to get the reporting flow through, and to consider management workflows later if we find value in standardizing them. But who is expected to run the bot? Is it to be built into every homeserver? The bot workflow works even without being part of the spec: each mod team can use a bot that works for them instead of mandating a single bot that implements one workflow. It also isn't clear that your proposal can be cleanly cut down to a minimal version; is it just changing the bot name to an MXID or a list of MXIDs? If you can justify the value of specifying the bot, I am all ears :)
>
> **Author:** I believe we have reached a stage at which this conversation has stopped progressing. We both have arguments that make sense, but I feel that continuing this thread will simply block everything. Continuing work on this MSC more or less as is does not harm your proposal and should yield clear benefits in terms of both enabling experimentation (including experimentation on your proposal) and aiding the fight against abuse.
>
> **Reviewer:** I think the point that you are missing is that specing out the "second half" of this MSC right now is (mildly) harmful. I agree that what I am proposing is a subset of yours, but it makes sense to start with that subset, and I still do not see the benefits you see in specifying the second half. If there is no benefit to nailing something down in the spec, it is best not to specify it, to avoid unnecessary restrictions that may come back to bite us down the road. Simpler is better. Can we also say that the target must be a list of one element for now, so we don't need to break the API to allow multiple recipients later?
>
> **Author:** I'm ok with reducing the MSC to its first half and keeping the second half as an illustration of a possible workflow.
>
> **Reviewer:** That sounds good. Then we can consider things like a fallback with extensible events, or mandating the bot, in the future. For now we can assume that anyone who adds the metadata has a way to read the reports.
>
> **Review comment:** Having a single bot for routing moderation reports is a clear single point of failure: if the server running the bot goes down, no moderator from any server will be able to receive reports.
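
Returning to the state event itself: as an illustration (not part of the MSC), a client could use `m.room.moderated_by` as follows to decide whether a room supports moderation. The `room_state` shape, a mapping from `(event type, state key)` to event content, is a simplifying assumption, not a real SDK API.

```python
from typing import Optional, Tuple

def moderation_target(room_state: dict) -> Optional[Tuple[str, str]]:
    """Return (moderation_room_id, routing_bot_user_id), or None if the
    room has not opted in for moderation."""
    content = room_state.get(("m.room.moderated_by", "m.room.moderated_by"))
    if content is None:
        return None  # no m.room.moderated_by: the room is not moderated
    room_id = content.get("room_id")
    user_id = content.get("user_id")
    if not room_id or not user_id:
        return None  # malformed opt-in event; treat as unmoderated
    # Per the invariant above, room_id may point to a deleted room;
    # callers should be prepared for the report to be undeliverable.
    return room_id, user_id
```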

- Each room MAY have state events `m.room.moderation.moderator_of`: either with a plain state key,
advertising the Routing Bot used to forward reports to this room, or with a state key suffixed by the
ID of a Community Room being moderated (see the second example below).

```jsonc
{
  "state_key": "m.room.moderation.moderator_of", // A bot used to forward reports to this room.
  "type": "m.room.moderation.moderator_of",
  "content": {
    "user_id": XXX // The bot in charge of forwarding reports to this room.
  }
  // ... usual fields
}
```

```jsonc
{
  "state_key": "m.room.moderation.moderator_of.XXX", // XXX is the ID of the Community Room, i.e. the room being moderated.
  "type": "m.room.moderation.moderator_of",
  "content": {
    "user_id": XXX // The bot in charge of forwarding reports to this room.
  }
  // ... usual fields
}
```

### Client behavior

#### Opting in for moderation

When a user Alice creates a room ("the Community Room") or when a room moderator accesses the Community Room's configuration,
they MAY opt in for moderation. When they do, they MUST pick a Moderation Room. The Client SHOULD check that:

- the Moderation Room is a room in which Alice has a powerlevel sufficient for sending events;
- the Moderation Room has a state event `m.room.moderation.moderator_of`.

> **Review comment:** Why does Alice need to be able to send messages in the moderation room? What if people configure their moderation room to be a read-only stream of reports (disabling users other than the bot user from sending messages)?
>
> **Author:** Actually, I should have written "events" instead of "messages". Do you think I'm missing something?

If Alice has opted in for moderation, based on the Moderation Room's room ID and `m.room.moderation.moderator_of`, the Client
MUST create a state event `m.room.moderated_by` (see above) in the Community Room.

Similarly, if a moderator has opted in for moderation in a Community Room, a moderator MAY opt out of moderation for that
Community Room. This is materialized as deleting `m.room.moderated_by`.
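
A minimal sketch of this opt-in flow, assuming a hypothetical `client` object exposing `get_state_event(room_id, event_type, state_key)` and `send_state_event(room_id, event_type, content, state_key)`; these names come from neither the MSC nor a specific SDK, and the powerlevel check is elided.

```python
def opt_in_for_moderation(client, community_room_id: str,
                          moderation_room_id: str) -> None:
    # The chosen Moderation Room must advertise a Routing Bot.
    moderator_of = client.get_state_event(
        moderation_room_id,
        "m.room.moderation.moderator_of",
        "m.room.moderation.moderator_of",
    )
    if not moderator_of or "user_id" not in moderator_of:
        raise ValueError("chosen room is not set up as a Moderation Room")
    # Copy the Moderation Room's ID and the Routing Bot's user ID into
    # the Community Room, so that reporting clients can find both.
    client.send_state_event(
        community_room_id,
        "m.room.moderated_by",
        {"room_id": moderation_room_id, "user_id": moderator_of["user_id"]},
        "m.room.moderated_by",
    )
```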

#### Rejecting moderation

A member of a Moderation Room may disconnect the Moderation Room from a Community Room by removing the state event
`m.room.moderation.moderator_of.XXX`. This may serve to reconfigure moderation if a Community Room is deleted
or grows sufficiently to require its own dedicated moderation team/bots.
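
Removing a state event is conventionally expressed in Matrix by overwriting it with empty content; a sketch, with the same hypothetical `client` as above:

```python
def reject_moderation(client, moderation_room_id: str,
                      community_room_id: str) -> None:
    # Overwrite m.room.moderation.moderator_of.XXX with empty content,
    # disconnecting this Moderation Room from Community Room XXX.
    client.send_state_event(
        moderation_room_id,
        "m.room.moderation.moderator_of",
        {},
        "m.room.moderation.moderator_of." + community_room_id,
    )
```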

#### Reporting an event

Any member of a Community Room that supports moderation MAY report an event from that room, by sending an `m.abuse.report` event
with content:

| field | Description |
|----------|-------------|
| event_id | **Required** ID of the event being reported. |
| room_id | **Required** ID of the room in which the event took place. |
| moderated_by_id | **Required** ID of the moderation room, as taken from `m.room.moderated_by`. |
| nature | **Required** The nature of the event, see below. |
| reporter | **Required** The user reporting the event. |
| comment | Optional. String. A freeform description of the reason for sending this abuse report. |

`nature` is an enum:

- `m.abuse.nature.disagreement`: disagreement with another user;
- `m.abuse.nature.toxic`: toxic behavior, including insults, unsolicited invites;
- `m.abuse.nature.illegal`: illegal behavior, including child pornography, death threats, ...;
- `m.abuse.nature.spam`: commercial spam, propaganda, ... whether from a bot or a human user;
- `m.abuse.nature.room`: report the entire room, e.g. for voluntarily hosting behavior that violates server ToS;
- `m.abuse.nature.other`: doesn't fit in any category above.

> **Review comment:** I'm not sure what the moderators of a room should do with a report that the contents of the room violate the ToS of a server. Should it be up to the room moderators to ACL the server, or up to the homeserver to pull its users out of the room? If a user from a homeserver with a very restricted ToS happens to join your public room, it probably shouldn't be up to the room moderators to deal with that.
>
> **Author:** Yes, that's where the current abuse report API comes into play. I'll clarify this.

> **Review comment:** Hard-coding a list seems destined to failure. Maybe the list of forbidden content should be in the room somewhere? For example, what if adult content isn't allowed? Or discussion of drugs? I think it makes sense for the moderators to make this list. In particular, "a Client may give users the opportunity to think a little about whether the behavior they report truly is abuse" is very difficult when this spec-provided list may not be aligned in any way with what is actually allowed in the room.
>
> **Author:** I love the idea of making the list extensible; it definitely makes sense. On the other hand, if we think tooling, I believe that having a standardized list is the way to go: a standard list can be translated by clients, and it would be very useful for bots/tools. We can imagine a bot lurking both in the Moderation Room and in the Community Room that watches for specific natures, or a bot that files reports on GitLab and attaches as much context as it can possibly find. While this doesn't seem very complicated at first, I feel that the customization mechanism deserves its own MSC; I can rephrase the current MSC to leave room for it.
>
> **Reviewer:** Maybe we can have "well known" types of abuse that can be translated by clients, while moderators enter custom rules in the language(s) their community uses, just as they translate the ToS at signup or set the room topic. But I'm not sure the standard list provides a huge benefit: the reason would usually just be shown, and any filtering is probably customized by the mod team anyway. The client can provide a "quick list" of categories if useful, save it to the user account, or pull a server default; I don't think the spec is the best place to store a helpful list of abuse types. It also sounds like adding customization later is a breaking change; I would rather specify the customization mechanism from the onset so that clients don't need to be changed when it gets added, or at the least explicitly reserve some format for custom natures.
>
> **Author:** I don't think we can escape translation for the end-user, and making natures non-standard by default feels like a footgun: any case in which two tools need to communicate with each other benefits from a shared vocabulary. I don't think it is a breaking change if we specify that clients that do not support customization may fall back to the standard list. Once the MSC feels stable enough, I'm planning to prototype it (probably as part of develop.element.io and matrix.org) to gather feedback from actual moderators and end-users, and we need a few entries to bootstrap that. Are we in agreement that this MSC can start with a short list and that the customization mechanism can wait for a followup MSC?
>
> **Reviewer:** I'm still not sure how much value the well-known list provides, but it doesn't hurt much if it is optional to use and you can add custom ones. I agree that future MSCs can extend the list and add mechanisms to help clients suggest good defaults.

We expect that this enum will be amended by further MSCs.

The rationale for requiring a `nature` is twofold:

- a Client may give users the opportunity to think a little about whether the behavior they report truly is abuse;
- this gives the Client the ability to split between:
  - `m.abuse.nature.room`, which should be routed to an administrator (in the current MSC, using the existing moderation API);
  - `m.abuse.nature.disagreement`, which may better be handled by blurring messages from the offending user;
  - everything else, which needs to be handled by a room moderator or a bot (see the sketch below).
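
The split could look like the following sketch; the return values are illustrative labels invented here, not MSC vocabulary.

```python
def route_report(nature: str) -> str:
    """Decide where the client sends a report, per the rationale above."""
    if nature == "m.abuse.nature.room":
        return "admin"        # existing abuse report API, to the hs admin
    if nature == "m.abuse.nature.disagreement":
        return "local"        # e.g. blur/ignore the offending user locally
    return "moderation-room"  # DM the Routing Bot from m.room.moderated_by
```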

To send an `m.abuse.report`, the Client posts the `m.abuse.report` message as a DM to the `user_id` specified in
`m.room.moderated_by`.

This proposal does not specify behavior when `m.room.moderated_by` is not set or when the `user_id` doesn't exist.
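
For illustration, a complete report sent as a DM to the Routing Bot could look like the following; all identifiers below are made up for the example.

```jsonc
{
  "type": "m.abuse.report",
  "content": {
    "event_id": "$event:example.org",          // hypothetical
    "room_id": "!community:example.org",       // hypothetical
    "moderated_by_id": "!moderation:example.org", // hypothetical
    "nature": "m.abuse.nature.spam",
    "reporter": "@alice:example.org",
    "comment": "Posts the same link every few minutes."
  }
}
```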

### Built-in routing bot behavior

Users should not need to join the moderation room to be able to send `m.abuse.report` messages to it, as it would
let them snoop on reports from other users. Rather, we introduce a built-in bot as part of this specification: the
Routing Bot.

1. When the Routing Bot is invited to a room, it always accepts the invite.
2. When the Routing Bot receives a message other than `m.abuse.report`, it ignores the message.
3. When the Routing Bot receives a message _M_ with type `m.abuse.report` from Alice:
   - If the Routing Bot is not a member of _M_.`moderated_by_id`, reject the message.
   - If _M_.`reporter` is not Alice, reject the message.
   - If room _M_.`moderated_by_id` does not contain a state event `m.room.moderation.moderator_of.XXX`, where `XXX`
     is _M_.`room_id`, reject the message. Otherwise, call _S_ this state event.
   - If _S_ does not have type `m.room.moderation.moderator_of`, reject the message.
   - If _S_ is missing field `user_id`, reject the message.
   - If _S_.`user_id` is not the ID of the Routing Bot, reject the message.
   - Otherwise, copy the `content` of _M_ as a new `m.abuse.report` message into room _M_.`moderated_by_id`.

(A non-normative sketch of these checks appears below.)

> **Review comment:** (I'm writing this regardless of the status of the MSC in case it gets picked up again later by someone else, even if that's in another form.) It would be really useful for the client to give the room a distinct type. Currently in Mjolnir (which has a partial implementation of the routing bot) this behaviour is problematic as it clashes with the ...
>
> **Review comment:** I wouldn't be happy with a solution that requires a bot on my homeserver joining all rooms it's invited into. This seems too abusable. I want my server only participating in rooms that my users explicitly joined.

> **Review comment:** To be clear, this is an event with type ...
>
> **Author:** Right, it's a message event with type ...
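
The checks above can be summarized as a pure function. This is a non-normative sketch: `moderation_state` is assumed to map `(event type, state key)` to content for room _M_.`moderated_by_id` (or to be `None` if the bot is not a member), and `bot_user_id` is the Routing Bot's own Matrix ID. Keying by `(type, state_key)` subsumes the "S has the right type" check.

```python
from typing import Optional

def should_forward(report: dict, sender: str,
                   moderation_state: Optional[dict],
                   bot_user_id: str) -> bool:
    """Validate an incoming m.abuse.report. True means 'copy its content
    into the moderation room'; False means 'reject'."""
    if moderation_state is None:
        return False  # bot is not a member of moderated_by_id
    if report.get("reporter") != sender:
        return False  # reporter field must match the actual sender
    state_key = "m.room.moderation.moderator_of." + report.get("room_id", "")
    s = moderation_state.get(("m.room.moderation.moderator_of", state_key))
    if s is None:
        return False  # moderation room does not moderate this room
    if s.get("user_id") != bot_user_id:
        return False  # this bot is not the designated Routing Bot
    return True
```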

### Possible Moderation Bot behavior

This section is provided as an illustration of the spec, not as part of the spec.

A possible setup would involve two Moderation Bots, both members of a moderation room _MR_.

- A Classifier Bot consumes `m.abuse.report` messages, discards messages from users who have joined recently or have never
been active in the room (possible bots/sleeping bots), then collates reports against users. If there are more than,
e.g., 10 reports in the last hour against a single user, it posts a `m.policy.rule.user` message in the same room specifying that the user
should undergo a temporary ban.
- A Ban Bot consumes `m.policy.rule.user` messages and implements bans.
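
Like the section itself, the following is purely illustrative: a toy threshold rule for such a Classifier Bot. The threshold values and the in-memory store are assumptions, and the filtering of recently joined or inactive reporters is elided.

```python
import time
from collections import defaultdict, deque
from typing import Optional

REPORT_THRESHOLD = 10   # reports per window before proposing a ban
WINDOW_SECONDS = 3600   # one hour

_recent: dict = defaultdict(deque)  # accused user ID -> report timestamps

def on_abuse_report(accused: str, now: Optional[float] = None) -> bool:
    """Record one report against `accused`; return True if the bot should
    post an m.policy.rule.user event proposing a temporary ban."""
    now = time.time() if now is None else now
    q = _recent[accused]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop reports older than the window
    return len(q) >= REPORT_THRESHOLD
```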

## Security considerations

### Routing, 1

This proposal introduces a (very limited) mechanism that lets users send (some) events to a room without being part of that
room. There is the possibility that this mechanism could be abused.

We believe that it cannot readily be abused for spam, as these are structured data messages, which are usually not visible to members
of the moderation room.

However, it is possible that it can become a vector for attacks if combined with a bot that treats said structured data messages,
e.g. a Classifier Bot and/or a Ban Bot.

### Routing, 2

The Routing Bot does not have access to privileged information. In particular, it CANNOT check whether:

- Alice is a member of _M_.`room_id`;
- event _M_.`event_id` took place in room _M_.`room_id`;
- Alice could witness event _M_.`event_id`.

This means that **it is possible to send bogus abuse reports**, as is already the case with the current Abuse Report API.

This is probably something that SHOULD BE FIXED before merging this spec.

### Revealing oneself

If an end-user doesn't understand the difference between `m.abuse.nature.room` and other kinds of abuse report, there is the possibility
that this end-user may end up revealing themselves by sending a report against a moderator or against the room to the very
moderators of that room.

The author believes that this is a problem that can and should be solved by UX.

### Snooping administrators (user homeserver)

Consider the following case:

- homeserver compromised.org is administered by an evil administrator Marvin;
- user @alice:compromised.org is a member of Community Room _CR_;
- user @alice:compromised.org posts an abuse report against @bob:somewhere.org as a DM to the Routing Bot;
- Marvin can witness that @alice:compromised.org has sent a message to the Routing Bot
but cannot witness the contents of the message (assuming encryption);
- as @alice:compromised.org is a member of _CR_, Marvin can witness when @bob:somewhere.org is kicked/banned,
even if _CR_ is encrypted;
- Marvin can deduce that @alice:compromised.org has denounced @bob:somewhere.org.

> **Review comment:** Hmm, building encryption support into a bot that's part of the homeserver may be tricky...
>
> **Author:** Actually, it doesn't really have to be part of the homeserver. Truly, I'd probably prefer if it wasn't, because it would make all the retry logic easier to write without complicating the homeserver.

This is BAD. However, this is better than the current situation, in which Marvin can directly
read the report posted by @alice:compromised.org using the reporting API.

### Snooping administrators (moderator homeserver)

Consider the following case:

- homeserver compromised.org is administered by an evil administrator Marvin;
- user @alice:compromised.org is a moderator of room _CR_ with moderation room _MR_;
- user @bob:innocent.org is a member of room _CR_;
- @bob:innocent.org posts an abuse report as a DM to the Routing Bot;
- Marvin does not witness this;
- Marvin sees that the Routing Bot posts a message to _MR_, but the metadata does not
contain any data on @bob:innocent.org;
- if the room is encrypted, Marvin cannot determine that @bob:innocent.org has posted
an abuse report.

This is GOOD.

### Interfering administrator (moderation room)

Consider the following case:

- homeserver compromised.org is administered by an evil administrator Marvin;
- user @alice:compromised.org joins a moderation room _MR_;
- Marvin can impersonate @alice:compromised.org and set `m.room.moderation.moderator_of`
to point to a malicious bot EvilBot;
- when @alice:compromised.org becomes moderator for room _CR_ and sets _MR_ as moderation
room, EvilBot becomes the Routing Bot;
- every abuse report in room _CR_ is deanonymized by EvilBot.

This is BAD. This may suggest that the Routing Bot mechanism may be a bad idea.

### Interfering administrator (community room)

Consider the following case:

- homeserver compromised.org is administered by an evil administrator Marvin;
- user @alice:compromised.org is a moderator of room _CR_ with moderation room _MR_;
- Marvin can impersonate @alice:compromised.org and set `m.room.moderated_by`
to point to a moderation room under his control;
- every abuse report in room _CR_ is deanonymized by Marvin.

This is BAD. This actually suggests that the problem goes beyond the Routing Bot.

### Snooping bots

As bots are invited to moderation rooms, a compromised bot (whether it's the Routing Bot,
a Classifier Bot or a Ban Bot) has access to all moderation data for that room.

## Alternatives

### MSC 2938

MSC 2938 (by the same author) was previously posted to specify a mechanism for reporting events to room moderators. The current MSC is considered:

- more reliable (it does not need to roll its own federation communication);
- less specialized/more general.

I am not aware of other proposals that cover the same needs.

### Alternatives to the Routing Bot

The "knocking" protocol is an example of an API that lets users inject state events into a room in which they do
not belong. It is possible that we could follow the example of this protocol and implement a similar "abuse" API.

However, this would require implementing yet another new communication protocol based on PDUs/EDUs, including a
(small) custom encryption/certificate layer and another retry mechanism. The author believes that this would entail
a higher risk and result in code that is harder to test and trust.

> **Review comment:** Thanks for including this. I see now that the weight that comes with this solution is not just a new API, but all the scaffolding that goes along with it (including message secrecy).
>
> **Review comment:** From what I can tell this is not currently part of the MSC, but it could be in the future. On the point against adding another handshake, I am not too sure what the problem is: a certificate is not only not part of the MSC as it currently stands, but arguably isn't that useful, since any bad actor could spam reports for events they can see instead, which doesn't make a big difference to the moderators dealing with it. Not too sure what the trust and testing part is about though, since there is already precedent for using handshakes to send state to rooms a server is not a part of, so the code to perform it would likely not be much of a new thing, especially when compared to creating the concept of a routing bot.

## Unstable prefix

During experimentation:

- `m.room.moderated_by` will be prefixed `org.matrix.msc3215.room.moderation.moderated_by`;
- `m.room.moderation.moderator_of` will be prefixed `org.matrix.msc3215.room.moderation.moderator_of`;
- `m.abuse.report` will be prefixed `org.matrix.msc3215.abuse.report`;
- `m.abuse.nature.*` will be prefixed `org.matrix.msc3215.abuse.nature.*`.
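
For experimental implementations, this mapping might be kept as a simple table; the helper below is a convenience sketch, not part of the MSC.

```python
UNSTABLE = {
    "m.room.moderated_by": "org.matrix.msc3215.room.moderation.moderated_by",
    "m.room.moderation.moderator_of":
        "org.matrix.msc3215.room.moderation.moderator_of",
    "m.abuse.report": "org.matrix.msc3215.abuse.report",
}

def unstable_type(stable: str) -> str:
    """Map a stable identifier to its unstable equivalent, including
    the m.abuse.nature.* natures."""
    prefix = "m.abuse.nature."
    if stable.startswith(prefix):
        return "org.matrix.msc3215.abuse.nature." + stable[len(prefix):]
    return UNSTABLE.get(stable, stable)
```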

> **Review comment:** While probably not something which should be done in this proposal, extending this to spaces would be super useful. A room which is added as a child to a space with a moderation room set being required to also inherit that moderation room (and the ability for any bot to verify its PL in the new room and remove the room from the space if it can't moderate it) would make it possible to have moderated spaces where many more people can be trusted to add new rooms to a space (like people can add channels to a Slack team etc.).