This repository was archived by the owner on Apr 26, 2024. It is now read-only.

Commit

Merge branch 'develop' into madlittlemods/13856-fix-have-seen-events-not-being-invalidated
MadLittleMods committed Sep 27, 2022
2 parents af93b3c + f5aaa55 commit 0d0f54e
Showing 28 changed files with 350 additions and 189 deletions.
60 changes: 34 additions & 26 deletions CHANGES.md

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions changelog.d/13839.misc
@@ -0,0 +1 @@
Carry IdP Session IDs through user-mapping sessions.
1 change: 1 addition & 0 deletions changelog.d/13867.misc
@@ -0,0 +1 @@
Correct the comments in the complement dockerfile.
1 change: 1 addition & 0 deletions changelog.d/13872.bugfix
@@ -0,0 +1 @@
Faster room joins: Fix a bug introduced in 1.66.0 where an error would be logged when syncing after joining a room.
1 change: 1 addition & 0 deletions changelog.d/13885.misc
@@ -0,0 +1 @@
Correctly handle a race with device lists when a remote user leaves during a partial join.
1 change: 1 addition & 0 deletions changelog.d/13892.feature
@@ -0,0 +1 @@
Faster remote room joins: record _when_ we first partial-join to a room.
1 change: 1 addition & 0 deletions changelog.d/13914.misc
@@ -0,0 +1 @@
Complement image: propagate SIGTERM to all workers.
1 change: 1 addition & 0 deletions changelog.d/13920.feature
@@ -0,0 +1 @@
Support a `dir` parameter on the `/relations` endpoint per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715).
1 change: 1 addition & 0 deletions changelog.d/13922.bugfix
@@ -0,0 +1 @@
Fix a long-standing bug where device updates could cause delays sending out to-device messages over federation.
6 changes: 6 additions & 0 deletions debian/changelog
@@ -5,6 +5,12 @@ matrix-synapse-py3 (1.69.0~rc1+nmu1) UNRELEASED; urgency=medium

-- Synapse Packaging team <[email protected]> Mon, 26 Sep 2022 18:05:09 +0100

matrix-synapse-py3 (1.68.0) stable; urgency=medium

* New Synapse release 1.68.0.

-- Synapse Packaging team <[email protected]> Tue, 27 Sep 2022 12:02:09 +0100

matrix-synapse-py3 (1.68.0~rc2) stable; urgency=medium

* New Synapse release 1.68.0rc2.
22 changes: 9 additions & 13 deletions docker/complement/Dockerfile
@@ -8,27 +8,23 @@

ARG SYNAPSE_VERSION=latest

# first of all, we create a base image with a postgres server and database,
# which we can copy into the target image. For repeated rebuilds, this is
# much faster than apt installing postgres each time.
#
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).

# now build the final image, based on the Synapse image.

FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
# First of all, we copy the postgres server from the official postgres image,
# since for repeated rebuilds, this is much faster than apt installing
# postgres each time.

# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres:13-bullseye /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres:13-bullseye /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data

# initialise the database cluster in /var/lib/postgresql
# We also initialize the database at build time, rather than runtime, so that it's faster to spin up the image.
RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password

# Configure a password and create a database for Synapse
39 changes: 31 additions & 8 deletions docs/upgrade.md
@@ -15,9 +15,8 @@ this document.
The website <https://endoflife.date> also offers convenient
summaries.

- If Synapse was installed using [prebuilt
packages](setup/installation.md#prebuilt-packages), you will need to follow the
normal process for upgrading those packages.
- If Synapse was installed using [prebuilt packages](setup/installation.md#prebuilt-packages),
you will need to follow the normal process for upgrading those packages.

- If Synapse was installed using pip then upgrade to the latest
version by running:
@@ -91,10 +90,34 @@ process, for example:
# Upgrading to v1.68.0
As announced in the upgrade notes for v1.67.0, Synapse now requires a SQLite
version of 3.27.0 or higher if SQLite is in use and source checkouts of Synapse
now require a recent Rust compiler.
Two changes announced in the upgrade notes for v1.67.0 have now landed in v1.68.0.
## SQLite version requirement
Synapse now requires a SQLite version of 3.27.0 or higher if SQLite is configured as
Synapse's database.

Installations using

- Docker images [from `matrixdotorg`](https://hub.docker.com/r/matrixdotorg/synapse),
- Debian packages [from Matrix.org](https://packages.matrix.org/), or
- a PostgreSQL database

are not affected.
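
For source installs that do use SQLite, a minimal check of the SQLite version linked into Python is sketched below; this snippet is an editorial illustration (the v1.67.0 notes further down reference the same kind of check), not part of the upstream upgrade notes.

```python
# Minimal sketch: print the SQLite version linked into this Python interpreter.
# Synapse 1.68.0 requires 3.27.0 or higher when SQLite is the configured database.
import sqlite3

print(sqlite3.sqlite_version)
```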

## Rust requirement when building from source

Building from a source checkout of Synapse now requires a recent Rust compiler
(currently Rust 1.58.1, but see also the
[Platform Dependency Policy](https://matrix-org.github.io/synapse/latest/deprecation_policy.html)).

Installations using

- Docker images [from `matrixdotorg`](https://hub.docker.com/r/matrixdotorg/synapse),
- Debian packages [from Matrix.org](https://packages.matrix.org/), or
- PyPI wheels via `pip install matrix-synapse` (on supported platforms and architectures)

will not be affected.

# Upgrading to v1.67.0

@@ -128,12 +151,12 @@ The simplest way of installing Rust is via [rustup.rs](https://rustup.rs/)

## SQLite version requirement in the next release

From the next major release (v1.68.0) Synapse will require SQLite 3.27.0 or
higher. Synapse v1.67.0 will be the last major release supporting SQLite
versions 3.22 to 3.26.

Those using Docker images or Debian packages from Matrix.org will not be
affected. If you have installed from source, you should check the version of
SQLite used by Python with:

```shell
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -57,7 +57,7 @@ manifest-path = "rust/Cargo.toml"

[tool.poetry]
name = "matrix-synapse"
version = "1.68.0rc2"
version = "1.68.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <[email protected]>"]
license = "Apache-2.0"
2 changes: 1 addition & 1 deletion synapse/app/admin_cmd.py
@@ -53,9 +53,9 @@

class AdminCmdSlavedStore(
SlavedFilteringStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedDeviceStore,
TagsWorkerStore,
DeviceInboxWorkerStore,
AccountDataWorkerStore,
32 changes: 30 additions & 2 deletions synapse/app/complement_fork_starter.py
@@ -51,11 +51,18 @@
import importlib
import itertools
import multiprocessing
import os
import signal
import sys
from typing import Any, Callable, List
from types import FrameType
from typing import Any, Callable, Dict, List, Optional

from twisted.internet.main import installReactor

# A map from signal number to the original signal handler, before we installed
# our custom ones. We restore these in our child processes.
_original_signal_handlers: Dict[int, Any] = {}


class ProxiedReactor:
"""
@@ -105,6 +112,11 @@ def _worker_entrypoint(

sys.argv = args

# reset the custom signal handlers that we installed, so that the children start
# from a clean slate.
for sig, handler in _original_signal_handlers.items():
signal.signal(sig, handler)

from twisted.internet.epollreactor import EPollReactor

proxy_reactor._install_real_reactor(EPollReactor())
@@ -167,13 +179,29 @@ def main() -> None:
update_proc.join()
print("===== PREPARED DATABASE =====", file=sys.stderr)

processes: List[multiprocessing.Process] = []

# Install signal handlers to propagate signals to all our children, so that they
# shut down cleanly. This also inhibits our own exit, but that's good: we want to
# wait until the children have exited.
def handle_signal(signum: int, frame: Optional[FrameType]) -> None:
print(
f"complement_fork_starter: Caught signal {signum}. Stopping children.",
file=sys.stderr,
)
for p in processes:
if p.pid:
os.kill(p.pid, signum)

for sig in (signal.SIGINT, signal.SIGTERM):
_original_signal_handlers[sig] = signal.signal(sig, handle_signal)

# At this point, we've imported all the main entrypoints for all the workers.
# Now we basically just fork() out to create the workers we need.
# Because we're using fork(), all the workers get a clone of this launcher's
# memory space and don't need to repeat the work of loading the code!
# Instead of using fork() directly, we use the multiprocessing library,
# which uses fork() on Unix platforms.
processes = []
for (func, worker_args) in zip(worker_functions, args_by_worker):
process = multiprocessing.Process(
target=_worker_entrypoint, args=(func, proxy_reactor, worker_args)
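
The fork-starter diff above installs SIGINT/SIGTERM handlers in the launcher that forward the signal to every worker, and has each child restore the original handlers so it starts from a clean slate. A self-contained sketch of that pattern is below; it is illustrative only, and the names (`_child`, `_forward`) are hypothetical rather than Synapse's.

```python
import os
import signal
import sys
import time
from multiprocessing import Process
from types import FrameType
from typing import Any, Dict, List, Optional

# Map from signal number to the original handler, captured before we install
# our own, so that child processes can restore them.
_original_handlers: Dict[int, Any] = {}
processes: List[Process] = []


def _child(name: str) -> None:
    # Children start from a clean slate: restore the handlers we saved.
    for sig, handler in _original_handlers.items():
        signal.signal(sig, handler)
    while True:
        time.sleep(1)  # stand-in for a worker's main loop


def _forward(signum: int, frame: Optional[FrameType]) -> None:
    # Forward the signal to every child; the launcher then waits for them to exit.
    print(f"launcher: caught signal {signum}, stopping children", file=sys.stderr)
    for p in processes:
        if p.pid:
            os.kill(p.pid, signum)


if __name__ == "__main__":
    for sig in (signal.SIGINT, signal.SIGTERM):
        _original_handlers[sig] = signal.signal(sig, _forward)

    for name in ("worker1", "worker2"):
        proc = Process(target=_child, args=(name,))
        proc.start()
        processes.append(proc)

    for proc in processes:
        proc.join()  # blocks until the children have shut down
```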
29 changes: 16 additions & 13 deletions synapse/federation/sender/per_destination_queue.py
@@ -646,29 +646,32 @@ async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:

# We start by fetching device related EDUs, i.e device updates and to
# device messages. We have to keep 2 free slots for presence and rr_edus.
limit = MAX_EDUS_PER_TRANSACTION - 2

device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
limit
)

if device_update_edus:
self._device_list_id = dev_list_id
else:
self.queue._last_device_list_stream_id = dev_list_id

limit -= len(device_update_edus)
device_edu_limit = MAX_EDUS_PER_TRANSACTION - 2

# We prioritize to-device messages so that existing encryption channels
# work. We also keep a few slots spare (by reducing the limit) so that
# we can still trickle out some device list updates.
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(limit)
) = await self.queue._get_to_device_message_edus(device_edu_limit - 10)

if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id

device_edu_limit -= len(to_device_edus)

device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
device_edu_limit
)

if device_update_edus:
self._device_list_id = dev_list_id
else:
self.queue._last_device_list_stream_id = dev_list_id

pending_edus = device_update_edus + to_device_edus

# Now add the read receipt EDU.
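
The reordering above gives to-device messages first claim on the transaction's EDU budget: two slots stay reserved for presence and read receipts, ten more are held back so device-list updates can still trickle out, and whatever remains after the to-device messages goes to device-list updates. A rough sketch of that budgeting follows, with a hypothetical function and a stand-in value for `MAX_EDUS_PER_TRANSACTION`.

```python
from typing import List, Tuple

# Stand-in for Synapse's MAX_EDUS_PER_TRANSACTION; the real constant lives in
# synapse/federation/sender/per_destination_queue.py.
MAX_EDUS_PER_TRANSACTION = 100


def budget_device_edus(
    to_device_queue: List[str], device_update_queue: List[str]
) -> Tuple[List[str], List[str]]:
    """Illustrative split of one transaction's EDU budget.

    Keep 2 slots free for presence and read receipts, send to-device messages
    first (minus a 10-slot reserve), then fill the remainder with device-list
    updates.
    """
    device_edu_limit = MAX_EDUS_PER_TRANSACTION - 2

    # To-device messages are prioritized so existing encryption channels keep
    # working, but 10 slots are held back for device-list updates.
    to_device_edus = to_device_queue[: device_edu_limit - 10]
    device_edu_limit -= len(to_device_edus)

    device_update_edus = device_update_queue[:device_edu_limit]
    return to_device_edus, device_update_edus
```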
14 changes: 13 additions & 1 deletion synapse/handlers/federation.py
@@ -581,7 +581,11 @@ async def do_invite_join(
# Mark the room as having partial state.
# The background process is responsible for unmarking this flag,
# even if the join fails.
await self.store.store_partial_state_room(room_id, ret.servers_in_room)
await self.store.store_partial_state_room(
room_id=room_id,
servers=ret.servers_in_room,
device_lists_stream_id=self.store.get_device_stream_token(),
)

try:
max_stream_id = (
@@ -606,6 +610,14 @@
room_id,
)
raise LimitExceededError(msg=e.msg, errcode=e.errcode, retry_after_ms=0)
else:
# Record the join event id for future use (when we finish the full
# join). We have to do this after persisting the event to keep foreign
# key constraints intact.
if ret.partial_state:
await self.store.write_partial_state_rooms_join_event_id(
room_id, event.event_id
)
finally:
# Always kick off the background process that asynchronously fetches
# state for the room.
9 changes: 9 additions & 0 deletions synapse/handlers/sso.py
@@ -147,6 +147,9 @@ class UsernameMappingSession:
# A unique identifier for this SSO provider, e.g. "oidc" or "saml".
auth_provider_id: str

# An optional session ID from the IdP.
auth_provider_session_id: Optional[str]

# user ID on the IdP server
remote_user_id: str

@@ -464,6 +467,7 @@ async def complete_sso_login_request(
client_redirect_url,
next_step_url,
extra_login_attributes,
auth_provider_session_id,
)

user_id = await self._register_mapped_user(
@@ -585,6 +589,7 @@ async def _redirect_to_next_new_user_step(
client_redirect_url: str,
next_step_url: bytes,
extra_login_attributes: Optional[JsonDict],
auth_provider_session_id: Optional[str],
) -> NoReturn:
"""Creates a UsernameMappingSession and redirects the browser
@@ -607,6 +612,8 @@
extra_login_attributes: An optional dictionary of extra
attributes to be provided to the client in the login response.
auth_provider_session_id: An optional session ID from the IdP.
Raises:
RedirectException
"""
@@ -615,6 +622,7 @@
now = self._clock.time_msec()
session = UsernameMappingSession(
auth_provider_id=auth_provider_id,
auth_provider_session_id=auth_provider_session_id,
remote_user_id=remote_user_id,
display_name=attributes.display_name,
emails=attributes.emails,
@@ -968,6 +976,7 @@ async def register_sso_user(self, request: Request, session_id: str) -> None:
session.client_redirect_url,
session.extra_login_attributes,
new_user=True,
auth_provider_session_id=session.auth_provider_session_id,
)

def _expire_old_sessions(self) -> None:
22 changes: 19 additions & 3 deletions synapse/handlers/sync.py
@@ -1191,7 +1191,9 @@ async def _find_missing_partial_state_memberships(
room_id: The partial state room to find the remaining memberships for.
members_to_fetch: The memberships to find.
events_with_membership_auth: A mapping from user IDs to events whose auth
events are known to contain their membership.
events would contain their prior membership, if one exists.
Note that join events will not cite a prior membership if a user has
never been in a room before.
found_state_ids: A dict from (type, state_key) -> state_event_id, containing
memberships that have been previously found. Entries in
`members_to_fetch` that have a membership in `found_state_ids` are
@@ -1201,6 +1203,10 @@
A dict from ("m.room.member", state_key) -> state_event_id, containing the
memberships missing from `found_state_ids`.
When `events_with_membership_auth` contains a join event for a given user
which does not cite a prior membership, no membership is returned for that
user.
Raises:
KeyError: if `events_with_membership_auth` does not have an entry for a
missing membership. Memberships in `found_state_ids` do not need an
@@ -1218,8 +1224,18 @@
if (EventTypes.Member, member) in found_state_ids:
continue

missing_members.add(member)
event_with_membership_auth = events_with_membership_auth[member]
is_join = (
event_with_membership_auth.is_state()
and event_with_membership_auth.type == EventTypes.Member
and event_with_membership_auth.state_key == member
and event_with_membership_auth.content.get("membership")
== Membership.JOIN
)
if not is_join:
# The event must include the desired membership as an auth event, unless
# it's the first join event for a given user.
missing_members.add(member)
auth_event_ids.update(event_with_membership_auth.auth_event_ids())

auth_events = await self.store.get_events(auth_event_ids)
@@ -1243,7 +1259,7 @@
auth_event.type == EventTypes.Member
and auth_event.state_key == member
):
missing_members.remove(member)
missing_members.discard(member)
additional_state_ids[
(EventTypes.Member, member)
] = auth_event.event_id
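
The sync change above handles the case where the only event citing a user's membership is that user's own join: a first-ever join has no prior membership to point at, so it must not be flagged as missing. A simplified predicate for that rule is sketched below (a hypothetical helper working on a plain event dict, not Synapse's `EventBase`).

```python
from typing import Any, Mapping


def cites_no_prior_membership(event: Mapping[str, Any], member: str) -> bool:
    """True if `event` is `member`'s own join event, i.e. it may legitimately
    have no prior membership event among its auth events."""
    return (
        event.get("type") == "m.room.member"
        and event.get("state_key") == member
        and event.get("content", {}).get("membership") == "join"
    )
```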
