In a [previous PR](https://github.com/matrix-org/dendrite/pull/3181) I
accidentally left GMSL pointing at a dev branch; this PR fixes that by
pointing it back at the main branch of GMSL.
Signed-off-by: `Sam Wedgwood <sam@wedgwood.dev>`
Adds the `org.matrix.msc3575.proxy` field (used for configuring sliding
sync) to /.well-known/matrix/client when Dendrite is serving that
endpoint and `well_known_sliding_sync_proxy` has been configured.
For example, config values of:
``` yaml
global:
  well_known_client_name: https://example.com
  well_known_sliding_sync_proxy: https://syncv3.example.com
```
result in a /.well-known/matrix/client response of:
``` json
{
  "m.homeserver": {
    "base_url": "https://example.com"
  },
  "org.matrix.msc3575.proxy": {
    "url": "https://syncv3.example.com"
  }
}
```
If `well_known_sliding_sync_proxy` is not provided, the JSON served at
/.well-known/matrix/client does not include the proxy field, i.e.
``` json
{
  "m.homeserver": {
    "base_url": "https://example.com"
  }
}
```
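For illustration, the handler logic amounts to something like the
following sketch (the handler shape and config field names here are
assumptions, not Dendrite's actual code):

``` go
package main

import (
	"encoding/json"
	"net/http"
)

// Illustrative stand-in for Dendrite's global config; field names assumed.
var cfg = struct {
	WellKnownClientName       string
	WellKnownSlidingSyncProxy string
}{
	WellKnownClientName:       "https://example.com",
	WellKnownSlidingSyncProxy: "https://syncv3.example.com",
}

func wellKnownClient(w http.ResponseWriter, _ *http.Request) {
	resp := map[string]any{
		"m.homeserver": map[string]any{"base_url": cfg.WellKnownClientName},
	}
	// Only advertise the sliding sync proxy when it is configured.
	if cfg.WellKnownSlidingSyncProxy != "" {
		resp["org.matrix.msc3575.proxy"] = map[string]any{
			"url": cfg.WellKnownSlidingSyncProxy,
		}
	}
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/.well-known/matrix/client", wellKnownClient)
	_ = http.ListenAndServe(":8008", nil)
}
```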
Fixes include:
- Translating state keys that contain user IDs to their respective room
keys for both querying and sending state events (a sketch follows this
list)
- **NOTE**: there may be design discussion needed on what should happen
when sender keys cannot be found for users
- A simple fix for kicking guests from rooms properly
- Logic for boundary history visibilities was slightly off (I'm
surprised this only manifested in pseudo ID room versions)
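A rough sketch of the state key translation from the first fix,
assuming a `QuerySenderIDForUser`-style lookup on the roomserver API
(the interface and plumbing here are illustrative):

``` go
package sketch

import (
	"context"
	"fmt"

	"github.com/matrix-org/gomatrixserverlib/spec"
)

// senderIDQuerier is an illustrative subset of the roomserver API.
type senderIDQuerier interface {
	QuerySenderIDForUser(ctx context.Context, roomID spec.RoomID, userID spec.UserID) (*spec.SenderID, error)
}

// translateStateKey maps a user-ID state key to that user's sender key
// (pseudo ID) in the room; other state keys pass through unchanged.
func translateStateKey(ctx context.Context, rsAPI senderIDQuerier, roomID spec.RoomID, stateKey string) (string, error) {
	userID, err := spec.NewUserID(stateKey, true)
	if err != nil {
		return stateKey, nil // not a user ID, nothing to translate
	}
	senderID, err := rsAPI.QuerySenderIDForUser(ctx, roomID, *userID)
	if err != nil {
		return "", err
	}
	if senderID == nil {
		// The open design question from this PR: what should happen
		// when no sender key can be found for the user?
		return "", fmt.Errorf("no sender ID for %q in %s", stateKey, roomID.String())
	}
	return string(*senderID), nil
}
```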
Signed-off-by: `Sam Wedgwood <sam@wedgwood.dev>`
This PR adds a config key `room_server.default_room_version` to set the
default room version used by the room server.
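A minimal sketch of the idea, with an illustrative type mirroring the
config key (the hard-coded fallback is an assumption):

``` go
package sketch

import "github.com/matrix-org/gomatrixserverlib"

// roomServerConfig is an illustrative stand-in for the room server's
// config section; the field mirrors `room_server.default_room_version`.
type roomServerConfig struct {
	DefaultRoomVersion gomatrixserverlib.RoomVersion `yaml:"default_room_version"`
}

// defaultRoomVersion prefers the configured value and falls back to a
// hard-coded default when the key is unset (fallback illustrative).
func defaultRoomVersion(cfg *roomServerConfig) gomatrixserverlib.RoomVersion {
	if cfg.DefaultRoomVersion != "" {
		return cfg.DefaultRoomVersion
	}
	return gomatrixserverlib.RoomVersionV10
}
```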
Signed-off-by: `Sam Wedgwood <sam@wedgwood.dev>`
This makes it easier to identify which service caused the error.
The feature just improves logging, thus no tests were added.
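For illustration only, a tagged log line could look like this (Dendrite
uses logrus; the `service` field name is an assumption):

``` go
package sketch

import (
	"errors"

	"github.com/sirupsen/logrus"
)

// logWithService tags an error with the service that produced it, so
// the source is visible in the logs (field name illustrative).
func logWithService(service string, err error) {
	logrus.WithField("service", service).WithError(err).Error("request failed")
}

var errExample = errors.New("connection refused")

// Example: logWithService("roomserver", errExample) logs roughly
// level=error msg="request failed" error="connection refused" service=roomserver
```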
### Pull Request Checklist
* [X] I have justified why this PR doesn't need tests
* [X] Pull request includes a [sign off below using a legally
identifiable
name](https://matrix-org.github.io/dendrite/development/contributing#sign-off)
_or_ I have already signed off privately
Signed-off-by: `Maximilian Berger <max@berger.name>`
Co-authored-by: Till <2353100+S7evinK@users.noreply.github.com>
There are cases where a Dendrite instance is unaware of a pseudo ID for
a user, for example when the user is not a member of that room. To
represent this case, we currently use the 'zero' value, which is often
not checked and so causes errors later down the line. To make this case
more explicit, and to be consistent with `QueryUserIDForSender`, this PR
changes this to use a pointer (with `nil` meaning no sender ID).
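A minimal sketch of the new shape (the function type stands in for the
real roomserver API):

``` go
package sketch

import (
	"context"
	"fmt"

	"github.com/matrix-org/gomatrixserverlib/spec"
)

// querySenderID stands in for the roomserver call; it now returns a
// *spec.SenderID, where nil explicitly means "no sender ID known".
type querySenderID func(ctx context.Context, roomID spec.RoomID, userID spec.UserID) (*spec.SenderID, error)

func senderIDOrError(ctx context.Context, query querySenderID, roomID spec.RoomID, userID spec.UserID) (spec.SenderID, error) {
	senderID, err := query(ctx, roomID, userID)
	if err != nil {
		return "", err
	}
	if senderID == nil {
		// Previously a zero value that callers could silently misuse;
		// nil now forces this case to be handled.
		return "", fmt.Errorf("%s has no sender ID in %s", userID.String(), roomID.String())
	}
	return *senderID, nil
}
```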
Signed-off-by: `Sam Wedgwood <sam@wedgwood.dev>`
@S7evinK sorry for the spam, but any chance we can get this merged into
main at some point? It was previously merged in
https://github.com/matrix-org/dendrite/pull/3021 into a temp branch that
never made it into main. If there is an issue with this being merged let
me know.
---
Minor update to the Helm chart to allow setting the update strategy,
since the default `RollingUpdate` strategy is a bit annoying when using
`ReadWriteOnce` volumes for media. Hope this makes sense.
---
### Pull Request Checklist
* [x] ~~I have added Go unit tests or [Complement integration
tests](https://github.com/matrix-org/complement) for this PR _or_ I have
justified why this PR doesn't need tests~~ Haven't touched any go files.
* [x] Pull request includes a [sign off below using a legally
identifiable
name](https://matrix-org.github.io/dendrite/development/contributing#sign-off)
_or_ I have already signed off privately
Signed-off-by: `George Antoniadis <george@noodles.gr>` [skip ci]
Background federated joins are currently broken since they time out
after 30s. This timeout didn't exist before the refactor. It should
still exist, but it needs to be extended to allow for the additional
time it can take a server to generate the /send_join response when
joining a complex room.
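A sketch of the intended shape, keeping a timeout but a far more
generous one (the 5 minutes here is purely illustrative):

``` go
package sketch

import (
	"context"
	"time"
)

// joinCtx keeps a timeout for background federated joins, but a much
// larger one than 30s, leaving room for slow /send_join responses in
// complex rooms.
func joinCtx() (context.Context, context.CancelFunc) {
	return context.WithTimeout(context.Background(), 5*time.Minute)
}
```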
The previous version was getting **ALL** membership events (as
`ClientEvents`, so going through `NewEventFromTrustedJSONWithID`) for a
given room.
Now we are querying only locally joined users as `ClientEvents`, which
should **significantly** reduce allocations.
Take for example a large room with 2k membership events but only 1
local user: we avoid 1999 `NewEventFromTrustedJSONWithID` calls just to
calculate the `roomSize`, which we can also query by other means.
This is also getting called for every `OutputRoomEvent` in the userAPI.
Benchmark with 1 local user and 100 remote users.
```
pkg: github.com/matrix-org/dendrite/userapi/consumers
cpu: 12th Gen Intel(R) Core(TM) i5-12500H
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
LocalRoomMembers-16 375.9µ ± 7% 327.6µ ± 6% -12.85% (p=0.000 n=10)
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
LocalRoomMembers-16 79.426Ki ± 0% 8.507Ki ± 0% -89.29% (p=0.000 n=10)
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
LocalRoomMembers-16 1015.0 ± 0% 277.0 ± 0% -72.71% (p=0.000 n=10)
```
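Roughly, the shape of the change, with illustrative storage interfaces
rather than Dendrite's actual ones:

``` go
package sketch

import "context"

// membershipDB is an illustrative storage interface: the point is to
// query counts and local members directly instead of parsing every
// membership event with NewEventFromTrustedJSONWithID.
type membershipDB interface {
	JoinedMemberCount(ctx context.Context, roomNID int64) (int, error)
	LocallyJoinedUsers(ctx context.Context, roomNID int64) ([]string, error)
}

func roomStats(ctx context.Context, db membershipDB, roomNID int64) (roomSize int, localMembers []string, err error) {
	// roomSize no longer requires materialising thousands of events.
	if roomSize, err = db.JoinedMemberCount(ctx, roomNID); err != nil {
		return
	}
	// Only the (usually few) local users are fetched as full events.
	localMembers, err = db.LocallyJoinedUsers(ctx, roomNID)
	return
}
```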
Since the removal of `build.sh`, we don't include any information about
the revision Dendrite was built from. Since go1.18, the revision a
binary was built from is automatically included, so we can try to get
that instead.
This also adds a `dendrite_up` metric showing the current version
(`dendrite_up{version="0.13.1+c796f20"} 1`)
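Reading the embedded revision uses the real `runtime/debug` API and
looks roughly like this (the fallback string is illustrative):

``` go
package main

import (
	"fmt"
	"runtime/debug"
)

// revision returns the VCS revision embedded by the Go toolchain,
// which happens automatically for `go build` since go1.18.
func revision() string {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		return "unknown"
	}
	for _, setting := range info.Settings {
		if setting.Key == "vcs.revision" {
			return setting.Value
		}
	}
	return "unknown"
}

func main() {
	fmt.Println(revision()) // e.g. "c796f20..." when built from git
}
```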
Closes #2993
If old messages build up in the input stream and do not get processed
successfully, this can create a significant drift between the stream
first sequence and the consumer ack floors, which results in a slow and
expensive start-up when interest-based retention is in use.
If a message has sat in the stream for 24 hours, it's probably not
going to get processed successfully, so let NATS drop it instead.
Dendrite can reconcile by fetching missing events later if it needs to.
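With the nats.go client, the retention change amounts to setting
`MaxAge` on the stream config; a sketch, with an illustrative stream
name and none of the other stream settings reconciled:

``` go
package main

import (
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	js, err := nc.JetStream()
	if err != nil {
		panic(err)
	}
	// Sketch only: drop input events older than 24h; Dendrite can
	// backfill missing events later if it needs them.
	if _, err = js.UpdateStream(&nats.StreamConfig{
		Name:   "DendriteInputRoomEvents", // illustrative stream name
		MaxAge: 24 * time.Hour,
	}); err != nil {
		panic(err)
	}
}
```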
---------
Co-authored-by: Neil Alexander <neilalexander@users.noreply.github.com>
The syncapi operates using user IDs, so when querying for the previous
state event we need to look up the user ID for the given sender ID
before making the state query.
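A sketch of that lookup, mirroring the `QueryUserIDForSender` signature
(the surrounding code is illustrative):

``` go
package sketch

import (
	"context"
	"fmt"

	"github.com/matrix-org/gomatrixserverlib/spec"
)

// userIDQuerier mirrors the roomserver's QueryUserIDForSender API.
type userIDQuerier func(ctx context.Context, roomID spec.RoomID, senderID spec.SenderID) (*spec.UserID, error)

// stateKeyForSender resolves a sender ID to a user ID so the syncapi
// can query the previous state event, which is keyed by user ID.
func stateKeyForSender(ctx context.Context, query userIDQuerier, roomID spec.RoomID, senderID spec.SenderID) (string, error) {
	userID, err := query(ctx, roomID, senderID)
	if err != nil {
		return "", err
	}
	if userID == nil {
		return "", fmt.Errorf("no user ID for sender %q", senderID)
	}
	return userID.String(), nil
}
```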
When we're adding state to the database, we check which eventNIDs are
already in a block; if we already have an eventNID, we remove it from
the list. In its current form we would skip over eventNIDs whenever we
had just found a match, because we were decrementing `i` twice.
My theory is that when we later fetch the state blocks, we receive "too
many" eventNIDs (well, yes, we stored too many), which may or may not
result in state resets when comparing different state snapshots. (e.g.
when adding state we stored an eventNID by accident because we skipped
it; later we add more state and don't store it again, because this time
we don't skip it)
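The bug class is easiest to see in a small self-contained sketch of
in-place removal from a slice (illustrative, not the exact Dendrite
code):

``` go
package main

import "fmt"

func main() {
	existing := map[int]bool{2: true, 3: true} // eventNIDs already in a block
	eventNIDs := []int{1, 2, 3, 4}

	for i := 0; i < len(eventNIDs); i++ {
		if existing[eventNIDs[i]] {
			eventNIDs = append(eventNIDs[:i], eventNIDs[i+1:]...)
			// Decrement i exactly once, so the element shifted into
			// position i is re-examined. Decrementing twice (the bug)
			// skips that element: here, 3 would survive and be stored
			// again even though it already exists in a block.
			i--
		}
	}
	fmt.Println(eventNIDs) // [1 4]
}
```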