commit 19a9166eb0 (parent 9599b3686e)
Author: Neil Alexander
Date: 2022-05-11 15:39:36 +01:00
Signature: GPG key ID A02A2019A2BB0944 (no known key found for this signature in database)
39 changed files with 1483 additions and 944 deletions

.gitignore (7 lines changed)

@ -41,6 +41,10 @@ _testmain.go
*.test
*.prof
*.wasm
+*.aar
+*.jar
+*.framework
+*.xcframework

# Generated keys
*.pem
@ -65,4 +69,7 @@ test/wasm/node_modules
# Ignore complement folder when running locally
complement/

+# Stuff from GitHub Pages
+docs/_site
+
media_store/

README.md

@ -1,4 +1,5 @@
# Dendrite
[![Build status](https://github.com/matrix-org/dendrite/actions/workflows/dendrite.yml/badge.svg?event=push)](https://github.com/matrix-org/dendrite/actions/workflows/dendrite.yml) [![Dendrite](https://img.shields.io/matrix/dendrite:matrix.org.svg?label=%23dendrite%3Amatrix.org&logo=matrix&server_fqdn=matrix.org)](https://matrix.to/#/#dendrite:matrix.org) [![Dendrite Dev](https://img.shields.io/matrix/dendrite-dev:matrix.org.svg?label=%23dendrite-dev%3Amatrix.org&logo=matrix&server_fqdn=matrix.org)](https://matrix.to/#/#dendrite-dev:matrix.org)
Dendrite is a second-generation Matrix homeserver written in Go.
@ -52,7 +53,7 @@ The [Federation Tester](https://federationtester.matrix.org) can be used to veri
## Get started
-If you wish to build a fully-federating Dendrite instance, see [INSTALL.md](docs/INSTALL.md). For running in Docker, see [build/docker](build/docker).
+If you wish to build a fully-federating Dendrite instance, see [the Installation documentation](docs/installation). For running in Docker, see [build/docker](build/docker).
The following instructions are enough to get Dendrite started as a non-federating test deployment using self-signed certificates and SQLite databases:

CODE_STYLE.md (deleted)

@ -1,60 +0,0 @@
# Code Style
In addition to standard Go code style (`gofmt`, `goimports`), we use `golangci-lint`
to run a number of linters, the exact list can be found under linters in [.golangci.yml](.golangci.yml).
[Installation](https://github.com/golangci/golangci-lint#install-golangci-lint) and [Editor
Integration](https://golangci-lint.run/usage/integrations/#editor-integration) for
it can be found in the readme of golangci-lint.
For rare cases where a linter is giving a spurious warning, it can be disabled
for that line or statement using a [comment
directive](https://golangci-lint.run/usage/false-positives/#nolint), e.g. `var
bad_name int //nolint:golint,unused`. This should be used sparingly and only
when it's clear that the lint warning is spurious.
The linters can be run using [build/scripts/find-lint.sh](/build/scripts/find-lint.sh)
(see file for docs) or as part of a build/test/lint cycle using
[build/scripts/build-test-lint.sh](/build/scripts/build-test-lint.sh).
## Labels
In addition to `TODO` and `FIXME` we also use `NOTSPEC` to identify deviations
from the Matrix specification.
## Logging
We generally prefer to log with static log messages and include any dynamic
information in fields.
```golang
logger := util.GetLogger(ctx)
// Not recommended
logger.Infof("Finished processing keys for %s, number of keys %d", name, numKeys)
// Recommended
logger.WithFields(logrus.Fields{
"numberOfKeys": numKeys,
"entityName": name,
}).Info("Finished processing keys")
```
This is useful when logging to systems that natively understand log fields, as
it allows people to search and process the fields without having to parse the
log message.
## Visual Studio Code
If you use VSCode then the following is an example of a workspace setting that
sets up linting correctly:
```json
{
"go.lintTool":"golangci-lint",
"go.lintFlags": [
"--fast"
]
}
```

CONTRIBUTING.md

@ -1,55 +1,103 @@
+---
+title: Contributing
+parent: Development
+permalink: /development/contributing
+---
# Contributing to Dendrite
Everyone is welcome to contribute to Dendrite! We aim to make it as easy as
possible to get started.
-Please ensure that you sign off your contributions! See [Sign Off](#sign-off)
-section below.
+## Sign off
+We ask that everyone who contributes to the project signs off their contributions
+in accordance with the [DCO](https://github.com/matrix-org/matrix-spec/blob/main/CONTRIBUTING.rst#sign-off).
+In effect, this means adding a statement to your pull requests or commit messages
+along the lines of:
+```
+Signed-off-by: Full Name <email address>
+```
+Unfortunately we can't accept contributions without it.
## Getting up and running
-See [INSTALL.md](INSTALL.md) for instructions on setting up a running dev
-instance of dendrite, and [CODE_STYLE.md](CODE_STYLE.md) for the code style
-guide.
-We use [golangci-lint](https://github.com/golangci/golangci-lint) to lint
-Dendrite which can be executed via:
-```
-$ golangci-lint run
-```
+See the [Installation](INSTALL.md) section for information on how to build an
+instance of Dendrite. You will likely need this in order to test your changes.
+## Code style
+On the whole, the format as prescribed by `gofmt`, `goimports` etc. is exactly
+what we use and expect. Please make sure that you run one of these formatters before
+submitting your contribution.
+## Comments
+Please make sure that the comments adequately explain *why* your code does what it
+does. If there are statements that are not obvious, please comment what they do.
+We also have some special tags which we use for searchability. These are:
+* `// TODO:` for places where a future review, rewrite or refactor is likely required;
+* `// FIXME:` for places where we know there is an outstanding bug that needs a fix;
+* `// NOTSPEC:` for places where the behaviour specifically does not match what the
+[Matrix Specification](https://spec.matrix.org/) prescribes, along with a description
+of *why* that is the case.
+## Linting
+We use [golangci-lint](https://github.com/golangci/golangci-lint) to lint Dendrite
+which can be executed via:
+```bash
+golangci-lint run
+```
+If you are receiving linter warnings that you are certain are spurious and want to
+silence them, you can annotate the relevant lines or methods with a `// nolint:`
+comment. Please avoid doing this if you can.
+## Unit tests
We also have unit tests which we run via:
-```
-$ go test ./...
-```
+```bash
+go test ./...
+```
+In general, we like submissions that come with tests. Anything that proves that the
+code is functioning as intended is great, and to ensure that we will find out quickly
+in the future if any regressions happen.
+We use the standard [Go testing package](https://gobyexample.com/testing) for this,
+alongside some helper functions in our own [`test` package](https://pkg.go.dev/github.com/matrix-org/dendrite/test).
-## Continuous Integration
-When a Pull Request is submitted, continuous integration jobs are run
-automatically to ensure the code builds and is relatively well-written. The jobs
-are run on [Buildkite](https://buildkite.com/matrix-dot-org/dendrite/), and the
-Buildkite pipeline configuration can be found in Matrix.org's [pipelines
-repository](https://github.com/matrix-org/pipelines).
-If a job fails, click the "details" button and you should be taken to the job's
-logs.
-![Click the details button on the failing build
-step](https://raw.githubusercontent.com/matrix-org/dendrite/main/docs/images/details-button-location.jpg)
-Scroll down to the failing step and you should see some log output. Scan the
-logs until you find what it's complaining about, fix it, submit a new commit,
-then rinse and repeat until CI passes.
-### Running CI Tests Locally
+## Continuous integration
+When a Pull Request is submitted, continuous integration jobs are run automatically
+by GitHub actions to ensure that the code builds and works in a number of configurations,
+such as different Go versions, using full HTTP APIs and both database engines.
+CI will automatically run the unit tests (as above) as well as both of our integration
+test suites ([Complement](https://github.com/matrix-org/complement) and
+[SyTest](https://github.com/matrix-org/sytest)).
+You can see the progress of any CI jobs at the bottom of the Pull Request page, or by
+looking at the [Actions](https://github.com/matrix-org/dendrite/actions) tab of the Dendrite
+repository.
+We generally won't accept a submission unless all of the CI jobs are passing. We
+do understand though that sometimes the tests get things wrong — if that's the case,
+please also raise a pull request to fix the relevant tests!
+### Running CI tests locally
To save waiting for CI to finish after every commit, it is ideal to run the
checks locally before pushing, fixing errors first. This also saves other people
time as only so many PRs can be tested at a given time.
-To execute what Buildkite tests, first run `./build/scripts/build-test-lint.sh`; this
+To execute what CI tests, first run `./build/scripts/build-test-lint.sh`; this
script will build the code, lint it, and run `go test ./...` with race condition
checking enabled. If something needs to be changed, fix it and then run the
script again until it no longer complains. Be warned that the linting can take a
@ -64,8 +112,7 @@ passing tests.
If these two steps report no problems, the code should be able to pass the CI
tests.
-## Picking Things To Do
+## Picking things to do
If you're new then feel free to pick up an issue labelled [good first
issue](https://github.com/matrix-org/dendrite/labels/good%20first%20issue).
@ -81,17 +128,10 @@ We ask people who are familiar with Dendrite to leave the [good first
issue](https://github.com/matrix-org/dendrite/labels/good%20first%20issue)
issues so that there is always a way for new people to come and get involved.
-## Getting Help
+## Getting help
For questions related to developing on Dendrite we have a dedicated room on
Matrix [#dendrite-dev:matrix.org](https://matrix.to/#/#dendrite-dev:matrix.org)
where we're happy to help.
-For more general questions please use
-[#dendrite:matrix.org](https://matrix.to/#/#dendrite:matrix.org).
+For more general questions please use [#dendrite:matrix.org](https://matrix.to/#/#dendrite:matrix.org).
-## Sign off
-We ask that everyone who contributes to the project signs off their
-contributions, in accordance with the
-[DCO](https://github.com/matrix-org/matrix-spec/blob/main/CONTRIBUTING.rst#sign-off).
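The unit-testing guidance in the new contributing guide above leans on the standard Go `testing` package. As a rough sketch of the kind of table-driven test it describes (the package and the `Reverse` function here are hypothetical stand-ins, not part of Dendrite):

```golang
package stringutil

import "testing"

// Reverse returns s with its runes in reverse order. It stands in for
// whatever function is actually under test.
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

// TestReverse is a table-driven test; `go test ./...` picks it up automatically.
func TestReverse(t *testing.T) {
	cases := map[string]string{
		"":      "",
		"abc":   "cba",
		"hello": "olleh",
	}
	for input, want := range cases {
		if got := Reverse(input); got != want {
			t.Errorf("Reverse(%q) = %q, want %q", input, got, want)
		}
	}
}
```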

DESIGN.md (deleted)

@ -1,140 +0,0 @@
# Design
## Log Based Architecture
### Decomposition and Decoupling
A matrix homeserver can be built around append-only event logs built from the
messages, receipts, presence, typing notifications, device messages and other
events sent by users on the homeservers or by other homeservers.
The server would then decompose into two categories: writers that add new
entries to the logs and readers that read those entries.
The event logs then serve to decouple the two components, the writers and
readers need only agree on the format of the entries in the event log.
This format could be largely derived from the wire format of the events used
in the client and federation protocols:
C-S API +---------+ Event Log +---------+ C-S API
---------> | |+ (e.g. kafka) | |+ --------->
| Writers || =============> | Readers ||
---------> | || | || --------->
S-S API +---------+| +---------+| S-S API
+---------+ +---------+
However the way matrix handles state events in a room creates a few
complications for this model.
1) Writers require the room state at an event to check if it is allowed.
2) Readers require the room state at an event to determine the users and
servers that are allowed to see the event.
3) A client can query the current state of the room from a reader.
The writers and readers cannot extract the necessary information directly from
the event logs because it would take too long to extract the information as the
state is built up by collecting individual state events from the event history.
The writers and readers therefore need access to something that stores copies
of the event state in a form that can be efficiently queried. One possibility
would be for the readers and writers to maintain copies of the current state
in local databases. A second possibility would be to add a dedicated component
that maintained the state of the room and exposed an API that the readers and
writers could query to get the state. The second has the advantage that the
state is calculated and stored in a single location.
C-S API +---------+ Log +--------+ Log +---------+ C-S API
---------> | |+ ======> | | ======> | |+ --------->
| Writers || | Room | | Readers ||
---------> | || <------ | Server | ------> | || --------->
S-S API +---------+| Query | | Query +---------+| S-S API
+---------+ +--------+ +---------+
The room server can annotate the events it logs to the readers with room state
so that the readers can avoid querying the room server unnecessarily.
[This architecture can be extended to cover most of the APIs.](WIRING.md)
## How things are supposed to work.
### Local client sends an event in an existing room.
0) The client sends a PUT `/_matrix/client/r0/rooms/{roomId}/send` request
and an HTTP loadbalancer routes the request to a ClientAPI.
1) The ClientAPI:
* Authenticates the local user using the `access_token` sent in the HTTP
request.
* Checks if it has already processed or is processing a request with the
same `txnID`.
* Calculates which state events are needed to auth the request.
* Queries the necessary state events and the latest events in the room
from the RoomServer.
* Confirms that the room exists and checks whether the event is allowed by
the auth checks.
* Builds and signs the events.
* Writes the event to an "InputRoomEvent" kafka topic.
* Send a `200 OK` response to the client.
2) The RoomServer reads the event from "InputRoomEvent" kafka topic:
* Checks if it already has a copy of the event.
* Checks if the event is allowed by the auth checks using the auth events
at the event.
* Calculates the room state at the event.
* Works out what the latest events in the room after processing this event
are.
* Calculate how the changes in the latest events affect the current state
of the room.
* TODO: Work out what events determine the visibility of this event to other
users
* Writes the event along with the changes in current state to an
"OutputRoomEvent" kafka topic. It writes all the events for a room to
the same kafka partition.
3a) The ClientSync reads the event from the "OutputRoomEvent" kafka topic:
* Updates its copy of the current state for the room.
* Works out which users need to be notified about the event.
* Wakes up any pending `/_matrix/client/r0/sync` requests for those users.
* Adds the event to the recent timeline events for the room.
3b) The FederationSender reads the event from the "OutputRoomEvent" kafka topic:
* Updates its copy of the current state for the room.
* Works out which remote servers need to be notified about the event.
* Sends a `/_matrix/federation/v1/send` request to those servers.
* Or if there is a request in progress then add the event to a queue to be
sent when the previous request finishes.
### Remote server sends an event in an existing room.
0) The remote server sends a `PUT /_matrix/federation/v1/send` request and an
HTTP loadbalancer routes the request to a FederationReceiver.
1) The FederationReceiver:
* Authenticates the remote server using the "X-Matrix" authorisation header.
* Checks if it has already processed or is processing a request with the
same `txnID`.
* Checks the signatures for the events.
Fetches the ed25519 keys for the event senders if necessary.
* Queries the RoomServer for a copy of the state of the room at each event.
* If the RoomServer doesn't know the state of the room at an event then
query the state of the room at the event from the remote server using
`GET /_matrix/federation/v1/state_ids` falling back to
`GET /_matrix/federation/v1/state` if necessary.
* Once the state at each event is known check whether the events are
allowed by the auth checks against the state at each event.
* For each event that is allowed write the event to the "InputRoomEvent"
kafka topic.
* Send a 200 OK response to the remote server listing which events were
successfully processed and which events failed
2) The RoomServer processes the event the same as it would a local event.
3a) The ClientSync processes the event the same as it would a local event.

FAQ.md

@ -1,26 +1,34 @@
-# Frequently Asked Questions
+---
+title: FAQ
+nav_order: 1
+permalink: /faq
+---
+# FAQ
-### Is Dendrite stable?
+## Is Dendrite stable?
Mostly, although there are still bugs and missing features. If you are a confident power user and you are happy to spend some time debugging things when they go wrong, then please try out Dendrite. If you are a community, organisation or business that demands stability and uptime, then Dendrite is not for you yet - please install Synapse instead.
-### Is Dendrite feature-complete?
+## Is Dendrite feature-complete?
No, although a good portion of the Matrix specification has been implemented. Mostly missing are client features - see the readme at the root of the repository for more information.
-### Is there a migration path from Synapse to Dendrite?
+## Is there a migration path from Synapse to Dendrite?
-No, not at present. There will be in the future when Dendrite reaches version 1.0.
+No, not at present. There will be in the future when Dendrite reaches version 1.0. For now it is not
+possible to migrate an existing Synapse deployment to Dendrite.
-### Can I use Dendrite with an existing Synapse database?
+## Can I use Dendrite with an existing Synapse database?
No, Dendrite has a very different database schema to Synapse and the two are not interchangeable.
-### Should I run a monolith or a polylith deployment?
+## Should I run a monolith or a polylith deployment?
-Monolith deployments are always preferred where possible, and at this time, are far better tested than polylith deployments are. The only reason to consider a polylith deployment is if you wish to run different Dendrite components on separate physical machines.
+Monolith deployments are always preferred where possible, and at this time, are far better tested than polylith deployments are. The only reason to consider a polylith deployment is if you wish to run different Dendrite components on separate physical machines, but this is an advanced configuration which we don't
+recommend.
-### I've installed Dendrite but federation isn't working
+## I've installed Dendrite but federation isn't working
Check the [Federation Tester](https://federationtester.matrix.org). You need at least:
@ -28,54 +36,57 @@ Check the [Federation Tester](https://federationtester.matrix.org). You need at
* A valid TLS certificate for that DNS name
* Either DNS SRV records or well-known files
-### Does Dendrite work with my favourite client?
+## Does Dendrite work with my favourite client?
It should do, although we are aware of some minor issues:
* **Element Android**: registration does not work, but logging in with an existing account does
* **Hydrogen**: occasionally sync can fail due to gaps in the `since` parameter, but clearing the cache fixes this
-### Does Dendrite support push notifications?
+## Does Dendrite support push notifications?
Yes, we have experimental support for push notifications. Configure them in the usual way in your Matrix client.
-### Does Dendrite support application services/bridges?
+## Does Dendrite support application services/bridges?
Possibly - Dendrite does have some application service support but it is not well tested. Please let us know by raising a GitHub issue if you try it and run into problems.
Bridges known to work (as of v0.5.1):
-- [Telegram](https://docs.mau.fi/bridges/python/telegram/index.html)
-- [WhatsApp](https://docs.mau.fi/bridges/go/whatsapp/index.html)
-- [Signal](https://docs.mau.fi/bridges/python/signal/index.html)
-- [probably all other mautrix bridges](https://docs.mau.fi/bridges/)
+* [Telegram](https://docs.mau.fi/bridges/python/telegram/index.html)
+* [WhatsApp](https://docs.mau.fi/bridges/go/whatsapp/index.html)
+* [Signal](https://docs.mau.fi/bridges/python/signal/index.html)
+* [probably all other mautrix bridges](https://docs.mau.fi/bridges/)
Remember to add the config file(s) to the `app_service_api` [config](https://github.com/matrix-org/dendrite/blob/de38be469a23813921d01bef3e14e95faab2a59e/dendrite-config.yaml#L130-L131).
-### Is it possible to prevent communication with the outside world?
+## Is it possible to prevent communication with the outside world?
Yes, you can do this by disabling federation - set `disable_federation` to `true` in the `global` section of the Dendrite configuration file.
-### Should I use PostgreSQL or SQLite for my databases?
+## Should I use PostgreSQL or SQLite for my databases?
Please use PostgreSQL wherever possible, especially if you are planning to run a homeserver that caters to more than a couple of users.
-### Dendrite is using a lot of CPU
+## Dendrite is using a lot of CPU
-Generally speaking, you should expect to see some CPU spikes, particularly if you are joining or participating in large rooms. However, constant/sustained high CPU usage is not expected - if you are experiencing that, please join `#dendrite-dev:matrix.org` and let us know, or file a GitHub issue.
+Generally speaking, you should expect to see some CPU spikes, particularly if you are joining or participating in large rooms. However, constant/sustained high CPU usage is not expected - if you are experiencing that, please join `#dendrite-dev:matrix.org` and let us know what you were doing when the
+CPU usage shot up, or file a GitHub issue. If you can take a [CPU profile](PROFILING.md) then that would
+be a huge help too, as that will help us to understand where the CPU time is going.
-### Dendrite is using a lot of RAM
+## Dendrite is using a lot of RAM
-A lot of users report that Dendrite is using a lot of RAM, sometimes even gigabytes of it. This is usually due to Go's allocator behaviour, which tries to hold onto allocated memory until the operating system wants to reclaim it for something else. This can make the memory usage look significantly inflated in tools like `top`/`htop` when actually most of that memory is not really in use at all.
-If you want to prevent this behaviour so that the Go runtime releases memory normally, start Dendrite using the `GODEBUG=madvdontneed=1` environment variable. It is also expected that the allocator behaviour will be changed again in Go 1.16 so that it does not hold onto memory unnecessarily in this way.
-If you are running with `GODEBUG=madvdontneed=1` and still see hugely inflated memory usage then that's quite possibly a bug - please join `#dendrite-dev:matrix.org` and let us know, or file a GitHub issue.
+As above with CPU usage, some memory spikes are expected if Dendrite is doing particularly heavy work
+at a given instant. However, if it is using more RAM than you expect for a long time, that's probably
+not expected. Join `#dendrite-dev:matrix.org` and let us know what you were doing when the memory usage
+ballooned, or file a GitHub issue if you can. If you can take a [memory profile](PROFILING.md) then that
+would be a huge help too, as that will help us to understand where the memory usage is happening.
-### Dendrite is running out of PostgreSQL database connections
+## Dendrite is running out of PostgreSQL database connections
You may need to revisit the connection limit of your PostgreSQL server and/or make changes to the `max_connections` lines in your Dendrite configuration. Be aware that each Dendrite component opens its own database connections and has its own connection limit, even in monolith mode!
-### What is being reported when enabling anonymous stats?
+## What is being reported when enabling anonymous stats?
If anonymous stats reporting is enabled, the following data is send to the defined endpoint.
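To illustrate the federation question above: the setting lives in the `global` section of the Dendrite configuration file. A minimal sketch (other keys omitted; the sample `dendrite-config.yaml` is the authoritative reference for the surrounding layout):

```yaml
global:
  # ... other global settings ...
  # Stops this homeserver from sending or accepting any federation traffic.
  disable_federation: true
```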

docs/Gemfile (new file, 5 lines)

@ -0,0 +1,5 @@
source "https://rubygems.org"
gem "github-pages", "~> 226", group: :jekyll_plugins
group :jekyll_plugins do
gem "jekyll-feed", "~> 0.15.1"
end

docs/Gemfile.lock (new file, 283 lines)

@ -0,0 +1,283 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (6.0.5)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 0.7, < 2)
minitest (~> 5.1)
tzinfo (~> 1.1)
zeitwerk (~> 2.2, >= 2.2.2)
addressable (2.8.0)
public_suffix (>= 2.0.2, < 5.0)
coffee-script (2.4.1)
coffee-script-source
execjs
coffee-script-source (1.11.1)
colorator (1.1.0)
commonmarker (0.23.4)
concurrent-ruby (1.1.10)
dnsruby (1.61.9)
simpleidn (~> 0.1)
em-websocket (0.5.3)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0)
ethon (0.15.0)
ffi (>= 1.15.0)
eventmachine (1.2.7)
execjs (2.8.1)
faraday (1.10.0)
faraday-em_http (~> 1.0)
faraday-em_synchrony (~> 1.0)
faraday-excon (~> 1.1)
faraday-httpclient (~> 1.0)
faraday-multipart (~> 1.0)
faraday-net_http (~> 1.0)
faraday-net_http_persistent (~> 1.0)
faraday-patron (~> 1.0)
faraday-rack (~> 1.0)
faraday-retry (~> 1.0)
ruby2_keywords (>= 0.0.4)
faraday-em_http (1.0.0)
faraday-em_synchrony (1.0.0)
faraday-excon (1.1.0)
faraday-httpclient (1.0.1)
faraday-multipart (1.0.3)
multipart-post (>= 1.2, < 3)
faraday-net_http (1.0.1)
faraday-net_http_persistent (1.2.0)
faraday-patron (1.0.0)
faraday-rack (1.0.0)
faraday-retry (1.0.3)
ffi (1.15.5)
forwardable-extended (2.6.0)
gemoji (3.0.1)
github-pages (226)
github-pages-health-check (= 1.17.9)
jekyll (= 3.9.2)
jekyll-avatar (= 0.7.0)
jekyll-coffeescript (= 1.1.1)
jekyll-commonmark-ghpages (= 0.2.0)
jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.15.1)
jekyll-gist (= 1.5.0)
jekyll-github-metadata (= 2.13.0)
jekyll-include-cache (= 0.2.1)
jekyll-mentions (= 1.6.0)
jekyll-optional-front-matter (= 0.3.2)
jekyll-paginate (= 1.1.0)
jekyll-readme-index (= 0.3.0)
jekyll-redirect-from (= 0.16.0)
jekyll-relative-links (= 0.6.1)
jekyll-remote-theme (= 0.4.3)
jekyll-sass-converter (= 1.5.2)
jekyll-seo-tag (= 2.8.0)
jekyll-sitemap (= 1.4.0)
jekyll-swiss (= 1.0.0)
jekyll-theme-architect (= 0.2.0)
jekyll-theme-cayman (= 0.2.0)
jekyll-theme-dinky (= 0.2.0)
jekyll-theme-hacker (= 0.2.0)
jekyll-theme-leap-day (= 0.2.0)
jekyll-theme-merlot (= 0.2.0)
jekyll-theme-midnight (= 0.2.0)
jekyll-theme-minimal (= 0.2.0)
jekyll-theme-modernist (= 0.2.0)
jekyll-theme-primer (= 0.6.0)
jekyll-theme-slate (= 0.2.0)
jekyll-theme-tactile (= 0.2.0)
jekyll-theme-time-machine (= 0.2.0)
jekyll-titles-from-headings (= 0.5.3)
jemoji (= 0.12.0)
kramdown (= 2.3.2)
kramdown-parser-gfm (= 1.1.0)
liquid (= 4.0.3)
mercenary (~> 0.3)
minima (= 2.5.1)
nokogiri (>= 1.13.4, < 2.0)
rouge (= 3.26.0)
terminal-table (~> 1.4)
github-pages-health-check (1.17.9)
addressable (~> 2.3)
dnsruby (~> 1.60)
octokit (~> 4.0)
public_suffix (>= 3.0, < 5.0)
typhoeus (~> 1.3)
html-pipeline (2.14.1)
activesupport (>= 2)
nokogiri (>= 1.4)
http_parser.rb (0.8.0)
i18n (0.9.5)
concurrent-ruby (~> 1.0)
jekyll (3.9.2)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 0.7)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 2.0)
kramdown (>= 1.17, < 3)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 4)
safe_yaml (~> 1.0)
jekyll-avatar (0.7.0)
jekyll (>= 3.0, < 5.0)
jekyll-coffeescript (1.1.1)
coffee-script (~> 2.2)
coffee-script-source (~> 1.11.1)
jekyll-commonmark (1.4.0)
commonmarker (~> 0.22)
jekyll-commonmark-ghpages (0.2.0)
commonmarker (~> 0.23.4)
jekyll (~> 3.9.0)
jekyll-commonmark (~> 1.4.0)
rouge (>= 2.0, < 4.0)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
jekyll-feed (0.15.1)
jekyll (>= 3.7, < 5.0)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-github-metadata (2.13.0)
jekyll (>= 3.4, < 5.0)
octokit (~> 4.0, != 4.4.0)
jekyll-include-cache (0.2.1)
jekyll (>= 3.7, < 5.0)
jekyll-mentions (1.6.0)
html-pipeline (~> 2.3)
jekyll (>= 3.7, < 5.0)
jekyll-optional-front-matter (0.3.2)
jekyll (>= 3.0, < 5.0)
jekyll-paginate (1.1.0)
jekyll-readme-index (0.3.0)
jekyll (>= 3.0, < 5.0)
jekyll-redirect-from (0.16.0)
jekyll (>= 3.3, < 5.0)
jekyll-relative-links (0.6.1)
jekyll (>= 3.3, < 5.0)
jekyll-remote-theme (0.4.3)
addressable (~> 2.0)
jekyll (>= 3.5, < 5.0)
jekyll-sass-converter (>= 1.0, <= 3.0.0, != 2.0.0)
rubyzip (>= 1.3.0, < 3.0)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-seo-tag (2.8.0)
jekyll (>= 3.8, < 5.0)
jekyll-sitemap (1.4.0)
jekyll (>= 3.7, < 5.0)
jekyll-swiss (1.0.0)
jekyll-theme-architect (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-cayman (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-dinky (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-hacker (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-leap-day (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-merlot (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-midnight (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-minimal (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-modernist (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-primer (0.6.0)
jekyll (> 3.5, < 5.0)
jekyll-github-metadata (~> 2.9)
jekyll-seo-tag (~> 2.0)
jekyll-theme-slate (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-tactile (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-time-machine (0.2.0)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.3)
jekyll (>= 3.3, < 5.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
jemoji (0.12.0)
gemoji (~> 3.0)
html-pipeline (~> 2.2)
jekyll (>= 3.0, < 5.0)
kramdown (2.3.2)
rexml
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.3)
listen (3.7.1)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.3.6)
minima (2.5.1)
jekyll (>= 3.5, < 5.0)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
minitest (5.15.0)
multipart-post (2.1.1)
nokogiri (1.13.6-arm64-darwin)
racc (~> 1.4)
octokit (4.22.0)
faraday (>= 0.9)
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (4.0.7)
racc (1.6.0)
rb-fsevent (0.11.1)
rb-inotify (0.10.1)
ffi (~> 1.0)
rexml (3.2.5)
rouge (3.26.0)
ruby2_keywords (0.0.5)
rubyzip (2.3.2)
safe_yaml (1.0.5)
sass (3.7.4)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
sawyer (0.8.2)
addressable (>= 2.3.5)
faraday (> 0.8, < 2.0)
simpleidn (0.2.1)
unf (~> 0.1.4)
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
thread_safe (0.3.6)
typhoeus (1.4.0)
ethon (>= 0.9.0)
tzinfo (1.2.9)
thread_safe (~> 0.1)
unf (0.1.4)
unf_ext
unf_ext (0.0.8.1)
unicode-display_width (1.8.0)
zeitwerk (2.5.4)
PLATFORMS
arm64-darwin-21
DEPENDENCIES
github-pages (~> 226)
jekyll-feed (~> 0.15.1)
minima (~> 2.5.1)
BUNDLED WITH
2.3.7

INSTALL.md

@ -1,283 +1,15 @@
-# Installing Dendrite
+# Installation
+
+Please note that new installation instructions can be found
+on the [new documentation site](https://matrix-org.github.io/dendrite/),
+or alternatively, in the [installation](installation/) folder:
+
+1. [Planning your deployment](installation/1_planning.md)
+2. [Setting up the domain](installation/2_domainname.md)
+3. [Preparing database storage](installation/3_database.md)
+4. [Generating signing keys](installation/4_signingkey.md)
+5. [Installing as a monolith](installation/5_install_monolith.md)
+6. [Installing as a polylith](installation/6_install_polylith.md)
+7. [Populate the configuration](installation/7_configuration.md)
+8. [Starting the monolith](installation/8_starting_monolith.md)
+9. [Starting the polylith](installation/9_starting_polylith.md)
-Dendrite can be run in one of two configurations:
-* **Monolith mode**: All components run in the same process. In this mode,
-it is possible to run an in-process [NATS Server](https://github.com/nats-io/nats-server)
-instead of running a standalone deployment. This will usually be the preferred model for
-low-to-mid volume deployments, providing the best balance between performance and resource usage.
-* **Polylith mode**: A cluster of individual components running in their own processes, dealing
-with different aspects of the Matrix protocol (see [WIRING.md](WIRING-Current.md)). Components
-communicate with each other using internal HTTP APIs and [NATS Server](https://github.com/nats-io/nats-server).
-This will almost certainly be the preferred model for very large deployments but scalability
-comes with a cost. API calls are expensive and therefore a polylith deployment may end up using
-disproportionately more resources for a smaller number of users compared to a monolith deployment.
In almost all cases, it is **recommended to run in monolith mode with PostgreSQL databases**.
Regardless of whether you are running in polylith or monolith mode, each Dendrite component that
requires storage has its own database connections. Both Postgres and SQLite are supported and can
be mixed-and-matched across components as needed in the configuration file.
Be advised that Dendrite is still in development and it's not recommended for
use in production environments just yet!
## Requirements
Dendrite requires:
* Go 1.16 or higher
* PostgreSQL 12 or higher (if using PostgreSQL databases, not needed for SQLite)
If you want to run a polylith deployment, you also need:
* A standalone [NATS Server](https://github.com/nats-io/nats-server) deployment with JetStream enabled
If you want to build it on Windows, you need `gcc` in the path:
* [MinGW-w64](https://www.mingw-w64.org/)
## Building Dendrite
Start by cloning the code:
```bash
git clone https://github.com/matrix-org/dendrite
cd dendrite
```
Then build it:
* Linux or UNIX-like systems:
```bash
./build.sh
```
* Windows:
```dos
build.cmd
```
## Install NATS Server
Follow the [NATS Server installation instructions](https://docs.nats.io/running-a-nats-service/introduction/installation) and then [start your NATS deployment](https://docs.nats.io/running-a-nats-service/introduction/running).
JetStream must be enabled, either by passing the `-js` flag to `nats-server`,
or by specifying the `store_dir` option in the `jetstream` configuration.
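For example, a standalone NATS Server can be started with JetStream enabled from the command line like this (a sketch; adjust the storage directory for your deployment):

```bash
# -js enables JetStream; -sd sets the directory used for durable storage.
nats-server -js -sd /var/lib/nats
```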
## Configuration
### PostgreSQL database setup
Assuming that PostgreSQL 12 (or later) is installed:
* Create role, choosing a new password when prompted:
```bash
sudo -u postgres createuser -P dendrite
```
At this point you have a choice on whether to run all of the Dendrite
components from a single database, or for each component to have its
own database. For most deployments, running from a single database will
be sufficient, although you may wish to separate them if you plan to
split out the databases across multiple machines in the future.
On macOS, omit `sudo -u postgres` from the below commands.
* If you want to run all Dendrite components from a single database:
```bash
sudo -u postgres createdb -O dendrite dendrite
```
... in which case your connection string will look like `postgres://user:pass@database/dendrite`.
* If you want to run each Dendrite component with its own database:
```bash
for i in mediaapi syncapi roomserver federationapi appservice keyserver userapi_accounts; do
sudo -u postgres createdb -O dendrite dendrite_$i
done
```
... in which case your connection string will look like `postgres://user:pass@database/dendrite_componentname`.
### SQLite database setup
**WARNING:** SQLite is suitable for small experimental deployments only and should not be used in production - use PostgreSQL instead for any user-facing federating installation!
Dendrite can use the built-in SQLite database engine for small setups.
The SQLite databases do not need to be pre-built - Dendrite will
create them automatically at startup.
### Server key generation
Each Dendrite installation requires:
* A unique Matrix signing private key
* A valid and trusted TLS certificate and private key
To generate a Matrix signing private key:
```bash
./bin/generate-keys --private-key matrix_key.pem
```
**WARNING:** Make sure you take a safe backup of this key! You will likely need it if you want to reinstall Dendrite, or
any other Matrix homeserver, on the same domain name in the future. If you lose this key, you may have trouble joining
federated rooms.
For testing, you can generate a self-signed certificate and key, although this will not work for public federation:
```bash
./bin/generate-keys --tls-cert server.crt --tls-key server.key
```
If you have server keys from an older Synapse instance,
[convert them](serverkeyformat.md#converting-synapse-keys) to Dendrite's PEM
format and configure them as `old_private_keys` in your config.
### Configuration file
Create a config file based on `dendrite-config.yaml` and call it `dendrite.yaml`. Things that will need editing include *at least*:
* The `server_name` entry to reflect the hostname of your Dendrite server
* The `database` lines with an updated connection string based on your
desired setup, e.g. replacing `database` with the name of the database:
* For Postgres: `postgres://dendrite:password@localhost/database`, e.g.
* `postgres://dendrite:password@localhost/dendrite_userapi_account` to connect to PostgreSQL with SSL/TLS
* `postgres://dendrite:password@localhost/dendrite_userapi_account?sslmode=disable` to connect to PostgreSQL without SSL/TLS
* For SQLite on disk: `file:component.db` or `file:///path/to/component.db`, e.g. `file:userapi_account.db`
* Postgres and SQLite can be mixed and matched on different components as desired.
* Either one of the following in the `jetstream` configuration section:
* The `addresses` option — a list of one or more addresses of an external standalone
NATS Server deployment
* The `storage_path` — where on the filesystem the built-in NATS server should
store durable queues, if using the built-in NATS server
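As an illustration of the options above, a fragment of `dendrite.yaml` for a monolith using the built-in NATS server might look something like this. This is a sketch only: the nesting under `global` follows the sample configuration of the time, and the sample `dendrite-config.yaml` remains the authoritative reference for key names.

```yaml
global:
  server_name: example.com
  jetstream:
    # Leave addresses empty to use the built-in NATS server...
    addresses: []
    # ...and tell it where to keep its durable queues.
    storage_path: ./jetstream
```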
There are other options which may be useful so review them all. In particular,
if you are trying to federate from your Dendrite instance into public rooms
then configuring `key_perspectives` (like `matrix.org` in the sample) can
help to improve reliability considerably by allowing your homeserver to fetch
public keys for dead homeservers from somewhere else.
**WARNING:** Dendrite supports running all components from the same database in
PostgreSQL mode, but this is **NOT** a supported configuration with SQLite. When
using SQLite, all components **MUST** use their own database file.
## Starting a monolith server
The monolith server can be started as shown below. By default it listens for
HTTP connections on port 8008, so you can configure your Matrix client to use
`http://servername:8008` as the server:
```bash
./bin/dendrite-monolith-server
```
If you set `--tls-cert` and `--tls-key` as shown below, it will also listen
for HTTPS connections on port 8448:
```bash
./bin/dendrite-monolith-server --tls-cert=server.crt --tls-key=server.key
```
If the `jetstream` section of the configuration contains no `addresses` but does
contain a `store_dir`, Dendrite will start up a built-in NATS JetStream node
automatically, eliminating the need to run a separate NATS server.
## Starting a polylith deployment
The following contains scripts which will run all the required processes in order to point a Matrix client at Dendrite.
### nginx (or other reverse proxy)
This is what your clients and federated hosts will talk to. It must forward
requests onto the correct API server based on URL:
* `/_matrix/client` to the client API server
* `/_matrix/federation` to the federation API server
* `/_matrix/key` to the federation API server
* `/_matrix/media` to the media API server
See `docs/nginx/polylith-sample.conf` for a sample configuration.
### Client API server
This is what implements CS API endpoints. Clients talk to this via the proxy in
order to send messages, create and join rooms, etc.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml clientapi
```
### Sync server
This is what implements `/sync` requests. Clients talk to this via the proxy
in order to receive messages.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml syncapi
```
### Media server
This implements `/media` requests. Clients talk to this via the proxy in
order to upload and retrieve media.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml mediaapi
```
### Federation API server
This implements the federation API. Servers talk to this via the proxy in
order to send transactions. This is only required if you want to support
federation.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml federationapi
```
### Internal components
This refers to components that are not directly spoken to by clients. They are only
contacted by other components. This includes the following components.
#### Room server
This is what implements the room DAG. Clients do not talk to this.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml roomserver
```
#### Appservice server
This sends events from the network to [application
services](https://matrix.org/docs/spec/application_service/unstable.html)
running locally. This is only required if you want to support running
application services on your homeserver.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml appservice
```
#### Key server
This manages end-to-end encryption keys for users.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml keyserver
```
#### User server
This manages user accounts, device access tokens and user account data,
amongst other things.
```bash
./bin/dendrite-polylith-multi --config=dendrite.yaml userapi
```

PROFILING.md

@ -1,3 +1,9 @@
+---
+title: Profiling
+parent: Development
+permalink: /development/profiling
+---
+
# Profiling Dendrite

If you are running into problems with Dendrite using excessive resources (e.g. CPU or RAM) then you can use the profiler to work out what is happening.
@ -16,7 +22,7 @@ If pprof has been enabled successfully, a log line at startup will show that ppr
```
WARN[2020-12-03T13:32:33.669405000Z] [/Users/neilalexander/Desktop/dendrite/internal/log.go:87] SetupPprof
Starting pprof on localhost:65432
```
All examples from this point forward assume `PPROFLISTEN=localhost:65432` but you may need to adjust as necessary for your setup.
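As a concrete example, assuming the pprof listener above and the standard `net/http/pprof` endpoints, profiles can be pulled with the Go toolchain:

```bash
# Capture a 30-second CPU profile and open it in the interactive pprof tool.
go tool pprof "http://localhost:65432/debug/pprof/profile?seconds=30"

# Take a heap snapshot when investigating memory usage.
go tool pprof http://localhost:65432/debug/pprof/heap
```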

WIRING-Current.md (deleted)

@ -1,71 +0,0 @@
This document details how various components communicate with each other. There are two kinds of components:
- Public-facing: exposes CS/SS API endpoints and need to be routed to via client-api-proxy or equivalent.
- Internal-only: exposes internal APIs and produces Kafka events.
## Internal HTTP APIs
Not everything can be done using Kafka logs. For example, requesting the latest events in a room is much better suited to
a request/response model like HTTP or RPC. Therefore, components can expose "internal APIs" which sit outside of Kafka logs.
Note in Monolith mode these are actually direct function calls and are not serialised HTTP requests.
```
Tier 1 Sync FederationAPI ClientAPI MediaAPI
Public Facing | | | | | | | | | |
2 .-------3-----------------` | | | `--------|-|-|-|--11--------------------.
| | .--------4----------------------------------` | | | |
| | | .---5-----------` | | | | | |
| | | | .---6----------------------------` | | |
| | | | | | .-----7----------` | |
| | | | | 8 | | 10 |
| | | | | | | `---9----. | |
V V V V V V V V V V
Tier 2 Roomserver EDUServer FedSender AppService KeyServer ServerKeyAPI
Internal only | `------------------------12----------^ ^
`------------------------------------------------------------13----------`
Client ---> Server
```
- 2 (Sync -> Roomserver): When making backfill requests
- 3 (FedAPI -> Roomserver): Calculating (prev/auth events) and sending new events, processing backfill/state/state_ids requests
- 4 (ClientAPI -> Roomserver): Calculating (prev/auth events) and sending new events, processing /state requests
- 5 (FedAPI -> EDUServer): Sending typing/send-to-device events
- 6 (ClientAPI -> EDUServer): Sending typing/send-to-device events
- 7 (ClientAPI -> FedSender): Handling directory lookups
- 8 (FedAPI -> FedSender): Resetting backoffs when receiving traffic from a server. Querying joined hosts when handling alias lookup requests
- 9 (FedAPI -> AppService): Working out if the client is an appservice user
- 10 (ClientAPI -> AppService): Working out if the client is an appservice user
- 11 (FedAPI -> ServerKeyAPI): Verifying incoming event signatures
- 12 (FedSender -> ServerKeyAPI): Verifying event signatures of responses (e.g from send_join)
- 13 (Roomserver -> ServerKeyAPI): Verifying event signatures of backfilled events
In addition to this, all public facing components (Tier 1) talk to the `UserAPI` to verify access tokens and extract profile information where needed.
## Kafka logs
```
.----1--------------------------------------------.
V |
Tier 1 Sync FederationAPI ClientAPI MediaAPI
Public Facing ^ ^ ^
| | |
2 | |
| `-3------------. |
| | |
| | |
| | |
| .--------4-----|------------------------------`
| | |
Tier 2 Roomserver EDUServer FedSender AppService KeyServer ServerKeyAPI
Internal only | | ^ ^
| `-----5----------` |
`--------------------6--------`
Producer ----> Consumer
```
- 1 (ClientAPI -> Sync): For tracking account data
- 2 (Roomserver -> Sync): For all data to send to clients
- 3 (EDUServer -> Sync): For typing/send-to-device data to send to clients
- 4 (Roomserver -> ClientAPI): For tracking memberships for profile updates.
- 5 (EDUServer -> FedSender): For sending EDUs over federation
- 6 (Roomserver -> FedSender): For sending PDUs over federation, for tracking joined hosts.

WIRING.md (deleted)

@ -1,229 +0,0 @@
# Wiring
The diagram is incomplete. The following things aren't shown on the diagram:
* Device Messages
* User Profiles
* Notification Counts
* Sending federation.
* Querying federation.
* Other things that aren't shown on the diagram.
Diagram:
W -> Writer
S -> Server/Store/Service/Something/Stuff
R -> Reader
+---+ +---+ +---+
+----------| W | +----------| S | +--------| R |
| +---+ | Receipts +---+ | Client +---+
| Federation |>=========================================>| Server |>=====================>| Sync |
| Receiver | | | | |
| | +---+ | | | |
| | +--------| W | | | | |
| | | Client +---+ | | | |
| | | Receipt |>=====>| | | |
| | | Updater | | | | |
| | +----------+ | | | |
| | | | | |
| | +---+ +---+ | | +---+ | |
| | +------------| W | +------| S | | | +--------| R | | |
| | | Federation +---+ | Room +---+ | | | Client +---+ | |
| | | Backfill |>=====>| Server |>=====>| |>=====>| Push | | |
| | +--------------+ | | +------------+ | | | |
| | | | | | | |
| | | |>==========================>| | | |
| | | | +----------+ | |
| | | | +---+ | |
| | | | +-------------| R | | |
| | | |>=====>| Application +---+ | |
| | | | | Services | | |
| | | | +--------------+ | |
| | | | +---+ | |
| | | | +--------| R | | |
| | | | | Client +---+ | |
| |>========================>| |>==========================>| Search | | |
| | | | | | | |
| | | | +----------+ | |
| | | | | |
| | | |>==========================================>| |
| | | | | |
| | +---+ | | +---+ | |
| | +--------| W | | | +----------| S | | |
| | | Client +---+ | | | Presence +---+ | |
| | | API |>=====>| |>=====>| Server |>=====================>| |
| | | /send | +--------+ | | | |
| | | | | | | |
| | | |>======================>| |<=====================<| |
| | +----------+ | | | |
| | | | | |
| | +---+ | | | |
| | +--------| W | | | | |
| | | Client +---+ | | | |
| | | Presence |>=====>| | | |
| | | Setter | | | | |
| | +----------+ | | | |
| | | | | |
| | | | | |
| |>=========================================>| | | |
| | +------------+ | |
| | | |
| | +---+ | |
| | +----------| S | | |
| | | EDU +---+ | |
| |>=========================================>| Server |>=====================>| |
+------------+ | | +----------+
+---+ | |
+--------| W | | |
| Client +---+ | |
| Typing |>=====>| |
| Setter | | |
+----------+ +------------+
# Component Descriptions
Many of the components are logical rather than physical. For example it is
possible that all of the client API writers will end up being glued together
and always deployed as a single unit.
Outbound federation requests will probably need to be funnelled through a
choke-point to implement ratelimiting and backoff correctly.
## Federation Send
* Handles `/federation/v1/send/` requests.
* Fetches missing ``prev_events`` from the remote server if needed.
* Fetches missing room state from the remote server if needed.
* Checks signatures on remote events, downloading keys if needed.
* Queries information needed to process events from the Room Server.
* Writes room events to logs.
* Writes presence updates to logs.
* Writes receipt updates to logs.
* Writes typing updates to logs.
* Writes other updates to logs.
## Client API /send
* Handles puts to `/client/v1/rooms/` that create room events.
* Queries information needed to process events from the Room Server.
* Talks to remote servers if needed for joins and invites.
* Writes room event pdus.
* Writes presence updates to logs.
## Client Presence Setter
* Handles puts to the [client API presence paths](https://matrix.org/docs/spec/client_server/unstable.html#id41).
* Writes presence updates to logs.
## Client Typing Setter
* Handles puts to the [client API typing paths](https://matrix.org/docs/spec/client_server/unstable.html#id32).
* Writes typing updates to logs.
## Client Receipt Updater
* Handles puts to the [client API receipt paths](https://matrix.org/docs/spec/client_server/unstable.html#id36).
* Writes receipt updates to logs.
## Federation Backfill
* Backfills events from other servers
* Writes the resulting room events to logs.
* Is a different component from the room server itself cause it'll
be easier if the room server component isn't making outbound HTTP requests
to remote servers
## Room Server
* Reads new and backfilled room events from the logs written by FS, FB and CRS.
* Tracks the current state of the room and the state at each event.
* Probably does auth checks on the incoming events.
* Handles state resolution as part of working out the current state and the
state at each event.
* Writes updates to the current state and new events to logs.
* Shards by room ID.
## Receipt Server
* Reads new updates to receipts from the logs written by the FS and CRU.
* Somehow learns enough information from the room server to work out how the
current receipt markers move with each update.
* Writes the new marker positions to logs
* Shards by room ID?
* It may be impossible to implement without folding it into the Room Server
forever coupling the components together.
## EDU Server
* Reads new updates to typing from the logs written by the FS and CTS.
* Updates the current list of people typing in a room.
* Writes the current list of people typing in a room to the logs.
* Shards by room ID?
## Presence Server
* Reads the current state of the rooms from the logs to track the intersection
of room membership between users.
* Reads updates to presence from the logs written by the FS and the CPS.
* Reads when clients sync from the logs from the Client Sync.
* Tracks any timers for users.
* Writes the changes to presence state to the logs.
* Shards by user ID somehow?
## Client Sync
* Handle /client/v2/sync requests.
* Reads new events and the current state of the rooms from logs written by the Room Server.
* Reads new receipts positions from the logs written by the Receipts Server.
* Reads changes to presence from the logs written by the Presence Server.
* Reads changes to typing from the logs written by the EDU Server.
* Writes when a client starts and stops syncing to the logs.
## Client Search
* Handle whatever the client API path for event search is?
* Reads new events and the current state of the rooms from logs written by the Room Server.
* Maintains a full text search index of some kind.
## Client Push
* Pushes unread messages to remote push servers.
* Reads new events and the current state of the rooms from logs written by the Room Server.
* Reads the position of the read marker from the Receipts Server.
* Makes outbound HTTP hits to the push server for the client device.
## Application Service
* Receives events from the Room Server.
* Filters events and sends them to each registered application service.
* Runs a separate goroutine for each application service.
# Internal Component API
Some dendrite components use internal APIs to communicate information back
and forth between each other. There are two implementations of each API, one
that uses HTTP requests and one that does not. The HTTP implementation is
used in multi-process mode, so processes on separate computers may still
communicate, whereas in single-process or Monolith mode, the direct
implementation is used. HTTP is preferred here to kafka streams as it allows
for request responses.
Running `dendrite-monolith-server` will set up direct connections between
components, whereas running each individual component (which are only run in
multi-process mode) will set up HTTP-based connections.
The functions that make HTTP requests to internal APIs of a component are
located in `/<component name>/api/<name>.go`, named according to what
functionality they cover. Each of these requests is handled in `/<component
name>/<name>/<name>.go`.
As an example, the `appservices` component allows other Dendrite components
to query external application services via its internal API. A component
would call the desired function in `/appservices/api/query.go`. In
multi-process mode, this would send an internal HTTP request, which would
be handled by a function in `/appservices/query/query.go`. In single-process
mode, no internal HTTP request occurs, instead functions are simply called
directly, thus requiring no changes on the calling component's end.
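As a rough illustration of this pattern (the interface and type names below are invented for the example and are not the real Dendrite APIs), each internal API can be thought of as a Go interface with one direct implementation and one HTTP-backed implementation:

```go
package example

import (
	"context"
	"net/http"
)

// AliasAPI is a hypothetical internal API exposed by one component.
type AliasAPI interface {
	LookupAlias(ctx context.Context, alias string) (roomID string, err error)
}

// directAliasAPI is what monolith mode uses: callers hold the implementation
// directly and method calls are plain function calls, with no HTTP involved.
type directAliasAPI struct {
	lookup func(ctx context.Context, alias string) (string, error)
}

func (a *directAliasAPI) LookupAlias(ctx context.Context, alias string) (string, error) {
	return a.lookup(ctx, alias)
}

// httpAliasAPI is what multi-process mode uses: the same method issues an
// internal HTTP request to the owning component's process instead.
type httpAliasAPI struct {
	baseURL string
	client  *http.Client
}

func (a *httpAliasAPI) LookupAlias(ctx context.Context, alias string) (string, error) {
	// A real implementation would send a JSON request body and decode the
	// JSON response; both are omitted from this sketch.
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, a.baseURL+"/lookupAlias", nil)
	if err != nil {
		return "", err
	}
	resp, err := a.client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	return "", nil
}
```

Callers depend only on the interface, so whether the direct or the HTTP-backed implementation is wired in is decided once at startup.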

19
docs/_config.yml Normal file
View File

@ -0,0 +1,19 @@
title: Dendrite
description: >-
Second-generation Matrix homeserver written in Go!
baseurl: "/dendrite" # the subpath of your site, e.g. /blog
url: ""
twitter_username: matrixdotorg
github_username: matrix-org
remote_theme: just-the-docs/just-the-docs
plugins:
- jekyll-feed
aux_links:
"GitHub":
- "//github.com/matrix-org/dendrite"
aux_links_new_tab: true
sass:
sass_dir: _sass
style: compressed
exclude:
- INSTALL.md

View File

@ -0,0 +1,3 @@
footer.site-footer {
opacity: 10%;
}

10
docs/administration.md Normal file
View File

@ -0,0 +1,10 @@
---
title: Administration
has_children: yes
nav_order: 4
permalink: /administration
---
# Administration
This section contains documentation on managing your existing Dendrite deployment.

View File

@ -0,0 +1,53 @@
---
title: Creating user accounts
parent: Administration
permalink: /administration/createusers
nav_order: 1
---
# Creating user accounts
User accounts can be created on a Dendrite instance in a number of ways.
## From the command line
The `create-account` tool is built in the `bin` folder when building Dendrite with
the `build.sh` script.
It uses the `dendrite.yaml` configuration file to connect to the Dendrite user database
and create the account entries directly. It can therefore be used even if Dendrite is not
running yet, as long as the database is up.
An example of using `create-account` to create a **normal account**:
```bash
./bin/create-account -config /path/to/dendrite.yaml -username USERNAME
```
You will be prompted to enter a new password for the new account.
To create a new **admin account**, add the `-admin` flag:
```bash
./bin/create-account -config /path/to/dendrite.yaml -username USERNAME -admin
```
## Using shared secret registration
Dendrite supports the Synapse-compatible shared secret registration endpoint.
To enable shared secret registration, you must first enable it in the `dendrite.yaml`
configuration file by specifying a shared secret. In the `client_api` section of the config,
enter a new secret into the `registration_shared_secret` field:
```yaml
client_api:
# ...
registration_shared_secret: ""
```
You can then use the `/_synapse/admin/v1/register` endpoint as per the
[Synapse documentation](https://matrix-org.github.io/synapse/latest/admin_api/register_api.html).
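For illustration only, a hypothetical invocation of the endpoint might look like the sketch below, assuming `curl`, `jq` and `openssl` are available. It follows the Synapse-compatible flow of fetching a nonce and then posting an HMAC-SHA1 computed over the nonce, username, password and admin flag using the shared secret; the server name is a placeholder and the exact request fields should be verified against the linked Synapse documentation.

```bash
# Hypothetical example; verify the field names and MAC construction against
# the Synapse shared secret registration documentation.
SECRET="your_registration_shared_secret"
USERNAME="alice"
PASSWORD="hunter2"

# 1. Fetch a registration nonce.
NONCE=$(curl -s https://example.com/_synapse/admin/v1/register | jq -r .nonce)

# 2. Compute the MAC over nonce, username, password and the admin flag.
MAC=$(printf '%s\0%s\0%s\0notadmin' "$NONCE" "$USERNAME" "$PASSWORD" \
  | openssl dgst -sha1 -hmac "$SECRET" | awk '{print $NF}')

# 3. Register the account.
curl -s -X POST https://example.com/_synapse/admin/v1/register \
  -H 'Content-Type: application/json' \
  -d "{\"nonce\":\"$NONCE\",\"username\":\"$USERNAME\",\"password\":\"$PASSWORD\",\"admin\":false,\"mac\":\"$MAC\"}"
```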
Shared secret registration is only enabled once a secret is configured. To disable shared
secret registration again, remove the secret from the configuration file.

View File

@ -0,0 +1,53 @@
---
title: Enabling registration
parent: Administration
permalink: /administration/registration
nav_order: 2
---
# Enabling registration
Enabling registration allows users to register their own user accounts on your
Dendrite server using their Matrix client. They will be able to choose their own
username and password and log in.
Registration is controlled by the `registration_disabled` field in the `client_api`
section of the configuration. By default, `registration_disabled` is set to `true`,
disabling registration. If you want to enable registration, you should change this
setting to `false`.
Currently Dendrite supports secondary verification using [reCAPTCHA](https://www.google.com/recaptcha/about/).
Other methods will be supported in the future.
## reCAPTCHA verification
Dendrite supports reCAPTCHA as a secondary verification method. If you want to enable
registration, it is **highly recommended** to configure reCAPTCHA. This will make it
much more difficult for automated spam systems to register accounts on your
homeserver.
You will need an API key from the [reCAPTCHA Admin Panel](https://www.google.com/recaptcha/admin).
Then configure the relevant details in the `client_api` section of the configuration:
```yaml
client_api:
# ...
registration_disabled: false
recaptcha_public_key: "PUBLIC_KEY_HERE"
recaptcha_private_key: "PRIVATE_KEY_HERE"
enable_registration_captcha: true
captcha_bypass_secret: ""
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
```
## Open registration
Dendrite does support open registration — that is, allowing users to create their own
user accounts without any verification or secondary authentication. However, it
is **not recommended** to enable open registration, as this leaves your homeserver
vulnerable to abuse by spammers or attackers, who create large numbers of user
accounts on Matrix homeservers in order to send spam or abuse into the network.
It isn't possible to enable open registration in Dendrite in a single step. If you
set `registration_disabled` to `false` without any secondary verification methods
enabled (such as reCAPTCHA), Dendrite will log an error and fail to start.

View File

@ -0,0 +1,39 @@
---
title: Enabling presence
parent: Administration
permalink: /administration/presence
nav_order: 3
---
# Enabling presence
Dendrite supports presence, which allows you to send your online/offline status
to other users, and to receive their statuses automatically. They will be displayed
by supported clients.
Note that enabling presence **can negatively impact** the performance of your Dendrite
server — it will require more CPU time and will increase the "chattiness" of your server
over federation. It is disabled by default for this reason.
Dendrite has two options for controlling presence:
* **Enable inbound presence**: Dendrite will handle presence updates for remote users
and distribute them to local users on your homeserver;
* **Enable outbound presence**: Dendrite will generate presence notifications for your
local users and distribute them to remote users over the federation.
This means that you can configure only one or the other direction if you prefer, i.e. to
receive presence from other servers without revealing the presence of your own users.
## Configuring presence
Presence is controlled by the `presence` block in the `global` section of the
configuration file:
```yaml
global:
# ...
presence:
enable_inbound: false
enable_outbound: false
```

View File

@ -0,0 +1,25 @@
---
title: Supported admin APIs
parent: Administration
permalink: /administration/adminapi
---
# Supported admin APIs
Dendrite supports, at present, a very small number of endpoints that allow
admin users to perform administrative functions. Please note that there is no
API stability guarantee on these endpoints at present — they may change shape
without warning.
More endpoints will be added in the future.
## `/_dendrite/admin/evacuateRoom/{roomID}`
This endpoint will instruct Dendrite to part all local users from the given `roomID`
in the URL. It may take some time to complete. A JSON body will be returned containing
the user IDs of all affected users.
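A hypothetical call might look like the following. Note that the HTTP method and the requirement for an admin user's access token are assumptions made purely for this illustration and should be verified against your Dendrite version:

```bash
# Hypothetical invocation; the method and authentication are assumptions.
# The room ID is URL-encoded ("!" becomes %21 and ":" becomes %3A).
curl -H "Authorization: Bearer YOUR_ADMIN_ACCESS_TOKEN" \
  "https://example.com/_dendrite/admin/evacuateRoom/%21roomid%3Aexample.com"
```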
## `/_synapse/admin/v1/register`
Shared secret registration — please see the [user creation page](createusers) for
guidance on configuring and using this endpoint.

10
docs/development.md Normal file
View File

@ -0,0 +1,10 @@
---
title: Development
has_children: true
permalink: /development
---
# Development
This section contains documentation that may be useful when helping to develop
Dendrite.

24
docs/index.md Normal file
View File

@ -0,0 +1,24 @@
---
layout: home
nav_exclude: true
---
# Dendrite
Dendrite is a second-generation Matrix homeserver written in Go! Following the microservice
architecture model, Dendrite is designed to be efficient, reliable and scalable. Despite being in beta,
many Matrix features are already supported.
This site aims to include relevant documentation to help you to get started with and
run Dendrite. Check out the following sections:
* **[Installation](INSTALL.md)** for building and deploying your own Dendrite homeserver
* **[Administration](administration.md)** for managing an existing Dendrite deployment
* **[Development](development.md)** for developing against Dendrite
You can also join us in our Matrix rooms dedicated to Dendrite, but please check first that
your question hasn't already been [answered in the FAQ](FAQ.md):
* **[#dendrite:matrix.org](https://matrix.to/#/#dendrite:matrix.org)** for general project discussion and support
* **[#dendrite-dev:matrix.org](https://matrix.to/#/#dendrite-dev:matrix.org)** for chat on Dendrite development specifically
* **[#dendrite-alerts:matrix.org](https://matrix.to/#/#dendrite-alerts:matrix.org)** for release notifications and other important announcements

10
docs/installation.md Normal file
View File

@ -0,0 +1,10 @@
---
title: Installation
has_children: true
nav_order: 2
permalink: /installation
---
# Installation
This section contains documentation on installing a new Dendrite deployment.

View File

@ -0,0 +1,110 @@
---
title: Planning your installation
parent: Installation
nav_order: 1
permalink: /installation/planning
---
# Planning your installation
## Modes
Dendrite can be run in one of two configurations:
* **Monolith mode**: All components run in the same process. In this mode,
it is possible to run an in-process NATS Server instead of running a standalone deployment.
This will usually be the preferred model for low-to-mid volume deployments, providing the best
balance between performance and resource usage.
* **Polylith mode**: A cluster of individual components running in their own processes, dealing
with different aspects of the Matrix protocol. Components communicate with each other using
internal HTTP APIs and NATS Server. This will almost certainly be the preferred model for very
large deployments but scalability comes with a cost. API calls are expensive and therefore a
polylith deployment may end up using disproportionately more resources for a smaller number of
users compared to a monolith deployment.
At present, we **recommend monolith mode deployments** in all cases.
## Databases
Dendrite can run with either a PostgreSQL or a SQLite backend. There are considerable tradeoffs
to consider:
* **PostgreSQL**: Needs to run separately to Dendrite, needs to be installed and configured separately
and will use more resources overall, but will be **considerably faster** than SQLite. PostgreSQL
has much better write concurrency which will allow Dendrite to process more tasks in parallel. This
will be necessary for federated deployments to perform adequately.
* **SQLite**: Built into Dendrite, therefore no separate database engine is necessary and is quite
a bit easier to set up, but will be much slower than PostgreSQL in most cases. SQLite only allows a
single writer on a database at a given time, which will significantly restrict Dendrite's ability
to process multiple tasks in parallel.
At this time, we **recommend the PostgreSQL database engine** for all production deployments.
## Requirements
Dendrite will run on Linux, macOS and Windows Server. It should also run fine on variants
of BSD such as FreeBSD and OpenBSD. We have not tested Dendrite on AIX, Solaris, Plan 9 or z/OS —
your mileage may vary with these platforms.
It is difficult to state explicitly the amount of CPU, RAM or disk space that a Dendrite
installation will need, as this varies considerably based on a number of factors. In particular:
* The number of users using the server;
* The number of rooms that the server is joined to — federated rooms in particular will typically
use more resources than rooms with only local users;
* The complexity of rooms that the server is joined to — rooms with more members coming and
going will typically be of a much higher complexity.
Some tasks are more expensive than others, such as joining rooms over federation, running state
resolution or sending messages into very large federated rooms with lots of remote users. Therefore
you should plan accordingly and ensure that you have enough resources available to endure spikes
in CPU or RAM usage, as these may be considerably higher than the idle resource usage.
At an absolute minimum, Dendrite will expect 1GB RAM. For a comfortable day-to-day deployment
which can participate in federated rooms for a number of local users, be prepared to assign 2-4
CPU cores and 8GB RAM — more if your user count increases.
If you are running PostgreSQL on the same machine, allow extra headroom for this too, as the
database engine will also have CPU and RAM requirements of its own. Running too many heavy
services on the same machine may result in resource starvation and processes may end up being
killed by the operating system if they try to use too much memory.
## Dependencies
In order to install Dendrite, you will need to satisfy the following dependencies.
### Go
At this time, Dendrite supports being built with Go 1.16 or later. We do not support building
Dendrite with older versions of Go than this. If you are installing Go using a package manager,
you should check (by running `go version`) that you are using a suitable version before you start.
### PostgreSQL
If using the PostgreSQL database engine, you should install PostgreSQL 12 or later.
### NATS Server
Monolith deployments come with a built-in [NATS Server](https://github.com/nats-io/nats-server) and
therefore do not need this to be manually installed. If you are planning a monolith installation, you
do not need to do anything.
Polylith deployments, however, currently need a standalone NATS Server installation with JetStream
enabled.
To do so, follow the [NATS Server installation instructions](https://docs.nats.io/running-a-nats-service/introduction/installation) and then [start your NATS deployment](https://docs.nats.io/running-a-nats-service/introduction/running). JetStream must be enabled, either by passing the `-js` flag to `nats-server`,
or by specifying the `store_dir` option in the `jetstream` configuration.
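For example, a minimal `nats-server` configuration file with JetStream enabled might look like the snippet below (the storage path is only an example) and can then be loaded with `nats-server -c /path/to/nats.conf`:

```
jetstream {
  store_dir: "/var/lib/nats/jetstream"
}
```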
### Reverse proxy (polylith deployments)
Polylith deployments require a reverse proxy, such as [NGINX](https://www.nginx.com) or
[HAProxy](http://www.haproxy.org). Configuring those is not covered in this documentation,
although a [sample configuration for NGINX](https://github.com/matrix-org/dendrite/blob/main/docs/nginx/polylith-sample.conf)
is provided.
### Windows
Finally, if you want to build Dendrite on Windows, you will need `gcc` in the path. The best
way to achieve this is by installing and building Dendrite under [MinGW-w64](https://www.mingw-w64.org/).

View File

@ -0,0 +1,93 @@
---
title: Setting up the domain
parent: Installation
nav_order: 2
permalink: /installation/domainname
---
# Setting up the domain
Every Matrix server deployment requires a server name which uniquely identifies it. For
example, if you are using the server name `example.com`, then your users will have usernames
that take the format `@user:example.com`.
For federation to work, the server name must be resolvable by other homeservers on the internet
— that is, the domain must be registered and properly configured with the relevant DNS records.
Matrix servers discover each other when federating using the following methods:
1. If a well-known delegation exists on `example.com`, use the hostname and port from
the well-known file to connect to the remote homeserver;
2. If a DNS SRV delegation exists on `example.com`, use the hostname and port from the DNS SRV
record to connect to the remote homeserver;
3. If neither well-known nor DNS SRV delegation is configured, attempt to connect to the remote
homeserver directly at `example.com` port TCP/8448 using HTTPS.
## TLS certificates
Matrix federation requires that valid TLS certificates are present on the domain. You must
obtain certificates from a publicly accepted Certificate Authority (CA). [LetsEncrypt](https://letsencrypt.org)
is an example of such a CA that can be used. Self-signed certificates are not suitable for
federation and will typically not be accepted by other homeservers.
A common practice to help ease the management of certificates is to install a reverse proxy in
front of Dendrite which manages the TLS certificates and HTTPS proxying itself. Software such as
[NGINX](https://www.nginx.com) and [HAProxy](http://www.haproxy.org) can be used for the task.
Although the finer details of configuring these are not described here, you must reverse proxy
all `/_matrix` paths to your Dendrite server.
It is possible for the reverse proxy to listen on the standard HTTPS port TCP/443 so long as your
domain delegation is configured to point to port TCP/443.
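As a purely illustrative sketch (not the official sample configuration), an NGINX server block forwarding Matrix traffic to a Dendrite instance assumed to be listening for plain HTTP on `localhost:8008` might look something like this; the port and certificate paths are assumptions for the example:

```
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    # Forward all Matrix client and federation paths to Dendrite.
    location /_matrix {
        proxy_pass http://localhost:8008;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```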
## Delegation
Delegation allows you to specify the server name and port that your Dendrite installation is
reachable at, or to host the Dendrite server at a different server name to the domain that
is being delegated.
For example, if your Dendrite installation is actually reachable at `matrix.example.com` port 8448,
you will be able to delegate from `example.com` to `matrix.example.com` so that your users will have
`@user:example.com` user names instead of `@user:matrix.example.com` usernames.
Delegation can be performed in one of two ways:
* **Well-known delegation**: A well-known text file is served over HTTPS on the domain name
that you want to use, pointing to your server on `matrix.example.com` port 8448;
* **DNS SRV delegation**: A DNS SRV record is created on the domain name that you want to
use, pointing to your server on `matrix.example.com` port TCP/8448.
If you are using a reverse proxy to forward `/_matrix` to Dendrite, your well-known or DNS SRV
delegation must refer to the hostname and port that the reverse proxy is listening on instead.
Well-known delegation is typically easier to set up and usually preferred. However, you can use
either or both methods to delegate. If you configure both methods of delegation, it is important
that they both agree and refer to the same hostname and port.
## Well-known delegation
Using well-known delegation requires that you are running a web server at `example.com` which
is listening on the standard HTTPS port TCP/443.
Assuming that your Dendrite installation is listening for HTTPS connections at `matrix.example.com`
on port 8448, the delegation file must be served at `https://example.com/.well-known/matrix/server`
and contain the following JSON document:
```json
{
"m.server": "https://matrix.example.com:8448"
}
```
## DNS SRV delegation
Using DNS SRV delegation requires creating DNS SRV records on the `example.com` zone which
refer to your Dendrite installation.
Assuming that your Dendrite installation is listening for HTTPS connections at `matrix.example.com`
port 8448, the DNS SRV record must have the following fields:
* Name: `@` (or whichever term your DNS provider uses to signal the root)
* Service: `_matrix`
* Protocol: `_tcp`
* Port: `8448`
* Target: `matrix.example.com`
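In BIND-style zone file syntax, the equivalent record would look like this, where the priority (10), weight (10) and TTL (3600) are example values:

```
_matrix._tcp.example.com. 3600 IN SRV 10 10 8448 matrix.example.com.
```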

View File

@ -0,0 +1,106 @@
---
title: Preparing database storage
parent: Installation
nav_order: 3
permalink: /installation/database
---
# Preparing database storage
Dendrite uses SQL databases to store data. Depending on the database engine being used, you
may need to perform some manual steps outlined below.
## SQLite
SQLite deployments do not require manual database creation. Simply configure the database
filenames in the Dendrite configuration file and start Dendrite. The databases will be created
and populated automatically.
Note that Dendrite **cannot share a single SQLite database across multiple components**. Each
component must be configured with its own SQLite database filename.
### Connection strings
Connection strings for SQLite databases take the following forms:
* Current working directory path: `file:dendrite_component.db`
* Full specified path: `file:///path/to/dendrite_component.db`
## PostgreSQL
Dendrite can automatically populate the database with the relevant tables and indexes, but
it is not capable of creating the databases themselves. You will need to create the databases
manually.
At this point, you can choose to either use a single database for all Dendrite components,
or you can run each component with its own separate database:
* **Single database**: You will need to create a single PostgreSQL database. Monolith deployments
can use a single global connection pool, which makes updating the configuration file much easier.
Only one database connection string to manage and likely simpler to back up the database. All
components will be sharing the same database resources (CPU, RAM, storage).
* **Separate databases**: You will need to create a separate PostgreSQL database for each
component. You will need to configure each component that has storage in the Dendrite
configuration file with its own connection parameters. Allows running a different database engine
for each component on a different machine if needs be, each with their own CPU, RAM and storage —
almost certainly overkill unless you are running a very large Dendrite deployment.
For either configuration, you will want to:
1. Configure a role (with a username and password) which Dendrite can use to connect to the
database;
2. Create the database(s) themselves, ensuring that the Dendrite role has privileges over them.
As Dendrite will create and manage the database tables, indexes and sequences by itself, the
Dendrite role must have suitable privileges over the database.
### Connection strings
The format of connection strings for PostgreSQL databases is described in the [PostgreSQL libpq manual](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). Note that Dendrite only
supports the "Connection URIs" format and **will not** work with the "Keyword/Value Connection
string" format.
Example supported connection strings take the format:
* `postgresql://user:pass@hostname/database?options=...`
* `postgres://user:pass@hostname/database?options=...`
If you need to disable SSL/TLS on the database connection, you may need to append `?sslmode=disable` to the end of the connection string.
### Role creation
Create a role which Dendrite can use to connect to the database, choosing a new password when
prompted. On macOS, you may need to omit the `sudo -u postgres` from the below instructions.
```bash
sudo -u postgres createuser -P dendrite
```
### Single database creation
Create the database itself, using the `dendrite` role from above:
```bash
sudo -u postgres createdb -O dendrite dendrite
```
### Multiple database creation
The following eight components require a database. In this example they will be named:
| Component      | Database name            |
| -------------- | ------------------------ |
| Appservice API | `dendrite_appservice`    |
| Federation API | `dendrite_federationapi` |
| Media API      | `dendrite_mediaapi`      |
| MSCs           | `dendrite_mscs`          |
| Roomserver     | `dendrite_roomserver`    |
| Sync API       | `dendrite_syncapi`       |
| Key server     | `dendrite_keyserver`     |
| User API       | `dendrite_userapi`       |
... therefore you will need to create eight different databases:
```bash
for i in appservice federationapi mediaapi mscs roomserver syncapi keyserver userapi; do
sudo -u postgres createdb -O dendrite dendrite_$i
done
```

View File

@ -0,0 +1,79 @@
---
title: Generating signing keys
parent: Installation
nav_order: 4
permalink: /installation/signingkeys
---
# Generating signing keys
All Matrix homeservers require a signing private key, which will be used to authenticate
federation requests and events.
The `generate-keys` utility can be used to generate a private key. Assuming that Dendrite was
built using `build.sh`, you should find the `generate-keys` utility in the `bin` folder.
To generate a Matrix signing private key:
```bash
./bin/generate-keys --private-key matrix_key.pem
```
The generated `matrix_key.pem` file is your new signing key.
## Important warning
You must treat this key as if it is highly sensitive and private, so **never share it with
anyone**. No one should ever ask you for this key for any reason, even to debug a problematic
Dendrite server.
Make sure to take a safe backup of this key. You will likely need it if you want to reinstall
Dendrite, or any other Matrix homeserver, on the same domain name in the future. If you lose
this key, you may have trouble joining federated rooms.
## Old signing keys
If you already have old signing keys from a previous Matrix installation on the same domain
name, you can reuse those instead, as long as they have not been previously marked as expired —
a key that has been marked as expired in the past is unusable.
Old keys from a previous Dendrite installation can be reused as-is without any further
configuration required. Simply use that key file in the Dendrite configuration.
If you have server keys from an older Synapse instance, you can convert them to Dendrite's PEM
format and configure them as `old_private_keys` in your config.
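As a rough sketch, the relevant section of the configuration looks something like the following; the field names and placement are assumptions here, so check the sample `dendrite-config.yaml` for the exact format:

```yaml
global:
  # ...
  old_private_keys:
    # Path to the old key and the millisecond timestamp at which it stopped
    # being used. Field names should be verified against the sample config.
    - private_key: /path/to/old_matrix_key.pem
      expired_at: 1601024554498
```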
## Key format
Dendrite stores the server signing key in the PEM format with the following structure.
```
-----BEGIN MATRIX PRIVATE KEY-----
Key-ID: ed25519:<Key ID>
<Base64 Encoded Key Data>
-----END MATRIX PRIVATE KEY-----
```
## Converting Synapse keys
If you have signing keys from a previous Synapse installation, you should ideally configure them
as `old_private_keys` in your Dendrite config file. Synapse stores signing keys in the following
format:
```
ed25519 <Key ID> <Base64 Encoded Key Data>
```
To convert this key to Dendrite's PEM format, use the following template. You must copy the Key ID
exactly without modifying it. **It is important to include the trailing equals sign on the Base64
Encoded Key Data** if it is not already present in the original key, as the key data needs to be
padded to exactly 32 bytes:
```
-----BEGIN MATRIX PRIVATE KEY-----
Key-ID: ed25519:<Key ID>
<Base64 Encoded Key Data>=
-----END MATRIX PRIVATE KEY-----
```
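If you would rather script the conversion, a small shell sketch along these lines can do it. The filenames are placeholders, and it assumes the Synapse key file contains a single line in the format shown above:

```bash
# Illustrative only: convert a single-line Synapse signing key to Dendrite PEM.
read -r ALGORITHM KEY_ID KEY_DATA < synapse_signing.key

# Append the padding '=' only if it is not already present.
case "$KEY_DATA" in
  *=) PADDED="$KEY_DATA" ;;
  *)  PADDED="$KEY_DATA=" ;;
esac

cat > old_matrix_key.pem <<EOF
-----BEGIN MATRIX PRIVATE KEY-----
Key-ID: $ALGORITHM:$KEY_ID
$PADDED
-----END MATRIX PRIVATE KEY-----
EOF
```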

View File

@ -0,0 +1,21 @@
---
title: Installing as a monolith
parent: Installation
has_toc: true
nav_order: 5
permalink: /installation/install/monolith
---
# Installing as a monolith
You can install the Dendrite monolith binary into `$GOPATH/bin` by using `go install`:
```sh
go install ./cmd/dendrite-monolith-server
```
Alternatively, you can specify a custom path for the binary to be written to using `go build`:
```sh
go build -o /usr/local/bin/ ./cmd/dendrite-monolith-server
```

View File

@ -0,0 +1,33 @@
---
title: Installing as a polylith
parent: Installation
has_toc: true
nav_order: 6
permalink: /installation/install/polylith
---
# Installing as a polylith
You can install the Dendrite polylith binary into `$GOPATH/bin` by using `go install`:
```sh
go install ./cmd/dendrite-polylith-multi
```
Alternatively, you can specify a custom path for the binary to be written to using `go build`:
```sh
go build -o /usr/local/bin/ ./cmd/dendrite-polylith-multi
```
The `dendrite-polylith-multi` binary is a "multi-personality" binary which can run as
any of the components depending on the supplied command line parameters.
## Reverse proxy
Polylith deployments require a reverse proxy in order to ensure that requests are
sent to the correct endpoint. You must ensure that a suitable reverse proxy is installed
and configured.
A [sample configuration file](https://github.com/matrix-org/dendrite/blob/main/docs/nginx/polylith-sample.conf)
is provided for [NGINX](https://www.nginx.com).

View File

@ -0,0 +1,145 @@
---
title: Populate the configuration
parent: Installation
nav_order: 7
permalink: /installation/configuration
---
# Populate the configuration
The configuration file is used to configure Dendrite. A sample configuration file,
called [`dendrite-config.yaml`](https://github.com/matrix-org/dendrite/blob/main/dendrite-config.yaml),
is present in the top level of the Dendrite repository.
You will need to duplicate this file, calling it `dendrite.yaml` for example, and then
tailor it to your installation. At a minimum, you will need to populate the following
sections:
## Server name
First of all, you will need to configure the server name of your Matrix homeserver.
This must match the domain name that you have selected whilst [configuring the domain
name delegation](domainname).
In the `global` section, set the `server_name` to your delegated domain name:
```yaml
global:
# ...
server_name: example.com
```
## Server signing keys
Next, you should tell Dendrite where to find your [server signing keys](signingkeys).
In the `global` section, set the `private_key` to the path to your server signing key:
```yaml
global:
# ...
private_key: /path/to/matrix_key.pem
```
## JetStream configuration
Monolith deployments can use the built-in NATS Server rather than running a standalone
server. If you are building a polylith deployment, or you want to use a standalone NATS
Server anyway, you can also configure that too.
### Built-in NATS Server (monolith only)
In the `global` section, under the `jetstream` key, ensure that no server addresses are
configured and set a `storage_path` to a persistent folder on the filesystem:
```yaml
global:
# ...
jetstream:
in_memory: false
storage_path: /path/to/storage/folder
topic_prefix: Dendrite
```
### Standalone NATS Server (monolith and polylith)
To use a standalone NATS Server instance, you will need to configure `addresses` field
to point to the port that your NATS Server is listening on:
```yaml
global:
# ...
jetstream:
addresses:
- localhost:4222
topic_prefix: Dendrite
```
You do not need to configure the `storage_path` when using a standalone NATS Server instance.
In the case that you are connecting to a multi-node NATS cluster, you can configure more than
one address in the `addresses` field.
## Database connections
Configuring database connections varies based on the [database configuration](database)
that you chose.
### Global connection pool (monolith with a single PostgreSQL database only)
If you are running a monolith deployment and want to use a single connection pool to a
single PostgreSQL database, then you must uncomment and configure the `database` section
within the `global` section:
```yaml
global:
# ...
database:
connection_string: postgres://user:pass@hostname/database?sslmode=disable
max_open_conns: 100
max_idle_conns: 5
conn_max_lifetime: -1
```
**You must then remove or comment out** the `database` sections from other areas of the
configuration file, e.g. under the `app_service_api`, `federation_api`, `key_server`,
`media_api`, `mscs`, `room_server`, `sync_api` and `user_api` blocks, otherwise these will
override the `global` database configuration.
### Per-component connections (all other configurations)
If you are building a polylith deployment, are using SQLite databases or separate PostgreSQL
databases per component, then you must instead configure the `database` sections under each
of the component blocks, e.g. under the `app_service_api`, `federation_api`, `key_server`,
`media_api`, `mscs`, `room_server`, `sync_api` and `user_api` blocks.
For example, with PostgreSQL:
```yaml
room_server:
# ...
database:
connection_string: postgres://user:pass@hostname/dendrite_component?sslmode=disable
max_open_conns: 10
max_idle_conns: 2
conn_max_lifetime: -1
```
... or with SQLite:
```yaml
room_server:
# ...
database:
connection_string: file:roomserver.db
max_open_conns: 10
max_idle_conns: 2
conn_max_lifetime: -1
```
## Other sections
There are other options which may be useful, so review them all. In particular, if you are
trying to federate from your Dendrite instance into public rooms then configuring the
`key_perspectives` (like `matrix.org` in the sample) can help to improve reliability
considerably by allowing your homeserver to fetch public keys for dead homeservers from
another living server.
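The rough shape of that configuration is shown below; the key ID and public key here are placeholders, so copy the real values from the sample configuration file rather than from this sketch:

```yaml
global:
  # ...
  key_perspectives:
    - server_name: matrix.org
      keys:
        # Placeholder values: use the key ID and public key from the sample config.
        - key_id: ed25519:EXAMPLEID
          public_key: EXAMPLE_PUBLIC_KEY_FROM_SAMPLE_CONFIG
```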

View File

@ -0,0 +1,41 @@
---
title: Starting the monolith
parent: Installation
has_toc: true
nav_order: 9
permalink: /installation/start/monolith
---
# Starting the monolith
Once you have completed all of the preparation and installation steps,
you can start your Dendrite monolith deployment by starting the `dendrite-monolith-server`:
```bash
./dendrite-monolith-server -config /path/to/dendrite.yaml
```
If you want to change the addresses or ports that Dendrite listens on, you
can use the `-http-bind-address` and `-https-bind-address` command line arguments:
```bash
./dendrite-monolith-server -config /path/to/dendrite.yaml \
-http-bind-address 1.2.3.4:12345 \
-https-bind-address 1.2.3.4:54321
```
## Running under systemd
A common deployment pattern is to run the monolith under systemd. For this, you
will need to create a service unit file. An example service unit file is available
in the [GitHub repository](https://github.com/matrix-org/dendrite/blob/main/docs/systemd/monolith-example.service).
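A minimal sketch of such a unit is shown below; the user, working directory and binary path are assumptions for illustration, so prefer the example from the repository as your starting point. Save it as, for example, `/etc/systemd/system/dendrite.service`:

```
[Unit]
Description=Dendrite Matrix homeserver
After=network.target

[Service]
User=dendrite
WorkingDirectory=/opt/dendrite
ExecStart=/opt/dendrite/dendrite-monolith-server -config /opt/dendrite/dendrite.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```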
Once you have installed the service unit, you can notify systemd, enable and start
the service:
```bash
systemctl daemon-reload
systemctl enable dendrite
systemctl start dendrite
journalctl -fu dendrite
```

View File

@ -0,0 +1,73 @@
---
title: Starting the polylith
parent: Installation
has_toc: true
nav_order: 9
permalink: /installation/start/polylith
---
# Starting the polylith
Once you have completed all of the preparation and installation steps,
you can start your Dendrite polylith deployment by starting the various components
using the `dendrite-polylith-multi` personalities.
## Start the reverse proxy
Ensure that your reverse proxy is started and is proxying the correct
endpoints to the correct components. Software such as [NGINX](https://www.nginx.com) or
[HAProxy](http://www.haproxy.org) can be used for this purpose. A [sample configuration
for NGINX](https://github.com/matrix-org/dendrite/blob/main/docs/nginx/polylith-sample.conf)
is provided.
## Starting the components
Each component must be started individually:
### Client API
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml clientapi
```
### Sync API
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml syncapi
```
### Media API
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml mediaapi
```
### Federation API
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml federationapi
```
### Roomserver
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml roomserver
```
### Appservice API
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml appservice
```
### User API
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml userapi
```
### Key server
```bash
./dendrite-polylith-multi -config /path/to/dendrite.yaml keyserver
```
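For local experimentation, a hypothetical wrapper script along these lines can start every component in one go; for production you will more likely want a process supervisor such as systemd to manage each component separately:

```bash
#!/bin/sh
# Illustrative helper only: start each polylith component as a background process.
CONFIG=/path/to/dendrite.yaml

for component in clientapi syncapi mediaapi federationapi roomserver appservice userapi keyserver; do
  ./dendrite-polylith-multi -config "$CONFIG" "$component" &
done

# Wait for all of the components to exit.
wait
```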

View File

@ -1,27 +1,34 @@
---
title: P2P Matrix
nav_exclude: true
---

# P2P Matrix

These are the instructions for setting up P2P Dendrite, current as of May 2020. There's both Go stuff and JS stuff to do to set this up.

## Dendrite

### Build

- The `main` branch has a WASM-only binary for dendrite: `./cmd/dendritejs`.
- Build it and copy assets to riot-web.

```
./build-dendritejs.sh
cp bin/main.wasm ../riot-web/src/vector/dendrite.wasm
```

### Test

To check that the Dendrite side is working well as Wasm, you can run the
Wasm-specific tests:

```
./test-dendritejs.sh
```

## Rendezvous

This is how peers discover each other and communicate.

@ -29,18 +36,18 @@ By default, Dendrite uses the Matrix-hosted websocket star relay server at TODO

This is currently hard-coded in `./cmd/dendritejs/main.go` - you can also use a local one if you run your own relay:

```
npm install --global libp2p-websocket-star-rendezvous
rendezvous --port=9090 --host=127.0.0.1
```

Then use `/ip4/127.0.0.1/tcp/9090/ws/p2p-websocket-star/`.

## Riot-web

You need to check out this repo:

```
git clone git@github.com:matrix-org/go-http-js-libp2p.git
```

Make sure to `yarn install` in the repo. Then:

@ -53,26 +60,30 @@ if (!global.fs && global.require) {
    global.fs = require("fs");
}
```

- Add the diff at <https://github.com/vector-im/riot-web/compare/matthew/p2p?expand=1> - ignore the `package.json` stuff.
- Add the following symlinks: they HAVE to be symlinks as the diff in `webpack.config.js` references specific paths.

```
cd node_modules
ln -s ../../go-http-js-libp2p
```

NB: If you don't run the server with `yarn start` you need to make sure your server is sending the header `Service-Worker-Allowed: /`.

TODO: Make a Docker image with all of this in it and a volume mount for `dendrite.wasm`.

## Running

You need a Chrome and a Firefox running to test locally as service workers don't work in incognito tabs.

- For Chrome, use `chrome://serviceworker-internals/` to unregister/see logs.
- For Firefox, use `about:debugging#/runtime/this-firefox` to unregister. Use the console window to see logs.

Assuming you've `yarn start`ed Riot-Web, go to `http://localhost:8080` and register with `http://localhost:8080` as your HS URL.

You can:

- join rooms by room alias e.g. `/join #foo:bar`.
- invite specific users to a room.
- explore the published room list. All members of the room can re-publish aliases (unlike Synapse).

33
docs/other/peeking.md Normal file
View File

@ -0,0 +1,33 @@
---
nav_exclude: true
---
## Peeking
Local peeking is implemented as per [MSC2753](https://github.com/matrix-org/matrix-doc/pull/2753).
Implementation-wise, this means:
* Users call `/peek` and `/unpeek` on the clientapi from a given device.
* The clientapi delegates these via HTTP to the roomserver, which coordinates peeking in general for a given room
* The roomserver writes a NewPeek event into the kafka log headed to the syncserver
* The syncserver tracks the existence of the local peek in the syncapi_peeks table in its DB, and then starts waking up the peeking devices for the room in question, putting it in the `peek` section of the /sync response.
Peeking over federation is implemented as per [MSC2444](https://github.com/matrix-org/matrix-doc/pull/2444).
For requests to peek our rooms ("inbound peeks"):
* Remote servers call `/peek` on federationapi
* The federationapi queries the federationsender to check if this is renewing an inbound peek or not.
* If not, it hits the PerformInboundPeek on the roomserver to ask it for the current state of the room.
* The roomserver atomically (in theory) adds a NewInboundPeek to its kafka stream to tell the federationsender to start peeking.
* The federationsender receives the event, tracks the inbound peek in the federationsender_inbound_peeks table, and starts sending events to the peeking server.
* The federationsender evicts stale inbound peeks which haven't been renewed.
For peeking into other servers' rooms ("outbound peeks"):
* The `roomserver` will kick the `federationsender` much as it does for a federated `/join` in order to trigger a federated outbound `/peek`
* The `federationsender` tracks the existence of the outbound peek in its federationsender_outbound_peeks table.
* The `federationsender` regularly renews the remote peek as long as there are still peeking devices syncing for it.
* TBD: how do we tell if there are no devices currently syncing for a given peeked room? The syncserver needs to tell the roomserver
somehow who then needs to warn the federationsender.

View File

@ -1,26 +0,0 @@
## Peeking
Local peeking is implemented as per [MSC2753](https://github.com/matrix-org/matrix-doc/pull/2753).
Implementationwise, this means:
* Users call `/peek` and `/unpeek` on the clientapi from a given device.
* The clientapi delegates these via HTTP to the roomserver, which coordinates peeking in general for a given room
* The roomserver writes an NewPeek event into the kafka log headed to the syncserver
* The syncserver tracks the existence of the local peek in the syncapi_peeks table in its DB, and then starts waking up the peeking devices for the room in question, putting it in the `peek` section of the /sync response.
Peeking over federation is implemented as per [MSC2444](https://github.com/matrix-org/matrix-doc/pull/2444).
For requests to peek our rooms ("inbound peeks"):
* Remote servers call `/peek` on federationapi
* The federationapi queries the federationsender to check if this is renewing an inbound peek or not.
* If not, it hits the PerformInboundPeek on the roomserver to ask it for the current state of the room.
* The roomserver atomically (in theory) adds a NewInboundPeek to its kafka stream to tell the federationserver to start peeking.
* The federationsender receives the event, tracks the inbound peek in the federationsender_inbound_peeks table, and starts sending events to the peeking server.
* The federationsender evicts stale inbound peeks which haven't been renewed.
For peeking into other server's rooms ("outbound peeks"):
* The `roomserver` will kick the `federationsender` much as it does for a federated `/join` in order to trigger a federated outbound `/peek`
* The `federationsender` tracks the existence of the outbound peek in in its federationsender_outbound_peeks table.
* The `federationsender` regularly renews the remote peek as long as there are still peeking devices syncing for it.
* TBD: how do we tell if there are no devices currently syncing for a given peeked room? The syncserver needs to tell the roomserver
somehow who then needs to warn the federationsender.

View File

@ -1,29 +0,0 @@
# Server Key Format
Dendrite stores the server signing key in the PEM format with the following structure.
```
-----BEGIN MATRIX PRIVATE KEY-----
Key-ID: ed25519:<Key Handle>
<Base64 Encoded Key Data>
-----END MATRIX PRIVATE KEY-----
```
## Converting Synapse Keys
If you have signing keys from a previous synapse server, you should ideally configure them as `old_private_keys` in your Dendrite config file. Synapse stores signing keys in the following format.
```
ed25519 <Key Handle> <Base64 Encoded Key Data>
```
To convert this key to Dendrite's PEM format, use the following template. **It is important to include the equals sign, as the key data needs to be padded to 32 bytes.**
```
-----BEGIN MATRIX PRIVATE KEY-----
Key-ID: ed25519:<Key Handle>
<Base64 Encoded Key Data>=
-----END MATRIX PRIVATE KEY-----
```

View File

@ -1,3 +1,9 @@
---
title: SyTest
parent: Development
permalink: /development/sytest
---
# SyTest

Dendrite uses [SyTest](https://github.com/matrix-org/sytest) for its
@ -43,6 +49,7 @@ source code. The test results TAP file and homeserver logging output will go to
add any tests to `sytest-whitelist`.

When debugging, the following Docker `run` options may also be useful:

* `-v /path/to/sytest/:/sytest/`: Use your local SyTest repository at
  `/path/to/sytest` instead of pulling from GitHub. This is useful when you want
  to speed things up or make modifications to SyTest.
@ -58,6 +65,7 @@ When debugging, the following Docker `run` options may also be useful:
The docker command also supports a single positional argument for the test file to
run, so you can run a single `.pl` file rather than the whole test suite. For example:

```
docker run --rm --name sytest -v "/Users/kegan/github/sytest:/sytest"
-v "/Users/kegan/github/dendrite:/src" -v "/Users/kegan/logs:/logs"
@ -118,7 +126,7 @@ POSTGRES=1 ./run-tests.pl -I Dendrite::Monolith -d ../dendrite/bin -W ../dendrit
where `tee` lets you see the results while they're being piped to the file, and
`POSTGRES=1` enables testing with PostgreSQL. If the `POSTGRES` environment
variable is not set or is set to 0, SyTest will fall back to SQLite 3. For more
flags and options, see <https://github.com/matrix-org/sytest#running>.

Once the tests are complete, run the helper script to see if you need to add
any newly passing test names to `sytest-whitelist` in the project's root

Binary file not shown.


View File

@ -1,5 +1,11 @@
---
title: OpenTracing
has_children: true
parent: Development
permalink: /development/opentracing
---

# OpenTracing

Dendrite extensively uses the [opentracing.io](http://opentracing.io) framework
to trace work across the different logical components.

@ -23,7 +29,6 @@ This is useful to see where the time is being spent processing a request on a
component. However, opentracing allows tracking of spans across components. This
makes it possible to see exactly what work goes into processing a request:

```
Component 1 |<─────────────────── HTTP ────────────────────>|
                   |<──────────────── RPC ─────────────────>|
@ -39,7 +44,6 @@ deserialized span as the parent).
A collection of spans that are related is called a trace.

Spans are passed through the code via contexts, rather than manually. It is
therefore important that all spans that are created are immediately added to the
current context. Thankfully the opentracing library gives helper functions for
@ -53,11 +57,11 @@ defer span.Finish()
This will create a new span, adding any span already in `ctx` as a parent to the
new span.

Adding Information
------------------

Opentracing allows adding information to a trace via three mechanisms:

- "tags" ─ A span can be tagged with a key/value pair. This is typically
  information that relates to the span, e.g. for spans created for incoming HTTP
  requests could include the request path and response codes as tags, spans for
@ -69,12 +73,10 @@ Opentracing allows adding information to a trace via three mechanisms:
  inspecting the traces, but can be used to add context to logs or tags in child
  spans.

See
[specification.md](https://github.com/opentracing/specification/blob/master/specification.md)
for some of the common tags and log fields used.

Span Relationships
------------------
@ -86,7 +88,6 @@ A second relation type is `followsFrom`, where the parent has no dependence on
the child span. This usually indicates some sort of fire and forget behaviour,
e.g. adding a message to a pipeline or inserting into a kafka topic.

Jaeger
------
@ -99,6 +100,7 @@ giving a UI for viewing and interacting with traces.
To enable jaeger a `Tracer` object must be instantiated from the config (as well
as having a jaeger server running somewhere, usually locally). A `Tracer` does
several things:

- Decides which traces to save and send to the server. There are multiple
  schemes for doing this, with a simple example being to save a certain fraction
  of traces.

View File

@ -1,14 +1,20 @@
---
title: Setup
parent: OpenTracing
grand_parent: Development
permalink: /development/opentracing/setup
---

# OpenTracing Setup

Dendrite uses [Jaeger](https://www.jaegertracing.io/) for tracing between microservices.
Tracing shows the nesting of logical spans which provides visibility on how the microservices interact.
This document explains how to set up Jaeger locally on a single machine.

## Set up the Jaeger backend

The [easiest way](https://www.jaegertracing.io/docs/1.18/getting-started/) is to use the all-in-one Docker image:

```
$ docker run -d --name jaeger \
   -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
@ -23,9 +29,10 @@ $ docker run -d --name jaeger \
   jaegertracing/all-in-one:1.18
```

## Configuring Dendrite to talk to Jaeger

Modify your config to look like: (this will send every single span to Jaeger which will be slow on large instances, but for local testing it's fine)

```
tracing:
  enabled: true
@ -40,10 +47,11 @@ tracing:
```

then run the monolith server with `--api true` to use polylith components which do tracing spans:

```
./dendrite-monolith-server --tls-cert server.crt --tls-key server.key --config dendrite.yaml --api true
```

## Checking traces

Visit <http://localhost:16686> to see traces under `DendriteMonolith`.