[#105] Update self-hosting docs

Radon Rosborough 2021-09-11 16:45:24 -07:00
parent 7d2c7ca0cb
commit ed457484e8
5 changed files with 128 additions and 74 deletions

View File

@ -10,23 +10,25 @@ these instructions.**
## Sign up for accounts
* [AWS](https://aws.amazon.com/)
* [CloudFlare](https://www.cloudflare.com/)
* [Fathom Analytics](https://usefathom.com/) (if you want analytics)
* [GitHub](https://github.com/)
* [Namecheap](https://www.namecheap.com/)
* [PagerDuty](https://www.pagerduty.com/) (if you want alerts)
* [AWS](https://aws.amazon.com/)
* [Namecheap](https://www.namecheap.com/) (or any other DNS provider)
* [CloudFlare](https://www.cloudflare.com/) (optional, for content
delivery optimization)
* [Fathom Analytics](https://usefathom.com/) (optional, for analytics)
* [Grafana Cloud](https://grafana.com/products/cloud/) (optional, for
monitoring)
* [Sentry](https://sentry.io/) (optional, for error tracking)
* [PagerDuty](https://www.pagerduty.com/) (optional, for alerts)
* [UptimeRobot](https://uptimerobot.com/) (optional, for alerts)
* [Statuspage](https://www.atlassian.com/software/statuspage)
(optional, for uptime reporting)
## Configure accounts
### GitHub
Fork the Riju repository under your account.
### PagerDuty
Set up notification rules as desired. Configure the AWS CloudWatch
integration and obtain an integration URL.
### AWS
You need to generate an access key with sufficient permission to apply
@ -45,25 +47,17 @@ S3](https://s3.console.aws.amazon.com/s3/home?region=us-west-1),
select your favorite AWS region, and create a new bucket called
`riju-yourname-tf`.
Finally, if you don't have a personal SSH key, generate one with
`ssh-keygen`, and upload the public key (e.g. `~/.ssh/id_rsa.pub`) as
an [EC2 Key
Pair](https://us-west-1.console.aws.amazon.com/ec2/v2/home?region=us-west-1#KeyPairs:).
Remember the name you use for the key pair.
(Note that there are two S3 buckets in play; one to store Terraform
state and one to store Riju's actual build artifacts. The Terraform
one is created manually in this step; the other one is provisioned via
Terraform, and you choose the name in your `.env` file, as described
in a later step.)
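If you prefer the CLI to the console for creating the state bucket, a rough equivalent is below (a sketch; it assumes the AWS CLI is configured with the access key from above and that `riju-yourname-tf` is the name you picked):
```
aws s3 mb s3://riju-yourname-tf --region us-west-1
```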
### Namecheap
### Namecheap (or any other DNS provider)
Buy a domain name at which to host.
### CloudFlare
Enter your domain name and go through the setup and DNS verification.
Update the nameserver settings on Namecheap's side, and enable all the
fun CloudFlare options you'd like.
### Fathom Analytics
Enter your domain name and get a site ID.
Buy a domain name at which to host (or use a domain or subdomain you
already own). All you need is DNS panel access for creating a CNAME.
## Install dependencies
@ -101,7 +95,6 @@ the values properly:
```
# Packer
ADMIN_PASSWORD=50M9QDBtkQLV6zFAwhVg
FATHOM_SITE_ID=
SUPERVISOR_ACCESS_TOKEN=5ta2qzMFS417dG9gbfcMgjsbDnGMI4
```
@ -110,21 +103,16 @@ SUPERVISOR_ACCESS_TOKEN=5ta2qzMFS417dG9gbfcMgjsbDnGMI4
This will be the `sudo` password for Riju server nodes. Generate one
randomly with `pwgen -s 20 1`.
### FATHOM\_SITE\_ID
This is the site ID from your Fathom Analytics account. If you don't
need analytics, just leave this unset.
### SUPERVISOR\_ACCESS\_TOKEN
This is a static shared secret used for the Riju server's supervisor
API. Generate one randomly with `pwgen -s 30 1`.
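For example, you can generate both secrets and append them to `.env` in one step (a sketch; assumes `pwgen` is installed):
```
{
  echo "ADMIN_PASSWORD=$(pwgen -s 20 1)"
  echo "SUPERVISOR_ACCESS_TOKEN=$(pwgen -s 30 1)"
} >> .env
```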
## Build AMI
## Build web AMI
You'll want to run `make env` to load in the new variables from
`.env`. Now run `make packer`. This will take up to 10 minutes to
build a timestamped AMI with a name like `riju-20210711223158`.
You'll want to run `set -a; . .env` to load in the new variables from
`.env`. Now run `make packer-web`. This will take up to 10 minutes to
build a timestamped AMI with a name like `riju-web-20210711223158`.
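Concretely, the sequence looks like this (the AMI listing at the end is an optional sanity check and assumes the AWS CLI is configured):
```
set -a; . .env
make packer-web
# optionally confirm the new AMI was registered:
aws ec2 describe-images --owners self \
  --filters 'Name=name,Values=riju-web-*' \
  --query 'Images[].Name' --output text
```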
## Create local configuration (part 2 of 3)
@ -132,14 +120,14 @@ Add to `.env` the following contents:
```
# Terraform
AMI_NAME=riju-20210711223158
AMI_NAME=riju-web-20210711223158
AWS_REGION=us-west-1
S3_BUCKET=yourname-riju
```
### AMI\_NAME
This is the AMI name from the Packer build.
This is the AMI name from `make packer-web`.
### AWS\_REGION
@ -159,23 +147,28 @@ This is the name of the S3 bucket that will be used to store Riju
build artifacts (aside from Docker images). It needs to be globally
unique, so `yourname-riju` is a good choice.
### SSH\_KEY\_NAME
This is the name of the EC2 Key Pair you created in the AWS console.
You'll use it to connect to the development server.
## Set up Terraform infrastructure
Run `make env` again to load in the new variables from `.env`.
Run `set -a; . .env` again to load in the new variables from `.env`.
Now run `terraform init` and fill in the appropriate region and bucket
name for the Terraform state bucket you created in the AWS console.
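If you prefer to skip the interactive prompts, the same values can be passed on the command line (a sketch; substitute the bucket name and region you actually used):
```
terraform init \
  -backend-config="bucket=riju-yourname-tf" \
  -backend-config="region=us-west-1"
```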
At this point you can run `terraform apply` to create all the needed
infrastructure. Caution! At this point you probably want to go to the
EC2 console and stop the dev server. It is very expensive and will
rack up a few hundred dollars a month of compute. You should only have
it running when you're actively working on Riju.
infrastructure.
*Note: when updating `.env` configuration that affects the web AMI,
follow these steps:*
1. Update `.env` and make sure it is sourced (`set -a; . .env`).
2. Run `make packer-web` and get the name of the new AMI.
3. Update it in `.env` under `AMI_NAME` and make sure the update is
sourced (`set -a; . .env`).
4. Run `terraform apply`.
5. In the AWS console, scale up the ASG to 2 replicas and wait for the
new instance to become healthy.
6. Scale the ASG back down to 1 replica; the older instance should be
terminated.
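Steps 5 and 6 can also be done from the command line (a sketch; the ASG name here is a placeholder, so take the real one from `terraform output` or the EC2 console):
```
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name riju-server --desired-capacity 2
# wait for the new instance to become healthy, then:
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name riju-server --desired-capacity 1
```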
## Finish AWS configuration
@ -186,9 +179,6 @@ Go back to the AWS console and take care of a few loose ends:
This will make your public registry URL easier to remember.
* In the "View push commands" modal dialogs, take note of the
repository URLs for your public and private Riju ECR repositories.
* If you want alerts, [create an SNS
subscription](https://us-west-1.console.aws.amazon.com/sns/v3/home?region=us-west-1#/subscriptions)
from the Riju SNS topic to the PagerDuty integration URL.
## Create local configuration (part 3 of 3)
@ -211,20 +201,20 @@ This is the URL for your public ECR repository.
## Configure DNS
Obtain the DNS record for Riju's ALB from `terraform output` and
install it as a proxied CNAME record in CloudFlare DNS for your apex
domain. After DNS propagates, you should now be able to receive a 502
from Riju with no body content.
## Set up dev server
The dev server is provisioned with a fresh Ubuntu AMI. You probably
want to clone your repository up there, enable SSH agent forwarding,
etc. Doing a full build on your laptop is feasible, but unless you
have symmetric gigabit ethernet you're not going to get all the build
artifacts uploaded in less than a week.
install it as a CNAME record in your DNS panel. After DNS propagates,
you should now be able to receive a 502 from Riju with no body
content.
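Once the record is in place, you can sanity-check propagation from your machine (a sketch; substitute your own domain):
```
dig +short yourdomain.example
curl -sI http://yourdomain.example | head -n 1  # expect a 502 until the first deploy
```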
## Build and deploy
*(Note: Although it's easy to build Riju locally, you have to be able
to upload the finished build artifacts to ECR, which amount to about
40 GB of data transfer. If you don't have a symmetric Internet plan at
home, you may need to do this on an EC2 instance instead. You can
provision one manually with at least 256 GB of disk space, install
Docker, clone down Riju, copy over your `.env` file, and proceed as if
you were running locally.)*
Invoke Depgraph:
```
@ -239,7 +229,7 @@ console and running `mssh admin@i-theinstanceid`. Then you can check
(using the previously configured admin password) `sudo journalctl -efu
riju` to see the supervisor logs.
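For example (the instance ID is a placeholder; take the real one from the EC2 console):
```
mssh admin@i-0123456789abcdef0
sudo journalctl -efu riju  # prompts for the admin password
```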
## Set up CI
## Set up CI (optional)
In your GitHub repo settings, create the secrets `AWS_ACCESS_KEY_ID`
and `AWS_SECRET_ACCESS_KEY` with the values from `terraform output
@ -251,3 +241,70 @@ You'll also want to go to `.github/workflows/main.yml` and update the
environment variables `AWS_REGION`, `DOCKER_REPO`,
`PUBLIC_DOCKER_REPO`, and `S3_BUCKET` as appropriate for your own
deployment (see the `.env` file you created earlier).
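If you use the GitHub CLI, the secrets can also be set from a terminal inside your fork's checkout (a sketch; the values are whatever `terraform output` printed):
```
gh secret set AWS_ACCESS_KEY_ID --body "<value from terraform output>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<value from terraform output>"
```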
## Set up content delivery caching (optional)
Enter your domain name on CloudFlare and go through the setup and DNS
verification. Update the nameserver settings on Namecheap's side, and
enable all the fun CloudFlare options you'd like.
## Set up analytics (optional)
Sign up for Fathom Analytics, enter your domain name, and get a site
ID. Set this as `FATHOM_SITE_ID` in your `.env` file, and build and
roll out a new web AMI.
## Set up monitoring (optional)
Register a free Grafana Cloud account and get your API key. Set it in
`.env` as `GRAFANA_API_KEY`, and build and roll out a new web AMI.
## Set up error tracking (optional)
Set up a Sentry project and get the DSN. Set it in `.env` as
`SENTRY_DSN_PACKER`, and build and roll out a new web AMI.
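For reference, the three optional integration settings above all live in `.env` alongside the other Packer variables (the values shown are placeholders):
```
# Packer (optional integrations)
FATHOM_SITE_ID=<your Fathom site ID>
GRAFANA_API_KEY=<your Grafana Cloud API key>
SENTRY_DSN_PACKER=<your Sentry DSN>
```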
## Set up alerts (optional)
Set up notification rules as desired in PagerDuty. Configure the AWS
CloudWatch integration and obtain an integration URL. Then, in AWS,
[create an SNS
subscription](https://us-west-1.console.aws.amazon.com/sns/v3/home?region=us-west-1#/subscriptions)
from the Riju SNS topic to the PagerDuty integration URL.
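The same subscription can also be created from the CLI (a sketch; the topic ARN is a placeholder, so take the real one from `terraform output` or the SNS console):
```
aws sns subscribe \
  --topic-arn arn:aws:sns:us-west-1:123456789012:riju \
  --protocol https \
  --notification-endpoint "<PagerDuty integration URL>"
```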
On UptimeRobot, create a monitor for the domain on which you're
serving Riju. Then [follow the steps to integrate UptimeRobot with
PagerDuty](https://uptimerobot.com/integrations/pagerduty/).
## Set up uptime reporting (optional)
Register with Statuspage and create at least one component. Ideally
you want UptimeRobot to send email directly to Statuspage so that the
service status can be reported automatically, [but this doesn't work
out of the
box](https://community.atlassian.com/t5/Statuspage-questions/Uptime-Robot/qaq-p/1469835).
What you have to do, as discussed in the linked forum thread, is set
up a filter in your personal email that will forward the uptime alerts
to the Statuspage email endpoint.
## Set up finance tracking (optional)
Go to the [AWS billing
console](https://us-east-1.console.aws.amazon.com/billing/home?region=us-west-1#/tags)
and activate `BillingCategory` and `BillingSubcategory` as cost
allocation tags. Then, under "Cost & Usage Reports", create a report
for delivery to an S3 bucket, with the following parameters:
* Additional report details: Include resource IDs
* Data refresh settings: Automatically refresh
* Time granularity: Hourly
* Report versioning: Overwrite existing report
* Compression type: GZIP
* File format: text/csv
In `.env`, set `BILLING_REPORTS_URL` to the S3 filepath prefix for
your report. It should look like
`s3://bucketname/reportpathprefix/reportname`. You can now use the
tools in the `financials` subdirectory to generate your own AWS
billing reports. This works best if all other AWS resources in your
account are also tagged with `BillingCategory`.
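For example (the bucket and prefix here are placeholders for whatever you chose when creating the report):
```
BILLING_REPORTS_URL=s3://yourname-riju-billing/reports/riju
```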

View File

@ -70,7 +70,7 @@ yarn
apt-get update
apt-get install -y $(sed 's/#.*//' <<< "${packages}")
pip3 install ec2instanceconnectcli
pip3 install ec2instanceconnectcli poetry
npm install -g prettier

View File

@ -38,6 +38,7 @@ clang
docker-ce-cli
make
nodejs
poetry
yarn
# packaging tools

View File

@ -42,6 +42,11 @@ resource "aws_lb_target_group" "server" {
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path    = "/"
    matcher = "200"
  }
}
resource "aws_lb_listener" "server_http" {

View File

@ -6,12 +6,3 @@ data "aws_ami" "server" {
    values = [data.external.env.result.AMI_NAME]
  }
}

data "aws_ami" "ci" {
  owners = ["self"]
  filter {
    name   = "name"
    values = [data.external.env.result.CI_AMI_NAME]
  }
}