Compare commits

...

2 commits

Author | SHA1 | Message | Date
jeff | 34095f1dd2 | Update diy-tunnel/public/simple/README.md | 2024-11-02 20:16:24 +00:00
Jeff Clement | aed48ffdf8 | many new containers. work in progress | 2024-11-02 14:09:56 -06:00
31 changed files with 1042 additions and 1 deletion

@@ -4,6 +4,6 @@ This repository is a collection of Docker Compose examples for various self-host
To make services easier to manage (backup, restore, move to a different host, etc.), they typically use bind-mounts to store data in a subdirectory (typically `./data`) alongside the `docker-compose.yml` and `.env` files, rather than Docker Volumes.
Similarly, many examples here rely on Cloudflare Tunnels for publicly exposing services, and Tailscale for internal services. This allows them to be run within homelabs without poking holes in firewalls, or needing to pay for a VPS.
In most cases, minimal changes should be required to the `docker-compose.yml` file, while the bulk of the changes will be to the `.env` file.

diy-tunnel/README.md (new file, 41 lines)

@@ -0,0 +1,41 @@
# DIY Tunnel
I **love** Cloudflare Tunnels and routinely use them to expose my self-hosted services to the Internet. However, there are a couple of limitations that make them less than ideal for some use-cases.
1. They work best with HTTP/HTTPS services. Other services like SSH / rando-TCP-services require the *clients* to run `cloudflared`. This prevents running public-facing non-HTTP services via Cloudflare Tunnels.
2. Cloudflare is the *man-in-the-middle*. They manage the TLS certificates and, were they evil, they could inspect traffic.
This folder has some sample wireguard configuration to allow a cheap cloud-VPS to forward traffic through a wireguard tunnel to a private server.
* The VPS does not own/manage TLS certificates or any data.
* It supports any TCP service.
* The connection is made from the private server to the public VPS, so only the VPS requires a static IP. The private server can hide behind a VPN and move networks with impunity.
* Through some network trickery, packets are forwarded from the VPS to the private server. The implication is that the private server sees the actual source IPs for the traffic which allows things like fail2ban to work appropriately.
Requirements:
1. One public facing machine (like a VPS) with a static IPv4 address
2. One private machine running your services
3. Wireguard installed on each (`apt install wireguard` for you Debian/Ubuntu folks)
Steps:
1. Generate keys for both the public and private server.
1. `wg genkey | tee privatekey | wg pubkey > publickey`
2. Copy `private/wg0.conf` to `/etc/wireguard/wg0.conf` on your private server.
    * Update the ports `80,443` to be whatever ports you want to pass through.
    * Add the private key for the private server, and the public key for the public server.
    * Update the public IP for the public server (replace all `999.999.999.999` with your VPS IP).
3. Copy `public/???/wg0.conf` to `/etc/wireguard/wg0.conf` on your public server.
    * Update the ports `80,443` to be whatever ports you want to pass through.
    * Add the private key for the public server, and the public key for the private server.
4. Start wireguard on each machine: `wg-quick up wg0`
5. Enable wireguard on boot on each machine: `sudo systemctl enable wg-quick@wg0`
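If you set this up often, the placeholder substitution in steps 2-3 can be scripted. A minimal sketch, assuming the `###...###` marker style used in the sample configs (the helper name and the exact marker strings are illustrative, not part of the repo):

```python
def fill_placeholders(template: str, values: dict) -> str:
    """Replace each ###...### placeholder marker with its value."""
    for marker, value in values.items():
        template = template.replace(marker, value)
    return template

# Usage sketch: keys come from step 1 (`wg genkey` / `wg pubkey`)
# conf = fill_placeholders(open("public/simple/wg0.conf").read(), {
#     "###PRIVATE KEY FOR PUBLIC SERVER####": open("privatekey").read().strip(),
#     "###PUBLIC KEY FOR PRIVATE SERVER####": open("publickey").read().strip(),
# })
```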
## Testing:
1. From the public server, ping the private server's VPN IP: `ping 10.0.0.2`
2. From the private server, ping the public server's VPN IP: `ping 10.0.0.1`
3. Run a webserver on the private server...
1. From public server: `curl http://10.0.0.2` should work
2. From workstation: `curl http://publicIP` should work
3. From the private server: `curl http://publicIP` should also work
That's about it.
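The reachability checks above can also be scripted from a workstation; a small sketch (the hosts and ports are placeholders for your own values):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tcp_reachable("10.0.0.2", 80) from the public server,
# or tcp_reachable("203.0.113.10", 443) from a workstation
```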

@@ -0,0 +1 @@
`wg0.conf` should be deployed to `/etc/wireguard/wg0.conf` on the public facing (VPS) server.

@@ -0,0 +1,15 @@
[Interface]
Address = 10.0.0.1/24 # Private IP for the VPS in the VPN network
ListenPort = 51820 # Default WireGuard port
PrivateKey = ###PRIVATE KEY FOR PUBLIC SERVER####
# packet forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# port forwarding (HTTP, HTTPS) - update port list as required
PreUp = iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 10.0.0.2
[Peer]
PublicKey = ###PUBLIC KEY FOR PRIVATE SERVER####
AllowedIPs = 10.0.0.2/32 # IP of the home server in VPN

@@ -0,0 +1 @@
Another nice option: instead of installing wireguard on the bare machine, we can fire it up within our existing `docker-compose.yml` and easily expose services from a set of Docker containers.

@@ -0,0 +1,27 @@
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    hostname: THEPRIVATESERVER
    cap_add:
      - NET_ADMIN
    environment:
      - TZ=America/Edmonton
    volumes:
      - ./wg0.conf:/config/wg_confs/wg0.conf
    restart: always
    sysctls:
      - net.ipv4.ip_forward=1

  caddy:
    image: caddy:latest
    restart: always
    # this is the special sauce. This attaches this container to the
    # network context of the wireguard container. Essentially this means
    # that Caddy is listening on 10.0.0.2 now.
    # If you have other containers exposing additional ports, do the same to them.
    network_mode: service:wireguard
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile # Mount Caddyfile for configuration
      - ./webroot:/srv/www # Mount local www directory to container
      - ./data/caddy:/data/caddy # Persistent storage for certificates

@@ -0,0 +1,13 @@
[Interface]
Address = 10.0.0.2/24 # Private IP for the home server in the VPN network
PrivateKey = #### PRIVATE KEY OF PRIVATE SERVER ####
Table = 123
PreUp = ip rule add from 10.0.0.2 table 123 priority 1
PostDown = ip rule del from 10.0.0.2 table 123 priority 1
[Peer]
PublicKey = #### PUBLIC KEY OF PUBLIC SERVER ####
AllowedIPs = 0.0.0.0/0
Endpoint = 999.999.999.999:51820
PersistentKeepalive = 25

@@ -0,0 +1,3 @@
Copy `wg0.conf` to `/etc/wireguard/wg0.conf`
This example is for wireguard running on the private server and forwarding traffic to local services AND docker services.

@@ -0,0 +1,46 @@
[Interface]
Address = 10.0.0.2/24 # Private IP for the home server in the VPN network
PrivateKey = #### PRIVATE KEY OF PRIVATE SERVER #####
Table = 123
# Enable IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# loose reverse path forwarding validation
PostUp = sysctl -w net.ipv4.conf.wg0.rp_filter=2
# Mark new connections coming in through wg0
PreUp = iptables -t mangle -A PREROUTING -i wg0 -m state --state NEW -j CONNMARK --set-mark 1
PostDown = iptables -t mangle -D PREROUTING -i wg0 -m state --state NEW -j CONNMARK --set-mark 1
# Mark return packets to go out through WireGuard via policy routing
PreUp = iptables -t mangle -A PREROUTING ! -i wg0 -m connmark --mark 1 -j MARK --set-mark 1
PostDown = iptables -t mangle -D PREROUTING ! -i wg0 -m connmark --mark 1 -j MARK --set-mark 1
# Push marked connections back through wg0
PreUp = ip rule add fwmark 1 table 123 priority 456
PostDown = ip rule del fwmark 1 table 123 priority 456
# Route traffic to public IP to self to avoid it hitting the network
PreUp = iptables -t nat -A OUTPUT -d 999.999.999.999 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 127.0.0.1
PostDown = iptables -t nat -D OUTPUT -d 999.999.999.999 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 127.0.0.1
# ==== Firewall ===============================
# Allow our expected traffic
PreUp = iptables -A INPUT -i wg0 -p tcp -m multiport --dports 80,443 -j ACCEPT
PostDown = iptables -D INPUT -i wg0 -p tcp -m multiport --dports 80,443 -j ACCEPT
# And pings
PreUp = iptables -A INPUT -i wg0 -p icmp --icmp-type echo-request -j ACCEPT
PostDown = iptables -D INPUT -i wg0 -p icmp --icmp-type echo-request -j ACCEPT
# Block the rest
PreUp = iptables -A INPUT -i wg0 -j DROP
PostDown = iptables -D INPUT -i wg0 -j DROP
[Peer]
PublicKey = #### PUBLIC KEY OF PUBLIC SERVER #####
AllowedIPs = 0.0.0.0/0
Endpoint = 999.999.999.999:51820
PersistentKeepalive = 25

@@ -0,0 +1,3 @@
Copy `wg0.conf` to `/etc/wireguard/wg0.conf`
This example works well for forwarding traffic to services running directly on the private server. If your services are running in Docker, things get much more complicated because of how Docker handles networking. For that, see the `/public/docker-on-host` example.

@@ -0,0 +1,35 @@
[Interface]
Address = 10.0.0.2/24 # Private IP for the home server in the VPN network
PrivateKey = #### PRIVATE KEY OF PRIVATE SERVER #####
Table = 123
# Enable IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# Return traffic through wireguard
PreUp = ip rule add from 10.0.0.2 table 123 priority 1
PostDown = ip rule del from 10.0.0.2 table 123 priority 1
# Route traffic to public IP to self to avoid it hitting the network
PreUp = iptables -t nat -A OUTPUT -d 999.999.999.999 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 127.0.0.1
PostDown = iptables -t nat -D OUTPUT -d 999.999.999.999 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 127.0.0.1
# ==== Firewall ===============================
# Allow our expected traffic
PreUp = iptables -A INPUT -i wg0 -p tcp -m multiport --dports 80,443 -j ACCEPT
PostDown = iptables -D INPUT -i wg0 -p tcp -m multiport --dports 80,443 -j ACCEPT
# And pings
PreUp = iptables -A INPUT -i wg0 -p icmp --icmp-type echo-request -j ACCEPT
PostDown = iptables -D INPUT -i wg0 -p icmp --icmp-type echo-request -j ACCEPT
# Block the rest
PreUp = iptables -A INPUT -i wg0 -j DROP
PostDown = iptables -D INPUT -i wg0 -j DROP
[Peer]
PublicKey = #### PUBLIC KEY OF PUBLIC SERVER #####
AllowedIPs = 0.0.0.0/0
Endpoint = 999.999.999.999:51820
PersistentKeepalive = 25

forgejo_tailscale/.env (new file, 38 lines)

@@ -0,0 +1,38 @@
FORGEJO_TAG=8
FORGEJO_RUNNER_TAG=4.0.1
# Tailscale authorization key
TS_AUTHKEY=tskey-auth-
# Tailscale tailnet node name
TAILNET_NAME=git
TAILNET_SUFFIX=XXXXXX.ts.net
# Instance Settings
APP_NAME="My Git Server"
FORGEJO_HOSTNAME=${TAILNET_NAME}.${TAILNET_SUFFIX}
DISABLE_REGISTRATION=true
# Database
FORGEJO_DB_PASSWORD= ##REQUIRED##
# Mail
MAIL_ENABLED=true
MAIL_FROM='"Git Server" <noreply@mg.yourdomain.com>'
MAIL_SMTP_USER=forgejo@mg.yourdomain.com
MAIL_SMTP_PASSWD= ##REQUIRED##
MAIL_SMTP_PROTOCOL=smtps
MAIL_SMTP_ADDR=smtp.mailgun.org
MAIL_SMTP_PORT=465
# Initial user
ROOT_USER=admin
ROOT_EMAIL=admin@yourdomain.com
ROOT_PASSWORD= ##REQUIRED##
# Token for runner (generate with `openssl rand -hex 20`)
SHARED_SECRET= ##REQUIRED##
# Runner name / labels
RUNNER_NAME=runner
RUNNER_LABELS='[\"docker:docker://code.forgejo.org/oci/node:20-bookworm\", \"ubuntu-22.04:docker://catthehacker/ubuntu:act-22.04\"]'

@@ -0,0 +1,19 @@
# Forgejo via Tailscale
This example is a quick start to running an instance of the excellent Forgejo git server under Docker.
* Privately exposed on a Tailscale tailnet
* Pre-configured GitHub Actions-style runners
* Mail delivery (assuming Mailgun, but adjustable to any SMTP server)
* Pre-configured (no installation wizard) including admin account
## Steps:
1. Get an auth key from your Tailscale account
2. Copy `docker-compose.yml` and `.env` to a new folder
3. Update variables in `.env`, making sure to generate random values for the secrets marked with `##REQUIRED##`
4. `docker compose up -d`
5. Wait...
6. Wait a bit more
7. Visit `https://git.your-name.ts.net` in your browser and log in with the admin credentials from your `.env` file.
8. Verify settings (i.e. do you want to disable user signups, etc.)
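The `##REQUIRED##` secrets can be generated with `openssl rand -hex 20` as noted in `.env`; a stdlib equivalent, if openssl isn't handy:

```python
import secrets

# 20 random bytes rendered as 40 hex characters,
# the same shape as `openssl rand -hex 20`
shared_secret = secrets.token_hex(20)
print(shared_secret)
```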

@@ -0,0 +1,132 @@
services:
  tailscale:
    hostname: ${TAILNET_NAME}
    image: tailscale/tailscale
    volumes:
      - ./data/tailscale:/var/lib/tailscale
      - ./ts-serve.json:/config/ts-serve.json:ro
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
      - sys_module
    environment:
      TS_AUTHKEY: ${TS_AUTHKEY}
      TS_SERVE_CONFIG: /config/ts-serve.json
      TS_AUTH_ONCE: true
      TS_STATE_DIR: /var/lib/tailscale
      TS_HOST: ${TAILNET_NAME}
    restart: unless-stopped

  server:
    image: codeberg.org/forgejo/forgejo:${FORGEJO_TAG}
    command: >-
      bash -c '
      /bin/s6-svscan /etc/s6 &
      sleep 10 ;
      su -c "forgejo forgejo-cli actions register --secret ${SHARED_SECRET}" git ;
      su -c "forgejo admin user create --admin --username ${ROOT_USER} --password ${ROOT_PASSWORD} --email ${ROOT_EMAIL}" git ;
      sleep infinity
      '
    environment:
      # https://forgejo.org/docs/latest/admin/config-cheat-sheet/
      - RUN_MODE=prod
      - USER_UID=1000
      - USER_GID=1000
      - APP_NAME=${APP_NAME}
      - FORGEJO__server__ROOT_URL=https://${FORGEJO_HOSTNAME}
      # Prevent the installation wizard from running
      - FORGEJO__security__INSTALL_LOCK=true
      # Do we allow new signups?
      - FORGEJO__service__DISABLE_REGISTRATION=${DISABLE_REGISTRATION}
      # DB Setup
      - FORGEJO__database__DB_TYPE=postgres
      - FORGEJO__database__HOST=db:5432
      - FORGEJO__database__NAME=gitea
      - FORGEJO__database__USER=gitea
      - FORGEJO__database__PASSWD=${FORGEJO_DB_PASSWORD}
      # Mail Setup
      - FORGEJO__mailer__ENABLED=${MAIL_ENABLED}
      - FORGEJO__mailer__FROM=${MAIL_FROM}
      - FORGEJO__mailer__PROTOCOL=${MAIL_SMTP_PROTOCOL}
      - FORGEJO__mailer__SMTP_ADDR=${MAIL_SMTP_ADDR}
      - FORGEJO__mailer__SMTP_PORT=${MAIL_SMTP_PORT}
      - FORGEJO__mailer__USER=${MAIL_SMTP_USER}
      - FORGEJO__mailer__PASSWD=${MAIL_SMTP_PASSWD}
      # Get rid of the splash screen and just show a project listing
      # on the homepage
      - FORGEJO__server__LANDING_PAGE=explore
    restart: always
    volumes:
      - ./data/data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - db

  db:
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${FORGEJO_DB_PASSWORD}
      - POSTGRES_DB=gitea
    volumes:
      - ./data/postgres:/var/lib/postgresql/data

  # Runner configuration is fairly complex and uses Docker-in-Docker
  # Pulled from this example:
  # https://code.forgejo.org/forgejo/runner/src/branch/main/examples/docker-compose/compose-forgejo-and-runner.yml
  runner-register:
    image: code.forgejo.org/forgejo/runner:${FORGEJO_RUNNER_TAG}
    links:
      - docker-in-docker
      - server
    environment:
      DOCKER_HOST: tcp://docker-in-docker:2376
    volumes:
      - ./data/runner-data:/data
    user: 0:0
    command: >-
      bash -ec '
      while : ; do
        forgejo-runner create-runner-file --connect --instance http://server:3000 --name ${RUNNER_NAME} --secret ${SHARED_SECRET} && break ;
        sleep 1 ;
      done ;
      sed -i -e "s|\"labels\": null|\"labels\": ${RUNNER_LABELS}|" .runner ;
      forgejo-runner generate-config > config.yml ;
      sed -i -e "s|network: .*|network: host|" config.yml ;
      sed -i -e "s|^  labels: \[\]$$|  labels: ${RUNNER_LABELS}|" config.yml ;
      sed -i -e "s|^  envs:$$|  envs:\n    DOCKER_HOST: tcp://docker:2376\n    DOCKER_TLS_VERIFY: 1\n    DOCKER_CERT_PATH: /certs/client|" config.yml ;
      sed -i -e "s|^  options:|  options: -v /certs/client:/certs/client|" config.yml ;
      sed -i -e "s|  valid_volumes: \[\]$$|  valid_volumes:\n    - /certs/client|" config.yml ;
      chown -R 1000:1000 /data
      '

  runner-daemon:
    image: code.forgejo.org/forgejo/runner:${FORGEJO_RUNNER_TAG}
    links:
      - docker-in-docker
      - server
    environment:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_CERT_PATH: /certs/client
      DOCKER_TLS_VERIFY: "1"
    volumes:
      - ./data/runner-data:/data
      - ./data/docker_certs:/certs
    command: >-
      bash -c '
      while : ; do test -w .runner && forgejo-runner --config config.yml daemon ; sleep 1 ; done
      '

  docker-in-docker:
    image: docker:dind
    hostname: docker # Must set hostname as TLS certificates are only valid for docker or localhost
    privileged: true
    environment:
      DOCKER_TLS_CERTDIR: /certs
      DOCKER_HOST: docker-in-docker
    volumes:
      - ./data/docker_certs:/certs

@@ -0,0 +1,22 @@
{
  "TCP": {
    "22": {
      "TCPForward": "server:22"
    },
    "443": {
      "HTTPS": true
    }
  },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": {
        "/": {
          "Proxy": "http://server:3000"
        }
      }
    }
  },
  "AllowFunnel": {
    "${TS_CERT_DOMAIN}:443": false
  }
}
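The serve config above forwards raw TCP on port 22 to the `server` container (git-over-SSH) and terminates HTTPS on 443, proxying to Forgejo's HTTP port; `AllowFunnel: false` keeps everything tailnet-only. A sketch that generates the same JSON (field names are copied from the file above and assumed to match Tailscale's serve-config schema, which may change between releases):

```python
import json

def serve_config(ssh_target: str, web_target: str) -> dict:
    """Build a Tailscale serve config: raw TCP for SSH, HTTPS proxy for web."""
    return {
        "TCP": {
            "22": {"TCPForward": ssh_target},  # raw TCP passthrough
            "443": {"HTTPS": True},            # terminate TLS on the tailnet node
        },
        "Web": {
            "${TS_CERT_DOMAIN}:443": {
                "Handlers": {"/": {"Proxy": web_target}},
            },
        },
        # keep the service private to the tailnet (no public Funnel)
        "AllowFunnel": {"${TS_CERT_DOMAIN}:443": False},
    }

print(json.dumps(serve_config("server:22", "http://server:3000"), indent=2))
```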

ghost_cloudflare/.env (new file, 13 lines)

@@ -0,0 +1,13 @@
# Token used to authenticate with Cloudflare Tunnel
TUNNEL_TOKEN= ##REQUIRED##
# Password used for MySQL root account
MYSQL_PASSWORD= ##REQUIRED##
# Base URL for the Blog
URL=https://www.yourname.com
# From credentials used for transactional emails through Mailgun
# Note: bulk emails need a separate configuration with a Mailgun API key
MAILGUN_USERNAME=ghost@mg.yourname.com
MAILGUN_PASSWORD= ##REQUIRED##

@@ -0,0 +1,15 @@
# Ghost Blog behind Cloudflare
This example is covered in a fair bit of detail in this blog post:
https://www.straybits.ca/2024/ghost-cloudflare-setup/
Requires:
* Cloudflare Tunnel
* Mailgun for SMTP and bulk (Newsletter) delivery
Steps:
1. Set up your Tunnel (pointing to `http://ghost`)
2. Update parameters in `.env`
3. `docker compose up -d`
4. Head to Starbucks and get your blog on!

@@ -0,0 +1,47 @@
services:
  tunnel:
    image: cloudflare/cloudflared
    command: tunnel --no-autoupdate run
    restart: always
    environment:
      TUNNEL_TOKEN: ${TUNNEL_TOKEN}
    depends_on:
      - ghost
      - caddy

  caddy:
    image: caddy:alpine
    volumes:
      - ./static:/usr/share/caddy/static
    restart: always

  ghost:
    image: ghost:5-alpine
    restart: always
    depends_on:
      - db
    environment:
      # see https://ghost.org/docs/config/#configuration-options
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: ${MYSQL_PASSWORD}
      database__connection__database: ghost
      mail__transport: SMTP
      mail__options__service: Mailgun
      mail__from: ${MAILGUN_USERNAME}
      mail__options__auth__user: ${MAILGUN_USERNAME}
      mail__options__auth__pass: ${MAILGUN_PASSWORD}
      url: ${URL}
      server__port: 80
    volumes:
      - ./data/ghost:/var/lib/ghost/content

  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ./data/db:/var/lib/mysql

mailserver/.env (new file, 7 lines)

@@ -0,0 +1,7 @@
HOSTNAME=mail.yourdomain.com
# Relay outbound mail through Amazon SES
RELAY_HOST=email-smtp.us-west-2.amazonaws.com
RELAY_PORT=2587
RELAY_USER=
RELAY_PASSWORD=

mailserver/Caddyfile (new file, 7 lines)

@@ -0,0 +1,7 @@
{
    email admin@yourdomain.com
}

mail.yourdomain.com {
    reverse_proxy * http://roundcube:80
}

mailserver/README.md (new file, 36 lines)

@@ -0,0 +1,36 @@
# Mailserver Setup
This docker-compose setup fires up a copy of docker-mailserver.
* The SMTP, IMAP, POP3, etc. services are exposed by tunneling traffic from a public-facing VPS
* Outbound mail is sent through Amazon SES
* Optionally, inbound mail can be received through Amazon SES (via an S3 bucket), allowing it to act as a backup/primary MX if you need it.
  * Make sure to update bucket information in `s3-ingest.py`
Steps:
1. Update the parameters in `.env`, `wireguard.conf`, and `Caddyfile`
2. Initially, comment out (in `docker-compose.yml`) the two lines starting with `- ./data/caddy/certificates`. Start the stack once without them so that Caddy can fetch our certificates. Once that happens, uncomment those lines and restart.
3. Set up Mailgun or SES for mail forwarding and enter the relay config in `.env`. SES is pretty easy to work with and supports multiple sending domains with a single set of credentials.
4. Optionally, set up an S3 bucket, configure SES to deliver inbound mail there, then update `s3-ingest.py` and uncomment the lines for mail ingestion in `docker-compose.yml`. This is handy if your VPS/ISP is blocking inbound mail ports.
## Front-end Server Wireguard
This wireguard configuration would be deployed to the public-facing VPS which will forward interesting traffic (25,465,587,993,995,80,443) through to our docker services.
```
[Interface]
Address = 10.0.0.1/24 # Private IP for the VPS in the VPN network
ListenPort = 51820 # Default WireGuard port
PrivateKey = ##PRIVATE KEY FOR PUBLIC SERVER##
# packet forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# port forwarding (HTTP) // repeat for each port
PreUp = iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 25,465,587,993,995,80,443 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp -m multiport --dports 25,465,587,993,995,80,443 -j DNAT --to-destination 10.0.0.2
[Peer]
PublicKey = ##PUBLIC KEY FOR PRIVATE SERVER##
AllowedIPs = 10.0.0.2/32 # IP of the home server in VPN
```

@@ -0,0 +1,8 @@
# Adding `MAILTO=""` prevents cron emailing notifications of the task outcome each run
MAILTO=""
#
# m h dom mon dow user command
#
# Every day at 4:00AM, optimize index files
0 4 * * * root doveadm fts optimize -A
# EOF

mailserver/cron/s3 (new file, 6 lines)

@@ -0,0 +1,6 @@
# Adding `MAILTO=""` prevents cron emailing notifications of the task outcome each run
MAILTO=""
#
# m h dom mon dow user command
* * * * * root /usr/local/bin/s3-ingest >> /var/log/mail/s3-ingest.log 2>&1
# EOF

@@ -0,0 +1,86 @@
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    hostname: ${HOSTNAME}
    cap_add:
      - NET_ADMIN
    environment:
      - TZ=America/Edmonton
    volumes:
      - ./wireguard.conf:/config/wg_confs/wg0.conf
    restart: always
    sysctls:
      - net.ipv4.ip_forward=1

  mailserver:
    image: ghcr.io/docker-mailserver/docker-mailserver:latest
    network_mode: service:wireguard
    volumes:
      - ./data/dms/mail-data/:/var/mail/
      - ./data/dms/mail-state/:/var/mail-state/
      - ./data/dms/mail-logs/:/var/log/mail/
      - ./data/dms/config/:/tmp/docker-mailserver/
      - /etc/localtime:/etc/localtime:ro
      # Enable ingestion from S3
      #- ./s3-ingest.py:/usr/local/bin/s3-ingest:ro
      #- ./cron/s3:/etc/cron.d/s3:ro
      # Enable full text searching
      # https://docker-mailserver.github.io/docker-mailserver/latest/config/advanced/full-text-search/
      - ./fts-xapian-plugin.conf:/etc/dovecot/conf.d/10-plugin.conf:ro
      - ./cron/fts_xapian:/etc/cron.d/fts_xapian:ro
      # when initializing, these need to be commented out because they don't exist
      # until Caddy has had a chance to fetch them.
      - ./data/caddy/certificates/acme.zerossl.com-v2-dv90/${HOSTNAME}/${HOSTNAME}.crt:/etc/letsencrypt/live/${HOSTNAME}/fullchain.pem:ro
      - ./data/caddy/certificates/acme.zerossl.com-v2-dv90/${HOSTNAME}/${HOSTNAME}.key:/etc/letsencrypt/live/${HOSTNAME}/privkey.pem:ro
    environment:
      - ENABLE_RSPAMD=1
      - ENABLE_OPENDMARC=0
      - ENABLE_POLICYD_SPF=0
      - ENABLE_FAIL2BAN=1
      - ENABLE_POSTGREY=1
      - ENABLE_DNSBL=1
      - ENABLE_CLAMAV=1
      - ENABLE_POP3=1
      # We'll leverage certs from Caddy here
      - SSL_TYPE=letsencrypt
      # Assume we can't send outbound mail. Relay sent mail through
      # something like Mailgun or Amazon SES
      - RELAY_HOST=${RELAY_HOST}
      - RELAY_PORT=${RELAY_PORT}
      - RELAY_USER=${RELAY_USER}
      - RELAY_PASSWORD=${RELAY_PASSWORD}
    cap_add:
      - NET_ADMIN # For Fail2Ban to work
    restart: always

  # ========= WEBMAIL =========================================
  # Who doesn't want webmail. Besides we can piggy-back on this
  # to fetch TLS certificates for our IMAP/SMTP services.
  caddy:
    image: caddy:latest
    restart: always
    network_mode: service:wireguard
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile # Mount Caddyfile for configuration
      - ./data/caddy:/data/caddy # Persistent storage for certificates

  roundcube:
    image: roundcube/roundcubemail:latest
    container_name: roundcubemail
    restart: always
    volumes:
      - ./data/roundcube/www:/var/www/html
      - ./data/roundcube/db:/var/roundcube/db
    environment:
      - ROUNDCUBEMAIL_DB_TYPE=sqlite
      - ROUNDCUBEMAIL_SKIN=elastic
      - ROUNDCUBEMAIL_DEFAULT_HOST=tls://${HOSTNAME}
      - ROUNDCUBEMAIL_SMTP_SERVER=tls://${HOSTNAME}

@@ -0,0 +1,28 @@
mail_plugins = $mail_plugins fts fts_xapian

plugin {
    fts = xapian
    fts_xapian = partial=3 full=20 verbose=0
    fts_autoindex = yes
    fts_enforced = yes
    # disable indexing of folders
    fts_autoindex_exclude = \Trash
    # Index attachments
    # fts_decoder = decode2text
}

service indexer-worker {
    # limit size of indexer-worker RAM usage, ex: 512MB, 1GB, 2GB
    vsz_limit = 1GB
}

# service decode2text {
#     executable = script /usr/libexec/dovecot/decode2text.sh
#     user = dovecot
#     unix_listener decode2text {
#         mode = 0666
#     }
# }

mailserver/s3-ingest.py (new executable file, 149 lines)

@@ -0,0 +1,149 @@
#!/bin/env python3
import os
import datetime
import hashlib
import hmac
import http.client
import urllib.parse
import logging
import subprocess
import xml.etree.ElementTree as ET

# AWS S3 configuration
# Would rather these be in environment variables, but CRON doesn't have this.
bucket_name = "MYMAILBUCKET"
prefix = ""
region = 'us-west-2'
access_key = ""
secret_key = ""

# Logging configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S')


def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()


def get_signature_key(key, date_stamp, region_name, service_name):
    k_date = sign(('AWS4' + key).encode('utf-8'), date_stamp)
    k_region = sign(k_date, region_name)
    k_service = sign(k_region, service_name)
    k_signing = sign(k_service, 'aws4_request')
    return k_signing


def create_signed_headers(method, host, uri, params, body=''):
    t = datetime.datetime.utcnow()
    amz_date = t.strftime('%Y%m%dT%H%M%SZ')
    date_stamp = t.strftime('%Y%m%d')

    canonical_uri = uri
    canonical_querystring = '&'.join([f"{urllib.parse.quote_plus(k)}={urllib.parse.quote_plus(v)}" for k, v in params.items()])
    payload_hash = hashlib.sha256(body.encode('utf-8')).hexdigest() if body else hashlib.sha256(b'').hexdigest()

    # Include x-amz-date and x-amz-content-sha256 in canonical headers and signed headers
    canonical_headers = f'host:{host}\n' \
                        f'x-amz-content-sha256:{payload_hash}\n' \
                        f'x-amz-date:{amz_date}\n'
    signed_headers = 'host;x-amz-content-sha256;x-amz-date'

    canonical_request = f"{method}\n{canonical_uri}\n{canonical_querystring}\n{canonical_headers}\n{signed_headers}\n{payload_hash}"

    algorithm = 'AWS4-HMAC-SHA256'
    credential_scope = f'{date_stamp}/{region}/s3/aws4_request'
    string_to_sign = f'{algorithm}\n{amz_date}\n{credential_scope}\n{hashlib.sha256(canonical_request.encode("utf-8")).hexdigest()}'

    signing_key = get_signature_key(secret_key, date_stamp, region, 's3')
    signature = hmac.new(signing_key, string_to_sign.encode('utf-8'), hashlib.sha256).hexdigest()

    authorization_header = (
        f"{algorithm} Credential={access_key}/{credential_scope}, "
        f"SignedHeaders={signed_headers}, Signature={signature}"
    )

    headers = {
        'x-amz-date': amz_date,
        'x-amz-content-sha256': payload_hash,
        'Authorization': authorization_header
    }
    return headers


def make_request(method, uri, params=None, headers=None):
    host = f's3.{region}.amazonaws.com'
    conn = http.client.HTTPSConnection(host)
    if params:
        query_string = urllib.parse.urlencode(params)
        full_uri = f"{uri}?{query_string}"
    else:
        full_uri = uri
    conn.request(method, full_uri, headers=headers)
    response = conn.getresponse()
    data = response.read()
    conn.close()
    return response.status, data


def list_objects():
    uri = f'/{bucket_name}'
    params = {'list-type': '2', 'prefix': prefix}
    headers = create_signed_headers('GET', f's3.{region}.amazonaws.com', uri, params)
    status, response = make_request('GET', uri, params, headers)
    if status == 200:
        return response
    else:
        logging.error(f"Error listing objects: {response}")
        return None


def download_object(key):
    uri = f'/{bucket_name}/{urllib.parse.quote_plus(key)}'
    headers = create_signed_headers('GET', f's3.{region}.amazonaws.com', uri, {})
    status, response = make_request('GET', uri, headers=headers)
    if status == 200:
        return response
    else:
        logging.error(f"Error downloading {key}: {response}")
        return None


def delete_object(key):
    uri = f'/{bucket_name}/{urllib.parse.quote_plus(key)}'
    headers = create_signed_headers('DELETE', f's3.{region}.amazonaws.com', uri, {})
    status, response = make_request('DELETE', uri, headers=headers)
    if status == 204:
        logging.info(f"Deleted {key} from S3")
    else:
        logging.error(f"Error deleting {key}: {response}")


def inject_email(email_content):
    process = subprocess.Popen(['/usr/sbin/sendmail', '-t'], stdin=subprocess.PIPE)
    process.communicate(input=email_content)
    if process.returncode == 0:
        logging.info("Email successfully injected into Postfix")
    else:
        logging.error("Failed to inject email into Postfix")


def main():
    # List all objects with the specified prefix
    xml_content = list_objects()
    if xml_content:
        root = ET.fromstring(xml_content)
        namespace = {'ns': root.tag.split('}')[0].strip('{')}  # Extracts namespace from the root tag
        for contents in root.findall('.//ns:Contents', namespace):
            key = contents.find('ns:Key', namespace).text
            logging.info(f"Processing {key}")
            email_content = download_object(key)
            if email_content:
                inject_email(email_content)
                delete_object(key)


def extract_keys_from_xml(xml_content):
    # Unused helper; parse the XML here rather than relying on an undefined global.
    root = ET.fromstring(xml_content)
    namespace = {'ns': root.tag.split('}')[0].strip('{')}
    return [elem.text for elem in root.findall('.//ns:Key', namespace)]


if __name__ == '__main__':
    main()
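The namespace handling in `main()` is the subtle part: S3's ListObjectsV2 response carries a default XML namespace, so a plain `find('Key')` matches nothing. The trick, isolated with a made-up response body (the sample XML is illustrative, not captured from S3):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Contents><Key>inbox/msg1.eml</Key></Contents>
  <Contents><Key>inbox/msg2.eml</Key></Contents>
</ListBucketResult>"""

def extract_keys(xml_content: str) -> list:
    root = ET.fromstring(xml_content)
    # Root tag looks like "{http://...}ListBucketResult"; peel out the URI
    # so the ns: prefix resolves against the document's default namespace.
    namespace = {"ns": root.tag.split("}")[0].strip("{")}
    return [c.find("ns:Key", namespace).text
            for c in root.findall(".//ns:Contents", namespace)]

print(extract_keys(SAMPLE))  # ['inbox/msg1.eml', 'inbox/msg2.eml']
```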

mailserver/wireguard.conf (new file, 13 lines)

@@ -0,0 +1,13 @@
[Interface]
Address = 10.0.0.2/24 # Private IP for the home server in the VPN network
PrivateKey = ##PRIVATE KEY FOR PRIVATE SERVER##
Table = 123
PreUp = ip rule add from 10.0.0.2 table 123 priority 1
PostDown = ip rule del from 10.0.0.2 table 123 priority 1
[Peer]
PublicKey = ##PUBLIC_KEY_FOR_PUBLIC_SERVER##
AllowedIPs = 0.0.0.0/0
Endpoint = ##PUBLIC_SERVER_IP##:51820
PersistentKeepalive = 25

@@ -0,0 +1,2 @@
# Cloudflare Tunnel Token
TUNNEL_TOKEN= ##REQUIRED##

@@ -0,0 +1,15 @@
# PrivateBin and Cloudflare
I use PrivateBin ALL THE TIME to share temporary code snippets, etc.
Very little setup required.
Requires:
* Cloudflare Tunnel
Steps:
1. Set up your Tunnel (pointing to `http://privatebin:8080`)
2. Set Cloudflare Tunnel token in `.env`
3. Fine-tune settings in `conf.php`
4. `docker compose up -d`

@@ -0,0 +1,196 @@
;<?php http_response_code(403); /*
; config file for PrivateBin
;
; An explanation of each setting can be found online at https://github.com/PrivateBin/PrivateBin/wiki/Configuration.
[main]
; (optional) set a project name to be displayed on the website
name = "My Pastebin"
; The full URL, with the domain name and directories that point to the PrivateBin files
; This URL is essential to allow Opengraph images to be displayed on social networks
; basepath = ""
; enable or disable the discussion feature, defaults to true
discussion = true
; preselect the discussion feature, defaults to false
opendiscussion = false
; enable or disable the password feature, defaults to true
password = true
; enable or disable the file upload feature, defaults to false
fileupload = false
; preselect the burn-after-reading feature, defaults to false
burnafterreadingselected = false
; which display mode to preselect by default, defaults to "plaintext"
; make sure the value exists in [formatter_options]
defaultformatter = "plaintext"
; (optional) set a syntax highlighting theme, as found in css/prettify/
; syntaxhighlightingtheme = "sons-of-obsidian"
; size limit per paste or comment in bytes, defaults to 10 Mebibytes
sizelimit = 10485760
; template to include, default is "bootstrap" (tpl/bootstrap.php)
template = "bootstrap"
; (optional) info text to display
; use single, instead of double quotes for HTML attributes
;info = "More information on the <a href='https://privatebin.info/'>project page</a>."
; (optional) notice to display
; notice = "Note: This is a test service: Data may be deleted anytime. Kittens will die if you abuse this service."
; by default PrivateBin will guess the visitors language based on the browsers
; settings. Optionally you can enable the language selection menu, which uses
; a session cookie to store the choice until the browser is closed.
languageselection = false
; set the language your installs defaults to, defaults to English
; if this is set and language selection is disabled, this will be the only language
; languagedefault = "en"
; (optional) URL shortener address to offer after a new paste is created
; it is suggested to only use this with self-hosted shorteners as this will leak
; the paste's encryption key
; urlshortener = "https://shortener.example.com/api?link="
; (optional) Let users create a QR code for sharing the paste URL with one click.
; It works both when a new paste is created and when you view a paste.
; qrcode = true
; (optional) IP based icons are a weak mechanism to detect if a comment was from
; a different user when the same username was used in a comment. It might be
; used to get the IP of a non-anonymous comment poster if the server salt is
; leaked and a SHA256 HMAC rainbow table is generated for all (relevant) IPs.
; Can be set to one of these values: "none" / "vizhash" / "identicon" (default).
; icon = "none"
; Content Security Policy headers allow a website to restrict what sources are
; allowed to be accessed in its context. You need to change this if you added
; custom scripts from third-party domains to your templates, e.g. tracking
; scripts or run your site behind certain DDoS-protection services.
; Check the documentation at https://content-security-policy.com/
; Notes:
; - If you use a bootstrap theme, you can remove the allow-popups from the
; sandbox restrictions.
; - By default this disallows loading images from third-party servers, e.g. when
; they are embedded in pastes. If you wish to allow that, you can adjust the
; policy here. See https://github.com/PrivateBin/PrivateBin/wiki/FAQ#why-does-not-it-load-embedded-images
; for details.
; - The 'unsafe-eval' is used in two cases: to check if the browser supports
; async functions (displaying an error if not), and to enable WebAssembly
; support in Chrome (used for zlib compression). You can remove it if Chrome
; doesn't need to be supported and old browsers don't need to be warned.
; cspheader = "default-src 'none'; base-uri 'self'; form-action 'none'; manifest-src 'self'; connect-src * blob:; script-src 'self' 'unsafe-eval'; style-src 'self'; font-src 'self'; frame-ancestors 'none'; img-src 'self' data: blob:; media-src blob:; object-src blob:; sandbox allow-same-origin allow-scripts allow-forms allow-popups allow-modals allow-downloads"
; stay compatible with PrivateBin Alpha 0.19, less secure
; if enabled will use base64.js version 1.7 instead of 2.1.9 and sha1 instead of
; sha256 in HMAC for the deletion token
; zerobincompatibility = false
; Enable or disable the warning message when the site is served over an insecure
; connection (insecure HTTP instead of HTTPS), defaults to true.
; Secure transport methods like Tor and I2P domains are automatically whitelisted.
; It is **strongly discouraged** to disable this.
; See https://github.com/PrivateBin/PrivateBin/wiki/FAQ#why-does-it-show-me-an-error-about-an-insecure-connection for more information.
; httpwarning = true
; Pick compression algorithm or disable it. Only applies to pastes/comments
; created after changing the setting.
; Can be set to one of these values: "none" / "zlib" (default).
; compression = "zlib"
[expire]
; expire value that is selected per default
; make sure the value exists in [expire_options]
default = "1week"
[expire_options]
; Set each one of these to the number of seconds in the expiration period,
; or 0 if it should never expire
5min = 300
10min = 600
1hour = 3600
1day = 86400
1week = 604800
; Well this is not *exactly* one month, it's 30 days:
1month = 2592000
1year = 31536000
never = 0
[formatter_options]
; Set available formatters, their order and their labels
plaintext = "Plain Text"
syntaxhighlighting = "Source Code"
markdown = "Markdown"
[traffic]
; time limit between calls from the same IP address in seconds
; Set this to 0 to disable rate limiting.
limit = 10
; (optional) Set IP addresses (v4 or v6) or subnets (CIDR) which are exempted
; from the rate-limit. Invalid IPs will be ignored. If multiple values are to
; be exempted, the list needs to be comma separated. Leave unset to disable
; exemptions.
; exempted = "1.2.3.4,10.10.10/24"
; (optional) If you want only some source IP addresses (v4 or v6) or subnets
; (CIDR) to be allowed to create pastes, set these here. Invalid IPs will be
; ignored. If multiple values are to be set, the list needs to be comma
; separated. Leave unset to allow anyone to create pastes.
; creators = "1.2.3.4,10.10.10/24"
; (optional) if your website runs behind a reverse proxy or load balancer,
; set the HTTP header containing the visitors IP address, i.e. X_FORWARDED_FOR
; header = "X_FORWARDED_FOR"
[purge]
; minimum time limit between two purges of expired pastes; purging is only
; triggered when pastes are created
; Set this to 0 to run a purge every time a paste is created.
limit = 300
; maximum amount of expired pastes to delete in one purge
; Set this to 0 to disable purging. Set it higher if you are running a large
; site.
batchsize = 10
[model]
; name of data model class to load and directory for storage
; the default model "Filesystem" stores everything in the filesystem
class = Filesystem
[model_options]
dir = PATH "data"
;[model]
; example of a Google Cloud Storage configuration
;class = GoogleCloudStorage
;[model_options]
;bucket = "my-private-bin"
;prefix = "pastes"
;[model]
; example of DB configuration for MySQL
;class = Database
;[model_options]
;dsn = "mysql:host=localhost;dbname=privatebin;charset=UTF8"
;tbl = "privatebin_" ; table prefix
;usr = "privatebin"
;pwd = "Z3r0P4ss"
;opt[12] = true ; PDO::ATTR_PERSISTENT
;[model]
; example of DB configuration for SQLite
;class = Database
;[model_options]
;dsn = "sqlite:" PATH "data/db.sq3"
;usr = null
;pwd = null
;opt[12] = true ; PDO::ATTR_PERSISTENT


@ -0,0 +1,17 @@
version: "3.7"
services:
  tunnel:
    image: cloudflare/cloudflared
    command: tunnel --no-autoupdate run
    restart: unless-stopped
    env_file: .env
  privatebin:
    image: privatebin/fs
    restart: always
    volumes:
      - ./data:/srv/data
      - ./conf.php:/srv/cfg/conf.php:ro
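The `tunnel` service above loads its credentials from the adjacent `.env` file via `env_file`. A minimal sketch of that file, assuming a token-based Cloudflare Tunnel (`TUNNEL_TOKEN` is the environment variable `cloudflared tunnel run` reads; the value shown is a placeholder):

```shell
# .env — consumed by the tunnel service via env_file
# cloudflared's `tunnel run` picks up the token from TUNNEL_TOKEN
TUNNEL_TOKEN=eyJh...your-tunnel-token-here...
```

With the token in place, `docker compose up -d` starts both containers and the tunnel connects without any inbound firewall ports.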