# k3s Kubernetes + ArgoCD + Baseline

* [k3s](https://docs.k3s.io/)
* [ArgoCD](https://argoproj.github.io/cd/)
* [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/)
* [cert-manager](https://cert-manager.io/)
  * selfsigned issuer
  * LetsEncrypt issuers (Prod and Staging)
* [zabbix-proxy](https://git.zabbix.com/projects/ZT/repos/kubernetes-helm/browse?at=refs%2Fheads%2Frelease%2F7.0)
* [keel](https://keel.sh)
* [reloader](https://github.com/stakater/Reloader)

## Run (Deploy k3s + ArgoCD + Baseline)

`ansible-playbook k3s_boostrap.yml -i <host|ip>,`
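
The trailing comma makes Ansible treat the single host as an inline inventory. A minimal invocation against a concrete node could look like this (the IP address and remote user are placeholders):

```bash
# Bootstrap k3s, ArgoCD and the baseline apps on a single node
# (192.0.2.10 and root are example values for the target host and SSH user)
ansible-playbook k3s_boostrap.yml -i 192.0.2.10, -u root
```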

### Get kubeconfig

`cat /etc/rancher/k3s/k3s.yaml`
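
To work with the cluster from a workstation, the kubeconfig can be copied off the node; a rough sketch, assuming SSH access as root and `<cpn.fqdn>` as the reachable server address:

```bash
# Fetch the kubeconfig from the k3s server node
scp root@<cpn.fqdn>:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# The file points at 127.0.0.1 by default; point it at the server instead
sed -i 's/127.0.0.1/<cpn.fqdn>/' ~/.kube/config

# Verify access
kubectl get nodes
```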

### Add Agents

#### Get Agent Token

> The secure token format (occasionally referred to as a "full" token) contains the following parts:
>
> `<prefix><cluster CA hash>::<credentials>`
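
For orientation only, a full token roughly has the following shape (prefix `K10`, then a SHA256 hash of the cluster CA certificate, then the credentials; all values below are made up):

```
K10<sha256 hash of the cluster CA certificate>::server:<password>
```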

Get the existing server token (on the server node):

`cat /var/lib/rancher/k3s/server/token`

Create a new token (on the server node):

`k3s token create`

#### Register Agent/Worker

```bash
export K3S_URL=https://<cpn.fqdn>:6443
export K3S_NODE_NAME=<node.fqdn>
export K3S_TOKEN=<full-token>
curl -sfL https://get.k3s.io | sh -s -
```
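
After the installer finishes, the agent should register with the server; a quick check from any machine with the kubeconfig from above:

```bash
# The new node should appear and become Ready after a short while
kubectl get nodes -o wide
```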

## Notes

### ArgoCD

To retrieve the initial admin password, run:

`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`

To change the password, follow [argocd account update-password](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_account_update-password/).
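
Alternatively, if the `argocd` CLI is installed, the login and password change can be scripted; a sketch, with `<argocd.fqdn>` as a placeholder for the ArgoCD endpoint:

```bash
# Log in as admin with the initial password from the secret above
argocd login <argocd.fqdn> --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)"

# Prompts for the current and the new password
argocd account update-password
```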

#### Sync Applications with Kubectl

Add to the Application manifest:

```yaml
operation:
  sync:
    syncStrategy:
      hook: {}
```
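
The same trigger can be applied imperatively; a sketch using `kubectl patch`, with `<app-name>` as a placeholder for the Application to sync:

```bash
# Setting the operation field is equivalent to the YAML snippet above
kubectl -n argocd patch application <app-name> --type merge \
  -p '{"operation": {"sync": {"syncStrategy": {"hook": {}}}}}'
```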

### Zabbix Monitoring

See: [infrastructure/zabbix-config - Zabbix Kubernetes Monitoring](https://git.smsvc.net/infrastructure/zabbix-config/src/branch/master/Zabbix-Kubernetes.md)

## Cloud Setups

### Linode

The PROXY protocol needs to be enabled for ingress-nginx to see the client's IP in the ingress logs.

Add the PROXY protocol annotation to the ingress-nginx service:

```yaml
annotations:
  service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v2
```

Update the ingress-nginx ConfigMap to make nginx expect PROXY protocol data:

```yaml
data:
  use-proxy-protocol: "true"
```
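
If ingress-nginx is deployed via its Helm chart (for example through an ArgoCD Application), both settings can also be expressed as chart values; a sketch assuming the upstream ingress-nginx chart layout:

```yaml
# values for the ingress-nginx Helm chart (sketch)
controller:
  service:
    annotations:
      service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v2
  config:
    # rendered into the ingress-nginx ConfigMap
    use-proxy-protocol: "true"
```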

#### cert-manager

> However, when you have the PROXY protocol enabled, the external load balancer does modify the traffic, prepending the PROXY line before each TCP connection. If you connect directly to the web server internally, bypassing the external load balancer, then it will receive traffic without the PROXY line.
>
> This is particularly a problem when using cert-manager for provisioning SSL certificates.

After enabling the PROXY protocol, cert-manager is unable to perform its self-check ("propagation check failed", "failed to perform self check GET request").

[hairpin-proxy](https://github.com/compumike/hairpin-proxy) adds PROXY protocol support for internal-to-LoadBalancer traffic for Kubernetes Ingress users, specifically for cert-manager self-checks (no further configuration needed).