CrowdSec Security Engine Setup Health-Check
Welcome to the interactive health check of your CrowdSec setup. We'll guide you through a series of tests to ensure that your security stack is fully functional and ready to protect your services: detection, threat sharing, and remediation. This guide covers protecting common services such as web servers (HTTP) and SSH.
We'll first test the final functionality of each component (top-down approach) before diving into detailed troubleshooting if issues arise.
This health check is divided into three main sections:
- 🔎 Detection: Ensuring CrowdSec properly detects threats targeting your services.
- 🌐 Connectivity: Verifying communication with the CrowdSec network to receive the community blocklist.
- 🛡️ Protection: Confirming that your bouncers automatically block threats detected by CrowdSec.
🔎 Detection checks
Trigger CrowdSec's test scenarios
Let's use CrowdSec's built-in dummy scenarios (HTTP and Linux) to safely verify your Security Engine detects threats, without risking accidental self-blocking.
🌐 HTTP detection test
We'll trigger the dummy scenario crowdsecurity/http-generic-test by accessing a probe path on your web server.
1️⃣ Access your service URL with this path: /crowdsec-test-NtktlJHV4TfBSK3wvlhiOBnl
curl -I https://<your-service-url>/crowdsec-test-NtktlJHV4TfBSK3wvlhiOBnl
2️⃣ Confirm the alert has triggered for the scenario crowdsecurity/http-generic-test
- On Host
- Docker
- Kubernetes
sudo cscli alerts list -s crowdsecurity/http-generic-test
docker exec crowdsec cscli alerts list -s crowdsecurity/http-generic-test
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- cscli alerts list -s crowdsecurity/http-generic-test
Notes:
- ⚠️ Important: Requests from private IP addresses won't trigger alerts (private IPs are whitelisted by default).
- If testing from localhost or your internal network (192.168.x.x, 10.x.x.x, 172.16.x.x–172.31.x.x), the test will fail.
- Solution: Test from an external device with a public IP address, for example via a browser on your phone using mobile data.
- This scenario can be triggered again only after a 5-minute delay.
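The private-IP caveat above can be checked before running the test. A minimal sketch (POSIX shell; the helper name and sample address are our own, and the RFC 1918 ranges are hardcoded):

```shell
# is_private_ip: succeeds (exit 0) if the IPv4 address is in an RFC 1918
# private range, i.e. it would be whitelisted by CrowdSec's defaults.
is_private_ip() {
  case "$1" in
    10.*|192.168.*)                         return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *)                                      return 1 ;;
  esac
}

ip="203.0.113.7"   # replace with: ip=$(curl -s api.ipify.org)
if is_private_ip "$ip"; then
  echo "$ip is private: the detection test will NOT trigger an alert"
else
  echo "$ip is public: OK to run the detection test"
fi
```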
🔐 SSH detection test
We'll trigger the dummy scenario crowdsecurity/ssh-generic-test by attempting an SSH login with a specific username.
1️⃣ Attempt an SSH login using this username: crowdsec-test-NtktlJHV4TfBSK3wvlhiOBnl
ssh crowdsec-test-NtktlJHV4TfBSK3wvlhiOBnl@<your-server-ip>
2️⃣ Confirm the alert has triggered for the scenario crowdsecurity/ssh-generic-test
- On Host
- Docker
- Kubernetes
sudo cscli alerts list -s crowdsecurity/ssh-generic-test
docker exec crowdsec cscli alerts list -s crowdsecurity/ssh-generic-test
It's uncommon to have to deal with this scenario in Kubernetes, but if you do:
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- cscli alerts list -s crowdsecurity/ssh-generic-test
Notes:
- This scenario can only be triggered again after a 5-minute delay.
🛡️ AppSec detection test - CrowdSec WAF
If you've enabled an AppSec-capable bouncer with the CrowdSec WAF and the Virtual Patching collection, you can trigger the crowdsecurity/appsec-generic-test dummy scenario. It would already have triggered along with the HTTP detection test, but it is worth verifying here as well.
We'll trigger the dummy scenario crowdsecurity/appsec-generic-test by accessing a probe path on your web server.
1️⃣ Access your service URL with this path: /crowdsec-test-NtktlJHV4TfBSK3wvlhiOBnl
curl -I https://<your-service-url>/crowdsec-test-NtktlJHV4TfBSK3wvlhiOBnl
2️⃣ Confirm the alert has triggered for the scenario crowdsecurity/appsec-generic-test
- On Host
- Docker
- Kubernetes
sudo cscli alerts list -s crowdsecurity/appsec-generic-test
docker exec crowdsec cscli alerts list -s crowdsecurity/appsec-generic-test
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- cscli alerts list -s crowdsecurity/appsec-generic-test
Notes:
- This scenario can only be triggered again after a 1-minute delay.
Were all the tests successful?
Were all the tests related to your setup successful? 🎉 If so, you can proceed to the next phase of the health check: Connectivity checks.
🛠️ If not, check the troubleshooting section below.
🔍 Detection Troubleshooting
No alert triggered? Let's find out why.
If you installed CrowdSec on the same host as the service you're protecting, it should have auto-detected it and installed the right collections of parsers and scenarios. However, if you're using custom log paths, unusual log formats, or running in Docker/Kubernetes, you might need to configure some things manually.
This section will help you pinpoint the issue and walk you through how to fix it. CrowdSec needs to know what logs to read and how to interpret them. We'll look at the security engine metrics to see if logs are being read and if what's read is parsed correctly.
📊 Are your logs being properly read and parsed?
This is handled by the acquisition configuration (log sources) and parsing (how to read them). Multiple log sources can be defined in the acquisition configuration files, and they support diverse datasources (files, syslog, etc.). For more details you can refer to the datasources documentation.
We'll check both using the cscli metrics command:
- On Host
- Docker
- Kubernetes
sudo cscli metrics show acquisition parsers
docker exec crowdsec cscli metrics show acquisition parsers
for i in $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=agent -o name); do kubectl exec -n crowdsec -it $i -- cscli metrics show acquisition parsers; done
Under Acquisition Metrics you should see your service's log sources with lines being read. Under Parsers Metrics you have the details of the parsers used, with parsed and unparsed line counts.
🚨 If this check fails, don't worry -- the results will point you to the right area to troubleshoot:
🔥 Acquisition Troubleshooting -- are your logs properly declared as datasources?
CrowdSec needs to know where to read your logs. The configuration varies by deployment method:
- On Host
- Docker
- Kubernetes
On Host: the acquisition configuration is usually found in acquis.yaml or in files under acquis.d/ inside the CrowdSec config directory. On a Debian-like OS it is typically located in /etc/crowdsec/.
Docker: logs must be accessible to the container through volumes. Example if your service logs are in /var/log on the host or in a logs shared volume:
volumes:
  - /var/log:/var/log:ro # Example for mounting logs as read-only
  - logs:/logs:ro # Example for shared log volume between containers
The acquis.yaml or acquis.d/*.yaml files should reference paths inside the container. To check your acquisition config:
docker exec crowdsec cat /etc/crowdsec/acquis.yaml # or acquis.d/*.yaml
Kubernetes: CrowdSec reads logs from /var/log/containers, which is mounted into the agent pods by the Helm chart. Configuration is done in your Helm values file:
agent:
  acquisition:
    - namespace: your-namespace
      podName: your-pod-*
      program: nginx # Reference used by the FILTER function of your installed parsers
Common issues:
- The namespace and podName must match your workloads (check with kubectl get pods -n <namespace>)
- The program field must match the FILTER of your installed parser (nginx, traefik, apache, etc.)
- Set container_runtime: containerd or container_runtime: docker in values.yaml to match your cluster
Note: unlike standalone deployments, you use program: instead of type: in Kubernetes acquisitions.
📦 Collection Troubleshooting -- are the right parsers and scenarios installed?
CrowdSec, via its Hub ↗️, uses collections to package the correct parsers and detection scenarios for your services. On regular host installations, CrowdSec usually detects your services (like nginx or ssh) and installs the appropriate collections automatically.
- On Host
- Docker
- Kubernetes
On Host:
🔍 To check what's currently installed:
sudo cscli collections list
You can also list individual parsers and scenarios with:
sudo cscli parsers list
sudo cscli scenarios list
📥 Install missing collections:
sudo cscli collections install crowdsecurity/nginx
sudo systemctl reload crowdsec
Docker: collections must be installed via the COLLECTIONS environment variable.
🔍 To check what's currently installed:
docker exec crowdsec cscli collections list
📥 Install collections:
environment:
  COLLECTIONS: "crowdsecurity/nginx crowdsecurity/linux"
Then restart the container.
Kubernetes: collections must be specified in your Helm values file.
🔍 To check what's currently installed:
for i in $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=agent -o name); do kubectl exec -n crowdsec -it $i -- cscli collections list; done
📥 Install collections: add to your values.yaml:
agent:
  env:
    - name: COLLECTIONS
      value: "crowdsecurity/traefik crowdsecurity/nginx"
Then upgrade your Helm release:
helm upgrade crowdsec crowdsec/crowdsec -n crowdsec -f values.yaml
⚠️ Log format mismatch: if your service uses a custom log format, e.g. log_format custom '$remote_addr - $request - $status'; in nginx, you'll need a custom parser.
⚙️ CrowdSec Service Troubleshooting -- is the CrowdSec service running?
- On Host
- Docker
- Kubernetes
On Host: let's check if the CrowdSec service is active:
sudo systemctl status crowdsec
If the service is not running:
sudo systemctl start crowdsec
sudo systemctl enable crowdsec # Ensure it starts on boot
Check logs for errors:
# Start by checking crowdsec logs
less /var/log/crowdsec.log
# Eventually check systemd journal logs
sudo journalctl -u crowdsec -n 50
Common issues: configuration errors in /etc/crowdsec/config.yaml or in an acquisition file can prevent the service from starting.
Docker: check if the container is running:
docker ps | grep crowdsec
If not running, check container logs:
docker logs crowdsec
Make sure your container starts without errors. Common issues:
- /etc/crowdsec/ and /var/lib/crowdsec/data/ are properly mounted
- /var/lib/crowdsec/data/ must be persisted
- COLLECTIONS and other env vars are set correctly
Check container status:
docker inspect crowdsec
Kubernetes: check if pods are running:
kubectl get pods -n crowdsec
You should see LAPI and agent pods in Running status.
Check pod logs:
# LAPI logs
kubectl logs -n crowdsec -l k8s-app=crowdsec -l type=lapi
# Agent logs
kubectl logs -n crowdsec -l k8s-app=crowdsec -l type=agent
Describe a pod for more details:
kubectl describe pod -n crowdsec <pod-name>
Common issues:
- Check configmaps: kubectl get configmap -n crowdsec
- Check persistent volume claims: kubectl get pvc -n crowdsec
- After changing values, upgrade your Helm release:
helm upgrade crowdsec crowdsec/crowdsec -n crowdsec -f values.yaml
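When a custom log format (like the nginx log_format mentioned in the collection troubleshooting) defeats the stock parsers, a custom parser is needed. The following is only a rough sketch for that hypothetical nginx format; the parser name, grok pattern, and statics are illustrative, so check the CrowdSec parser documentation for the authoritative schema:

```yaml
# Illustrative custom parser for: log_format custom '$remote_addr - $request - $status';
# On a host install this would live under /etc/crowdsec/parsers/s01-parse/.
onsuccess: next_stage
filter: "evt.Parsed.program == 'nginx'"
name: yourname/custom-nginx-logs        # hypothetical name
description: "Parse a custom nginx log format"
grok:
  pattern: '%{IPORHOST:remote_addr} - %{DATA:request} - %{NUMBER:status}'
  apply_on: message
statics:
  - meta: service
    value: http
```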
🌐 CrowdSec Connectivity checks
Check CAPI status
Let's confirm that your Security Engine can communicate with the CrowdSec Central API (CAPI). This connection allows you to:
- Receive Community Blocklists -- curated IPs flagged as malicious by the global CrowdSec network.
- Receive additional Blocklists of your choice among the ones available to you.
- Contribute back -- sharing the malicious IPs that trigger your installed scenarios.
🌐 CrowdSec Central API connectivity test
Check your CAPI connection status:
- On Host
- Docker
- Kubernetes
sudo cscli capi status
docker exec crowdsec cscli capi status
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- cscli capi status
✔️ You should see: INFO You can successfully interact with Central API (CAPI)
Notes:
- On a fresh install, credentials might need to be registered (see troubleshooting below).
- The output also shows information about the connectivity config file path and enrollment status with CrowdSec Console.
Were all the tests successful?
Were all the tests related to your setup successful? 🎉 If so, you can proceed to the next phase of the health check: Remediation checks.
🛠️ If not, check the troubleshooting section below.
🔍 Connectivity Troubleshooting
If the CAPI status check fails, here are the most common issues and solutions:
- On Host
- Docker
- Kubernetes
Common issues:
- Missing credentials: if online_api_credentials.yaml is missing, register again and reload:
sudo cscli capi register
sudo systemctl reload crowdsec
- Firewall blocking: ensure outbound network access (API endpoints, blocklists, etc.). See Network Management for full requirements.
- DNS issues: verify DNS resolution works:
nslookup api.crowdsec.net
- Proxy configuration: if behind a proxy, configure it in /etc/crowdsec/config.yaml
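For the proxy case, CrowdSec, like most Go services, typically also honors the standard HTTP_PROXY/HTTPS_PROXY environment variables. On a systemd host one way to set them is a drop-in override created with sudo systemctl edit crowdsec (a sketch; the proxy URL is a placeholder):

```ini
# /etc/systemd/system/crowdsec.service.d/override.conf (created by systemctl edit)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

Follow with sudo systemctl daemon-reload and sudo systemctl restart crowdsec.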
Common issues:
- No internet from container: ensure the container can reach external networks:
docker exec crowdsec ping -c 3 api.crowdsec.net
- Missing credentials: register if credentials are missing:
docker exec crowdsec cscli capi register
docker restart crowdsec
- Volume not persisted: ensure the /etc/crowdsec/ volume persists the credentials file.
- Network mode: if using custom networks, verify routing and DNS.
- Proxy issues: set HTTP_PROXY and HTTPS_PROXY environment variables if needed.
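For the last point, the proxy variables can be set on the CrowdSec container itself. A docker-compose sketch (the service name, image tag, and proxy URL are placeholders to adapt):

```yaml
services:
  crowdsec:
    image: crowdsecurity/crowdsec:latest
    environment:
      HTTP_PROXY: "http://proxy.example.com:3128"
      HTTPS_PROXY: "http://proxy.example.com:3128"
      NO_PROXY: "localhost,127.0.0.1"
```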
Common issues:
- No external connectivity: test from the pod:
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- ping -c 3 api.crowdsec.net
- NetworkPolicy blocking: check that NetworkPolicies allow egress to api.crowdsec.net
- DNS issues: verify CoreDNS is working correctly
- Proxy configuration: configure the proxy via environment variables in values.yaml:
lapi:
  env:
    - name: HTTP_PROXY
      value: "http://proxy:8080"
    - name: HTTPS_PROXY
      value: "http://proxy:8080"
- PVC not bound: if credentials aren't persisting, check PVC status
- Enrollment key: if using Console enrollment, verify ENROLL_KEY is set correctly in values.yaml
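If a restrictive NetworkPolicy turns out to be the culprit, an explicit egress allowance for the CrowdSec pods may help. A sketch only: the namespace and k8s-app label follow the Helm chart defaults used elsewhere in this guide, so verify them in your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: crowdsec-allow-egress
  namespace: crowdsec
spec:
  podSelector:
    matchLabels:
      k8s-app: crowdsec
  policyTypes:
    - Egress
  egress:
    - {}   # allow all egress; narrow to specific CIDRs/ports if required
```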
✋🏻 Remediation checks
Validate Blocks or Captchas
Now that detection and connectivity are working, let's verify that your bouncers are correctly applying remediation on malicious IPs.
Prerequisite:
To apply remediation with CrowdSec, you'll need a bouncer -- available for firewalls, web servers, reverse proxies, CDNs, cloud WAFs, edge appliances, and more.
✋🏻 Bouncer Remediation test
This test involves manually creating a decision against a public IP of one of your devices for a very short period (1 minute).
1️⃣ Find your public IP:
curl api.ipify.org
2️⃣ Add a ban decision for your IP (valid for 1 minute):
- On Host
- Docker
- Kubernetes
sudo cscli decisions add --ip <your-public-ip> --duration 1m --reason "CrowdSec remediation test"
docker exec crowdsec cscli decisions add --ip <your-public-ip> --duration 1m --reason "CrowdSec remediation test"
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- cscli decisions add --ip <your-public-ip> --duration 1m --reason "CrowdSec remediation test"
⏳ Wait a few seconds to ensure the decision is processed by the bouncer.
3️⃣ Try accessing your service (e.g. website, API) from the same public IP address.
➡️ You should be blocked by the bouncer, which returns a forbidden response (HTTP 403) or a captcha challenge.
4️⃣ Wait for 1 minute, then check the decisions list to confirm the decision has been removed.
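Step 4 can be scripted rather than eyeballed. Below is a generic polling helper of our own; with CrowdSec you would call it as wait_until_gone "sudo cscli decisions list" "CrowdSec remediation test" 90 (the reason string matches the decision added above):

```shell
# wait_until_gone CMD MARKER [TIMEOUT]: re-run CMD every 5s until its output
# no longer contains MARKER, or TIMEOUT seconds have passed.
wait_until_gone() {
  cmd="$1"; marker="$2"; timeout="${3:-90}"; elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    if ! eval "$cmd" | grep -q "$marker"; then
      echo "decision gone after ${elapsed}s"
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "decision still present after ${timeout}s"
  return 1
}
```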
Were all the tests successful?
If you were successfully blocked, congratulations! Your remediation setup is working correctly. 🎉
You might want to continue to the next recommended steps:
- Enroll your Security Engine to the CrowdSec Console
- Then subscribe to more blocklists to benefit from additional proactive prevention
🔍 Remediation Troubleshooting
Before diving into troubleshooting, remember that a remediation component (aka bouncer) is a separate component that connects to the Security Engine and regularly pulls decisions (like bans or captchas) to apply them at its level (firewall, web server, etc.). If remediation isn't working, it's often due to issues in this communication loop.
You can find more information about bouncers in the Bouncers documentation. The full list of available bouncers is available on the CrowdSec Hub ↗️.
Is your bouncer installed and connected to your Security Engine?
- On Host
- Docker
- Kubernetes
On Host: check bouncers linked to your Security Engine:
sudo cscli bouncers list
You should see a recent Last API pull timestamp.
Common issues:
- Bouncer not valid or not pulling: check the authentication in the bouncer config file.
- Bouncer not listed: register it:
sudo cscli bouncers add my-bouncer-name
Copy the token and add it to your bouncer's configuration, then restart the bouncer service.
- Bouncer on a different machine: ensure it can reach the LAPI endpoint (default: http://crowdsec-server:8080).
- Firewall blocking: verify port 8080 is accessible from the bouncer machine.
Docker: check bouncers linked to your Security Engine:
docker exec crowdsec cscli bouncers list
Common issues:
- The bouncer's LAPI URL should be http://crowdsec:8080 (using the container name).
- Register the bouncer key through the CrowdSec container's environment:
environment:
  BOUNCER_KEY_mybouncer: "my-secret-api-key"
- Check connectivity between the containers:
docker exec my-bouncer ping crowdsec
Kubernetes: check bouncers linked to your Security Engine:
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l k8s-app=crowdsec -l type=lapi -o name) -- cscli bouncers list
Common issues:
- The bouncer's LAPI URL should be http://crowdsec-service.crowdsec.svc.cluster.local:8080
- Register the bouncer key in your Helm values:
# Generate an API key with a tool of your choice,
# then fill in values.yaml to dictate the bouncer name and the API key used for communication with the LAPI
# values.yaml
lapi:
  env:
    - name: BOUNCER_KEY_<bouncer-name>
      value: "api-key-you-want-this-bouncer-to-use"
- Verify crowdsec-service is accessible:
kubectl get svc -n crowdsec crowdsec-service
For the Ingress Nginx bouncer, check the controller logs:
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller