To monitor applications, the Security Engine needs access to their logs. DataSources are configured via the acquisition configuration, or specified on the command line when performing cold log analysis.
| Datasource | Description |
|---|---|
| file | single files, glob expressions and .gz files |
| journald | journald via filter |
| cloudwatch | single stream or log group |
| syslog | read logs received via the syslog protocol |
| docker | read logs from docker containers |
| kinesis | read logs from a Kinesis stream |
| kafka | read logs from a Kafka topic |
| windows event log | read logs from the Windows Event Log |
| kubernetes audit | expose a webhook to receive audit logs from a Kubernetes cluster |
| s3 | read logs from an S3 bucket |
Common configuration parameters
These parameters are available in all datasources.

`log_level`: the log level to use in the datasource.

`source`: which type of datasource to use. It is mandatory except for file acquisition.
`transform`: an expression that runs after the acquisition has read one line, and before the line is sent to the parsers.
It allows modifying an event (or generating multiple events from one line) before parsing.
For example, if you acquire logs from a file containing a JSON object on each line, and each object has a
`Records` array with multiple events, you can use the following expression to generate one event per entry in the array:

`map(JsonExtractSlice(evt.Line.Raw, "Records"), ToJsonString(#))`
The expression must return:
- A string: it will replace `evt.Line.Raw` in the event.
- A list of strings: one new event will be generated per element in the list, based on the source event. Each element will replace `evt.Line.Raw` in its copy of the source event.
If the expression returns an error or an invalid type, the event is sent to the parsers unmodified.
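The "one event per array entry" semantics of the example expression can be sketched in plain Python. This is a hypothetical illustration of the behavior, not code from the Security Engine:

```python
import json

def apply_transform(raw_line: str) -> list[str]:
    """Mimic the example transform expression:
    map(JsonExtractSlice(evt.Line.Raw, "Records"), ToJsonString(#))

    Returns one string per entry in the Records array; each string
    would replace evt.Line.Raw in a copy of the source event.
    """
    records = json.loads(raw_line).get("Records", [])
    return [json.dumps(record) for record in records]

# A single raw line holding two events in its Records array...
line = '{"Records": [{"id": 1}, {"id": 2}]}'
# ...is expanded into two events before parsing.
print(apply_transform(line))
```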
`labels`: a map of labels to add to the event.
The `type` label is mandatory, and is used by the Security Engine to choose which parser to use.
Acquisition configuration example
The `type` fields are necessary to dispatch the log lines to the right parser.
Also note the `---` between each datasource: it is needed to separate multiple YAML documents (one per datasource) in a single file.
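A minimal sketch of an acquisition file with two datasources, assuming the file and journald datasources described above; the log paths, systemd unit, and `type` values are illustrative:

```yaml
source: file
filenames:
  - /var/log/nginx/*.log
labels:
  type: nginx
---
source: journald
journalctl_filter:
  - _SYSTEMD_UNIT=ssh.service
labels:
  type: syslog
```

Each document sets its own `source` and `labels.type`, so lines from each datasource are dispatched to the matching parser.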