In Display name, enter a human-readable description for the destination. The name can't be longer than 255 characters.

In Endpoint, enter the HTTP Event Collector URL to the Splunk endpoint where you want to send your logs, in the <protocol>://<host>:<port>/<endpoint> format. The URL can't be longer than 1000 characters. DataStream 2 supports only Splunk HEC URLs for raw events, so make sure your endpoint URL is in the supported format (/collector/raw) and does not contain the acknowledgement ID returned by Splunk. Entering endpoint URLs ending with /collector, /collector/event, or /collector/ack will result in an error.

In Event collector token, enter the HEC token you created and enabled in Splunk.

If you want to send compressed gzip logs to this destination, check Send compressed data.

Optionally, click Additional options to add mTLS certificates for additional authentication:

- TLS hostname. Hostname matching the Subject Alternative Names (SANs) present in the SSL certificate for the endpoint URL. If not provided, DataStream 2 fetches the hostname from the URL.
- CA certificate. CA certificate in the PEM format that you want to use to verify the origin server's certificate. DataStream requires a CA certificate if you provide a self-signed certificate or a certificate signed by an unknown authority.
- Client certificate. Client certificate in the PEM format that you want to use to authenticate requests to your destination.
- Client key. Client key you want to use to authenticate to the backend server, in the PEM (non-encrypted PKCS8) format.

If you want to use mutual authentication, provide both the client certificate and the client key. When enabling mTLS authentication for this destination, set requireClientCert to true in Splunk if you want the endpoint to require certificate authentication when receiving log data. See Configure indexers to use a signed SSL certificate in the Splunk documentation.

Optionally, go to Custom header and provide the details of the custom header for the log file. If your destination accepts only requests with certain headers, enter the Custom header name and Custom header value. The custom header name can contain alphanumeric, dash, and underscore characters. DataStream 2 does not support custom header values containing: Akamai (allowed if using an Akamaized hostname as destination). You can use this feature for Splunk indexer acknowledgements passed as the X-Splunk-Request-Channel header. See Channels and sending data in the Splunk documentation for details.

Click Validate & Save to validate the connection to the destination and save the details you provided. As part of this validation process, the system uses the provided credentials to push a sample request to the provided endpoint to validate the write access. If you chose the Structured log format, the sample data appears in the 0,access_validation format. For JSON logs, the data appears in the corresponding JSON format. You can see the data only if the destination validates and you can access the destination storage.

This destination also supports using Akamaized hostnames as endpoints to send DataStream 2 logs for improved security. When you create a property with a Splunk endpoint URL as hostname, this property acts as a proxy between the destination and DataStream. That means only IP addresses that belong to your Akamaized property hostname can send logs to your custom destination, and you can filter incoming traffic to your destination endpoint by IP address using the Origin IP Access List behavior. Using Akamaized hostnames as endpoints also requires enabling the Allow POST behavior in your property. Once the property hostname works as a destination endpoint, you cannot monitor it as a property in this or another stream.
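As a rough illustration, the endpoint-URL and custom-header rules described above can be sketched in Python. The function names, token, and channel values here are placeholders for illustration, not part of any DataStream or Splunk API; only the /collector/raw path rule, the 1000-character limit, and the X-Splunk-Request-Channel header name come from the text above.

```python
from urllib.parse import urlsplit

def is_valid_hec_endpoint(url: str) -> bool:
    """Check a Splunk HEC URL against the rules above: at most 1000
    characters, and pointing at the raw-events endpoint (/collector/raw),
    not /collector, /collector/event, or /collector/ack."""
    if len(url) > 1000:
        return False
    path = urlsplit(url).path.rstrip("/")
    return path.endswith("/collector/raw")

def build_headers(hec_token: str, channel: str = "") -> dict:
    """Headers a log push to the destination would carry: the HEC token,
    plus the optional custom header used for indexer acknowledgements."""
    headers = {"Authorization": f"Splunk {hec_token}"}
    if channel:
        # Custom header for Splunk indexer acknowledgements.
        headers["X-Splunk-Request-Channel"] = channel
    return headers
```

An actual log push would POST the raw log lines to the validated URL with these headers; the mTLS client certificate and key would be configured on the HTTP client itself rather than in the headers.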