Configure endpoints

Learn how to configure endpoints for the StackRox Kubernetes Security Platform by using a YAML configuration file.


Beginning with the StackRox Kubernetes Security Platform version 3.0.40, you can use a YAML configuration file to configure exposed endpoints. You can use this configuration file to define one or more endpoints for the StackRox Kubernetes Security Platform, customize the TLS settings for each endpoint, or disable TLS for specific endpoints. You can also define whether client authentication is required and which client certificates to accept.

Custom YAML configuration

The StackRox Kubernetes Security Platform reads this YAML configuration from a ConfigMap object, which makes the configuration easier to change and manage.

When you use the custom configuration file, you can configure the following for each endpoint:

  • The protocols to use (HTTP, gRPC, or both).
  • Whether to enable or disable TLS.
  • The server certificates to serve.
  • The client certificate authorities (CAs) to trust for client authentication.
  • Whether client certificate authentication (mTLS) is required.

You can use the configuration file to specify endpoints either during the installation or on an existing deployment. However, if you expose any additional ports (other than the default port 8443), you must create network policies that allow traffic on these ports.

Following is a sample endpoints.yaml configuration file for the StackRox Kubernetes Security Platform.

# Sample endpoints.yaml configuration for StackRox Central.
#
# # CAREFUL: If the following line is uncommented, Central does not expose the default endpoint on port 8443.
# #          This will break normal operation.
# disableDefault: true # if true, don't serve on :8443
endpoints:
  # Serve plaintext HTTP only on port 8080
  - listen: ":8080"
    # Backend protocols, possible values are 'http' and 'grpc'. If unset or empty, assume both.
    protocols:
      - http
    tls:
      # Disable TLS. If this is not specified, assume TLS is enabled.
      disable: true
  # Serve HTTP and gRPC for sensors only on port 8444
  - listen: ":8444"
    tls:
      # Which TLS certificates to serve, possible values are 'service' (StackRox-generated service certificates)
      # and 'default' (user-configured default TLS certificate). If unset or empty, assume both.
      serverCerts:
        - default
        - service
      # Client authentication settings.
      clientAuth:
        # Enforce TLS client authentication. If unset, do not enforce, only request certificates
        # opportunistically.
        required: true
        # Which TLS client CAs to accept, possible values are 'service' (CA for StackRox-generated service
        # certificates) and 'user' (CAs for PKI auth providers). If unset or empty, assume both.
        certAuthorities: # if not set, assume ["user", "service"]
          - service

Following is the reference for the available keys you can configure in the YAML configuration file.

  • disableDefault: Use true to disable exposure on the default port number 8443. The default value is false; changing it to true might break existing functionality.
  • endpoints: A list of additional endpoints for exposing Central. Each element in the list has the following options:
    • listen (required): The address and port number on which to listen. You can use the format port, :port, or address:port to specify values. For example,
      • 8080 or :8080 - listen on port 8080 on all interfaces.
      • 0.0.0.0:8080 - listen on port 8080 on all IPv4 (not IPv6) interfaces.
      • 127.0.0.1:8080 - listen on port 8080 on the local loopback device only.
    • protocols: Protocols to use for the specified endpoint. Acceptable values are http and grpc. If you don’t specify a value, Central serves both HTTP and gRPC traffic on the specified port. If you want to expose an endpoint exclusively for the StackRox portal, use http. However, you won’t be able to use that endpoint for service-to-service communication or for the roxctl CLI, because these clients require both gRPC and HTTP. We recommend that you don’t specify a value for this key, so that both HTTP and gRPC are enabled for the endpoint. If you want to restrict an endpoint to StackRox services only, use the clientAuth option instead.
    • tls: Use it to specify the TLS settings for the endpoint. If you don’t specify a value, the StackRox Kubernetes Security Platform enables TLS with the default settings for all the following nested keys.
      • disable: Use true to disable TLS on the specified endpoint. The default value is false. When you set it to true, you can’t specify values for serverCerts and clientAuth.
      • serverCerts: Specify a list of sources from which to configure server TLS certificates. Acceptable values are:
        • default: use the already configured custom TLS certificate (if any).
        • service: use the internal service certificate that the StackRox Kubernetes Security Platform generates.
        The serverCerts list is order-dependent: the first item in the list determines the certificate that Central uses by default when there is no matching SNI (Server Name Indication). You can use this to specify multiple certificates, and Central automatically selects the right certificate based on SNI.
      • clientAuth: Use it to configure the behavior of the TLS-enabled endpoint’s client certificate authentication.
        • certAuthorities: A list of Certificate Authorities (CA) to verify client certificates. Acceptable values are:
          • service: CA for StackRox-generated service certificates.
          • user: CAs configured by PKI authentication providers.
          The default value is ["service", "user"]. The certAuthorities list is order-independent: the position of the items in this list doesn’t matter. Also, setting it to an empty list ([]) disables client certificate authentication for the endpoint, which is different from leaving this value unset.
        • required: Use true to allow only clients with a valid client certificate. The default value is false. You can use true in conjunction with a certAuthorities setting of ["service"] to allow only StackRox Kubernetes Security Platform services to connect to this endpoint, as shown in the sketch after this list.
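
For example, the following minimal sketch illustrates the two clientAuth configurations described above: one endpoint that explicitly disables client certificate authentication, and one that only StackRox services can connect to. The port numbers 8445 and 8446 are arbitrary examples and are not part of the sample configuration earlier on this page.

endpoints:
  # TLS stays enabled, but client certificate authentication is explicitly
  # disabled by setting certAuthorities to an empty list (different from leaving it unset).
  - listen: ":8445"
    tls:
      clientAuth:
        certAuthorities: []
  # Restricted endpoint: client certificates are required and only the
  # StackRox-generated service CA is trusted, so only StackRox services can connect.
  - listen: ":8446"
    tls:
      clientAuth:
        required: true
        certAuthorities:
          - service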

If you are exposing the StackRox Kubernetes Security Platform over plaintext HTTP in version 3.0.39 or older, you might be using the ROX_PLAINTEXT_ENDPOINTS option, for example:

ROX_PLAINTEXT_ENDPOINTS="8080, grpc@:8081, http @ 0.0.0.0:8082"

You can also specify the same values by using the following YAML configuration:

endpoints:
  - listen: "8080"
    tls:
      disable: true
  - listen: ":8081"
    protocols: ["grpc"]
    tls:
      disable: true
  - listen: "0.0.0.0:8082"
    protocols: ["http"]
    tls:
      disable: true

During installation

When you install the StackRox Kubernetes Security Platform by using the roxctl CLI, it creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy StackRox Central.

After you generate the central-bundle:

  1. Open the ./central-bundle/central/02-endpoints-config.yaml file.
  2. In this file, add your custom YAML configuration under the data: section as the value of the endpoints.yaml key, as shown in the sketch after this list. Make sure that you maintain 4-space indentation for the YAML configuration.
  3. Continue the installation instructions as usual. The StackRox Kubernetes Security Platform uses the specified configuration.
  4. If you expose any additional ports (other than the default port 8443), you must create network policies that allow traffic on these ports.
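
For reference, after step 2 the ./central-bundle/central/02-endpoints-config.yaml file might look similar to the following sketch. The ConfigMap name central-endpoints and the stackrox namespace match the commands used later on this page; other metadata in your generated file may differ, and the endpoint shown (plaintext HTTP on port 8080) is only an example taken from the sample configuration above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: central-endpoints
  namespace: stackrox
data:
  endpoints.yaml: |
    # Custom endpoint configuration goes here, indented by 4 spaces.
    endpoints:
      - listen: ":8080"
        protocols:
          - http
        tls:
          disable: true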

On an existing StackRox deployment

To use a custom endpoint configuration on an existing deployment:

  1. Download the existing ConfigMap:

    kubectl -n stackrox get cm/central-endpoints -o go-template='{{index .data "endpoints.yaml"}}' > <directory-path>/central-endpoints.yaml

    On OpenShift, use oc instead of kubectl:

    oc -n stackrox get cm/central-endpoints -o go-template='{{index .data "endpoints.yaml"}}' > <directory-path>/central-endpoints.yaml
  2. In the downloaded central-endpoints.yaml file, specify your custom YAML configuration.

  3. Upload and apply the modified central-endpoints.yaml configuration file.

    kubectl -n stackrox create cm central-endpoints --from-file=endpoints.yaml=<directory-path>/central-endpoints.yaml -o yaml --dry-run | \
    kubectl label -f - --local -o yaml app.kubernetes.io/name=stackrox | \
    kubectl apply -f -

    On OpenShift:

    oc -n stackrox create cm central-endpoints --from-file=endpoints.yaml=<directory-path>/central-endpoints.yaml -o yaml --dry-run | \
    oc label -f - --local -o yaml app.kubernetes.io/name=stackrox | \
    oc apply -f -
  4. Use one of the following commands to restart Central:

    • Wait at least 1 minute for Kubernetes to propagate your changes, then run the following command to restart the Central container:

      kubectl -n stackrox exec deploy/central -c central -- kill 1

      On OpenShift:

      oc -n stackrox exec deploy/central -c central -- kill 1
    • Or, if you don’t want to wait, run the following command to delete the pod:

      kubectl -n stackrox delete pod -lapp=central

      On OpenShift:

      oc -n stackrox delete pod -lapp=central
  5. If you expose any additional ports (other than the default port 8443), you must create network policies that allow traffic on these ports.

Enable traffic flow through custom ports

  • If you’re exposing a port by using a LoadBalancer service, you might want to allow traffic from all sources (including external). To do this:

    1. Clone the allow-ext-to-central Kubernetes network policy.

      kubectl -n stackrox get networkpolicy.networking.k8s.io/allow-ext-to-central -o yaml > <directory-path>/allow-ext-to-central-custom-port.yaml

      On OpenShift:

      oc -n stackrox get networkpolicy.networking.k8s.io/allow-ext-to-central -o yaml > <directory-path>/allow-ext-to-central-custom-port.yaml
    2. Use it as a reference to create your own network policy, and in that policy, specify the port number you want to expose (see the sketch after this list). Make sure to change the name of your network policy (in the metadata section of the YAML file) so that it doesn’t interfere with the built-in allow-ext-to-central policy.

  • If you’re exposing a port to another service running in the same cluster or to an ingress controller, you must allow traffic only from the services in your cluster or from the ingress controller’s proxy. For details on setting network policies, read the blog post Kubernetes Network Policies: A Detailed Security Guide.
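
For reference, a network policy that allows traffic from all sources to a hypothetical custom port 8444, modeled on the cloned allow-ext-to-central policy, might look similar to the following sketch. The pod selector label app: central is an assumption based on the pod label used in the restart commands above; copy the actual selector from the policy you cloned.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  # A distinct name so it doesn't interfere with the built-in allow-ext-to-central policy.
  name: allow-ext-to-central-custom-port
  namespace: stackrox
spec:
  podSelector:
    matchLabels:
      app: central  # assumption: use the selector from the cloned policy
  policyTypes:
    - Ingress
  ingress:
    # No "from" clause, so traffic to this port is allowed from all sources.
    - ports:
        - port: 8444
          protocol: TCP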
