    Monitor Kubernetes

    Monitoring the state of the database is crucial for identifying and reacting to performance issues in a timely manner. The Percona Monitoring and Management (PMM) solution enables you to do just that.

    However, the database state also depends on the state of the Kubernetes cluster itself, so it is important to collect metrics that reflect the health of the Kubernetes cluster.

    This document describes how to set up monitoring of the Kubernetes cluster health. This setup has been tested with the PMM server as the centralized data storage and the Victoria Metrics Operator as the metrics collector. These steps may also apply if you use another Prometheus-compatible storage.

    Considerations

    1. In this setup we use the Victoria Metrics Kubernetes monitoring stack Helm chart. When customizing the chart's values, consider the following:

      • Since the PMM server is used as the metrics storage, there is no need to store the data in Victoria Metrics as well. Therefore, set the vmsingle.enabled and vmcluster.enabled parameters in the Victoria Metrics Helm chart to false.
      • The Prometheus node exporter is not installed by default since it requires privileged containers with access to the host file system. If you need the metrics for Nodes, enable the Prometheus node exporter by setting the prometheus-node-exporter.enabled flag in the Victoria Metrics Helm chart to true.
      • Check all the role-based access control (RBAC) rules of the victoria-metrics-k8s-stack chart and its dependency charts, and modify them based on your requirements.
    2. This setup assumes a 1:1 mapping between a Kubernetes cluster and a PMM server. If you wish to monitor more than one Kubernetes cluster with a single PMM server, provide a unique cluster ID for each victoria-metrics-k8s-stack installation (see the example after this list). The dashboards must support filtering per Kubernetes cluster, and you also need to properly relabel the metrics on the backend.
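
    For example, if you later install the chart (as described further in this document) into two different Kubernetes clusters that report to the same PMM server, you could override the cluster ID at install time instead of editing values.yaml separately for each cluster. This is only a sketch; the release name, namespace, and cluster IDs are placeholders:

      $ # cluster A
      $ helm install vm-k8s vm/victoria-metrics-k8s-stack -f values.yaml \
          --set vmagent.spec.externalLabels.k8s_cluster_id=cluster-a -n <namespace>
      $ # cluster B
      $ helm install vm-k8s vm/victoria-metrics-k8s-stack -f values.yaml \
          --set vmagent.spec.externalLabels.k8s_cluster_id=cluster-b -n <namespace>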

    Prerequisites

    To set up monitoring of Kubernetes, you need the following:

    1. A PMM Server up and running. You can run PMM Server as a Docker image, a virtual appliance, or on an AWS instance. Refer to the official PMM documentation for the installation instructions.

    2. Helm v3.

    3. kubectl.
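
    You can quickly confirm that the command-line tools are available before you proceed (an illustrative check; the reported versions depend on your installation):

      $ helm version
      $ kubectl version --client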

    Procedure

    Set up authentication in PMM Server

    To access the PMM Server resources and perform actions on the server, configure authentication.

    1. Get the PMM API key. The key must have the role “Admin”.

      Generate the PMM API key

      You can query your PMM Server installation for the API key using the curl and jq utilities. Replace the <login>:<password>@<server_host> placeholders with your real PMM Server login, password, and hostname in the following command:

      $ API_KEY=$(curl --insecure -X POST -H "Content-Type: application/json" -d '{"name":"operator", "role": "Admin"}' "https://<login>:<password>@<server_host>/graph/api/auth/keys" | jq .key)
      

      Note

      The API key is not rotated.

    2. Encode the API key with base64.

      On Linux:

      $ echo -n <API-key> | base64 --wrap=0

      On macOS:

      $ echo -n <API-key> | base64

    3. Create the Namespace where you want to set up monitoring. The following command creates the Namespace monitoring-system. You can specify a different name. In the subsequent steps, replace the <namespace> placeholder with your namespace.

      $ kubectl create namespace monitoring-system
      
    4. Create the YAML file for the Kubernetes Secret and specify the base64-encoded API key value within. Let’s name this file pmm-api-vmoperator.yaml.

      pmm-api-vmoperator.yaml
      apiVersion: v1
      data:
        api_key: <base-64-encoded-API-key>
      kind: Secret
      metadata:
        name: pmm-token-vmoperator
        #namespace: default
      type: Opaque
      
    5. Create the Secret object using the YAML file you created previously. If you named the file differently, specify your file name instead.

      $ kubectl apply -f pmm-api-vmoperator.yaml -n <namespace>
      
    6. Check that the Secret is created. The following command checks the Secret for the resource named pmm-token-vmoperator (as defined in the metadata.name option in the Secret file). If you defined another resource name, specify your value.

    $ kubectl get secret pmm-token-vmoperator -n <namespace>
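
    As an alternative to encoding the key and writing the Secret manifest manually (steps 2-5), you could let kubectl create the Secret and encode the value for you, and then verify what is stored. This optional sketch assumes the same Secret name and key as above:

      $ kubectl create secret generic pmm-token-vmoperator \
          --from-literal=api_key=<API-key> -n <namespace>
      $ kubectl get secret pmm-token-vmoperator -n <namespace> \
          -o jsonpath='{.data.api_key}' | base64 --decode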
    

    Create a ConfigMap to mount for kube-state-metrics

    kube-state-metrics (KSM) is a simple service that listens to the Kubernetes API server and generates metrics about the state of various objects: Pods, Deployments, Services, and Custom Resources.

    To define what metrics kube-state-metrics should capture, create a ConfigMap and mount it into the container.
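
    The ConfigMap carries a kube-state-metrics custom resource state configuration. The example configmap.yaml referenced in the next step contains the full configuration for Percona Operators; the snippet below is only a simplified illustration of the configuration format, and the group, version, kind, and metric path in it are assumptions made for the example. Note that the values.yaml shown later mounts this ConfigMap so that its data key named config appears as /go/src/k8s.io/kube-state-metrics/config inside the container.

      kind: CustomResourceStateMetrics
      spec:
        resources:
          - groupVersionKind:
              group: ps.percona.com        # Custom Resource API group (assumption)
              version: v1alpha1            # API version (assumption)
              kind: PerconaServerMySQL
            metrics:
              - name: "ps_status"
                help: "State of the PerconaServerMySQL Custom Resource"
                each:
                  type: StateSet
                  stateSet:
                    labelName: state
                    path: [status, state]
                    list: [initializing, ready, error]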

    1. Edit the example configmap.yaml configuration file and specify the <namespace>. The Namespace must match the Namespace where you created the Secret.

    2. Apply the configuration:

      $ kubectl apply -f <github-link> -n <namespace>
      

      As a result, you have the customresource-config-ksm ConfigMap created.
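
      To confirm that the ConfigMap exists and to inspect its contents, you can run the following optional check:

      $ kubectl get configmap customresource-config-ksm -n <namespace>
      $ kubectl describe configmap customresource-config-ksm -n <namespace>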

    Install the Victoria Metrics Operator Helm chart

    1. Add the dependency repositories of the victoria-metrics-k8s-stack chart.

      $ helm repo add grafana https://grafana.github.io/helm-charts
      $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      
    2. Add the Victoria Metrics Operator repository.

      $ helm repo add vm https://victoriametrics.github.io/helm-charts/
      
    3. Update the repositories.

      $ helm repo update
      
    4. Export the default values of the victoria-metrics-k8s-stack chart to the values.yaml file.

      $ helm show values vm/victoria-metrics-k8s-stack > values.yaml
      
    5. Edit the values.yaml file and specify the following:

      • the IP address / hostname of the PMM server in the externalVM.write.url option
      • a unique name or ID of the Kubernetes cluster in the vmagent.spec.externalLabels.k8s_cluster_id option. Make sure to set different values if you send metrics from multiple Kubernetes clusters to the same PMM Server.
      • the vmsingle.enabled and vmcluster.enabled options set to false
      • the alertmanager.enabled and vmalert.enabled options set to false
      • the configuration for scraping the Custom Resources related to the Operator
      values.yaml
      ...
      externalVM:
        read:
          url: ""
          # bearerTokenSecret:
          #   name: dbaas-read-access-token
          #   key: bearerToken
        write:
          url: "<PMM-server-IP>//victoriametrics/api/v1/write"
          bearerTokenSecret:
            name: pmm-token-vmoperator
            key: api_key
      ....
      ....
      vmsingle:
         enabled: false
      .....
      
      vmcluster:
        enabled: false
      ....
      
      alertmanager:
        enabled: false
      ....
      
      vmalert:
        enabled: false
      .....
      
      vmagent:
        spec:
          selectAllByDefault: true
          image:
            tag: v1.91.3
          scrapeInterval: 25s
          externalLabels:
            k8s_cluster_id: <cluster-name>
      ....
      
      kube-state-metrics:
        image:
          tag: "v2.9.2"
        enabled: true
        ## all values for the kube-state-metrics helm chart can be specified here
        ## Customizing the kube-state-metrics installation for scraping Custom Resources related to Percona Operators
        metricLabelsAllowlist:
        - pods=[app.kubernetes.io/component,app.kubernetes.io/instance,app.kubernetes.io/managed-by,app.kubernetes.io/name,app.kubernetes.io/part-of],persistentvolumeclaims=[app.kubernetes.io/component,app.kubernetes.io/instance,app.kubernetes.io/managed-by,app.kubernetes.io/name,app.kubernetes.io/part-of],jobs=[app.kubernetes.io/component,app.kubernetes.io/instance,app.kubernetes.io/managed-by,app.kubernetes.io/name,app.kubernetes.io/part-of]
        extraArgs: 
        - --custom-resource-state-config-file=/go/src/k8s.io/kube-state-metrics/config
        volumeMounts: 
        - mountPath: /go/src/k8s.io/kube-state-metrics/
          name: cr-config
        volumes: 
        - configMap:
            name: customresource-config-ksm
          name: cr-config  
        rbac: 
          extraRules:
          - apiGroups:
            - pxc.percona.com
            resources:
            - perconaxtradbclusters
            - perconaxtradbclusters/status
            - perconaxtradbclusterbackups
            - perconaxtradbclusterbackups/status
            - perconaxtradbclusterrestores
            - perconaxtradbclusterrestores/status
            verbs:
            - get
            - list
            - watch
          - apiGroups:
            - psmdb.percona.com
            resources:
            - perconaservermongodbs
            - perconaservermongodbs/status
            - perconaservermongodbbackups
            - perconaservermongodbbackups/status
            - perconaservermongodbrestores
            - perconaservermongodbrestores/status
            verbs:
            - get
            - list
            - watch
          - apiGroups:
            - pg.percona.com
            resources:
            - perconapgbackups/status
            - perconapgclusters/status
            - perconapgrestores/status
            - perconapgclusters
            - perconapgrestores
            - perconapgbackups
            - pgclusters
            - pgpolicies
            - pgreplicas
            - pgtasks
            verbs:
            - get
            - list
            - watch 
      
      .....
      
      coreDns:
        enabled: false
      

      Note

      The values above are taken from the victoria-metrics-k8s-stack chart version 0.17.5. The fields and default values may differ in newer releases of the chart, so check them if you are using a different version.

    6. Install the Victoria Metrics Operator. The vm-k8s value in the following command is the Release name. You can use a different name. Replace the <namespace> placeholder with your value. The Namespace must be the same as the Namespace for the Secret and ConfigMap.

      $ helm install vm-k8s vm/victoria-metrics-k8s-stack -f values.yaml -n <namespace>
      
    7. Validate the successful installation by checking the Pods.

      $ kubectl get pods -n <namespace>
      
      Sample output
      NAME                                                        READY   STATUS    RESTARTS   AGE
      vm-k8s-grafana-5f6bdb8c7c-d5bw5                             3/3     Running   0          90m
      vm-k8s-kube-state-metrics-57c5977d4f-6jtbj                  1/1     Running   0          81m
      vm-k8s-prometheus-node-exporter-kntfk                       1/1     Running   0          90m
      vm-k8s-prometheus-node-exporter-mjrvj                       1/1     Running   0          90m
      vm-k8s-prometheus-node-exporter-v98c8                       1/1     Running   0          90m
      vm-k8s-victoria-metrics-operator-6b7f4f786d-sctp8           1/1     Running   0          90m
      vmagent-vm-k8s-victoria-metrics-k8s-stack-fbc86c9db-rz8wk   2/2     Running   0          90m    
      

      Which Pods are running depends on the configuration you chose in the values.yaml file used when installing the victoria-metrics-k8s-stack chart.
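
      You can additionally check that the VMAgent Pod has discovered its scrape targets and is shipping metrics without remote-write errors. The Deployment, Service, and container names below are assumptions derived from the sample output above, so adjust them to match your release name; VMAgent serves its status pages on port 8429:

      $ kubectl logs -n <namespace> deployment/vmagent-vm-k8s-victoria-metrics-k8s-stack -c vmagent | tail
      $ kubectl port-forward -n <namespace> svc/vmagent-vm-k8s-victoria-metrics-k8s-stack 8429:8429
      $ curl -s http://127.0.0.1:8429/targets | head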

    Verify metrics capture

    1. Connect to the PMM server.
    2. Click Explore and switch to the Code mode.
    3. Check that the required metrics are captured. Type the following in the Metrics browser drop-down and confirm that matching metrics are listed:

      • cadvisor metrics
      • kubelet metrics
      • kube-state-metrics metrics, which also include Custom Resource metrics for the Operators deployed in your Kubernetes cluster
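
      For example, the following well-known exporter metrics can serve as a quick smoke test; the k8s_cluster_id label value is the one you set in values.yaml:

      container_cpu_usage_seconds_total{k8s_cluster_id="<cluster-name>"}   # cadvisor
      kubelet_running_pods{k8s_cluster_id="<cluster-name>"}                # kubelet
      kube_pod_status_phase{k8s_cluster_id="<cluster-name>"}               # kube-state-metrics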
