Container engine service

  • This feature is not displayed on the Web interface unless a container engine license has been requested for the management platform. To use this feature, purchase the related license as described in H3C Software Products Remote Licensing Guide.

  • The container engine service is available only when the container service is enabled. For more information about container configuration, see "Configure container settings."

  • The container engine service requires the operator admin role. Do not disable this role.

  • ARM hosts do not support the container engine service.

The container engine consolidates compute, network, and storage, allowing you to create a highly available, scalable Kubernetes cluster. With disaster recovery and autoscaling, the container engine can manage the lifecycle of applications, simplifying cluster management and application O&M.

The container engine service provides workload and container cluster management.

Clusters

A cluster is a set of compute, storage, and network resources that run containerized applications. A cluster contains controller nodes and worker nodes. A controller node schedules applications and decides where each application runs. A worker node runs containerized applications; it is managed by the controller node, monitors and reports application status, and manages the applications as instructed by the controller node.

Figure-1 Cluster structure
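
The node roles described above can also be checked programmatically. The following is a minimal sketch that uses the official Kubernetes Python client to list the nodes in a cluster and classify them as controller or worker nodes; the kubeconfig location and the control-plane node label are assumptions that depend on how the cluster was deployed.

from kubernetes import client, config

# Load credentials from the local kubeconfig file (use load_incluster_config() when
# running inside a Pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# List every node and report whether it is a controller (control-plane) or worker node.
for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    # kubeadm-based clusters label controller nodes this way; other distributions
    # might use a different label.
    role = "controller" if "node-role.kubernetes.io/control-plane" in labels else "worker"
    print(node.metadata.name, role)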

Workloads

A workload is a Kubernetes concept. Workloads provide automated, one-click container management capabilities. Workloads in Kubernetes include the following:

  • Deployment: Manages stateless applications and keeps a specified number of identical Pod replicas running, with support for rolling updates.

  • StatefulSet: Manages stateful applications that require stable identities and persistent storage.

  • DaemonSet: Runs a copy of a Pod on every node (or on selected nodes) in the cluster.

  • Job: Runs a task one or more times until it completes.

  • CronJob: Runs Jobs on a schedule.
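
As an illustration of the automation a workload provides, the following sketch uses the official Kubernetes Python client to create a Deployment that keeps two identical Pod replicas running; the application name, container image, and namespace are examples only, not values required by the container engine.

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# A Deployment keeps the requested number of identical Pods running and replaces
# any Pod that fails, which is the automated container management described above.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)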

Other cluster management capabilities

The container engine also provides namespace management, ConfigMap management, service configuration, storage configuration, and autoscaling functions for Kubernetes cluster management and maintenance.

Applications run in a Kubernetes cluster as containers. Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod runs one or more containers. Managing a Kubernetes cluster is, in essence, managing the containers that run in it.

Figure-3 Kubernetes capabilities
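
Because cluster management comes down to the Pods and containers described above, a quick inventory of them can be taken through the Kubernetes API. The following is a sketch using the official Kubernetes Python client; it only reads cluster state and assumes kubeconfig access to the cluster.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# List every Pod in the cluster and the containers it runs.
for pod in v1.list_pod_for_all_namespaces().items:
    containers = [c.name for c in pod.spec.containers]
    print(pod.metadata.namespace, pod.metadata.name, containers)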

Service

A service exposes an application running on a set of Pods as a network service. In Kubernetes, each Pod gets its own IP address, but that IP address might change when the Pod restarts. Kubernetes therefore assigns each service a stable IP address (the cluster IP) and maps the service to its Pods. To access a container, a user only needs to access the cluster IP, regardless of whether the backing Pods change.

The following are the service types:

  • ClusterIP: Exposes the service on an IP address that is reachable only from within the cluster. This is the default service type.

  • NodePort: Exposes the service on a static port on each node so that it can be accessed from outside the cluster through the node IP and port.

  • LoadBalancer: Exposes the service externally through a load balancer.
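
For reference, the following sketch uses the official Kubernetes Python client to create a ClusterIP service that maps port 80 to the Pods labeled app=web-demo; the label, port, and names carry over from the workload example earlier in this section and are illustrative only.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Expose the Pods selected by the app=web-demo label behind a stable cluster IP.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",
        selector={"app": "web-demo"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
created = v1.create_namespaced_service(namespace="default", body=service)

# The cluster IP remains stable even if the backing Pods are restarted or rescheduled.
print(created.spec.cluster_ip)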

Storage

PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) in Kubernetes provide storage space for clusters. A PV is a piece of storage provisioned in the cluster, and a PVC is an application's request for that storage; Kubernetes binds each PVC to a matching PV.
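
As a sketch of how an application requests that storage, the following uses the official Kubernetes Python client to create a PVC; the claim name, size, and storage class are placeholders that depend on the storage configured for the cluster.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Request 1 GiB of storage; Kubernetes binds the claim to a matching PersistentVolume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
        storage_class_name="standard",  # placeholder; use a storage class that exists in the cluster
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)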

Configuration

In Kubernetes, the configuration system consists of ConfigMaps and Secrets, allowing you to decouple configuration from your container images and keep containerized applications portable. With this configuration system, you can also manage and maintain configuration information in a unified manner.
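
The following sketch shows the decoupling idea with the official Kubernetes Python client: settings are stored in a ConfigMap and a Secret and then injected into a container as environment variables instead of being baked into the image. The object names, keys, and values are illustrative only.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Non-sensitive settings go into a ConfigMap; sensitive values go into a Secret.
v1.create_namespaced_config_map(
    namespace="default",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="app-config"),
        data={"LOG_LEVEL": "info"},
    ),
)
v1.create_namespaced_secret(
    namespace="default",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="app-secret"),
        string_data={"DB_PASSWORD": "changeme"},
    ),
)

# A container can then reference the stored values without changing its image:
env = [
    client.V1EnvVar(
        name="LOG_LEVEL",
        value_from=client.V1EnvVarSource(
            config_map_key_ref=client.V1ConfigMapKeySelector(name="app-config", key="LOG_LEVEL")
        ),
    ),
    client.V1EnvVar(
        name="DB_PASSWORD",
        value_from=client.V1EnvVarSource(
            secret_key_ref=client.V1SecretKeySelector(name="app-secret", key="DB_PASSWORD")
        ),
    ),
]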

Namespace

Namespaces are a way to isolate applications. Applications in different namespaces cannot access each other.
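
A namespace is itself an object in the cluster. The following sketch creates one with the official Kubernetes Python client; the namespace name is an example.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Create a namespace; objects created in it are isolated from other namespaces.
v1.create_namespace(
    body=client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)

# Workloads and services are then created per namespace, for example:
# client.AppsV1Api().create_namespaced_deployment(namespace="team-a", body=deployment)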

Autoscaling

When the CPU or memory usage reaches the specified threshold, the cluster scales up automatically so that the load is shared across the nodes in the cluster, until the number of nodes in the cluster reaches the upper limit.
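
In stock Kubernetes, threshold-driven scaling of workload replicas is expressed as a HorizontalPodAutoscaler. The following is a sketch only, built with the official Kubernetes Python client; it assumes the web-demo Deployment used as an example earlier in this section, a 70% CPU threshold, and a replica range of 2 to 5, none of which are values prescribed by the container engine.

from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Scale the web-demo Deployment between 2 and 5 replicas, adding replicas whenever
# the average CPU utilization of its Pods exceeds 70%.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-demo"
        ),
        min_replicas=2,
        max_replicas=5,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)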