Tencent Cloud

Tencent Kubernetes Engine

Native Node Scaling

Last updated: 2024-06-27 11:09:15

Note

The auto-scaling of native nodes is implemented by Tencent Kubernetes Engine (TKE), whereas the auto-scaling of normal nodes relies on Auto Scaling (AS).
If auto-scaling is not enabled for a native node pool:
The number of initialized nodes is specified by the Nodes parameter in the console, or the replicas parameter in the YAML configuration file.
You can manually adjust the number of nodes as needed. However, the number of nodes is limited by the maximum node quota (500 by default) and by the number of IP addresses available in the container subnet.
If auto-scaling is enabled for a native node pool:
The number of initialized nodes is specified by the Nodes parameter in the console, or the replicas parameter in the YAML configuration file.
You must specify the Number of Nodes parameter in the console, or the minReplicas and maxReplicas parameters in the YAML configuration file, to set the range for the number of nodes. Cluster Autoscaler (CA) adjusts the number of nodes in the node pool within this range.
You cannot manually adjust the number of nodes.
Note:
At any given moment, the auto-scaling of a node pool can be controlled by only one role in the console. If auto-scaling is enabled, the number of instances cannot be adjusted manually; to adjust it manually, disable auto-scaling first.
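
As a rough illustration of the note above (a sketch, not TKE's implementation), the effect of enabling auto-scaling is that CA can only settle on a node count inside the configured range; the names below mirror the minReplicas and maxReplicas YAML parameters:

```python
# Illustrative sketch only, not TKE source code: when auto-scaling is
# enabled, CA clamps its desired node count to [minReplicas, maxReplicas].
def ca_target_replicas(desired: int, min_replicas: int, max_replicas: int) -> int:
    """Return CA's effective node count for the node pool."""
    return max(min_replicas, min(desired, max_replicas))
```

For example, with minReplicas set to 10 and maxReplicas set to 100, a demand for 150 nodes still results in at most 100 nodes in the pool.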

Enabling the Auto-scaling Feature for Nodes

Parameter description

Function: Auto Scaling
Parameter: spec.scaling
Description: The auto-scaling feature is enabled by default. If it is enabled for a node pool, CA automatically scales the node pool in or out.

Function: Number of Nodes
Parameters: spec.scaling.minReplicas and spec.scaling.maxReplicas
Valid values: customizable.
Description: The number of nodes in the node pool cannot exceed the specified range. If auto-scaling is enabled for the node pool, the number of native nodes is automatically adjusted within this range.

Function: Scaling policy
Parameter: spec.scaling.createPolicy
Example values: Zone priority in the console (ZonePriority in the YAML configuration file), or Zone equality in the console (ZoneEquality in the YAML configuration file).
Description: If you specify Zone priority, scaling is performed in the preferred zone first; if the preferred zone cannot be scaled, other zones are used. If you specify Zone equality, node instances are distributed as evenly as possible among the zones (subnets) specified in the scaling group; this policy takes effect only if multiple subnets are configured.

Enabling the feature in the TKE console

Method 1: Enabling auto-scaling on the node pool creation page

1. Log in to the TKE console and create a node pool in the cluster. For more information, see Creating Native Nodes.
2. On the Create node pool page, select Enable for Auto-scaling. See the following figure:


Method 2: Enabling auto-scaling on the details page of a node pool

1. Log in to the TKE console and select Cluster in the left sidebar.
2. On the cluster list page, click the ID of the target cluster to go to the details page.
3. Select Node Management > Worker Node in the left sidebar, and click the ID of the target node pool to go to its details page.
4. On the node pool details page, click Edit on the right side of Operation configuration, as shown in the following figure:

5. Check Activate Auto-scaling, and click Confirm to enable auto-scaling.


Enabling the feature by using YAML

Specify the scaling parameter in the YAML configuration file for a node pool.
apiVersion: node.tke.cloud.tencent.com/v1beta1
kind: MachineSet
spec:
  type: Native
  displayName: mstest
  replicas: 2
  autoRepair: true
  deletePolicy: Random
  healthCheckPolicyName: test-all
  instanceTypes:
  - C3.LARGE8
  subnetIDs:
  - subnet-xxxxxxxx
  - subnet-yyyyyyyy
  scaling:
    createPolicy: ZonePriority
    minReplicas: 10
    maxReplicas: 100
  template:
    spec:
      displayName: mtest
      runtimeRootDir: /var/lib/containerd
      unschedulable: false
      ......


Viewing the scaling records

1. Log in to the TKE console and select Cluster in the left sidebar.
2. On the cluster list page, click the ID of the target cluster to go to the details page.
3. Choose Node management > Node pool in the left sidebar to go to the Node pool list page.
4. Click the ID of the target node pool to go to the details page of the node pool.
5. View the scaling records on the Ops records page.

Introduction to Scale-out Principles

This section explains the scale-out principles of native nodes, with examples covering multiple instance models and multiple subnets.

Scenario 1: The scale-out policy is Preferred availability zone first when auto-scaling is enabled

Algorithm:
1. Determine the preferred availability zone based on the order of the subnets.
2. From the configured models, select the one with the highest current inventory for scale-out, and re-check the inventory after each machine is created, so that machines are scaled out in the preferred availability zone whenever possible.
Example:
Assume that the node pool is configured with Models A and B (A has an inventory of 5 units, B has an inventory of 3 units) and Subnets 1/2/3 (the three subnets are in different availability zones, with Subnet 1 preferred). The configured order of models and subnets is taken into account by the algorithm. CA then triggers a scale-out of 10 machines in the node pool. The backend decision process is as follows:
1. Based on the subnet order, identify the subnet of the preferred availability zone: Subnet 1.
2. Check the real-time inventory of all models and scale out one node. Repeat this step.
3. After 8 nodes have been scaled out, no resources are left in Subnet 1; return to Step 1 and switch the preferred availability zone's subnet to Subnet 2.
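
The two steps above can be sketched in Python (an illustrative model, not TKE's actual implementation; zone names, model names, and per-zone inventories are hypothetical):

```python
# Illustrative sketch only (not TKE source code): ZonePriority scale-out.
# Subnet order defines the preferred zone; within a zone, each machine is
# placed on the model with the highest remaining inventory, re-checked
# after every machine, before falling back to the next zone.

def scale_out_zone_priority(zones, inventory, count):
    """zones: availability zones in subnet order (preferred first).
    inventory: {zone: {model: units in stock}}, mutated as machines are created.
    Returns a list of (zone, model) placements."""
    placements = []
    for zone in zones:                          # Step 1: preferred zone first
        while len(placements) < count:
            stock = inventory[zone]
            model = max(stock, key=stock.get)   # Step 2: highest inventory
            if stock[model] == 0:               # zone exhausted: next zone
                break
            stock[model] -= 1                   # inventory re-checked per machine
            placements.append((zone, model))
        if len(placements) == count:
            break
    return placements
```

Assuming the same A/B inventory is available in each zone, a request for 10 machines places 8 in the preferred zone (5 of Model A plus 3 of Model B) and the remaining 2 in the next zone, matching the example above.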

Scenario 2: The scale-out policy is Distribute among multiple availability zones when auto-scaling is enabled

Algorithm:
1. Based on how the existing nodes in the node pool are distributed across availability zones, determine the expected scale-out quantity for each zone, so that the nodes are distributed as evenly as possible after the scale-out.
2. Once the availability zone is determined, select the model with the highest current inventory for scale-out, and re-check the inventory after each machine is created, so that machines are scaled out in the current availability zone whenever possible.
Example:
Assume that the node pool is configured with Models A and B (A has an inventory of 5 units, B has an inventory of 3 units) and Subnets 1/2/3 (the three subnets are in different availability zones, with Subnet 1 preferred). The configured order of models and subnets is taken into account by the algorithm, and the node pool already has 5 nodes, all in Availability Zone 1. CA then triggers a scale-out of 10 machines in the node pool. The backend decision process is as follows:
1. Based on the distribution of the existing nodes, 5 machines each are expected to be scaled out in Availability Zones 2 and 3.
2. Based on the subnet order, identify the subnet of the availability zone currently being scaled out: Subnet 2.
2.1 Check the real-time inventory of all models and scale out one node. Repeat this step.
2.2 After the scale-out in Availability Zone 2 is complete, return to Step 2 and switch the subnet to Subnet 3.
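
The per-zone planning step can be sketched as follows (an illustrative model, not TKE's actual implementation; zone names are hypothetical). Model selection within each chosen zone then proceeds as in Scenario 1:

```python
# Illustrative sketch only (not TKE source code): ZoneEquality planning.
# Decide how many machines to add per availability zone so that the final
# node counts are as even as possible across zones.

def plan_zone_equality(existing, zones, count):
    """existing: {zone: current node count}; zones: candidate zones.
    Returns {zone: machines to add}."""
    add = {z: 0 for z in zones}
    for _ in range(count):
        # each machine goes to the zone that currently has the fewest nodes
        zone = min(zones, key=lambda z: existing.get(z, 0) + add[z])
        add[zone] += 1
    return add
```

With 5 existing nodes in Zone 1 and 10 machines requested, this assigns 5 machines each to Zones 2 and 3, matching the example above.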

Scenario 3: Manually increasing the number of nodes when auto-scaling is disabled

In this case, the default scale-out policy is Distribute among multiple availability zones, and the principle is the same as in Scenario 2.
