
Non-Tencent Cloud Host Monitoring

Last updated: 2025-07-25 09:58:28

Background

This document describes how to quickly collect monitoring data from hosts outside Tencent Cloud while keeping configuration costs low.

Connection Method

Method 1: One-Click Installation (Recommended)

Operation Steps

1. Log in to the TMP console.
2. Select the corresponding Prometheus instance from the instance list.
3. Enter the instance details page, click Data Collection > Integration Center.
4. In the Integration Center, locate and click External Node Exporter to open an installation window.




Step 1. Installing and Running node_exporter

1. Execute the following scripts on the host where data needs to be reported.
wget https://rig-1258344699.cos.ap-guangzhou.myqcloud.com/prometheus-agent/node_exporter_install -O node_exporter_install && chmod +x node_exporter_install && ./node_exporter_install
Executing the script automatically downloads node_exporter, runs it, verifies that data is being reported, and finishes (metrics are then exposed on port 9100).
An example of the script execution result is shown below:



Note:
The default parameters in the script are port=9100 and path=/metrics. To customize these parameters, or to perform operations such as restarting, stopping, health checks, or viewing logs, manage the service with systemctl.
Custom parameters:
To modify the port, replace the script execution statement with the following, changing ":9100" to the desired port:
wget https://rig-1258344699.cos.ap-guangzhou.myqcloud.com/prometheus-agent/node_exporter_install -O node_exporter_install && chmod +x node_exporter_install && ./node_exporter_install --web.listen-address=":9100"
To modify the path, replace the script execution statement with the following, changing "/metrics" to the desired path:
wget https://rig-1258344699.cos.ap-guangzhou.myqcloud.com/prometheus-agent/node_exporter_install -O node_exporter_install && chmod +x node_exporter_install && ./node_exporter_install --web.telemetry-path="/metrics"
Note:
For more guidance on configuring custom parameters, see the node_exporter documentation.
Common script management operations:
Restart:
systemctl restart node_exporter
Stop:
systemctl stop node_exporter
Status check:
systemctl status node_exporter
Viewing logs:
journalctl -u node_exporter
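The systemctl commands above work because the one-click script registers node_exporter as a systemd service. For reference, a typical unit file looks roughly like the following; the exact paths, user, and flags are assumptions and may differ from what the script actually writes:

```ini
# /etc/systemd/system/node_exporter.service (illustrative; actual content may differ)
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/node_exporter --web.listen-address=":9100" --web.telemetry-path="/metrics"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```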
2. Ensure that the host's network is connected to the Prometheus instance's private network.
If the connection has been established via Direct Connect (DC), reporting can be done over the private network without any additional operation. Otherwise, to report over the public network, follow the steps below:
The host needs to have a public IP address enabled, which will serve as the target IP for data collection.
The route table of the VPC where the Prometheus instance is located needs to be configured with a NAT gateway. For guidance, see Enabling Public Network Access for TKE Serverless Cluster.
3. Manually open security group restrictions.
Add an inbound rule to the host's security group that allows the collection traffic: the protocol type should be Custom TCP, the port should match the <port> specified in the script, and the source IP should be 0.0.0.0/0.
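After opening the security group, you can verify from any machine with network access to the target that the port is reachable. A minimal sketch using bash's built-in /dev/tcp (the address below is local; substitute the host's public IP and the port from the install script):

```shell
# Check whether a TCP port is reachable (a quick stand-in for telnet/nc).
probe() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}

# Substitute the host's public IP and the node_exporter port here.
probe 127.0.0.1 9100   # prints "open" if node_exporter is listening locally
```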

Step 2: Configuring the Scraping Job




Parameter descriptions:
Job name: The exporter name, which must be unique and must match the regular expression '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'.
Scrape interval(s): The metric collection interval, in seconds.
Scrape path: The metric collection path. The default path is /metrics.
Address: The collection target address, in the format host:port. Multiple addresses can be added.
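For reference, these fields correspond to a Prometheus-style scrape configuration. A minimal sketch of the equivalent YAML, assuming one host reporting on port 9100 (the job name and target address below are placeholders):

```yaml
scrape_configs:
  - job_name: external-node-exporter   # must be unique and match the naming rule above
    scrape_interval: 15s
    metrics_path: /metrics             # default scrape path
    static_configs:
      - targets:
          - "203.0.113.10:9100"        # host public IP and node_exporter port
```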

Method 2: Custom Installation

Instead of using the one-click script from Method 1, you can install node_exporter manually by following the guide below.

1. Downloading and installing node_exporter:

On the host where data needs to be reported, download and install node_exporter. You can download it from the Prometheus open-source website, or directly execute the following command:
wget https://rig-1258344699.cos.ap-guangzhou.myqcloud.com/prometheus-agent/node_exporter -O node_exporter
The file is saved to the current directory:




2. Running node_exporter to Collect Basic Monitoring Data:

Grant permissions, execute node_exporter, and view the logs.
chmod +x node_exporter && nohup ./node_exporter &
cat nohup.out
The following image shows a successful execution:



You can use the following command to view the monitoring data exposed on port 9100:
curl 127.0.0.1:9100/metrics
The following image shows the exposed metric monitoring data after executing the command:
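The /metrics output is plain-text Prometheus exposition format, so individual values can be pulled out with standard tools. A minimal sketch, parsing an inline sample (on a real host, pipe `curl -s 127.0.0.1:9100/metrics` into the same filter):

```shell
# Sample node_exporter output; replace with the live curl output in practice.
sample='# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
node_memory_MemTotal_bytes 8.123e+09'

# Drop comment lines and print the value of a single metric.
load1=$(echo "$sample" | grep -v '^#' | awk '$1 == "node_load1" { print $2 }')
echo "$load1"
```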



After completing the above operations, configure the scraping job in the console. For guidance, see Step 2: Configuring the Scraping Job in Method 1.

Viewing Monitoring Information

Prerequisites

The Prometheus instance has been bound to a Grafana instance.

Operation Steps

1. Log in to the TMP console and select the corresponding Prometheus instance to enter its management page.
2. In Data Collection > Integration Center, find the Non-Tencent Cloud Host Monitoring card on the Integration Center page, and click to open the integration page. Then, select Dashboard > Dashboard Install/Upgrade to install the corresponding Grafana Dashboard.
3. Open the Grafana instance address associated with the Prometheus instance, and navigate to the Dashboards page to view the relevant monitoring dashboards.







Configuring Alarms

1. Log in to the TMP console and select the corresponding Prometheus instance to enter its management page.
2. Select Alarm Management to add the corresponding alarm policies. For details, see Creating Alarm Rules.
