TencentCloud Managed Service for Prometheus

Use and Technology

Last updated: 2024-01-29 15:55:07

How do I migrate from self-built Prometheus to TMP?

In the configuration file of your self-built Prometheus, add a remote write configuration that points to TMP. For more information, see Migration from Self-Built Prometheus.
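As a minimal sketch, the remote write block in prometheus.yml might look as follows; the endpoint URL, APPID, and token below are placeholders for your instance's actual values:

```yaml
remote_write:
  - url: "http://REMOTE_WRITE_ENDPOINT/api/v1/prom/write"  # placeholder: your TMP remote write address
    basic_auth:
      username: "APPID"   # placeholder: your account APPID
      password: "TOKEN"   # placeholder: your instance token
```

After reloading Prometheus, newly scraped data is written to both the local storage and TMP, so you can verify the data in TMP before cutting over.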

Can I batch import/export dashboards into/from Grafana?

Yes. You can batch export and import dashboards through the Grafana HTTP API as instructed in the HTTP API reference.
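As a sketch of the export side: list dashboards with Grafana's /api/search endpoint, then fetch each one with /api/dashboards/uid/:uid. The host, API key, and dashboard UID below are placeholder values, not real credentials:

```go
package main

import (
	"fmt"
	"net/http"
)

// exportRequest builds an authenticated GET request that exports one
// dashboard via Grafana's /api/dashboards/uid/:uid endpoint.
func exportRequest(host, apiKey, uid string) (*http.Request, error) {
	req, err := http.NewRequest("GET", fmt.Sprintf("%s/api/dashboards/uid/%s", host, uid), nil)
	if err != nil {
		return nil, err
	}
	// Grafana accepts API keys as bearer tokens.
	req.Header.Set("Authorization", "Bearer "+apiKey)
	return req, nil
}

func main() {
	// Placeholder values; replace with your Grafana address, API key, and UID.
	req, err := exportRequest("http://localhost:3000", "API_KEY", "abc123")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String()) // http://localhost:3000/api/dashboards/uid/abc123
}
```

Looping this over the UIDs returned by /api/search and saving each response body gives a batch export; importing is the reverse, a POST of each saved JSON payload to /api/dashboards/db.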

What should I do if one exporter has too much data?

You can use Prometheus in this scenario, but we recommend that you not expose too many metrics in the exporter, as both the exporter itself and the Prometheus agent will consume a lot of memory. Expose only key metrics, and avoid high-cardinality values such as user IDs and URLs in metric labels.

How do I use TMP to monitor clusters in two different VPCs?

1. Interconnect the VPCs of the two clusters through CCN, and then integrate both clusters with the TMP instance.
2. Alternatively, install the agent in one of the clusters, and use an Nginx proxy as the remote write target address so that data from the other cluster is forwarded to the TMP instance.
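For the second option, a minimal sketch of the Nginx side, assuming the TMP remote write endpoint is reachable from the proxy host (the address and port below are placeholders):

```nginx
# Forward remote write traffic from the cluster without direct connectivity
# to the TMP instance's remote write endpoint (placeholder address).
server {
    listen 8080;
    location /api/v1/prom/write {
        proxy_pass http://REMOTE_WRITE_ENDPOINT;
    }
}
```

The cluster in the other VPC then uses http://NGINX_HOST:8080/api/v1/prom/write as its remote write URL.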

How do I integrate native container services with TMP?

You can't integrate them directly. However, you can collect the data with a self-built Prometheus and add a remote write configuration pointing to TMP in its configuration file. For detailed directions, see Migration from Self-Built Prometheus.

How do I set the parameters of the alarm cycle when creating an alarm rule through Prometheus APIs?

Add a label with the key _interval_ and a value of 1m, 5m, 10m, 15m, 30m, or 60m to the Labels parameter of the new rule.

Will data get lost during instance rebooting?

No. Data is stored in TencentDB for CTSDB, so it is not lost during instance rebooting. However, because data cannot be reported normally while the instance reboots, gaps may appear in the data.

Is it normal to see a rise in generated data after the TMP instance is rebooted?

Retries will be performed after instance rebooting, so it is normal if the data volume seems to fluctuate greatly in a short period of time.

Can I batch add MySQL instances when integrating MySQL with Prometheus in the integration center?

No. You can only add instances one by one.

Where can I view the security group of the EKS cluster created by Prometheus?

Go to Security > Security Group in the VPC console and search by the current TMP instance ID.

Can I configure multiple scrape tasks for a PodMonitor?

Yes, but you should ensure that the tasks have the same port name and label.
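For example, a PodMonitor with two scrape tasks (endpoints) that both reference the same port name; the resource name, namespace, and labels below are placeholders:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app        # placeholder name
  namespace: monitoring    # placeholder namespace
spec:
  selector:
    matchLabels:
      app: example-app     # both tasks target pods with the same label
  podMetricsEndpoints:
    - port: metrics        # same port name in both tasks
      path: /metrics
    - port: metrics
      path: /debug/metrics # second task: a different path on the same port
```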

(TMP Pushgateway usage) When the clients of multiple instances push the same job with metrics that differ only in their labels, the metrics overwrite one another. What should I do?

You can use a grouping key to solve this problem: give each client a distinct grouping key (for example, its instance name) so that pushes land in separate metric groups. Below is the sample code:
if err := push.New("http://$IP:$PORT", "db_backup").
    BasicAuth("$APPID", "$TOKEN").
    Collector(completionTime).
    // Grouping adds "instance" to the grouping key, so pushes from
    // different instances no longer overwrite one another.
    Grouping("instance", "$INSTANCE").
    Push(); err != nil {
    log.Println("Could not push to Pushgateway:", err)
}
