Tencent Cloud
Cloud File Storage

General-Purpose CFS Best Practices

Last updated: 2026-04-08 19:06:37

Overview

This article describes characteristics of the NFS protocol used by General Series file systems and provides best practices for common business scenarios.

Scenario 1: Client Kernel Selection

Background

The NFS client runs in kernel space, so defects in certain kernel versions can prevent the NFS service from working properly. For a better experience, use one of the recommended kernel versions listed below.

FAQ and Troubleshooting Suggestions

Kernel Network Stack Flaw Causes File System Unresponsiveness (Priority: High)

When the system's kernel version is between 2.6.32-696 and 2.6.32-696.10.1 (including 2.6.32-696 but excluding 2.6.32-696.10.1), the NFS server may become busy, causing kernel request retransmissions that can potentially trigger a kernel network stack defect, resulting in unresponsive operations. When operations become unresponsive, restart the CVM instance. For more information, see the document RHEL 6.9: NFSv4 TCP transport stuck in FIN_WAIT_2 forever.
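To check whether a client falls in an affected range, the running kernel release (from `uname -r`) can be compared numerically. The helper below is an illustrative sketch, not part of any CFS tooling; the handling of the `.elN` suffix is an assumption about the release string format.

```python
import re

def parse_kernel(release):
    # Keep the part before any ".elN" suffix and pull out the numeric fields,
    # e.g. "2.6.32-696.10.1.el6.x86_64" -> (2, 6, 32, 696, 10, 1)
    nums = re.findall(r'\d+', release.split('.el')[0])
    return tuple(int(n) for n in nums)

def in_affected_range(release, low="2.6.32-696", high="2.6.32-696.10.1"):
    # Affected if low <= release < high, comparing positionally with zero padding
    v, lo, hi = parse_kernel(release), parse_kernel(low), parse_kernel(high)
    width = max(len(v), len(lo), len(hi))
    pad = lambda t: t + (0,) * (width - len(t))
    return pad(lo) <= pad(v) < pad(hi)

print(in_affected_range("2.6.32-696.3.1.el6"))   # True: inside the affected range
print(in_affected_range("2.6.32-696.10.1.el6"))  # False: the fixed kernel
```

The same comparison can be reused for the other version ranges in this section by passing different low and high bounds.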

Kernel Defect Causes File System Unresponsiveness (Priority: High)

When the system's kernel version is one of the following versions, NFS server failover may cause deadlocks in the NFS client's open, read, and write operations, resulting in persistent unresponsiveness of the file system. When operations become unresponsive, restart the CVM instance. For more information, see the document RHEL 7: NFSv4 client loops with WRITE/NFS4ERR_STALE_STATEID - if NFS server restarts multiple times within the grace period.
OS Type | Version
Red Hat Enterprise Linux 6, CentOS 6 | 2.6.32-696.3.1.el6
Red Hat Enterprise Linux 7, CentOS 7 | All kernel versions prior to 3.10.0-229.11.1.el7
Ubuntu 15.10 | Linux 4.2.0-18-generic
When the system's kernel version is one of the following and network partitioning or jitter forces connections to be re-established, the NFS client may become persistently unresponsive because it fails to handle the returned error codes correctly.
The symptom is that the file system becomes unresponsive and the system messages log repeatedly prints "bad sequence-id" errors. When operations become unresponsive, restart the CVM instance. For more information, see the document RHEL 6/RHEL 7: NFSv4 client receiving NFS4ERR_BAD_SEQID drops nfs4 stateowner resulting in infinite loop of READ/WRITE+NFS4ERR_BAD_STATEID.
OS Type | Version
Red Hat Enterprise Linux 6, CentOS 6 | All kernel versions prior to 2.6.32-696.16.1.el6
Red Hat Enterprise Linux 7, CentOS 7 | All kernel versions prior to 3.10.0-693.el7
When the operating system kernel version is any CentOS or Red Hat Enterprise Linux 5.11.x kernel, executing the ls command, commands containing wildcards * or ?, or other operations requiring directory traversal may cause lag or unresponsiveness due to a kernel defect. You are advised to upgrade the kernel version to avoid this issue.

chown Command and System Call Not Supported (Priority: Low)

When the system kernel version is 2.6.32, the NFS client does not support the chown command or the corresponding system call.

ls Operation Cannot Be Terminated (Priority: Low)

When the system kernel version is 2.6.32-696.1.1.el6 or earlier, performing an ls operation while simultaneously adding or deleting files or subdirectories will cause the ls operation to hang indefinitely. Upgrade the kernel version to avoid this issue.
When the system kernel version is 4.18.0-305.12.1, directory traversal operations such as ls may hang indefinitely. Upgrade the kernel to version 4.18.0-305.19.1 or later to resolve this issue.

Recommended Kernel Images for NFS

Linux System Image

OS Type | Version
CentOS | CentOS 7.5 64-bit: 3.10.0-862.14.4.el7.x86_64 and above
CentOS | CentOS 7.6 64-bit: 3.10.0-957.21.3.el7.x86_64 and above
CentOS | CentOS 7.7 64-bit: 3.10.0-1062.18.1.el7.x86_64 and above
CentOS | CentOS 8.x 64-bit: 4.18.0-147.5.1.el8_1.x86_64 and above
TencentOS Linux | TencentOS Server 2.2 (Tkernel 3)
TencentOS Linux | TencentOS Server 2.4 (Tkernel 4)
TencentOS Linux | TencentOS Server 2.6 (Final)
TencentOS Linux | TencentOS Server 3.1 (Tkernel 4)
Debian | Debian 9.6 64-bit: 4.9.0-8-amd64 and above
Debian | Debian 9.8 64-bit: 4.9.0-8-amd64 and above
Debian | Debian 9.10 64-bit: 4.9.0-9-amd64 and above
Ubuntu | Ubuntu 14.04 64-bit: 4.4.0-93-generic and above
Ubuntu | Ubuntu 16.04 64-bit: 4.4.0-151-generic and above
Ubuntu | Ubuntu 18.04 64-bit: 4.15.0-52-generic and above
Ubuntu | Ubuntu 20.04 64-bit: 5.4.0-31-generic and above
openSUSE | openSUSE 42.3 64-bit: 4.4.90-28-default and above
SUSE | SUSE Linux Enterprise Server 12 SP2 64-bit: 4.4.74-92.35-default and above
SUSE | SUSE Linux Enterprise Server 12 SP4 64-bit: 4.12.14-95.16-default and above
CoreOS | CoreOS 1745.7.0 64-bit: 4.19.56-coreos-r1 and above
CoreOS | CoreOS 2023.4.0 64-bit: 4.19.56-coreos-r1 and above

Windows System Image

OS Type | Version
Windows Server 2012 | Windows Server 2012 R2 Datacenter Edition 64-bit Chinese Edition
Windows Server 2012 | Windows Server 2012 R2 Datacenter Edition 64-bit English Edition
Windows Server 2016 | Windows Server 2016 Datacenter Edition 64-bit Chinese Edition
Windows Server 2016 | Windows Server 2016 Datacenter Edition 64-bit English Edition
Windows Server 2019 | Windows Server 2019 Datacenter Edition 64-bit Chinese Edition
Windows Server 2019 | Windows Server 2019 Datacenter Edition 64-bit English Edition

Scenario 2: Multi-process or Client Concurrent Write

Background

The NFS protocol does not provide a strongly consistent file locking mechanism, and file extension and write operations are not atomic. By default, each process or client also maintains its own file pointer and caches file data. As a result, when multiple processes or clients write to the same file concurrently, different clients or processes may observe inconsistent file states. Common issues include write overwrites (later writes overwriting earlier ones), interleaved content (write operations executed in an interleaved fashion), and serialization anomalies (writes that appear serial but without guaranteed ordering).
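The write-overwrite failure mode can be reproduced locally with two independent file descriptors, each keeping its own offset; on NFS the same pattern occurs across clients, with client-side caching making it worse. The sketch below is a local illustration using a temporary file, not an NFS-specific API:

```python
import os
import tempfile

# Two writers open the same file independently; each file descriptor keeps
# its own offset, so both start writing at offset 0.
path = os.path.join(tempfile.mkdtemp(), "shared.log")
fd_a = os.open(path, os.O_WRONLY | os.O_CREAT)
fd_b = os.open(path, os.O_WRONLY | os.O_CREAT)

os.write(fd_a, b"AAAA")  # writer A writes at its offset 0
os.write(fd_b, b"BBBB")  # writer B also writes at offset 0, clobbering A
os.close(fd_a)
os.close(fd_b)

with open(path, "rb") as f:
    content = f.read()
print(content)  # b'BBBB': writer A's data was overwritten
```

Opening with O_APPEND and taking a lock before each write, as shown later in this section, avoids this particular failure.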

Usage Recommendations

- (Recommended) Optimize the workflow so that each process or client writes to its own independent file, and periodically consolidate these files into the final output with a merger program. This avoids concurrent write conflicts entirely and provides the best performance and data consistency.
- (Recommended) Switch to the CFS Turbo file system, which provides automatic locking and unlocking and stronger consistency for concurrent reads and writes.
- When different clients must write to the same file, adopt the following safeguards:
  - Mount configuration optimization: mount with NFSv4 instead of NFSv3 and add the noac mount option to keep file states synchronized in real time. Mount command reference:
    mount -t nfs -o vers=4.0,proto=tcp,noac,noresvport 10.XX.XX.XX:/ /localfolder
  - Program-level manual locking: use the flock system call in the write code to implement a mutual exclusion lock, open the file in append mode (O_APPEND), and strictly follow the "obtain lock → write → release lock" workflow. For a manual locking implementation, refer to:
import os
import fcntl

O_WRONLY = os.O_WRONLY  # Open the file in write-only mode
O_APPEND = os.O_APPEND  # Open the file in append mode
O_DIRECT = getattr(os, 'O_DIRECT', 0o40000)  # Use the system's O_DIRECT constant; fall back to 0o40000 if it does not exist
SEEK_END = os.SEEK_END  # Seek relative to the end of the file

# Assume the file for concurrent writes is file1
filename = '/root/cfs/file1'

try:
    fd = os.open(filename, O_WRONLY | O_APPEND | O_DIRECT)
    # Attempt to acquire the file lock
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("Successfully acquired lock")
        # Position to the end of the file
        os.lseek(fd, 0, SEEK_END)
        # Write data
        data = b"hello world"
        os.write(fd, data)
    except BlockingIOError:
        print("File is locked by another process")
        # Exit if the file is locked
        exit(1)
    finally:
        # Release the file lock
        fcntl.flock(fd, fcntl.LOCK_UN)
        print("Lock released")
finally:
    if 'fd' in locals():
        os.close(fd)
Note:
In high-concurrency scenarios, lock contention may lead to a noticeable increase in latency, which could significantly impact file system read/write performance. Adjust settings according to actual business requirements.

