Limiting read/write rates with cgroup
0-Preface
Performance tests often need to simulate real user scenarios, and one way to do that is to generate IO at a specific bandwidth. Since IO test tools such as fio and dd do not throttle read/write speed on their own, we rely on cgroup for this. This post records how to limit the read/write bandwidth of a specific process or block device.
1-Installing the tools & creating a group
First, install the cgroup tools:
yum install libcgroup-tools
Then use the cgroup tools to create an IO cgroup:
cgcreate -g blkio:/iotest
If the group is created successfully, the following directory will appear:
# tree /sys/fs/cgroup/blkio/iotest/
/sys/fs/cgroup/blkio/iotest/
├── blkio.bfq.ioprio_class
├── blkio.bfq.io_service_bytes
├── blkio.bfq.io_service_bytes_recursive
├── blkio.bfq.io_serviced
├── blkio.bfq.io_serviced_recursive
├── blkio.bfq.weight
├── blkio.bfq.weight_device
├── blkio.cost.stat
├── blkio.cost.weight
├── blkio.diskstats
├── blkio.diskstats_recursive
├── blkio.latency
├── blkio.reset_stats
├── blkio.throttle.buffered_write_bps
├── blkio.throttle.io_service_bytes
├── blkio.throttle.io_service_bytes_recursive
├── blkio.throttle.io_serviced
├── blkio.throttle.io_serviced_recursive
├── blkio.throttle.read_bps_device
├── blkio.throttle.read_iops_device
├── blkio.throttle.readwrite_bps_device
├── blkio.throttle.readwrite_dynamic_ratio
├── blkio.throttle.readwrite_iops_device
├── blkio.throttle.stat
├── blkio.throttle.write_bps_device
├── blkio.throttle.write_iops_device
├── cgroup.clone_children
├── cgroup.id
├── cgroup.priority
├── cgroup.procs
├── cgroup.role
├── io.pressure
├── notify_on_release
└── tasks
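If libcgroup-tools is not available, the same group can also be created by hand. This is a minimal sketch assuming the cgroup v1 blkio controller is mounted at /sys/fs/cgroup/blkio (as in the tree above); the kernel populates the control files automatically once the directory exists:
mkdir /sys/fs/cgroup/blkio/iotest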
2-Configuring the throttled device
The block device's major:minor numbers can be found with lsblk:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 20G 0 part /
├─sda2 8:2 0 512M 0 part /boot/efi
├─sda3 8:3 0 20G 0 part /usr/local
└─sda4 8:4 0 406.6G 0 part /data
nvme2n1 259:0 0 3.5T 0 disk
nvme0n1 259:1 0 3.5T 0 disk
nvme1n1 259:2 0 3.5T 0 disk
nvme4n1 259:3 0 3.5T 0 disk /home
nvme7n1 259:4 0 3.5T 0 disk
nvme6n1 259:5 0 3.5T 0 disk
nvme3n1 259:6 0 3.5T 0 disk
nvme8n1 259:7 0 3.5T 0 disk
nvme9n1 259:8 0 3.5T 0 disk
nvme5n1 259:9 0 3.5T 0 disk
nvme11n1 259:10 0 3.5T 0 disk
nvme10n1 259:11 0 3.5T 0 disk /test
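To print just the major:minor pair for a single device, lsblk can also be pointed at the device directly (a small convenience, assuming util-linux's lsblk); for the table above this prints 259:10:
lsblk -no MAJ:MIN /dev/nvme11n1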
For this test the target is the raw device nvme11n1, and we want to cap its read and write bandwidth at roughly 200 MB/s (209600000 bytes per second), so:
echo "259:10 209600000" > /sys/fs/cgroup/blkio/iotest/blkio.throttle.write_bps_device
echo "259:10 209600000" > /sys/fs/cgroup/blkio/iotest/blkio.throttle.read_bps_device
3-Launching the application
Launch the application and add it to the IO cgroup created above; here we use fio to run a raw-device read test:
cgexec -g blkio:/iotest fio --name=mytest --ioengine=libaio --rw=read --bs=300k --numjobs=1 --size=1G --runtime=60s --time_based --iodepth=32 --filename=/dev/nvme11n1
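cgexec starts a new process inside the group; an already-running process can instead be moved into it by writing its PID to the group's cgroup.procs file (a sketch, with 12345 as a placeholder PID):
echo 12345 > /sys/fs/cgroup/blkio/iotest/cgroup.procs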
As the results show, the read bandwidth is strictly capped at about 200 MB/s:
mytest: (g=0): rw=read, bs=(R) 300KiB-300KiB, (W) 300KiB-300KiB, (T) 300KiB-300KiB, ioengine=libaio, iodepth=32
fio-3.19
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=200MiB/s][r=681 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=1): err= 0: pid=59309: Tue Jul 16 14:24:26 2024
read: IOPS=681, BW=200MiB/s (209MB/s)(11.7GiB/60008msec)
slat (usec): min=63, max=97669, avg=1443.12, stdev=11037.07
clat (usec): min=2, max=183638, avg=45455.52, stdev=46404.80
lat (msec): min=3, max=183, avg=46.90, stdev=46.53
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4],
| 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 92],
| 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 101], 95.00th=[ 101],
| 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 176], 99.95th=[ 184],
| 99.99th=[ 184]
bw ( KiB/s): min=174600, max=234600, per=100.00%, avg=204873.28, stdev=6825.91, samples=119
iops : min= 582, max= 782, avg=682.91, stdev=22.75, samples=119
lat (usec) : 4=0.01%
lat (msec) : 4=53.92%, 10=0.49%, 20=0.35%, 100=43.77%, 250=1.47%
cpu : usr=0.06%, sys=6.89%, ctx=25101, majf=0, minf=2413
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=40922,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=200MiB/s (209MB/s), 200MiB/s-200MiB/s (209MB/s-209MB/s), io=11.7GiB (12.6GB), run=60008-60008msec
Disk stats (read/write):
nvme11n1: ios=47158/0, merge=598/0, ticks=5569/0, in_queue=0, util=8.04%
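Once the test is finished, the group can be removed with the libcgroup tooling (or by rmdir on the directory after it no longer contains any tasks):
cgdelete -g blkio:/iotest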