I have an older PC that I use as a host for some VMs and Docker containers. The system worked fine until a few days ago, when it became very slow and iowait in glances was consistently in the red at 30-40%. The system is Ubuntu 20.04.1 LTS and fully up to date.
The PC has a 500GB SSD, a 4TB conventional HDD, and 16GB of RAM. Nothing too fancy, but as I said, it worked just fine. The VirtualBox VMs were all stored on the 4TB HDD. The VMs (all Ubuntu servers) were so slow that I consistently had timeouts, and a simple apt update would take 2-3 minutes. Surprisingly, the HDD LED on the PC was constantly on.
I have since moved all the VMs' VDI files to my NAS and moved a persistent volume used by a MySQL Docker container off the 4TB disk. Suddenly, the performance of the VMs and the Docker container was back to normal, so I assume it is the 4TB disk that is causing the problem.
I first ran sudo badblocks -v /dev/sda > badsectors.txt, but that didn't show any bad sectors; it came back with 0 errors.
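(For reference, badblocks runs a read-only test by default. A non-destructive read-write test exercises the disk harder, although it takes much longer on a 4TB drive and the filesystem must be unmounted first; something like:
sudo badblocks -nsv /dev/sda > badsectors-rw.txt
where -n selects the non-destructive read-write mode and -s shows progress. The output filename is just an example.)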
I then unmounted the disk and excluded it from /etc/fstab. After a reboot, I ran sudo fsck -f /dev/sda1.
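By "excluded" I mean I commented out its line in /etc/fstab, along these lines (the UUID and mount point are placeholders, not my real values):
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/data ext4 defaults 0 2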
Surprisingly, this is what I got:
user@vs01:~$ sudo fsck -f /dev/sda1
[sudo] password for user:
fsck from util-linux 2.34
e2fsck 1.45.5 (07-Jan-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sda1: 298/244195328 files (9.4% non-contiguous), 91221367/976754176 blocks

The disk is clean and does not have any errors, yet there is a clear performance issue. smartctl also shows no issues:
user@vs01:~$ sudo smartctl --all /dev/sda
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-51-generic] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: TOSHIBA DT02ABA400
Serial Number: X991S1DQS75H
LU WWN Device Id: 5 000039 995602197
Firmware Version: KQ000A
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Sat Oct 17 13:09:11 2020 HKT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled.
Self-test execution status: ( 39) The self-test routine was interrupted by the host with a hard or soft reset.
Total time to complete Offline
data collection: ( 120) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 510) minutes.
SCT capabilities: (0x003d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   050    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   100   100   050    Pre-fail  Offline      -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       6161
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       7
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   050    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   100   100   050    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   094   094   000    Old_age   Always       -       2584
 10 Spin_Retry_Count        0x0033   100   100   030    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       7
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 5
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 1
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 832
194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 37 (Min/Max 23/52)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
220 Disk_Shift 0x0002 100 100 000 Old_age Always - 0
222 Loaded_Hours 0x0032 094 094 000 Old_age Always - 2529
223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0
224 Load_Friction 0x0022 100 100 000 Old_age Always - 0
226 Load-in_Time 0x0026 100 100 000 Old_age Always - 878
240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Interrupted (host reset) 70% 2561 -
SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Is there anything else that I could try/check to find the cause of this slow performance? At the moment the VMs run off the NAS, which is working fine, but I'd still like to make use of the 4TB disk somehow. And I would assume a local disk should perform faster than a connection over the network, no?
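One more thing I noticed in the log above: the extended self-test was interrupted by a host reset at 70% remaining, so it never actually completed. I'm letting it run through this time and will check the result afterwards:
sudo smartctl -t long /dev/sda
sudo smartctl -l selftest /dev/sda
(The second command reads the self-test log; per the output above, the extended test needs around 510 minutes.)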
EDIT:
I copied one VM back, and this is what I now get from iostat -x 1:
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
sda 381.00 1740.50 7.00 1.80 1.38 4.57 9.00 2568.00 36.00 80.00 2.44 285.33 0.00 0.00 0.00 0.00 0.00 0.00 0.36 68.80

My disk is a 5400 rpm spindle, so it's even slower...
Do I really have 381 read operations per second? And which columns show the read/write operations?
1 Answer
When it is slow, run:
iostat -x 1

How many read and write operations per second do you get, and at what %util?
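For reference, r/s and w/s in that output are the completed read and write requests per second, and %util is the percentage of time the device was busy servicing requests. So in the output added to the question, sda is handling roughly 381 reads/s plus 9 writes/s at about 69% utilization.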
A 7200 rpm disk is only good for about 120 IOPS, so if you are getting that many combined read + write operations per second or more, it's not a fault: mechanical disks are just painfully slow for random workloads.
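As a rough back-of-the-envelope calculation: a random I/O costs about one average seek plus half a platter rotation. For a 7200 rpm drive that is roughly 8-9 ms of seek plus 60000 / 7200 / 2 ≈ 4.2 ms of rotational latency, i.e. about 13 ms per operation, which lands in the 80-120 IOPS range once shorter seeks are factored in. At 5400 rpm the half-rotation alone is 60000 / 5400 / 2 ≈ 5.6 ms, so your disk will top out even lower, likely somewhere around 60-90 IOPS for truly random access. The 381 r/s in your EDIT presumably only works because many of those reads are sequential or served from cache.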
You can use iotop to see which process is using up the most disk I/O.
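For example (iotop needs root; -o shows only processes that are actually doing I/O, -P aggregates threads into processes, and -a shows accumulated totals instead of instantaneous bandwidth):
sudo iotop -oPa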