April 3, 2024

When it comes to ISAM files, typically only a single factor limits their size: available disk space. Of course, there is the 2-GB limit (solved by adding ",tbyte" to the file) and the physical ISAM file maximum of 256 TB, but those situations are few and far between. Recently, a Synergy application developer discovered another answer to the "how big is too big" question. While STOREing records to an ISAM file on a disk with hundreds of gigabytes free, the application failed with Windows system error 665, accompanied by a somewhat cryptic message: "The requested operation could not be completed due to a file system limitation." Some research revealed that this error can also indicate a SQL Server problem on Windows involving VERY large files (>100 GB) that have reached a limit in fragmentation. Because Synergy can also have VERY large files, a similar problem can occur with our DBMS ISAM files.
Fragmentation issues can typically be avoided by keeping the disk defragmented through regular use of Microsoft Drive Optimizer or Sysinternals' Contig utility. Unfortunately, it turns out that once a file has encountered this fragmentation problem, these utilities do not seem to resolve it. The best and quickest way to get back up and running is to simply copy the file. This seems to do the trick, even with SQL Server, but at the cost of downtime.
How does an ISAM file get fragmented? Whenever a file is extended and the space already allocated to it is insufficient, the operating system may need to allocate another fragment (or extent) somewhere else on the disk. Over time, active files that are frequently updated will become fragmented, and there is really no way to prevent it. Heavy fragmentation can also hurt performance, particularly for files on HDDs. Regularly running the defragmentation utilities mentioned above can help alleviate the issue.
Which kinds of ISAM files get fragmented most? The data file (.is1) tends to accumulate the most fragmentation, especially when small segments are written. In the case mentioned above, the record size was 124 bytes, with the most common compressed segment size being around 100 bytes. The biggest contributor to fragmentation, however, was how the file was used: all writes were new records being added (records were seldom, if ever, deleted), causing the file to be extended frequently.
But how do we prevent this from happening in the first place? If you find yourself in a similar situation, with an ISAM file where all updates are new records (no DELETEs), or if you have encountered error 665, you may be interested in a few new command options recently added to the isutl utility. These options first appeared in the 12.2.1 feature release and are now part of the 12.3 LTS release.
The new isutl -ex #% file command allows you to add free space to a file. By specifying a percentage, you can extend the file by (hopefully) a single extent, depending on available contiguous free space in the filesystem, pre-allocating a run of contiguous free segments in one operation. To help you choose a percentage, we've also enhanced the -b (bucket usage statistics) option. Previously, this option was allowed only after a full file verify (-v), which can take quite some time as files get VERY large. Now it is also a standalone command that just scans the data file. Simply use isutl -bfs file to produce a listing, where -f displays the number of file fragments and -s suppresses the freelist scan, which can take a while on large files with potentially millions of free segments.
What you are looking for is how close to zero the %free and #free columns are, relative to the #inuse column and compared across the other buckets.
Bucket Allocation:
       slen     #inuse   %free   #free  lhead
  95:   109   22024588  0.8427  185611  0x208059ff2b
  96:   110   22504159  0.7515  169124  0x20a8156434
  97:   111    6885278  0.6968   47980  0x2083293630
  98:   112    9672375  0.6884   66583  0x20a870438e
  99:   113    8040938  0.6958   55945  0x20844846b7
 100:   114   12481383  0.3766   47004  0x2042ccf717
 101:   115    6394156  0.6867   43911  0x20865d674e
 102:   116   12822400       0       0
 103:   117    3992881  0.6789   27107  0x2087a17239
 104:   118    5724366  0.6995   40040  0x2087f0420c
 105:   119    7020137  0.7181   50413  0x20a9864053
The above output indicates that 114- and 116-byte segments are currently being consumed faster than the others. Bucket 102 has no free segments left, so each record that falls into it will extend the file, more than likely creating a fragment. To avoid further fragmentation, a reasonable step would be to reset the upper freelist percentage to at least 0.7%. This command expands all buckets below 0.7% free and ignores those at or above it:
isutl -ex 0.7% file
resulting in
Bucket Allocation:
       slen     #inuse   %free   #free  lhead
  95:   109   22024588  0.8427  185611  0x208059ff2b
  96:   110   22504159  0.7515  169124  0x20a8156434
  97:   111    6885278  0.7000   48197  0x20aa1042b4
  98:   112    9672375  0.7000   67707  0x20aa10a0cb
  99:   113    8040938  0.7000   56287  0x20aa128c8b
 100:   114   12481383  0.7000   87370  0x20aa132381
 101:   115    6394156  0.7000   44760  0x20aa595afd
 102:   116   12822400  0.7000   89757  0x20aa5ad860
 103:   117    3992881  0.7000   27951  0x20aaf83a21
 104:   118    5724366  0.7000   40071  0x20aaf83d6d
 105:   119    7020137  0.7181   50413  0x20a9864053
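The relationship between the columns can be checked directly: %free appears to be 100 × #free / #inuse, so extending a bucket to a 0.7% freelist should leave roughly #inuse × 0.007 free segments. A quick sanity check in plain Python, using pairs taken from the before/after listings (treating the rounding as a ceiling is my own inference from the numbers, not documented isutl behavior):

```python
import math

# (#inuse, #free after "isutl -ex 0.7% file") for a few buckets from the
# listings; buckets already at or above 0.7% free were left untouched.
buckets = {
    97:  (6885278, 48197),
    101: (6394156, 44760),
    102: (12822400, 89757),  # was fully depleted (0 free) before the extend
    103: (3992881, 27951),
}

for bucket, (inuse, free_after) in buckets.items():
    # ceil(#inuse * 0.7%) reproduces the #free column after the extend
    print(bucket, math.ceil(inuse * 0.007), free_after)
```

For every bucket the computed count matches the #free column, which is a handy way to predict how many free segments a given -ex percentage will pre-allocate.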
To keep an accurate picture, you may want to run the -b command on a scheduled basis (how often depends on how fast the file grows) to determine trends. Until you know how fast the freelist is being depleted, there is no reason to adjust it, and remember that each use of -ex will likely create another fragment. But consider this: say the bucket allocation above shows a file that just one month ago was extended to 1% free. Given that information, you might extend the file to 1.2% free and check back in three weeks. Once you understand the trends, you may decide to spread the extensions out over many months by using much larger free percentages.
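That trend arithmetic can be sketched in a few lines. In this illustration, the in-use count is borrowed from bucket 102 above, but the "1% a month ago" figure is the hypothetical from the scenario, not isutl output:

```python
# Rough depletion-rate sketch: a bucket was extended to 1% free about
# 30 days ago, and isutl -b now reports roughly 0.7% free.
# All names and figures here are illustrative, not isutl output.
inuse = 12822400            # #inuse for the bucket being watched
pct_month_ago = 1.0         # freelist set to 1% free ~30 days ago
pct_now = 0.7               # current %free reported by isutl -b
days = 30

consumed_per_day = inuse * (pct_month_ago - pct_now) / 100 / days
days_left = inuse * (pct_now / 100) / consumed_per_day
print(f"~{consumed_per_day:.0f} segments/day, ~{days_left:.0f} days until depleted")
# prints: ~1282 segments/day, ~70 days until depleted
```

With an estimate like this in hand, you can decide whether a small top-up now or one much larger extension is the better trade against the extra fragment each -ex creates.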
Under normal circumstances, a file's freelist will expand and contract as records are stored and deleted, so unless you are experiencing heavy fragmentation growth or have encountered the aforementioned error 665, you should not need this new command. Note that both new commands, -b and -ex, can be used while the file is in use, so there is no need to schedule them during downtime. And now that I've given you another reason for a file to be too big, know that there is also a tool to make it even bigger.
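Finally, if you do schedule those -b runs, the listing is easy to post-process. Here is a minimal sketch that flags buckets whose freelist is running low; it assumes the output format shown earlier, and the depleted_buckets function and its 0.5% threshold are my own, not part of isutl:

```python
import re

# Excerpt of an "isutl -bfs" listing in the format shown above.
SAMPLE = """\
Bucket Allocation:
       slen     #inuse   %free   #free  lhead
  95:   109   22024588  0.8427  185611  0x208059ff2b
 100:   114   12481383  0.3766   47004  0x2042ccf717
 102:   116   12822400       0       0
"""

def depleted_buckets(listing, threshold=0.5):
    """Return bucket numbers whose %free is below threshold percent."""
    low = []
    for line in listing.splitlines():
        # fields: bucket: slen #inuse %free #free [lhead]
        m = re.match(r"\s*(\d+):\s+\d+\s+\d+\s+([\d.]+)\s+\d+", line)
        if m and float(m.group(2)) < threshold:
            low.append(int(m.group(1)))
    return low

print(depleted_buckets(SAMPLE))  # -> [100, 102]
```

A scheduled job could run isutl -bfs, feed the listing through something like this, and alert you before a bucket empties out and the file starts fragmenting again.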