Saturday, October 17, 2009

OCFS Cluster (block) size and performance

You may know that 1024 KB (1 MB) is the recommended OCFS cluster
size for data files, while 4 KB is recommended for program files.
But OCFS with a 1 MB cluster size seems to lead to performance
issues when it has directories holding a large number of big files.
In one case, a file system used to hold archive logs and export
dumps degraded gradually as more and more files were created in or
moved into it, and at some point Windows Explorer failed to show
the contents of a directory, forcing a server reboot - observed in
a Windows 2003 64-bit environment.

Reformatting the drive with a 4 KB cluster size appears to fix the
issue. So before selecting a cluster size, consider how many files
the drive is going to hold, how often they are going to be
accessed (I/O performance) and how large they are going to grow
over time - not just whether the drive is going to hold data
files or program files.
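
A minimal sketch of such a reformat, assuming the ocfsformat.exe utility that ships with Oracle's OCFS for Windows; the drive letter and volume label are placeholders and the switches are quoted from memory, so confirm them with "ocfsformat /?" and the Oracle documentation for your release before running anything:

    rem Reformat the OCFS volume mounted at O: with a 4 KB cluster size.
    rem Assumed switches: /m mount point, /c cluster size in KB,
    rem /v volume label, /f format without prompting - verify before use.
    ocfsformat /m o: /c 4 /v ocfs_arch /f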

Size and determining factors

  • 4 KB - a huge number of files in the file system, irrespective
    of their size: program files, exports, archive logs, etc.

  • 1024 KB (1 MB) - a few, but large, files such as data files
    that need a lot of caching to improve I/O performance.

Unix: sleep <seconds> && command - equivalent in Windows

If your system has the Windows Server Resource Kit installed, the above construct works as-is, since the kit includes a "sleep" command. If not, the following comes very close, with a few hassles and gotchas.
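
For example, with the Resource Kit's sleep.exe on the PATH, the Unix-style one-liner carries over almost unchanged; the 30-second delay and the echo below are only stand-ins for a real delay and command:

    sleep 30 && echo Started at %TIME%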

Short Version: choice /T secs /D Y || command
Long Version: choice /N /T secs /D Y >NUL || command

    - Requires quite a few more keystrokes than the sleep version.
    - You need choice.exe installed and available in your path.
    - Notice that '||' is used instead of '&&'. Pipe-pipe is required if you wish to hit "Ctrl-C" to abort both choice and the command together. On a timeout, choice exits with errorlevel 1 (Y is option 1), which '||' treats as failure and so runs the command. If you used "&&" and hit "Ctrl-C", your command would still execute, because choice returns 0 (zero, i.e. success) upon encountering "Ctrl-C".
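
Putting it together, a minimal sketch of the long version with concrete values (the 30-second wait and the echo are just illustrations of a real delay and command):

    rem Wait 30 seconds, then run the command; Ctrl-C during the wait aborts both.
    rem /N hides the "[Y,N]?" prompt, /T 30 /D Y auto-picks Y after 30 seconds,
    rem >NUL suppresses choice's own output, and || fires because the timeout
    rem exits with errorlevel 1 (Y is option 1), while Ctrl-C exits with 0.
    choice /N /T 30 /D Y >NUL || echo Started at %TIME%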