OptiWrite® has been preventing fragmentation on PCs running PerfectDisk® for well over a year now, and as of July 10th, 2012, it’s official – our trademark is approved!
PerfectDisk’s OptiWrite technology prevents fragmentation before it happens – reducing the energy and system resources needed to keep your system running optimally.
OptiWrite detects when Windows is going to fragment files and intelligently redirects I/O to stop the fragmentation from occurring. System performance is maintained and the need to defragment files is greatly reduced, resulting in time and energy savings.
Unlike our competitors, OptiWrite doesn’t sacrifice system speed in order to do its job.
What is OptiWrite?
OptiWrite is a file system filter that eliminates fragmentation in real time by ensuring that up to 100% of files are written to the file system in a single continuous stream, saving the resources normally required to analyze and defragment. OptiWrite outperforms its competition because it was designed to prevent file fragmentation in a way that does not degrade the performance of subsequent reads.
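The core idea – placing each file in one contiguous run of free space instead of scattering it across gaps – can be sketched in a few lines. This is a hypothetical illustration of the general technique, not OptiWrite’s actual implementation; the extent representation and the `choose_extent` helper are assumptions for the example.

```python
def choose_extent(free_extents, size_needed):
    """Return the smallest free extent that can hold the whole file (best fit),
    so the file is written as one continuous stream and no fragment is created.
    Each extent is a (start_cluster, length) tuple."""
    candidates = [e for e in free_extents if e[1] >= size_needed]
    if not candidates:
        return None  # no single extent fits: the file would have to fragment
    return min(candidates, key=lambda e: e[1])

# Three free gaps on a toy disk; a 12-cluster file fits whole in the third.
free_extents = [(0, 8), (100, 64), (300, 16)]
print(choose_extent(free_extents, 12))  # (300, 16)
```

Choosing the smallest extent that still fits (rather than the first one found) also limits the free space fragmentation left behind – the trade-off the following sections discuss.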
OptiWrite outperforms its leading competitor in both performance and energy savings.
Files optimized with OptiWrite read and write significantly faster than with its nearest competitor. The key factor in preserving performance and reducing energy costs is the amount and type of free space fragmentation created in exchange for preventing fragments.
While both PerfectDisk and our competitor are able to prevent up to 100% of fragments, the amount of free space fragmentation created as a result is dramatically different. The performance impact of free space fragmentation is clearly evident when the diagnostic File Access Timer is run. The results with OptiWrite are dramatically faster because it avoids creating excessive free space fragmentation.
Preventing file fragmentation is not enough to ensure maximum performance and energy savings. It is just as important to factor in where and how files are written.
Why Prevent Fragments?
While disk capacity has grown greatly over the past 15 years, disk performance has not. This is why it is more important than ever to defragment disks in order to get the most out of your hardware, but the lack of increase in performance relative to capacity has created another problem: energy consumption. As larger capacity disks are filled with data, it takes longer to optimize them for peak performance and that means paying more for power. Preventing fragmentation before it occurs doesn’t just cut down on the time it takes to perform optimizations; it can eliminate the need to do so for extended periods of time depending on disk usage. Preventing fragmentation in real time saves you real money, and in today’s economy, any solution that both maximizes performance and reduces costs is a true winner.
Preventing fragments up front also has a significantly lower impact on system resources than defragmenting them after the fact.
In addition, preventing fragments in real time significantly reduces or eliminates random write behavior – a common pattern in which the disk must seek to a gap of free space, write, then seek again, repeating for each fragmented gap (seek-write-seek-write, and so on). The result is smooth, fast sequential writing that avoids multiple seeks.
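The seek-write-seek pattern above can be made concrete with a toy model: writing into fragmented free space costs roughly one head seek per gap, while a single contiguous extent needs only one. The function and the numbers here are illustrative assumptions, not measurements of any product.

```python
def seeks_to_write(file_size, free_extents):
    """Count head seeks needed to write `file_size` clusters into the given
    free extents (a list of gap lengths), filling gaps in order."""
    seeks, remaining = 0, file_size
    for length in free_extents:
        if remaining <= 0:
            break
        seeks += 1          # one seek to reach each gap
        remaining -= length
    return seeks

fragmented = [4] * 8   # eight small 4-cluster gaps
contiguous = [32]      # one 32-cluster extent
print(seeks_to_write(32, fragmented))  # 8
print(seeks_to_write(32, contiguous))  # 1
```

Eight seeks versus one for the same 32-cluster file – that difference, multiplied across every write, is where the time and energy savings come from.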
The majority of the energy savings that can be attained through the prevention of fragmentation is not in the act of preventing fragments or in the avoidance of defragmentation, but in the performance gained when reading back files after the fact. In order to attain the best performance and energy savings, the prevention solution must factor in the placement of data and avoid the excessive creation of free space fragmentation. Otherwise the solution will simply trade one problem for another and require that additional system resources and energy costs be spent to fully restore performance. The simple act of preventing fragmentation is not enough to justify doing so if the solution sacrifices fast sequential reads for slow random reads.
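A back-of-envelope model shows why read-back performance dominates the savings: a fragmented file pays one seek per fragment on top of the same transfer time. The timing constants below are generic assumptions for a mechanical disk (roughly 10 ms per seek, 100 MB/s transfer), not figures from the source.

```python
SEEK_MS = 10.0             # assumed average seek time
TRANSFER_MB_PER_S = 100.0  # assumed sustained transfer rate

def read_time_ms(file_mb, fragments):
    """Estimate total read time: one seek per fragment plus transfer time."""
    transfer_ms = file_mb / TRANSFER_MB_PER_S * 1000.0
    return fragments * SEEK_MS + transfer_ms

print(read_time_ms(100, 1))    # contiguous 100 MB file: 1010.0 ms
print(read_time_ms(100, 500))  # same file in 500 fragments: 6000.0 ms
```

Under these assumptions the fragmented file takes nearly six times as long to read back, even though the bytes transferred are identical – which is why a prevention scheme that trades file fragments for scattered free space (and thus scattered future reads) gives back much of what it gained.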
It should be kept in mind that free space fragmentation is the most prevalent cause of file fragmentation; failing to avoid it while preventing files from fragmenting simply delays the inevitable. If you prevent file fragmentation at the expense of creating free space fragmentation, the disk will eventually be forced to fragment files regardless of any prevention method. Any solution that fails to account for this will at best delay the need to defragment and at worst eventually create that need.