Is it bad to fill up my Windows installation drive, i.e. C:? If so, why? How much of it can I safely fill? Will adding more and more files to the C: drive affect performance? How much space should I leave free for better performance, and what could be the consequences of filling the whole drive?
Well, what instantly jumps to mind is that having a “completely full” drive may cause some of the following problems:
- May cause problems for any program that needs to generate temporary files or use the disk as a cache (for example installers (even ones installing to other drives), downloads, compression, etc.).
- Your pagefile will be unable to expand if it ever needs to (assuming it is kept on C).
- May prevent hibernation if there is insufficient space to save RAM to disk. [Edit: as Martin points out in his answer, hibernation doesn’t need additional space when you hibernate, but instead uses up space when hibernation is first switched on.]
- Will make defragmentation very slow or impossible.
Assuming that you keep the drive defragmented, a “full” drive should perform as well as an “empty” one, so I don’t think there is much of a performance issue. [Edit2: Joel’s answer wonderfully explains how fuller drives force the read/write head to do more seeking, which obviously has a negative effect on performance.]
I think that 85-90% usage is the maximum recommended, but I cannot remember at the moment where I got that figure from.
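As a rough sketch of that rule of thumb, Python’s standard library can report how full a drive is. The 85% threshold below is just the figure mentioned above, not an official limit:

```python
import shutil

# Rough check of a drive against the ~85% usage rule of thumb.
# On Windows pass "C:\\"; "/" works on Linux/macOS.
def usage_percent(path):
    total, used, free = shutil.disk_usage(path)
    return 100 * used / total

pct = usage_percent("/")
print(f"{pct:.1f}% used - {'consider freeing space' if pct > 85 else 'OK'}")
```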
To understand this answer, I first need to give a couple of paragraphs’ worth of background information:
The hard drive is typically the slowest part and biggest performance bottleneck of your system. Having a fast hard drive is more important to many facets of system performance than memory or a fast CPU. This slowness comes from the fact that the drive depends on actual moving parts for its operation. These moving parts impact performance in three ways: rotational throughput, rotational latency, and seek time. Rotational throughput is how many bytes pass under the read/write head in a given time span, which depends on how fast the platter is spinning. Rotational latency is the time it takes for the first byte of a request to rotate around to the read/write head. Seek time is the amount of time it takes the head to reach the correct track on the platter.
It’s that last item I want to talk about. To help reduce seek time, disk controllers will try to keep data grouped near the beginning of the drive. This is why if you watch an XP machine defragment files you will see it spend a lot of time compacting files. It’s also why you want to defragment at all, as every time you have a new fragment in a file you need to do another (slow!) seek operation.
Okay, enough background. Let’s move on to a real-world scenario that directly addresses your question. Let’s say you buy a new drive that advertises an 8-9 millisecond average seek time. 8-9 milliseconds may not seem like much, but to your computer, which thinks in nanoseconds, even one full millisecond is an eternity. Now let’s say you only use 25% of that drive’s capacity and keep it well defragmented. What’s your average seek time? The correct answer is something much less than 8-9 milliseconds, because the read/write head never has to seek to the outer 75% of the drive. It always starts out much closer to its destination.
Now let’s imagine you fill that drive up. You’re back up to 8-9 milliseconds per seek on average. Hopefully I’ve helped you understand that as you fill up the drive, your seek times will start to suffer, and this drags down performance of the entire system. You can reduce the impact by adding RAM or using other optimization techniques like ReadyBoost, but you can’t eliminate it entirely.
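To put rough numbers on this, here is a deliberately simplified toy model (my own assumption, not a real drive specification) in which average seek time scales linearly with the fraction of the platter in use:

```python
# Toy model, not a real drive spec: assume average seek time scales
# linearly with the fraction of the platter the head must cover.
FULL_DRIVE_SEEK_MS = 9.0  # upper end of the advertised 8-9 ms figure

def avg_seek_ms(used_fraction):
    # Data kept in the first `used_fraction` of the drive means the
    # head never travels further than that fraction of a full stroke.
    return FULL_DRIVE_SEEK_MS * used_fraction

print(avg_seek_ms(0.25))  # quarter-full drive: 2.25 ms
print(avg_seek_ms(1.0))   # full drive: back to 9.0 ms
```

Real drives aren’t this linear (even short seeks pay a fixed settle overhead), but the model captures why a quarter-full, defragmented drive seeks noticeably faster than a full one.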
Generally speaking, filling up the hard drive on Windows has worse effects than on a Linux box.
I agree with everything the previous poster said, except for the remark on hibernation (not stand-by): the necessary space is allocated on the disk as soon as hibernation is turned “on”. Windows creates a hidden file C:\hiberfil.sys whose size is almost the same as the amount of RAM in your computer (2 GB RAM –> 2 GB hiberfil.sys).
Having a full disk could therefore “prevent” switching hibernation on. Conversely, if you switch hibernation off you’ll free as much disk space as you have RAM.
The maximum recommended usage of 85% -> this number appears when you install Windows XP. More precisely: it tells you to keep at least 15% of your disk free.
You may perform the following two experiments to get a rough understanding of what is going on inside your computer/on your disk regarding writing/reading files and defragmentation.
- Open the defragmentation tool in Windows
- Defragment the disk (probably twice), until nothing red (fragmented space) appears anymore
- Download a large file (e.g. an Ubuntu ISO image, ~750 MB).
- Switch to the defrag tool and just check the disk
- You see a lot of red marks now
The reason is that the data was written “step-by-step” and got somewhat “distributed” across the whole disk. Your browser didn’t tell Windows to create a file with a size of e.g. 750 MB in advance; it created a file and appended data piece by piece, so Windows didn’t know how big the file would become.
If you never defragment your disk and it’s quite full, “your computer” has to look/seek for free space on the disk and must distribute your (in this example) newly downloaded file in a way that fills the remaining gaps of free space. This makes writing slower.
If a file is distributed across the whole disk and you want to read it (e.g. burn the above mentioned ISO file) the read/write head inside the disk must move back-and-forth very often to grab all the pieces. This makes reading slower.
- Select the newly downloaded ISO and copy it to another disk (USB stick, external hard drive, etc.)
- Delete the original file (“Windows” drive), just keep the copy on the other media
- Copy the file back from the external disk or whatever onto your “Windows” harddisk
- Run “check” in the defrag tool again
- You’ll see almost no red marks now.
The file was written in one sequence, because Windows knew the file size in advance and was therefore able to reserve all the necessary space on the disk in one go.
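The same idea can be sketched in Python (file name and location are illustrative): a program that knows the final size can reserve it before writing, giving the filesystem a chance to find one contiguous run of free space. Note that truncate() may produce a sparse file on some filesystems, so tools that need a hard reservation use posix_fallocate() or the Windows SetEndOfFile API instead.

```python
import os
import tempfile

size = 750 * 1024 * 1024  # ~750 MB, matching the ISO example above
path = os.path.join(tempfile.gettempdir(), "example.iso")  # illustrative name

with open(path, "wb") as f:
    f.truncate(size)  # declare the final length before writing any data
    # ...the real data would then be written into the reserved space...

print(os.path.getsize(path) == size)  # True
os.remove(path)
```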