Wednesday, February 4, 2009

Making Linux FEEL faster...

What we want is perceived performance
Take a file manager for example. Let's focus on Konqueror (it's a nice case study, and it's a nice file manager). Suppose I hit the home button on my panel, which shows one of the (usually hidden and prelaunched) Konqueror instances and prompts it to browse my home directory.

If you have lots of RAM, it won't be a problem -- both Konqueror and the contents of your home directory will be in memory, so it'll be blindingly fast. But if you're rather short of memory, it's a different matter -- what happens next determines whether you feel your computer slow or fast.

If Konqueror has been paged out, it will appear to be frozen (or take longer to "start up") for a couple of seconds, until Linux has paged necessary code paths in. If, on the contrary, my home directory has been evicted from the RAM cache, Konqueror will show up instantly and be responsive, while the home directory loads.

I'd much rather wait for the directory display than wait for Konq to unfreeze because it was paged out. The difference is that in the first scenario, I can close the window, use the menus, navigate among the window controls, change the URL, or abort the operation; in the second, I'm stuck until Linux decides to fully page in whatever Konq needs.

What we want is perceived performance, not throughput. It matters to me that I can manipulate my file manager half-a-second after I've hit the home button. It doesn't matter to me that, because of this preference, the home directory actually takes one second longer to finish displaying.

Variations of this pattern can be found everywhere: in file open dialogs, in multimedia applications with collection managers, basically everywhere an operation requires some sort of progress report.


The solution
There are two distinct and complementary measures we'll take to solve this problem.

Tuning swappiness to prevent impromptu RAM hijacking
Swappiness is the name Linux kernel developers gave to the preference between paging applications out to disk and (in practice) shrinking caches. If it's close to 0, Linux will prefer to keep applications in RAM and not grow the caches. If it's close to 100, Linux will prefer to swap applications out, and enlarge the caches as much as possible. The default is a healthy 60.

The irony of this preference is that, in fact, paging an unused application out generally produces a net performance increment, since the cache really helps a lot when it's needed -- but this net performance increment translates to a net drop in perceived performance, since you usually don't care whether a file uncompresses a few seconds later, but you do care (a lot) when your applications don't respond instantaneously.

On a desktop computer, you want swappiness to be as close to zero as possible. The reason you want this (even though it might hurt actual throughput) is that it immunizes your computer against memory shortages caused by temporary big-file manipulations (think copying a big video file to another disk). The cache will still grow as big as possible, but it won't displace running applications.

With swappiness turned down, the Linux kernel no longer attempts to enlarge the cache by paging applications out -- not unless you're experiencing a severe memory shortage.

To make the change:
sudo gedit /etc/sysctl.conf

Paste this at the end of the file:
vm.swappiness=10
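The edit above only takes effect at the next boot. As a minimal sketch (assuming a standard sysctl setup), you can check the current value and apply the new one immediately:

```shell
# Check the current swappiness value
cat /proc/sys/vm/swappiness

# Apply the new value right away, without rebooting
sudo sysctl -w vm.swappiness=10

# Or reload all settings from /etc/sysctl.conf at once
sudo sysctl -p
```

Either way, the line you added to /etc/sysctl.conf makes the change permanent across reboots.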


Filesystem caches are more important than other caches
We've already established that the filesystem cache is important because, without it, file browsing goes extremely slowly. Now let's tell Linux that we want it to prefer the inode/dentry cache over other caches.

Back to the terminal we go:
sudo gedit /etc/sysctl.conf

Paste this at the end of the file:
vm.vfs_cache_pressure=50

Values close to 100 provide no gain. Values close to zero can cause huge swap activity during big filesystem scans.
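As with swappiness, this is a sketch of how to check and apply the setting immediately (assuming a standard sysctl setup), rather than waiting for the next boot:

```shell
# Check the current cache pressure (the default is 100)
cat /proc/sys/vm/vfs_cache_pressure

# Apply the new value right away
sudo sysctl -w vm.vfs_cache_pressure=50
```

Lowering the value below 100 tells the kernel to reclaim inode/dentry cache less aggressively than other caches; per the warning above, don't push it near zero.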

Know this: these tips work well for me, but your mileage may vary. As always, if you don't understand any of what I've said here, just leave your system alone. There is always the possibility that something COULD go wrong (though it's very unlikely), so play with this at your own risk.

It's good practice to back up any file you are altering, prior to making the change and saving. If you back up the files mentioned and something goes wrong, it's always very easy to fix.

Good luck - and enjoy the improved speed of your desktop.

Disclaimer: The preceding information was taken from rudd-o.com. I have posted the edited content here for my own use, and for other family members to refer to.
