Using more and bigger textures can make your scenes look a lot better, but it comes at a price. At some point, you're going to run out of memory. Some clever texture management can help you reduce the resulting performance losses.
Imagine a PC with 64 MB of RAM and 32 MB of video RAM. With Win98 loaded, the PC might have about 35 MB of RAM left. Now suppose your application has a 10 MB memory footprint, textures not yet included. That leaves 25 MB.
Say you're running at 1024x768 in 32-bit color, with a 24-bit Z buffer and an 8-bit stencil buffer. The color buffer takes 1024 x 768 x 4 bytes = 3 MB, and the combined depth/stencil buffer (24 + 8 = 32 bits per pixel) takes another 3 MB, so the frame buffer uses 6 MB of video RAM. That leaves 26 MB for textures.
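To check that figure, the arithmetic can be wrapped in a few lines of C. The function name is mine, and it counts one color buffer plus the combined depth/stencil buffer:

```c
/* Bytes of video RAM used by the frame buffer: one color buffer plus a
   combined depth/stencil buffer, each given in bits per pixel. */
unsigned long framebuffer_bytes(unsigned long width, unsigned long height,
                                unsigned int color_bits,
                                unsigned int depth_bits,
                                unsigned int stencil_bits)
{
    unsigned long bits_per_pixel = color_bits + depth_bits + stencil_bits;
    return width * height * bits_per_pixel / 8;
}

/* framebuffer_bytes(1024, 768, 32, 24, 8) -> 6291456 bytes, i.e. 6 MB */
```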
Notice anything peculiar yet? Suppose you have 40 MB of textures. 26 MB of them will fit in video memory, but only 25 MB will fit in system RAM! Worse, if the 3D card now requests a texture that isn't currently in system memory, it has to wait while Windows fetches it from the swap file, which is painfully slow. Luckily, you can avoid the swap file altogether by doing your own explicit texture management.
The first thing to do is determine whether or not a texture is resident in video memory; glAreTexturesResident() will tell you. What you really want, though, is to avoid texture thrashing completely. You can do this by creating as many blank textures as will fit in video memory, and then streaming data into them yourself.
Let's take that one step at a time. Creating a blank texture of any given size is easy: create a new texture object and call glTexImage2D() with a NULL pixel pointer, which allocates the texture's storage without uploading any data. The less trivial part is deciding which sizes to create. Analyze your scene and see which texture resolutions it uses, and create blank textures at each of those resolutions. If a certain resolution, for example 128x128, is used very often, create several extra blank textures of that size. Keep creating blank textures until glAreTexturesResident() indicates that texture memory is full.
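As a sketch of that allocation pass, here's one way to plan how many blank textures of each size to create. The structure and function names are invented for this article, and a fixed byte budget stands in for the real stopping condition (glAreTexturesResident() reporting that a texture no longer fits in video memory):

```c
/* One entry per texture resolution found while analyzing the scene. */
struct size_class {
    int dim;   /* e.g. 128 for a 128x128 texture */
    int uses;  /* how many textures of this size the scene uses */
};

/* Decide how many blank textures of each size to create.  In the real
   renderer you would create them one at a time with glTexImage2D()
   (passing a NULL pixel pointer) and stop as soon as
   glAreTexturesResident() reports a texture fell out of video memory;
   here a fixed byte budget plays that role. */
void plan_blank_textures(const struct size_class *classes, int n,
                         long budget_bytes, int bytes_per_texel,
                         int *out_counts)
{
    long used = 0;
    int i, progress = 1;

    for (i = 0; i < n; i++)
        out_counts[i] = 0;

    /* Hand out textures round-robin across the size classes, so a
       heavily used size gets more blanks without starving the others. */
    while (progress) {
        progress = 0;
        for (i = 0; i < n; i++) {
            long tex_bytes = (long)classes[i].dim * classes[i].dim
                           * bytes_per_texel;
            if (out_counts[i] < classes[i].uses &&
                used + tex_bytes <= budget_bytes) {
                out_counts[i]++;
                used += tex_bytes;
                progress = 1;
            }
        }
    }
}
```

The round-robin keeps one large size class from eating the entire budget before the smaller ones get a turn.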
Now you can load textures into system RAM. Determine the amount of available memory with GlobalMemoryStatus(), and don't load more textures than will fit. Instead, set up a texture cache that streams textures in from disk as you need them. The most common replacement policy is LRU (Least Recently Used): textures that haven't been used in a while are swapped out first. Reloading textures yourself is much faster than letting Windows hit the swap file, and you can even do the loading in a separate thread.
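Here is a bare-bones version of such a cache to make the LRU policy concrete. The slot count and texture IDs are made up for illustration; a real cache would key on filenames, track byte sizes against the GlobalMemoryStatus() figure, and actually load the file on a miss:

```c
#define CACHE_SLOTS 4

/* Each slot remembers which texture it holds and when it was last used. */
struct cache_slot {
    int texture_id;      /* which texture lives here (-1 = empty) */
    unsigned long stamp; /* tick of last use */
};

static struct cache_slot slots[CACHE_SLOTS];
static unsigned long tick;

void cache_init(void)
{
    int i;
    for (i = 0; i < CACHE_SLOTS; i++) {
        slots[i].texture_id = -1;
        slots[i].stamp = 0;
    }
    tick = 0;
}

/* Request a texture; returns the slot it ends up in.  On a miss, the
   least recently used slot is recycled; this is where the real code
   would stream the texture in from disk, possibly on another thread. */
int cache_request(int texture_id)
{
    int i, victim = 0;

    tick++;
    for (i = 0; i < CACHE_SLOTS; i++) {
        if (slots[i].texture_id == texture_id) {  /* cache hit */
            slots[i].stamp = tick;
            return i;
        }
        if (slots[i].stamp < slots[victim].stamp) /* remember the LRU slot */
            victim = i;
    }
    slots[victim].texture_id = texture_id;        /* miss: evict the LRU */
    slots[victim].stamp = tick;
    return victim;
}
```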
The last step is getting the textures from system RAM to the video card. You already have a set of blank texture objects, so you can simply bind one and use glTexSubImage2D() to overwrite its pixel data; unlike glTexImage2D(), this reuses the texture's existing storage instead of allocating new memory. Just make sure you keep track of which texture objects are in use and which ones aren't.
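That bookkeeping can be as simple as a pool with an in-use flag per texture object. This is only a sketch with invented names: in real code, 'name' would come from glGenTextures(), and the caller would follow pool_acquire() with glBindTexture() and glTexSubImage2D():

```c
#define POOL_SIZE 8

/* One pre-created blank texture.  'name' stands in for the GL texture
   object name you got back from glGenTextures(). */
struct pool_entry {
    unsigned int name;
    int dim;      /* 64, 128, 256, ... */
    int in_use;
};

static struct pool_entry pool[POOL_SIZE];

void pool_add(int index, unsigned int name, int dim)
{
    pool[index].name = name;
    pool[index].dim = dim;
    pool[index].in_use = 0;
}

/* Claim a free blank texture of the requested size.  Returns 0 if none
   is left (0 is never a valid object name; GL reserves it for the
   default texture).  The caller then binds it and overwrites its pixels
   with glTexSubImage2D(), which allocates no new video memory. */
unsigned int pool_acquire(int dim)
{
    int i;
    for (i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use && pool[i].dim == dim) {
            pool[i].in_use = 1;
            return pool[i].name;
        }
    }
    return 0;
}

void pool_release(unsigned int name)
{
    int i;
    for (i = 0; i < POOL_SIZE; i++)
        if (pool[i].name == name)
            pool[i].in_use = 0;
}
```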
There you have it. Using this technique, you can be sure that you never allocate more memory than you actually have. By doing caching and swapping yourself, you can keep Windows from using the swap file. From what I gathered from the writings of Tim Sweeney, this technique is more or less what's used in the Direct3D version of Unreal.
Direct3D, you say? You're probably wondering why Unreal's OpenGL renderer doesn't just do the same thing. The answer, I'm afraid, is that it can't. The behavior of glAreTexturesResident(), which this whole technique depends on, is only loosely specified. The OpenGL specification says:
An implementation may choose to establish a working set of texture objects on which binding operations are performed with higher performance. A texture object that is currently part of the working set is said to be resident.
Nowhere does it say that resident textures have to be in video memory! NVidia's OpenGL drivers report actual video-memory residency, but the Matrox G400's, for example, don't. That makes the technique described in this article pretty much useless in practice, unless you're willing to put "NVidia 3D cards only" in your app's system requirements. To find out whether your card properly reports texture residency, download texswap.zip.
Oops. A perfectly good idea just went down the drain because of driver issues. Let's just hope that future drivers will be more thorough in their OpenGL support!