How to control texture/map loading under OpenGL

freyshaw 2003-07-10 03:54:47
The texture data is very large (stored in a multi-resolution format, nearly a gigabyte of JPEGs), so it cannot all be loaded into memory at once. Under OpenGL, is it possible to control texture loading/unloading manually?
maplexp 2003-07-10
Yes, it's possible. This article may help you:

Texture Management

Using more and bigger textures can make your scenes look a lot better, but it comes at a price. At some point, you're going to run out of memory. Some clever texture management can help you reduce the resulting performance losses.

Imagine a PC with 64 MB of RAM and 32 MB of video RAM. With Win98 loaded, the PC might have about 35 MB of RAM left. Now suppose that your application has a 10 MB memory footprint, textures not yet included. That leaves 25 MB.

If you're running 1024x768 in 32-bit color, with a 24-bit Z buffer and 8 bits of stencil, you're using 6 MB of video RAM for your frame buffer. That leaves 26 MB for textures.
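The arithmetic behind that 6 MB figure can be sketched as a quick check in plain C (the helper name is ours, the numbers are the article's):

```c
/* Bytes of video memory used by one frame buffer: a color plane plus a
   combined depth/stencil plane, at the given bytes per pixel each. */
unsigned long fb_bytes(unsigned long w, unsigned long h,
                       unsigned color_bytes, unsigned depth_stencil_bytes)
{
    return w * h * (color_bytes + depth_stencil_bytes);
}
```

For 1024x768 with 4 bytes of color and 4 bytes of depth/stencil per pixel, `fb_bytes(1024, 768, 4, 4)` comes to 6,291,456 bytes, which is exactly 6 MB, leaving 26 MB of a 32 MB card for textures.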

Notice anything peculiar yet? Suppose you have 40 MB of textures. 26 MB of them will fit in video memory, but only 25 MB will fit in system RAM! Moreover, if the 3D card now requests a texture that isn't currently in system memory, it will have to wait while Windows fetches it from the swap file. Needless to say, this takes painfully long. Luckily, the swap file can be avoided altogether by doing your own explicit texture management.

The first thing to do is determine whether or not a texture is in video memory. You can use glAreTexturesResident() to find out. What you really want to do is avoid texture thrashing completely. You can do this by creating as many blank textures as you can in video memory, and then streaming data into them yourself.
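A minimal sketch of that residency check, assuming a current OpenGL context and an array of already-created texture object ids (the helper name is ours):

```c
#include <GL/gl.h>

/* Count how many of the given texture objects the driver reports as
   resident. glAreTexturesResident() returns GL_TRUE when all of them
   are; otherwise it fills 'flags' with one GL_TRUE/GL_FALSE per id. */
int count_resident(GLsizei n, const GLuint *textures, GLboolean *flags)
{
    GLsizei i;
    int count = 0;
    if (glAreTexturesResident(n, textures, flags) == GL_TRUE)
        return (int)n;
    for (i = 0; i < n; ++i)
        if (flags[i] == GL_TRUE)
            ++count;
    return count;
}
```

This needs a live GL context to run, so treat it as a sketch rather than a drop-in routine.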

Let's take that one step at a time. You can create a blank texture of any given size quite easily. Just create a new texture object and use glTexImage2D() with a NULL pixel pointer, which allocates storage without uploading any data. The less trivial part is determining which size to give each texture. You should analyze your scene and see which texture resolutions are used in it. Create blank textures at all these resolutions. If a certain resolution, for example 128x128, is used very often, create several more blank textures of that size. Keep creating blank textures until glAreTexturesResident() indicates that texture memory is full.
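Creating one such blank texture might look like this (a sketch, assuming a current GL context; the GL_RGBA8 format and the filter setting are our choices, not the article's):

```c
#include <stddef.h>
#include <GL/gl.h>

/* Create a blank RGBA texture object of the given size. Passing NULL as
   the pixel pointer makes the driver allocate storage for the texture
   without copying any data into it. */
GLuint create_blank_texture(GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;
}
```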

Now you can load textures into system RAM. Determine the amount of available physical memory using the Win32 call GlobalMemoryStatus(). You don't want to load more textures than will fit, so set up a texture cache that streams textures from disk as you need them. The most common replacement policy is LRU (Least Recently Used), meaning that the textures that haven't been used for the longest time are swapped out first. Reloading the textures yourself will be much faster than letting Windows use the swap file. Additionally, you can perform the loading in a separate thread.
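The cache itself can be sketched in plain C. This is a deliberately tiny fixed-size table with a "last used" tick per slot; the names and the slot count are illustrative, and the actual disk load is left as a comment:

```c
/* Minimal LRU texture cache sketch: a fixed table of slots, each
   holding a texture id and the tick at which it was last touched. */
#define CACHE_SLOTS 4

typedef struct {
    int texture_id;            /* -1 means the slot is empty */
    unsigned long last_used;
} Slot;

typedef struct {
    Slot slots[CACHE_SLOTS];
    unsigned long tick;
} LruCache;

void cache_init(LruCache *c)
{
    int i;
    c->tick = 0;
    for (i = 0; i < CACHE_SLOTS; ++i) {
        c->slots[i].texture_id = -1;
        c->slots[i].last_used = 0;
    }
}

/* Touch a texture: return the index of the slot now holding it,
   loading it (and evicting the LRU entry) on a miss. */
int cache_touch(LruCache *c, int texture_id)
{
    int i, victim = -1;
    ++c->tick;
    for (i = 0; i < CACHE_SLOTS; ++i) {
        if (c->slots[i].texture_id == texture_id) {
            c->slots[i].last_used = c->tick;    /* cache hit */
            return i;
        }
    }
    /* miss: pick an empty slot, else the least recently used one */
    for (i = 0; i < CACHE_SLOTS; ++i) {
        if (c->slots[i].texture_id == -1) { victim = i; break; }
        if (victim == -1 ||
            c->slots[i].last_used < c->slots[victim].last_used)
            victim = i;
    }
    /* a real cache would stream the texture in from disk here */
    c->slots[victim].texture_id = texture_id;
    c->slots[victim].last_used = c->tick;
    return victim;
}
```

In a real engine the slot would also carry the pixel data and the disk load would run on the loader thread; the eviction logic stays the same.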

The last step is to get the textures from system RAM to the video card. You already have some texture objects, so you can just bind one and use glTexSubImage2D() to overwrite its pixel data. Just make sure you keep track of which texture objects are in use and which ones aren't.
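The upload step, again assuming a current GL context and a blank texture object created at a matching size (the function name is ours):

```c
#include <GL/gl.h>

/* Overwrite the contents of an existing texture object with freshly
   loaded pixel data. glTexSubImage2D() reuses the storage already
   allocated for the object, so no new video memory is claimed. */
void stream_into_texture(GLuint tex, GLsizei w, GLsizei h,
                         const void *rgba_pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,    /* target, mip level       */
                    0, 0, w, h,          /* replace the whole image */
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
}
```

Bookkeeping of which texture object currently holds which cached image is up to you.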

There you have it. Using this technique, you can be sure that you never allocate more memory than you actually have. By doing caching and swapping yourself, you can keep Windows from using the swap file. From what I gathered from the writings of Tim Sweeney, this technique is more or less what's used in the Direct3D version of Unreal.

Direct3D, you say? You're probably wondering why Unreal's OpenGL renderer doesn't just do the same, then. The answer, I'm afraid, is that it can't. The implementation of the glAreTexturesResident() function, which is so critical for this technique, has not been properly standardized. The OpenGL specification says:


An implementation may choose to establish a working set of texture objects on which binding operations are performed with higher performance. A texture object that is currently part of the working set is said to be resident.

Nowhere does it say that resident textures have to be in video memory! NVidia's OpenGL drivers implement glAreTexturesResident() correctly, but the Matrox G400, for example, doesn't. This makes the technique described in this article pretty much useless - unless you want to put "NVidia 3D cards only" in your app's system requirements. To find out if your card can properly report texture residency, download texswap.zip.

Oops. A perfectly good idea just went down the drain because of driver issues. Let's just hope that future drivers will be more thorough in their OpenGL support!
