concentrate on C/C++ related technology

plan, refactor, daily-build, self-discipline

Direct3D ---- HAL ---- graphics device.
REF device: the reference rasterizer, which emulates the whole of Direct3D in software.
this allows you to write and test code that uses Direct3D features that are not available on your hardware.
surface: a matrix of pixels that Direct3D uses primarily to store 2D image data.
although we visualize the surface data as a matrix, the pixel data is actually stored in a linear array.
the width and height of a surface are measured in pixels.
IDirect3DSurface9 includes several methods:
1) LockRect: obtain a pointer to the surface memory.
2) UnlockRect: every LockRect call must be paired with an UnlockRect call.
3) GetDesc: retrieve a description of the surface by filling out a D3DSURFACE_DESC structure.
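since the pixel data behind a locked surface is a linear array, indexing it requires the pitch returned by LockRect. a minimal sketch in plain C++ (the helper name pixelOffset is mine, for illustration only):

```cpp
#include <cstddef>

// Byte offset of pixel (row i, column j) in a locked surface's linear memory.
// The pitch is the number of bytes per row, which can be larger than
// width * bytesPerPixel because of hardware alignment -- so never assume
// pitch == width * bytesPerPixel.
inline std::size_t pixelOffset(std::size_t i, std::size_t j,
                               std::size_t pitch, std::size_t bytesPerPixel)
{
    return i * pitch + j * bytesPerPixel;
}
```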

multisampling: smooths out the blocky-looking images that can result from representing an image as a matrix of pixels.

we often need to specify the pixel format of Direct3D resources when we create a surface or texture.
the format of a pixel is defined by specifying a member of the D3DFORMAT enumerated type.
D3DFMT_R8G8B8, D3DFMT_X8R8G8B8, and D3DFMT_A8R8G8B8 are widely supported.
D3DPOOL_DEFAULT: instructs Direct3D to place the resource in the memory that is best suited for the resource type and its usage.
this may be video memory, AGP memory, or system memory.

D3DPOOL_MANAGED: resources placed in the managed pool are managed by Direct3D (that is, they are moved to video or AGP memory as needed),
and a back-up copy of each resource is maintained in system memory.
when resources are accessed and changed by the application, they work with the system-memory copy;
Direct3D then automatically updates them to video memory as needed.

D3DPOOL_SYSTEMMEM:specifies that the resource be placed in system memory.

D3DPOOL_SCRATCH: specifies that the resource be placed in system memory. the difference between this pool
and D3DPOOL_SYSTEMMEM is that scratch resources are not bound by the graphics device's restrictions.

Direct3D maintains a collection of surfaces, usually two or three,
called a swap chain that is represented by the IDirect3DSwapChain9 interface.

swap chains and more specifically,
the technique of page flipping are used to provide smooth animation between frames.
Front buffer: the contents of this buffer are currently being displayed by the monitor.
Back buffer: the frame currently being processed is rendered to this buffer.

the application's frame rate is often out of sync with the monitor's refresh rate.
we do not want to update the contents of the front buffer with the next frame of animation
until the monitor has finished drawing the current frame,
but we do not want to halt our rendering while waiting for the monitor to
finish displaying the contents of the front buffer either.

we render to an off-screen surface (the back buffer); then, when the monitor is done displaying the surface in the front buffer, we move it to the end of the swap chain and the next back buffer in the swap chain is promoted to be the front buffer.
this process is called presenting.

the depth buffer is a surface that does not contain image data but rather depth information about a particular pixel.
there is an entry in the depth buffer that corresponds to each pixel in the final rendered image.

In order for Direct3D to determine which pixels of an object are in front of another,
it uses a technique called depth buffering or z-buffering.

depth buffering works by computing a depth value for each pixel and performing a depth test.
the pixel with the depth value closest to the camera wins, and that pixel gets written.
a 24-bit depth buffer is more accurate than a 16-bit one.
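the depth test can be sketched in a few lines of plain C++ (DepthBuffer is my illustration, not a Direct3D type); the buffer starts at 1.0 (the far plane) and a pixel is written only when its depth is smaller than the stored value:

```cpp
#include <vector>
#include <cstddef>

struct DepthBuffer {
    std::vector<float> depth;
    explicit DepthBuffer(std::size_t n) : depth(n, 1.0f) {} // 1.0f = far plane
    // Returns true when the new depth is closer than the stored one,
    // in which case the pixel wins and the buffer is updated.
    bool testAndWrite(std::size_t i, float z) {
        if (z < depth[i]) { depth[i] = z; return true; }
        return false;
    }
};
```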

software vertex processing is always supported and can always be used.
hardware vertex processing can only be used if the graphics card supports vertex processing in hardware.

in our application, we can check if a device supports a feature
by checking the corresponding data member or bit in the D3DCAPS9 instance.

initializing Direct3D:
1) Acquire a pointer to an IDirect3D9 interface.
2) check the device capabilities(D3DCAPS9) to see
if the primary display adapter(primary graphics card)support hardware vertex processing, or transformation & Light.
3) Initialize an instance of D3DPRESENT_PARAMETERS.
4) Create the IDirect3DDevice9 object based on an initialized D3DPRESENT_PARAMETERS structure.

1) Direct3DCreate9(D3D_SDK_VERSION); passing D3D_SDK_VERSION guarantees that the application is built against the correct header files.
IDirect3D9 object is used for two things: device enumeration and creating the IDirect3DDevice9 object.
device enumeration refers to finding out the capabilities, display modes,
formats, and other information about each graphics device available on the system.

2) check the D3DCAPS9 structure.
use caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT to check whether the display card supports hardware transform and lighting, and therefore which type of vertex processing to use.

3) Fill out the D3DPRESENT_PARAMETERS structure.

4)Create the IDirect3DDevice9 interface.

indexed geometry works like this:
we create a vertex list and an index list; the vertex list consists of all the unique vertices,
and the index list contains values that index into the vertex list to
define how they are to be put together to form triangles.

the camera specifies what part of the world the viewer can see
and thus the part of the world for which we need to generate a 2D image.

this volume of space is a frustum, defined by the field of view angles and the near and far planes.

the projection window is the 2d area
that the 3d geometry inside the frustum gets projected onto to create the 2D image representation of the 3D scene.

local space, or modeling space, is the coordinate system in which we define an object's triangle list.

objects in local space are transformed to world space through a process called the world transform,
which usually consists of translations, rotations,
and scaling operations that set the position, orientation, and size of the model in the world.

projection and other operations are difficult or less efficient
when the camera is at an arbitrary position and orientation in the world.
to make things easier, we transform the camera to the origin of the world system and rotate it
so that the camera is looking down the positive z-axis.

all geometry in the world is transformed along with the camera
so that the view of the world remains the same. this transformation is called the view space transformation.

Direct3D takes advantage of this by culling (discarding from further processing) the back-facing polygons;
this is called backface culling.

by default, Direct3D treats triangles whose vertices are specified in a clockwise winding order (in view space) as front facing.
triangles whose vertices are specified in a counterclockwise winding order (in view space) are considered back facing.
Lighting sources are defined in world space but transformed into view space by the view space transformation.
in view space these light sources are applied to light the objects in the scene to give a more realistic appearance.

we need to cull the geometry that is outside the viewing volume, this process is called clipping.

in view space we have the task of obtaining a 2d representation of the 3D scene.

the process of going from n dimension to an n-1 dimension is called projection.

there are many ways of performing a projection, but we are interested in a particular way called perspective projection.
a perspective projection projects geometry in such a way that foreshortening occurs.
this type of projection allows us to represent a 3D scene on a 2D image.

the projection transformation defines our viewing volume(frustum) and
is responsible for projecting the geometry in the frustum onto the projection window.

the viewport transform is responsible for transforming coordinates on the projection window to a rectangle on the screen,
which we call the viewport.

a vertex buffer is simply a chunk of contiguous memory that contains vertex data (IDirect3DVertexBuffer9).
an index buffer is a chunk of contiguous memory that contains index data (IDirect3DIndexBuffer9).

once we have created a vertex buffer and, optionally, an index buffer,
we are almost ready to render their contents, but there are three steps that must be taken first:
1) Set the stream source (SetStreamSource). setting the stream source hooks up a vertex buffer to a stream that essentially feeds geometry into the rendering pipeline.
2) Set the vertex format (SetFVF).
3) Set the index buffer (SetIndices).


#define D3DCOLOR_XRGB(r,g,b) D3DCOLOR_ARGB(0xff,r,g,b)

typedef struct _D3DCOLORVALUE {
    float r;
    float g;
    float b;
    float a;
} D3DCOLORVALUE;

each component ranges from 0.0f to 1.0f.
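the 32-bit packed color layout behind these macros can be restated in plain C++ (argb/xrgb below are my re-implementations for illustration, not the SDK macros):

```cpp
#include <cstdint>

// Pack four 8-bit channels into a 32-bit ARGB value, high byte = alpha,
// mirroring what D3DCOLOR_ARGB expands to.
inline std::uint32_t argb(std::uint32_t a, std::uint32_t r,
                          std::uint32_t g, std::uint32_t b)
{
    return ((a & 0xff) << 24) | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
}

// Opaque variant, mirroring D3DCOLOR_XRGB(r,g,b): alpha forced to 0xff.
inline std::uint32_t xrgb(std::uint32_t r, std::uint32_t g, std::uint32_t b)
{
    return argb(0xff, r, g, b);
}
```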

shading occurs during rasterization and specifies
how the vertex colors are used to compute the pixel colors that make up the primitive.

with flat shading, the pixels of a primitive are uniformly colored by the color specified
in the first vertex of the primitive.

with gouraud shading, the colors at each vertex are interpolated linearly across the face of the primitive.
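Gouraud shading can be sketched in miniature in plain C++ (Color and gouraud are my illustration names): each pixel's color is a weighted average of the triangle's three vertex colors, with barycentric weights that sum to 1.

```cpp
struct Color { float r, g, b; };

// Interpolate a triangle's vertex colors at barycentric weights (w0, w1, w2),
// where w0 + w1 + w2 = 1 -- this is what Gouraud shading does per pixel.
inline Color gouraud(const Color& c0, const Color& c1, const Color& c2,
                     float w0, float w1, float w2)
{
    return { w0*c0.r + w1*c1.r + w2*c2.r,
             w0*c0.g + w1*c1.g + w2*c2.g,
             w0*c0.b + w1*c1.b + w2*c2.b };
}
```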

in the Direct3D lighting model, the light emitted by a light source consists of three components, or three kinds of light:
ambient light:
this kind of light models light that has reflected off other surfaces and is used to brighten up the overall scene.
diffuse light:
this type of light travels in a particular direction. when it strikes a surface, it reflects equally in all directions.

since diffuse light reflects equally in all directions, the reflected light will reach the eye no matter the viewpoint,
and therefore we do not need to take the viewer into consideration. thus,
the diffuse lighting equation needs only to consider the light direction and the attitude of the surface.
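the core of that equation is a clamped dot product between the surface normal and the direction toward the light. a minimal sketch in plain C++ (Vec3 and the helper names are mine, not D3DX types):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
inline float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Diffuse factor: depends only on the surface normal n and the unit
// direction toward the light l; the viewer position plays no part.
inline float diffuseFactor(const Vec3& n, const Vec3& l)
{
    return std::max(0.0f, dot(n, l)); // clamp: surfaces facing away get 0
}
```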

specular light: when it strikes a surface, it reflects sharply in one direction,
causing a bright shine that can only be seen from some angles.

since the light reflects in one direction,
clearly the viewpoint,
in addition to the light direction and surface attitude,
must be taken into consideration in the specular lighting equation.
specular light is used to model the highlights on objects:
the bright shines created when light strikes a polished surface.

the material allows us to define the percentage at which light is reflected from the surface.

a face normal is a vector that describes the direction a polygon is facing.

Direct3D needs to know the vertex normals so that it can determine the angle at which light strikes a surface,
and since lighting calculations are done per vertex,
Direct3D needs to know the surface orientation per vertex.

Direct3D supports three types of light sources:

point lights: the light source has a position in world space and emits light in all directions.

directional lights: the light source has no position but shoots parallel rays of light in the specified direction.

spot lights: it has position and shines light through a conical shape in a particular direction.

the cone is characterized by two angles, theta and phi; theta describes an inner cone, and phi describes an outer cone.

texture mapping is a technique that allows us to map image data onto triangles.

D3DFVF_TEX1: our vertex structure contains one pair of texture coordinates.

D3DXCreateTextureFromFile: loads a texture from a file; it can load BMP, DDS, DIB, JPG, PNG, and TGA files.

SetTexture: set the current texture.
Filtering is a technique that Direct3D uses to help smooth out the distortions
that occur when a texture is magnified or minified; the corresponding sampler states are D3DSAMP_MAGFILTER and D3DSAMP_MINFILTER.

nearest point sampling:
the default filtering method; it produces the worst-looking results but is the fastest to compute.
linear filtering:
produces fairly good results and can be fast on today's hardware.
anisotropic filtering:
produces the best results but takes the longest time to compute.
the anisotropy level must also be set, via the D3DSAMP_MAXANISOTROPY sampler state; the maximum supported level is reported in the device caps.
the idea behind mipmaps is to take a texture and
create a series of smaller, lower resolution textures,
but customize the filtering for each of these levels so it preserves the detail that is important to us.

the mipmap filter is used to control how Direct3D uses the mipmaps.

D3DTEXF_NONE: Disable mipmapping
D3DTEXF_POINT: Direct3D will choose the level that is closest in size to that triangle.
D3DTEXF_LINEAR: Direct3D will choose two closest levels, filter each level with the min and mag filters,
and linearly combine these two levels to form the final color values.

a mipmap chain is created automatically by the D3DXCreateTextureFromFile function if the device supports mipmapping.

blending allows us to blend pixels that
we are currently rasterizing with pixels
that have been previously rasterized to the same location.

in other words, we blend primitives over previously drawn primitives.

the idea of combining the pixel values that are currently being computed(source pixel)
with pixel values previously written(destination pixel) is called blending.
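with the default factors described below, the blending equation per channel is Output = Source * srcAlpha + Dest * (1 - srcAlpha). a one-line sketch in plain C++ (blendChannel is my illustration name):

```cpp
// Alpha blending with the default factors D3DBLEND_SRCALPHA /
// D3DBLEND_INVSRCALPHA, shown for a single color channel:
// output = source * srcAlpha + destination * (1 - srcAlpha).
inline float blendChannel(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```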

you can enable blending by setting the render state D3DRS_ALPHABLENDENABLE to true.

you can set the source blend factor and destination blend factor by setting D3DRS_SRCBLEND and D3DRS_DESTBLEND.

the default values for the source blend factor and destination blend factor are D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA.

the alpha component is mainly used to specify the level of transparency of a pixel.

in order to make the alpha component describe the level of transparency of each pixel,
we can obtain the alpha info from a texture's alpha channel.

the alpha channel is an extra set of bits reserved for each texel that stores an alpha component.

when the texture is mapped to a primitive, the alpha components in the alpha channel are also mapped,
and they become the alpha components for the pixels of the textured primitive.

the DDS file is an image format specifically designed for DirectX applications and textures.

the stencil buffer is an off-screen buffer that we can use to achieve special effects.
the stencil buffer has the same resolution as the back buffer and depth buffer,
so the ij-th pixel in the stencil buffer corresponds with the ij-th pixel in the back buffer and depth buffer.

to use the stencil buffer, we enable it with Device->SetRenderState(D3DRS_STENCILENABLE, true).
we can clear the stencil buffer with Device->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER | D3DCLEAR_STENCIL, 0xff000000, 1.0f, 0);
this clears the stencil buffer as well as the render target (back buffer) and depth buffer.

a stencil buffer can be created at the time that we create the depth buffer.
when specifying the format of the depth buffer,we can specify the format of stencil buffer at the same time.
in actuality, the stencil buffer and depth buffer share the same off-screen surface buffer.
but a segment of memory in each pixel is designated to each particular buffer.

we can use the stencil buffer to block rendering to certain areas of the back buffer.
the decision to block a particular pixel from being written is decided by stencil test.
the test is performed for every pixel.
(ref & mask) ComparisonOperator (value & mask)
ref: application-defined reference value.
mask: application-defined mask value.
value: the pixel in the stencil buffer that we want to test.
if the test evaluates to be false, we block the pixel from being written to the back buffer.
if a pixel isn't written to the back buffer, it isn't written to the depth buffer either.
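the test above is just a masked comparison. a sketch in plain C++ for the "equal" comparison (stencilTestEqual is my illustration name; the other D3DCMPFUNC members swap the operator):

```cpp
#include <cstdint>

// The stencil test: (ref & mask) ComparisonOperator (value & mask),
// shown for D3DCMP_EQUAL. The pixel is written only when this returns true.
inline bool stencilTestEqual(std::uint32_t ref, std::uint32_t value, std::uint32_t mask)
{
    return (ref & mask) == (value & mask);
}
```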

we can set the stencil reference value by Device->SetRenderState(D3DRS_STENCILREF,0x1122);
we can set the stencil mask value by Device->SetRenderState(D3DRS_STENCILMASK,0x1215);
the default is 0xffffffff, which doesn't mask any bits.

we cannot explicitly set the individual stencil values, but recall that we can clear the stencil buffer.
in addition, we can use the stencil render state to control what's written to the stencil buffer.

the comparison operation can be any member of the D3DCMPFUNC enumerated type.

in addition to deciding whether to write or block a particular pixel from being written to the back buffer,
we can specify how the stencil buffer should be updated.
we can set a write mask that masks off bits of any value that we want to write to the stencil buffer;
we set this with the D3DRS_STENCILWRITEMASK render state.

the stencil buffer allows us to block rendering to certain areas on the back buffer.

we can use the stencil buffer to block the rendering of the reflected teapot if it is not being rendered into the mirror.

parallel (directional) light shadow:
r(t) = p + tL  (1)
n.p + d = 0    (2)
L: defines the direction of the parallel light rays.
the set of intersection points found by
shooting r(t) through each of the object's vertices against the plane
defines the geometry of the shadow:
s = p + [(-d - n.p)/(n.L)]L

point light shadow:
r(t) = p + t(p - L)  (1)
n.p + d = 0          (2)
L: defines the position of the point light.
the set of intersection points found by
shooting r(t) through each of the object's vertices against the plane
defines the geometry of the shadow.
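the directional-light projection above can be checked in plain C++ (Vec3 and the helper names are mine, not D3DX types); s = p - [(n.p + d)/(n.L)]L, which is the same expression as s = p + [(-d - n.p)/(n.L)]L:

```cpp
struct Vec3 { float x, y, z; };
inline float dot(const Vec3& a, const Vec3& b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
inline Vec3 sub(const Vec3& a, const Vec3& b){ return {a.x-b.x, a.y-b.y, a.z-b.z}; }
inline Vec3 scale(const Vec3& v, float s){ return {v.x*s, v.y*s, v.z*s}; }

// Project vertex p onto the plane n.x + d = 0 along the light direction L:
// s = p - [(n.p + d)/(n.L)] L (directional-light shadow).
inline Vec3 projectToPlane(const Vec3& p, const Vec3& n, float d, const Vec3& L)
{
    float t = (dot(n, p) + d) / dot(n, L);
    return sub(p, scale(L, t));
}
```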

a shadow matrix can be obtained from D3DXMatrixShadow.
using the stencil buffer, we can prevent writing overlapping pixels and therefore avoid double-blending artifacts.

ID3DXFont is used to draw text in a Direct3D application.

we can create an ID3DXFont interface using the D3DXCreateFontIndirect function.
also we can use D3DXCreateFont function to obtain a pointer to an ID3DXFont interface.

the ID3DXFont and CD3DFont samples for this chapter compute and display the frames rendered per second (FPS).

CD3DFont can be a simple alternative for fonts, though it doesn't support complex formats and font types.
to use the CD3DFont class, we should include the d3dfont, d3dutil, and dxutil header/source files.

a CD3DFont object is created through its constructor,
and then we can use its member functions, such as DrawText.

D3DXCreateText can also create text, as a 3D mesh.

the ID3DXBaseMesh interface contains a vertex buffer that stores the vertices of the mesh and an index buffer
that defines how these vertices are put together to form the triangles of the mesh.
there are also these related methods: LockVertexBuffer/LockIndexBuffer, UnlockVertexBuffer/UnlockIndexBuffer.


a mesh consists of one or more subsets.
a subset is a group of triangles in the mesh that can all be rendered using the same attribute.
by attribute we mean material, texture, and render states.
each triangle in the mesh is given an attribute ID that specifies the subset in which the triangle lives.

the attribute IDs for the triangles are stored in the mesh's attribute buffer, which is a DWORD array.
since each face has an entry in the attribute buffer,
the number of elements in the attribute buffer is equal to the number of faces in the mesh.
the entries in the attribute buffer and the triangles defined in the index buffer have a one-to-one correspondence.
that is, entry i in the attribute buffer corresponds with triangle i in the index buffer.
we can access the attribute buffer with LockAttributeBuffer and UnlockAttributeBuffer.

the ID3DXMesh interface provides the DrawSubset(DWORD AttribId) method to
draw the triangles of the particular subset specified by the AttribId argument.
when we want to optimize a mesh, we can use the OptimizeInplace method.

// get the adjacency info of the non-optimized mesh.
// (a std::vector is used here because variable-length arrays are not standard C++.)
std::vector<DWORD> adjacencyInfo(Mesh->GetNumFaces() * 3);

// array to hold the optimized adjacency info.
std::vector<DWORD> optimizedAdjacencyInfo(Mesh->GetNumFaces() * 3);

a similar method is the Optimize method,
which outputs an optimized version of the calling mesh object rather than actually optimizing the calling mesh object.

when a mesh is optimized with the D3DXMESHOPT_ATTRSORT flag,
the geometry of the mesh is sorted by attribute
so that the geometry of a particular subset exists as a contiguous block in the vertex/index buffers.

In addition to sorting the geometry,
the D3DXMESHOPT_ATTRSORT optimization builds an attribute table.
the attribute table is an array of D3DXATTRIBUTERANGE structures.

Each entry in the attribute table corresponds to a subset of the mesh and
specifies the block of memory in the vertex/index buffers,
where the geometry for the subset resides.

to access the attribute table of a mesh, we can use the GetAttributeTable method.
the method can return the number of attributes in the attribute table or
it can fill an array of D3DXATTRIBUTERANGE structures with the attribute data.
to get the number of elements in the attribute table, we pass 0 for the first argument:
DWORD numSubsets = 0;
Mesh->GetAttributeTable(0, &numSubsets);
once we know the number of elements, we can fill a D3DXATTRIBUTERANGE array with the actual attribute table.
we can also set the attribute table directly with SetAttributeTable.

the adjacency array is a DWORD array with three entries per triangle, where each entry contains an index identifying a neighboring triangle in the mesh.

GenerateAdjacency can also output the adjacency info:
std::vector<DWORD> adjacencyInfo(Mesh->GetNumFaces() * 3);

sometimes we need to copy the data from one mesh to another.
this is accomplished with the ID3DXBaseMesh::CloneMeshFVF method.
this method allows the creation options and flexible vertex format of the destination mesh to be different from those of the source mesh.
for example:
ID3DXMesh* clone = 0;
Mesh->CloneMeshFVF(Mesh->GetOptions(), Mesh->GetFVF(), Device, &clone);

we can also create an empty mesh using the D3DXCreateMeshFVF function.
by empty mesh, we mean that we specify the number of faces and vertices that we want the mesh to be able to hold.
then D3DXCreateMeshFVF allocates the appropriately sized vertex, index, and attribute buffers. once we have the mesh's buffers allocated, we manually fill in the mesh's data contents;
that is, we must write the vertices, indices, and attributes to the vertex buffer, index buffer, and attribute buffer, respectively.

alternatively, you can create an empty mesh with the D3DXCreateMesh function.

the ID3DXBuffer interface is a generic data structure that D3DX uses to store data in a contiguous block of memory.
GetBufferPointer: returns a pointer to the start of the data.
GetBufferSize: returns the size of the buffer in bytes.

loading an .X file: D3DXLoadMeshFromX.

D3DXComputeNormals generates the vertex normals for any mesh by using normal averaging.
if adjacency information is provided, then duplicated vertices are disregarded.
if adjacency info is not provided,
then duplicated vertices have normals averaged from the faces that reference them.

ID3DXPMesh allows us to simplify a mesh by applying a sequence of edge collapse transformations (ECTs).
each ECT removes one vertex and one or two faces.
because each ECT is invertible (its inverse is called a vertex split),
we can reverse the simplification process and restore the mesh to its exact original state.

we would end up spending time rendering a high triangle count model when a simpler low triangle count model would suffice.
we can create an ID3DXPMesh object with the D3DXGeneratePMesh function.

the attribute weights are used to determine the chance that a vertex is removed during simplification.
the higher a vertex weight, the less chance it has of being removed during simplification.

one way that we can use progressive meshes is to adjust the LOD (level of detail) of a mesh based on its distance from the camera.

the vertex weight structure allows us to specify a weight for each possible component of a vertex.

bounding boxes/spheres are often used to speed up visibility tests and collision tests, among other things.

a more efficient approach would be to compute the bounding box/sphere of each mesh and then do one ray/box or ray/sphere intersection test per object.
we can then say that the object is hit if the ray intersected its bounding volume.

since the right, up, and look vectors define the camera's orientation in the world, we sometimes refer to all three as the orientation vectors. the orientation vectors must be orthonormal.
a set of vectors is orthonormal if they are mutually perpendicular to each other and of unit length.

an orthogonal matrix has the property that its inverse equals its transpose.

each time this function is called, we recompute the up and right vectors with respect to the look vector to ensure that they are mutually orthogonal to each other.
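that recomputation is a small Gram-Schmidt-style step: derive right from up x look, then up from look x right. a sketch in plain C++ (Vec3 and the helper names are mine, not D3DX types):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
inline float dot(const Vec3& a, const Vec3& b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
inline Vec3 cross(const Vec3& a, const Vec3& b){
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
inline Vec3 normalize(const Vec3& v){
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Rebuild an orthonormal camera basis from a (possibly drifted) look vector
// and an approximate up vector: right = up x look, then up = look x right.
inline void reorthonormalize(Vec3& right, Vec3& up, Vec3& look)
{
    look  = normalize(look);
    right = normalize(cross(up, look));
    up    = cross(look, right); // unit length already: look and right are unit and perpendicular
}
```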

pitch, or rotate the up and look vectors around the camera's right vector.
Yaw, or rotate the look and right vectors around the camera's up vector.
Roll, or rotate the up and right vectors around the camera's look vector.

walking means moving in the direction that we are looking(along the look vector).
strafing is moving side to side from the direction we are looking, which is of course moving along the right vector.
flying is moving along the up vector.

the AIRCRAFT model allows us to move freely through space and gives us six degrees of freedom.
however, in some games, such as a first-person shooter, people can't fly;

a heightmap is an array where each element specifies the height of a particular vertex in the terrain grid.
one common graphical representation of a heightmap is a grayscale map, where darker values represent portions of the terrain with lower altitude and whiter values represent portions of the terrain with higher altitude.
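a heightmap lookup is just a row-major index plus a scale from gray values to world units. a minimal sketch in plain C++ (HeightMap and heightAt are my illustration names):

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// 8-bit grayscale heightmap: each entry is the height of one terrain vertex;
// heightScale maps gray values (0..255) to world units.
struct HeightMap {
    std::size_t width;                  // vertices per row
    std::vector<std::uint8_t> data;     // row-major, width * depth entries
    float heightScale;

    float heightAt(std::size_t row, std::size_t col) const {
        return data[row * width + col] * heightScale;
    }
};
```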

a particle is a very small object that is usually modeled as a point mathematically.
programmers would use a billboard to display a particle; a billboard is a quad whose world matrix orients it so that it always faces the camera.
Direct3D 8.0 introduced a special point primitive called a point sprite that is most applicable to particle systems.
point sprites can have textures mapped to them and can change size. we can describe a point sprite by a single point; this saves memory and processing time because we only have to store and process one vertex instead of the four needed for a billboard (quad).
we can add a field to the particle vertex structure to specify the size of the particle with the flag D3DFVF_PSIZE.

the behavior of the point sprites is largely controlled through render states.

the formula below is used to calculate the final size of a point sprite based on distance and these constants:
FinalSize = ViewportHeight * Size * sqrt(1 / (A + B*D + C*D^2))
FinalSize: the final size of the point sprite after the distance calculations.
ViewportHeight: the height of the viewport.
Size: corresponds to the value specified by the D3DRS_POINTSIZE render state.
A, B, C: correspond to the values specified by D3DRS_POINTSCALE_A, D3DRS_POINTSCALE_B, and D3DRS_POINTSCALE_C.
D: the distance of the point sprite in view space to the camera's position. since the camera is positioned at the origin in view space, this value is D = sqrt(x^2 + y^2 + z^2), where (x, y, z) is the position of the point sprite in view space.
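the formula above translates directly into code; a sketch in plain C++ (pointSpriteSize is my illustration name, not an SDK function):

```cpp
#include <cmath>

// FinalSize = ViewportHeight * Size * sqrt(1 / (A + B*D + C*D^2)),
// where D is the view-space distance from the sprite to the camera.
inline float pointSpriteSize(float viewportHeight, float size,
                             float A, float B, float C, float D)
{
    return viewportHeight * size * std::sqrt(1.0f / (A + B*D + C*D*D));
}
```

note how A = 1, B = C = 0 disables distance attenuation entirely, while a nonzero C makes the sprite shrink with the square of the distance.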

the attributes of a particle are specific to the particular kind of particle system that we are modeling.
the particle system is responsible for updating, displaying, killing, and creating particles.
we use the D3DUSAGE_POINTS flag to specify that the vertex buffer will hold point sprites.
we also use D3DUSAGE_DYNAMIC when creating the vertex buffer because we need to update our particles every frame.

therefore, once we compute the picking ray, we can iterate through each object in the scene and test if the ray intersects it. the object that the ray intersects is the object that was picked by the user.

when using a picking algorithm, we need to know which object was picked and its location in 3D space.

screen to projection window transform:
the first task is to transform the screen point to the projection window.
the viewport transformation matrix is:
[ width/2  0  0 0]
[0 -height/2 0 0 ]
[0 0 MaxZ - MinZ 0]
[x+(width/2) y+(height/2) MinZ 1]
transforming a point p = (px,py,pz) on the projection window by the viewport transformation yields the screen point s = (sx,sy):
sx = px(width/2) +  x + width/2
sy = -py(height/2) + y + height/2.
recall that the z-coordinate after the viewport transformation is not stored as part of the 2D image but is stored in the depth buffer.

assuming the X and Y members of the viewport are 0, and letting P be the projection matrix, and since entries P00 and P11 of the projection matrix scale the x and y coordinates of a point, we get:
px = (2x/viewportWidth - 1)(1/P00)
py = (-2y/viewportHeight + 1)(1/P11)
pz = 1
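the inverse viewport transform above can be written as a tiny function in plain C++ (ProjPoint and screenToProjWindow are my illustration names):

```cpp
struct ProjPoint { float px, py, pz; };

// Map a screen point (sx, sy) back onto the projection window, assuming the
// viewport's X and Y are 0, dividing out the projection scales P00 and P11:
//   px = ( 2*sx/width  - 1) / P00
//   py = (-2*sy/height + 1) / P11
//   pz = 1  (the projection window lies at z = 1 in view space)
inline ProjPoint screenToProjWindow(float sx, float sy,
                                    float width, float height,
                                    float P00, float P11)
{
    return { (2.0f * sx / width  - 1.0f) / P00,
             (-2.0f * sy / height + 1.0f) / P11,
             1.0f };
}
```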
computing the picking ray:
recall that a ray can be represented by the parametric equation p(t) = p0 + tu, where p0 is the origin of the ray describing its position and u is a vector describing its direction.
transforming rays:
in order to perform a ray-object intersection test, the ray and the objects must be in the same coordinate system. rather than transform all the objects into view space, it is often easier to transform the picking ray into world space or even an object's local space.
D3DXVec3TransformCoord : transform points.
D3DXVec3TransformNormal: transform vectors.
for each object in the scene, iterate through its triangle list and test whether the ray intersects one of the triangles; if it does, it must have hit the object that the triangle belongs to.
the picking ray may intersect multiple objects. however, the object closest to the camera is the object that was picked, since the closer object would have obscured the object behind it.
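combined with the bounding-sphere idea from earlier, "pick the closest hit" can be sketched in plain C++ (Vec3, Sphere, and the function names are mine, not D3DX types; the ray direction u is assumed unit length):

```cpp
#include <cmath>
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };
inline Vec3 sub(const Vec3& a, const Vec3& b){ return {a.x-b.x, a.y-b.y, a.z-b.z}; }
inline float dot(const Vec3& a, const Vec3& b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Sphere { Vec3 center; float radius; };

// Ray-sphere test for p(t) = p0 + t*u with unit-length u, t >= 0.
// Returns the smallest non-negative t, or a negative value on a miss.
inline float raySphere(const Vec3& p0, const Vec3& u, const Sphere& s)
{
    Vec3 v = sub(p0, s.center);
    float b = 2.0f * dot(u, v);
    float c = dot(v, v) - s.radius * s.radius;
    float disc = b*b - 4.0f*c;           // a = 1 since u is unit length
    if (disc < 0.0f) return -1.0f;
    float t = (-b - std::sqrt(disc)) / 2.0f;
    return t >= 0.0f ? t : -1.0f;
}

// Among all bounding spheres the picking ray hits, the closest one
// (smallest t) is the picked object; returns its index, or -1 for no hit.
inline int pickClosest(const Vec3& p0, const Vec3& u, const std::vector<Sphere>& objs)
{
    int picked = -1;
    float best = -1.0f;
    for (std::size_t i = 0; i < objs.size(); ++i) {
        float t = raySphere(p0, u, objs[i]);
        if (t >= 0.0f && (picked < 0 || t < best)) { picked = (int)i; best = t; }
    }
    return picked;
}
```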
we write our shaders in notepad and save them as regular ASCII text files. then we use the D3DXCompileShaderFromFile function to compile our shaders.
the special colon syntax denotes a semantic, which is used to specify the usage of the variable; this is similar to the flexible vertex format (FVF) of a vertex structure.

as with a C++ program, every HLSL program has an entry point.
posted on 2008-10-30 22:56 jolley


# re: Introduction to 3D Game Programming with DirectX 9.0 2009-06-11 20:28 akira32
does D3DPOOL_SYSTEMMEM use system memory, while D3DPOOL_MANAGED uses video memory?

# re: Introduction to 3D Game Programming with DirectX 9.0 2009-06-12 12:16 jolley
D3DPOOL_MANAGED: managed resources. Direct3D first backs them up in system memory; when the device is lost, Direct3D automatically releases these resources, and they do not need to be re-created when the device is restored, since Direct3D restores them from the system-memory copy automatically.
D3DPOOL_SYSTEMMEM: generally holds resources that the device does not access frequently. these resources reside in system memory, are not lost when the device is lost, and do not need to be re-created when the device is restored.
