As Heaven's movement is ever vigorous, so should a gentleman ceaselessly strive to strengthen himself.

Working with Morphing Animation (2)

Building a Morphed Mesh through Manipulation

Directly manipulating a mesh's vertex buffers is probably the easiest way to work with morphing. For this method you'll need a third mesh that contains the final coordinates of each vertex after morphing; it's this third mesh that you'll render.

To create the third mesh, which I call the resulting morphed mesh, you can clone the source mesh and be on your way.

// Declare third mesh to use for resulting morphed mesh
ID3DXMesh *pResultMesh = NULL;

// Clone the mesh using the source mesh pSourceMesh
pSourceMesh->CloneMeshFVF(0, pSourceMesh->GetFVF(), pDevice, &pResultMesh);

After you've created the resulting morphed mesh (pResultMesh), you can begin processing the morphing animation by locking the source, target, and resulting morphed meshes' vertex buffers. Before you do that, however, you need to declare a generic vertex structure that contains only the vertex coordinates, which you'll use to lock and access each vertex buffer.

typedef struct {
  D3DXVECTOR3 vecPos;
} sGenericVertex;

Also, because each vertex buffer contains vertices of varying sizes (for example, the source might use normals whereas the target doesn't), you need to calculate the size of the vertex structure used for each mesh's vertices. You can do so using the D3DXGetFVFVertexSize function.

// pSourceMesh = source mesh object
// pTargetMesh = target mesh object
// pResultMesh = resulting morphed mesh object
DWORD SourceSize = D3DXGetFVFVertexSize(pSourceMesh->GetFVF());
DWORD TargetSize = D3DXGetFVFVertexSize(pTargetMesh->GetFVF());
DWORD ResultSize = D3DXGetFVFVertexSize(pResultMesh->GetFVF());

Now you can lock the vertex buffers and assign the pointers to them.

// Declare vertex pointers
char *pSourcePtr, *pTargetPtr, *pResultPtr;
pSourceMesh->LockVertexBuffer(D3DLOCK_READONLY, (void**)&pSourcePtr);
pTargetMesh->LockVertexBuffer(D3DLOCK_READONLY, (void**)&pTargetPtr);
pResultMesh->LockVertexBuffer(0, (void**)&pResultPtr);

Notice how I assigned a few char * pointers to the vertex buffers instead of using the generic vertex structure? You need to do that because the vertices in the buffers could be of any size, remember? Whenever you need to access a vertex, you cast the pointer to the generic vertex structure and access the data. To go to the next vertex in the list, add the size of that mesh's vertex structure to the pointer. Get it? If not, don't worry; the upcoming code will help you make sense of it all.

After you've locked the buffers, you can iterate through all the vertices, grabbing the coordinates and using the calculations from the previous section to compute the morphed vertex positions. Assuming that the length of the animation is stored in Length and the current time is stored in Time, the following code illustrates how to perform the calculations:

// Length = FLOAT with length of animation in milliseconds
// Time   = FLOAT with time in animation to use

// Calculate a scalar value to use for calculations
float Scalar = Time / Length;

// Loop through all vertices
for(DWORD i=0;i<pSourceMesh->GetNumVertices();i++)
{
  // Cast vertex buffer pointers to a generic vertex structure
  sGenericVertex *pSourceVertex = (sGenericVertex*)pSourcePtr;
  sGenericVertex *pTargetVertex = (sGenericVertex*)pTargetPtr;
  sGenericVertex *pResultVertex = (sGenericVertex*)pResultPtr;

  // Get source coordinates and scale them
  D3DXVECTOR3 vecSource = pSourceVertex->vecPos;
  vecSource *= (1.0f - Scalar);

  // Get target coordinates and scale them
  D3DXVECTOR3 vecTarget = pTargetVertex->vecPos;
  vecTarget *= Scalar;

  // Store summed coordinates in resulting morphed mesh
  pResultVertex->vecPos = vecSource + vecTarget;

  // Go to next vertices in each buffer and continue loop
  pSourcePtr += SourceSize;
  pTargetPtr += TargetSize;
  pResultPtr += ResultSize;
}

Up to this point I've skipped over the topic of vertex normals, but they're handled exactly like vertex coordinates: you apply the same scalar and inversed-scalar values to the normals that you apply to the coordinates.

To extend the preceding code, first check whether both meshes use normals. If so, during the loop through the vertices you grab the normals from the source and target vertices, multiply them by the scalar and inversed scalar, and store the results. Take another look at the code to see how to do that:

// Length = FLOAT with length of animation in milliseconds
// Time   = FLOAT with time in animation to use

// Calculate a scalar value to use for calculations
float Scalar = Time / Length;

// Set a flag if both meshes use normals
BOOL UseNormals = FALSE;
if((pSourceMesh->GetFVF() & D3DFVF_NORMAL) && (pTargetMesh->GetFVF() & D3DFVF_NORMAL))
  UseNormals = TRUE;

// Loop through all vertices
for(DWORD i=0;i<pSourceMesh->GetNumVertices();i++)
{
  // Cast vertex buffer pointers to a generic vertex structure
  sGenericVertex *pSourceVertex = (sGenericVertex*)pSourcePtr;
  sGenericVertex *pTargetVertex = (sGenericVertex*)pTargetPtr;
  sGenericVertex *pResultVertex = (sGenericVertex*)pResultPtr;

  // Get source coordinates and scale them
  D3DXVECTOR3 vecSource = pSourceVertex->vecPos;
  vecSource *= (1.0f - Scalar);

  // Get target coordinates and scale them
  D3DXVECTOR3 vecTarget = pTargetVertex->vecPos;
  vecTarget *= Scalar;

  // Store summed coordinates in resulting morphed mesh
  pResultVertex->vecPos = vecSource + vecTarget;

  // Process normals if flagged
  if(UseNormals == TRUE)
  {
    // Advance the generic vertex structure pointers to access the
    // normals, which are the next vector after the coordinates.
    pSourceVertex++; pTargetVertex++; pResultVertex++;

    // Get normals and apply scalar and inversed scalar values
    D3DXVECTOR3 vecNormalSource = pSourceVertex->vecPos;
    vecNormalSource *= (1.0f - Scalar);

    D3DXVECTOR3 vecNormalTarget = pTargetVertex->vecPos;
    vecNormalTarget *= Scalar;

    pResultVertex->vecPos = vecNormalSource + vecNormalTarget;
  }

  // Go to next vertices in each buffer and continue loop
  pSourcePtr += SourceSize;
  pTargetPtr += TargetSize;
  pResultPtr += ResultSize;
}

Everything looks great! All you need to do now is unlock the vertex buffers and render the resulting mesh! I'll skip the code to unlock the buffers and get right to the good part: rendering the meshes.

 

Drawing Morphed Meshes

If you're building the morphing meshes by directly manipulating the resulting mesh's vertex buffer, as shown in the previous section, then rendering the morphing mesh is the same as for any other ID3DXMesh object you've been using. For example, you can loop through each material in the mesh, set the material and texture, and then draw the currently iterated subset. No need to show any code here−it's just simple mesh rendering.

On the other hand, if you want to move past the basics and start playing with real power, you can create your own vertex shader to render the morphing meshes for you. Take my word for it: this is something you'll definitely want to do. Using a vertex shader means you have one less mesh to deal with because the resulting mesh object is no longer needed; the speed increase is well worth a little extra effort.

Before you can move on to using a vertex shader, however, you need to figure out how to render the mesh's subsets yourself.

 

Dissecting the Subsets

To draw the morphing mesh, you need to set the source mesh's vertex stream as well as the target mesh's stream. Also, you need to set only the source mesh's indices. At that point, it's only a matter of scanning through every subset and rendering the polygons related to each subset.

Wait a second! How do you render the subsets yourself? By duplicating what the ID3DXMesh::DrawSubset function does, that's how! The DrawSubset function works in one of two ways. The first method, which you use if your mesh has not been optimized to use an attribute table, is to scan the entire list of attributes and render those batches of polygons belonging to the same subset. This method can be a little slow because it renders multimaterial meshes in small batches of polygons.

The second method, which is used after you optimize the mesh to use an attribute table, works by scanning the built attribute table to determine which grouped faces are drawn all in one shot. That is, all faces that belong to the same subset are grouped together beforehand and rendered in one call to DrawPrimitive or DrawIndexedPrimitive. That seems like the way to go!

To use the second method of rendering, you need to first optimize your source mesh. You can (and should) do this when you load the mesh. It's a safe habit to optimize all meshes you load using the ID3DXMesh::OptimizeInPlace function, as shown in the following bit of code:

// pMesh = just−loaded mesh
pMesh->OptimizeInPlace(D3DXMESHOPT_ATTRSORT, NULL, NULL, NULL, NULL);

Once the mesh is optimized, you can query the ID3DXMesh object for the attribute table it is using. The attribute table is an array of D3DXATTRIBUTERANGE structures, which are defined as follows:

typedef struct _D3DXATTRIBUTERANGE {
  DWORD AttribId;
  DWORD FaceStart;
  DWORD FaceCount;
  DWORD VertexStart;
  DWORD VertexCount;
} D3DXATTRIBUTERANGE;

The first variable, AttribId, is the subset number that the structure represents. For each material in your mesh, you have one D3DXATTRIBUTERANGE structure with AttribId set to match the subset number.

Next come FaceStart and FaceCount. You use these two variables to determine which polygon faces belong to the subset. Here's where the optimization comes in handy: all faces belonging to the same subset are grouped together in the index buffer. FaceStart represents the first face in the index buffer belonging to the subset, whereas FaceCount represents the number of polygon faces to render using that subset.

Last, you see VertexStart and VertexCount, which, much like FaceStart and FaceCount, determine which vertices are used during the call to render the polygons. VertexStart represents the first vertex in the vertex buffer to use for a subset, and VertexCount represents the number of vertices you can render in one call. When you optimize a mesh based on vertices, you'll notice that all vertices are packed in the buffer to reduce the number of vertices used in a call to render a subset.

For each subset in your mesh you must have a matching D3DXATTRIBUTERANGE structure. Therefore, a mesh using three materials will have three attribute structures. After you've optimized a mesh (using ID3DXMesh::OptimizeInPlace), you can get the attribute table by first querying the mesh object for the number of attribute structures using the ID3DXMesh::GetAttributeTable function, as shown here:

// Get the number of attributes in the table
DWORD NumAttributes;
pMesh->GetAttributeTable(NULL, &NumAttributes);

At this point, you only need to allocate a number of D3DXATTRIBUTERANGE objects and call the GetAttributeTable function again, this time supplying a pointer to your array of attribute objects.

// Allocate memory for the attribute table and query for the table data
D3DXATTRIBUTERANGE *pAttributes;
pAttributes = new D3DXATTRIBUTERANGE[NumAttributes];
pMesh->GetAttributeTable(pAttributes, &NumAttributes);

Cool! After you've got the attribute data, you can pretty much render the subsets by scanning through each attribute table object and using the specified data in each in a call to DrawIndexedPrimitive. In fact, do that now by first grabbing the mesh's vertex buffer and index buffer pointers.

// Get the vertex buffer interface
IDirect3DVertexBuffer9 *pVB;
pMesh->GetVertexBuffer(&pVB);

// Get the index buffer interface
IDirect3DIndexBuffer9 *pIB;
pMesh->GetIndexBuffer(&pIB);

Now that you have both buffer pointers, go ahead and set up your streams, vertex shader, and vertex element declaration, and loop through each subset, setting the texture and then rendering the polygons.

// Set the vertex shader and declaration
pDevice->SetFVF(0); // Clear FVF usage
pDevice->SetVertexShader(pShader);
pDevice->SetVertexDeclaration(pDecl);

// Set the streams
pDevice->SetStreamSource(0, pVB, 0, pMesh->GetNumBytesPerVertex());
pDevice->SetIndices(pIB);

// Go through each subset
for(DWORD i=0;i<NumAttributes;i++)
{
  // Get the subset's material id#
  DWORD AttribId = pAttributes[i].AttribId;

  // Set the texture of the subset
  pDevice->SetTexture(0, pTexture[AttribId]);

  // Render the polygons using the table
  pDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
                                pAttributes[i].VertexStart,
                                pAttributes[i].VertexCount,
                                pAttributes[i].FaceStart * 3,
                                pAttributes[i].FaceCount);
}

After you've rendered the subsets you can free the vertex buffer and index buffer interfaces you obtained.

pVB->Release(); pVB = NULL;
pIB->Release(); pIB = NULL;

When you're done with the attribute table, make sure to free that memory as well.

delete [] pAttributes; pAttributes = NULL;

All right, now you're getting somewhere! Now that you know how to render the subsets yourself, it's time to move on to using a vertex shader.

 

Check Out the Demos

This project demonstrates building morphing meshes by directly manipulating a mesh's vertex buffer.

Figure 8.4: The animated dolphin jumps over morphing sea waves!


WinMain.cpp:

#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>
#include "Direct3D.h"

// A generic coordinate vertex structure
struct sVertex
{
    D3DXVECTOR3 pos;
};

// Background vertex structure, fvf, and texture.
struct sBackdropVertex
{
    float x, y, z, rhw;
    float u, v;
};

#define BACKDROP_FVF (D3DFVF_XYZRHW | D3DFVF_TEX1)

// Structure to contain a morphing mesh
struct sMorphMesh
{
    D3DXMESHCONTAINER_EX *source_mesh;
    D3DXMESHCONTAINER_EX *target_mesh;
    D3DXMESHCONTAINER_EX *result_mesh;

    long  normal_offset;
    DWORD vertex_pitch;

    sMorphMesh()
    {
        ZeroMemory(this, sizeof(*this));
    }

    ~sMorphMesh()
    {
        delete source_mesh; source_mesh = NULL;
        delete target_mesh; target_mesh = NULL;
        delete result_mesh; result_mesh = NULL;
    }
};

//////////////////////////////////////////////////////////////////////////////////////////////

IDirect3D9             *g_d3d;
IDirect3DDevice9       *g_device;
IDirect3DVertexBuffer9 *g_backdrop_vb;
IDirect3DTexture9      *g_backdrop_texture;
sMorphMesh             *g_water_morph_mesh;
sMorphMesh             *g_dolphin_morph_mesh;

const char CLASS_NAME[] = "MorphClass";
const char CAPTION[]    = "Morphing Demo";

////////////////////////////////////////////////////////////////////////////////////////////////

LRESULT FAR PASCAL window_proc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam);

bool do_init(HWND hwnd);
void do_shutdown();
void do_frame();

// Function to load a group of meshes for morphing
bool load_morphing_mesh(sMorphMesh* morph_mesh,
                        const char* source_mesh_file,
                        const char* target_mesh_file,
                        const char* texture_path);

// Function to build a resulting morphed mesh
void build_morph_mesh(sMorphMesh* morph_mesh, float scalar);

//////////////////////////////////////////////////////////////////////////////////////////////

int PASCAL WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int cmd_show)
{
    CoInitialize(NULL);    // Initialize the COM system

    // Create the window class here and register it
    WNDCLASSEX win_class;

    win_class.cbSize        = sizeof(win_class);
    win_class.style         = CS_CLASSDC;
    win_class.lpfnWndProc   = window_proc;
    win_class.cbClsExtra    = 0;
    win_class.cbWndExtra    = 0;
    win_class.hInstance     = inst;
    win_class.hIcon         = LoadIcon(NULL, IDI_APPLICATION);
    win_class.hCursor       = LoadCursor(NULL, IDC_ARROW);
    win_class.hbrBackground = NULL;
    win_class.lpszMenuName  = NULL;
    win_class.lpszClassName = CLASS_NAME;
    win_class.hIconSm       = LoadIcon(NULL, IDI_APPLICATION);

    if(!RegisterClassEx(&win_class))
        return -1;

    // Create the main window
    HWND hwnd = CreateWindow(CLASS_NAME, CAPTION, WS_CAPTION | WS_SYSMENU | WS_MINIMIZEBOX,
                             0, 0, 640, 480, NULL, NULL, inst, NULL);

    if(hwnd == NULL)
        return -1;

    ShowWindow(hwnd, cmd_show);
    UpdateWindow(hwnd);

    // Call init function and enter message pump
    if(do_init(hwnd))
    {
        MSG msg;
        ZeroMemory(&msg, sizeof(MSG));

        // Start message pump, waiting for user to exit
        while(msg.message != WM_QUIT)
        {
            if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }

            do_frame();    // Render a single frame
        }
    }

    do_shutdown();
    UnregisterClass(CLASS_NAME, inst);
    CoUninitialize();

    return 0;
}

LRESULT FAR PASCAL window_proc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // Only handle window destruction and escape-key messages
    switch(msg)
    {
    case WM_DESTROY:
        PostQuitMessage(0);
        break;

    case WM_KEYDOWN:
        if(wParam == VK_ESCAPE)
            DestroyWindow(hwnd);
        break;
    }

    return DefWindowProc(hwnd, msg, wParam, lParam);
}

bool do_init(HWND hwnd)
{
    init_d3d(&g_d3d, &g_device, hwnd, false, false);

    g_water_morph_mesh = new sMorphMesh;

    if(!load_morphing_mesh(g_water_morph_mesh, "..\\Data\\Water1.x", "..\\Data\\Water2.x", "..\\Data\\"))
        return false;

    g_dolphin_morph_mesh = new sMorphMesh;

    if(!load_morphing_mesh(g_dolphin_morph_mesh, "..\\Data\\Dolphin1.x", "..\\Data\\Dolphin3.x", "..\\Data\\"))
        return false;

    // create the backdrop
    sBackdropVertex backdrop_verts[4] = {
        {   0.0f,   0.0f, 1.0f, 1.0f, 0.0f, 0.0f },
        { 640.0f,   0.0f, 1.0f, 1.0f, 1.0f, 0.0f },
        {   0.0f, 480.0f, 1.0f, 1.0f, 0.0f, 1.0f },
        { 640.0f, 480.0f, 1.0f, 1.0f, 1.0f, 1.0f }
    };

    g_device->CreateVertexBuffer(sizeof(backdrop_verts), D3DUSAGE_WRITEONLY, BACKDROP_FVF, D3DPOOL_DEFAULT,
                                 &g_backdrop_vb, NULL);

    char* ptr;
    g_backdrop_vb->Lock(0, 0, (void**)&ptr, 0);
    memcpy(ptr, backdrop_verts, sizeof(backdrop_verts));
    g_backdrop_vb->Unlock();

    D3DXCreateTextureFromFile(g_device, "..\\Data\\Sky.bmp", &g_backdrop_texture);

    // Create and enable a directional light
    D3DLIGHT9 light;
    ZeroMemory(&light, sizeof(light));

    light.Type      = D3DLIGHT_DIRECTIONAL;
    light.Diffuse.r = light.Diffuse.g = light.Diffuse.b = light.Diffuse.a = 1.0f;
    light.Direction = D3DXVECTOR3(0.0f, -1.0f, 0.0f);

    g_device->SetLight(0, &light);
    g_device->LightEnable(0, TRUE);

    // start playing an ocean sound
    PlaySound("..\\Data\\Ocean.wav", NULL, SND_ASYNC | SND_LOOP);

    return true;
}

void do_shutdown()
{
    // Stop playing the ocean sound
    PlaySound(NULL, NULL, 0);

    // release backdrop data
    release_com(g_backdrop_vb);
    release_com(g_backdrop_texture);

    // free mesh data
    delete g_water_morph_mesh;   g_water_morph_mesh   = NULL;
    delete g_dolphin_morph_mesh; g_dolphin_morph_mesh = NULL;

    // release D3D objects
    release_com(g_device);
    release_com(g_d3d);
}

void do_frame()
{
    static float dolphin_x_pos = 0.0f, dolphin_z_pos = 256.0f;

    // build the water morphing mesh using a time-based sine-wave scalar value
    float water_scalar = (sin(timeGetTime() * 0.001f) + 1.0f) * 0.5f;
    build_morph_mesh(g_water_morph_mesh, water_scalar);

    // build the dolphin morphing mesh using a time-based scalar value
    float dolphin_time_factor = (timeGetTime() % 501) / 250.0f;
    float dolphin_scalar = (dolphin_time_factor <= 1.0f) ? dolphin_time_factor : (2.0f - dolphin_time_factor);
    build_morph_mesh(g_dolphin_morph_mesh, dolphin_scalar);

    // calculate the angle of the dolphin's movement and reposition
    // the dolphin if it's far enough underwater.
    float dolphin_angle = (timeGetTime() % 6280) / 1000.0f * 3.0f;

    if(sin(dolphin_angle) < -0.7f)
    {
        dolphin_x_pos = (float)(rand() % 1400) - 700.0f;
        dolphin_z_pos = (float)(rand() % 1500);
    }

    // create and set the view transformation
    D3DXMATRIX  mat_view;
    D3DXVECTOR3 eye(0.0f, 170.0f, -1000.0f);
    D3DXVECTOR3 at(0.0f, 150.0f, 0.0f);
    D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);

    D3DXMatrixLookAtLH(&mat_view, &eye, &at, &up);
    g_device->SetTransform(D3DTS_VIEW, &mat_view);

    // clear the device and start drawing the scene
    g_device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_RGBA(0, 0, 0, 255), 1.0f, 0);

    g_device->BeginScene();

    // draw the backdrop
    g_device->SetFVF(BACKDROP_FVF);
    g_device->SetStreamSource(0, g_backdrop_vb, 0, sizeof(sBackdropVertex));
    g_device->SetTexture(0, g_backdrop_texture);
    g_device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);

    // set identity matrix for world transformation
    D3DXMATRIX mat_world;
    D3DXMatrixIdentity(&mat_world);
    g_device->SetTransform(D3DTS_WORLD, &mat_world);

    g_device->SetRenderState(D3DRS_LIGHTING, TRUE);

    draw_mesh(g_water_morph_mesh->result_mesh);

    // draw the jumping dolphin
    D3DXMatrixRotationZ(&mat_world, dolphin_angle - 1.57f);

    mat_world._41 = dolphin_x_pos + cos(dolphin_angle) * 256.0f;
    mat_world._42 = sin(dolphin_angle) * 512.0f;
    mat_world._43 = dolphin_z_pos;

    g_device->SetTransform(D3DTS_WORLD, &mat_world);
    draw_mesh(g_dolphin_morph_mesh->result_mesh);

    g_device->SetRenderState(D3DRS_LIGHTING, FALSE);

    g_device->EndScene();

    g_device->Present(NULL, NULL, NULL, NULL);
}

bool load_morphing_mesh(sMorphMesh* morph_mesh,
                        const char* source_mesh_file,
                        const char* target_mesh_file,
                        const char* texture_path)
{
    if(FAILED(load_mesh(&morph_mesh->source_mesh, g_device, source_mesh_file, texture_path, 0, D3DXMESH_SYSTEMMEM)))
        return false;

    if(FAILED(load_mesh(&morph_mesh->target_mesh, g_device, target_mesh_file, texture_path, 0, D3DXMESH_SYSTEMMEM)))
        return false;

    // reload the source mesh as a resulting morphed mesh container
    if(FAILED(load_mesh(&morph_mesh->result_mesh, g_device, source_mesh_file, texture_path, 0, D3DXMESH_SYSTEMMEM)))
        return false;

    DWORD mesh_fvf = morph_mesh->source_mesh->MeshData.pMesh->GetFVF();

    // determine if source mesh uses normals and calculate offset
    if(mesh_fvf & D3DFVF_NORMAL)
        morph_mesh->normal_offset = 3 * sizeof(float);
    else
        morph_mesh->normal_offset = 0;

    morph_mesh->vertex_pitch = D3DXGetFVFVertexSize(mesh_fvf);

    return true;
}

void build_morph_mesh(sMorphMesh* morph_mesh, float scalar)
{
    char* source_ptr;
    char* target_ptr;
    char* result_ptr;

    morph_mesh->source_mesh->MeshData.pMesh->LockVertexBuffer(D3DLOCK_READONLY, (void**)&source_ptr);
    morph_mesh->target_mesh->MeshData.pMesh->LockVertexBuffer(D3DLOCK_READONLY, (void**)&target_ptr);
    morph_mesh->result_mesh->MeshData.pMesh->LockVertexBuffer(0, (void**)&result_ptr);

    DWORD num_verts = morph_mesh->source_mesh->MeshData.pMesh->GetNumVertices();

    // go through each vertex and interpolate coordinates
    for(DWORD i = 0; i < num_verts; i++)
    {
        sVertex* source_vert = (sVertex*)source_ptr;
        sVertex* target_vert = (sVertex*)target_ptr;
        sVertex* result_vert = (sVertex*)result_ptr;

        D3DXVECTOR3 source_pos = source_vert->pos;
        source_pos *= (1.0f - scalar);

        D3DXVECTOR3 target_pos = target_vert->pos;
        target_pos *= scalar;

        result_vert->pos = source_pos + target_pos;

        // handle interpolation of normals
        if(morph_mesh->normal_offset)
        {
            sVertex* source_normal = (sVertex*)&source_ptr[morph_mesh->normal_offset];
            sVertex* target_normal = (sVertex*)&target_ptr[morph_mesh->normal_offset];
            sVertex* result_normal = (sVertex*)&result_ptr[morph_mesh->normal_offset];

            D3DXVECTOR3 source_normal_vec = source_normal->pos;
            source_normal_vec *= (1.0f - scalar);

            D3DXVECTOR3 target_normal_vec = target_normal->pos;
            target_normal_vec *= scalar;

            result_normal->pos = source_normal_vec + target_normal_vec;
        }

        // go to next vertex
        source_ptr += morph_mesh->vertex_pitch;
        target_ptr += morph_mesh->vertex_pitch;
        result_ptr += morph_mesh->vertex_pitch;
    }

    morph_mesh->source_mesh->MeshData.pMesh->UnlockVertexBuffer();
    morph_mesh->target_mesh->MeshData.pMesh->UnlockVertexBuffer();
    morph_mesh->result_mesh->MeshData.pMesh->UnlockVertexBuffer();
}

 



posted on 2008-04-28 18:09 by lovedday (1274 reads, 0 comments)

