
A Reusable Windows Socket Server Class With C++

Contributed by Len Holgate

Source: http://www.devarticles.com

Ever thought of writing your own Windows socket server class? In this article Len shows you exactly how to do just that, including details of what a socket server should do and example C++ code.

Writing a high performance server that runs on Windows NT and uses sockets to communicate with the outside world isn't that hard once you dig through the API references. What's more, most of the code is common between all of the servers that you're likely to want to write, so it should be possible to wrap all of the common code up in some easy to reuse classes. However, when I went looking for some classes to use to write my first socket server, all of the examples and articles that I found required the user to pretty much start from scratch or utilise "cut and paste reuse" when they wanted to use the code in their own servers. Also, the more complicated examples, such as the ones that used IO completion ports, tended to stop short of demonstrating real world usage. After all, anyone can write an echo server...

 

The aim of this article is to explain the set of reusable classes that I designed for writing socket servers and to show how they can be used with servers which do more than simply echo every byte they receive. Note that I'm not going to bother explaining the hows and whys of IO completion ports etc.; there are plenty of references available. A socket server needs to be able to listen on a specific port, accept connections and read and write data from the socket. A high performance, scalable socket server should use asynchronous socket IO and IO completion ports. Since we're using IO completion ports we need to maintain a pool of threads to service the IO completion packets. If we were to confine ourselves to running on Win2k and above we could use the QueueUserWorkItem API to deal with our threading requirements, but to enable us to run on the widest selection of operating systems we have to do the work ourselves.
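
To make the discussion concrete, here's a rough sketch of the plumbing involved: create an IO completion port and spin up a handful of threads that block on it. The function and thread names below are purely illustrative and are not the actual CSocketServer internals.

#include <winsock2.h>
#include <windows.h>
#include <process.h>

// Illustrative worker thread: services IO completion packets until the port is closed.
unsigned __stdcall WorkerThreadFunction(void *pParam)
{
    HANDLE hIOCP = reinterpret_cast<HANDLE>(pParam);

    DWORD dwIoSize = 0;
    ULONG_PTR completionKey = 0;
    OVERLAPPED *pOverlapped = 0;

    while (::GetQueuedCompletionStatus(hIOCP, &dwIoSize, &completionKey, &pOverlapped, INFINITE))
    {
        // dispatch on the completion key and overlapped data here...
    }

    return 0;
}

// Illustrative setup: one completion port, a small pool of threads blocked on it.
HANDLE CreateCompletionPortAndThreads(size_t numThreads, HANDLE *pThreadHandles)
{
    // a completion port that isn't yet associated with any socket
    HANDLE hIOCP = ::CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    for (size_t i = 0; i < numThreads; ++i)
    {
        // each thread blocks in GetQueuedCompletionStatus() for the lifetime of the server
        pThreadHandles[i] = reinterpret_cast<HANDLE>(
            ::_beginthreadex(0, 0, WorkerThreadFunction, hIOCP, 0, 0));
    }

    return hIOCP;
}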

 

Before we can start accepting connections we need to have a socket to listen on. Since there are many different ways to set such a socket up, we'll allow the user's derived class to create this socket by providing a pure virtual function as follows:

 

virtual SOCKET CreateListeningSocket(
    unsigned long address,
    unsigned short port) = 0;

 

The user's class can now implement this function as they see fit; a common implementation might look something like this:

 

SOCKET CSocketServer::CreateListeningSocket(
    unsigned long address,
    unsigned short port)
{
    SOCKET s = ::WSASocket(AF_INET, SOCK_STREAM, IPPROTO_IP, NULL, 0, WSA_FLAG_OVERLAPPED);

    if (s == INVALID_SOCKET)
    {
        throw CWin32Exception(_T("CSocket::CreateListeningSocket()"), ::WSAGetLastError());
    }

    CSocket listeningSocket(s);

    CSocket::InternetAddress localAddress(address, port);

    listeningSocket.Bind(localAddress);

    listeningSocket.Listen(5);

    return listeningSocket.Detatch();
}

 

Note that we use a helper class, CSocket, to handle setting up our listening socket. This class acts as a "smart pointer" for sockets, automatically closing the socket to release resources when it goes out of scope and also wraps the standard socket API calls with member functions that throw exceptions on failure.
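
The article doesn't show CSocket itself, but the idea is simple enough to sketch. Something along these lines would do the job; the member names and the exact set of wrapped calls are assumptions, not the real class (it reuses the CWin32Exception class and _T() macro seen in the listing above and assumes <winsock2.h> is included):

class CSocket
{
public:
    explicit CSocket(SOCKET s) : m_socket(s) {}

    ~CSocket()
    {
        // closing here is what protects us from resource leaks if an exception is thrown
        if (m_socket != INVALID_SOCKET)
        {
            ::closesocket(m_socket);
        }
    }

    void Listen(int backlog)
    {
        // each wrapped call turns a failure return code into an exception
        if (SOCKET_ERROR == ::listen(m_socket, backlog))
        {
            throw CWin32Exception(_T("CSocket::Listen()"), ::WSAGetLastError());
        }
    }

    SOCKET Detatch()
    {
        // hand ownership of the handle back to the caller
        SOCKET s = m_socket;
        m_socket = INVALID_SOCKET;
        return s;
    }

private:
    SOCKET m_socket;    // the socket we own
};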

 

Now that we have a socket to listen on we can expect to start receiving connections. We'll use the WSAAccept() function to accept our connections because it's easier to use than the higher performance AcceptEx(); we'll compare the performance characteristics of the two approaches in a later article.

 

When a connection occurs we create a Socket object to wrap the SOCKET handle. We associate this object with our IO completion port so that IO completion packets will be generated for our asynchronous IO. We then let the derived class know that a connection has occurred by calling the OnConnectionEstablished() virtual function. The derived class can then do whatever it wants with the connection, but the most common thing would be to issue a read request on the socket after perhaps writing a welcome message to the client.
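
To make the flow clearer, here's a rough sketch of the accept-and-associate step. The function shape, the Socket constructor and the address handling are illustrative assumptions; the real server also packages the client address into a CIOBuffer before calling OnConnectionEstablished():

#include <winsock2.h>

// Hypothetical sketch only; the Socket class is the one used throughout this article.
void AcceptConnections(SOCKET listeningSocket, HANDLE hIOCP)
{
    while (true)
    {
        sockaddr_in clientAddress;
        int addressLength = sizeof(clientAddress);

        // WSAAccept() blocks until a client connects
        SOCKET acceptedSocket = ::WSAAccept(listeningSocket,
            reinterpret_cast<sockaddr *>(&clientAddress), &addressLength, NULL, 0);

        if (acceptedSocket == INVALID_SOCKET)
        {
            break;      // the listening socket has been closed, time to stop
        }

        // Wrap the SOCKET; this object becomes the 'per device' completion key
        // for every IO operation on the connection.
        Socket *pSocket = new Socket(acceptedSocket);

        ::CreateIoCompletionPort(reinterpret_cast<HANDLE>(acceptedSocket), hIOCP,
            reinterpret_cast<ULONG_PTR>(pSocket), 0);

        // at this point the server notifies the derived class:
        //   OnConnectionEstablished(pSocket, pAddressBuffer);
    }
}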

 

void CSocketServer::OnConnectionEstablished(
    Socket *pSocket,
    CIOBuffer *pAddress)
{
    const std::string welcomeMessage("+OK POP3 server ready\r\n");

    pSocket->Write(welcomeMessage.c_str(), welcomeMessage.length());

    pSocket->Read();
}

 

Since all of our IO operations operate asynchronously, they return immediately to the calling code. The actual implementation of these operations is made slightly more complex by the fact that any outstanding IO requests are terminated when the thread that issued those requests exits. Since we wish to ensure that our IO requests are not terminated inappropriately, we marshal these calls into our socket server's IO thread pool rather than issuing them from the calling thread. This is done by posting an IO completion packet to the socket server's IO completion port. The server's worker threads know how to handle 4 kinds of operation: read requests, read completions, write requests and write completions. The request operations are generated by calls to PostQueuedCompletionStatus() and the completions are generated when calls to WSARecv() and WSASend() complete asynchronously.
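
As a rough illustration of the marshalling, a write request might be posted something like the sketch below. The internals shown here (m_allocator, m_iocp as a raw HANDLE, AddData(), SetUserData(), GetAsOverlapped()) are assumptions made for the sake of the example, not the actual Socket implementation:

// Hypothetical sketch of how a write request could be marshalled to the IO thread pool.
// The buffer's 'user data' carries the operation type; the Socket is the completion key.
void Socket::Write(const char *pData, size_t dataLength)
{
    CIOBuffer *pBuffer = m_allocator.Allocate();     // get a pooled buffer (see next paragraph)

    pBuffer->AddData(pData, dataLength);              // copy the outgoing bytes into the buffer

    pBuffer->SetUserData(IO_Write_Request);           // tell the worker threads what to do with it

    // Post a completion packet rather than calling WSASend() here, so the send is
    // issued from an IO thread that will outlive the calling thread.
    ::PostQueuedCompletionStatus(m_iocp, 0,
        reinterpret_cast<ULONG_PTR>(this), pBuffer->GetAsOverlapped());
}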

 

To be able to read and write data we need somewhere to put it, so we need some kind of memory buffer. To reduce memory allocations we pool these buffers: rather than deleting them once they're done with, we maintain them in a list for reuse. Our data buffers are managed by an allocator which is configured by passing arguments to the constructor of our socket server. This allows the user to set the size of the IO buffers used as well as to control how many buffers are retained in the list for reuse. The CIOBuffer class, which serves as our data buffer, follows the standard IO Completion Port pattern of being an extended "overlapped" structure.
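
A minimal sketch of such a pooling allocator is shown below; the real allocator is configurable and thread safe (it would need a lock, since buffers are released from multiple IO threads), so treat the names and constructor details here as assumptions:

#include <vector>

// Illustrative pooling allocator; the real CIOBuffer allocator is more involved.
class CIOBufferAllocator
{
public:
    CIOBufferAllocator(size_t bufferSize, size_t maxFreeBuffers)
        : m_bufferSize(bufferSize), m_maxFreeBuffers(maxFreeBuffers) {}

    CIOBuffer *Allocate()
    {
        if (!m_freeList.empty())
        {
            // reuse a pooled buffer rather than hitting the heap again
            CIOBuffer *pBuffer = m_freeList.back();
            m_freeList.pop_back();
            return pBuffer;
        }

        return new CIOBuffer(*this, m_bufferSize);    // constructor shape is an assumption
    }

    void Release(CIOBuffer *pBuffer)
    {
        if (m_freeList.size() < m_maxFreeBuffers)
        {
            m_freeList.push_back(pBuffer);    // keep it around for reuse
        }
        else
        {
            delete pBuffer;                   // the pool is full, really free it
        }
    }

private:
    const size_t m_bufferSize;
    const size_t m_maxFreeBuffers;
    std::vector<CIOBuffer *> m_freeList;
};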

 

As all good references on IO Completion Ports tell you, calling GetQueuedCompletionStatus blocks your thread until a completion packet is available and, when it is, returns you a completion key, the number of bytes transferred and an "overlapped" structure. The completion key represents 'per device' data and the overlapped structure represents 'per call' data. In our server we use the completion key to pass our Socket class around and the overlapped structure to pass our data buffer. Both our Socket class and our data buffer class allow the user to associate 'user data' with them. This is in the form of a single unsigned long value (which could always be used to store a pointer to a larger structure).
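
The "extended overlapped structure" pattern simply means that the OVERLAPPED lives at a known position inside the buffer object, so the pointer handed back by GetQueuedCompletionStatus() can be converted back into the full buffer. A rough sketch of the idea (not the actual CIOBuffer layout; assumes <windows.h>):

// Rough sketch of the extended-OVERLAPPED idiom.
class CIOBuffer
{
public:
    OVERLAPPED *GetAsOverlapped()
    {
        return &m_overlapped;
    }

    static CIOBuffer *FromOverlapped(OVERLAPPED *pOverlapped)
    {
        // The OVERLAPPED is the first member, so the addresses coincide and a
        // simple cast (or a CONTAINING_RECORD calculation) recovers the buffer.
        return reinterpret_cast<CIOBuffer *>(pOverlapped);
    }

    unsigned long GetUserData() const { return m_userData; }
    void SetUserData(unsigned long userData) { m_userData = userData; }

private:
    OVERLAPPED    m_overlapped;     // must stay the first member for FromOverlapped()
    unsigned long m_userData;       // per-call data: IO_Read_Request, IO_Write_Completed, etc.
    // ... buffer storage, used/size counters, reference count ...
};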

 

The socket server's worker threads loop continuously, blocking on their completion port until work is available and then extracting the Socket and CIOBuffer from the completion data and processing the IO request. The loop looks something like this:

 

int CSocketServer::WorkerThread::Run()
{
    while (true)
    {
        DWORD dwIoSize = 0;
        Socket *pSocket = 0;
        OVERLAPPED *pOverlapped = 0;

        m_iocp.GetStatus((PDWORD_PTR)&pSocket, &dwIoSize, &pOverlapped);

        CIOBuffer *pBuffer = CIOBuffer::FromOverlapped(pOverlapped);

        switch (pBuffer->GetUserData())
        {
        case IO_Read_Request:
            Read(pSocket, pBuffer);
            break;

        case IO_Read_Completed:
            ReadCompleted(pSocket, pBuffer);
            break;

        case IO_Write_Request:
            Write(pSocket, pBuffer);
            break;

        case IO_Write_Completed:
            WriteCompleted(pSocket, pBuffer);
            break;
        }
    }
}

 

Read and write requests cause a read or write to be performed on the socket. Note that the actual read/write is performed by our IO threads so that the requests cannot be terminated early by the issuing thread exiting. The ReadCompleted() and WriteCompleted() methods are called when the read or write actually completes. The worker thread provides two virtual functions to allow the caller's derived class to handle these situations. Most of the time the user will not be interested in the write completion, but the derived class is the only place that read completion can be handled.

 

virtual void ReadCompleted(
    Socket *pSocket,
    CIOBuffer *pBuffer) = 0;

virtual void WriteCompleted(
    Socket *pSocket,
    CIOBuffer *pBuffer);
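
For completeness, here's a rough sketch of what the Read() request handler from the switch statement above might do on an IO thread. The accessors used (GetSocket(), GetAsOverlapped()) are assumptions for illustration; the real code also deals with errors and reference counting:

// Hypothetical sketch of the read request handler; assumes <winsock2.h>.
void CSocketServer::Read(Socket *pSocket, CIOBuffer *pBuffer)
{
    // mark the buffer so that the completion packet is dispatched to ReadCompleted()
    pBuffer->SetUserData(IO_Read_Completed);

    WSABUF wsabuf;
    wsabuf.buf = reinterpret_cast<char *>(pBuffer->GetBuffer()) + pBuffer->GetUsed();
    wsabuf.len = static_cast<u_long>(pBuffer->GetSize() - pBuffer->GetUsed());

    DWORD dwFlags = 0;
    DWORD dwBytesReceived = 0;

    // Issue the overlapped receive; the completion arrives via the IO completion port.
    if (SOCKET_ERROR == ::WSARecv(pSocket->GetSocket(), &wsabuf, 1, &dwBytesReceived,
                                  &dwFlags, pBuffer->GetAsOverlapped(), NULL)
        && ::WSAGetLastError() != WSA_IO_PENDING)
    {
        // a genuine failure; a real server would clean up the socket and buffer here
    }
}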

 

Since our client must provide their own worker thread class that derives from our socket server's worker thread, we need a way for the server to be configured to use this derived worker thread. Whenever the server creates a worker thread (and this only occurs when the server first starts, as the threads run for the lifetime of the server) it calls the following pure virtual function:

 

virtual WorkerThread *CreateWorkerThread(
    CIOCompletionPort &iocp) = 0;

 

We now have a framework for creating servers. The user needs to provide a worker thread class that is derived from CSocketServer::WorkerThread and a socket server that's derived from CSocketServer. These classes could look something like this:

 

class CSocketServerWorkerThread : public CSocketServer::WorkerThread
{
public:
    CSocketServerWorkerThread(CIOCompletionPort &iocp);

private:
    virtual void ReadCompleted(
        CSocketServer::Socket *pSocket,
        CIOBuffer *pBuffer);
};

class CMySocketServer : public CSocketServer
{
public:
    CMySocketServer(
        unsigned long addressToListenOn,
        unsigned short portToListenOn);

private:
    virtual WorkerThread *CreateWorkerThread(
        CIOCompletionPort &iocp);

    virtual SOCKET CreateListeningSocket(
        unsigned long address,
        unsigned short port);

    virtual void OnConnectionEstablished(
        Socket *pSocket,
        CIOBuffer *pAddress);
};

 

Implementations for CreateListeningSocket() and OnConnectionEstablished() have already been presented. CreateWorkerThread() is as simple as this:

 

CSocketServer::WorkerThread *CMySocketServer::CreateWorkerThread(
    CIOCompletionPort &iocp)
{
    return new CSocketServerWorkerThread(iocp);
}

 

Which leaves us with the implementation of our worker thread's ReadCompleted() method. This is where the server handles incoming data and, in the case of a simple echo server ;), it could be as simple as this:

 

void CSocketServerWorkerThread::ReadCompleted(
    CSocketServer::Socket *pSocket,
    CIOBuffer *pBuffer)
{
    pSocket->Write(pBuffer);
}

 

A complete echo server is available for download in SocketServer1.zip. The server simply echoes the incoming byte stream back to the client. In addition to implementing the methods discussed above, the socket server and worker thread derived classes also implement several 'notification' methods that the server and worker thread classes call to inform the derived class of various internal goings on. The echo server simply outputs a message to the screen (and log file) when these notifications occur, but the idea behind them is that the derived class can use them to report on internal server state via performance counters or suchlike. You can test the echo server by using telnet. Simply telnet to localhost on port 5001 (the port that the sample uses by default), type stuff and watch it get typed back at you. The server runs until a named event is set and then shuts down. The very simple Server Shutdown program, available in ServerShutdown.zip, provides an off switch for the server.

 

 

More complex servers

Servers that do nothing but echo a byte stream are rare, except as poor examples. Normally a server will be expecting a message of some kind; the exact format of the message is protocol specific, but two common formats are a binary message with some form of message length indicator in a header, and an ASCII text message with a predefined set of 'commands' and a fixed command terminator, often "\r\n". As soon as you start to work with real data you are exposed to a real-world problem that is simply not an issue for echo servers. Real servers need to be able to break the input byte stream provided by the TCP/IP socket interface into distinct commands. The result of issuing a single read on a socket could be any number of bytes up to the size of the buffer that you supplied. You may get a single, distinct, message, or you may only get half of a message, or 3 messages; you just can't tell. Too often inexperienced socket developers assume that they'll always get a complete, distinct, message, and often their testing methods ensure that this is the case during development.

 

Chunking the byte stream

One of the simplest protocols that a server could implement is a packet based protocol where the first X bytes are a header and the header contains details of the length of the complete packet. The server can read the header, work out how much more data is required and keep reading until it has a complete packet. At this point it can pass the packet to the business logic that knows how to process it. The code to handle this kind of situation might look something like this:

 

void CSocketServerWorkerThread::ReadCompleted(
    CSocketServer::Socket *pSocket,
    CIOBuffer *pBuffer)
{
    pBuffer = ProcessDataStream(pSocket, pBuffer);

    pSocket->Read(pBuffer);
}

CIOBuffer *CSocketServerWorkerThread::ProcessDataStream(
    CSocketServer::Socket *pSocket,
    CIOBuffer *pBuffer)
{
    bool done;

    do
    {
        done = true;

        const size_t used = pBuffer->GetUsed();

        if (used >= GetMinimumMessageSize())
        {
            const size_t messageSize = GetMessageSize(pBuffer);

            if (used == messageSize)
            {
                // we have a whole, distinct, message

                EchoMessage(pSocket, pBuffer);

                pBuffer = 0;

                done = true;
            }
            else if (used > messageSize)
            {
                // we have a message, plus some more data
                // allocate a new buffer, copy the extra data into it and try again

                CIOBuffer *pMessage = pBuffer->SplitBuffer(messageSize);

                EchoMessage(pSocket, pMessage);

                pMessage->Release();

                // loop again, we may have another complete message in there

                done = false;
            }
            else if (messageSize > pBuffer->GetSize())
            {
                Output(_T("Error: Buffer too small\nExpecting: ") + ToString(messageSize) +
                    _T("\nGot: ") + ToString(pBuffer->GetUsed()) + _T("\nBuffer size = ") +
                    ToString(pBuffer->GetSize()) + _T("\nData = \n") +
                    DumpData(pBuffer->GetBuffer(), pBuffer->GetUsed(), 40));

                pSocket->Shutdown();

                // throw the rubbish away
                pBuffer->Empty();

                done = true;
            }
        }
    }
    while (!done);

    // not enough data in the buffer, reissue a read into the same buffer to collect more data

    return pBuffer;
}

 

The key points of the code above are that we need to know if we have at least enough data to start looking at the header, and if we do then we can work out the size of the message somehow. Once we know that we have the minimum amount of data required we can work out if we have all the data that makes up this message. If we do, great, we process it. If the buffer contains only our message then we simply process the message, and since processing simply involves us posting a write request for the data buffer, we return 0 so that the next read uses a new buffer. If we have a complete message and some extra data then we split the buffer into two: a new one with our complete message in it, and the old one which has the extra data copied to the front of the buffer. We then pass our complete message to the business logic to handle and loop to handle the data that we had left over. If we don't have enough data we return the buffer and the Read() that we issue in ReadCompleted() reads more data into the same buffer, starting at the point that we're at now.

Since we're a simple server we have a fairly important limitation: all our messages must fit into the IO buffer size that our server is using. Often this is a practical limitation; maximum message sizes can be known in advance, and by setting our IO buffer size to be at least our maximum message size we avoid having to copy data around. If this isn't a viable limitation for your server then you'll need an alternative strategy here: copying data out of IO buffers and into something big enough to hold your whole message, or processing the message in pieces...
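
To make the splitting step concrete, SplitBuffer() behaves roughly like the sketch below: the complete message moves into a fresh buffer and the leftover bytes are shuffled to the front of the old one. The member names (m_allocator, m_buffer, m_used) and AddData() are assumptions, not the real CIOBuffer internals (and memmove() needs <cstring>):

// Rough sketch of what SplitBuffer() does; the real implementation also handles
// the OVERLAPPED bookkeeping and the reference count of the new buffer.
CIOBuffer *CIOBuffer::SplitBuffer(size_t bytesToRemove)
{
    // allocate a fresh buffer and copy the first 'bytesToRemove' bytes into it
    CIOBuffer *pNewBuffer = m_allocator.Allocate();

    pNewBuffer->AddData(m_buffer, bytesToRemove);

    // shuffle the remaining bytes to the front of this buffer so that the
    // next read can append to them
    m_used -= bytesToRemove;

    memmove(m_buffer, m_buffer + bytesToRemove, m_used);

    return pNewBuffer;
}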

 

In our simple server if the message is too big then we simply shutdown the socket connection, throw away the garbage data, and wait for the client to go away...

 

So how do we implement GetMinimumMessageSize() and GetMessageSize()? Well, obviously it's protocol dependent, but for our packet echo server we do it like this:

 

size_t CSocketServerWorkerThread::GetMinimumMessageSize() const
{
    return 1;
}

size_t CSocketServerWorkerThread::GetMessageSize(CIOBuffer *pBuffer) const
{
    size_t messageSize = *pBuffer->GetBuffer();

    return messageSize;
}

 

You may have noticed that in the case where we had a message and some extra data we called SplitBuffer() to break the complete message out into its own buffer, and then, once we'd dealt with it, we called Release(). This is a little of the implementation of the socket server's buffer allocator poking through. The buffers are reference counted. The only time we need to worry about this is if we create a new buffer using SplitBuffer, or if we decide to call AddRef() on the buffer because we wish to pass it off to another thread for processing. We'll cover this in more detail in the next article, but the gist of it is that every time we post a read or a write the buffer's reference count goes up and every time a read or write completes the count goes down, when there are no outstanding references the buffer goes back into the pool for reuse.
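
As a rough illustration of those rules in practice, handing a buffer off to another thread might look like this; QueueWorkItem() and ProcessWorkItem() are hypothetical stand-ins for whatever hand-off mechanism you use:

// Illustrative only: taking an explicit reference before handing a buffer to another thread.
void HandOffForProcessing(CSocketServer::Socket *pSocket, CIOBuffer *pBuffer)
{
    // We want another thread to work on this buffer after ReadCompleted() returns,
    // so take an explicit reference; without it the completion of the read would
    // drop the last reference and the buffer would go back into the pool.
    pBuffer->AddRef();

    QueueWorkItem(pSocket, pBuffer);    // hypothetical hand-off to a business logic thread
}

void ProcessWorkItem(CSocketServer::Socket *pSocket, CIOBuffer *pBuffer)
{
    // ... use the data in pBuffer ...

    // We're done with our reference; when all references are gone the
    // allocator puts the buffer back on its free list for reuse.
    pBuffer->Release();
}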

 

A packet echo server

 

A packet based echo server is available for download in SocketServer2.zip. The server expects to receive packets of up to 256 bytes which have a 1 byte header. The header byte contains the total length of the packet (including the header). The server reads complete packets and echoes them back to the client. You can test the echo server by using telnet, if you're feeling clever ;) Simply telnet to localhost on port 5001 (the port that the sample uses by default), type stuff and watch it get typed back at you. (Hint: CTRL+B is 2, which is the smallest packet that contains data.)
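
If you'd rather test with a small client program than with telnet, framing a packet for this protocol is straightforward. The helper below is a plain blocking-socket sketch and has nothing to do with the server classes; note that, on my reading of the framing, the single length byte counts itself, so the payload can be at most 254 bytes:

#include <winsock2.h>
#include <cstring>

// Frame and send one packet for the 1-byte-length protocol described above.
bool SendPacket(SOCKET s, const char *pData, size_t dataLength)
{
    if (dataLength > 254)
    {
        return false;                                   // won't fit in a single packet
    }

    unsigned char packet[255];

    packet[0] = static_cast<unsigned char>(dataLength + 1);   // total length, including this byte

    memcpy(packet + 1, pData, dataLength);

    const int packetLength = static_cast<int>(dataLength + 1);

    return packetLength == ::send(s, reinterpret_cast<const char *>(packet), packetLength, 0);
}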

 

A real internet RFC protocol

 

Some of the common internet protocols, such as RFC 1939 (POP3), use a CRLF-terminated ASCII text stream command structure. An example of how such a server might be implemented using the CSocketServer classes presented here can be found in SocketServer3.zip. The classes presented here provide an easy way to develop scalable socket servers using IO completion ports and thread pooling in such a way that the user of the classes need not concern themselves with these low level issues. To create your own server, simply derive from CSocketServer to create your listening socket, handle connection establishment and any of the other notifications that you require. Then derive from the WorkerThread class to provide the byte stream chunking and business logic. Running your server is as simple as this:

 

CSocketServer server(
    "+OK POP3 server ready\r\n",
    INADDR_ANY,    // address to listen on
    5001,          // port to listen on
    10,            // max number of sockets to keep in the pool
    10,            // max number of buffers to keep in the pool
    1024);         // buffer size

server.Start();

server.StartAcceptingConnections();

 

Your code can then do whatever it likes and the socket server runs on its own threads. When you are finished, simply call:

 

server.WaitForShutdownToComplete();

 

And the socket server will shut down.

In the next article we address the issue of moving the business logic out of the IO thread pool and into a thread pool of its own so that long operations don't block the IO threads.

 

Notes

The source was built using Visual Studio 6.0 SP5 and Visual Studio .Net. You need to have a version of the Microsoft Platform SDK installed.

All of the zip files mentioned can be found in the single zip file attached to this article, which is linked here.

 

Revision history

· 21st May 2002 - Initial revision posted on www.jetbyte.com.

· 27th May 2002 - Added pause/resume functionality to all servers and the server shutdown program. Use CSocket to protect from resource leaks when creating the listening socket. Refactored the Socket and CIOBuffer classes so that common list management code is now in CNodeList and common user data code is now in COpaqueUserData.

· 29th May 2002 - Linting and general code cleaning.

· 18th June 2002 - Removed call to ReuseAddress() during the creation of the listening socket as it is not required. Thanks to Alun Jones for pointing this out to me.

 

DISCLAIMER: The content provided in this article is not warranted or guaranteed by Developer Shed, Inc. The content provided is intended for entertainment and/or educational purposes in order to introduce to the reader key ideas, concepts, and/or product reviews. As such it is incumbent upon the reader to employ real-world tactics for security and implementation of best practices. We are not liable for any negative consequences that may result from implementing any information covered in our articles or tutorials. If this is a hardware review, it is not recommended to open and/or modify your hardware.