In our previous tutorial, we created a datagram listener and a couple of clients that sent it datagrams. That server responded to any datagram sent to the UDP port on which it was listening. What we really want, however, is a server that responds only to clients meeting some criteria.
Why is this important?
Imagine you're writing a distributed system that will have many server applications. Each of those will probably listen at a different (and well-known) TCP/IP address so that clients can find each server without confusion. In a large system, however, you might have several versions of the same server running at the same time*. You probably don't want those servers running at different addresses, since that breaks the well-known-address requirement.
By creating a datagram listener similar to the one in the last tutorial, a client can send broadcast datagrams to locate all of the servers listening at the well-known address. By adding a thin protocol layer to the datagram contents, the servers can be selective about which clients they respond to: if each client sends its version signature in the broadcast, the servers can choose to respond only to clients with matching versions.
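A minimal Python sketch of that version-signature handshake. The port number, version string, and reply format here are invented for illustration, and the loopback address stands in for a real broadcast address; the earlier tutorials' code may use a different language and wire format.

```python
import socket
import threading
import time

# Hypothetical values -- your well-known port and version string will differ.
SERVER_VERSION = b"1.2"
WELL_KNOWN_PORT = 50000

def serve_once():
    """Handle one broadcast datagram, replying only if the client's
    version signature matches this server's."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", WELL_KNOWN_PORT))
    data, addr = srv.recvfrom(1024)
    # The "thin protocol layer": the datagram body carries the client's
    # version signature, and the server stays silent on a mismatch.
    if data == SERVER_VERSION:
        srv.sendto(b"HELLO " + SERVER_VERSION, addr)
    srv.close()

def discover(version):
    """Send our version signature and wait briefly for a reply."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    cli.settimeout(1.0)
    # Loopback stands in for a broadcast address in this sketch.
    cli.sendto(version, ("127.0.0.1", WELL_KNOWN_PORT))
    try:
        reply, _ = cli.recvfrom(1024)
        return reply
    except socket.timeout:
        return None  # no server with a matching version answered
    finally:
        cli.close()

t = threading.Thread(target=serve_once)
t.start()
time.sleep(0.2)          # give the listener time to bind
print(discover(b"1.2"))  # the matching server's reply
t.join()
```

A client sending a non-matching signature simply times out and gets `None` back, which is exactly the selective silence described above.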
This discrimination based on the client's signature could be used by the server for security purposes or for version confirmation.
*Note: I'm assuming that your multiple server versions will be running on different hosts, since only one server can listen at the well-known address on a given host.