The second method is called asynchronous socket programming, while the first method is traditional blocking sockets.
Which of these two ways is best?
Neither one is always best; use whichever method works better for your particular program. In my experience, asynchronous sockets are easier to use for client applications, and blocking sockets are easier to use for server applications.
Since your post asks about server-side programming, I will talk a bit about the pros and cons of server sockets.
The way I usually write a server is with blocking sockets, spawning a new thread for each new connection. This has the advantage that each thread handles its own connection, so you don't have to worry about sending the wrong data to the wrong client. It is also very easy to keep track of all connected clients this way, since each thread represents one client. One thing you need to be aware of here, though: any shared data (data accessed by more than one thread) needs to be made thread safe to avoid corruption.
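To make that concrete, here is a minimal sketch of a thread-per-connection blocking server. I'm using Java purely for illustration, and the port number, the echo-style handling, and the shared connection counter are just placeholders I made up for the example:

    import java.io.*;
    import java.net.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class BlockingServer {
        // Shared data touched by every connection thread, so it must be thread safe.
        private static final AtomicInteger connectedClients = new AtomicInteger(0);

        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(9000)) {      // port is arbitrary
                while (true) {
                    Socket client = listener.accept();                  // blocks until someone connects
                    new Thread(() -> handleClient(client)).start();     // one thread per connection
                }
            }
        }

        private static void handleClient(Socket client) {
            connectedClients.incrementAndGet();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Blocking here only stalls this client; every other client has its own thread.
                    out.println("echo: " + line);
                }
            } catch (IOException e) {
                // Client dropped the connection; just fall through and clean up.
            } finally {
                connectedClients.decrementAndGet();
                try { client.close(); } catch (IOException ignored) { }
            }
        }
    }

The AtomicInteger is the "shared data" caveat in practice: anything touched by more than one connection thread has to go through an atomic, a lock, or some other thread-safe mechanism.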
With asynchronous sockets on a server, everything runs on a single thread. This is nice because you never have to worry about threads corrupting shared data. The downside is that it is easy to lock up the server and leave clients waiting for long stretches.
Imagine the following function gets called when data is available to be read from the socket:
    void onDataAvailable(SocketChannel client) {
        // While you are processing the data here, every other socket connection is on
        // hold. If the processing for one connection takes 1 second and there are 50
        // clients connected, the last client in line waits close to 50 seconds.
    }

A threaded server, on the other hand, can simply context switch to another thread at any time, so one slow client does not stall the rest.
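To make the single-threaded case concrete as well, here is a minimal sketch of the kind of selector-based event loop that would be calling a handler like onDataAvailable(). Again, Java is used purely for illustration, and the port and the echo "processing" are placeholders:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    public class AsyncServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel listener = ServerSocketChannel.open();
            listener.bind(new InetSocketAddress(9000));          // port is arbitrary
            listener.configureBlocking(false);
            listener.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();                               // sleep until some socket is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        // New client: register it with the same selector, on the same thread.
                        SocketChannel client = listener.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        // This is the onDataAvailable() spot. Until this branch finishes,
                        // no other client's data gets read.
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(4096);
                        if (client.read(buffer) < 0) { client.close(); continue; }
                        buffer.flip();
                        client.write(buffer);                    // trivial echo stands in for real work
                    }
                }
            }
        }
    }

Accepting, reading, and the actual processing all happen on that one thread, which is exactly why a 1-second handler with 50 connected clients becomes a problem.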