From: lcargill99@comcast.com   
      
   Rick Jones wrote:   
   > Les Cargill wrote:   
   >> Rick Jones wrote:   
   >>> Even if your Server checks for and correctly interprets the read   
   >>> return of zero before it goes to send(),   
   >> it does.   
   >   
   > Then why wasn't that in the sequence of events you gave? If it   
   > actually got a read return of zero, then it should not have called   
   > send() right? Unless it cannot assume that about the application   
   > protocol.   
   >   
      
   Please see below. I misunderstood you just as you suspected.   
      
   >> Right or wrong, I am not in a "read()" phase at this point in the   
   >> program. I *have* read; select() and recv() were called and the   
   >> client hung up before I could respond...   
   >   
   > That makes it sound like you aren't actually checking for a read   
   > return of zero before you make the send() call. Just to be clear, I'm   
   > not speaking to the recv() or receives you did to get the client's   
   > request.   
      
   Ah, okay. I was confused. No, I have not done that yet.   
      
   > I'm suggesting an additional check of the socket to see if   
   > it is "readable" just before you go to write to it. I am assuming the   
   > application protocol here is such that there are no "pipelined"   
   > requests from the client, only either one request per connection, or   
   > one request outstanding at a time.   
   >   
      
Sadly, requests can span multiple calls to recv(), but I can read early
and stash any bytes to be consumed when the normal recv() path
runs. Basically, all read data is buffered, then parsed once a
complete request has arrived.
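That buffering scheme can be sketched roughly as below. This is an illustrative sketch only; the struct, function names, and the framing convention (newline-terminated requests here) are my assumptions, not anything from the actual server.

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Illustrative sketch: accumulate bytes from recv() and hand back one
 * complete request at a time.  Assumes newline-terminated requests. */
struct reqbuf {
    char   data[4096];
    size_t len;
};

/* Append whatever recv() delivers to the buffer.
 * Returns bytes read, 0 on orderly close, -1 on error. */
ssize_t reqbuf_fill(struct reqbuf *rb, int fd)
{
    ssize_t n = recv(fd, rb->data + rb->len,
                     sizeof rb->data - rb->len, 0);
    if (n > 0)
        rb->len += (size_t)n;
    return n;
}

/* If a full request (through '\n') is buffered, copy it out and
 * shift the remainder down.  Returns the request length, or 0 if
 * the request is still incomplete. */
size_t reqbuf_take(struct reqbuf *rb, char *out, size_t outsz)
{
    char *nl = memchr(rb->data, '\n', rb->len);
    if (nl == NULL)
        return 0;
    size_t reqlen = (size_t)(nl - rb->data) + 1;
    if (reqlen > outsz)
        reqlen = outsz;          /* truncate oversized requests */
    memcpy(out, rb->data, reqlen);
    memmove(rb->data, rb->data + reqlen, rb->len - reqlen);
    rb->len -= reqlen;
    return reqlen;
}
```

The point is just that recv() boundaries and request boundaries are decoupled: partial requests sit in the buffer until a later recv() completes them.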
      
      
   > Still, even if you were say doing a polling poll/select/whatnot for   
   > socket readability, and posting a recv() of one byte or doing an NREAD   
   > or a MSG_PEEK or whatnot, before doing that send() in your "writing   
   > mode" there will be that window, which means you do have to address   
   > the matter of the server application terminating without complaint   
   > when that send() call is made into a connection where the remote has   
   > closed. I don't see getting around it, only making it (hopefully)   
   > less common.   
   >   
      
   Right.   
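A pre-send probe along the lines Rick describes might look like the sketch below (function name and exact policy are my assumptions). As the quoted text says, it is best-effort only: a FIN or RST can still arrive in the window between this check and the send().

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Best-effort probe: has the peer already closed (or reset) the
 * connection?  A nonblocking MSG_PEEK leaves any pending request
 * bytes in the socket buffer for the normal recv() path.
 * Returns 1 if the connection is known dead, 0 otherwise. */
int peer_gone(int fd)
{
    char c;
    ssize_t n = recv(fd, &c, 1, MSG_PEEK | MSG_DONTWAIT);
    if (n == 0)
        return 1;                 /* orderly close: FIN already received */
    if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
        return 1;                 /* e.g. ECONNRESET after an RST */
    return 0;                     /* readable data, or simply nothing yet */
}
```

Even with this probe in place, the send() path still has to tolerate an error return (EPIPE/ECONNRESET); the check only makes that case less common, which is exactly the point made above.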
      
   > And while you expressed active uninterest in the "it shouldn't do   
   > that" bit, even if the FIN had arrived from Client, as far as Server's   
   > TCP is concerned, it is still a perfectly valid "send only"   
   > connection. The RST that would be elicited would arrive only after   
   > the data reached Client, and unless this system blocks the send() call   
   > until all the data is ACKed by the remote (ugh) I would have expected   
   > the send() call to complete either immediately after queuing the data   
   > to the socket, or at least upon the data hitting the wire.   
   >   
   > Of course, if Client has done an abortive close (shame on it...) that   
   > will have been a RST segment rather than a FIN, and then presumably   
   > the TCP stack, upon your calling send() would say "Yo! Ungood!" and   
   > should have caused the send() call to return an error status.   
   >   
   > Any chance your application on server isn't correctly handling an   
   > error return from the send() call?   
      
It is always possible, but it's something I've gone through
fairly carefully.
      
FWIW, the instrumentation I used to arrive at this conclusion
basically writes characters to stdout, calls fflush(stdout),
then usleep()s for a few milliseconds. The send() never returns.
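One hedged guess about the "send() never returns" symptom (an assumption on my part, not confirmed anywhere in this thread): by default SIGPIPE terminates the process, so a send() on a connection the peer has torn down can kill the server outright rather than return -1, which would look exactly like send() never returning. Ignoring SIGPIPE turns that into an ordinary EPIPE error return:

```c
#include <errno.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>

/* With SIGPIPE out of the way, send() on a dead connection fails
 * with EPIPE (or ECONNRESET) instead of killing the process. */
ssize_t safe_send(int fd, const void *buf, size_t len)
{
    signal(SIGPIPE, SIG_IGN);   /* process-wide; normally done once at startup */
#ifdef MSG_NOSIGNAL
    return send(fd, buf, len, MSG_NOSIGNAL);   /* per-call form on Linux */
#else
    return send(fd, buf, len, 0);
#endif
}
```

If this is what is happening, the "carefully checked" error handling never gets a chance to run, because the signal fires before send() can return.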
      
      
*Sigh* I miss UDP. Might just come to that...
      
   >   
   > rick jones   
   >   
      
   --   
   Les Cargill   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   