Then there's a client thread mutex that protects fetching from the job queue and also the sending and receiving; this mutex is low contention but necessary to guard against connection reuse. On top of that there's a mutex in TLS.pbi which globally locks the sends and receives.
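For reference, the layering is roughly this; a minimal sketch, assuming one mutex per client thread (the names are illustrative, not the actual code):

Code:
; sketch of the locking described above; ClientMutex would really be per client thread
Global ClientMutex = CreateMutex()  ; protects the job queue and connection reuse
Global TlsMutex    = CreateMutex()  ; in TLS.pbi: globally serialises sends/receives

Procedure ClientSend(Connection, *Buffer, Length)
  Protected result
  LockMutex(ClientMutex)  ; stop two jobs reusing the same connection at once
  LockMutex(TlsMutex)     ; global TLS lock around every send and receive
  result = SendNetworkData(Connection, *Buffer, Length)
  UnlockMutex(TlsMutex)
  UnlockMutex(ClientMutex)
  ProcedureReturn result
EndProcedure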
The complexity is partially due to error handling. For instance, the ReceiveNetworkData documentation states:
Returns the number of bytes received. If 'Result' is equal to DataBufferLength then more data is available to be read. If an error occurred on the connection (link broken, connection closed by the server etc...) 'Result' will be -1.
But a raw socket recv() returns n bytes read, 0 when the peer has closed the connection, or -1 on error, and I don't know whether ReceiveNetworkData ever returns 0 or treats that as a -1.
If you get a 0 you know the connection is dropped and you can stop polling until your timeout. If you get a -1, 99% of the time it's a WOULDBLOCK, which means "please wait and try again, my TCP buffer is full". There are seven recoverable errors that should be checked to see if you should continue the operation, and the twenty or so other errors mean abort.
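With proper 0 handling, the caller side could look like this sketch (Connection and *Buffer are assumed to come from the surrounding code, and it assumes a fixed ReceiveNetworkData that passes the 0 through):

Code:
#BufferSize = 4096
bytes = ReceiveNetworkData(Connection, *Buffer, #BufferSize)
Select bytes
  Case 0    ; peer closed the connection: stop polling it
    CloseNetworkConnection(Connection)
  Case -1   ; error: decide between retry and abort (see the list below)
    ; check the last socket error here
  Default   ; n bytes read; if bytes = #BufferSize more data is waiting
    ; process *Buffer
EndSelect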
ReceiveNetworkData should wrap WSAGetLastError and errno and filter the errors; then it would behave as documented, but at the moment it doesn't, and I still don't know if it returns 0 at all.
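A rough sketch of that wrapping (LastSocketError() is my name, not a PureBasic built-in; the Linux branch assumes glibc, where errno lives behind __errno_location()):

Code:
CompilerIf #PB_Compiler_OS = #PB_OS_Windows
  Procedure LastSocketError()
    ProcedureReturn WSAGetLastError_()  ; WinSock keeps its own error state
  EndProcedure
CompilerElse
  ImportC ""
    __errno_location()  ; glibc: returns the address of the thread-local errno
  EndImport
  Procedure LastSocketError()
    ProcedureReturn PeekL(__errno_location())
  EndProcedure
CompilerEndIf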
So to fix ReceiveNetworkData and SendNetworkData for TCP, they should test for the errors below and delay and try again, so that they either return n bytes read or -1 to abort:
Code:
#WSA_IO_INCOMPLETE = 996    ; overlapped I/O event object not yet signaled
#WSA_IO_PENDING    = 997    ; overlapped operation will complete later
#WSAEINTR          = 10004  ; blocking call interrupted
#WSAEMFILE         = 10024  ; too many open sockets
#WSAEWOULDBLOCK    = 10035  ; non-blocking socket: operation would block
#WSAEINPROGRESS    = 10036  ; a blocking operation is already executing
#WSAEALREADY       = 10037  ; operation already in progress on this socket
; for libretls: tls_read/tls_write say "repeat the same call"
#_WANT_POLLIN  = -2  ; TLS_WANT_POLLIN: wait until readable, then retry
#_WANT_POLLOUT = -3  ; TLS_WANT_POLLOUT: wait until writable, then retry
Or just add a function to check if the error is recoverable so we can set the delay and try again, though I think it'd be better to wrap it.
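Something like this sketch, building on the constants above and the LastSocketError() helper from earlier (IsRecoverableNetworkError(), TcpReceive() and the delay/retry values are my inventions):

Code:
; hypothetical classifier: retry on these, abort on everything else
Procedure IsRecoverableNetworkError(err)
  Select err
    Case #WSA_IO_INCOMPLETE, #WSA_IO_PENDING, #WSAEINTR, #WSAEMFILE, #WSAEWOULDBLOCK, #WSAEINPROGRESS, #WSAEALREADY
      ProcedureReturn #True
    Default
      ProcedureReturn #False
  EndSelect
EndProcedure

; hypothetical wrapper: returns n bytes read (0 = peer closed) or -1 to abort
Procedure TcpReceive(Connection, *Buffer, Length, maxTries = 100)
  Protected bytes, tries
  For tries = 1 To maxTries
    bytes = ReceiveNetworkData(Connection, *Buffer, Length)
    If bytes >= 0
      ProcedureReturn bytes
    EndIf
    If bytes = #_WANT_POLLIN Or bytes = #_WANT_POLLOUT  ; if this wraps the tls_read path: repeat the call
      Delay(10)
      Continue
    EndIf
    If IsRecoverableNetworkError(LastSocketError()) = #False
      ProcedureReturn -1  ; unrecoverable: abort
    EndIf
    Delay(10)  ; wait for the TCP buffer to drain, then try again
  Next
  ProcedureReturn -1  ; gave up after maxTries recoverable errors
EndProcedure

The maxTries cap is there so a permanently stalled connection still fails instead of retrying forever; tune it and the delay to taste.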