CreateNetworkServer? How many clients?
Posted: Mon Jan 10, 2011 11:08 pm
I have yet to use these functions, but I'm playing with sending some files around, and the way they're documented confuses me.
If you use the PureBasic CreateNetworkServer command, how many clients can connect to that port at one time? Is it actually designed to work like a non-blocking server, in the sense that each new client connects to that port and is then brokered off to another port for the connection? Or is it really a blocking server, in which each connection basically blocks all the others until it's done?
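From what I can tell from the docs, handling would look something like the event loop below. This is just a sketch of my current understanding, not tested code; the port number and buffer size are arbitrary, and I'm not sure whether the library really multiplexes all clients over the one listening port like this:

```purebasic
; Sketch: one listening port, many clients, serviced non-blocking
; via an event loop (my reading of the docs - please correct me).
InitNetwork()

If CreateNetworkServer(0, 20000)       ; port 20000 is just an example
  Repeat
    Event = NetworkServerEvent()       ; poll for activity on any client
    Client = EventClient()             ; which connection raised the event
    Select Event
      Case #PB_NetworkEvent_Connect
        Debug "Client connected: " + Str(Client)
      Case #PB_NetworkEvent_Data
        *Buffer = AllocateMemory(1024)
        Bytes = ReceiveNetworkData(Client, *Buffer, 1024)
        Debug "Received " + Str(Bytes) + " bytes"
        FreeMemory(*Buffer)
      Case #PB_NetworkEvent_Disconnect
        Debug "Client disconnected: " + Str(Client)
    EndSelect
  ForEver
EndIf
```

If it does work this way, then presumably no client blocks another as long as the loop keeps polling quickly, but that's exactly what I'd like confirmed.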
Also, can the bandwidth on these open sockets be throttled to slow down, say, the transfer of a file, or would I have to build some form of file splitting and throttling into the process myself? Obviously there must be some upper limit on the size of a single send, judging by how these functions are documented and by the forum posts I've read so far. So presumably I should split very large files into chunks anyway, just to reduce the payload per send.
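If I did have to roll my own, I imagine it would look roughly like this: send the file in fixed-size chunks and pause between chunks. The chunk size and delay here are illustrative numbers I made up (8 KB every 100 ms would be somewhere around 80 KB/s), and the procedure name is my own:

```purebasic
; Hedged sketch of manual throttling: fixed chunks + a delay between sends.
#ChunkSize = 8192

Procedure SendFileThrottled(Connection, Filename.s)
  Protected *Chunk = AllocateMemory(#ChunkSize)
  Protected File = ReadFile(#PB_Any, Filename)
  If File
    While Not Eof(File)
      Bytes = ReadData(File, *Chunk, #ChunkSize)  ; read next chunk from disk
      SendNetworkData(Connection, *Chunk, Bytes)  ; push it to the client
      Delay(100)  ; crude rate limit - tune chunk size/delay for target rate
    Wend
    CloseFile(File)
  EndIf
  FreeMemory(*Chunk)
EndProcedure
```

Is something like this the expected approach, or does the library offer anything built in?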
Is there really any advantage to these functions for me in an environment where 20 or more local computers may be downloading large files? Would I be better off using the FTP functions instead, and letting an FTP server handle the bandwidth throttling (assuming the FTP server in question supports that feature)? After all, a properly threaded or asynchronous FTP server can easily handle 20 computers at once.
Thanks