It is a double-edged sword: the server can cache data so transfers are faster (i.e. served directly from RAM), but the caching layer adds a little latency to every read and write.
I am assuming that you have already done some calculations on the raw read speed required (i.e. the data rate of the media itself), the number of hard drives, and seek times. I am not familiar enough with ZFS to know whether it can do something similar to RAID striping (to spread the load across more than one HDD).
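If you have not done that calculation yet, here is a minimal sketch of the back-of-envelope version. All the numbers are hypothetical placeholders (stream count, bitrate, per-drive throughput, seek penalty), so substitute your own measurements:

```python
# Back-of-envelope check: can N drives sustain M concurrent media streams?
# Every number here is a hypothetical placeholder - plug in your own.

streams = 20            # concurrent clients
bitrate_mbps = 8        # per-stream media bitrate (Mbit/s)
drive_seq_mb_s = 150    # sequential throughput of one HDD (MB/s)
seek_penalty = 0.5      # fraction of throughput lost to seeks under mixed load
drives = 4              # data drives the load is spread across

required_mb_s = streams * bitrate_mbps / 8   # Mbit/s -> MB/s
available_mb_s = drives * drive_seq_mb_s * seek_penalty

print(f"required:  {required_mb_s:.1f} MB/s")
print(f"available: {available_mb_s:.1f} MB/s")
print("OK" if available_mb_s >= required_mb_s else "too slow")
```

The seek penalty is the number to be most suspicious of: with many concurrent readers the drives spend a lot of time seeking, so effective throughput can be far below the sequential spec sheet figure.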
Since you are doing this over a network, latency in the network itself can also play a role (if the server hosting the files is not physically close, where close means roughly within 100 miles). The TCP transfer window size is also important, but that is more complicated than what you're asking about here.
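To give a feel for why the window size matters: the bandwidth-delay product tells you how much data must be "in flight" to keep the link full, and the TCP window has to be at least that large. A quick sketch with made-up numbers (substitute your own link speed and measured RTT):

```python
# Bandwidth-delay product: the minimum TCP window needed to fill a link.
# Hypothetical numbers - use your own link speed and measured RTT.

link_mbps = 1000    # 1 Gbit/s link
rtt_ms = 20         # round-trip time to the server

bdp_bytes = (link_mbps * 1e6 / 8) * (rtt_ms / 1e3)
print(f"window needed: {bdp_bytes / 1024:.0f} KiB")
```

If the window is smaller than this, the sender stalls waiting for ACKs and you never reach the link's full speed, no matter how fast the disks are.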