Setting a time limit for sftp.get() of the Paramiko module
I am using Paramiko's SFTP client to download a file from a remote server to a client (i.e., a get operation).
The file to be transferred is fairly large, ~1 GB.
I would like the get operation to time out if it takes more than 10 s.
But setting the timeout value for connect doesn't work; it appears to be the timeout only for establishing the SSH connection, not for the session as a whole.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# timeout here applies only to establishing the connection
ssh.connect(host, username=username, password=password, timeout=10.0)
sftp = ssh.open_sftp()
start_time = time.time()
sftp.get(remote_path, local_path)  # remote path first, then local destination
elapsed_time = time.time() - start_time
print elapsed_time
sftp.close()
I also tried setting the timeout value on the underlying channel, but that doesn't work either:
sftp.get_channel().settimeout(10.0)
But this timeout again applies only to individual read/write operations on the channel, not to the whole transfer.
There is a similar question, Timeout in paramiko (python), but it only answers the timeout for creating the SSH connection.
Update 1
Following the comments of @Martin, I implemented a callback function which checks the time limit for the sftp get operation:
import paramiko
import time

class TimeLimitExceeded(Exception):
    pass

timelimit = 10
start_time = time.time()

def _timer(transferred, total):
    # get() calls this with (bytes transferred so far, total bytes)
    elapsed_time = time.time() - start_time
    if elapsed_time > timelimit:
        raise TimeLimitExceeded

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=username, password=password, timeout=10.0)
sftp = ssh.open_sftp()
try:
    sftp.get(remote_path, local_path, callback=_timer)
except TimeLimitExceeded:
    print "The operation took too much time to complete"
sftp.close()
But it takes a long time for the exception to actually be raised; the code is blocking somewhere. I dived into the Paramiko source code and found the culprit: the _close(self, async=False) method of sftp_file.py.
Any help getting around this?
Update 2
I tried closing the channel itself when the time limit is exceeded (see the sketch below). Because the prefetch is implemented by a separate daemon thread, the exception is then flushed to the console rather than raised in my code.
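The attempt looks roughly like this (a sketch; it reuses _timer, timelimit, start_time and sftp from Update 1):

def _timer(transferred, total):
    elapsed_time = time.time() - start_time
    if elapsed_time > timelimit:
        # tear the channel down under the prefetch thread
        sftp.get_channel().close()
        raise TimeLimitExceeded

The prefetch thread then dies with: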
File "/scratch/divjaisw/python2.7/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/scratch/divjaisw/python2.7/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/scratch/divjaisw/python_virtual/lib/python2.7/site-packages/paramiko/sftp_file.py", line 488, in _prefetch_thread
num = self.sftp._async_request(self, CMD_READ, self.handle, long(offset), int(length))
File "/scratch/divjaisw/python_virtual/lib/python2.7/site-packages/paramiko/sftp_client.py", line 754, in _async_request
self._send_packet(t, msg)
File "/scratch/divjaisw/python_virtual/lib/python2.7/site-packages/paramiko/sftp.py", line 170, in _send_packet
self._write_all(out)
File "/scratch/divjaisw/python_virtual/lib/python2.7/site-packages/paramiko/sftp.py", line 133, in _write_all
n = self.sock.send(out)
File "/scratch/divjaisw/python_virtual/lib/python2.7/site-packages/paramiko/channel.py", line 715, in send
return self._send(s, m)
File "/scratch/divjaisw/python_virtual/lib/python2.7/site-packages/paramiko/channel.py", line 1081, in _send
raise socket.error('Socket is closed')
error: Socket is closed
What you ask for is not really a timeout. The term "timeout" is used for a limit on waiting for a response.
But your server does not stop responding; the communication is active.
What you ask for is rather a limit on the duration of an operation. You can hardly expect this to be implemented ready-made for you; it's quite a specific requirement. You have to implement it yourself.
You can use the callback argument of the get method:
def get(self, remotepath, localpath, callback=None):
In the callback, check the duration of the transfer and raise an exception if the time limit expires.
This won't cancel the transfer immediately. To optimize transfer performance, Paramiko queues up to 100 read requests to the server (see the condition in sftp_file._write). Once you attempt to cancel the transfer, Paramiko has to wait for the responses to those (up to 100) outstanding requests to clear the queue.
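If that drain delay is unacceptable, you can bypass get() and its prefetch machinery altogether and copy the file in explicit chunks, checking the deadline between reads. A minimal sketch (get_with_deadline is my own name, the 32 KB chunk size is arbitrary, and TimeLimitExceeded is the exception from the question):

import time

def get_with_deadline(sftp, remote_path, local_path, timelimit):
    # Sequential reads issue one request at a time, so there is
    # no queue of outstanding requests to drain when we abort.
    start_time = time.time()
    remote_file = sftp.open(remote_path, 'rb')
    local_file = open(local_path, 'wb')
    try:
        while True:
            if time.time() - start_time > timelimit:
                raise TimeLimitExceeded
            data = remote_file.read(32768)
            if not data:
                break
            local_file.write(data)
    finally:
        remote_file.close()
        local_file.close()

The trade-off is speed: without the request pipelining that get() does, every read costs a full network round trip, so the transfer itself will be considerably slower.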