Is there a need to close files that have no reference to them?

As a complete beginner to programming, I am trying to understand the basic concepts of opening and closing files. One exercise I am doing is creating a script that allows me to copy the contents from one file to another.

in_file = open(from_file)
indata = in_file.read()

out_file = open(to_file, 'w')
out_file.write(indata)

out_file.close()
in_file.close()

I have tried to shorten this code and came up with this:

indata = open(from_file).read()
open(to_file, 'w').write(indata)

This works and looks a bit more efficient to me. However, this is also where I get confused. I think I left out the references to the opened files; there was no need for the in_file and out_file variables. However, does this leave me with two files that are open, but have nothing referring to them? How do I close these, or is there no need to?

Any help that sheds some light on this topic is much appreciated.


You asked about the "basic concepts", so let's take it from the top: When you open a file, your program gains access to a system resource, that is, to something outside the program's own memory space. This is basically a bit of magic provided by the operating system (a system call, in Unix terminology). Hidden inside the file object is a reference to a "file descriptor", the actual OS resource associated with the open file. Closing the file tells the system to release this resource.
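To make the "file descriptor" concrete, here is a minimal sketch (using a temporary file so no real filename is assumed): Python exposes the underlying OS descriptor through the file object's fileno() method.

```python
import tempfile

# Every open Python file wraps an OS-level file descriptor,
# a small integer the operating system hands out per open file.
with tempfile.TemporaryFile() as f:
    fd = f.fileno()
    print(fd)  # a small non-negative integer
```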

As an OS resource, the number of files a process can keep open is limited: long ago the per-process limit was about 20 on Unix. Right now my OS X box imposes a limit of 256 open files (though this is a soft limit and can be raised). Other systems might set limits of a few thousand, or in the tens of thousands (per user, not per process, in that case). When your program ends, all its resources are automatically released. So if your program opens a few files, does something with them and exits, you can be sloppy and you'll never know the difference. But if your program will be opening thousands of files, you'll do well to release open files as you go, to avoid exceeding OS limits.
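You can inspect that limit yourself; here is a sketch using the standard resource module (which is Unix-only, so this assumes you are not on Windows):

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# The soft limit is what currently applies; the hard limit is the
# ceiling the soft limit may be raised to without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```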

There's another benefit to closing files before your process exits: if you opened a file for writing, closing it first "flushes its output buffer". I/O libraries optimize disk use by collecting ("buffering") what you write and saving it to disk in batches; closing the file writes out whatever is still sitting in the buffer. If you write text to a file and immediately try to reopen and read it without first closing the output handle, you'll find that not everything has been written out. Also, if your program is terminated too abruptly (by a signal, or occasionally even during a normal exit), the output might never be flushed.
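A short sketch of that buffering effect, writing to a throwaway file in a temporary directory:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "buffered.txt")

out = open(path, "w")
out.write("hello")        # a short write: it sits in the output buffer

with open(path) as f:
    before = f.read()     # nothing flushed yet, so this reads ''

out.close()               # close() flushes the buffer to disk

with open(path) as f:
    after = f.read()      # now the data is on disk

print(repr(before), repr(after))  # '' 'hello'
```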

There's already plenty of other answers on how to release files, so here's just a brief list of the approaches:

  • Explicitly with close(). (Note for Python newbies: don't forget the parens! My students like to write in_file.close, which does nothing.)

  • Recommended: Implicitly, by opening files with the with statement. The close() method will be called when the end of the with block is reached, even in the event of abnormal termination (from an exception).

    with open("data.txt") as in_file:
        data = in_file.read()
    
  • Implicitly, by the reference counter or garbage collector, if your Python implementation provides it. This is not recommended since it's not entirely portable; see the other answers for details. That's why the with statement was added to Python.

  • Implicitly, when your program ends. If a file is open for output, this may run a risk of the program exiting before everything has been flushed to disk.
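The parenthesis gotcha from the first bullet, as runnable code (again using a temporary file so no real filename is assumed):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "gotcha.txt")
f = open(path, "w")

f.close                    # evaluates the bound method; does NOT call it
still_open = not f.closed  # True: the file is still open

f.close()                  # with the parentheses, the file really closes
print(f.closed)            # True
```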


  • The Pythonic way to deal with this is to use the with statement:

    with open(from_file) as in_file, open(to_file, 'w') as out_file:
        indata = in_file.read()
        out_file.write(indata)
    

    Used with files like this, with will ensure all the necessary cleanup is done for you, even if read() or write() raises an exception.


    The default Python interpreter, CPython, uses reference counting. This means that once there are no references to an object, it gets garbage collected, i.e. cleaned up.

    In your case, doing

    open(to_file, 'w').write(indata)
    

    will create a file object for to_file, but not assign it to a name. This means there is no reference to it, and you cannot possibly manipulate the object after this line.

    CPython will detect this and clean up the object after it has been used. In the case of a file, this means closing it automatically. In principle, this is fine, and your program won't leak memory.
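You can watch this reference-counting cleanup happen with a small stand-in class (Noisy here is hypothetical, not a real file type, but file objects close themselves the same way once their last reference disappears):

```python
events = []

class Noisy:
    # __del__ runs when the last reference to the instance goes away
    def __del__(self):
        events.append("cleaned up")

Noisy()        # created but never bound to a name: refcount hits zero at once
print(events)  # in CPython: ['cleaned up']
```

Note that this immediate cleanup is CPython behavior; a tracing garbage collector (as in PyPy) may run __del__ much later.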

    The "problem" is this mechanism is an implementation detail of the CPython interpreter. The language standard explicitly gives no guarantee for it! If you are using an alternate interpreter such as pypy, automatic closing of files may be delayed indefinitely. This includes other implicit actions such as flushing writes on close.

    This problem also applies to other resources, e.g. network sockets. It is good practice to always explicitly handle such external resources. Since Python 2.6, the with statement makes this elegant:

    with open(to_file, 'w') as out_file:
        out_file.write(indata)
    

    TLDR: It works, but please don't do it.
