How can I make my Python script faster?

I'm pretty new to Python, and I have written a (probably very ugly) script that is supposed to randomly select a subset of sequences from a FASTQ file. A FASTQ file stores information in blocks of four lines each; the first line in each block starts with the character "@". My input FASTQ file is 36 GB and contains about 14,000,000 lines.

I tried to rewrite an already existing script that used way too much memory, and I managed to reduce the memory usage a lot. But the script takes forever to run, and I don't see why.

import argparse
import random
import subprocess
import sys

parser = argparse.ArgumentParser()
parser.add_argument("infile", type=str, help="The name of the fastq input file.", default=sys.stdin)
parser.add_argument("outputfile", type=str, help="Name of the output file.")
parser.add_argument("-n", help="Number of sequences to sample", default=1)
args = parser.parse_args()


def sample():
    linesamples = []
    infile = open(args.infile, 'r')
    outputfile = open(args.outputfile, 'w')
    # count the number of fastq "chunks" in the input file:
    seqs = subprocess.check_output(["grep", "-c", "@", str(args.infile)])
    # randomly select n fastq "chunks":
    seqsamples = random.sample(xrange(0,int(seqs)), int(args.n))
    # make a list of the lines that are to be fetched from the fastq file:
    for i in seqsamples:
        linesamples.append(int(4*i+0))
        linesamples.append(int(4*i+1))
        linesamples.append(int(4*i+2))
        linesamples.append(int(4*i+3))
    # fetch lines from input file and write them to output file.
    for i, line in enumerate(infile):
        if i in linesamples:
            outputfile.write(line)

The grep step takes practically no time at all, but after more than 500 minutes the script still hasn't started writing to the output file. So I suppose one of the steps between grep and the final for loop is what takes so long, but I don't understand which step exactly, or what I can do to speed it up.


Depending on the size of linesamples, the membership test (if i in linesamples) will take a long time, since you are searching through a list on every iteration over infile. Converting linesamples to a set makes each lookup constant-time. I have also replaced enumerate with a line_num counter that is incremented on each iteration, although that change is minor compared to the set conversion.

def sample():
    linesamples = set()
    infile = open(args.infile, 'r')
    outputfile = open(args.outputfile, 'w')
    # count the number of fastq "chunks" in the input file:
    seqs = subprocess.check_output(["grep", "-c", "@", str(args.infile)])
    # randomly select n fastq "chunks":
    seqsamples = random.sample(xrange(0, int(seqs)), int(args.n))
    # build a set of the line numbers that are to be fetched from the fastq file:
    for i in seqsamples:
        linesamples.add(4 * i)
        linesamples.add(4 * i + 1)
        linesamples.add(4 * i + 2)
        linesamples.add(4 * i + 3)
    # fetch lines from the input file and write them to the output file:
    line_num = 0
    for line in infile:
        if line_num in linesamples:
            outputfile.write(line)
        line_num += 1
    infile.close()
    outputfile.close()
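If you want to see how much the membership test matters, here is a small self-contained comparison of list versus set lookups using timeit (illustrative only; the container size is made up):

import timeit

# A membership test on a list scans every element (O(n)); on a set it
# is a hash lookup (O(1) on average). Repeated once per input line,
# millions of times, this difference dominates the runtime.
setup = "data_list = list(range(100000)); data_set = set(data_list)"
print(timeit.timeit("99999 in data_list", setup=setup, number=1000))
print(timeit.timeit("99999 in data_set", setup=setup, number=1000))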

You said that grep finishes quite quickly, so instead of just using grep to count the occurrences of @, have grep print the byte offset of each matching line (using grep's -b option). Then use random.sample to pick whichever blocks you want, call infile.seek to jump to each chosen byte offset, and write four lines from there. That way Python never has to scan the whole 36 GB file line by line.
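A minimal sketch of that idea follows. The function name sample_by_offset is mine, and the pattern "^@" is an assumption: it anchors the match to line starts, but FASTQ quality lines may also begin with "@", so on real data the offsets are not guaranteed to all be record starts.

import random
import subprocess

def sample_by_offset(infile_name, outputfile_name, n):
    # grep -b prefixes each matching line with its byte offset,
    # e.g. "1032:@SEQ_ID", so take the number before the first colon.
    out = subprocess.check_output(["grep", "-b", "^@", infile_name]).decode()
    offsets = [int(line.split(":", 1)[0]) for line in out.splitlines()]
    # open the files in binary mode so seek() works on exact byte offsets
    with open(infile_name, "rb") as infile, open(outputfile_name, "wb") as outputfile:
        # sort the sampled offsets so the file is read front to back
        for offset in sorted(random.sample(offsets, n)):
            infile.seek(offset)
            for _ in range(4):
                outputfile.write(infile.readline())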


Try to parallelize your code. What I mean is this: you have 14,000,000 lines of input.

  • Run your grep step first, filter your lines, and write the result to filteredInput.txt.
  • Split filteredInput.txt into files of 10,000-100,000 lines each, e.g. filteredInput001.txt, filteredInput002.txt, and so on.
  • Run your code on these split files, writing the output to separate files such as output001.txt, output002.txt.
  • Merge your results as the final step.
  • Since your current run never finishes, this also gives you checkpoints: your script can check which filteredInput and output files already exist, work out which step it got to, and resume from there.

    You can also run multiple Python processes this way (after step 1), using your shell or Python's multiprocessing module; a rough sketch follows.
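Here is one way steps 3 and 4 could look with multiprocessing, assuming the chunk files from step 2 are named filteredInput001.txt, filteredInput002.txt, and so on; process_chunk is a hypothetical stand-in for whatever per-chunk work your script does:

import glob
import multiprocessing

def process_chunk(chunk_name):
    # Stand-in for the real per-chunk work; here it just copies lines.
    out_name = chunk_name.replace("filteredInput", "output")
    with open(chunk_name) as inf, open(out_name, "w") as outf:
        for line in inf:
            outf.write(line)
    return out_name

if __name__ == "__main__":
    chunks = sorted(glob.glob("filteredInput0*.txt"))
    pool = multiprocessing.Pool()  # defaults to one worker per CPU core
    out_names = pool.map(process_chunk, chunks)
    pool.close()
    pool.join()
    # merge step: concatenate the per-chunk outputs in order
    with open("final_output.txt", "w") as final:
        for name in sorted(out_names):
            with open(name) as f:
                for line in f:
                    final.write(line)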
