This takes a long time... how do I speed this dictionary up? (Python)

    meta_map = {}
    results = db.meta.find({'corpus_id':id, 'method':method}) #this Mongo query only takes 3ms
    print results.explain()
    #result is mongo queryset of 2000 documents

    count = 0
    for r in results:
        count += 1
        print count
        word = r.get('word')
        data = r.get('data',{})
        if not meta_map.has_key(word):
            meta_map[word] = data
    return meta_map

This is super, super slow for some reason.

There are 2000 results in total. Here is a sample result document (from Mongo); all the other results are of similar length.

{ "word" : "articl", "data" : { "help" : 0.42454812322341984, "show" : 0.24099054286865948, "lack" : 0.2368313038407821, "steve" : 0.20491936823259457, "gb" : 0.18757527934987422, "feedback" : 0.2855335862138559, "categori" : 0.28210549642632016, "itun" : 0.23615623082085788, "articl" : 0.21378509220044106, "black" : 0.22720575131038662, "hidden" : 0.26172127252557625, "holiday" : 0.27662433827306804, "applic" : 0.1802411089325281, "digit" : 0.20491936823259457, "sourc" : 0.21909218369809863, "march" : 0.2632736571995878, "ceo" : 0.2153108869289692, "donat" : 1, "volum" : 0.2572042432755638, "octob" : 0.2802470156773559, "toolbox" : 0.2153108869289692, "discuss" : 0.26973295489368615, "list" : 0.3698592948408095, "upload" : 0.1802411089325281, "random" : 1, "default" : 0.33044754314072383, "februari" : 0.2899936154686609, "januari" : 0.25228424754983525, "septemb" : 0.1802411089325281, "page" : 0.24675067183234803, "view" : 0.20019523259334138, "pleas" : 0.2839965947961194, "mdi" : 0.2731217555354, "unsourc" : 0.2709524603813144, "direct" : 0.18757527934987422, "dead" : 0.22720575131038662, "smartphon" : 0.2839965947961194, "jump" : 0.3004203939398161, "see" : 0.33044754314072383, "design" : 0.2839965947961194, "download" : 0.19574598998663462, "home" : 0.3004203939398161, "event" : 0.651573574681647, "wikipedia" : 0.21909218369809863, "content" : 0.2471475889083912, "version" : 0.42454812322341984, "gener" : 0.3004203939398161, "refer" : 0.2188507485718582, "navig" : 0.27662433827306804, "june" : 0.2153108869289692, "screen" : 0.27662433827306804, "free" : 0.22720575131038662, "job" : 0.19574598998663462, "key" : 0.3004203939398161, "addit" : 0.22484486630589545, "search" : 0.2878804276884952, "current" : 0.5071530767683105, "worldwid" : 0.20491936823259457, "iphon" : 0.2230524329516571, "action" : 0.24099054286865948, "chang" : 0.18757527934987422, "summari" : 0.33044754314072383, "origin" : 0.2572042432755638, "softwar" : 0.651573574681647, "point" : 0.27662433827306804, "extern" : 0.22190187748860113, "mobil" : 0.2514880028687207, "cloud" : 0.18757527934987422, "use" : 0.2731217555354, "log" : 0.27662433827306804, "commun" : 0.33044754314072383, "interact" : 0.5071530767683105, "devic" : 0.3004203939398161, "long" : 0.2839965947961194, "avail" : 0.19574598998663462, "appl" : 0.24099054286865948, "disambigu" : 0.3195885490528538, "statement" : 0.2737499468972353, "namespac" : 0.3004203939398161, "season" : 0.3004203939398161, "juli" : 0.27243508666247285, "relat" : 0.19574598998663462, "phone" : 0.26973295489368615, "link" : 0.2178125232318433, "line" : 0.42454812322341984, "pilot" : 0.27243508666247285, "account" : 0.2572042432755638, "main" : 0.34870313981256423, "provid" : 0.2153108869289692, "histori" : 0.2714135089366041, "vagu" : 0.24875213214603717, "featur" : 0.24099054286865948, "creat" : 0.26645207330844684, "ipod" : 0.2230524329516571, "player" : 0.20491936823259457, "io" : 0.2447908314834019, "need" : 0.2580912994161046, "develop" : 0.27662433827306804, "began" : 0.24099054286865948, "client" : 0.19574598998663462, "also" : 0.42454812322341984, "cleanup" : 0.24875213214603717, "split" : 0.26973295489368615, "tool" : 0.2878804276884952, "product" : 0.42454812322341984, "may" : 0.2676701118192027, "assist" : 0.1802411089325281, "variant" : 0.2514880028687207, "portal" : 0.3004203939398161, "user" : 0.20491936823259457, "consid" : 0.27662433827306804, "date" : 0.2731217555354, "recent" : 0.24099054286865948, "read" : 0.2572042432755638, "reliabl" : 0.2388872270166464, "sale" : 
0.22720575131038662, "ambigu" : 0.23482106920048526, "person" : 0.260801274024785, "contact" : 0.24099054286865948, "encyclopedia" : 0.2153108869289692, "time" : 0.2368313038407821, "model" : 0.24099054286865948, "audio" : 0.19574598998663462 }}

The whole thing takes about 15 seconds ... what the heck? How can I speed it up? :)

EDIT: I realized that when I print count to the console, it goes from 0 to 101 very quickly, then freezes for 10 seconds, and then continues from 102 to 2000.

Could this be a MongoDB issue?
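
The fast run from 0 to 101 followed by a pause matches MongoDB's default cursor batching: the first batch of a find() is limited to 101 documents, and the rest is fetched lazily as the loop iterates, so the pause is likely the driver pulling the remaining documents over the network. A minimal diagnostic sketch, assuming pymongo and the same db, id and method as in the code above, that separates the network fetch from the dictionary building:

    import time

    t0 = time.time()
    # Exhaust the cursor up front: all network I/O happens on this line.
    docs = list(db.meta.find({'corpus_id': id, 'method': method}))
    t1 = time.time()

    meta_map = {}
    for r in docs:
        word = r['word']
        if word not in meta_map:
            meta_map[word] = r['data']
    t2 = time.time()

    print('fetch: %.2fs, build: %.2fs' % (t1 - t0, t2 - t1))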

EDIT 2: I printed the Mongo explain() for the query below:

{u'allPlans': [{u'cursor': u'BtreeCursor corpus_id_1_method_1_word_1',
                u'indexBounds': {u'corpus_id': [[u'iphone', u'iphone']],
                                 u'method': [[u'advanced', u'advanced']],
                                 u'word': [[{u'$minElement': 1},
                                            {u'$maxElement': 1}]]}}],
 u'cursor': u'BtreeCursor corpus_id_1_method_1_word_1',
 u'indexBounds': {u'corpus_id': [[u'iphone', u'iphone']],
                  u'method': [[u'advanced', u'advanced']],
                  u'word': [[{u'$minElement': 1}, {u'$maxElement': 1}]]},
 u'indexOnly': False,
 u'isMultiKey': False,
 u'millis': 3,
 u'n': 2443,
 u'nChunkSkips': 0,
 u'nYields': 0,
 u'nscanned': 2443,
 u'nscannedObjects': 2443,
 u'oldPlan': {u'cursor': u'BtreeCursor corpus_id_1_method_1_word_1',
              u'indexBounds': {u'corpus_id': [[u'iphone', u'iphone']],
                               u'method': [[u'advanced', u'advanced']],
                               u'word': [[{u'$minElement': 1},
                                          {u'$maxElement': 1}]]}}}

These are the stats for the mongo collection:

> db.meta.stats();
{
    "ns" : "inception.meta",
    "count" : 2450,
    "size" : 3001068,
    "avgObjSize" : 1224.9257142857143,
    "storageSize" : 18520320,
    "numExtents" : 6,
    "nindexes" : 2,
    "lastExtentSize" : 13893632,
    "paddingFactor" : 1.009999999999931,
    "flags" : 1,
    "totalIndexSize" : 368640,
    "indexSizes" : {
        "_id_" : 114688,
        "corpus_id_1_method_1_word_1" : 253952
    },
    "ok" : 1
}


> db.meta.getIndexes();
[
    {
        "name" : "_id_",
        "ns" : "inception.meta",
        "key" : {
            "_id" : 1
        },
        "v" : 0
    },
    {
        "ns" : "inception.meta",
        "name" : "corpus_id_1_method_1_word_1",
        "key" : {
            "corpus_id" : 1,
            "method" : 1,
            "word" : 1
        },
        "v" : 0
    }
]

Your query returns nearly every document in the collection (which may or may not be right in this case; good database practice is always to transfer as few documents/rows from the server to the application as possible), and your collection is about 3 megabytes in size. The latency you are seeing may simply be network transfer time.
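
If only word and data are actually needed, a projection keeps the other fields off the wire. A minimal sketch, assuming pymongo, where the second argument to find() selects the fields to return:

    # Fetch only the fields the loop uses; _id, corpus_id and method stay on the server.
    results = db.meta.find(
        {'corpus_id': id, 'method': method},
        {'word': 1, 'data': 1, '_id': 0},
    )

Note that most of each document's size is in the data field itself, so the saving from this projection is modest.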


Instead of

    if not meta_map.has_key(word):

you should use

    if word not in meta_map:

There is no point in doing data = r.get('data', {}) if you are not going to use it.

It is not obvious why you do word = r.get('word') ... if 'word' is always present in r, you should just use word = r['word']; otherwise you should test whether word is None after the get.

The same goes for the data get.

Try this:

    for r in results:
        word = r['word']
        if word not in meta_map:
            meta_map[word] = r['data']

In any case, the time you quote is enormous ... there must be something else going on. I would be very interested to see the code you use to do the timing and to count the entries in results.
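
For reference, a minimal timing/counting sketch, assuming the original code is wrapped in a hypothetical build_meta_map(id, method) function and an older pymongo where Cursor.count() is available:

    from timeit import default_timer as timer

    start = timer()
    meta_map = build_meta_map(id, method)   # hypothetical wrapper around the loop above
    elapsed = timer() - start

    # Count the entries matching the query (Cursor.count() in older pymongo versions).
    n = db.meta.find({'corpus_id': id, 'method': method}).count()
    print('%d documents -> %d distinct words in %.2f seconds' % (n, len(meta_map), elapsed))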


If your problem really is the dictionary, maybe using setdefault() instead of first checking for the key and then setting it could help.
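
A minimal sketch of the setdefault() variant, assuming results and meta_map as above; it keeps the first data seen for each word, just like the membership test, but with a single dictionary lookup per document:

    meta_map = {}
    for r in results:
        # setdefault stores r['data'] only if the word is not already a key
        meta_map.setdefault(r['word'], r['data'])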
