MySQL vs MongoDB 1000 reads

I have been very excited about MongoDB and have been testing it lately. I have a table called posts in MySQL with about 20 million records, indexed only on a field called 'id'.

I wanted to compare speed with MongoDB, so I ran a test that fetches and prints 15 records chosen at random from this huge table. I ran the query about 1,000 times each against MySQL and MongoDB, and I am surprised that I notice very little difference in speed: MongoDB is maybe 1.1 times faster. That's very disappointing. Is there something I am doing wrong? I know my tests are not perfect, but is MySQL really on par with MongoDB when it comes to read-intensive chores?


Note:

  • I have a dual-core i7 CPU (2 threads per core) and 4 GB of RAM
  • The MySQL table is split into 20 partitions of 1 million records each
  • Sample Code Used For Testing MongoDB

    <?php
    // Return the current Unix timestamp with microsecond precision.
    function microtime_float()
    {
        list($usec, $sec) = explode(" ", microtime());
        return ((float)$usec + (float)$sec);
    }

    // Pick 15 random ids in the 20-million-record range.
    function get_15_random_numbers()
    {
        $numbers = array();
        for ($i = 1; $i <= 15; $i++) {
            $numbers[] = mt_rand(1, 20000000);
        }
        return $numbers;
    }

    $tries = 100;
    $time_start = microtime_float();

    for ($i = 1; $i <= $tries; $i++) {
        // Note: the connection is (re)opened on every iteration,
        // so connection setup is part of the measured time.
        $m = new Mongo();
        $db = $m->swalif;

        // Fetch 15 random posts by their 'id' field; this assumes a
        // secondary index on 'id' (MongoDB only indexes '_id' by default).
        $cursor = $db->posts->find(array('id' => array('$in' => get_15_random_numbers())));
        foreach ($cursor as $obj) {
            //echo $obj["thread_title"] . "<br><br>";
        }
    }

    $time_end = microtime_float();
    $time_taken = $time_end - $time_start;
    echo $time_taken;
    ?>
    


    Sample Code For Testing MySQL

    <?php
    // Return the current Unix timestamp with microsecond precision.
    function microtime_float()
    {
        list($usec, $sec) = explode(" ", microtime());
        return ((float)$usec + (float)$sec);
    }

    // Pick 15 random ids in the 20-million-record range.
    function get_15_random_numbers()
    {
        $numbers = array();
        for ($i = 1; $i <= 15; $i++) {
            $numbers[] = mt_rand(1, 20000000);
        }
        return $numbers;
    }

    $BASE_PATH = "../src/";
    include_once($BASE_PATH . "classes/forumdb.php");

    $tries = 100;
    $time_start = microtime_float();

    for ($i = 1; $i <= $tries; $i++) {
        // As in the MongoDB test, the connection is (re)opened on
        // every iteration, so its cost is part of the measurement.
        $db = new AQLDatabase();
        $sql = "select * from posts_really_big where id in (" . implode(',', get_15_random_numbers()) . ")";
        $result = $db->executeSQL($sql);
        while ($row = mysql_fetch_array($result)) {
            //echo $row["thread_title"] . "<br><br>";
        }
    }

    $time_end = microtime_float();
    $time_taken = $time_end - $time_start;
    echo $time_taken;
    ?>
    

    MongoDB is not magically faster. If you store the same data, organised in basically the same fashion, and access it in exactly the same way, then you really shouldn't expect your results to be wildly different. After all, MySQL and MongoDB are both open source (GPL-family licences), so if Mongo had some magically better IO code in it, then the MySQL team could just incorporate it into their codebase.

    People are seeing real-world MongoDB performance gains largely because MongoDB allows you to query in a different manner that is more sensible for your workload.

    For example, consider a design that persisted a lot of information about a complicated entity in a normalised fashion. This could easily use dozens of tables in MySQL (or any relational db) to store the data in normal form, with many indexes needed to ensure relational integrity between tables.

    Now consider the same design with a document store. If all of those related tables are subordinate to the main table (and they often are), then you might be able to model the data such that the entire entity is stored in a single document. In MongoDB you can store this as a single document, in a single collection. This is where MongoDB starts enabling superior performance.
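
    As a rough sketch of that modelling difference (collection and field names here are invented for illustration), the whole entity, children included, becomes one nested document:

    <?php
    // Hypothetical forum post with its tags and comments embedded,
    // instead of being spread across separate normalised tables.
    $post = array(
        'id'       => 42,
        'title'    => 'MySQL vs MongoDB 1000 reads',
        'body'     => '...',
        'tags'     => array('mysql', 'mongodb'),  // a join table in SQL
        'comments' => array(                      // a child table in SQL
            array('author' => 'alice', 'text' => 'Nice test!'),
            array('author' => 'bob',   'text' => 'Try more concurrency.'),
        ),
    );

    $m = new Mongo();
    $m->forum->posts->insert($post);  // one write, one collection
    ?>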

    In MongoDB, to retrieve the whole entity, you have to perform:

  • One index lookup on the collection (assuming the entity is fetched by id)
  • Retrieve the contents of one database page (the actual binary JSON document)
  • So: one B-tree lookup and one binary page read, i.e. log(n) + 1 IOs. If the indexes can reside entirely in memory, then 1 IO.
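
    In code, that whole retrieval is a single round trip. With the legacy PHP driver used in the question (database and collection names assumed), it might look like:

    <?php
    $m = new Mongo();

    // One index lookup plus one document read: the entire entity,
    // children included, comes back from a single findOne() call.
    $post = $m->forum->posts->findOne(array('id' => 42));
    ?>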

    In MySQL with 20 tables, you have to perform:

  • One index lookup on the root table (again, assuming the entity is fetched by id)
  • With a clustered index, we can assume that the values for the root row are in the index
  • 20+ range lookups (hopefully on an index) for the entity's primary key value
  • These probably aren't clustered indexes, so the same 20+ data lookups once we figure out which child rows are relevant
  • So the total for MySQL, even assuming that all indexes are in memory (which is harder, since there are 20 times more of them), is about 20 range lookups.
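
    By contrast, reassembling the same entity relationally takes one query per child table. A sketch (using PDO for brevity; table names invented for illustration):

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=forum', 'user', 'pass');

    // One lookup on the root table...
    $post = $pdo->query("SELECT * FROM posts WHERE id = 42")->fetch();

    // ...then one range lookup per child table to rebuild the entity.
    // With ~20 child tables, this loop issues ~20 more queries.
    $entity = $post;
    foreach (array('post_tags', 'post_comments' /* ...and so on */) as $child) {
        $stmt = $pdo->prepare("SELECT * FROM $child WHERE post_id = ?");
        $stmt->execute(array(42));
        $entity[$child] = $stmt->fetchAll();
    }
    ?>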

    These range lookups are likely to consist of random IO: different tables will definitely reside in different spots on disk, and it's possible that different rows in the same range of the same table for an entity are not contiguous (depending on how the entity has been updated, etc.).

    So for this example, the final tally is about 20 times more IO with MySQL per logical access, compared to MongoDB.

    This is how MongoDB can boost performance in some use cases.


    Do you have concurrency, i.e. simultaneous users? If you just run the query 1,000 times straight, in a single thread, there will be almost no difference; that's too easy for these engines. :)

    BUT I strongly suggest that you build a true load-testing session, which means using an injector such as JMeter with 10, 20 or 50 users AT THE SAME TIME, so you can really see a difference (try to embed this code inside a web page JMeter could query, as in the sketch below).
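
    A minimal sketch of such a page, reusing the benchmark code from the question (the file name and setup are assumptions):

    <?php
    // bench.php - each HTTP hit performs one batch of 15 random reads,
    // so JMeter threads requesting this URL generate concurrent load.
    function get_15_random_numbers()
    {
        $numbers = array();
        for ($i = 1; $i <= 15; $i++) {
            $numbers[] = mt_rand(1, 20000000);
        }
        return $numbers;
    }

    $start = microtime(true);

    $m = new Mongo();
    $cursor = $m->swalif->posts->find(array('id' => array('$in' => get_15_random_numbers())));
    foreach ($cursor as $obj) {
        // drain the cursor so the reads actually happen
    }

    header('Content-Type: text/plain');
    echo microtime(true) - $start;  // JMeter also records its own timings
    ?>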

    I just did this today on a single server (with a simple collection/table), and the results are quite interesting and surprising: MongoDB was really faster on both writes and reads compared to the MyISAM and InnoDB engines.

    This really should be part of your test: concurrency and the MySQL engine. Beyond response times, data/schema design and application needs are of course huge requirements too. Let me know when you get results; I'm also in need of input about this!


    Source: https://github.com/webcaetano/mongo-mysql

    10 rows

    mysql insert: 1702ms
    mysql select: 11ms
    
    mongo insert: 47ms
    mongo select: 12ms
    

    100 rows

    mysql insert: 8171ms
    mysql select: 10ms
    
    mongo insert: 167ms
    mongo select: 60ms
    

    1000 rows

    mysql insert: 94813ms (1.58 minutes)
    mysql select: 13ms
    
    mongo insert: 1013ms
    mongo select: 677ms
    

    10,000 rows

    mysql insert: 924695ms (15.41 minutes)
    mysql select: 144ms
    
    mongo insert: 9956ms (9.95 seconds)
    mongo select: 4539ms (4.539 seconds)
    