Compare client device clock with server clock exactly, up to milliseconds

I am trying to find a way to get the difference between the client clock and the server clock.

So far I have tried the following approach, collecting:

  • client request time
  • server time
  • client response time
  • The problem is that there is an unknown delay for the request to reach the server and for the response to reach the client.

    Here's an implementation of this scheme using JavaScript and PHP:

    time.js

    var request = new XMLHttpRequest();
    request.onreadystatechange = readystatechangehandler;
    request.open("POST", "http://www.example.com/sync.php", true);
    request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    request.send("original=" + (new Date).getTime());
    
    function readystatechangehandler() {
        var returned = (new Date).getTime();
        if (request.readyState === 4 && request.status === 200) {
            var timestamp = request.responseText.split('|');
            var original = +timestamp[0];
            var receive = +timestamp[1];
            var transmit = +timestamp[2];
            var sending = receive - original;      // request leg + clock offset
            var receiving = returned - transmit;   // response leg - clock offset
            var roundtrip = sending + receiving;   // the clock offsets cancel out here
            var oneway = roundtrip / 2;            // assumes a symmetrical network
            var difference = sending - oneway;     // this is what you want
            // so the server time will be client time + difference
        }
    }
    

    Sync.php

    <?php
    // Echo back the client's original timestamp, the time the request was
    // received, and the time the response is sent, all in milliseconds.
    $receive = round(microtime(true) * 1000);
    echo $_POST["original"] . '|';
    echo $receive . '|';
    echo round(microtime(true) * 1000);
    ?>
    

    Even with this approach I get a 50-500 ms error. If the network delay is high, the error gets larger.

    But I wonder how a company named "AdTruth" claims that they are able to differentiate devices based on clock time. They call it "Time Differential Linking":

    The key to device recognition AdTruth-style is its patented technology called TDL, for time-differential linking. While in the billions of connected devices there may be thousands with the same configuration, no two will have their clocks set to the same time -- at least, not when you take it down to the millisecond. Says Ori Eisen, founder of 41st Parameter and AdTruth, "We take these disparate time stamps and compare them to the server master clock. If there is any doubt, the TDL is the tie-breaker."

    http://www.admonsters.com/blog/adtruth-joins-w3c-qa-ori-eisen-founder-and-chief-innovation-officer

    Here is the link to their "Time Differential Linking" patent:

    http://www.google.com/patents/US7853533


    It's actually quite simple. First have the client take a fixed time, for example 2005, Jan 31, 18:34:20.050 (in milliseconds). Then take the current time on the client machine and calculate the delta between that current time and the fixed time. Send the client time and the delta back to the server. On the server, add the same delta to the same fixed time to work out what the current time would be (no longer exactly current because of the response-time lag, etc.). The difference between the client's current time and the server's current time gives you the time difference between client and server.
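
    A minimal JavaScript sketch of that idea, assuming the fixed reference time quoted above; the transport is left out, so the server-side step is shown as a plain function purely for illustration:

    // Fixed reference time shared by client and server:
    // 2005, Jan 31, 18:34:20.050 UTC, in milliseconds.
    var FIXED_TIME = Date.UTC(2005, 0, 31, 18, 34, 20, 50);

    // Client side: current time and its delta from the fixed time.
    var clientNow = Date.now();
    var delta = clientNow - FIXED_TIME;
    // ...send clientNow and delta to the server...

    // Server side (illustrative function, not a real endpoint): add the same
    // delta to the same fixed time, then compare with the server clock.
    function clockDifference(delta, serverNow) {
        var reconstructedTime = FIXED_TIME + delta;
        return reconstructedTime - serverNow; // > 0 means the client clock is ahead
    }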


    var oneway = roundtrip / 2;
    

    Why do you assume that the network is symmetrical? Actually, it is a fairly reasonable assumption. You could try to calibrate the connection by sending data in both directions to get an estimate of the throughput and latency (see boomerang's bw module for an example of the server-to-client measurement). However, a basic feature of TCP is that the congestion window adapts progressively, so even on a static connection the throughput changes markedly in the early stages of the connection (exactly the point at which you are likely to try to capture the client device identity).

    Do try to make sure that the response is less than 1 kB including headers (so that it fits in a single packet) and that keep-alive is enabled. A GET request will be slightly smaller than a POST, although using WebSockets would give a more accurate figure.
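
    If WebSockets are available, a hedged sketch of the same measurement might look like this; the ws://www.example.com/sync endpoint and its "original|receive|transmit" reply format are assumptions mirroring Sync.php above, not an existing API:

    var socket = new WebSocket("ws://www.example.com/sync"); // hypothetical endpoint

    socket.onopen = function () {
        // Send the client timestamp once the connection is established, so the
        // handshake does not inflate the measured round trip.
        socket.send(String(Date.now()));
    };

    socket.onmessage = function (event) {
        var returned = Date.now();
        // Assume the server echoes "original|receive|transmit", like Sync.php.
        var parts = event.data.split("|");
        var original = +parts[0];
        var receive = +parts[1];
        var transmit = +parts[2];
        var roundtrip = (receive - original) + (returned - transmit);
        var difference = (receive - original) - roundtrip / 2; // same formula as above
    };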

    A more realistic approach would be to capture several samples at known intervals and then calculate the average, e.g.

    var estimatedRtt = 300;
    for (var x = 0; x < 10; x++) {
        // setTimeout takes the callback first and the delay in milliseconds second.
        setTimeout(captureOffset, estimatedRtt * x * 1.3);
    }
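
    The loop above assumes a captureOffset function. A hedged sketch of one, reusing the XMLHttpRequest scheme and sync.php endpoint from the question and averaging once all ten samples are in, could look like this:

    var offsets = [];

    function captureOffset() {
        var request = new XMLHttpRequest();
        var original = Date.now();
        request.onreadystatechange = function () {
            if (request.readyState === 4 && request.status === 200) {
                var returned = Date.now();
                var parts = request.responseText.split("|");
                var receive = +parts[1];
                var transmit = +parts[2];
                var roundtrip = (receive - original) + (returned - transmit);
                offsets.push((receive - original) - roundtrip / 2);
                if (offsets.length === 10) {
                    // Average the samples; a median would be more robust
                    // against one unusually slow round trip.
                    var average = offsets.reduce(function (a, b) { return a + b; }, 0) / offsets.length;
                }
            }
        };
        request.open("POST", "http://www.example.com/sync.php", true);
        request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        request.send("original=" + original);
    }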
    

    It looks like they simply ignore the problem of network lag in their description. I notice their (vague) phrasing:

    (...) based on the determination that the matching [delta of time] parameter falls within the selected range (...)

    This could account for network lag variations.
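
    For example, matching within a selected range rather than on the exact millisecond could look something like this; the function name and tolerance are purely illustrative, not taken from the patent:

    // Treat two observations as the same device only if their stored and newly
    // observed clock deltas agree within a chosen tolerance.
    function deltasMatch(storedDeltaMs, observedDeltaMs, toleranceMs) {
        return Math.abs(storedDeltaMs - observedDeltaMs) <= toleranceMs;
    }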

    Over a large, busy network such as the Internet, it is not possible to bring the accuracy "down to the millisecond". Other network types (I'm thinking Token Ring or networks with very, very strict QoS policies) might allow this level of precision.
