CosmosDB Query Performance
I wrote my latest update, and then got the following error from Stack Overflow: "Body is limited to 30000 characters; you entered 38676."
It's fair to say I have been very verbose in documenting my adventures, so I've rewritten what I have here to be more concise.
I have stored my (long) original post and updates on pastebin. I don't think many people will read them, but I put a lot of effort into them, so it'd be nice for them not to be lost.
I have a collection which contains 100,000 documents for learning how to use CosmosDB and for things like performance testing.
Each of these documents has a Location property, which is a GeoJSON Point.
According to the documentation, a GeoJSON point should be automatically indexed.
Azure Cosmos DB supports automatic indexing of Points, Polygons, and LineStrings
I've checked the Indexing Policy for my collection, and it has the entry for automatic point indexing:
{
    "automatic": true,
    "indexingMode": "Consistent",
    "includedPaths": [
        {
            "path": "/*",
            "indexes": [
                ...
                {
                    "kind": "Spatial",
                    "dataType": "Point"
                },
                ...
            ]
        }
    ],
    "excludedPaths": []
}
I've been looking for a way to list, or otherwise interrogate, the indexes that have been created, but I haven't found one yet, so I haven't been able to confirm that this property is definitely being indexed.
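The closest thing I've found is reading the collection definition back through the SDK, which at least shows the indexing policy the service has stored (though not the physical indexes themselves, nor whether a given query uses them). A sketch, assuming the same client and collection URI variables as elsewhere in this post:

```csharp
// Read the collection definition back from the service and dump its
// indexing policy. This shows what *should* be indexed, not whether a
// particular query actually used an index.
var collectionResponse = await client.ReadDocumentCollectionAsync(documentCollectionUri);
var indexingPolicy = collectionResponse.Resource.IndexingPolicy;

foreach (var includedPath in indexingPolicy.IncludedPaths)
{
    Console.WriteLine($"Path: {includedPath.Path}");

    foreach (var index in includedPath.Indexes)
    {
        Console.WriteLine($"  Kind: {index.Kind}");
    }
}
```

Running this against my collection would at least confirm that the `Spatial`/`Point` entry shown above is what the service actually holds.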
I created a GeoJSON Polygon, and then used that to query my documents.
This is my query:
var query = client
    .CreateDocumentQuery<TestDocument>(documentCollectionUri)
    .Where(document => document.Type == this.documentType && document.Location.Intersects(target.Area));
And I then pass that query object to the following method so I can get the results while tracking the Request Units used:
protected async Task<IEnumerable<T>> QueryTrackingUsedRUsAsync(IQueryable<T> query)
{
    var documentQuery = query.AsDocumentQuery();
    var documents = new List<T>();

    while (documentQuery.HasMoreResults)
    {
        var response = await documentQuery.ExecuteNextAsync<T>();
        this.AddUsedRUs(response.RequestCharge);
        documents.AddRange(response);
    }

    return documents;
}
The point locations are randomly chosen from 10s of millions of UK addresses, so they should have a fairly realistic spread.
The polygon is made up of 16 points (with the first and last point being the same), so it's not very complex. It covers most of the southernmost part of the UK, from London down.
An example run of this query returned 8728 documents, using 3917.92 RU, in 170717.151 ms, which is just under 171 seconds, or just under 3 minutes.
3918 RU / 171 s = 22.91 RU/s
I currently have the Throughput (RU/s) set to the lowest value, at 400 RU/s.
It was my understanding that this is the reserved level you are guaranteed to get. You can "burst" above that level at times, but do that too frequently and you'll be throttled back to your reserved level.
The "query speed" of 23 RU/s is, obviously, much much lower than the Throughput setting of 400 RU/s.
I am running the client "locally", i.e. in my office, and not up in the Azure data center.
Each document is roughly 500 bytes (0.5 KB) in size.
So what's happening?
Am I doing something wrong?
Am I misunderstanding how my query is being throttled with regard to RU/s?
Is this the speed at which the GeoSpatial indexes operate, and so the best performance I'll get?
Is the GeoSpatial index not being used?
Is there a way I can view the created indexes?
Is there a way I can check if the index is being used?
Is there a way I can profile the query and get metrics about where time is being spent? e.g. x seconds were used looking up documents by their type, y seconds filtering them geospatially, and z seconds transferring the data.
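On the profiling question, the SDK can return per-query execution metrics from the service. A sketch (assuming an SDK version recent enough that `FeedOptions.PopulateQueryMetrics` is available):

```csharp
var feedOptions = new FeedOptions
{
    PopulateQueryMetrics = true
};

var documentQuery = client
    .CreateDocumentQuery<TestDocument>(documentCollectionUri, feedOptions)
    .Where(document => document.Location.Intersects(target.Area))
    .AsDocumentQuery();

while (documentQuery.HasMoreResults)
{
    var response = await documentQuery.ExecuteNextAsync<TestDocument>();

    // QueryMetrics is keyed by partition; each value breaks server-side
    // time down into index lookup, document load, query execution, etc.
    foreach (var metrics in response.QueryMetrics)
    {
        Trace.TraceInformation($"Partition {metrics.Key}: {metrics.Value}");
    }
}
```

If the index-lookup portion of those metrics is small and the document-load portion dominates, that would suggest the spatial index isn't doing much of the filtering.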
UPDATE 1
Here's the polygon I'm using in the query:
Area = new Polygon(new List<LinearRing>()
{
    new LinearRing(new List<Position>()
    {
        new Position(1.8567, 51.3814),
        new Position(0.5329, 51.4618),
        new Position(0.2477, 51.2588),
        new Position(-0.5329, 51.2579),
        new Position(-1.17, 51.2173),
        new Position(-1.9062, 51.1958),
        new Position(-2.5434, 51.1614),
        new Position(-3.8672, 51.139),
        new Position(-4.1578, 50.9137),
        new Position(-4.5373, 50.694),
        new Position(-5.1496, 50.3282),
        new Position(-5.2212, 49.9586),
        new Position(-3.7049, 50.142),
        new Position(-2.1698, 50.314),
        new Position(0.4669, 50.6976),
        new Position(1.8567, 51.3814)
    })
})
I have also tried reversing it (since ring orientation matters), but the query with the reversed polygon took significantly longer (I don't have the time to hand) and returned 91272 items. That's exactly 100,000 − 8,728, i.e. the complement of the original result set, which is consistent with the reversed ring being interpreted as covering everything outside the original area.
Also, the coordinates are specified as Longitude/Latitude, as GeoJSON expects them (i.e. as X/Y), rather than the traditional order used when speaking of Latitude/Longitude.
The GeoJSON specification specifies longitude first and latitude second.
UPDATE 2
Here's the JSON for one of my documents:
{
    "GeoTrigger": null,
    "SeverityTrigger": -1,
    "TypeTrigger": -1,
    "Name": "13, LONSDALE SQUARE, LONDON, N1 1EN",
    "IsEnabled": true,
    "Type": 2,
    "Location": {
        "$type": "Microsoft.Azure.Documents.Spatial.Point, Microsoft.Azure.Documents.Client",
        "type": "Point",
        "coordinates": [
            -0.1076407397346815,
            51.53970315059827
        ]
    },
    "id": "0dc2c03e-082b-4aea-93a8-79d89546c12b",
    "_rid": "EQttAMGhSQDWPwAAAAAAAA==",
    "_self": "dbs/EQttAA==/colls/EQttAMGhSQA=/docs/EQttAMGhSQDWPwAAAAAAAA==/",
    "_etag": "\"42001028-0000-0000-0000-594943fe0000\"",
    "_attachments": "attachments/",
    "_ts": 1497973747
}
UPDATE 3
I created a minimal reproduction of the issue, and I found the issue no longer occurred.
This indicated that the problem was indeed in my own code.
I set out to check all the differences between the original and the reproduction code, and eventually found that something that appeared fairly innocent to me was in fact having a big impact. Thankfully, that code wasn't needed at all, so the fix was simply to stop using it.
At one point I was using a custom ContractResolver, and I hadn't removed it once it was no longer needed.
Here's the offending reproduction code:
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Diagnostics;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Spatial;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

namespace Repro.Cli
{
    public class Program
    {
        static void Main(string[] args)
        {
            JsonConvert.DefaultSettings = () =>
            {
                return new JsonSerializerSettings
                {
                    ContractResolver = new PropertyNameMapContractResolver(new Dictionary<string, string>()
                    {
                        { "ID", "id" }
                    })
                };
            };

            //AJ: Init logging
            Trace.AutoFlush = true;
            Trace.Listeners.Add(new ConsoleTraceListener());
            Trace.Listeners.Add(new TextWriterTraceListener("trace.log"));

            //AJ: Increase available threads
            //AJ: https://docs.microsoft.com/en-us/azure/storage/storage-performance-checklist#subheading10
            //AJ: https://github.com/Azure/azure-documentdb-dotnet/blob/master/samples/documentdb-benchmark/Program.cs
            var minThreadPoolSize = 100;
            ThreadPool.SetMinThreads(minThreadPoolSize, minThreadPoolSize);

            //AJ: https://docs.microsoft.com/en-us/azure/cosmos-db/performance-tips
            //AJ: gcServer enabled in app.config
            //AJ: Prefer 32-bit disabled in project properties

            //AJ: DO IT
            var program = new Program();

            Trace.TraceInformation($"Starting @ {DateTime.UtcNow}");
            program.RunAsync().Wait();
            Trace.TraceInformation($"Finished @ {DateTime.UtcNow}");

            //AJ: Wait for user to exit
            Console.WriteLine();
            Console.WriteLine("Hit enter to exit...");
            Console.ReadLine();
        }

        public async Task RunAsync()
        {
            using (new CodeTimer())
            {
                var client = await this.GetDocumentClientAsync();
                var documentCollectionUri = UriFactory.CreateDocumentCollectionUri(ConfigurationManager.AppSettings["databaseID"], ConfigurationManager.AppSettings["collectionID"]);

                //AJ: Prepare Test Documents
                var documentCount = 10000; //AJ: 10,000
                var documentsForUpsert = this.GetDocuments(documentCount);
                await this.UpsertDocumentsAsync(client, documentCollectionUri, documentsForUpsert);

                var allDocuments = this.GetAllDocuments(client, documentCollectionUri);

                var area = this.GetArea();
                var documentsInArea = this.GetDocumentsInArea(client, documentCollectionUri, area);
            }
        }

        private async Task<DocumentClient> GetDocumentClientAsync()
        {
            using (new CodeTimer())
            {
                var serviceEndpointUri = new Uri(ConfigurationManager.AppSettings["serviceEndpoint"]);
                var authKey = ConfigurationManager.AppSettings["authKey"];

                var connectionPolicy = new ConnectionPolicy
                {
                    ConnectionMode = ConnectionMode.Direct,
                    ConnectionProtocol = Protocol.Tcp,
                    RequestTimeout = new TimeSpan(1, 0, 0),
                    RetryOptions = new RetryOptions
                    {
                        MaxRetryAttemptsOnThrottledRequests = 10,
                        MaxRetryWaitTimeInSeconds = 60
                    }
                };

                var client = new DocumentClient(serviceEndpointUri, authKey, connectionPolicy);
                await client.OpenAsync();

                return client;
            }
        }

        private List<TestDocument> GetDocuments(int count)
        {
            using (new CodeTimer())
            {
                return External.CreateDocuments(count);
            }
        }

        private async Task UpsertDocumentsAsync(DocumentClient client, Uri documentCollectionUri, List<TestDocument> documents)
        {
            using (new CodeTimer())
            {
                //TODO: AJ: Parallelise
                foreach (var document in documents)
                {
                    await client.UpsertDocumentAsync(documentCollectionUri, document);
                }
            }
        }

        private List<TestDocument> GetAllDocuments(DocumentClient client, Uri documentCollectionUri)
        {
            using (new CodeTimer())
            {
                var query = client
                    .CreateDocumentQuery<TestDocument>(documentCollectionUri, new FeedOptions()
                    {
                        MaxItemCount = 1000
                    });

                var documents = query.ToList();

                return documents;
            }
        }

        private Polygon GetArea()
        {
            //AJ: Longitude/Latitude i.e. X/Y
            //AJ: Ring orientation matters
            return new Polygon(new List<LinearRing>()
            {
                new LinearRing(new List<Position>()
                {
                    new Position(1.8567, 51.3814),
                    new Position(0.5329, 51.4618),
                    new Position(0.2477, 51.2588),
                    new Position(-0.5329, 51.2579),
                    new Position(-1.17, 51.2173),
                    new Position(-1.9062, 51.1958),
                    new Position(-2.5434, 51.1614),
                    new Position(-3.8672, 51.139),
                    new Position(-4.1578, 50.9137),
                    new Position(-4.5373, 50.694),
                    new Position(-5.1496, 50.3282),
                    new Position(-5.2212, 49.9586),
                    new Position(-3.7049, 50.142),
                    new Position(-2.1698, 50.314),
                    new Position(0.4669, 50.6976),
                    //AJ: Last point must be the same as first point
                    new Position(1.8567, 51.3814)
                })
            });
        }

        private List<TestDocument> GetDocumentsInArea(DocumentClient client, Uri documentCollectionUri, Polygon area)
        {
            using (new CodeTimer())
            {
                var query = client
                    .CreateDocumentQuery<TestDocument>(documentCollectionUri, new FeedOptions()
                    {
                        MaxItemCount = 1000
                    })
                    .Where(document => document.Location.Intersects(area));

                var documents = query.ToList();

                return documents;
            }
        }
    }

    public class TestDocument : Resource
    {
        public string Name { get; set; }
        public Point Location { get; set; } //AJ: Longitude/Latitude i.e. X/Y

        public TestDocument()
        {
            this.Id = Guid.NewGuid().ToString("N");
        }
    }

    //AJ: This should be "good enough". The times being recorded are seconds or minutes.
    public class CodeTimer : IDisposable
    {
        private Action<TimeSpan> reportFunction;
        private Stopwatch stopwatch = new Stopwatch();

        public CodeTimer([CallerMemberName] string name = "")
            : this((elapsed) =>
            {
                Trace.TraceInformation($"{name} took {elapsed}, or {elapsed.TotalMilliseconds} ms.");
            })
        { }

        public CodeTimer(Action<TimeSpan> report)
        {
            this.reportFunction = report;
            this.stopwatch.Start();
        }

        public void Dispose()
        {
            this.stopwatch.Stop();
            this.reportFunction(this.stopwatch.Elapsed);
        }
    }

    public class PropertyNameMapContractResolver : DefaultContractResolver
    {
        private Dictionary<string, string> propertyNameMap;

        public PropertyNameMapContractResolver(Dictionary<string, string> propertyNameMap)
        {
            this.propertyNameMap = propertyNameMap;
        }

        protected override string ResolvePropertyName(string propertyName)
        {
            if (this.propertyNameMap.TryGetValue(propertyName, out string resolvedName))
                return resolvedName;

            return base.ResolvePropertyName(propertyName);
        }
    }
}
I was using a custom ContractResolver, and that was evidently having a big impact on the performance of the DocumentDB classes from the .NET SDK.
This was how I was setting the ContractResolver:
JsonConvert.DefaultSettings = () =>
{
    return new JsonSerializerSettings
    {
        ContractResolver = new PropertyNameMapContractResolver(new Dictionary<string, string>()
        {
            { "ID", "id" }
        })
    };
};
And this is how it was implemented:
public class PropertyNameMapContractResolver : DefaultContractResolver
{
    private Dictionary<string, string> propertyNameMap;

    public PropertyNameMapContractResolver(Dictionary<string, string> propertyNameMap)
    {
        this.propertyNameMap = propertyNameMap;
    }

    protected override string ResolvePropertyName(string propertyName)
    {
        if (this.propertyNameMap.TryGetValue(propertyName, out string resolvedName))
            return resolvedName;

        return base.ResolvePropertyName(propertyName);
    }
}
The solution was easy: don't set JsonConvert.DefaultSettings, so the ContractResolver isn't used.
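If I ever need the name mapping again, my suspicion (an assumption on my part, not something I've verified against the SDK or Json.NET internals) is that part of the cost came from Json.NET caching contracts per resolver instance: my DefaultSettings delegate built a brand-new resolver on every call, so nothing was ever cached. A sketch that shares a single resolver instance instead:

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;

public static class JsonConfig
{
    // Share one resolver instance so Json.NET's per-resolver contract cache
    // is actually reused, rather than every new serializer re-resolving
    // contracts through a fresh resolver via reflection.
    private static readonly PropertyNameMapContractResolver SharedResolver =
        new PropertyNameMapContractResolver(new Dictionary<string, string>()
        {
            { "ID", "id" }
        });

    public static void Apply()
    {
        JsonConvert.DefaultSettings = () => new JsonSerializerSettings
        {
            ContractResolver = SharedResolver
        };
    }
}
```

I haven't measured whether this alone would have recovered the lost performance; for now, not setting DefaultSettings at all is the fix I've gone with.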
Results:
I was able to perform my spatial query in 21799.0221 ms, which is 22 seconds.
Previously it took 170717.151 ms, which is 2 minutes 50 seconds.
That's about 8x faster!