Creating High-Performance WCF Services
I had a WCF service where I wanted to be able to support over a hundred concurrent users, and while most of the service methods had small payloads that returned quickly, the startup sequence needed to pull down 200,000 records. The out-of-the-box WCF service had no ability to support this scenario, but with some effort I was able to squeeze orders-of-magnitude performance increases out of the service and hit the performance goal.
Initially, performance was abysmal and there was talk of ditching WCF entirely (and as the one pushing WCF technology on the project, this didn't seem like a career-enhancing change).
Here's how performance was optimized. The changes are listed in the order they were implemented. Some are fairly obvious; others took some time to discover. Each item represents a significant improvement in latency or scalability over the prior state. Although I have internal measurement numbers, I'm not comfortable publishing them, as the size of the data increased and the testing approach changed along the way.

1) Use NetTCP binding

This helps both throughput and the time it takes to open and close connections.

2) Use the DataContractSerializer instead of the XmlSerializer

I started out using DataTables; switching to POCO objects via Linq2Sql yielded a 6x increase.

slow:

[OperationContract]
MyDataTable GetData(...);

fast:

[OperationContract]
IEnumerable<MyDataDenormalized> GetData(...);

3) Unthrottle your service

It's quite understandable that WCF is resistant to denial-of-service attacks out of the box, but it's too bad that hitting the "turbo button" is such a manual operation. It would be nice if the Visual Studio tooling did this for you, or at least offered some guidance (MS - hint, hint). The items to look at here are the serviceThrottling behavior (set the max values high) and, on the binding, the listenBacklog, maxConnections, and maxBuffer* values (set these high as well; a sketch follows).
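As a rough illustration, here's one way to raise these limits when self-hosting. The service types (IMyService/MyService), address, and the specific limit values are placeholders, not from the original service:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class HostSetup
{
    static void Main()
    {
        // Raise the connection and buffer limits on the NetTcp binding.
        // Note: with the default Buffered transfer mode, MaxBufferSize
        // must equal MaxReceivedMessageSize.
        var binding = new NetTcpBinding
        {
            ListenBacklog = 200,
            MaxConnections = 200,
            MaxBufferPoolSize = 64 * 1024 * 1024,
            MaxBufferSize = 16 * 1024 * 1024,
            MaxReceivedMessageSize = 16 * 1024 * 1024,
        };

        var host = new ServiceHost(typeof(MyService),
            new Uri("net.tcp://localhost:9000/data"));
        host.AddServiceEndpoint(typeof(IMyService), binding, "");

        // The default throttles are sized for DoS resistance, not throughput;
        // raise them to match the expected client count.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 256,
            MaxConcurrentSessions = 256,
            MaxConcurrentInstances = 256,
        });

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}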
4) Cache your data

WCF, unlike ASP.NET, has no built-in facility to cache service responses, so you need to do it by hand. Any cache class will do.

5) Normalize/compress your data

This doesn't necessarily have to be done in the database; the Linq GroupBy operators make it easy to do in code. To clarify, say your data is kept in a denormalized table:
string Key1
string Key2
string Key3
int    val1
int    val2
The bulk of the result set ends up being duplicate data:

LongKeyVal1   LongKeyVal2   LongKeyVal3   10   12
LongKeyVal1   LongKeyVal2   LongKeyVal3   11   122
LongKeyVal1   LongKeyVal2   LongKeyVal3   12   212
So normalize this into:

LongKeyVal1   LongKeyVal2   LongKeyVal3
    (10, 12)
    (11, 122)
    (12, 212)
In code, given the following classes
// [Serializable] is required for the BinaryFormatter approach in item 6
[Serializable]
public class MyDataDenormalized
{
    public string Key1 { get; set; }
    public string Key2 { get; set; }
    public string Key3 { get; set; }
    public int Val1 { get; set; }
    public int Val2 { get; set; }
}

[Serializable]
public class MyDataGroup
{
    public string Key1 { get; set; }
    public string Key2 { get; set; }
    public string Key3 { get; set; }
    public MyDataItem[] Values { get; set; }
}

[Serializable]
public class MyDataItem
{
    public int Val1 { get; set; }
    public int Val2 { get; set; }
}
you can transform an IEnumerable<MyDataDenormalized> into an IEnumerable<MyDataGroup> via the following:
// group the flat rows by their three-part key
var keyed = from sourceItem in source
            group sourceItem by new
            {
                sourceItem.Key1,
                sourceItem.Key2,
                sourceItem.Key3,
            } into g
            select g;

// project each group into a single MyDataGroup carrying the key once,
// plus an array of its (Val1, Val2) pairs
var groupedList = from kItems in keyed
                  let newValues = (from sourceItem in kItems
                                   select new MyDataItem() { Val1 = sourceItem.Val1, Val2 = sourceItem.Val2 }).ToArray()
                  select new MyDataGroup()
                  {
                      Key1 = kItems.Key.Key1,
                      Key2 = kItems.Key.Key2,
                      Key3 = kItems.Key.Key3,
                      Values = newValues,
                  };
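A caveat worth making explicit: both queries above are deferred, so materialize the result once before caching or serializing it. Re-enumerating redoes the grouping work on every call, and the BinaryFormatter used in item 6 can't serialize the lazy query object itself:

groupedList = groupedList.ToArray();   // materialize once; a MyDataGroup[] is what gets cached and serialized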
6) Use the BinaryFormatter, and cache your serializations

If you're willing to forgo over-the-wire type safety, the BinaryFormatter is the way to go for scalability. Caching the data has only a limited impact if a significant amount of CPU time is spent serializing it - which is exactly what happens with the DataContractSerializer. The operation contract changes to

[OperationContract]
Byte[] GetData(...);
and the implementation to

var bf = new BinaryFormatter();
using (var ms = new MemoryStream())
{
    bf.Serialize(ms, groupedList);   // the materialized groups from item 5
    // and best to cache the resulting byte[] too
    return ms.ToArray();             // not GetBuffer(): ToArray() trims the unused tail of the buffer
}
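For the caching itself, here's a rough sketch, assuming a single startup data set; the PayloadCache class and the key name are illustrative, not the original service's code:

using System;
using System.Collections.Concurrent;

// Caches serialized payloads so the BinaryFormatter work is paid
// once per cache key rather than once per client call.
static class PayloadCache
{
    static readonly ConcurrentDictionary<string, byte[]> Cache =
        new ConcurrentDictionary<string, byte[]>();

    public static byte[] GetOrAdd(string key, Func<byte[]> serialize)
    {
        // GetOrAdd may invoke the factory more than once under a race,
        // which is harmless here: the payload is identical either way.
        return Cache.GetOrAdd(key, _ => serialize());
    }
}

The service method body then collapses to a lookup, e.g. return PayloadCache.GetOrAdd("startup-data", SerializeGroups); where SerializeGroups wraps the BinaryFormatter block above in a method (both names are hypothetical).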
Before items 4, 5, and 6, the service would max out at about 50 clients (response times would go way up and CPU usage would hit 80% on an 8-core box). After these changes were made, the service could handle 100+ clients and CPU usage flattened out at 30%.
Update: Shay Jacoby has reasonably suggested I show some code.
Update 2: Brett asks about relative impact. Here's a summary:

item                        latency   scalability
2) DataContract Serializer  large     large
3) unthrottle               small     large
4) cache data               small
5) normalize data           medium
6) cache serialization      small     large