From: Joel Leach
To: Rick Strahl
I hear what you are saying about real-world requests taking longer, but I think the same would apply when using COM directly. I'm just getting a sense of how much additional overhead I would add by processing a request through WC as opposed to using COM directly, and these tests are telling me it is almost nothing. The main draw to me of using COM is the ability to work directly with Fox objects, but as we discussed in the other thread, the only way to do that reliably is to use MTDLLs, and those have too many caveats for me.
With this tiny overhead, using Web Connection for my Fox code is looking a lot more attractive. Do you know anyone that has combined ASP.NET and FoxPro in this fashion? Maybe there are some caveats I'm not thinking about.
Thanks,
Joel
Below is the fasthit request (i.e., hello world) with 1 instance and 1 client. Same numbers, really.
c:\>ab.exe -c1 -n1000 http://localhost/wconnect/wc.wc?wwmaint~fasthit
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: Microsoft-IIS/8.0
Server Hostname: localhost
Server Port: 80
Document Path: /wconnect/wc.wc?wwmaint~fasthit
Document Length: 784 bytes
Concurrency Level: 1
Time taken for tests: 2.902 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 1103800 bytes
HTML transferred: 784000 bytes
Requests per second: 344.56 [#/sec] (mean)
Time per request: 2.902 [ms] (mean)
Time per request: 2.902 [ms] (mean, across all concurrent requests)
Transfer rate: 371.41 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    0.3      0      1
Processing:     1    3    7.7      2    105
Waiting:        1    3    7.7      2    105
Total:          1    3    7.7      2    105
Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 3
90% 3
95% 3
98% 3
99% 3
100% 105 (longest request)
FWIW, when I ran fasthit, both today and for the previous numbers, I turned off the hit display in the Web Connection UI, which improved performance by about 15% for this request.
HTTP access is FAST!
However, it's not realistic to expect that kind of request throughput with real work items: as soon as you hit even a local Fox table, throughput probably halves, and anything more complex adds further overhead. For typical requests that pull a few records from the database, I think it ends up between 10 and 20 requests a second per server instance.
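The relationship between per-request processing time and per-instance throughput above is just simple arithmetic; a quick sketch (the 2.9ms and 50-100ms figures come from the numbers in this thread, and the serial-per-instance assumption is how Web Connection FoxPro instances behave):

```python
def throughput_per_instance(avg_request_seconds, instances=1):
    """Requests/second a pool of serial server instances can sustain,
    assuming each instance handles one request at a time."""
    return instances / avg_request_seconds

# A hello-world request at ~2.9 ms/request supports roughly 345 req/sec...
print(round(throughput_per_instance(0.0029)))  # -> 345

# ...but a typical data-access request at 50-100 ms per request
# drops a single instance to the 10-20 req/sec range quoted above.
print(round(throughput_per_instance(0.100)))   # -> 10
print(round(throughput_per_instance(0.050)))   # -> 20
```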
+++ Rick ---
It appears the file-mode timer/delays are in fact the primary factor in my test results. I think Time per request is the number I'm looking for from your results, but I'm not sure whether it was actually measured or calculated from the totals. Would you mind running the fast hit test again in COM mode with only one concurrent user (-c1)? I believe that would be a direct comparison to my test, minus the file-mode delays.
Thanks,
Joel
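On the measured-versus-calculated question: as I understand ab's output, the "Time per request" line is calculated from the run totals (concurrency × total time ÷ completed requests), not measured per request. A minimal sketch checking that against the two runs in this thread (small rounding differences come from ab using the unrounded elapsed time):

```python
def time_per_request_ms(concurrency, total_time_s, completed_requests):
    """How ab appears to derive its 'Time per request (mean)' figure."""
    return concurrency * total_time_s * 1000 / completed_requests

# The -c1 fasthit run: 1000 requests in 2.902 s -> 2.902 ms, as reported.
print(round(time_per_request_ms(1, 2.902, 1000), 3))

# The -c10 testpage.wwd run: 200 requests in 3.169 s -> ~158.45 ms,
# close to the 158.471 ms ab reported.
print(round(time_per_request_ms(10, 3.169, 200), 2))
```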
Running in file mode will have quite a bit of overhead because you have a timer polling for incoming requests. The default timer tick is 200ms, so your variance most likely comes from that (or fractions thereof). Also, running in debug mode is slower, and by default all logging and the server display are turned on. Performance can be improved a lot by turning those options off or running without a UI entirely - but some things like compiled code and COM compilation can't be done with the
COM operation removes the timer and requests are picked up immediately. In my tests on my fairly powerful laptop (i7, 4 cores) I get about 63 req/sec for a hello-world type request with two instances, with logging and the UI turned on.
Microsoft Windows [Version 6.2.9200]
(c) 2012 Microsoft Corporation. All rights reserved.

c:\>ab -n200 -c10 http://localhost/wconnect/testpage.wwd
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests
Server Software: Microsoft-IIS/8.0
Server Hostname: localhost
Server Port: 80

Document Path: /wconnect/testpage.wwd
Document Length: 3877 bytes

Concurrency Level: 10
Time taken for tests: 3.169 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 843181 bytes
HTML transferred: 775400 bytes
Requests per second: 63.10 [#/sec] (mean)
Time per request: 158.471 [ms] (mean)
Time per request: 15.847 [ms] (mean, across all concurrent requests)
Transfer rate: 259.80 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    0.3      0      1
Processing:     9  143  476.1     26   3168
Waiting:        9  143  476.1     26   3168
Total:          9  143  476.2     26   3168

Percentage of the requests served within a certain time (ms)
50% 26
66% 34
75% 36
80% 38
90% 130
95% 726
98% 2828
99% 3031
100% 3168 (longest request)
I talk more about the test process here:
http://www.west-wind.com/weblog/posts/2012/Sep/04/ASPNET-Frameworks-and-Raw-Throughput-Performance
That's strictly about ASP.NET, but the same process applies, and you can grab ab.exe out of the Git repository.
The example is stock Web Connection, so you can duplicate this setup for yourself (i.e., TestPage.wwd). It's not quite a hello-world request, as it echoes a lot of the request data back. A true do-nothing request would probably double the throughput.
Actually there's one in the box:
c:\>ab -n200 -c10 http://localhost/wconnect/wc.wc?wwmaint~fasthit
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests
Server Software: Microsoft-IIS/8.0
Server Hostname: localhost
Server Port: 80

Document Path: /wconnect/wc.wc?wwmaint~fasthit
Document Length: 784 bytes

Concurrency Level: 10
Time taken for tests: 0.507 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 220783 bytes
HTML transferred: 156800 bytes
Requests per second: 394.50 [#/sec] (mean)
Time per request: 25.349 [ms] (mean)
Time per request: 2.535 [ms] (mean, across all concurrent requests)
Transfer rate: 425.29 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    0.3      0      1
Processing:     2   18   66.3      3    506
Waiting:        2   18   66.3      3    505
Total:          2   18   66.4      3    506

Percentage of the requests served within a certain time (ms)
50% 3
66% 4
75% 4
80% 4
90% 5
95% 103
98% 305
99% 405
100% 506 (longest request)
which actually bumps the throughput to almost 400 requests a second.
Remember that throughput and request time are not the limiting performance factors - the limiting factor is always the actual request processing and the CPU load on the system (which FoxPro is not very good at balancing).
+++ Rick ---
I've had my fill of COM and threads for now, so I'm playing with the Web Connection demo to get a feel for what it might be like to use it in conjunction with ASP.NET. There's a lot more to WC than its function as a pool manager for Fox servers, so I'm interested in that stuff as well. My first task is to find out what kind of hit I'll take forwarding Fox requests from ASP.NET to WC, and I've put together a little test. Here's a very simple function I added to wwdemo.prg:
Function JoelTest
Response.Write("This is a test.")
EndFunc
Here is the code in an ASP.NET Web Form button click:
System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Start();

string baseUri = "http://localhost/wconnect_demo/JoelTest.wwd";
for (int i = 0; i < 10; i++)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(baseUri);
    request.Method = "GET";
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    string page = reader.ReadToEnd();
    reader.Close();
    response.Close();
}

sw.Stop();
this.lblResponse.Text = "Total WC time: " + sw.ElapsedMilliseconds.ToString() + "<br>";
The times for each request were all over the map, so I ran 10 back to back. The average was roughly 200-250ms per request. I realize there are quite a few caveats:
1. This is the demo version.
2. It is a single WC instance.
3. It is operating in file mode.
4. This isn't a raw throughput test. I'm only sending one request at a time.
5. I don't know how much of the time is on the .NET side.
6. This is my desktop development computer, and my hardware is showing its age.
7. It's entirely possible I have something configured wrong.
All things considered, 250ms isn't bad, but I'm wondering if I should expect better. There may be a flaw in my test. I also did a little testing with Microsoft's Stress Tool directly against the WC server. If I pump up the threads, I can get about 55 requests per second. If I use only one thread, it drops to 4-5 requests per second, which is consistent with my test from ASP.NET. So again, this isn't about throughput; I'm just trying to get an indication of how long an average individual request takes to process.
EDIT: I just had a thought. I wonder if I'm hitting a file polling interval here. Since I'm only sending one request at a time, WC can only process as fast as it polls the file system. If I send multiple requests through the Stress Tool, WC can process more requests per second because it sees more files between intervals. I imagine that issue mostly goes away in COM mode.
Thanks,
Joel
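Joel's polling hypothesis can be sanity-checked with a quick simulation: if a lone request arrives at a random point within Web Connection's default 200ms file-mode timer tick, it waits on average about half a tick (~100ms) before being picked up, on top of the actual processing time. The sketch below assumes uniform arrival times and a nominal 3ms processing cost (the fasthit figure from earlier in the thread):

```python
import random

TICK = 0.200  # default Web Connection file-mode polling interval, in seconds

def simulated_latency(processing=0.003):
    """Latency for one request arriving at a random point within a tick:
    time until the next poll, plus the actual processing time."""
    wait_for_poll = random.uniform(0.0, TICK)
    return wait_for_poll + processing

random.seed(42)
samples = [simulated_latency() for _ in range(100_000)]
mean_ms = 1000 * sum(samples) / len(samples)

# The expected mean is about half a tick (~100 ms) plus processing -
# the same order of magnitude as the 200-250 ms Joel observed once
# .NET-side and other overhead is added on top.
print(f"mean simulated latency: ~{mean_ms:.0f} ms")
```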