Hello Pummel

I’ve been doing some capacity modeling at $work recently and found myself needing to answer “find the concurrency limit at which the 99th percentile stays below 100 milliseconds” type questions. So I wrote a tool, pummel, to do it for me.

$ pummel limit --labels ./urls.txt 
clients	tp99.0	mean	reqs/sec
1	2.00	1.03	967.59
2	2.00	1.04	1932.37
4	3.00	1.43	2799.16
8	16.00	3.64	2199.13
16	130.00	7.81	2049.02
8	17.00	3.54	2262.44
12	73.00	5.62	2135.61
16	129.00	7.58	2110.93
12	71.00	5.57	2155.99
14	117.97	6.58	2127.79
12	71.00	5.57	2155.99
$ 

By default it looks for that “tp99 < 100ms” threshold, which in this case it found at 12 requests in flight at once.
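The output above suggests the shape of the search: the client count doubles until tp99 blows past the threshold, then it narrows in between the last passing and the first failing level. Here is a minimal sketch of that doubling-plus-bisection idea in Go; it is not pummel’s actual code, and the measure callback and thresholdMs parameter are just placeholders for “run the load at this concurrency and report tp99 in milliseconds”.

// findLimit is a sketch of a doubling-then-bisection search for the largest
// client count whose tp99 stays under thresholdMs. The measure callback is a
// stand-in for "run the load at this concurrency and report tp99 in ms".
func findLimit(measure func(clients int) float64, thresholdMs float64) int {
    lo, hi := 0, 1
    // Ramp phase: double the concurrency until the threshold is exceeded.
    for measure(hi) < thresholdMs {
        lo, hi = hi, hi*2
    }
    // Bisect between the last passing (lo) and first failing (hi) level.
    for hi-lo > 1 {
        mid := (lo + hi) / 2
        if measure(mid) < thresholdMs {
            lo = mid
        } else {
            hi = mid
        }
    }
    return lo // 0 means even a single client missed the threshold
}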

Also really useful is the step command, which just steps up the concurrent load and reports the timings at each level:

$ pummel step --limit 20 --max 5000 ./urls.txt 
1	2.00	1.05	956.21
2	3.00	1.08	1854.26
3	4.00	1.24	2415.46
4	3.00	1.41	2836.48
5	6.00	2.05	2444.27
6	9.00	2.54	2358.31
7	11.00	2.92	2398.74
8	16.00	3.38	2364.49
9	23.99	3.99	2257.22
10	35.99	4.43	2258.05
11	54.99	5.10	2157.37
12	72.98	5.94	2020.61
13	87.97	5.94	2187.89
14	125.99	6.40	2187.64
15	125.00	6.85	2188.50
16	130.00	7.39	2163.68
17	134.00	7.86	2163.13
18	143.98	8.35	2156.93
19	156.00	8.92	2129.52
$ 
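Each of those lines is the same basic measurement: keep N clients busy against the URL list for a while, collect the per-request latencies, and reduce them to tp99, mean, and requests/second. A rough Go sketch of that per-level measurement follows; again it is only an illustration, not pummel’s source, and the window parameter and the “loop over the URL file” behaviour are assumptions.

package sketch

import (
    "io"
    "net/http"
    "sort"
    "sync"
    "time"
)

// measureLevel is a sketch of sampling one concurrency level: `clients`
// goroutines loop over the URLs until the window closes, and the collected
// latencies are reduced to tp99 and mean (both in ms) plus requests/second.
func measureLevel(urls []string, clients int, window time.Duration) (tp99, mean, rps float64) {
    var mu sync.Mutex
    var latencies []float64 // per-request latency in milliseconds

    deadline := time.Now().Add(window)
    var wg sync.WaitGroup
    for c := 0; c < clients; c++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; time.Now().Before(deadline); i++ {
                start := time.Now()
                resp, err := http.Get(urls[i%len(urls)])
                if err != nil {
                    continue
                }
                io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
                resp.Body.Close()
                ms := float64(time.Since(start)) / float64(time.Millisecond)
                mu.Lock()
                latencies = append(latencies, ms)
                mu.Unlock()
            }
        }()
    }
    wg.Wait()

    n := len(latencies)
    if n == 0 {
        return
    }
    sort.Float64s(latencies)
    var sum float64
    for _, l := range latencies {
        sum += l
    }
    tp99 = latencies[n*99/100]
    mean = sum / float64(n)
    rps = float64(n) / window.Seconds()
    return
}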

Assuming you put this output in data.csv, it also plots very nicely with gnuplot:

set terminal png size 640,480
set xlabel 'concurrency'
set ylabel 'millis'

set output 'tp99.png'
plot 'data.csv' using 1:2 with lines title 'tp99 response time'

set output 'mean.png'
plot 'data.csv' using 1:3 with lines title 'mean response time'

set output 'requests_per_second.png'
set ylabel 'requests/second'
plot 'data.csv' using 1:4 with lines title 'requests/second'
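Save that as something like plots.gp (the name is up to you) and run gnuplot plots.gp to get the three PNGs.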

Nothing super fancy, but it is kind of fun :-)