Performance Metrics

Bench Reports can be customized with many report items (charts, tables, etc.). Each of these items can be configured using performance metrics.

This section lists all the metrics available in OctoPerf.

Hit Metrics

Hit Metrics List

All the following metrics are available in OctoPerf. To know which report items can display each metric, refer to the hit metrics availability table.

These metrics come in various types (minimum, average, count, rate, etc.), listed in the hit metrics types table.

| Metric | Description | Performance |
|---|---|---|
| UserLoad | Number of active users. | Many other metrics should not change as the user load increases. |
| Response time | Time between the request and the end of the response, in milliseconds. The response time includes both the latency and the connect time. | The lower the better. Should be less than 4 seconds. |
| Connect time | Time between the request and the server connection, in milliseconds. | The lower the better. High connect times can mean your servers are running out of available sockets, or that your database is overloaded. |
| Latency (Server time) | Time between the request and the first response byte, in milliseconds. | The lower the better. High response times combined with low latencies can mean your servers are running out of bandwidth. Check the throughput to confirm this. |
| Network time | Response time minus latency, in milliseconds. | The lower the better. High network times can mean your servers are running out of bandwidth. Check the throughput to confirm this. |
| Throughput | Amount of data exchanged between the clients and the servers, in bytes per second. | Must grow along with the user load. If it reaches an unexpected plateau, you may be running out of bandwidth. |
| Errors | Count or rate of errors that occurred. | Errors may happen if you did not validate your virtual user. Otherwise, errors may be a sign that your servers or database are overloaded. |
| Hits | Count or rate of hits (requests) that occurred. | Should increase as the user load goes up. |
| Assertions | Count of assertions in error, failed, or successful. | Assertions in error or failed let you know that your servers did not answer as you expected. |
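
The relationships between these timing metrics can be summed up in a minimal sketch, assuming hypothetical client-side timestamps (the field names are invented for the example; OctoPerf records these measurements internally):

```java
// A minimal sketch of how the timing metrics above relate to each other.
// The timestamps and field names are hypothetical.
public class HitTimings {
    long requestStartedAt; // request initiated by the client, in ms
    long connectedAt;      // TCP connection to the server established, in ms
    long firstByteAt;      // first byte of the response received, in ms
    long lastByteAt;       // last byte of the response received, in ms

    long connectTime()  { return connectedAt - requestStartedAt; } // Connect time
    long latency()      { return firstByteAt - requestStartedAt; } // Latency (server time)
    long responseTime() { return lastByteAt  - requestStartedAt; } // Response time
    long networkTime()  { return responseTime() - latency(); }     // Network time
}
```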

Hit Metrics Types

Each metric comes in various types. The table below lists all of them.

| Type | Description |
|---|---|
| Minimum | Minimum value of a metric. |
| Average | Average value of a metric. |
| Maximum | Maximum value of a metric. |
| Variance | The variance quantifies the dispersion of the metric. A variance close to 0 indicates that the values tend to be very close to the mean, while a high variance indicates that they are spread out over a wider range. Its unit is the square of the metric unit. |
| Standard deviation | The square root of the variance. It is easier to compare with other metric types since it uses the same unit as the metric itself. |
| Percentile 90 | A percentile indicates the value below which a given percentage of observations fall. The 90th percentile is the value below which 90 percent of the observations may be found. |
| Percentile 95 | The value below which 95 percent of the observations may be found. |
| Percentile 99 | The value below which 99 percent of the observations may be found. |
| Median | The 50th percentile: the value below which 50 percent of all the values may be found. |
| Total | Count of a metric. Number of occurrences of an event. |
| Rate | Count of a metric per second. |
| Apdex | Apdex (Application Performance Index) defines a standard method for reporting the performance of software applications, by specifying a way to analyze the degree to which measured performance meets user expectations. The score is between 0 and 1; at 1, all users are satisfied. |
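
To make the percentile and Apdex types concrete, here is a small sketch computing both from raw response times. It is an illustration only: the nearest-rank percentile method and the 500 ms Apdex threshold are assumptions, not OctoPerf's internal implementation. The Apdex formula itself, (satisfied + tolerating / 2) / total with tolerating samples falling between the threshold and 4 times the threshold, follows the Apdex standard.

```java
import java.util.Arrays;

public class MetricTypes {

    // Nearest-rank percentile: the value below which p percent of samples fall.
    // This is one of several common percentile definitions, used here for illustration.
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    // Apdex standard: satisfied (t <= T) counts 1, tolerating (T < t <= 4T)
    // counts 0.5, frustrated (t > 4T) counts 0, where T is the satisfaction threshold.
    static double apdex(long[] responseTimesMs, long thresholdMs) {
        double score = 0;
        for (long t : responseTimesMs) {
            if (t <= thresholdMs) score += 1.0;
            else if (t <= 4 * thresholdMs) score += 0.5;
        }
        return score / responseTimesMs.length;
    }

    public static void main(String[] args) {
        long[] times = {120, 250, 480, 900, 1500, 3200, 5200, 9000}; // sample data, in ms
        System.out.println("median = " + percentile(times, 50)); // 900
        System.out.println("p90    = " + percentile(times, 90)); // 9000
        System.out.println("apdex  = " + apdex(times, 500));     // 0.5, with T = 500 ms (assumed)
    }
}
```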

The following table defines the metrics and their associated statistics:

| Metric | Min. Avg. and Max. | Std Dev. and Variance | Med. Percentile | Total | Rate | Apdex |
|---|---|---|---|---|---|---|
| Response Time | X | X | X | | | X |
| Connect Time | X | X | | | | X |
| Latency | X | X | | | | X |
| Network Time | X | X | | | | |
| Errors | | | | X | X | |
| Hits | | | | X | X | |
| Assertions | | | | X | | |
| Throughput | | | | X | X | |

Hit Metrics Availability

The table below lists each performance metric and type, along with the report items that can display it.

| Metric | Type | Line Chart | Summary | Top Chart | Percentiles Chart | Results Table/Tree |
|---|---|---|---|---|---|---|
| UserLoad | Total | X | | | | |
| Response time | Average | X | X | X | X | X |
| Response time | Maximum | X | X | X | X | X |
| Response time | Minimum | X | X | X | X | X |
| Response time | Variance | X | X | X | | X |
| Response time | Standard deviation | X | X | X | | X |
| Response time | Apdex | X | X | X | | |
| Response time | Median | | | | X | |
| Response time | Percentile 90 | | X | | X | |
| Response time | Percentile 95 | | X | | X | |
| Response time | Percentile 99 | | X | | X | |
| Network time | Average | X | X | X | X | X |
| Network time | Maximum | X | X | X | X | X |
| Network time | Minimum | X | X | X | X | X |
| Network time | Variance | X | X | X | | X |
| Connect time | Average | X | X | X | X | X |
| Connect time | Maximum | X | X | X | X | X |
| Connect time | Minimum | X | X | X | X | X |
| Connect time | Variance | X | X | X | | X |
| Connect time | Standard deviation | X | X | X | | X |
| Connect time | Apdex | X | X | X | | |
| Latency | Average | X | X | X | X | X |
| Latency | Maximum | X | X | X | X | X |
| Latency | Minimum | X | X | X | X | X |
| Latency | Variance | X | X | X | | X |
| Latency | Standard deviation | X | X | X | | X |
| Latency | Apdex | X | X | X | | |
| Errors | Rate | X | X | X | | X |
| Errors | Total | X | X | X | | X |
| Errors | % Error | X | X | X | | X |
| Hits | Rate | X | X | X | X | X |
| Hits | Total | X | X | X | | X |
| Hits | Total Successful | X | X | X | | X |
| Hits | % Successful | X | X | X | | X |
| Throughput | Rate | X | X | X | | X |
| Throughput | Total | X | X | X | | X |
| Response Size | Total | X | X | X | | X |
| Assertions in error | Total | X | X | X | | X |
| Assertions failed | Total | X | X | X | | X |
| Assertions successful | Total | X | X | X | | X |

Monitoring Metrics

Monitoring Metrics List

The following monitoring metrics are collected for each load generator involved in the load test. Note that there is a page dedicated to load generators monitoring where you can find more details.

Operating system

| Metric | Unit | Description |
|---|---|---|
| % CPU Usage | Percentage of CPU usage | The lower the better. Excessive CPU usage can lead to increased response times or random failures. |
| Load avg per CPU (1 min) | Number of threads queued per CPU, averaged over the last minute | Load average represents the number of threads waiting to be processed by the CPU. Ideally this value should not exceed 1. Otherwise it can indicate high CPU activity, or a disk or network bottleneck. |
| % Used memory | Percentage of memory usage | The lower the better. As our agent's JVM makes use of all the memory it can, this value should not change much during the test, if at all. |
| Sent MB/Sec | Outbound network usage in megabytes per second | Must grow along with the user load. If it reaches a plateau before maximum load is achieved, you may be running out of bandwidth. |
| Received MB/Sec | Inbound network usage in megabytes per second | Must grow along with the user load. If it reaches a plateau before maximum load is achieved, you may be running out of bandwidth. |
| Established connections | Number of established TCP connections | Must grow along with the user load. If it reaches a plateau before maximum load is achieved, your server network capacity may be exceeded. |
| % Segments retransmitted | Percentage of TCP segments retransmitted | The lower the better. Even a very small percentage can have a huge impact on response times. |
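
For illustration, the load average per CPU can be approximated on any JVM host using the standard Java management API. This is a sketch of how one might read the figure, not how the OctoPerf agent actually collects it:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class LoadPerCpu {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        double load1m = os.getSystemLoadAverage(); // 1-minute load average, -1 if unavailable
        int cpus = os.getAvailableProcessors();
        if (load1m >= 0) {
            // Values consistently above 1.0 point to a CPU, disk, or network bottleneck.
            System.out.printf("Load avg per CPU (1 min): %.2f%n", load1m / cpus);
        }
    }
}
```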

Java Virtual Machine

| Metric | Unit | Description |
|---|---|---|
| Memory / % heap memory used | Percentage of heap memory in use | The lower the better. High heap memory usage can cause the load generator to fail. |
| G1 Young / collectionCount | Number of garbage collections in the young generation | These collections have little impact on JVM performance and can often be disregarded. |
| G1 Young / collectionTime | Amount of time spent in collections | Time spent in garbage collection of the G1 young generation. |
| G1 Old / collectionCount | Number of garbage collections in the old generation | These collections have a large impact on JVM performance; it is important to keep track of them if you suspect a performance issue on the load generators. |
| G1 Old / collectionTime | Amount of time spent in collections | Time spent in garbage collection of the G1 old generation. |
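
The collectionCount and collectionTime metrics map onto the JVM's standard GarbageCollectorMXBean counters, and the heap usage onto MemoryMXBean. The sketch below reads them through JMX for illustration; it is not necessarily how the OctoPerf agent collects these values:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmMonitoring {
    public static void main(String[] args) {
        // % heap memory used (getMax() can return -1 if the maximum is undefined)
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("Heap used: %.1f%%%n", 100.0 * heap.getUsed() / heap.getMax());

        // collectionCount / collectionTime per collector,
        // e.g. "G1 Young Generation" and "G1 Old Generation" on a G1 JVM
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```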

Info

More details are available on the page dedicated to load generators monitoring.

Monitoring Metrics Availability

Monitoring metrics are only available in line charts.

Donut Chart Metrics

Four remaining performance metrics can only be displayed in donut charts and area charts. These metrics show how certain data is distributed.

| Metric | Description |
|---|---|
| HTTP methods | Distribution of HTTP methods (GET, POST, DELETE, ...). |
| HTTP response codes | Distribution of HTTP response codes (2xx, 3xx, 4xx, 5xx, ...). You should avoid error codes such as 4xx and 5xx. |
| Media types count | Distribution of media types (html, css, json, javascript, xml, ...) by request count. Useful to check the distribution of resources by type. |
| Media types throughput | Distribution of media types (html, css, json, javascript, xml, ...) by bandwidth usage. Useful to know which resources use your bandwidth. |
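
As an illustration of what these distributions contain, the sketch below groups raw HTTP status codes into classes (2xx, 3xx, ...) and counts them; the sample data is invented:

```java
import java.util.Map;
import java.util.TreeMap;

public class ResponseCodeDistribution {
    public static void main(String[] args) {
        int[] statusCodes = {200, 200, 301, 200, 404, 500, 204, 403}; // hypothetical sample
        Map<String, Integer> distribution = new TreeMap<>();
        for (int code : statusCodes) {
            // 200 -> "2xx", 301 -> "3xx", etc.
            distribution.merge(code / 100 + "xx", 1, Integer::sum);
        }
        System.out.println(distribution); // {2xx=4, 3xx=1, 4xx=2, 5xx=1}
    }
}
```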