There will be variables based on environment differences, tool selection, site architecture, and common user behaviors.
## Testing strategies
There are a number of testing strategies appropriate for different scenarios. Some considerations for when and how to conduct testing are:

- Testing on Production hardware
- Defining typical user behaviors ahead of the test
- Defining success metrics ahead of the test
- Testing after code is complete
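Defining success metrics ahead of the test can be made concrete as a set of pass/fail thresholds checked after the run. A minimal sketch; the metric names and values below are illustrative assumptions, not <<product-name>> defaults:

```python
# Sketch: define pass/fail thresholds before the test, then evaluate
# measured results against them. All numbers here are illustrative.

THRESHOLDS = {
    "p95_response_ms": 1500,   # 95th-percentile response time
    "error_rate": 0.01,        # fraction of failed requests
    "throughput_rps": 50,      # minimum sustained requests/second
}

def evaluate(results: dict) -> list[str]:
    """Return a list of human-readable threshold violations."""
    failures = []
    if results["p95_response_ms"] > THRESHOLDS["p95_response_ms"]:
        failures.append("p95 response time too high")
    if results["error_rate"] > THRESHOLDS["error_rate"]:
        failures.append("error rate too high")
    if results["throughput_rps"] < THRESHOLDS["throughput_rps"]:
        failures.append("throughput too low")
    return failures

# Evaluate a fabricated set of measurements
print(evaluate({"p95_response_ms": 1800, "error_rate": 0.002, "throughput_rps": 60}))
```

Agreeing on these numbers before the test keeps the team from redefining "acceptable" after results come in.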
Testing prior to being "solution complete" may be useful for finding issues in the code, but it can also waste resources chasing the wrong issues. Performing tests in the Production environment with a code-complete solution is the most efficient use of time.
There are multiple tools to use. Complexity and testing strategies may vary based on tool selection.
## Synthetic testing
Synthetic testing means simulating site traffic. It starts with defining common user behaviors and expected numbers of users, then increasing the load until the site ceases to deliver acceptable performance. Gatling is the tool currently used by <<product-name>>.
Common user behaviors are often difficult to define. Google Analytics/Tag Manager has historically been helpful for gathering real data and prioritizing the types of common user behavior.
Typically Synthetic Testing will be done before go-live and during the Alpha/Beta releases to define where the common bottlenecks exist in the site. It may be conducted later if issues are identified to assist in troubleshooting or verification of a fix.
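The ramp-until-unacceptable approach can be sketched as a simple loop. In practice a tool like Gatling generates the load; `measure_p95_latency` below is a fabricated stand-in for a real load-test step, and all numbers are illustrative:

```python
# Sketch: increase simulated users step by step until the site no longer
# meets the acceptable performance level. The latency model is fabricated.

ACCEPTABLE_P95_MS = 2000

def measure_p95_latency(users: int) -> float:
    """Hypothetical measurement; replace with a real load-test run.
    Latency grows quadratically with load here, purely for illustration."""
    return 200 + 0.002 * users ** 2

def find_capacity(start: int = 100, step: int = 100, max_users: int = 5000) -> int:
    """Return the highest tested user count that still met the threshold."""
    capacity = 0
    for users in range(start, max_users + 1, step):
        if measure_p95_latency(users) > ACCEPTABLE_P95_MS:
            break
        capacity = users
    return capacity

print(find_capacity())  # → 900 with this fabricated model
```

The result of such a ramp is the point where bottlenecks appear, which is what pre-go-live testing is trying to locate.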
### Tool - Gatling
Server-side tool for Capacity and Load Testing (Synthetic Testing). Results will only be as valid as your model of a typical user; Google Analytics or the common buyer behavior from a previous site can help with these definitions. Because of differing architectures, site customizations, and user behaviors, this is the most difficult area for which to make a general, prescriptive recommendation. Insite is happy to discuss more on a case-by-case basis.
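One way to turn analytics data into a user model is to pick simulated-user scenarios in proportion to their observed share of traffic. A minimal sketch; the scenario names and weights below are invented for illustration, not measured data:

```python
import random

# Sketch: choose scenarios weighted by frequencies observed in analytics.
# Names and weights are illustrative only.
SCENARIOS = {
    "browse_catalog": 0.55,
    "search_and_view_product": 0.30,
    "add_to_cart_and_checkout": 0.15,
}

def pick_scenario(rng: random.Random) -> str:
    """Choose one scenario, weighted by its observed share of traffic."""
    names = list(SCENARIOS)
    weights = list(SCENARIOS.values())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed for a repeatable sample
sample = [pick_scenario(rng) for _ in range(1000)]
print(sample.count("browse_catalog") / 1000)  # roughly 0.55
```

Gatling's own DSL expresses the same idea with weighted scenario injection; this sketch only shows the modeling step.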
### Tool - WebPageTest
Client-side, front-end testing tool. It can be used for both synthetic testing and real user monitoring. The focus is on optimizing the initial request and limiting render-blocking resources (fonts/CSS/JS in the head). Our recommendation is to optimize the following areas:

- Time to First Byte / Start Render: minimize, targeting <1s
- Visual progress (%) over time

Generally we ignore the following areas of the tool because they have not been pertinent to performance issues:

- Render Complete (in most cases)
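For context on the visual-progress metric: WebPageTest summarizes visual progress over time as the Speed Index, the integral of (1 - visual completeness) over time, so pages that render visible content sooner score lower (better). A simplified sketch of that computation, with fabricated frame data:

```python
# Sketch: a simplified Speed Index computation. WebPageTest defines
# Speed Index as the integral of (1 - visual completeness) over time.
# The sample frames below are fabricated.

def speed_index(frames: list[tuple[int, float]]) -> float:
    """frames: (timestamp_ms, visual_completeness 0..1), sorted by time.
    Integrates (1 - completeness) as a step function between frames."""
    total = 0.0
    for (t0, vc0), (t1, _) in zip(frames, frames[1:]):
        total += (1 - vc0) * (t1 - t0)
    return total

# Page reaches 40% visually complete at 500 ms, 80% at 1000 ms, done at 1500 ms
frames = [(0, 0.0), (500, 0.4), (1000, 0.8), (1500, 1.0)]
print(speed_index(frames))  # → 900.0
```

This is why "visual progress % over time" matters more than render complete: two pages with the same completion time can have very different Speed Indexes.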
## Real user monitoring
Real User Monitoring, or Application Performance Monitoring, covers tools that run continuously, tracking environment performance and user traffic, so you can choose when to react based on pre-determined performance thresholds. Only someone familiar with the environment will be able to use the tool to diagnose all but the most obvious issues.
There are multiple points at which monitoring should be deployed. Monitoring can be active or passive: firing alerts to proactively surface issues, or serving as a tool to diagnose an issue after something has been discovered. Active tools require time to properly tune alerts and to ensure site performance is not being affected by the monitoring tool itself.
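One common way to tune active alerts is to require a threshold to be breached for several consecutive samples before firing, which cuts down on noisy one-off spikes. A minimal sketch; the threshold and window values are illustrative assumptions:

```python
# Sketch: require N consecutive over-threshold samples before alerting,
# a common technique for tuning active monitoring to avoid alert noise.
# Threshold and window size are illustrative.

def should_alert(samples: list[float], threshold: float, window: int) -> bool:
    """True only if the last `window` samples all exceed the threshold."""
    if len(samples) < window:
        return False
    return all(s > threshold for s in samples[-window:])

cpu = [40, 45, 92, 50, 93, 94, 95]  # percent utilization over time
print(should_alert(cpu, threshold=90, window=3))       # True: 93, 94, 95
print(should_alert(cpu[:4], threshold=90, window=3))   # False: one-off spike
```

The tuning work mentioned above is largely about choosing these thresholds and windows so alerts fire on real degradation, not transient blips.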
### Tool - App Dynamics
This is an Application Performance Monitoring tool that measures the full application stack, from code down to the database, gathering as much information as possible and pinpointing the exact area where an issue exists.
This tool is used for monitoring the <<product-name>> application itself. It pinpoints code as it runs, showing which models are being used and when. It also monitors database queries.
### Tool - Logic Monitor/Amazon Hosting Stats
Insite operates from the AWS (Amazon Web Services) environment. We use the Amazon hosting stats and Logic Monitor for basic server monitoring in the following areas:

- IIS monitoring (open connections, etc.)
- Overall SQL Server database queries and connections
## Recurring, active monitoring
Some advanced tools combine monitoring with synthetic testing to give more reliable, repeated benchmarks. Sites which have periods of increased traffic that affect performance may benefit from recurring synthetic testing. It can also be highly useful for comparing speeds pre and post changes to site architecture, code, software patches, and so on.
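Comparing speeds before and after a change can be as simple as comparing the same percentile from each recurring benchmark run. A minimal sketch with fabricated timings; a real comparison would also account for run-to-run variance:

```python
# Sketch: compare the 95th percentile from recurring benchmark runs
# before and after a change. Sample data is fabricated.
import statistics

def p95(samples: list[float]) -> float:
    """95th percentile, using the inclusive method from statistics.quantiles."""
    return statistics.quantiles(samples, n=20, method="inclusive")[-1]

# Response times in ms from two benchmark runs (fabricated)
before = [320, 340, 310, 330, 900, 350, 315, 325, 335, 345]
after  = [280, 300, 290, 310, 410, 295, 285, 305, 315, 290]

delta = p95(after) - p95(before)
print(f"p95 changed by {delta:+.1f} ms")
```

Percentiles are preferred over averages here because a single outlier (like the 900 ms sample above) distorts a mean far more than a p95.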