Go-live performance spot check
Describes how to test site performance prior to the go-live process.
During project implementation, the partner tests and monitors site performance in the QA and development environments, looking for site-specific data or custom code that could cause performance issues. A few weeks before go-live, as part of the Cloud Customer Launch Checklist, Optimizely performs a spot check of site performance in the sandbox environment to help ensure a successful launch. These tests place a subset of the site under a load typical for other Configured Commerce sites.
This is a pre-go-live process only. If performance issues arise after go-live, Optimizely and the partner can work together to resolve them through a support ticket separate from the go-live spot check ticket. Note that catalog-only sites are not profiled.
- The partner should log a Zendesk ticket to schedule a performance profile test. Optimizely requires two weeks' notice to begin the test. If there is an influx of tickets, the Optimizely partner success team will work with you to prioritize appropriately while trying not to impact delivery dates. Include in the Zendesk ticket any performance tuning you have already done and any known performance concerns.
- The partner should check in all custom extension code to GitHub.
- The partner and/or the customer should run the site through webpagetest.org. This simple check catches glaring issues such as oversized images. webpagetest.org also offers the option to run Google Lighthouse; we suggest enabling it as well, since it is more comprehensive and gives an overall score.
- Once the above steps are completed, an Optimizely testing engineer will be scheduled to begin the performance profile testing. Testing takes one to three days to complete.
- Optimizely will build Gatling scripts that exercise your custom code against the customer site and feature set (noted below) in the sandbox environment.
- The tests will cover the home page (authenticated and unauthenticated), up to three catalog pages, a product detail page, and adding to cart and checkout. This ensures that the tests exercise most of the APIs.
- Optimizely will execute the Gatling scripts under moderate load, running all of the tests mentioned above concurrently.
- Optimizely will synthesize the results and investigate data outliers to form recommendations.
- The final report includes timings for all requests, not only the API calls, but an evaluation is given only for the APIs.
Template for Performance Profile
Performance Profiling requested for: [Customer name]
Targeted Go Live:
Storefront User Username:
Storefront User Password:
Scoring and benchmark times
An evaluation will be given for each benchmark:
- Marked as bad if the 99th percentile exceeds the upper threshold of the benchmark.
- Marked as good if the 99th percentile is within the benchmark and the mean is lower than the benchmark.
- Marked as normal if both the 99th percentile and mean are within the benchmark.
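The scoring rules above can be sketched as follows. This is an illustrative sketch only: the benchmark and threshold values per endpoint are defined by Optimizely and are not published here, and the behavior for a 99th percentile that falls between the benchmark and the upper threshold is an assumption.

```python
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of timings (in ms)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def evaluate(samples, benchmark_ms, upper_threshold_ms):
    """Classify a set of response timings against a benchmark.

    benchmark_ms and upper_threshold_ms are hypothetical inputs; the
    real per-endpoint benchmarks are set by Optimizely.
    """
    p99 = percentile(samples, 99)
    mean = statistics.mean(samples)
    if p99 > upper_threshold_ms:
        return "bad"    # p99 exceeds the upper threshold of the benchmark
    if p99 <= benchmark_ms and mean < benchmark_ms:
        return "good"   # p99 within benchmark and mean below benchmark
    # p99 and mean within the upper threshold but not strictly better
    # than the benchmark; "normal" is assumed for this in-between band.
    return "normal"
```

For example, a timing set whose 99th percentile and mean both sit comfortably below the benchmark scores "good", while one whose 99th percentile blows past the upper threshold scores "bad".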
General performance guidelines
If the results of the test highlight areas of poor performance, ISC's built-in Prometheus metrics may be useful in narrowing down the source of the performance bottleneck.
Some other performance recommendations:
- Consider using IQueryable GetTableAsNoTracking() as opposed to IQueryable GetTable().
a. Inside Handlers and Pipelines, developers have access to IUnitOfWork, which in turn exposes IRepository. Using IRepository.GetTableAsNoTracking to obtain the IQueryable for an entity avoids the considerable overhead that Entity Framework change tracking can add when the entities are only read, not modified.
- Try to avoid loading entities inside iteration loops when working with IUnitOfWork (or Entity Framework in general).
a. This is especially true when iterating over GetTable(), which still has change tracking enabled.
b. If you cannot avoid it, consider breaking large loops over entities into smaller chunks. Otherwise, performance in the loop starts fast and quickly degrades the deeper into the loop it gets.
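The chunking advice above can be sketched language-neutrally. The codebase in question is C#/Entity Framework; Python is used here purely for illustration, and the per-batch work is a placeholder.

```python
def chunked(items, size):
    """Yield successive fixed-size slices of a list.

    Processing entities in bounded batches (and, in an ORM like Entity
    Framework, resetting the change-tracking context between batches)
    keeps per-iteration cost flat instead of degrading as the number of
    tracked entities grows.
    """
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Process a large entity set in batches of 500 rather than one
# ever-growing tracked loop.
totals = [sum(batch) for batch in chunked(list(range(2000)), 500)]
```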
- Make strategic choices when using Lazy Loading vs Eager Loading of Entities.
a. The cost of lazy loading is the round trip from the web server to the database server and back. A common anti-pattern to watch for here is n+1 queries. The expense of lazy loading is often hidden during local development because the database and web server are usually on the same machine, so the round trip is often under 1 ms and not noticeable.
b. The cost of eager loading is the complexity of the join query itself. This can be especially noticeable when the query Entity Framework generates does not take advantage of existing indexes/keys.
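The n+1 pattern described above can be demonstrated against any relational store. This sketch uses Python's built-in sqlite3 module in place of Entity Framework, with hypothetical product/price tables, to contrast one query per parent row (the lazy-loading shape) with a single join (the eager-loading shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE price (product_id INTEGER, amount REAL);
    INSERT INTO product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO price VALUES (1, 9.99), (2, 19.99);
""")

# Lazy-loading shape: 1 query for the parents + n queries for children.
# Cheap on localhost, but each extra round trip costs real latency once
# the database lives on a separate server.
lazy_result = []
for pid, name in conn.execute("SELECT id, name FROM product ORDER BY id"):
    (amount,) = conn.execute(
        "SELECT amount FROM price WHERE product_id = ?", (pid,)
    ).fetchone()
    lazy_result.append((name, amount))

# Eager-loading shape: one join, one round trip. The trade-off is the
# cost of the join itself, which grows if indexes/keys are not used.
eager_result = conn.execute("""
    SELECT p.name, pr.amount
    FROM product p JOIN price pr ON pr.product_id = p.id
    ORDER BY p.id
""").fetchall()
```

With 2 products the lazy shape issues 3 queries and the eager shape issues 1; with 1,000 products that becomes 1,001 versus 1.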
- Optimize Linq queries.
a. Entity Framework is our ORM, and it allows rapid development. That said, just as a developer must take time to optimize SQL queries to keep code performant, they must apply the same level of optimization to the Linq statements that, in essence, generate SQL queries behind the scenes.
b. Tooling to profile Linq to Sql:
- SQL Profiler, built into SQL Server Management Studio
- Glimpse: https://github.com/Glimpse/Glimpse
- Stackify Prefix: https://stackify.com/prefix/
- MiniProfiler with the EF extension: https://miniprofiler.com/ and https://miniprofiler.com/dotnet/HowTo/ProfileEF6
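One of the most common Linq optimizations is keeping filters on the IQueryable so they become part of the generated SQL, rather than materializing the whole table and filtering in application code. This language-neutral sketch uses sqlite3 with a hypothetical orders table; the comments map each shape back to its LINQ equivalent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, "open" if i % 2 else "closed") for i in range(10)],
)

# Anti-pattern: materialize every row, then filter in application code.
# The LINQ equivalent is calling .ToList() before .Where(), which pulls
# the full table across the wire.
all_rows = conn.execute("SELECT id, status FROM orders").fetchall()
open_slow = [row for row in all_rows if row[1] == "open"]

# Better: let the database do the filtering, so only matching rows
# travel over the wire. The LINQ equivalent keeps .Where() on the
# IQueryable so it is translated into the SQL WHERE clause.
open_fast = conn.execute(
    "SELECT id, status FROM orders WHERE status = ?", ("open",)
).fetchall()
```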
- Make strategic choices when using expands during API calls.
a. The expand API parameter tells the API to fetch deeper levels of data, and this can be expensive.
b. For example, ?expand=CustomProperties on the products API retrieves all the custom properties and values on each product returned. If you are just trying to get a list of product names and do not need the custom properties, this is an unnecessary performance hit.
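A sketch of why expands are expensive on the response side: the same product list serialized with and without expanded custom properties. The field names and property counts here are illustrative only, not the actual Configured Commerce API schema, and this models only payload size, not the extra server-side queries an expand also triggers.

```python
import json

# Hypothetical product records; "custom_properties" stands in for the
# data an ?expand=CustomProperties request would pull per product.
products = [
    {
        "name": f"product-{i}",
        "custom_properties": {f"attr{j}": "value" * 10 for j in range(20)},
    }
    for i in range(50)
]

def serialize(products, expand=False):
    """Return the JSON payload, including custom properties only on expand."""
    rows = [
        {
            "name": p["name"],
            **({"customProperties": p["custom_properties"]} if expand else {}),
        }
        for p in products
    ]
    return json.dumps(rows)

slim = serialize(products)                 # names only
expanded = serialize(products, expand=True)  # names plus every custom property
```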