The purpose of a content delivery network, or content distribution network (CDN), is to ensure high availability when serving content to visitors. A CDN consists of a globally distributed network of proxy servers deployed in multiple data centers, and delivers content from the fastest and closest server available.
The CDN servers cache content according to the cache rules you define in the HTTP headers sent by your application. Setting these caching rules correctly is critical for your solution to scale properly. See CDN recommendations.
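As an illustration of cache rules in HTTP headers, the following Python sketch chooses a `Cache-Control` value by response category. The categories and TTL values are assumptions for illustration, not Optimizely or CDN defaults:

```python
# Hypothetical sketch: choosing Cache-Control headers so a CDN edge
# can cache responses. Categories and TTLs are illustrative only.

def cache_headers(category: str) -> dict:
    """Return HTTP caching headers for a response, by content category."""
    if category == "static-asset":
        # Fingerprinted CSS/JS/images: safe to cache for a year at the edge.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    if category == "page":
        # HTML pages: short edge TTL, serve stale while revalidating at origin.
        return {"Cache-Control": "public, max-age=60, stale-while-revalidate=30"}
    # Personalized or authenticated responses must never be shared.
    return {"Cache-Control": "private, no-store"}
```

The key design point is that cacheability is decided per response type: long-lived for immutable assets, short-lived for pages, and never-shared for personalized content.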
Proper site warmup is crucial in cloud-based environments. A node may be taken out for maintenance at any time and put back in during peak hours; a node that receives a full share of traffic without first being warmed up causes response-time spikes and an increased risk of outages. The warmup feature automatically starts and initializes a web application to ready the server and data caches. See Initialization.
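Conceptually, warmup amounts to requesting key URLs before a node joins the load balancer, so caches are populated by the warmup hits rather than by real visitors. This is a hedged sketch of that idea, not the platform's warmup implementation; the URL list is an assumption:

```python
# Hypothetical warmup sketch: fetch key pages once so object and output
# caches are filled before the node takes live traffic. A real setup
# would use the platform's warmup configuration instead.
import urllib.request

WARMUP_PATHS = ["/", "/en/products/", "/en/articles/"]  # illustrative paths

def warm_up(base_url: str, paths=WARMUP_PATHS) -> list:
    """Fetch each path once; return (path, result) pairs."""
    results = []
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=30) as resp:
                results.append((path, resp.status))
        except Exception as exc:
            # A failed warmup hit should not abort the remaining ones.
            results.append((path, exc))
    return results
```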
Limiting the number of content types is good practice. Startup scans assemblies and caches views, so a large number (200+) of content types significantly increases startup time. You should also keep the Web App below 1 GB. This includes binaries, but not media assets and logs, which should be written to a BLOB storage container.
Cloud-based solutions are more likely to scale out the web servers than to scale them up. Each front-end node contributes a constant load to the database, so if you go from two front-end servers (a typical on-premises setup) to four front-end servers while keeping total throughput the same, the load on the database server increases.
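The arithmetic behind this can be made concrete. Assuming each node keeps its own in-memory cache (a simplification for illustration), every node must fill that cache from the database independently, so the cache-fill load grows with the node count even at constant traffic:

```python
# Illustrative arithmetic under an assumed caching model: each
# front-end node fills its own local cache from the database, so the
# duplicated cache-miss queries scale with the number of nodes.

def db_fill_queries(nodes: int, cached_items: int) -> int:
    """Queries needed just to populate each node's local cache once."""
    return nodes * cached_items

# Same total throughput, but doubling the nodes doubles cache-fill load:
two_nodes = db_fill_queries(2, 1000)   # 2000 queries
four_nodes = db_fill_queries(4, 1000)  # 4000 queries
```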
When scaling out, make sure that the machines that spend the most effort building a page are the front-end servers. Caching in multiple layers (object caches, partial HTML caches such as for complex menus, and full output caches) helps avoid a "cache stampede," especially when combined with warmup.
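A cache stampede occurs when many concurrent requests all find an entry expired and rebuild it simultaneously. A common mitigation, sketched below in generic Python (an assumption for illustration, not the platform's cache implementation), is a per-key lock so only one thread rebuilds while the others wait and reuse the result:

```python
# Minimal stampede-protection sketch: a per-key lock ensures an
# expensive entry is rebuilt once, not once per concurrent request.
import threading
import time

_cache: dict = {}
_locks: dict = {}
_locks_guard = threading.Lock()

def get_or_build(key, builder, ttl=60):
    """Return the cached value for key, rebuilding it at most once per expiry."""
    entry = _cache.get(key)
    if entry and entry[1] > time.monotonic():
        return entry[0]  # still fresh
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        # Re-check: another thread may have rebuilt while we waited.
        entry = _cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        value = builder()  # the expensive render happens only here
        _cache[key] = (value, time.monotonic() + ttl)
        return value
```

Combined with warmup, this pattern keeps a freshly added node from hammering the database when many requests miss the cold cache at once.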
By default, when a page is published, output caches are immediately invalidated for all sites. This causes output-cached pages to be re-rendered using the lower-level caches. Most of these lower-level caches remain valid after a publish, except the caches for the page that was published. Be sure to implement proper multi-layer or partial caching for rendered pages with heavy data processing. See Caching.
The ETag, or entity tag, is part of the HTTP protocol and is used for web cache validation. See CDN recommendations for information about using ETags.
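To show the validation mechanism, here is a generic Python sketch (standard HTTP behavior, not CDN- or Optimizely-specific): the server derives an ETag from the response body, and a client presenting the same tag in `If-None-Match` gets `304 Not Modified` with an empty body instead of a full re-download:

```python
# Sketch of ETag-based cache validation. The hashing scheme is an
# illustrative choice; any stable fingerprint of the body works.
import hashlib

def make_etag(body: bytes) -> str:
    """Derive a strong ETag from the response body."""
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def respond(body, if_none_match=None):
    """Return (status, body, headers) honoring a conditional request."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b"", {"ETag": etag}  # client's cached copy is still valid
    return 200, body, {"ETag": etag}
```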
In a cloud environment, retry policies become increasingly important. Transient errors may occur due to network issues or maintenance of infrastructure elements, and retry policies let the application recover gracefully from such errors without propagating them to the end user.
Retry mechanisms for Azure services differ because each service has its own requirements and characteristics, so each retry mechanism is tuned to a specific service. See transient faults and retry policies, and the Azure SDK for .NET, for guidelines.
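For illustration of the general pattern, this is a generic retry-with-backoff sketch in Python. It is not one of the Azure SDK's built-in policies, which should be preferred for Azure services; the exception types and delays are assumptions:

```python
# Generic retry sketch: retry transient failures with exponential
# backoff and jitter, surfacing the error only when attempts run out.
import random
import time

def with_retries(operation, attempts=4, base_delay=0.5,
                 transient=(ConnectionError, TimeoutError)):
    """Call operation(), retrying transient errors up to `attempts` times."""
    for attempt in range(attempts):
        try:
            return operation()
        except transient:
            if attempt == attempts - 1:
                raise  # out of retries; propagate the error
            # Exponential backoff with jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

The jitter matters in scaled-out deployments: without it, many nodes that failed at the same moment would all retry at the same moment too.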
Because the virtual machines hosting a Web App may be restarted at any time, you risk losing any information stored in the file system. Also, if you have large media volumes, store assets in BLOB storage instead of in the Web App, because storing them in the Web App limits scalability. Optimizely provides access to BLOB storage through a BlobProvider interface.
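The value of such an interface is that application code writes assets through an abstraction, so the backing store can be swapped without code changes. This Python sketch mirrors that idea in spirit only; the real BlobProvider is a .NET interface, and these names are illustrative:

```python
# Hedged sketch of a blob-storage abstraction: code depends on the
# interface, so local or in-memory storage in development and cloud
# BLOB storage in production are interchangeable.
from abc import ABC, abstractmethod

class BlobProvider(ABC):
    @abstractmethod
    def write(self, blob_id: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, blob_id: str) -> bytes: ...

class InMemoryBlobProvider(BlobProvider):
    """Test double; a production provider would target cloud storage."""
    def __init__(self):
        self._store = {}

    def write(self, blob_id, data):
        self._store[blob_id] = data

    def read(self, blob_id):
        return self._store[blob_id]
```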
Some third-party components, such as Lucene.NET, that use file shares or files local to the web server may have problems with high traffic in a cloud environment, and are therefore not supported.