
How search engines crawl product pages

This topic describes how search engines crawl product pages.

Search engines use automated bots to crawl the content of your website. These bots keep the search engine's records and search indices for your website's content up to date.

B2B Commerce Cloud has a Search Engine Optimization (SEO) feature that serves bots server-side rendered content instead of the dynamically rendered Angular pages.

The following table lists the User-Agent substrings that trigger the SEO Catalog:

| Crawler              | Description                                                 |
|----------------------|-------------------------------------------------------------|
| bot                  | Matches any crawler with "bot" in the User-Agent            |
| crawler              | Matches any crawler with "crawler" in the User-Agent        |
| baiduspider          | Baidu's web crawling spider                                 |
| 80legs               | 80legs web crawling and screen scraping platform            |
| ia_archiver          | Alexa's web and site audit crawler                          |
| voyager              | Cosmix Corporation's web crawling bot                       |
| curl                 | Command-line tool for transferring data                     |
| wget                 | Command-line tool for retrieving files                      |
| yahoo! Slurp         | Yahoo!'s web-indexing robot                                 |
| mediapartners-google | Google's web-indexing robot                                 |

From a coding standpoint, the logic is handled in the SearchCrawlerRouteConstraint class, located in the InSite.Mvc.Infrastructure assembly. The Match method returns a Boolean indicating whether the incoming request's User-Agent value contains any of the crawler substrings listed above.
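The Match check amounts to a case-insensitive substring search against the table above. A minimal sketch in Python (the actual class is C#; the function name here is illustrative, and the substring list mirrors the table):

```python
# Illustrative sketch of the check performed by SearchCrawlerRouteConstraint.Match.
# The substrings mirror the crawler table above.
CRAWLER_SUBSTRINGS = [
    "bot", "crawler", "baiduspider", "80legs", "ia_archiver",
    "voyager", "curl", "wget", "yahoo! slurp", "mediapartners-google",
]

def is_search_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent contains any known crawler substring."""
    ua = user_agent.lower()
    return any(substring in ua for substring in CRAWLER_SUBSTRINGS)

# Requests that match are served the server-side rendered (SEO) content:
print(is_search_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # True
print(is_search_crawler("Mozilla/5.0 (Windows NT 10.0) Chrome/120"))  # False
```

Note that broad substrings such as "bot" and "curl" mean any command-line `curl` request, for example, will also receive the server-side rendered page.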

This approach is required because the B2B Commerce Cloud front end is built with AngularJS, which renders pages dynamically in the browser; crawlers that do not execute JavaScript would otherwise see little or no content.

When deploying and testing your website, we recommend using a browser with developer tools enabled, such as Chrome. This way you can spoof the User-Agent value to test your SEO settings. Press F12, click Configure throttling, and select a custom user agent of your choosing.
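You can also spoof the User-Agent outside the browser. A sketch using Python's standard library (the URL is a placeholder; substitute your own page):

```python
import urllib.request

# Build a request that impersonates a crawler. The URL is a placeholder.
req = urllib.request.Request(
    "https://www.example.com/product-page",
    headers={"User-Agent": "Googlebot/2.1"},
)

# The spoofed header the server (and the SEO route constraint) will see:
print(req.get_header("User-agent"))  # Googlebot/2.1

# urllib.request.urlopen(req) would then fetch the server-side rendered page.
```

Equivalently, `curl -A "Googlebot/2.1" <url>` sets the same header from the command line, and plain `curl` requests already match the "curl" substring in the table above.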

For more information about device emulation in Chrome, visit the Google developer site.
