How to create a snapshot for search engine crawlers
Some developers set up one website for humans and another for the robots, that is, the search engine crawlers. The downside of this workaround is having to maintain two websites.
Google has made efforts to index JavaScript applications, but creating snapshots for search engines is still recommended. Other search engines such as Bing, Baidu, and Yahoo have only limited support for SPAs.
Follow search engine guidelines. Render the page for crawlers so that the content appears in full, for example <h1>Hello John</h1> rather than an empty <h1></h1>; a crawler will not execute the JavaScript needed to resolve a binding like {{getMyNameFromServer(id)}}. Creating snapshots means the developer does that rendering work on the crawler's behalf.
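As a concrete illustration, here is a minimal sketch of serving a pre-rendered snapshot when a crawler asks for one with the ?_escaped_fragment_= query parameter, the convention Google's old AJAX crawling scheme used for hash-bang URLs. The Express server, the ./snapshots directory, and the file-naming scheme are assumptions for illustration, not a prescribed setup.

import express from 'express';
import { promises as fs } from 'fs';
import path from 'path';

const app = express();

// If the request carries ?_escaped_fragment_=, a crawler is asking for the
// snapshot of a hash-bang route; otherwise let the normal SPA handle it.
app.use(async (req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment === undefined) {
    return next();
  }
  const route = String(fragment) || 'index';
  const file = path.join(__dirname, 'snapshots', route.replace(/\//g, '_') + '.html');
  try {
    // The snapshot already contains the rendered markup, e.g. <h1>Hello John</h1>.
    const html = await fs.readFile(file, 'utf8');
    res.type('html').send(html);
  } catch {
    next(); // no snapshot for this route; fall back to the SPA shell
  }
});

app.listen(3000);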
It is also important to generate a clear sitemap for the website so the crawler can discover and navigate every page.
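One way to do that is to generate sitemap.xml from the application's route list, as in the sketch below; the base URL and routes are placeholders for illustration.

import { writeFileSync } from 'fs';

const baseUrl = 'https://www.example.com';
const routes = ['/', '/about', '/products', '/contact']; // assumed application routes

const urlEntries = routes
  .map((route) => '  <url><loc>' + baseUrl + route + '</loc></url>')
  .join('\n');

const sitemap =
  '<?xml version="1.0" encoding="UTF-8"?>\n' +
  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
  urlEntries +
  '\n</urlset>\n';

// Write sitemap.xml so it can be served from the site root and referenced in robots.txt.
writeFileSync('sitemap.xml', sitemap);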
SPAs built with frameworks like AngularJS are snappy and highly interactive by nature, with many entry points, exit points, and user flows. Developers should work with product owners and product managers to decide which user flows and experiences matter most when creating the snapshots.
Automatically recreate or crawl the website for search engine optimization
Have you heard about the popularity of PhantomJS, the headless "ghostly" browser? It is well suited to generating snapshots automatically and regularly for sophisticated SPA applications. Use PhantomJS with prerender (which ships with PhantomJS), and store and cache the result in Redis for crawlers to consume.
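Below is a minimal sketch of that pipeline, assuming the phantom Node bridge and the ioredis client are installed, Redis runs locally, and the SPA is served at http://localhost:3000; the route list, key names, and the fixed rendering delay are illustrative only.

import phantom from 'phantom';
import Redis from 'ioredis';

const redis = new Redis(); // localhost:6379 by default

async function snapshotRoute(route: string): Promise<void> {
  const instance = await phantom.create();
  const page = await instance.createPage();

  await page.open('http://localhost:3000/#!' + route);
  // Give AngularJS a moment to finish rendering; a real script would wait
  // for a "render complete" signal instead of a fixed delay.
  await new Promise((resolve) => setTimeout(resolve, 2000));

  const html: string = await page.property('content'); // the fully rendered DOM
  await instance.exit();

  // Cache the snapshot for a day; the crawler-facing middleware reads this key.
  await redis.set('snapshot:' + route, html, 'EX', 60 * 60 * 24);
}

// Regenerate snapshots for the routes we care about, e.g. on a cron schedule.
Promise.all(['/', '/about', '/products'].map(snapshotRoute)).then(() => redis.quit());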
Route all crawler requests to the snapshots by inspecting the User-Agent header.
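For example, an Express middleware along these lines can detect common crawlers and hand them the cached snapshot; the bot pattern is illustrative rather than exhaustive, and the snapshot: key format simply mirrors the caching sketch above.

import express from 'express';
import Redis from 'ioredis';

const app = express();
const redis = new Redis();

// A partial list of crawler user agents, for illustration only.
const botPattern = /googlebot|bingbot|baiduspider|yahoo|slurp|duckduckbot/i;

app.use(async (req, res, next) => {
  const userAgent = req.get('User-Agent') || '';
  if (!botPattern.test(userAgent)) {
    return next(); // ordinary visitors get the normal SPA
  }

  const cached = await redis.get('snapshot:' + req.path);
  if (cached) {
    return res.type('html').send(cached); // crawlers get the pre-rendered HTML
  }
  next(); // no snapshot yet; fall back to the SPA
});

app.listen(3000);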
Get the SEO foundations right
It is still important to clearly set SEO metadata such as the meta description, the page title, image alt attributes, and other meta tags for every page.
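Because the snapshot captures the rendered DOM, these values also need to be set per route at render time. The helper below is a hypothetical, framework-agnostic sketch of how a route's controller could do that once its data has loaded; the function name and the sample values are assumptions.

interface PageMetadata {
  title: string;
  description: string;
}

function setPageMetadata(meta: PageMetadata): void {
  // Update the document title.
  document.title = meta.title;

  // Update (or create) the meta description tag.
  let tag = document.querySelector('meta[name="description"]') as HTMLMetaElement | null;
  if (!tag) {
    tag = document.createElement('meta');
    tag.name = 'description';
    document.head.appendChild(tag);
  }
  tag.content = meta.description;
}

// Example: called when the profile route finishes loading.
setPageMetadata({
  title: 'John Smith | Example App',
  description: 'Profile page for John Smith on Example App.',
});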