How do I make my AJAX content crawlable by Google?
I've been working on a site that uses jQuery heavily and loads in content via AJAX like so:
$('#newPageWrapper').load(newPath + ' .pageWrapper', function() {
    // on load logic
});
It has now come to my attention that Google won't index content that is loaded dynamically via JavaScript, so I've been looking for a solution to the problem.
I've read through Google's Making AJAX Applications Crawlable document what seems like 100 times and I still don't understand how to implement it (due in large part to my limited knowledge of servers).
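As far as I can tell, the core of the scheme is just a URL mapping: when the crawler sees a 'hash-bang' URL such as http://example.com/#!about, it requests http://example.com/?_escaped_fragment_=about instead, and the server is expected to respond with an HTML snapshot of the fully rendered page. It's the server side of that exchange that I can't work out.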
So my first question would be: is there an existing plugin or service out there that already handles this?
And secondly, if there isn't anything out there yet, would anyone be able to explain:
How to 'Set up my server to handle requests for URLs that contain _escaped_fragment_'
How to implement HtmlUnit on my server to create an 'HTML snapshot' of the page to show to the crawler.
I would be incredibly grateful if someone could shed some light on this for me, thanks in advance!
-Ben
The best solution is to build a site that works both with and without JavaScript. Read up on progressive enhancement.
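A minimal sketch of the idea, assuming each URL (a hypothetical /about.php here) already returns a complete server-rendered page, which the OP's .load(newPath + ' .pageWrapper') call implies, since it extracts a fragment from a full page. Real links work without JavaScript, and jQuery enhances them when it's available; the ajax-link class and file name are made up for illustration:

<?php /* about.php: a page that renders its full content on the server */ ?>
<a href="/about.php" class="ajax-link">About</a>
<div id="newPageWrapper">
  <div class="pageWrapper">
    <?php /* full page content output here, visible to crawlers and non-JS visitors */ ?>
  </div>
</div>
<script>
// Enhancement layer: intercept clicks only when JavaScript is running;
// without JS the link is a normal, crawlable navigation.
$(document).on('click', 'a.ajax-link', function(e) {
  e.preventDefault();
  $('#newPageWrapper').load(this.href + ' .pageWrapper');
});
</script>

The point is that every URL is a real, crawlable page first, and the AJAX behaviour is layered on top.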
I couldn't find an alternative, so I took epascarello's advice: I now generate the content with PHP when the URL includes '_escaped_fragment_' (which it will when a crawler visits).
For those searching:
<?php
if (isset($_GET['_escaped_fragment_'])) {
    // A crawler has requested this page as ?_escaped_fragment_=<id>
    $newID = $_GET['_escaped_fragment_'];
    // Generate page here
}
?>
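To flesh out the 'Generate page here' part, here is one way it could look, assuming content lives in per-ID include files (the pages/ directory and the ID whitelist are my own invention, not part of the original answer). The fragment value comes straight from the request, so validate it before use:

<?php
if (isset($_GET['_escaped_fragment_'])) {
    $newID = $_GET['_escaped_fragment_'];

    // Never trust the raw fragment: whitelist the IDs you actually serve.
    $pages = array('home', 'about', 'contact');
    if (in_array($newID, $pages, true)) {
        // Output the same markup the AJAX call would have injected,
        // so the crawler sees what a JS-enabled visitor would see.
        include 'pages/' . $newID . '.php';
    } else {
        header('HTTP/1.0 404 Not Found');
    }
    exit;
}
?>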
These days this problem is typically solved by a service that implements Google's Making AJAX Applications Crawlable scheme at the web-server level, so you no longer have to build it yourself.
I work for one of these companies: https://ajaxsnapshots.com (there are others)
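For anyone wondering what 'web server level' means in practice: such services typically rely on a rewrite rule that spots the _escaped_fragment_ parameter and routes the request to a snapshot renderer. A rough Apache sketch, where snapshot.php is a placeholder for whatever script renders or proxies the snapshot (the exact hook varies by service):

RewriteEngine On
# Googlebot rewrites #! URLs to ?_escaped_fragment_=..., so catch those here
RewriteCond %{QUERY_STRING} _escaped_fragment_=
RewriteRule ^ /snapshot.php [L]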