How to improve SEO for Serverless Websites?
I want to improve SEO (i.e., correctly index my pages on search engines) in a serverless architecture where my website is hosted on AWS S3.
As I'm using a JavaScript approach to routing (something akin to Angular, but simpler) and fetching dynamic content to fill meta tags, everything is quite troublesome for scrapers without JavaScript support, like Facebook's.
I have default meta tags already inserted and those are, of course, loaded just fine, but I need the updated ones.
I know most people use pre-rendering on a server or through something like Prerender.io, but I really wanted to find an alternative that makes sense in a serverless approach.
I thought I had it figured out, since Open Graph meta tags allow for a "pointer" URL where you can have a "meta-tags-only" HTML page if needed. So I was thinking of using a Lambda function to generate the HTML response with the right meta tags on a GET request. The problem is: since the Facebook scraper has no JavaScript support, how can I send the dynamic content on the GET request?
javascript facebook amazon-s3 seo serverless-framework
asked Nov 20 '16 at 22:11 by João Moreira
edited Nov 21 '16 at 1:56 by Zanon
3 Answers
If you are using S3, you must prerender the pages before uploading them. You can't call Lambda functions on the fly because the crawler will not execute JavaScript. You can't even use Prerender.io with S3.
Suggestion:
- Host your website locally.
- Use PhantomJS to fetch the pages and write a prerendered version.
- Upload each page to S3 following the page address*.
* E.g.: the address example.com/about/us must be mapped to a us.html file inside an about folder in your bucket root.
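The path mapping from that last step can be sketched in Node.js (the route list below is hypothetical; the actual upload would use the AWS CLI or SDK):

```javascript
// Sketch: map SPA routes to S3 object keys for the prerendered pages.
// "/about/us" becomes the key "about/us.html", so S3 can serve it for
// example.com/about/us (assuming the bucket's static hosting setup).
function routeToS3Key(route) {
  // Strip leading/trailing slashes; the root route maps to index.html.
  const trimmed = route.replace(/^\/+|\/+$/g, "");
  return trimmed === "" ? "index.html" : trimmed + ".html";
}

// Hypothetical route list; yours would come from your router config.
const routes = ["/", "/about/us", "/products/42"];
routes.forEach(function (r) {
  console.log(r + " -> " + routeToS3Key(r));
});
```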
Now, your users and the crawlers will see exactly the same pages, without needing JavaScript to load the initial state. The difference is that with JavaScript enabled, your framework (Angular?) will load the JS dependencies (routes, services, etc.) and take control like a normal SPA. When the user clicks to browse another page, the SPA will reload the inner content without a full page reload.
Pros:
- Easy to set up.
- Very fast to serve content. You can also use CloudFront to improve the speed.
Cons:
- If you have 1,000 pages (e.g., 1,000 products that you sell in your store), you need to make 1,000 prerendered pages.
- If your page data changes frequently, you need to prerender frequently.
- Sometimes the crawler will index old content*.
* The crawler will see the old content, but the user will probably see the current content as the SPA framework will take control of the page and load the inner content again.
You said that you are using S3. If you want to prerender on the fly, you can't use S3. You need to use the following:
Route 53 => CloudFront => API Gateway => Lambda
Configure:
- Set the API Gateway endpoint as the CloudFront origin.
- Use "HTTPS Only" as the "Origin Protocol Policy" (CloudFront).
- The Lambda function must use proxy integration.
In this case, your Lambda function will know the requested address and will be able to correctly render the requested HTML page.
Pros:
- As Lambda has access to the database, the rendered page will always be updated.
Cons:
- Much slower to load the webpages.
answered Nov 21 '16 at 1:46 by Zanon
Wow very detailed answer! Thank you very much! If none other comes I'll definitely "give up" and take your suggestion. Thank you so much for your time.
– João Moreira
Nov 21 '16 at 10:00
If you are willing to use CloudFront on top of your S3 bucket, there is a new way to solve your problem with prerendering on the fly. Lambda@Edge is a new feature that allows code to be executed with low latency when a page is requested. With it, you can check whether the agent is a crawler and prerender the page for it.
01 Dec 2016 announcement: Lambda@Edge – Preview
Just last week, a comment that I made on Hacker News resulted in an
interesting email from an AWS customer!
(...)
Here’s how he explained his problem to me:
In order to properly get indexed by search engines and in order for
previews of our content to show up correctly within Facebook and
Twitter, we need to serve a prerendered version of each of our pages.
In order to do this, every time a normal user hits our site need for
them to be served our normal front end from Cloudfront. But if the
user agent matches Google / Facebook / Twitter etc., we need to
instead redirect them the prerendered version of the site.
Without spilling any beans I let him know that we were very aware of
this use case and that we had some interesting solutions in the works.
Other customers have also let us know that they want to customize
their end user experience by making quick decisions out at the edge.
This feature is currently in preview mode (Dec 2016), but you can ask AWS to experiment with it.
answered Dec 4 '16 at 2:41 by Zanon
Here's a solution that uses (and is approved by) prerender.cloud: https://github.com/sanfrancesco/prerendercloud-lambda-edge
This uses Lambda@Edge to prerender your app via a make deploy command.
Taken from the repo's README:
Server-side rendering (pre-rendering) via Lambda@Edge for single-page apps hosted on CloudFront with an s3 origin.
This is a serverless project with a make deploy command that:
- serverless.yml deploys 3 functions to Lambda (viewerRequest, originRequest, originResponse)
- deploy.js associates them with your CloudFront distribution
- create-invalidation.js clears/invalidates your CloudFront cache
answered Nov 22 '18 at 11:40 by Ben Inada (edited by ayaio)
Here's a solution that uses (and is approved by) prerender.cloud: https://github.com/sanfrancesco/prerendercloud-lambda-edge
This uses Lambda@Edge to prerender your app via a make deploy
command.
Taken from the repo's README:
Server-side rendering (pre-rendering) via Lambda@Edge for single-page apps hosted on CloudFront with an s3 origin.
This is a serverless project with a make deploy command that:
- serverless.yml deploys 3 functions to Lambda (viewerRequest, originRequest, originResponse)
- deploy.js associates them with your CloudFront distribution
- create-invalidation.js clears/invalidates your CloudFront cache