Nginx: retry same endpoint on http_502 in Docker service discovery
We use Docker Swarm with service discovery for our backend REST application. The services in the swarm are configured with endpoint_mode: vip and run in global mode. Nginx proxy-passes requests to the service discovery aliases. When we update the backend services, nginx sometimes throws a 502 because service discovery may still point to the service that is being updated.
In such a case, we want to retry the same endpoint again. How can we achieve this?
Following this suggestion, we added an upstream with the host's private IP and used proxy_next_upstream error timeout http_502; but the problem still persists.
nginx.conf
upstream servers {
    server 192.168.1.2:443;        # private IP of the host machine
    server 192.168.1.2:443 backup;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    proxy_next_upstream http_502;

    location /endpoint1 {
        proxy_pass http://docker.service1:8080/endpoint1;
    }
    location /endpoint2 {
        proxy_pass http://docker.service2:8080/endpoint2;
    }
    location /endpoint3 {
        proxy_pass http://docker.service3:8080/endpoint3;
    }
}
Here, if http://docker.service1:8080/endpoint1 returns a 502, we want to hit http://docker.service1:8080/endpoint1 again.
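For reference, our understanding is that proxy_next_upstream only retries against other servers in the upstream group that proxy_pass actually points at, so a "retry the same endpoint" setup would have to route the location through a named upstream, roughly like the sketch below (illustrative names only, not our production config; the alias in the upstream is resolved once when nginx starts):

upstream service1_upstream {
    server docker.service1:8080;           # first attempt
    server docker.service1:8080 backup;    # same address again, only tried if the first attempt fails
}

server {
    # listen directives as above
    proxy_next_upstream error timeout http_502;

    location /endpoint1 {
        proxy_pass http://service1_upstream/endpoint1;
    }
}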
Additional queries:
- Is there any way in docker swarm to stop service discovery from pointing to an updating service until that service is fully up?
- Is the upstream necessary here, since we use docker service discovery directly?
docker nginx docker-swarm high-availability
1 Answer
I suggest you add a health check directly at the container level (here).
By doing so, Docker periodically pings an endpoint you specify; if a container is found unhealthy, Docker will 1) stop routing traffic to it and 2) kill the container and start a new one. Your upstream will therefore resolve to one of the healthy containers, so there is no need to retry.
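A minimal sketch of such a health check in a stack file (the /health path, image name, and timings are assumptions; use whatever endpoint your app actually exposes, and note that curl must be available inside the image):

version: "3.4"
services:
  service1:
    image: myorg/service1:latest          # hypothetical image name
    deploy:
      mode: global
      endpoint_mode: vip
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]   # hypothetical health endpoint
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s                   # grace period while the service boots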
As for your additional questions: for the first one, Docker won't start routing to a task until it is healthy. For the second, nginx is still useful to distribute traffic according to the endpoint URL. Personally, though, I don't think nginx + swarm vip mode is a great choice: the swarm load balancer is poorly documented, it doesn't support sticky sessions, and you can't have proxy-level health checks. I would use traefik instead; it has its own load balancer.
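If you do go that way, a rough Traefik (1.x) swarm-mode sketch could look like the following; the labels, network name, and path rule here are assumptions rather than a drop-in config:

version: "3.4"
services:
  traefik:
    image: traefik:1.7
    command:
      - --docker
      - --docker.swarmmode=true
      - --docker.watch=true
      - --docker.exposedbydefault=false
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    deploy:
      placement:
        constraints: [node.role == manager]

  service1:
    image: myorg/service1:latest          # hypothetical image name
    networks:
      - proxy
    deploy:
      mode: global
      labels:
        - traefik.enable=true
        - traefik.port=8080
        - traefik.frontend.rule=PathPrefix:/endpoint1
        - traefik.docker.network=proxy    # assumes an overlay network named "proxy"

networks:
  proxy:
    driver: overlay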
Can you help me understand how the health check is useful here? An explanation would help me and others follow what is happening. I'm trying to eliminate the upstream and do something in proxy_pass to retry the same URL; that way I avoid adding another whole flow.
– Mani, Nov 13 at 15:44
@Mani sorry, I forgot you were using vip mode; see the updated answer.
– Siyu, Nov 13 at 16:08
So I need to use traefik as the load balancer and redirect proxy_pass to it, right?
– Mani, Nov 13 at 16:30
Well, I'd suggest replacing nginx with traefik.
– Siyu, Nov 13 at 16:33
I figured it out: I added the Docker DNS (127.0.0.1) as a resolver with a 5s validity and removed the upstream. Now, each time we proxy_pass, nginx resolves the alias using Docker DNS. It's working fine. Thanks for all the help. I'll add this as an answer, as it might help someone else later.
– Mani, Dec 7 at 8:22
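For completeness, a minimal sketch of that resolver approach (Docker's embedded DNS normally listens on 127.0.0.11, which is presumably what the 127.0.0.1 above refers to; the 5s validity is taken from the comment and everything else is illustrative):

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    # Use Docker's embedded DNS and re-resolve names every 5 seconds
    resolver 127.0.0.11 valid=5s;

    location /endpoint1 {
        # Putting the alias in a variable forces nginx to resolve it at request
        # time (via the resolver above) instead of once at startup.
        set $service1 docker.service1;
        # $request_uri preserves the original /endpoint1 path and query string.
        proxy_pass http://$service1:8080$request_uri;
    }
}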