spark - how to know which executor failed during Job execution and avoid them?
Background:

I'm running a Spark job on a huge, heavily loaded cluster that constantly has ill-state nodes: they accept tasks and respond to the driver's heartbeat, but don't actually make progress, so tasks take forever and may finally fail, forcing the driver to re-submit them somewhere else.

What I did to deal with the ill-state nodes:

I set spark.blacklist.enabled to true so that a re-submitted task goes somewhere else (and then, in the blink of an eye, the job finished). However, as I found in the log, the blacklist only applies within a single stage:

Blacklisting executor 28 for stage 0

so the next stage will certainly try the ill node again, and there's a high chance it still hasn't recovered. I just hit a case where a node kept failing tasks for 48 hours, about 180 times, before finally killing itself:

18/11/11 19:47:26 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1534870268016_1640615_01_000051 on host: ill-datanode. Exit status: -100. Diagnostics: Container released on a *lost* node

An executor like this drags the whole application's performance down.
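For reference, here is what I've found in the Spark configuration docs so far: since Spark 2.2 the blacklisting feature also has application-level settings and an option to kill blacklisted executors outright, which might extend the per-stage behavior above. A sketch of the relevant properties (names per the docs; the thresholds are my own guesses and the exact set may vary by version):

```
# spark-defaults.conf (or equivalent --conf flags), Spark 2.2+
spark.blacklist.enabled                                true
# blacklist an executor for the entire application after this many failed tasks
spark.blacklist.application.maxFailedTasksPerExecutor  2
# blacklist a whole node for the application once this many of its executors are blacklisted
spark.blacklist.application.maxFailedExecutorsPerNode  2
# how long an executor/node stays blacklisted before being tried again
spark.blacklist.timeout                                1h
# actively kill blacklisted executors instead of just avoiding them
spark.blacklist.killBlacklistedExecutors               true
```

But this still relies on the blacklist heuristics rather than letting me target a known-bad executor directly.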
So I came up with plan B: kill it myself.

I found there are two functions for managing executors: SparkSession.sparkContext.killExecutor(executorId: String) and requestExecutors(numAdditionalExecutors: Int). But to remove an executor with killExecutor, I must know which executors failed during the last job.

How can I do that?
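My current sketch for plan B is to register a SparkListener and watch onTaskEnd for failed tasks: the TaskInfo carries the executorId, so I can count failures per executor and call killExecutor once a threshold is crossed. The class name and threshold below are my own inventions, and killExecutor is a developer API that (as I understand it) is only honored by coarse-grained backends such as YARN, so treat this as a sketch rather than a confirmed solution:

```scala
import scala.collection.mutable
import org.apache.spark.{SparkContext, TaskFailedReason}
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Hypothetical helper: counts task failures per executor and asks the
// cluster manager to kill an executor once it fails too many tasks.
class FailingExecutorKiller(sc: SparkContext, maxFailures: Int = 3)
    extends SparkListener {
  private val failures = mutable.Map.empty[String, Int].withDefaultValue(0)

  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit =
    taskEnd.reason match {
      case _: TaskFailedReason =>
        val execId = taskEnd.taskInfo.executorId
        failures(execId) += 1
        if (failures(execId) >= maxFailures) {
          // request removal of this executor; a replacement should be
          // requested by dynamic allocation or via requestExecutors
          sc.killExecutor(execId)
          failures.remove(execId)
        }
      case _ => // task succeeded; nothing to do
    }
}

// registration, e.g. right after creating the session:
// spark.sparkContext.addSparkListener(new FailingExecutorKiller(spark.sparkContext))
```

Is something along these lines the right way to find the failing executors, or is there a built-in mechanism I'm missing?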
apache-spark yarn
asked Nov 11 at 13:54
skywalkerytx