Setting Hadoop MapReduce memory size without mapred-site.xml
I am running a MapReduce job on a server and keep getting this error:

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Container is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used. Killing container.

I have read all the resources I could find, and I know I need to set these properties in mapred-site.xml and yarn-site.xml. But our server doesn't let me overwrite those files, so I need a way to do it from the terminal or in the configuration of my Hadoop program.

When I ran this job with Hive, I was able to override the properties like this:

set HADOOP_HEAPSIZE=4096;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.map.memory.mb=4096;
set tez.am.resource.memory.mb=4096;
set yarn.app.mapreduce.am.resource.mb=4096;

But when I am writing a MapReduce program instead of a Hive query, how can I change these? How can I export mapreduce.reduce.memory.mb in the shell, for example?
hadoop memory containers
asked Nov 19 '18 at 4:40 by Reihan_amn
2 Answers
You can pass the configuration for each application/job on the command line with -D flags:

export HADOOP_HEAPSIZE=4096
hadoop jar YOUR_JAR.jar ClassName -Dmapreduce.reduce.memory.mb=4096 -Dmapreduce.map.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=4096 /input /output

Note: replace YOUR_JAR.jar with your jar and ClassName with your driver class name. The -D generic options take effect only if the driver parses them (e.g. by running through ToolRunner/GenericOptionsParser).

answered Nov 19 '18 at 9:31 by User
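If the driver parses generic options (again, e.g. via ToolRunner), an alternative to repeating -D flags is to keep the per-job overrides in a local XML file and pass it with the -conf generic option. This never touches the cluster's mapred-site.xml; the filename below is arbitrary, and the values mirror the ones in this answer:

```xml
<!-- job-memory.xml: per-job overrides, passed with -conf -->
<configuration>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>4096</value>
  </property>
</configuration>
```

Then run the job as `hadoop jar YOUR_JAR.jar ClassName -conf job-memory.xml /input /output`.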
1 – nice answer! But isn't it possible to set them outside the hadoop job command? (I'm asking because on the server I run my Hadoop job with a different command, due to specific properties) – Reihan_amn, Nov 20 '18 at 7:12
I ended up solving this problem by setting the memory sizes in my MapReduce program itself, like this:

conf.set("mapreduce.map.memory.mb", "4096")
conf.set("mapreduce.reduce.memory.mb", "8192")
conf.set("mapred.child.java.opts", "-Xmx512m")
conf.set("tez.am.resource.memory.mb", "4096")
conf.set("yarn.app.mapreduce.am.resource.mb", "4096")

Be careful with the number you assign to the reducer. When my mapper memory and reducer memory had the same capacity, I got errors; they shouldn't be the same.

answered Nov 28 '18 at 22:10 by Reihan_amn
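One way to make the sizing in this answer concrete: a common rule of thumb (an assumption here, not something either answer states) is to set the JVM heap (the -Xmx in mapred.child.java.opts) to roughly 80% of the YARN container size, so the JVM's non-heap memory still fits before YARN kills the container with exit code 143. A small sketch; the helper name is made up:

```python
def container_heap_opt(container_mb: int, heap_fraction: float = 0.8) -> str:
    """Size the JVM -Xmx to a fraction of the YARN container size,
    leaving headroom for non-heap memory (metaspace, threads, buffers)."""
    heap_mb = int(container_mb * heap_fraction)
    return f"-Xmx{heap_mb}m"

# Container sizes taken from the conf.set() calls above.
print(container_heap_opt(4096))  # -Xmx3276m for the 4 GB mappers
print(container_heap_opt(8192))  # -Xmx6553m for the 8 GB reducers
```

By this rule, the error in the question comes from a heap sized too close to (or above) the 1 GB container limit; conversely, -Xmx512m inside a 4096 MB container is safe but wastes most of the reserved memory.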