Setting Hadoop MapReduce memory size without mapred-site.xml



























I am running a MapReduce job on a server and keep getting this error:



Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Container is running beyond physical memory limits. Current usage: 1.0
GB of 1 GB physical memory used; 2.7 GB of 2.1 GB virtual memory used.
Killing container.


Of course I have read all the available resources, and I know I need to set the configuration in these files: mapred-site.xml and yarn-site.xml.



But our server doesn't let me overwrite those properties, so I am looking for a way to set them from the terminal or from within my Hadoop program's configuration.



When I was running this job with Hive, I was able to override these properties like this:



set HADOOP_HEAPSIZE=4096;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.map.memory.mb=4096;
set tez.am.resource.memory.mb=4096;
set yarn.app.mapreduce.am.resource.mb=4096;


But when I am writing a MapReduce program instead of a Hive query, how can I change these?



How can I export mapreduce.reduce.memory.mb in the shell, for example?










hadoop memory containers

asked Nov 19 '18 at 4:40 by Reihan_amn
2 Answers
































You may need to pass the configuration parameters on the command line, like this, to set them per application/job:

export HADOOP_HEAPSIZE=4096
hadoop jar YOUR_JAR.jar ClassName -Dmapreduce.reduce.memory.mb=4096 -Dmapreduce.map.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=4096 /input /output

Note: replace YOUR_JAR.jar with your jar file and ClassName with your driver class name.
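One caveat worth noting: the generic `-D` options are parsed only when the driver implements `Tool` and is launched through `ToolRunner.run()`; a driver that builds its `Job` from a bare `Configuration` will silently ignore them. If you launch the job with varying commands, you can also keep the overrides in a shell variable so they work independently of any particular command line. A minimal sketch (`YOUR_JAR.jar` and `ClassName` are the placeholders from above; `MEM_OPTS` is a name introduced here):

```shell
# Keep the per-job memory overrides in one variable so any launch command
# can reuse them; these are plain strings, so this works regardless of how
# the job is ultimately started.
export HADOOP_HEAPSIZE=4096
MEM_OPTS="-Dmapreduce.map.memory.mb=4096 -Dmapreduce.reduce.memory.mb=4096 -Dyarn.app.mapreduce.am.resource.mb=4096"

# Echo the final command first to verify the flags are in place.
echo hadoop jar YOUR_JAR.jar ClassName $MEM_OPTS /input /output
```

Echoing the command before running it is a cheap way to confirm the variable expanded as expected.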






answered Nov 19 '18 at 9:31 by User

Comment (Reihan_amn, Nov 20 '18 at 7:12): nice answer! Isn't it possible to set them outside the hadoop job command? (I ask because on the server I run my hadoop job with a different command, because of specific properties.)














I ended up solving this problem by setting the memory sizes in my MapReduce driver, like this (shown here as Java; the original snippet omitted the surrounding declarations):

// in the driver, before the Job is created from this Configuration
Configuration conf = new Configuration();
conf.set("mapreduce.map.memory.mb", "4096");
conf.set("mapreduce.reduce.memory.mb", "8192");
// mapred.child.java.opts is the deprecated name; newer releases use
// mapreduce.map.java.opts / mapreduce.reduce.java.opts
conf.set("mapred.child.java.opts", "-Xmx512m");
conf.set("tez.am.resource.memory.mb", "4096");
conf.set("yarn.app.mapreduce.am.resource.mb", "4096");

Be careful with the number you assign to the reducer. My mapper memory and reducer memory were originally set to the same capacity and I got an error; in my case they could not be the same.






answered Nov 28 '18 at 22:10 by Reihan_amn