AppEngine - What's the timeout for Node cloud tasks handlers?

I have an application that does some work in the background, using the default Cloud Tasks queue for scheduling and executing the process.



I would like the job to be able to run for a few minutes, or at least understand what the actual limitations are and what I can do about them.



According to the docs on Push Queues (which seem to be equivalent to the modern Cloud Tasks?), the deadline is 10 minutes for auto-scaling and 24 hours for basic scaling.



However, my job seems to crash after 2 minutes: 115 seconds is fine, 121 seconds is a crash. The workload and resource consumption are the same in all cases. The message is always the unhelpful "The process handling this request unexpectedly died. This is likely to cause a new process to be used for the next request to your application. (Error code 203)".



It does not matter whether I use an auto-scaling F2 instance or a basic-scaling B2 instance; it gets terminated after 2 minutes.



According to the docs on Node request handling, there is a 60-second timeout for "request handlers".



What is the timeout in the end? Is it 1 minute, 2 minutes, or 10 minutes? Is there anything I can do to change it if I want my job to run for 5 or 30 minutes?
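For reference, this is roughly the basic-scaling configuration I tested (a sketch, not my exact app.yaml; the service name is made up):

```yaml
# Sketch of the basic-scaling setup (not the exact deployed file).
runtime: nodejs8
service: worker        # hypothetical service name
instance_class: B2
basic_scaling:
  max_instances: 1
  idle_timeout: 10m
```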

  • The cloud tasks specific timeout notes are here, aligned with your first reference (and apparently independent of the language sandbox). But indeed the info appears conflicting with the 2nd reference. Maybe some 1st/2nd generation standard env differences not taken into consideration somewhere? And anyways none matching the 120 sec observed :)

    – Dan Cornilescu
    Nov 16 '18 at 12:04













  • @DanCornilescu The question is, am I indeed restricted by the AppEngine timeout (regardless of whether it's 60 or 120s)? What can I do to implement a long-running job, besides spinning up a flexible worker?

    – Konrad Garus
    Nov 16 '18 at 12:48











  • You need to figure out what exactly the crash is caused by - it might not be the request deadline itself (there are several types of deadline errors, see cloud.google.com/appengine/articles/deadlineexceedederrors, though I don't know how that maps to Node). Or it might be something else.

    – Dan Cornilescu
    Nov 16 '18 at 14:21













  • Yeah. I've already sunk hours on research and experiments, StackOverflow here is part of that. The question is: Is cloud task execution subject to the same request handling timeout? Any way around it? Am I just doing something wrong?

    – Konrad Garus
    Nov 16 '18 at 14:57

google-app-engine google-cloud-platform google-appengine-node

asked Nov 16 '18 at 11:40

Konrad Garus

38.3k

1 Answer

In short: I think the most likely culprit in your scenario is Node's request timeout, which defaults to exactly 2 minutes.





In long: after reading your question, I decided to build a proof of concept out of it.




  1. I created a dummy Node 8 service that only uses the built-in HTTP server.

  2. I added a URL path that produces an artificially long response (using setTimeout), with the duration taken from the request (e.g. /lr/300 means it responds after approximately 5 minutes).

  3. I deployed it to a GAE service other than default (Node 8, automatic scaling).

  4. I created a Cloud Tasks task that requests /lr/540 on that service.


Before: (screenshot in the original answer)



As you can see, Cloud Tasks and App Engine fail to wait longer than 2 minutes, and produce the same unhelpful message you got ("The process handling this request unexpectedly died...").



And then: (code screenshot in the original answer)

I wrote a line there in order to increase the global request timeout.

And the result: (screenshot in the original answer)



In my case, I can safely say that it's Node's request timeout that causes the problem. I hope this is of use to you too.

        edited Jan 3 at 12:30

        answered Jan 3 at 11:45

        Thammachart

        13