How to send the result of a SQL statement to a for loop using PySpark?



























I am trying to send the result of a SQL query into a for loop. I am new to Spark and Python; please help.



    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext()
    hive_context = HiveContext(sc)

    # bank = hive_context.table("cip_utilities.file_upload_temp")
    data = hive_context.sql("select * from cip_utilities.cdm_variable_dict")

    # Register the table's schema (the output of DESCRIBE) as a temp table
    hive_context.sql("describe cip_utilities.cdm_variable_dict").registerTempTable("schema_def")
    temp_data = hive_context.sql("select * from schema_def")
    temp_data.show()

    # Names of all non-string columns
    data1 = hive_context.sql("select col_name from schema_def where data_type <> 'string'")
    data1.show()









python apache-spark pyspark pyspark-sql






asked Nov 20 '18 at 6:53 by Shankar Panda (edited Nov 20 '18 at 7:20)
























2 Answers

































• Use the DataFrame.collect() method, which gathers the result of the Spark SQL query from all executors into the driver.

• collect() returns a Python list, each element of which is a Spark Row.

• You can then iterate over that list in an ordinary for loop (see the sketch below).





          Code snippet:



    data1 = hive_context.sql("select col_name from schema_def where data_type <> 'string'")
    column_names_as_python_list_of_rows = data1.collect()





answered Nov 20 '18 at 8:34 by y2k-shubham
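As an illustration of the last bullet above, here is a minimal sketch of the for loop over the collected rows, reusing data1 from the snippet; each element is a pyspark.sql.Row whose fields can be accessed by name:

    # Sketch: iterate on the driver over the rows returned by collect().
    for row in data1.collect():
        col_name = row["col_name"]   # Row fields are accessible by name (or as row.col_name)
        print(col_name)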













































I think you need to ask yourself why you want to iterate over the data.

Are you doing an aggregation or transforming the data? If so, consider doing it with the Spark API.

Printing some text? If so, use .collect() to bring the data back to your driver process; then you can loop over the result in the usual Python way.






answered Nov 20 '18 at 8:40 by ThatDataGuy
























• Yes, I am trying to find the maximum, minimum, and standard deviation. That's why I need to send each column name in an iteration.
  – Shankar Panda Nov 20 '18 at 9:06











• You should use the built-in Spark functions to do that; it will be far more performant. spark.apache.org/docs/2.2.0/api/python/…
  – ThatDataGuy Nov 20 '18 at 14:28










