Databricks CSV cannot find local file












In a program I have a CSV file extracted from Excel, and I need to upload the CSV to HDFS and save it in Parquet format. Any Python or Spark version will do, but no Scala, please.

Almost all of the discussions I came across are about the Databricks CSV package; however, Spark seemingly cannot find the file. Here are the code and the error:



    df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true") \
        .option("inferSchema", "true").option("delimiter", ",") \
        .load("file:///home/rxie/csv_out/wamp.csv")


Error:




    java.io.FileNotFoundException: File file:/home/rxie/csv_out/wamp.csv does not exist




The file does exist at that path:



    ls -la /home/rxie/csv_out/wamp.csv
    -rw-r--r-- 1 rxie linuxusers 2896878 Nov 12 14:59 /home/rxie/csv_out/wamp.csv


Thank you.
Tags: csv, databricks






asked Nov 13 at 20:58 by mdivk
2 Answers




















I found the issue now!

The reason it errors out with "file not found" is actually correct: I was creating the SparkContext with setMaster("yarn-cluster"), which means the executors on all worker nodes look for the CSV file, and of course none of the worker nodes (except the one where the program starts and where the CSV resides) has this file, hence the error. What I really should do is use setMaster("local").

FIX:

    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SQLContext

    conf = SparkConf().setAppName('test').setMaster("local")
    sc = SparkContext(conf=conf)
    sqlContext = SQLContext(sc)
    csv = "file:///home/rxie/csv_out/wamp.csv"
    df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true") \
        .option("inferSchema", "true").option("delimiter", ",").load(csv)
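
Since the original goal was to land the data in HDFS as Parquet, a minimal sketch of the remaining write step might look like this, assuming the DataFrame above loaded correctly; the HDFS destination path is hypothetical, so adjust it to your cluster:

    # Hypothetical HDFS destination; replace with a path valid on your cluster.
    df.write.mode("overwrite").parquet("hdfs:///user/rxie/csv_out/wamp.parquet")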





answered Nov 14 at 2:18 by mdivk














Yes, you are right: the file would need to be present on all worker nodes. However, you can still read a local file in YARN cluster mode; you just need to distribute the file with addFile:

    spark.sparkContext.addFile("file:///your/local/file/path")

Spark will copy the file to each node where an executor is created, so the file can be processed in cluster mode as well. I am using Spark 2.3, so you may need to adjust your Spark context accordingly, but the addFile method stays the same.

Try this in YARN cluster mode and let me know if it works for you.
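
For reference, a minimal sketch of this pattern, assuming Spark 2.x with a SparkSession named spark and using the CSV path from the question; combining addFile with SparkFiles.get to locate the distributed copy is my own assumption, not something the answer spells out:

    from pyspark import SparkFiles
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

    # Ship the driver-local file to every node that runs an executor.
    spark.sparkContext.addFile("file:///home/rxie/csv_out/wamp.csv")

    # SparkFiles.get resolves the path of the distributed copy on a node.
    df = spark.read.option("header", "true").option("inferSchema", "true") \
        .csv("file://" + SparkFiles.get("wamp.csv"))

Note that SparkFiles resolves paths per node, so whether the driver-side path matches the executor-side copies can depend on your deployment; treat this as a sketch to verify, not a guaranteed recipe.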






answered Dec 12 at 4:38 by vikrant rana





















            • Thank you for your input, Vikrant.
              – mdivk
              Dec 12 at 14:12










• You're welcome. Let me know if it works for you.
  – vikrant rana
  Dec 12 at 14:28










• @mdivk, did you check using the addFile method? Is it working for you?
  – vikrant rana
  Dec 17 at 16:53










