Reading from Redshift into Spark Dataframe (Spark-Redshift Module)

I'm following the spark-redshift tutorial to read from Redshift into a Spark DataFrame on Databricks. I have the following code:



val tempDir = "s3n://{my-s3-bucket-here}"



val jdbcUsername = "usernameExample"
val jdbcPassword = "samplePassword"
val jdbcHostname = "redshift.companyname.xyz"
val jdbcPort = 9293
val jdbcDatabase = "database"
val jdbcUrl = "sampleURL"
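
(Note: "sampleURL" is a placeholder. The spark-redshift README assembles the JDBC URL from the values above, so presumably it stands in for something like this sketch:)

// Hypothetical sketch only: the URL format the spark-redshift README documents,
// built from the placeholder values above; "sampleURL" masks the real string.
val jdbcUrl = s"jdbc:redshift://$jdbcHostname:$jdbcPort/$jdbcDatabase?user=$jdbcUsername&password=$jdbcPassword"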


sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "SAMPLEAWSKEY")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "SECRETKEYHERE")

val subs_dim = sqlContext.read
  .format("com.databricks.spark.redshift")
  .option("url", jdbcUrl)
  .option("tempdir", tempDir)
  .option("dbtable", "example.exampledb")
  .load()


Now, when I attempt to run this, I get:



java.lang.IllegalArgumentException: requirement failed: You must specify a method for authenticating Redshift's connection to S3 (aws_iam_role, forward_spark_s3_credentials, or temporary_aws_*. For a discussion of the differences between these options, please see the README.


I'm a bit confused, since I have set the AWS access key ID via sc.hadoopConfiguration.set. I'm new at my company, so I'm wondering: is the AWS key wrong, or am I missing something else?



Thanks!

scala apache-spark jdbc amazon-redshift

asked Nov 20 '18 at 19:47 by DataScienceAmateur

  • Did you read the README? Did it shed any light? – erip, Nov 20 '18 at 19:50

  • Yeah, I checked it out; it said to set the AWS credentials... which I did? – DataScienceAmateur, Nov 20 '18 at 19:51
1 Answer
I think the reason is that the S3 credentials are not being passed to the Redshift connection, because you have not set forward_spark_s3_credentials.



Add the option below to your call:



option("forward_spark_s3_credentials", "true")


Refer to the documentation snippet below:




Forward Spark's S3 credentials to Redshift: if the forward_spark_s3_credentials option is set to true then this library will automatically discover the credentials that Spark is using to connect to S3 and will forward those credentials to Redshift over JDBC.
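
Putting it together, here is a sketch of the full read call with this option added, reusing the placeholders from the question:

val subs_dim = sqlContext.read
  .format("com.databricks.spark.redshift")
  .option("url", jdbcUrl)
  .option("tempdir", tempDir)
  .option("dbtable", "example.exampledb")
  // Forward the keys set via sc.hadoopConfiguration to Redshift over JDBC.
  .option("forward_spark_s3_credentials", "true")
  .load()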

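As the error message notes, aws_iam_role is an alternative: if the Redshift cluster has an IAM role attached that can read and write the tempdir bucket, you can pass that role's ARN instead of forwarding keys. A sketch, with a hypothetical ARN:

val subs_dim = sqlContext.read
  .format("com.databricks.spark.redshift")
  .option("url", jdbcUrl)
  .option("tempdir", tempDir)
  .option("dbtable", "example.exampledb")
  // Hypothetical role ARN; the role must be associated with the cluster.
  .option("aws_iam_role", "arn:aws:iam::123456789012:role/redshift-s3-access")
  .load()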



Hope it helps!
answered Nov 20 '18 at 21:24 by Red Boy