how to query array of primary key values in dynamodb
I have one table in AWS DynamoDB with 1 million records. Is it possible to query an array of primary key values in one query, with an additional sort key condition? I am using Node.js for my server-side logic.



Here are the params:



var params = {
  TableName: "client_logs",
  KeyConditionExpression: "#accToken = :value AND ts BETWEEN :val1 AND :val2",
  ExpressionAttributeNames: {
    "#accToken": "acc_token"
  },
  ExpressionAttributeValues: {
    ":value": clientAccessToken,
    ":val1": parseInt(fromDate, 10),
    ":val2": parseInt(toDate, 10),
    ":status": confirmStatus
  },
  FilterExpression: "apiAction = :status"
};


Here acc_token is the partition (hash) key, and I want to query an array of acc_token values in one single query.
      node.js amazon-web-services amazon-dynamodb dynamodb-queries
      asked Nov 21 '18 at 16:08
Test Mail
          1 Answer
          No, it is not possible. A single query may search only one specific hash key value. (See DynamoDB – Query.)



          You can, however, execute multiple queries in parallel, which will have the effect you desire.
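In Node.js, that parallel fan-out could look like the sketch below, reusing the params from the question. The helper name `buildQueryParams`, and the use of the AWS SDK v2 `DocumentClient`, are illustrative assumptions:

```javascript
// A sketch of the parallel approach: build one Query per access token and
// run them concurrently. Table and attribute names are taken from the
// question; buildQueryParams and docClient are hypothetical names.
function buildQueryParams(accessToken, fromDate, toDate, confirmStatus) {
  return {
    TableName: "client_logs",
    KeyConditionExpression: "#accToken = :value AND ts BETWEEN :val1 AND :val2",
    ExpressionAttributeNames: { "#accToken": "acc_token" },
    ExpressionAttributeValues: {
      ":value": accessToken,
      ":val1": Number(fromDate),
      ":val2": Number(toDate),
      ":status": confirmStatus
    },
    FilterExpression: "apiAction = :status"
  };
}

// With an AWS SDK v2 DocumentClient in scope, the fan-out would be:
// const results = await Promise.all(
//   accessTokens.map(t =>
//     docClient.query(buildQueryParams(t, fromDate, toDate, status)).promise())
// );
// const items = results.flatMap(r => r.Items);
```

Keep in mind that each individual query may still need pagination via `LastEvaluatedKey` if its result set exceeds 1 MB.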



          Edit (2018-11-21)



Since you said there are 200+ hash keys that you are looking for, here are a few possible solutions. These solutions do not require unbounded, parallel calls to DynamoDB, but they will cost you more RCUs. They may be faster or slower than the parallel-query approach, depending on the distribution of data in your table.



          I don't know the distribution of your data, so I can't say which one is best for you. In all cases, we can't use acc_token as the sort key of the GSI because you can't use the IN operator in a KeyConditionExpression. (See DynamoDB – Condition.)



          Solution 1



This strategy is based on the AWS-documented pattern Global Secondary Index Write Sharding for Selective Table Queries.



          Steps:




          1. Add a new attribute to items that you write to your table. This new attribute can be a number or string. Let's call it index_partition.

          2. When you write a new item to your table, give it a random value from 0 to N for index_partition. (Here, N is some arbitrary constant of your choice. 9 is probably an okay value to start with.)

          3. Create a GSI with hash key of index_partition and a sort key of ts. You will need to project apiAction and acc_token to the GSI.

          4. Now, you only need to execute N queries. Use a key condition expression of index_partition = :n AND ts between :val1 and :val2 and a filter expression of apiAction = :status AND acc_token in :acc_token_list
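A sketch of step 4, assuming a hypothetical GSI named `index_partition-ts-index`. Note that DynamoDB's `IN` operator needs one placeholder per value, so the token list has to be expanded (and, with 200+ tokens, keep an eye on the 4 KB expression size limit):

```javascript
const NUM_SHARDS = 10; // shards 0..9, matching N = 9 above

function buildShardQuery(shard, fromDate, toDate, confirmStatus, accessTokens) {
  const values = {
    ":n": shard,
    ":val1": fromDate,
    ":val2": toDate,
    ":status": confirmStatus
  };
  // IN needs one placeholder per token: acc_token IN (:tok0, :tok1, ...)
  const placeholders = accessTokens.map((token, i) => {
    values[`:tok${i}`] = token;
    return `:tok${i}`;
  });
  return {
    TableName: "client_logs",
    IndexName: "index_partition-ts-index", // assumed GSI name
    KeyConditionExpression: "index_partition = :n AND ts BETWEEN :val1 AND :val2",
    FilterExpression:
      `apiAction = :status AND acc_token IN (${placeholders.join(", ")})`,
    ExpressionAttributeValues: values
  };
}

// Fan out one query per shard:
// [...Array(NUM_SHARDS).keys()].map(s =>
//   buildShardQuery(s, fromDate, toDate, status, accessTokens))
```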


          Solution 2



          This solution is similar to the last, but instead of using random GSI sharding, we'll use a date based partition for the GSI.



          Steps:




          1. Add a new string attribute to items that you write to your table. Let's call it ts_ymd.

          2. When you write a new item to your table, use just the yyyy-mm-dd part of ts to set the value of ts_ymd. (You could use any granularity you like. It depends on your typical query range for ts. If :val1 and :val2 are typically only an hour apart from each other, then a suitable GSI partition key could be yyyy-mm-dd-hh.)

          3. Create a GSI with hash key of ts_ymd and a sort key of ts. You will need to project apiAction and acc_token to the GSI.

          4. Assuming you went with yyyy-mm-dd for your GSI partition key, you only need to execute one query for every day that is within :val1 and :val2. Use a key condition expression of ts_ymd = :ymd AND ts between :val1 and :val2 and a filter expression of apiAction = :status AND acc_token in :acc_token_list
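The day partitions for step 4 can be enumerated from the ts range. This sketch assumes ts is in epoch milliseconds (adjust if it is seconds) and uses UTC calendar days:

```javascript
function toYmd(tsMillis) {
  // "2018-11-21T16:08:00.000Z" -> "2018-11-21"
  return new Date(tsMillis).toISOString().slice(0, 10);
}

function dayPartitions(fromMillis, toMillis) {
  const DAY = 24 * 60 * 60 * 1000;
  const days = [];
  // UTC days have no DST, so stepping by a fixed 24h never skips a date.
  for (let t = fromMillis; toYmd(t) <= toYmd(toMillis); t += DAY) {
    days.push(toYmd(t));
  }
  return days;
}

// Each day then gets one Query against the assumed GSI "ts_ymd-ts-index":
//   KeyConditionExpression: "ts_ymd = :ymd AND ts BETWEEN :val1 AND :val2"
```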


          Solution 3



          I don't know how many different values of apiAction there are and how those values are distributed, but if there are more than a few, and they have approximately equal distribution, you could partition a GSI based on that value. The more possible values you have for apiAction, the better this solution is for you. The limiting factor here is that you need to have enough values that you won't run into the 10GB partition limit for your GSI.



          Steps:




          1. Create a GSI with hash key of apiAction and a sort key of ts. You will need to project acc_token to the GSI.

2. You only need to execute one query. Use a key condition expression of apiAction = :status AND ts between :val1 and :val2 and a filter expression of acc_token in :acc_token_list.
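A sketch of Solution 3's single query, assuming a hypothetical GSI named `apiAction-ts-index` and again expanding the token list into one `IN` placeholder per value:

```javascript
function buildActionQuery(confirmStatus, fromDate, toDate, accessTokens) {
  const values = { ":status": confirmStatus, ":val1": fromDate, ":val2": toDate };
  // acc_token IN (...) requires an explicit placeholder per token.
  const placeholders = accessTokens.map((token, i) => {
    values[`:tok${i}`] = token;
    return `:tok${i}`;
  });
  return {
    TableName: "client_logs",
    IndexName: "apiAction-ts-index", // assumed GSI name
    KeyConditionExpression: "apiAction = :status AND ts BETWEEN :val1 AND :val2",
    FilterExpression: `acc_token IN (${placeholders.join(", ")})`,
    ExpressionAttributeValues: values
  };
}
```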


For all of these solutions, you should consider how evenly the GSI partition key will be distributed, and the size of the typical range for ts in your query. You must use a filter expression on acc_token, so you should try to pick a solution that minimizes the total number of items that will match your key condition expression. At the same time, be aware that you can't have more than 10 GB of data for one partition key (for the table or for a GSI), and remember that a GSI can only be queried as an eventually consistent read.
• But I have close to 200 items in my array, and the number may increase in the future. I don't think querying 200+ times is the right approach. Please suggest another way to do this, if there is one.

            – Test Mail
            Nov 21 '18 at 21:20











          • Are these 200 keys always the same?

            – Matthew Pope
            Nov 21 '18 at 21:23











          • Or would it be acceptable to query without the between function? If that’s okay, or if you’re okay with using filter expressions, then a solution is possible using a Global Secondary Index.

            – Matthew Pope
            Nov 21 '18 at 21:32











• Thanks for your reply. Yes, those 200 keys are always the same, but new keys do get added over time. Unfortunately, the developers who did the initial development did not create any indexes, and the table is now close to one million records. Is there any way I can add indexes now, or does DynamoDB have a feature to copy a table to another table within the same region? That way I could run these experiments on a new table instead of on production data.

            – Test Mail
            Nov 22 '18 at 4:11













• I know we can use Data Pipeline and S3, but can we do the copy within DynamoDB itself, without using any other service, to save cost? Please advise.

            – Test Mail
            Nov 22 '18 at 4:28
          edited Nov 21 '18 at 22:20

























          answered Nov 21 '18 at 19:51









          Matthew PopeMatthew Pope

          2,1621816




          2,1621816













          • But I have close to 200 items in my array, and the number may increase in the future. I don't think it is a good approach to query 200+ times. Please suggest another way to do this.

            – Test Mail
            Nov 21 '18 at 21:20

          • Are these 200 keys always the same?

            – Matthew Pope
            Nov 21 '18 at 21:23

          • Or would it be acceptable to query without the between function? If that’s okay, or if you’re okay with using filter expressions, then a solution is possible using a Global Secondary Index.

            – Matthew Pope
            Nov 21 '18 at 21:32

          • Thanks for your reply. Yes, those 200 keys are always the same, but new keys get added over time. Unfortunately, the developers who did the initial development did not create any indexes, and the table is now close to one million records. Is there any way I can add indexes now, or does DynamoDB have a feature to copy a table to another table within the region, so that I can experiment on a new table instead of the production data?

            – Test Mail
            Nov 22 '18 at 4:11

          • I know we can use Data Pipeline and S3, but without using any other service, can we do the copy in DynamoDB itself to save cost? Please advise.

            – Test Mail
            Nov 22 '18 at 4:28


















