Efficiently add large amounts of data to Azure Table Storage asynchronously
I am trying to optimise an operation where I insert several tens of thousands of Foos into an Azure table.
Currently the method looks as follows:
public void AddBulk(IReadOnlyList<Foo> foos)
{
    var parallelOptions = new ParallelOptions() { MaxDegreeOfParallelism = 4 };
    Parallel.ForEach(foos.GroupBy(x => x.QueryingId), parallelOptions, groupedFoos =>
    {
        var threadTable = Table;
        foreach (var chunkedAmounts in groupedFoos.ToList().Chunk(100))
        {
            var batchOperation = new TableBatchOperation();
            foreach (var amount in chunkedAmounts)
            {
                // Echo content off. This further reduces bandwidth usage by turning off the
                // echo of the payload in the response during entity insertion.
                batchOperation.Insert(new FooTableEntity(amount), false);
            }

            // Exponential retry policies are good for batching procedures, background tasks,
            // or non-interactive scenarios. In these scenarios, you can typically allow more
            // time for the service to recover—with a consequently increased chance of the
            // operation eventually succeeding. Attempt delays: ~3s, ~7s, ~15s, ...
            threadTable.ExecuteBatchAsync(batchOperation, new TableRequestOptions()
            {
                RetryPolicy = new ExponentialRetry(TimeSpan.FromMilliseconds(deltaBackoffMilliseconds), maxRetryAttempts),
                MaximumExecutionTime = TimeSpan.FromSeconds(maxExecutionTimeSeconds),
            }, DefaultOperationContext);
        }
    });
}
I have upgraded the method to the .NET Core libraries, which do not support sync over async APIs. As such, I'm re-evaluating the add method and converting it to async.
The author of this method manually grouped the foos by the id that is used for the partition key, manually chunked them into batches of 100, and then uploaded them with 4× parallelism. I'm surprised this would be better than some built-in Azure operation.
What is the most efficient way of uploading say 100 000 rows (each consisting of 2 guids, 2 strings, a timestamp and an int) to Azure table storage?
c# azure asynchronous bulkinsert azure-table-storage
asked Nov 22 '18 at 10:09 by Ivan
Well, do not use Parallel.ForEach, as it does not accept a Func, only an Action, and so is not suitable for async/await operations. It is for CPU-bound work, not I/O. Since what you are doing is mainly I/O, I would use await Task.WhenAll(...).
– Peter Bons
Nov 22 '18 at 10:43
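Following up on that comment, here is a minimal sketch of what the async conversion could look like: it keeps the same grouping by partition key and 100-entity chunking, caps concurrency with a SemaphoreSlim (the value 4 mirrors the original MaxDegreeOfParallelism), and awaits all batches with Task.WhenAll. It assumes the same Table, FooTableEntity, and Chunk helpers from the question and omits the retry/timeout options for brevity — an illustration of the suggested approach, not a verified implementation:

public async Task AddBulkAsync(IReadOnlyList<Foo> foos)
{
    // Limit in-flight batch requests to roughly the same concurrency
    // as the original MaxDegreeOfParallelism = 4.
    var throttler = new SemaphoreSlim(4);
    var tasks = new List<Task>();

    // A table batch may only contain entities with the same partition key,
    // so group first, then split each group into chunks of at most 100
    // entities (the per-batch limit).
    foreach (var group in foos.GroupBy(x => x.QueryingId))
    {
        foreach (var chunk in group.Chunk(100))
        {
            tasks.Add(ExecuteChunkAsync(chunk, throttler));
        }
    }

    await Task.WhenAll(tasks);
}

private async Task ExecuteChunkAsync(IEnumerable<Foo> chunk, SemaphoreSlim throttler)
{
    await throttler.WaitAsync();
    try
    {
        var batchOperation = new TableBatchOperation();
        foreach (var foo in chunk)
        {
            // Echo content off, as in the original, to reduce response payload size.
            batchOperation.Insert(new FooTableEntity(foo), echoContent: false);
        }
        await Table.ExecuteBatchAsync(batchOperation);
    }
    finally
    {
        throttler.Release();
    }
}

The TableRequestOptions and OperationContext from the original method could be passed to ExecuteBatchAsync unchanged if the same retry and timeout behaviour is wanted.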