To what level does MongoDB lock on writes? (or: what does it mean by “per connection”?)












56















In the MongoDB documentation, it says:




Beginning with version 2.2, MongoDB implements locks on a per-database basis for most read and write operations. Some global operations, typically short lived operations involving multiple databases, still require a global “instance” wide lock. Before 2.2, there is only one “global” lock per mongod instance.




Does this mean that in the situation where I have, say, 3 connections to mongodb://localhost/test from different apps running on the network, only one could be writing at a time? Or is it just per connection?



In other words: is it per connection, or is the whole /test database locked while it writes?



































  • Starting in MongoDB 3.0, the WiredTiger storage engine is available in the 64-bit builds. WiredTiger uses document-level concurrency control for write operations. As a result, multiple clients can modify different documents of a collection at the same time. docs.mongodb.com/manual/core/wiredtiger/…

    – Muhammad Ali
    Mar 27 '17 at 8:33


















mongodb concurrency locking
asked Jul 3 '13 at 19:35 by nicksahler
edited Dec 17 '13 at 9:19 by shx2













4 Answers
35














It is not per connection; it is per mongod. In other words, the lock will exist across all connections to the test database on that server.

It is also a read/write lock, so if a write is occurring then a read must wait; otherwise, how could MongoDB know it is a consistent read?

However, I should mention that MongoDB locks are very different from the transactional locks you get in SQL databases, and normally a lock will be held for only about a microsecond between average updates.
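That "per mongod, not per connection" behavior can be modeled with a toy sketch (plain Python threading, not MongoDB code): three "connections" all contend on one shared per-database lock that belongs to the server, so their writes are serialized but all complete.

```python
import threading

# Toy model: one lock per database, shared by ALL connections to that
# mongod. The lock is a property of the server, not of the connection.
db_lock = threading.Lock()
test_db = {}          # stands in for the "test" database
write_log = []        # records the order writes actually happened

def connection(name, n_writes):
    """One client app writing documents through its own connection."""
    for i in range(n_writes):
        with db_lock:                 # every connection contends here
            test_db[f"{name}-{i}"] = i
            write_log.append(name)

threads = [threading.Thread(target=connection, args=(f"conn{c}", 100))
           for c in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 300 writes completed; they were interleaved, never concurrent.
print(len(test_db))   # 300
```

No writer is refused; each simply waits its turn on the server-side lock, which is the question's scenario of three apps writing to the same database.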






























  • The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

    – nicksahler
    Jul 3 '13 at 20:25






  • 3





    "normally a lock will be held for about a microsecond": if you held a lock for only one microsecond then, by the laws of physics, you can't guarantee write durability.

    – Pablo Fernandez
    Aug 22 '15 at 23:59






  • 3





    as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

    – Pablo Fernandez
    Aug 23 '15 at 15:13








  • 1





    I do not know what a "fsync queue" is; perhaps a MongoDB (in-memory) internal structure? Anyway, returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete, then you can't guarantee that the data reached a durable device.

    – Pablo Fernandez
    Aug 25 '15 at 15:26






  • 1





    @PabloFernandez meh, people make mistakes

    – Sammaye
    Aug 25 '15 at 19:42





















234














MongoDB Locking is Different



Locking in MongoDB does not work like locking in an RDBMS, so a bit of explanation is in order. In earlier versions of MongoDB, there was a single global reader/writer latch. Starting with MongoDB 2.2, there is a reader/writer latch for each database.



The readers-writer latch



The latch is multiple-reader, single-writer, and is writer-greedy. This means that:




  • There can be an unlimited number of simultaneous readers on a database

  • There can only be one writer at a time on any collection in any one database (more on this in a bit)

  • Writers block out readers

  • By "writer-greedy", I mean that once a write request comes in, all readers are blocked until the write completes (more on this later)


Note that I call this a "latch" rather than a "lock". This is because it's lightweight, and in a properly designed schema the write lock is held on the order of a dozen or so microseconds. See here for more on readers-writer locking.



In MongoDB you can run as many simultaneous queries as you like: as long as the relevant data is in RAM they will all be satisfied without locking conflicts.



Atomic Document Updates



Recall that in MongoDB the level of transaction is a single document. All updates to a single document are Atomic. MongoDB achieves this by holding the write latch for only as long as it takes to update a single document in RAM. If there is any slow-running operation (in particular, if a document or an index entry needs to be paged in from disk), then that operation will yield the write latch. When the operation yields the latch, then the next queued operation can proceed.
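A hedged illustration of the point above (a pure-Python model of the latch, not MongoDB internals): holding the latch for the whole read-modify-write of one document is what makes a single-document update atomic, the way an operator like `$inc` is.

```python
import threading

counter_doc = {"count": 0}        # stands in for one document
doc_latch = threading.Lock()      # stands in for the write latch

def unsafe_inc(n):
    # For contrast only: a read-modify-write done WITHOUT holding
    # the latch can interleave with other writers and lose updates.
    for _ in range(n):
        v = counter_doc["count"]
        counter_doc["count"] = v + 1

def atomic_inc(n):
    # Models an atomic operator like {"$inc": {"count": 1}}: the
    # latch is held across the entire read-modify-write of the doc.
    for _ in range(n):
        with doc_latch:
            counter_doc["count"] += 1

threads = [threading.Thread(target=atomic_inc, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter_doc["count"])  # 40000: no lost updates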



This does mean that the writes to all documents within a single database get serialized. This can be a problem if you have a poor schema design, and your writes take a long time, but in a properly-designed schema, locking isn't a problem.



Writer-Greedy



A few more words on being writer-greedy:



Only one writer can hold the latch at one time; multiple readers can hold the latch at a time. In a naive implementation, writers could starve indefinitely if there was a single reader in operation. To avoid this, in the MongoDB implementation, once any single thread makes a write request for a particular latch:




  • All subsequent readers needing that latch will block

  • That writer will wait until all current readers are finished

  • The writer will acquire the write latch, do its work, and then release the write latch

  • All the queued readers will now proceed


The actual behavior is complex, since this writer-greedy behavior interacts with yielding in ways that can be non-obvious. Recall that, starting with release 2.2, there is a separate latch for each database, so writes to any collection in database 'A' will acquire a separate latch than writes to any collection in database 'B'.
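The four queuing steps above can be sketched as a small writer-greedy readers-writer lock. This is a toy Python model of the general technique, not MongoDB's actual latch (which is C++ and interacts with yielding, as noted):

```python
import threading

class WriterGreedyRWLock:
    """Toy readers-writer latch with writer preference: a pending
    writer blocks NEW readers, waits for current readers to drain,
    writes exclusively, then releases the queued readers."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writers_waiting = 0
        self._writer_active = False

    def acquire_read(self):
        with self._cond:
            # New readers block while any writer is waiting or active.
            while self._writer_active or self._writers_waiting:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1   # from now on, new readers queue
            while self._readers or self._writer_active:
                self._cond.wait()        # wait for current readers to finish
            self._writers_waiting -= 1
            self._writer_active = True

    def release_write(self):
        with self._cond:
            self._writer_active = False
            self._cond.notify_all()      # queued readers/writers proceed

lock = WriterGreedyRWLock()
lock.acquire_read()
lock.acquire_read()      # any number of simultaneous readers
lock.release_read()
lock.release_read()
lock.acquire_write()     # exclusive; would wait out readers first
lock.release_write()
```

Making new readers queue as soon as `_writers_waiting` is nonzero is exactly the "writer-greedy" choice: without it, a steady stream of readers could starve the writer forever.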



Specific questions



Regarding the specific questions:




  • Locks (actually latches) are held by the MongoDB kernel for only as long as it takes to update a single document

  • If you have multiple connections coming in to MongoDB, and each one of them is performing a series of writes, the latch will be held on a per-database basis for only as long as it takes for that write to complete

  • Multiple connections coming in performing writes (update/insert/delete) will all be interleaved


While this sounds like it would be a big performance concern, in practice it doesn't slow things down. With a properly designed schema and a typical workload, MongoDB will saturate the disk I/O capacity -- even for an SSD -- before lock percentage on any database goes above 50%.



The highest capacity MongoDB cluster that I am aware of is currently performing 2 million writes per second.
































  • I understood the logic behind writer-greedy "locking" - just not at which point it would lock others out. This helped. Thanks!

    – nicksahler
    Jul 5 '13 at 22:52











  • I need some clarification about the "writer-greedy" concept: when you say "once a write request comes in, all readers are blocked until the write completes (more on this later)", does the write request block all readers on the entire database, or just the collection (or document)? Does a reader operation block a write operation? Thank you

    – Fred Mériot
    Jun 24 '14 at 8:31






  • 2





    @FredMériot currently it will block at the database level, but document-level locking is already in the dev branch. Yes, a reader operation can block a write; MongoDB cannot read consistently if something is being written to

    – Sammaye
    Jul 10 '14 at 8:25






  • 15





    Even death doesn't stop this guy from helping people! RIP William

    – Sammaye
    Sep 3 '14 at 21:10






  • 15





    For those wondering what happened to William, take a read here: blog.mongodb.org/post/99566492653/…. RIP

    – dayuloli
    Dec 25 '14 at 6:17



















19














MongoDB 3.0 now supports collection-level locking.

In addition, MongoDB 3.0 introduced a storage engine API that allows pluggable storage engines. It ships with two:

  1. MMAPv1: the default storage engine, and the one used in previous versions. Comes with collection-level locking.

  2. WiredTiger: the new storage engine, which comes with document-level locking and compression. (Only available in the 64-bit builds.)


MongoDB 3.0 release notes



WiredTiger





































    9














    I know the question is pretty old, but some people are still confused, so:




    Starting in MongoDB 3.0, the WiredTiger storage engine (which uses document-level concurrency) is available in the 64-bit builds.



    WiredTiger uses document-level concurrency control for write operations. As a result, multiple clients can modify different documents of a collection at the same time.



    For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.



    Some global operations, typically short lived operations involving multiple databases, still require a global “instance-wide” lock. Some other operations, such as dropping a collection, still require an exclusive database lock.




    Document Level Concurrency
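As a rough model of what "optimistic concurrency control with a transparent retry on write conflict" means (plain Python with hypothetical names; this is not the WiredTiger implementation): each writer reads a snapshot without holding a lock, then commits only if the document's version is unchanged, retrying on conflict.

```python
import threading

store_lock = threading.Lock()            # stands in for an atomic compare-and-swap
doc = {"balance": 0, "_version": 0}      # one document with a version stamp

class WriteConflict(Exception):
    """Raised when another writer committed first."""

def read_doc():
    with store_lock:
        return dict(doc)                 # consistent snapshot, lock released after

def cas_update(expected_version, new_balance):
    # The write applies only if nobody committed since our snapshot.
    with store_lock:
        if doc["_version"] != expected_version:
            raise WriteConflict
        doc["balance"] = new_balance
        doc["_version"] += 1

def add_with_retry(amount):
    while True:                          # the "transparent retry" loop
        snap = read_doc()                # optimistic: no lock held while working
        try:
            cas_update(snap["_version"], snap["balance"] + amount)
            return
        except WriteConflict:
            continue                     # another writer won; try again

threads = [threading.Thread(target=add_with_retry, args=(1,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(doc["balance"])  # 8: every increment eventually lands
```

The key property is that no update is lost even though readers and writers never hold a lock while computing; conflicts are detected at commit time and simply retried, which is what the quoted documentation describes MongoDB doing on the application's behalf.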






    share|improve this answer























      Your Answer






      StackExchange.ifUsing("editor", function () {
      StackExchange.using("externalEditor", function () {
      StackExchange.using("snippets", function () {
      StackExchange.snippets.init();
      });
      });
      }, "code-snippets");

      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "1"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f17456671%2fto-what-level-does-mongodb-lock-on-writes-or-what-does-it-mean-by-per-connec%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      35














      It is not per connection, it is per mongod. In other words the lock will exist across all connections to the test database on that server.



      It is also a read/write lock so if a write is occuring then a read must wait, otherwise how can MongoDB know it is a consistent read?



      However I should mention that MongoDB locks are very different to SQL/normal transactional locks you get and normally a lock will be held for about a microsecond between average updates.






      share|improve this answer
























      • The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

        – nicksahler
        Jul 3 '13 at 20:25






      • 3





        normally a lock will be held for about a microsecond if you held a lock for one microsecond then by the laws of physics you can't guarantee write durability.

        – Pablo Fernandez
        Aug 22 '15 at 23:59






      • 3





        as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

        – Pablo Fernandez
        Aug 23 '15 at 15:13








      • 1





        I do not know what a "fsync queue" is, perhaps a mongodb (in memory) internal structure? anyway and returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete then you can't guarantee that the data reached a durable device.

        – Pablo Fernandez
        Aug 25 '15 at 15:26






      • 1





        @PabloFernandez meh, people make mistakes

        – Sammaye
        Aug 25 '15 at 19:42


















      35














      It is not per connection, it is per mongod. In other words the lock will exist across all connections to the test database on that server.



      It is also a read/write lock so if a write is occuring then a read must wait, otherwise how can MongoDB know it is a consistent read?



      However I should mention that MongoDB locks are very different to SQL/normal transactional locks you get and normally a lock will be held for about a microsecond between average updates.






      share|improve this answer
























      • The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

        – nicksahler
        Jul 3 '13 at 20:25






      • 3





        normally a lock will be held for about a microsecond if you held a lock for one microsecond then by the laws of physics you can't guarantee write durability.

        – Pablo Fernandez
        Aug 22 '15 at 23:59






      • 3





        as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

        – Pablo Fernandez
        Aug 23 '15 at 15:13








      • 1





        I do not know what a "fsync queue" is, perhaps a mongodb (in memory) internal structure? anyway and returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete then you can't guarantee that the data reached a durable device.

        – Pablo Fernandez
        Aug 25 '15 at 15:26






      • 1





        @PabloFernandez meh, people make mistakes

        – Sammaye
        Aug 25 '15 at 19:42
















      35












      35








      35







      It is not per connection, it is per mongod. In other words the lock will exist across all connections to the test database on that server.



      It is also a read/write lock so if a write is occuring then a read must wait, otherwise how can MongoDB know it is a consistent read?



      However I should mention that MongoDB locks are very different to SQL/normal transactional locks you get and normally a lock will be held for about a microsecond between average updates.






      share|improve this answer













      It is not per connection, it is per mongod. In other words the lock will exist across all connections to the test database on that server.



      It is also a read/write lock so if a write is occuring then a read must wait, otherwise how can MongoDB know it is a consistent read?



      However I should mention that MongoDB locks are very different to SQL/normal transactional locks you get and normally a lock will be held for about a microsecond between average updates.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered Jul 3 '13 at 19:51









      SammayeSammaye

      35.4k768112




      35.4k768112













      • The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

        – nicksahler
        Jul 3 '13 at 20:25






      • 3





        normally a lock will be held for about a microsecond if you held a lock for one microsecond then by the laws of physics you can't guarantee write durability.

        – Pablo Fernandez
        Aug 22 '15 at 23:59






      • 3





        as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

        – Pablo Fernandez
        Aug 23 '15 at 15:13








      • 1





        I do not know what a "fsync queue" is, perhaps a mongodb (in memory) internal structure? anyway and returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete then you can't guarantee that the data reached a durable device.

        – Pablo Fernandez
        Aug 25 '15 at 15:26






      • 1





        @PabloFernandez meh, people make mistakes

        – Sammaye
        Aug 25 '15 at 19:42





















      • The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

        – nicksahler
        Jul 3 '13 at 20:25






      • 3





        normally a lock will be held for about a microsecond if you held a lock for one microsecond then by the laws of physics you can't guarantee write durability.

        – Pablo Fernandez
        Aug 22 '15 at 23:59






      • 3





        as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

        – Pablo Fernandez
        Aug 23 '15 at 15:13








      • 1





        I do not know what a "fsync queue" is, perhaps a mongodb (in memory) internal structure? anyway and returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete then you can't guarantee that the data reached a durable device.

        – Pablo Fernandez
        Aug 25 '15 at 15:26






      • 1





        @PabloFernandez meh, people make mistakes

        – Sammaye
        Aug 25 '15 at 19:42



















      The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

      – nicksahler
      Jul 3 '13 at 20:25





      The second statement helps a lot - I couldn't find anywhere that they were queued, so I was worried that it was just grabbing data that may not be consistent. Also I was aware of the slight delay, it's fine in my particular situation. Thanks!

      – nicksahler
      Jul 3 '13 at 20:25




      3




      3





      normally a lock will be held for about a microsecond if you held a lock for one microsecond then by the laws of physics you can't guarantee write durability.

      – Pablo Fernandez
      Aug 22 '15 at 23:59





      normally a lock will be held for about a microsecond if you held a lock for one microsecond then by the laws of physics you can't guarantee write durability.

      – Pablo Fernandez
      Aug 22 '15 at 23:59




      3




      3





      as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

      – Pablo Fernandez
      Aug 23 '15 at 15:13







      as of 2015 there's no durable device with 1µs latency, if you're releasing the lock in less than that the value is not persisted.

      – Pablo Fernandez
      Aug 23 '15 at 15:13






      1




      1





      I do not know what a "fsync queue" is, perhaps a mongodb (in memory) internal structure? anyway and returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete then you can't guarantee that the data reached a durable device.

      – Pablo Fernandez
      Aug 25 '15 at 15:26





      I do not know what a "fsync queue" is, perhaps a mongodb (in memory) internal structure? anyway and returning to my original idea: if your write operation takes 1µs (or even 0.5µs like William says) to complete then you can't guarantee that the data reached a durable device.

      – Pablo Fernandez
      Aug 25 '15 at 15:26




      1




      1





      @PabloFernandez meh, people make mistakes

      – Sammaye
      Aug 25 '15 at 19:42







      @PabloFernandez meh, people make mistakes

      – Sammaye
      Aug 25 '15 at 19:42















      234














      MongoDB Locking is Different



      Locking in MongoDB does not work like locking in an RDBMS, so a bit of explanation is in order. In earlier versions of MongoDB, there was a single global reader/writer latch. Starting with MongoDB 2.2, there is a reader/writer latch for each database.



      The readers-writer latch



      The latch is multiple-reader, single-writer, and is writer-greedy. This means that:




      • There can be an unlimited number of simultaneous readers on a database

      • There can only be one writer at a time on any collection in any one database (more on this in a bit)

      • Writers block out readers

      • By "writer-greedy", I mean that once a write request comes in, all readers are blocked until the write completes (more on this later)


      Note that I call this a "latch" rather than a "lock". This is because it's lightweight, and in a properly designed schema the write lock is held on the order of a dozen or so microseconds. See here for more on readers-writer locking.



      In MongoDB you can run as many simultaneous queries as you like: as long as the relevant data is in RAM they will all be satisfied without locking conflicts.



      Atomic Document Updates



      Recall that in MongoDB the level of transaction is a single document. All updates to a single document are Atomic. MongoDB achieves this by holding the write latch for only as long as it takes to update a single document in RAM. If there is any slow-running operation (in particular, if a document or an index entry needs to be paged in from disk), then that operation will yield the write latch. When the operation yields the latch, then the next queued operation can proceed.



      This does mean that the writes to all documents within a single database get serialized. This can be a problem if you have a poor schema design, and your writes take a long time, but in a properly-designed schema, locking isn't a problem.



      Writer-Greedy



      A few more words on being writer-greedy:



      Only one writer can hold the latch at one time; multiple readers can hold the latch at a time. In a naive implementation, writers could starve indefinitely if there was a single reader in operation. To avoid this, in the MongoDB implementation, once any single thread makes a write request for a particular latch




      • All subsequent readers needing that latch will block

      • That writer will wait until all current readers are finished

      • The writer will acquire the write latch, do its work, and then release the write latch

      • All the queued readers will now proceed


      The actual behavior is complex, since this writer-greedy behavior interacts with yielding in ways that can be non-obvious. Recall that, starting with release 2.2, there is a separate latch for each database, so writes to any collection in database 'A' will acquire a separate latch than writes to any collection in database 'B'.



      Specific questions



      Regarding the specific questions:




      • Locks (actually latches) are held by the MongoDB kernel for only as long as it takes to update a single document

      • If you have multiple connections coming in to MongoDB, and each one of them is performing a series of writes, the latch will be held on a per-database basis for only as long as it takes for that write to complete

      • Multiple connections coming in performing writes (update/insert/delete) will all be interleaved


      While this sounds like it would be a big performance concern, in practice it doesn't slow things down. With a properly designed schema and a typical workload, MongoDB will saturate the disk I/O capacity -- even for an SSD -- before lock percentage on any database goes above 50%.



      The highest capacity MongoDB cluster that I am aware of is currently performing 2 million writes per second.






      share|improve this answer


























      • I understood the logic behind writer-greedy "locking" - just not at which point it would lock others out. This helped. Thanks!

        – nicksahler
        Jul 5 '13 at 22:52











      • I need some precisions about the "writer-greedy" concept : When you say "once a write request comes in, all readers are blocked until the write completes (more on this later)" the write request block all readers on the entire database or just the collection (or document)? Does a reader operation block a write operation? Thank you

        – Fred Mériot
        Jun 24 '14 at 8:31






      • 2





        @FredMériot currently it will block it on database level but document level locking is already in the dev branch. Yes a reader operation can block a write, MongoDB cannot read consistently is something is being written to

        – Sammaye
        Jul 10 '14 at 8:25






      • 15





        Even death doesn't stop this guy from helping people! RIP William

        – Sammaye
        Sep 3 '14 at 21:10






      • 15





        For those wondering what happened to William, take a read here: blog.mongodb.org/post/99566492653/…. RIP

        – dayuloli
        Dec 25 '14 at 6:17
















      234














      MongoDB Locking is Different



      Locking in MongoDB does not work like locking in an RDBMS, so a bit of explanation is in order. In earlier versions of MongoDB, there was a single global reader/writer latch. Starting with MongoDB 2.2, there is a reader/writer latch for each database.



      The readers-writer latch



      The latch is multiple-reader, single-writer, and is writer-greedy. This means that:




      • There can be an unlimited number of simultaneous readers on a database

      • There can only be one writer at a time on any collection in any one database (more on this in a bit)

      • Writers block out readers

      • By "writer-greedy", I mean that once a write request comes in, all readers are blocked until the write completes (more on this later)


      Note that I call this a "latch" rather than a "lock". This is because it's lightweight, and in a properly designed schema the write lock is held on the order of a dozen or so microseconds. See here for more on readers-writer locking.



      In MongoDB you can run as many simultaneous queries as you like: as long as the relevant data is in RAM they will all be satisfied without locking conflicts.



      Atomic Document Updates



      Recall that in MongoDB the level of transaction is a single document. All updates to a single document are Atomic. MongoDB achieves this by holding the write latch for only as long as it takes to update a single document in RAM. If there is any slow-running operation (in particular, if a document or an index entry needs to be paged in from disk), then that operation will yield the write latch. When the operation yields the latch, then the next queued operation can proceed.



      This does mean that the writes to all documents within a single database get serialized. This can be a problem if you have a poor schema design, and your writes take a long time, but in a properly-designed schema, locking isn't a problem.
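To make that serialization concrete, here is a toy sketch in Python (the latch, the in-memory "database", and the function names are invented for illustration, not MongoDB internals): one per-database lock, held only for the duration of a single in-memory document update.

```python
import threading

# Toy model of a per-database write latch (illustrative names only).
# The latch is held just long enough to mutate one document in RAM,
# so concurrent writers serialize but never lose updates.
db_latch = threading.Lock()          # one latch for the whole "database"
database = {"orders": {1: {"qty": 0}}}

def atomic_inc(coll, doc_id, field, amount):
    with db_latch:                   # acquire, update one document, release
        database[coll][doc_id][field] += amount

def worker():
    for _ in range(500):
        atomic_inc("orders", 1, "qty", 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(database["orders"][1]["qty"])  # 2000: all 4 x 500 increments survive
```

Because each writer holds the latch only for one tiny in-memory mutation, the writers interleave at document granularity rather than blocking each other for whole operations.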



      Writer-Greedy



      A few more words on being writer-greedy:



Only one writer can hold the latch at one time; multiple readers can hold the latch at a time. In a naive implementation, writers could starve indefinitely while there was a single reader in operation. To avoid this, in the MongoDB implementation, once any single thread makes a write request for a particular latch:




      • All subsequent readers needing that latch will block

      • That writer will wait until all current readers are finished

      • The writer will acquire the write latch, do its work, and then release the write latch

      • All the queued readers will now proceed
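The steps above can be sketched as a small writer-greedy readers-writer latch in Python (a hypothetical illustration, not MongoDB's actual implementation):

```python
import threading

class WriterGreedyRWLock:
    """Multiple-reader / single-writer latch that favors writers:
    once a writer is queued, new readers block until it has finished."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0           # readers currently holding the latch
        self._writer_active = False
        self._writers_waiting = 0   # queued writers -> makes it "greedy"

    def acquire_read(self):
        with self._cond:
            # New readers also wait while a writer is merely *queued*.
            while self._writer_active or self._writers_waiting:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # let a queued writer proceed

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1
            while self._writer_active or self._readers:
                self._cond.wait()
            self._writers_waiting -= 1
            self._writer_active = True

    def release_write(self):
        with self._cond:
            self._writer_active = False
            self._cond.notify_all()       # wake queued readers and writers

# Quick demonstration: concurrent writers never corrupt the shared value.
latch = WriterGreedyRWLock()
state = {"n": 0}

def writer():
    for _ in range(1000):
        latch.acquire_write()
        state["n"] += 1
        latch.release_write()

threads = [threading.Thread(target=writer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state["n"])  # 4000
```

The key writer-greedy detail is in `acquire_read`: a new reader waits not only while a writer *holds* the latch, but also while one is *waiting*, which is what prevents writer starvation.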


The actual behavior is complex, since this writer-greedy behavior interacts with yielding in ways that can be non-obvious. Recall that, starting with release 2.2, there is a separate latch for each database, so writes to any collection in database 'A' will acquire a different latch from writes to any collection in database 'B'.



      Specific questions



      Regarding the specific questions:




      • Locks (actually latches) are held by the MongoDB kernel for only as long as it takes to update a single document

      • If you have multiple connections coming in to MongoDB, and each one of them is performing a series of writes, the latch will be held on a per-database basis for only as long as it takes for that write to complete

      • Multiple connections coming in performing writes (update/insert/delete) will all be interleaved


      While this sounds like it would be a big performance concern, in practice it doesn't slow things down. With a properly designed schema and a typical workload, MongoDB will saturate the disk I/O capacity -- even for an SSD -- before lock percentage on any database goes above 50%.



      The highest capacity MongoDB cluster that I am aware of is currently performing 2 million writes per second.
































edited Jun 24 '14 at 13:08

answered Jul 3 '13 at 23:04

– William Z













      • I understood the logic behind writer-greedy "locking" - just not at which point it would lock others out. This helped. Thanks!

        – nicksahler
        Jul 5 '13 at 22:52











• I need some clarification about the "writer-greedy" concept: when you say "once a write request comes in, all readers are blocked until the write completes (more on this later)", does the write request block all readers on the entire database, or just the collection (or document)? Does a reader operation block a write operation? Thank you

        – Fred Mériot
        Jun 24 '14 at 8:31






      • 2





@FredMériot currently it will block at the database level, but document-level locking is already in the dev branch. Yes, a reader operation can block a write; MongoDB cannot read consistently if something is being written to

        – Sammaye
        Jul 10 '14 at 8:25






      • 15





        Even death doesn't stop this guy from helping people! RIP William

        – Sammaye
        Sep 3 '14 at 21:10






      • 15





        For those wondering what happened to William, take a read here: blog.mongodb.org/post/99566492653/…. RIP

        – dayuloli
        Dec 25 '14 at 6:17



















      19














      Mongo 3.0 now supports collection-level locking.



In addition, Mongo now provides an API for building storage engines. Mongo 3.0 ships with two storage engines:





1. MMAPv1: the default storage engine and the one used in previous versions. Comes with collection-level locking.


      2. WiredTiger: the new storage engine, comes with document-level locking and compression. (Only available for the 64-bit version)


      MongoDB 3.0 release notes



      WiredTiger






answered Mar 4 '15 at 18:07

– Londo























              9














I know the question is pretty old, but some people are still confused...




              Starting in MongoDB 3.0, the WiredTiger storage engine (which uses document-level concurrency) is available in the 64-bit builds.



              WiredTiger uses document-level concurrency control for write operations. As a result, multiple clients can modify different documents of a collection at the same time.



              For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
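As a rough sketch of that conflict-and-retry behavior, here is a toy optimistic-concurrency model in Python (the document dict, version field, and function names are assumptions for illustration, not WiredTiger's real internals):

```python
import threading

# Toy optimistic concurrency control: each "document" carries a version,
# and an update commits only if the version is unchanged since it was
# read; otherwise the caller sees a write conflict and retries.
# All names here are invented for illustration, not WiredTiger internals.
doc = {"value": 0, "version": 0}
_latch = threading.Lock()      # stands in for a short internal engine latch

def read_snapshot():
    with _latch:               # latch held only long enough to copy
        return dict(doc)

def try_commit(expected_version, new_value):
    with _latch:
        if doc["version"] != expected_version:
            return False       # write conflict: someone committed first
        doc["value"] = new_value
        doc["version"] += 1
        return True

def increment_with_retry():
    while True:                # transparent retry on write conflict
        snap = read_snapshot()
        if try_commit(snap["version"], snap["value"] + 1):
            return

threads = [threading.Thread(
    target=lambda: [increment_with_retry() for _ in range(200)])
    for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(doc["value"])  # 800: every increment eventually commits
```

The point of the sketch: no caller holds a lock across its read-modify-write cycle; conflicts are detected at commit time and resolved by retrying, which is why independent writes to different documents can proceed concurrently.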



              Some global operations, typically short lived operations involving multiple databases, still require a global “instance-wide” lock. Some other operations, such as dropping a collection, still require an exclusive database lock.




              Document Level Concurrency






answered Mar 27 '17 at 8:32

– Muhammad Ali





























