nn.Dropout vs. F.dropout (PyTorch)



























In PyTorch there are two ways to apply dropout: torch.nn.Dropout and torch.nn.functional.dropout (F.dropout).

I struggle to see the difference between them:

- When should I use which?
- Does it make a difference?

I don't see any performance difference when I switch them around.










      neural-network pytorch dropout






asked Nov 21 '18 at 19:44 by CutePoison, edited Nov 23 '18 at 5:47 by M. Doosti Lakhani
























2 Answers






          10














The technical differences have already been shown in the other answer. However, the main difference is that nn.Dropout is a torch Module itself, which brings some conveniences:



          A short example for illustration of some differences:



import torch
import torch.nn as nn

class Model1(nn.Module):
    # Model 1 using functional dropout
    def __init__(self, p=0.0):
        super().__init__()
        self.p = p

    def forward(self, inputs):
        return nn.functional.dropout(inputs, p=self.p, training=True)

class Model2(nn.Module):
    # Model 2 using dropout module
    def __init__(self, p=0.0):
        super().__init__()
        self.drop_layer = nn.Dropout(p=p)

    def forward(self, inputs):
        return self.drop_layer(inputs)

model1 = Model1(p=0.5)  # functional dropout
model2 = Model2(p=0.5)  # dropout module

# creating inputs
inputs = torch.rand(10)

# forwarding inputs in train mode
print('Normal (train) model:')
print('Model 1', model1(inputs))
print('Model 2', model2(inputs))
print()

# switching to eval mode
model1.eval()
model2.eval()

# forwarding inputs in evaluation mode
print('Evaluation mode:')
print('Model 1', model1(inputs))
print('Model 2', model2(inputs))

# show model summary
print('Print summary:')
print(model1)
print(model2)


          Output:



Normal (train) model:
Model 1 tensor([ 1.5040,  0.0000,  0.0000,  0.8563,  0.0000,  0.0000,  1.5951,
         0.0000,  0.0000,  0.0946])
Model 2 tensor([ 0.0000,  0.3713,  1.9303,  0.0000,  0.0000,  0.3574,  0.0000,
         1.1273,  1.5818,  0.0946])

Evaluation mode:
Model 1 tensor([ 0.0000,  0.3713,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,
         0.0000,  0.0000,  0.0000])
Model 2 tensor([ 0.7520,  0.1857,  0.9651,  0.4281,  0.7883,  0.1787,  0.7975,
         0.5636,  0.7909,  0.0473])

Print summary:
Model1()
Model2(
  (drop_layer): Dropout(p=0.5)
)


          So which should I use?



Both are completely equivalent in terms of applying dropout, and even though the differences in usage are not that big, there are some reasons to favour nn.Dropout over nn.functional.dropout:



          Dropout is designed to be only applied during training, so when doing predictions or evaluation of the model you want dropout to be turned off.



          The dropout module nn.Dropout conveniently handles this and shuts dropout off as soon as your model enters evaluation mode, while the functional dropout does not care about the evaluation / prediction mode.



Even though you can set functional dropout to training=False to turn it off, it is still not as convenient a solution as nn.Dropout.
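For illustration, here is a minimal sketch (not part of the original answer; the class name Model3 is made up) of the usual workaround: pass the module's own training flag to the functional call so that eval() does turn it off:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Model3(nn.Module):
    # functional dropout, but wired to the module's train/eval state
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, inputs):
        # self.training is flipped automatically by model.train() / model.eval()
        return F.dropout(inputs, p=self.p, training=self.training)

model3 = Model3(p=0.5)
x = torch.rand(10)
print(model3(x))   # train mode: some elements dropped, the rest scaled by 1/(1-p)
model3.eval()
print(model3(x))   # eval mode: x passes through unchanged

This is the F.dropout(x, training=self.training) pattern mentioned in the comments below; it works, but you have to remember to thread the flag through yourself.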



Also, the drop rate is stored in the module, so you don't have to save it in an extra variable. In larger networks you might want to create different dropout layers with different drop rates - here nn.Dropout may increase readability and can also be convenient when using the layers multiple times.
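As an illustrative sketch of that point (the layer sizes and drop rates below are invented, not taken from the answer), the rates live inside the modules and everything can be switched off with a single eval() call:

import torch
import torch.nn as nn

class MLP(nn.Module):
    # hypothetical network: a different drop rate per block, no extra variables needed
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(128, 64)
        self.drop1 = nn.Dropout(p=0.2)   # light dropout early on
        self.fc2 = nn.Linear(64, 32)
        self.drop2 = nn.Dropout(p=0.5)   # heavier dropout deeper in the network
        self.out = nn.Linear(32, 10)

    def forward(self, x):
        x = self.drop1(torch.relu(self.fc1(x)))
        x = self.drop2(torch.relu(self.fc2(x)))
        return self.out(x)

model = MLP()
model.eval()    # disables both dropout layers at once
print(model)    # both Dropout(p=...) entries show up in the summary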



Finally, all modules which are assigned to your model are registered in your model. So your model class keeps track of them, which is why you can turn off the dropout module just by calling eval(). When using the functional dropout, your model is not aware of it, and it won't appear in any summary.






answered Nov 23 '18 at 20:53 by blue-phoenox (edited Nov 23 '18 at 21:24)
• Thank you for the "So which should I use?" part - that was what I was missing! I normally just do F.dropout(x, training=self.training) to handle the train/eval difference. So to summarize: it's a matter of personal preference? – CutePoison, Nov 24 '18 at 10:32











• @Jakob Yes, exactly! nn.Dropout just intends to provide a slightly higher-level API to the functional dropout so it can be used in a layer style. However, there is no real difference in behaviour if you use it as you described. – blue-phoenox, Nov 24 '18 at 11:58











• When I have more than one layer where I want to apply dropout, should I instantiate one nn.Dropout object for each layer, or can I safely reuse it? In general: how do I know which layers can be reused and which cannot? – Simon H, Jan 17 at 23:34



















          2














If you look at the source code of nn.Dropout and functional.dropout, you can see that the functional API is the interface and the nn modules implement their forward passes in terms of it.

Look at the implementations in the nn module:



from .. import functional as F

class Dropout(_DropoutNd):
    def forward(self, input):
        return F.dropout(input, self.p, self.training, self.inplace)

class Dropout2d(_DropoutNd):
    def forward(self, input):
        return F.dropout2d(input, self.p, self.training, self.inplace)


          And so on.



Implementations in the functional module:



def dropout(input, p=0.5, training=False, inplace=False):
    return _functions.dropout.Dropout.apply(input, p, training, inplace)

def dropout2d(input, p=0.5, training=False, inplace=False):
    return _functions.dropout.FeatureDropout.apply(input, p, training, inplace)


Look at the example below to understand:



class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)


There is an F.dropout in the forward() function and an nn.Dropout (here nn.Dropout2d) in the __init__() function. Now this is the explanation:



          In PyTorch you define your Models as subclasses of torch.nn.Module.



In the __init__ function, you are supposed to initialize the layers you want to use. Unlike Keras, PyTorch is more low-level and you have to specify the sizes of your network so that everything matches.



In the forward method, you specify the connections between your layers. This means that you use the layers you already initialized, so the same layer is re-used on every forward pass of data you make.
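A small sketch of that idea (the layer sizes and the shared dropout module are my own illustration, not from the answer): a module created once in __init__ can be called at several points in forward, and since nn.Dropout holds no weights, reusing one instance is safe:

import torch
import torch.nn as nn

class SharedDropoutNet(nn.Module):
    # hypothetical model: one nn.Dropout instance applied after two different layers
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(16, 16)
        self.fc2 = nn.Linear(16, 4)
        self.drop = nn.Dropout(p=0.3)   # created once in __init__

    def forward(self, x):
        x = self.drop(torch.relu(self.fc1(x)))  # re-used here...
        return self.fc2(self.drop(x))           # ...and here, on every forward pass

net = SharedDropoutNet()
print(net(torch.rand(2, 16)).shape)  # torch.Size([2, 4])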



torch.nn.functional contains some useful functions like activation functions and convolution operations you can use. However, these are not full layers, so if you want to specify a layer of any kind you should use torch.nn.Module.



You would use the torch.nn.functional conv operations to define a custom layer, for example one built around a convolution operation, but not to define a standard convolution layer.
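As a minimal sketch of that idea (the class name and the fixed averaging kernel are made up for illustration, not taken from the answer), a custom layer can call the functional convolution directly, while a standard convolution layer stays an nn.Conv2d module with its own learned weights:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedBlur(nn.Module):
    # hypothetical custom layer: applies a fixed 3x3 averaging kernel via the functional API
    def __init__(self):
        super().__init__()
        kernel = torch.full((1, 1, 3, 3), 1.0 / 9.0)
        self.register_buffer('kernel', kernel)  # fixed weights, not trainable parameters

    def forward(self, x):
        # functional conv2d gives full control over the weights used;
        # nn.Conv2d would create and train its own weight tensor instead
        return F.conv2d(x, self.kernel, padding=1)

blur = FixedBlur()
img = torch.rand(1, 1, 8, 8)   # (batch, channels, height, width)
print(blur(img).shape)         # torch.Size([1, 1, 8, 8])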






answered Nov 21 '18 at 20:32 by M. Doosti Lakhani (edited Nov 22 '18 at 21:33)
• But what should be used when? Does that make a difference? – CutePoison, Nov 22 '18 at 21:24











• And I highly recommend asking your questions about PyTorch on discuss.pytorch.org. I already joined and have learned a lot by reading the questions and answers. – M. Doosti Lakhani, Nov 22 '18 at 21:35






• But the dropout itself does not have any parameters/weights, so why would you add it as a layer? I kind of struggle to see when F.dropout(x) is superior to nn.Dropout (or vice versa); to me they do exactly the same thing. For instance, what is the difference (apart from one being a function and the other a module) between F.dropout(x) and F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))? Could you not replace the latter with F.relu(F.max_pool2d(F.dropout(self.conv2(x)), 2))? – CutePoison, Nov 23 '18 at 16:20













• To edit the above: why would you add them in the init function / use them that way? – CutePoison, Nov 23 '18 at 16:52











• You can see this post also: discuss.pytorch.org/t/… – M. Doosti Lakhani, Nov 24 '18 at 6:46











