Why is np.dot much slower the first time it is called in a python session?



























I am trying to do a series of big np.dot(a, x) operations, and the first one seems to take far longer than subsequent calls. In my problem, a is tall [n x 2] and x is [2 x 1]. My big matrix a is constant; only x changes. Here is an MWE:



import numpy as np

@profile
def do_work(a, x):
    tmp = np.dot(a, x)
    return tmp

@profile
def do_work_iter(a, x):
    tmp = np.dot(a, x)
    return tmp

if __name__ == "__main__":

    n = 50000
    a = np.random.randn(n, 2)
    x = np.random.randn(2, 1)

    # first call
    tmp = do_work(a, x)

    # subsequent calls
    niter = 100
    for i in range(niter):
        x = np.random.randn(2, 1)
        tmp = do_work_iter(a, x)


Using line_profiler, I get 0.155 s/call for the first call to np.dot and 0.00013 s/call for the subsequent ones. Is there some setup or error checking that numpy does only the first time? Is there a way I can bypass it? Or is some kind of search for BLAS functions taking all of the time?
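
One workaround I could try, assuming the one-time cost is lazy loading/initialization of the BLAS (MKL) backend rather than anything specific to my arrays, is to pay that cost up front with a throwaway np.dot on tiny dummy arrays before the section I actually profile:

import numpy as np

# Throwaway product on tiny arrays; if the slowdown is one-time BLAS/MKL
# setup, it should be paid here (assumption, not verified).
np.dot(np.ones((2, 2)), np.ones((2, 1)))

# The "real" calls should then run at steady-state speed.
a = np.random.randn(50000, 2)
x = np.random.randn(2, 1)
tmp = np.dot(a, x)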



I also ran the profile module, which gives the following:



ncalls  tottime  percall  cumtime  percall filename:lineno(function)
1 0.000 0.000 1.514 1.514 {built-in method builtins.exec}
1 0.000 0.000 1.514 1.514 <string>:1(<module>)
1 0.000 0.000 1.514 1.514 speed_small.py:15(run)
101 1.503 0.015 1.503 0.015 {built-in method numpy.core.multiarray.dot}
1 0.000 0.000 1.491 1.491 speed_small.py:5(do_work)
100 0.000 0.000 0.012 0.000 speed_small.py:10(do_work_iter)


So numpy.core.multiarray.dot is taking all of the time; the profile doesn't give much insight into anything further down the stack.
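
To rule out overhead from the @profile wrapper itself, I could also time the calls directly with time.perf_counter and no profiler attached (a sketch; the printed numbers are whatever the machine reports, not values I am claiming):

import time
import numpy as np

a = np.random.randn(50000, 2)

def timed_dot(a, x):
    # Time a single matrix-vector product.
    t0 = time.perf_counter()
    np.dot(a, x)
    return time.perf_counter() - t0

# Compare the first call in the session with the best of 100 later calls.
first = timed_dot(a, np.random.randn(2, 1))
best_rest = min(timed_dot(a, np.random.randn(2, 1)) for _ in range(100))
print("first: %.6f s, best subsequent: %.6f s" % (first, best_rest))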



I am on Python 3.6 from Anaconda with MKL installed (Windows 7).
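
To double-check which BLAS numpy is actually linked against (MKL vs. a fallback), I can print numpy's build configuration:

import numpy as np

# Prints the BLAS/LAPACK libraries numpy was built with,
# e.g. mkl_info entries on an Anaconda/MKL install.
np.__config__.show()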

































  • I use line_profiler, but my guess would be that you're incurring the import overhead the first time. After that, the module is cached.

    – roganjosh
    Nov 21 '18 at 20:58













  • @roganjosh Isn't the module imported before the script reaches the if __name__ == '__main__': part?

    – eozd
    Nov 21 '18 at 20:59













  • It's an external module and nothing in your code suggests you're using it (EDIT: that's a lie, you're using the @profile decorator, so I'll rethink). I don't know exactly what timings get bundled into the main library timings.

    – roganjosh
    Nov 21 '18 at 21:02








  • Loading of BLAS?

    – Ante
    Nov 23 '18 at 12:59


















python performance numpy






asked Nov 21 '18 at 20:55









Isaac Asher













