I've been using cProfile to profile my code, and it's been working great. I also use gprof2dot.py to visualize the results (makes it a little clearer).
However, cProfile (and most other Python profilers I've seen so far) seem to profile only at the function-call level. This causes confusion when a function is called from several different places - I have no idea whether call #1 or call #2 is taking up the majority of the time. This gets even worse when the function in question is six levels deep, called from seven other places.
How do I get line-by-line profiling?
Instead of this:
function #12, total time: 2.0s
I'd like to see something like this:
function #12 (called from somefile.py:102) 0.5s
function #12 (called from main.py:12) 1.5s
cProfile does show how much of the total time "transfers" to the parent, but again, that connection is lost once you have several layers of interconnected calls.
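To be clear, something like the following pstats usage will at least list the callers that cProfile recorded (main() and 'func' are just placeholder names here):

import cProfile
import pstats

cProfile.run('main()', 'profile.out')            # run the program and dump raw stats to a file
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(20)   # heaviest call paths first
stats.print_callers('func')                      # how much time each caller contributes to func

But that output is still aggregated per function, so the per-call-site breakdown I'm after gets muddy as soon as the layers pile up.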
Ideally, I'd love to have a GUI that would parse through the data, then show me my source file with a total time given to each line. Something like this:
main.py:
a = 1 # 0.0s
result = func(a) # 0.4s
c = 1000 # 0.0s
result = func(c) # 5.0s
Then I'd be able to click on the second "func(c)" call to see what's taking up time in that call, separate from the "func(a)" call. Does that make sense?
I believe that's what Robert Kern's line_profiler is intended for. From the link:
File: pystone.py
Function: Proc2 at line 149
Total time: 0.606656 s
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   149                                           @profile
   150                                           def Proc2(IntParIO):
   151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
   152     50000        63162      1.3     10.4      while 1:
   153     50000        69065      1.4     11.4          if Char1Glob == 'A':
   154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
   155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
   156     50000        65494      1.3     10.8              EnumLoc = Ident1
   157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
   158     50000        63739      1.3     10.5              break
   159     50000        61575      1.2     10.1      return IntParIO
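In case it helps, the usual way to get output like that is to decorate the functions you care about with @profile and run the script with kernprof -l yourscript.py, then view the results with python -m line_profiler yourscript.py.lprof (yourscript.py being whatever you're profiling). You can also drive it from Python directly; a small sketch, with slow_function standing in for whatever you actually want to time:

from line_profiler import LineProfiler

def slow_function(n):
    total = 0
    for i in range(n):               # each line inside the wrapped function gets its own timing row
        total += i * i
    return total

profiler = LineProfiler()
profiled = profiler(slow_function)   # using the profiler as a decorator traces the function's lines
profiled(100000)
profiler.print_stats()               # prints a per-line table like the one above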