Attaching a process with pdb

Asked by Hola Soy Edu Feliz Navidad · Aug 14, 2014 · Viewed 60.1k times

I have a Python script that I suspect has a deadlock. I tried to debug it with pdb, but stepping through it doesn't reproduce the deadlock, and from the output I can see that it doesn't hang on the same iteration each time. I would like to attach a debugger to my script only once it locks up. Is that possible? I'm open to using other debuggers if necessary.

Answer

Answered by skrrgwasme · Aug 15, 2014

At this time, pdb is not able to attach to a running program and halt it so you can begin debugging. You have a few other options:

GDB

You can use GDB to debug at the C level. This is a bit more abstract because you're poking around Python's C source code rather than your actual Python script, but it can be useful in some cases. The instructions are here: https://wiki.python.org/moin/DebuggingWithGdb. They are too involved to summarise here.

Third-Party Extensions & Modules

Just googling for "pdb attach process" reveals a couple of tools that add this ability:
Pyringe: https://github.com/google/pyringe
PyCharm: https://blog.jetbrains.com/pycharm/2015/02/feature-spotlight-python-debugger-and-attach-to-process/
This page of the Python wiki lists several more alternatives: https://wiki.python.org/moin/PythonDebuggingTools


For your specific use case, I have some ideas for workarounds:

Signals

If you're on Unix, you can use signals, as in this blog post, to halt a running script and attach a debugger to it.

This quote block is copied directly from the linked blog post:

Of course pdb has already got functions to start a debugger in the middle of your program, most notably pdb.set_trace(). This however requires you to know where you want to start debugging, it also means you can't leave it in for production code.

But I've always been envious of what I can do with GDB: just interrupt a running program and start to poke around with a debugger. This can be handy in some situations, e.g. you're stuck in a loop and want to investigate. And today it suddenly occurred to me: just register a signal handler that sets the trace function! Here the proof of concept code:

import os
import signal
import sys
import time    

def handle_pdb(sig, frame):
    import pdb
    pdb.Pdb().set_trace(frame)    

def loop():
    while True:
        x = 'foo'
        time.sleep(0.2)

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, handle_pdb)
    print(os.getpid())
    loop()

Now I can send SIGUSR1 to the running application and get a debugger. Lovely!

I imagine you could spice this up by using Winpdb to allow remote debugging in case your application is no longer attached to a terminal. And the other problem the above code has is that it can't seem to resume the program after pdb got invoked, after exiting pdb you just get a traceback and are done (but since this is only bdb raising the bdb.BdbQuit exception I guess this could be solved in a few ways). The last immediate issue is running this on Windows, I don't know much about Windows but I know they don't have signals so I'm not sure how you could do this there.
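
To address the resume problem mentioned above, here is a minimal sketch building on the quoted proof of concept (the BdbQuit handling is my own suggestion, not from the blog post): since quitting pdb just makes bdb raise bdb.BdbQuit in the traced frame, you can catch that at the top level of the script, and typing c (continue) at the pdb prompt resumes the program without raising anything at all. The handler is triggered from another shell with kill -USR1 <pid>.

import bdb
import os
import signal
import time

def handle_pdb(sig, frame):
    import pdb
    pdb.Pdb().set_trace(frame)

def loop():
    while True:
        x = 'foo'
        time.sleep(0.2)

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, handle_pdb)
    print(os.getpid())  # send SIGUSR1 to this PID, e.g. `kill -USR1 <pid>`
    try:
        loop()
    except bdb.BdbQuit:
        # Typing 'q' in pdb makes bdb raise BdbQuit in the traced frame;
        # catching it here lets the script exit quietly instead of dying
        # with a traceback. Typing 'c' resumes the loop without raising.
        pass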

Conditional Breakpoints and Loops

If you don't have signals available, you may still be able to use pdb: wrap your lock or semaphore acquisitions in a loop that increments a counter, and only halt when the count has reached a ridiculously large number. For example, say you have a lock that you suspect is part of your deadlock:

lock.acquire() # some lock or semaphore from threading or multiprocessing

Rewrite it this way:

count = 0
while not lock.acquire(False):  # this loop spins forever if the lock is deadlocked
    count += 1

    continue  # set a conditional breakpoint on this line in pdb that only
              # triggers when count is a ridiculously large number:
              # (Pdb) break <filename>:<linenumber>, count > 9999999999

The breakpoint should trigger when count is very large, (hopefully) indicating that a deadlock has occurred there. If you find that it triggers even when the locking objects don't seem to indicate a deadlock, you may need to insert a short time delay in the loop so the counter doesn't increment quite so fast. You may also have to play around with the breakpoint's triggering threshold to get it to trigger at the right time. The number in my example was arbitrary.
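
If you would rather not set the breakpoint at the pdb prompt, a roughly equivalent sketch is to drop into the debugger from the code itself once the counter passes the threshold (the threshold and the lock name are placeholders, not something from your script):

import pdb

count = 0
while not lock.acquire(False):  # 'lock' is whatever lock you suspect is deadlocked
    count += 1
    if count > 9999999999:      # arbitrary threshold; tune it for your workload
        pdb.set_trace()         # starts the debugger only after the loop has spun
                                # long enough to look like a deadlock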

Another variant on this would be to not use pdb at all, and instead intentionally raise an exception when the counter gets huge. If you write your own exception class, you can use it to bundle up all of the local semaphore/lock state, then catch it at the top level of your script and print that state out right before exiting.
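
A minimal sketch of that idea (the exception class, the threshold, and the helper name are all illustrative, not part of your script):

class ProbableDeadlock(Exception):
    """Raised when a lock acquisition loop has spun for suspiciously long."""
    def __init__(self, message, lock_state):
        super().__init__(message)
        self.lock_state = lock_state  # whatever local state you want to inspect

def acquire_or_diagnose(lock, threshold=9999999):
    count = 0
    while not lock.acquire(False):
        count += 1
        if count > threshold:
            # bundle up whatever is useful: the lock itself, spin count, etc.
            raise ProbableDeadlock("possible deadlock", {"lock": lock, "spins": count})

if __name__ == '__main__':
    import threading
    lock = threading.Lock()
    try:
        acquire_or_diagnose(lock)
        # ... the rest of your program ...
    except ProbableDeadlock as exc:
        print("Probable deadlock, state:", exc.lock_state)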

File Indicators

A different way to use the same acquisition loop, without relying on getting the counter threshold right, is to write to files instead:

import time

while not lock.acquire(False):  # this loop spins forever if the lock is deadlocked
    with open('checkpoint_a.txt', 'a') as fo:  # use a unique filename per lock
        fo.write("\nHit")  # write an indicator to the file
    time.sleep(3)  # pause for a moment so the file size doesn't explode

Now let your program run for a minute or two. Kill the program and go through those "checkpoint" files. If a deadlock is responsible for your stalled program, the files that have the word "Hit" written in them a bunch of times indicate which lock acquisitions are responsible for your deadlock.

You can expand the usefulness of this by having the loop write variables or other state information instead of just a constant. For example, you said you suspect the deadlock is happening in a loop but don't know which iteration it's on: have this lock loop dump your loop's controlling variables or other state information so you can identify the iteration the deadlock occurred on.
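
A sketch of that last suggestion (here i stands in for your loop's controlling variable and lock for the suspected lock; both are placeholders):

import time

while not lock.acquire(False):  # suspected deadlock point inside your loop
    with open('checkpoint_a.txt', 'a') as fo:
        # record which iteration of the outer loop we were on, plus any other
        # state that helps pinpoint where things got stuck
        fo.write("\nHit on iteration {}".format(i))
    time.sleep(3)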