memory dump using Python

shaun · Feb 3, 2013 · Viewed 12.8k times

I have a small program, written for me in Python, that helps me generate all combinations of passwords from different sets of numbers and words I know, so that I can recover a password I forgot. Since I know all the different words and sets of numbers I used, I just wanted to generate all possible combinations. The only problem is that the list seems to go on for hours and hours, so eventually I run out of memory and it doesn't finish.

I was told it needs to dump my memory so it can carry on, but I'm not sure if this is right. Is there any way I can get around this problem?

This is the program I am running:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# (source encoding declared so the non-ASCII "£" characters below are accepted)
import itertools
gfname = "name"
tendig = "1234567890"
sixteendig = "1111111111111111"
housenum = "99"
Characterset1 = "&&&&"
Characterset2 = "££££"
daughternam = "dname"
daughtyear = "1900"
phonenum1 = "055522233"
phonenum2 = "3333333"





mylist = [gfname, tendig, sixteendig, housenum, Characterset1,
          Characterset2, daughternam, daughtyear, phonenum1, phonenum2]
for length in range(1, len(mylist)+1):
    for item in itertools.permutations(mylist, length):
        print "".join(item)

I have taken out a few sets and changed the numbers and words for obvious reasons, but this is roughly the program.
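For reference, the ten entries in mylist above give 9,864,100 candidates across all lengths (the sum of the 10-pick-k permutation counts for k = 1 to 10), which goes some way to explaining why it runs for so long. A quick way to check that count:

import math

# Total ordered selections of k items from 10, summed over k = 1..10:
# 10! / (10 - k)!
total = sum(math.factorial(10) // math.factorial(10 - k) for k in range(1, 11))
print(total)  # 9864100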

Another thing: I may be missing a particular word, but I didn't want to put it in the list because I know it might go before all the generated passwords. Does anyone know how to add a prefix to my program?

Sorry for the bad grammar, and thanks for any help.

Answer

sotapme · Feb 3, 2013

I used guppy to understand the memory usage. I changed the OP's code slightly (changes are marked # !!!):

# -*- coding: utf-8 -*-
# (source encoding declared for the "£" characters)
import itertools
gfname = "name"
tendig = "1234567890"
sixteendig = "1111111111111111"
housenum = "99"
Characterset1 = "&&&&"
Characterset2 = u"££££"
daughternam = "dname"
daughtyear = "1900"
phonenum1 = "055522233"
phonenum2 = "3333333"

from guppy import hpy # !!!
h=hpy()               # !!!
mylist = [gfname, tendig, sixteendig, housenum, Characterset1,
          Characterset2, daughternam, daughtyear, phonenum1, phonenum2]
for length in range(1, len(mylist)+1):
    print h.heap() #!!!
    for item in itertools.permutations(mylist, length):
        print item # !!!

Guppy outputs something like this every time h.heap() is called.

Partition of a set of 25914 objects. Total size = 3370200 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  11748  45   985544  29    985544  29 str
     1   5858  23   472376  14   1457920  43 tuple
     2    323   1   253640   8   1711560  51 dict (no owner)
     3     67   0   213064   6   1924624  57 dict of module
     4    199   1   210856   6   2135480  63 dict of type
     5   1630   6   208640   6   2344120  70 types.CodeType
     6   1593   6   191160   6   2535280  75 function
     7    199   1   177008   5   2712288  80 type
     8    124   0   135328   4   2847616  84 dict of class
     9   1045   4    83600   2   2931216  87 __builtin__.wrapper_descriptor

Running python code.py > code.log and then fgrep Partition code.log shows:

Partition of a set of 25914 objects. Total size = 3370200 bytes.
Partition of a set of 25924 objects. Total size = 3355832 bytes.
Partition of a set of 25924 objects. Total size = 3355728 bytes.
Partition of a set of 25924 objects. Total size = 3372568 bytes.
Partition of a set of 25924 objects. Total size = 3372736 bytes.
Partition of a set of 25924 objects. Total size = 3355752 bytes.
Partition of a set of 25924 objects. Total size = 3372592 bytes.
Partition of a set of 25924 objects. Total size = 3372760 bytes.
Partition of a set of 25924 objects. Total size = 3355776 bytes.
Partition of a set of 25924 objects. Total size = 3372616 bytes.

I believe this shows that the memory footprint stays fairly consistent.

Granted, I may be misinterpreting the results from guppy, although during my tests I deliberately added a new string to a list to see whether the object count increased, and it did.
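This matches how itertools.permutations behaves: it is a lazy iterator that yields one tuple at a time instead of building the full list of results, so the loop itself shouldn't grow memory. A small, self-contained illustration:

import itertools
import sys

items = list("abcdefghij")

# The generator object itself stays small, no matter how many
# permutations it could eventually yield.
perms = itertools.permutations(items, 5)
print(sys.getsizeof(perms))

# Materialising all the results at once is what actually costs memory.
all_perms = list(itertools.permutations(items, 5))
print(len(all_perms))  # 30240 tuples held in memory at the same time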

For those interested, I had to install guppy on OS X Mountain Lion like so: pip install https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy

In summary, I don't think it's a running-out-of-memory issue, although admittedly we're not using the full OP dataset.
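As for the prefix part of the question: a minimal sketch of one way to do it, assuming the missing word just needs to be prepended to every candidate, is to join it onto the front of each permutation before printing:

import itertools

prefix = "missingword"  # hypothetical placeholder - put the real word here
words = ["name", "1234567890", "99"]  # shortened example list

for length in range(1, len(words) + 1):
    for item in itertools.permutations(words, length):
        print(prefix + "".join(item))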