I am using a Python-wrapped (via `ctypes`) C library to run a series of computations. At different stages of the run, I want to get data into Python, and specifically into `numpy` arrays.
The wrapper I am using returns array data (which is of particular interest to me) in two different ways:
1. A `ctypes` array: when I do `type(x)` (where `x` is the `ctypes` array), I get `<class 'module_name.wrapper_class_name.c_double_Array_12000'>` in return. I know from the documentation that this data is a copy of the library's internal data, and I am able to get it into a `numpy` array easily (see the sketch just after this list):

>>> np.ctypeslib.as_array(x)

This returns a 1D `numpy` array of the data.
2. A `ctypes` pointer to the data: in this case, I understand from the library's documentation that I am getting a pointer to the data stored and used directly by the library. When I do `type(y)` (where `y` is the pointer), I get `<class 'module_name.wrapper_class_name.LP_c_double'>`. In this case I am still able to index through the data like `y[0][2]`, but I was only able to get it into numpy via a super awkward:

>>> np.frombuffer(np.core.multiarray.int_asbuffer(
    ctypes.addressof(y.contents), array_length*np.dtype(float).itemsize))
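For reference, here is a minimal self-contained version of the first case; the 12000-element array type is just a stand-in for what the wrapper actually returns:

import ctypes
import numpy as np

x = (ctypes.c_double * 12000)()   # same type as c_double_Array_12000
a = np.ctypeslib.as_array(x)      # 1D float64 view sharing x's memory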
I found the `int_asbuffer` trick in an old `numpy` mailing list thread from Travis Oliphant, but not in the `numpy` documentation. If instead of that approach I try the same call as in the first case, I get the following:
>>> np.ctypeslib.as_array(y)
...
... BUNCH OF STACK INFORMATION
...
AttributeError: 'LP_c_double' object has no attribute '__array_interface__'
Is this `np.frombuffer` approach the best or only way to do this? I am open to other suggestions, but would still like to use `numpy`, as I have a lot of other post-processing code that relies on `numpy` functionality that I want to use with this data.
Creating NumPy arrays from a ctypes pointer object is a problematic operation. It is unclear who actually owns the memory the pointer is pointing to. When will it be freed again? How long is it valid? Whenever possible I would try to avoid this kind of construct. It is so much easier and safer to create arrays in the Python code and pass them to the C function than to use memory allocated by a Python-unaware C function. By doing the latter, you negate to some extent the advantages of having a high-level language taking care of the memory management.
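To illustrate that recommendation with a minimal sketch (here `libexample.so` and its `compute()` function are hypothetical stand-ins for the real library): NumPy allocates and owns the buffer, and the C function merely fills it.

import ctypes
import numpy as np

# NumPy owns this memory and frees it once `data` is garbage-collected.
data = np.zeros(12000, dtype=np.float64)

# Hypothetical C function: void compute(double *out, int n);
lib = ctypes.CDLL("./libexample.so")
lib.compute.argtypes = (ctypes.POINTER(ctypes.c_double), ctypes.c_int)
lib.compute(data.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), len(data))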
If you are really sure that someone takes care of the memory, you can create an object exposing the Python "buffer protocol" and then create a NumPy array using this buffer object. You gave one way of creating the buffer object in your post, via the undocumented `int_asbuffer()` function:
buffer = numpy.core.multiarray.int_asbuffer(
ctypes.addressof(y.contents), 8*array_length)
(Note that I substituted `8` for `np.dtype(float).itemsize`. It's always 8, on any platform.) A different way to create the buffer object would be to call the `PyBuffer_FromMemory()` function from the Python C API via ctypes:
buffer_from_memory = ctypes.pythonapi.PyBuffer_FromMemory
buffer_from_memory.restype = ctypes.py_object
buffer = buffer_from_memory(y, 8*array_length)
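A caveat beyond the original answer: `PyBuffer_FromMemory()` exists only in Python 2, and `int_asbuffer()` has since been removed from NumPy. On Python 3, a sketch of the same idea would use `PyMemoryView_FromMemory()` instead, again assuming `y` is an `LP_c_double` and `array_length` is the number of doubles:

buffer_from_memory = ctypes.pythonapi.PyMemoryView_FromMemory
buffer_from_memory.restype = ctypes.py_object
buffer_from_memory.argtypes = (ctypes.c_void_p, ctypes.c_ssize_t, ctypes.c_int)
PyBUF_READ = 0x100   # buffer flag from CPython's object.h; 0x200 (PyBUF_WRITE) gives a writable view
buffer = buffer_from_memory(y, 8 * array_length, PyBUF_READ)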
For any of these ways, you can then create a NumPy array from `buffer` by
a = numpy.frombuffer(buffer, float)
(I actually do not understand why you use `.astype()` instead of a second parameter to `frombuffer`; furthermore, I wonder why you use `np.int`, while you said earlier that the array contains `double`s.)
I'm afraid it won't get much easier than this, but it isn't that bad, don't you think? You could bury all the ugly details in a wrapper function and not worry about it any more.
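For example, here is a minimal sketch of such a wrapper; the name `ptr_to_array` is my own invention, and it relies on `np.ctypeslib.as_array()` accepting a ctypes pointer together with an explicit shape, which newer NumPy versions support:

import ctypes
import numpy as np

def ptr_to_array(ptr, length):
    """Wrap `length` doubles at `ptr` in a NumPy view (no copy is made).

    The caller must ensure the underlying C memory stays valid for as
    long as the returned array (or any view of it) is in use.
    """
    return np.ctypeslib.as_array(ptr, shape=(length,))

a = ptr_to_array(y, array_length)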