In Python 3, I have a PDF file "Ativos_Fevereiro_2018_servidores_rj.pdf" with 6,041 pages. I'm on a machine running Ubuntu.
On each page there are two lines of text at the top, and below them a table with a header and two columns. Each table has 36 rows, fewer on the last page.
At the end of each page, after the table, there is also a line of text.
I want to create a CSV from this PDF, considering only the tables on the pages and ignoring the text before and after them.
Initially I tested tabula-py, but it generates an empty file:
from tabula import convert_into

# try to convert the whole PDF straight to a CSV file
convert_into("Ativos_Fevereiro_2018_servidores_rj.pdf", "test_s.csv", output_format="csv")
Does anyone know of another way to use tabula-py for this kind of task?
Or another way to convert a PDF like this to CSV?
OK, I've found the issue: you have to set spreadsheet=True
and keep utf-8 encoding:
import tabula

df = tabula.read_pdf("Ativos_Fevereiro_2018_servidores_rj.pdf", encoding='utf-8', spreadsheet=True, pages='1-6041')
For testing, I ran it on just the first page (because your file is huge).
You can save the DataFrame as a CSV afterwards:
df.to_csv('output.csv', encoding='utf-8')
OK, the error could be a Java memory issue. To make it faster I added the pages
option. There was also an encoding problem, so I added encoding='utf-8'
to the CSV export.
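If the Java process itself runs out of memory, you can also try giving it a larger heap via tabula-py's java_options argument. This is only a sketch; the -Xmx value is an example and should be adjusted to your machine:

import tabula

# Example only: run the underlying Java process with a 4 GB heap.
# -Xmx is a standard JVM flag; pick a value that fits your RAM.
df = tabula.read_pdf(
    "Ativos_Fevereiro_2018_servidores_rj.pdf",
    encoding='utf-8',
    spreadsheet=True,
    pages='1-300',
    java_options=["-Xmx4g"],
)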
If you keep running into the Java error, try parsing it in chunks, e.g. pages='1-300' (see the sketch below).
I just did all 6,041 pages (on a 64 GB RAM machine) and it worked fine.
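If you go the chunked route, something like the following might work. It's only a sketch, assuming read_pdf returns a single DataFrame per call (as in the code above) and that a 300-page chunk fits in memory; the chunk size and output filename are placeholders:

import pandas as pd
import tabula

CHUNK = 300          # pages per chunk; adjust to what your memory allows
TOTAL_PAGES = 6041

frames = []
for start in range(1, TOTAL_PAGES + 1, CHUNK):
    end = min(start + CHUNK - 1, TOTAL_PAGES)
    # read only this range of pages
    frames.append(
        tabula.read_pdf(
            "Ativos_Fevereiro_2018_servidores_rj.pdf",
            encoding='utf-8',
            spreadsheet=True,
            pages='{}-{}'.format(start, end),
        )
    )

# stitch the chunks together and write a single CSV
pd.concat(frames, ignore_index=True).to_csv('output.csv', encoding='utf-8', index=False)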