extract_text

238 usages across 12 PDFs

Bad OCR in a board of education annual financial report

This PDF contains all sorts of information about the Board of Education in Liberty County, Georgia.

text = page.extract_text()
print(text)

with open("content.txt", 'w') as fp:
    fp.write(text)
View full example →

Complex Extraction of Law Enforcement Complaints

This PDF contains a set of complaint records from a local law enforcement agency. Challenges include its relational data structure, unusual formatting common in the region, and redactions that disrupt automatic parsing.

complainant = (
  page
  .find("text:contains(Complainant)")
  .right(until='text')
)
print("Complainant is", complainant.extract_text())
complainant.show(crop=100)
View full example →
dob = (
  page
  .find("text:contains(DOB)")
  .right(until='text')
)
print("DOB is", dob.extract_text())
dob.show(crop=100)
View full example →
number = (
    page
    .find("text:contains(Number)")
    .below(until='text', width='element')
)
print("Number is", number.extract_text())
number.show(crop=100)
View full example →

Complex Table Extraction from OECD Czech PISA Assessment

This PDF is a document from the OECD regarding the PISA assessment, provided in Czech. The main extraction goal is to get the survey question table found on page 9. Challenges include an unusual table format that makes automatic extraction difficult.

results = []
for question, answer_area in zip(questions, answer_areas):
    result = {}
    result['question'] = question.extract_text()
    result['notes'] = (
        answer_area
        .find_all('text:italic:not-empty[size>8]')
        .extract_text()
    )
    result['answers'] = (
        answer_area
        .find_all('text:not(:italic):not-empty[size>8]')
        .extract_text()
    )
    results.append(result)
print("Found", len(results))
View full example →
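Once the loop has run, `results` is a plain list of dicts, which drops straight into pandas for analysis or export. A minimal sketch with synthetic stand-in rows (the real values come from the PDF):

```python
import pandas as pd

# Synthetic stand-in for the `results` list built in the loop above.
results = [
    {'question': 'Q1', 'notes': 'poznámka kurzívou', 'answers': 'Ano Ne'},
    {'question': 'Q2', 'notes': '', 'answers': 'Ano Ne Nevím'},
]

df = pd.DataFrame(results)  # one row per question, columns match the dict keys
print(df.shape)  # (2, 3)
```

From here `df.to_csv(...)` gets the table out of the notebook and into a spreadsheet.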

Extracting Business Insurance Details from BOP PDF

This PDF is a complex insurance policy document generated for small businesses requiring BOP coverage. It contains an overwhelming amount of information across 111 pages. Challenges include forms that vary slightly between carriers, making extraction inconsistent, and differing templated layouts, meaning even standard sections can shift position when generated by different software.

policy_number = (
    page
    .find(text="POLICY NUMBER")
    .right(until='text')
    .extract_text()
)
View full example →
mailing_address = (
    page
    .find(text="Mailing Address")
    .expand(bottom='text')
    .right()
    .extract_text()
)
View full example →
text = page.extract_text()
print(text)
View full example →

Extracting Data Tables from Oklahoma Booze Licensees PDF

This PDF contains detailed tables listing alcohol licensees in Oklahoma. It has multi-line cells that make accurate data extraction difficult. Challenges include alternating row colors instead of ruling lines ("zebra stripes"), which complicate row differentiation and extraction.

print("Before exclusions:", page.extract_text()[:200])

# Add exclusions
pdf.add_exclusion(lambda page: page.find(text="PREMISE").above())
pdf.add_exclusion(lambda page: page.find(r"text:regex(Page \d+ of)").expand())

print("After exclusions:", page.extract_text()[:200])

# Preview
page.show(exclusions='red')
View full example →
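The exclusions above handle headers and footers; the multi-line cells are easier to repair after extraction. A sketch in plain pandas (not a Natural PDF feature), assuming continuation rows come out with an empty license column:

```python
import pandas as pd

# Synthetic rows standing in for an extracted licensee table: the second
# row is a continuation of the first (its license cell is empty).
rows = [
    {'license': 'ABC-123', 'name': 'FIRST HALF OF A'},
    {'license': None,      'name': 'LONG PREMISE NAME'},
    {'license': 'DEF-456', 'name': 'SHORT NAME'},
]
df = pd.DataFrame(rows)

# Carry each license number down over its continuation rows, then glue
# the name fragments back together.
df['license'] = df['license'].ffill()
merged = df.groupby('license', sort=False)['name'].apply(' '.join).reset_index()
print(merged)
```

The same ffill-then-group pattern works for any column that only appears on the first line of a multi-line cell.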

Extracting Economic Data from Brazil's Central Bank PDF

This PDF is the weekly “Focus” report from Brazil’s central bank with economic projections and statistics. Challenges include commas instead of decimal points, images showing projection changes, and tables without border lines that merge during extraction.

        .to_df(header=False)
        .dropna(axis=0, how='all')
        .assign(
            year=section.find(r'text[size~=10]:regex(\d\d\d\d)').extract_text(),
            value=headers
        )
    )
View full example →
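The comma decimals are simplest to fix after extraction. A sketch in plain pandas (sample values, not figures from the report): strip the thousands dots, swap the comma for a point, and cast:

```python
import pandas as pd

# Brazilian number formatting uses '.' for thousands and ',' for decimals,
# so the extracted values arrive as strings.
raw = pd.Series(['4,84', '1.234,56', '3,25'])
numbers = (
    raw
    .str.replace('.', '', regex=False)   # drop thousands separators
    .str.replace(',', '.', regex=False)  # comma -> decimal point
    .astype(float)
)
print(numbers.tolist())  # [4.84, 1234.56, 3.25]
```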

Extracting State Agency Call Center Wait Times from FOIA PDF

This PDF contains data on wait times at a state agency call center. The main focus is the data on the first two pages, which matches other states' submission formats. The later pages provide granular breakdowns over several years. Challenges include heavy pixelation that makes numbers and text hard to read, along with inconsistent and unreadable charts.

# No results? Needs OCR!
print(page.extract_text())
View full example →
print(page.extract_text(layout=True))
View full example →

Extracting Text from Georgia Legislative Bills

This PDF contains legislative bills from the Georgia legislature, published yearly. Challenges include extracting marked-up text such as underlines and strikethroughs, plus line numbers that complicate text extraction.

text = page.extract_text()
print(text)
View full example →
underlined = page.find_all('text:underline')
print("Underlined text is", underlined.extract_text())
underlined.show(crop='wide')
View full example →
text = pdf.find_all('text:underline').extract_text()
print(text)
View full example →
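The line numbers can also be stripped after the fact with a regex pass over the extracted text; plain Python, with sample text standing in for real extracted output:

```python
import re

# Each line of a bill page starts with a margin line number, which
# extract_text() interleaves with the body text.
text = (
    "1 A BILL to be entitled an Act\n"
    "2 to amend Code Section 16-11-129\n"
    "3 of the O.C.G.A."
)
cleaned = "\n".join(
    re.sub(r'^\s*\d+\s+', '', line) for line in text.splitlines()
)
print(cleaned)
```

An alternative is to exclude the number column spatially with `pdf.add_exclusion` before extracting, which avoids mangling body lines that legitimately begin with a digit.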

ICE Detention Facilities Compliance Report Extraction

This PDF is an ICE report on compliance among detention facilities over the last 20-30 years. Our aim is to extract facility statuses and contract signatories' names and dates. Challenges include strange redactions, blobby text, poor contrast, and ineffective OCR, plus handwritten signatures and dates that are redacted.

# pdf.apply_ocr(resolution=192) if we wanted the whole thing
page.apply_ocr(resolution=192)
text = page.extract_text()[:200]
print(text)
View full example →
label = (
    left_col
    .find("text:closest(Dates of Review)")
)
print("Found", label.extract_text())
label.show(crop=20)
View full example →
    answer = (
      label
      .below(until='text', anchor='start')
    )
    print(answer.extract_text('words'))
View full example →

Natural PDF basics with text and tables

Learn the fundamentals of Natural PDF - opening PDFs, extracting text with layout preservation, selecting elements by criteria, spatial navigation, and managing exclusion zones. Perfect starting point for PDF data extraction.

text = page.extract_text(layout=True)
print(text)
View full example →
text = page.find('rect').extract_text()
print(text)
View full example →
# Find red text
red_text = page.find('text[color~=red]')
print(red_text.extract_text())
View full example →

OCR and AI magic

Master OCR techniques with Natural PDF - from basic text recognition to advanced LLM-powered corrections. Learn to extract text from image-based PDFs, handle tables without proper boundaries, and leverage AI for accuracy improvements.

text = page.extract_text()
print(text)
View full example →

def correct_text_region(region):
    text = region.extract_text()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
View full example →

Working with page structure

Extract text from complex multi-column layouts while maintaining proper reading order. Learn techniques for handling academic papers, newsletters, and documents with intricate column structures using Natural PDF's layout detection features.

page.find('table').apply_ocr()
text = page.extract_text()
print(text)
View full example →

# Take one of the columns and apply OCR to it
cols[2].apply_ocr()
text = cols[2].extract_text()
print(text)
View full example →
text = table_area.extract_text()
print(text)
View full example →