I need a mechanism for extracting bibliographic metadata from PDF documents, to save people from entering it by hand or cut-and-pasting it.

At the very least, the title and abstract. The list of authors and their affiliations would be good. Extracting the references would be amazing.

Ideally this would be an open source solution.

The problem is that not all PDFs encode the text, and many that do fail to preserve its logical order, so just running pdftotext gives you line 1 of column 1, line 1 of column 2, line 2 of column 1, etc.

I know there are a lot of libraries. What I need to solve is identifying the title, abstract, authors, etc. within the document. This is never going to be possible every time, but 80% would save a lot of human effort.

+1  A: 

Take a look at iText. It is a Java library that will let you read PDFs. You will still face the problem of finding the right data, but the library will provide formatting and layout information that might be usable to infer purpose.

Jim Rush
+1  A: 

Another Java library to try would be PDFBox. PDFs are really designed to be viewed and printed, so you definitely want a library to do some of the heavy lifting for you. Even so, you might have to glue pieces of text back together to get the data you want extracted. Good luck!

CBFraser
A: 

pyPdf might be of help. It provides an extensive API for reading and writing the content of (unencrypted) PDF files, and it's written in Python, an easy language to work with.
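
A minimal sketch of reading metadata and text with pyPdf, assuming an unencrypted file named paper.pdf (pyPdf is Python 2 era; its later forks, PyPDF2/pypdf, have a slightly different API):

# Minimal pyPdf sketch: read embedded metadata and raw page text.
# Note: the embedded metadata is whatever the creating software wrote,
# which is often not real bibliographic data.
from pyPdf import PdfFileReader

reader = PdfFileReader(open("paper.pdf", "rb"))

info = reader.getDocumentInfo()
print("Title:  %s" % info.title)
print("Author: %s" % info.author)

# Raw text of the first page; pyPdf does not guarantee reading order,
# so multi-column layouts may come out interleaved.
print(reader.getPage(0).extractText()[:500])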

Shailesh Kumar
+1  A: 

In this case I would recommend TET from PDFlib.

If you need to get a quick feel for what it can do, take a look at the TET Cookbook.

This is not an open source solution, but it's currently the best option in my opinion. It's platform-independent, and it has a rich set of language bindings and commercial backing.

I would be happy if someone pointed me to an equivalent or better open source alternative.

To extract text you would use the TET_xxx() functions and to query metadata you can use the pcos_xxx() functions.

You can also use the command-line tool to generate an XML file containing all the information you need.

tet --tetml word file.pdf

There are examples of how to process TETML with XSLT in the TET Cookbook.

What’s included in TETML?

TETML output is encoded in UTF-8 (on zSeries with USS or MVS: EBCDIC-UTF-8, see www.unicode.org/reports/tr16) and includes the following information:

- general document information and metadata
- text contents of each page (words or paragraphs)
- glyph information (font name, size, coordinates)
- structure information, e.g. tables
- information about placed images on the page
- resource information, i.e. fonts, colorspaces, and images
- error messages if an exception occurred during PDF processing
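
If XSLT is not your thing, TETML can be post-processed in any XML-aware language. A rough sketch in Python with the standard library; note that the "Text" element name is an assumption based on TETML samples, so check the TET documentation for the actual schema:

# Rough sketch: collect per-word text from a TETML file with ElementTree.
# The element name ("Text") is an assumption based on TETML samples;
# verify it against the TET documentation.
import xml.etree.ElementTree as ET

def local_name(tag):
    # TETML is namespaced; compare on the local part of the tag only.
    return tag.rsplit("}", 1)[-1]

tree = ET.parse("file.tetml")
words = [el.text for el in tree.iter()
         if local_name(el.tag) == "Text" and el.text]
print(" ".join(words[:50]))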

Peter Lindqvist
+4  A: 

Might be a tad simplistic, but Googling "bibtex + paper title" usually gets you a formatted BibTeX entry from the ACM, CiteSeer, or other such reference-tracking sites. Of course, this assumes the paper isn't from a non-computing journal :D

-- EDIT --

I have a feeling you won't find a ready-made solution for this. You might want to write to citation trackers such as CiteSeer, ACM, and Google Scholar to get ideas about what they have done. There are tons of others, and you might find their implementations are not closed source but simply not in published form. There is plenty of research material on the subject.

The research team I am part of has looked at such problems, and we have come to the conclusion that hand-written extraction algorithms or machine learning are the way to do it. Hand-written algorithms are probably your best bet.

This is quite a hard problem due to the amount of variation possible. I suggest normalizing the PDFs to text (which you can get from any of the dozens of programmatic PDF libraries). You then need to implement custom text-scraping algorithms.

I would start backward from the end of the PDF and look at what sort of citation keys exist -- e.g., [1], [author-year], (author-year) -- and then try to parse the sentence following each one. You will probably have to write code to normalize the text you get from a library (removing extra whitespace and such). I would only look for citation keys as the first word of a line, and only in the last 10 pages of the document -- the first word must have key delimiters, e.g., '[' or '('. If no keys can be found in those 10 pages, ignore the PDF and flag it for human intervention.
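
A minimal sketch of that heuristic in Python; the key patterns and the line cut-off are illustrative assumptions, and real reference sections vary a lot:

# Sketch of the citation-key heuristic: scan lines near the end of the
# extracted text and keep those that start with a citation key such as
# "[1]", "[Smith09]", or "(Smith, 2009)". Patterns/cut-offs are assumptions.
import re

KEY_PATTERNS = [
    re.compile(r"^\[\d+\]\s+\S"),                 # [1] Author, Title...
    re.compile(r"^\[[A-Za-z]+-?\d{2,4}\]\s+\S"),  # [Smith09] ...
    re.compile(r"^\([A-Za-z]+,?\s*\d{4}\)\s+\S"), # (Smith, 2009) ...
]

def extract_references(text, tail_lines=400):
    """Return candidate reference lines found near the end of the text."""
    refs = []
    for line in text.splitlines()[-tail_lines:]:
        line = " ".join(line.split())  # normalize runs of whitespace
        if any(p.match(line) for p in KEY_PATTERNS):
            refs.append(line)
    return refs

sample = "body text\n[1] C. Gutteridge. Some Paper. 2009.\n[2] A. N. Other. 2008."
print(extract_references(sample))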

You might want a library that you can programmatically consult for formatting metadata within citations -- e.g., italics have a special meaning.

I think you might end up spending quite some time getting to a working solution, followed by a continual process of tuning and adding to the scraping algorithms/engine.

Hassan Syed
Nice idea, but I'm working on a system for putting research PDFs online, so it's the thing providing the BibTeX!
Christopher Gutteridge
I've already gotten that far. I was hoping there might be some packaged solution. It's a research-level problem :(
Christopher Gutteridge
+2  A: 

I'm only allowed one link per posting, so this is it: pdfinfo Linux manual page

This might get the title and authors. Look at the bottom of the manual page; there's a link to www.foolabs.com/xpdf, where the source code for the program can be found, as well as binaries for various platforms.
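
For scripting, pdfinfo's "Key: value" output is easy to parse. A small sketch, assuming the pdfinfo binary (from xpdf/poppler) is on the PATH:

# Small sketch: run pdfinfo and parse its "Key: value" output into a dict.
# Assumes the pdfinfo binary (from xpdf/poppler) is on the PATH.
import subprocess

def pdf_info(path):
    out = subprocess.check_output(["pdfinfo", path])
    info = {}
    for line in out.decode("utf-8", "replace").splitlines():
        key, sep, value = line.partition(":")
        if sep:
            info[key.strip()] = value.strip()
    return info

meta = pdf_info("paper.pdf")
print("%s / %s" % (meta.get("Title"), meta.get("Author")))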

To pull out bibliographic references, look at cb2Bib: "cb2Bib is a free, open source, and multiplatform application for rapidly extracting unformatted, or unstandardized bibliographic references from email alerts, journal Web pages, and PDF files." (http://www.molspaces.com/cb2bib/)

You might also want to check the discussion forums at www.zotero.org where this topic has been discussed.

MZB
pdfinfo was not very helpful. Just gives you data about the software tools. Interesting, but no help on this current problem:

Title: sv-lncs
Author: Springer-SBM
Creator: Microsoft® Office Word 2007
Producer: Microsoft® Office Word 2007
CreationDate: Mon May 4 08:53:31 2009
ModDate: Mon May 4 08:53:31 2009
Tagged: yes
Pages: 6
Encrypted: no
Page size: 595.32 x 842.04 pts (A4)
File size: 269847 bytes
Optimized: no
PDF version: 1.5
Christopher Gutteridge
I think the basic problem you're running into is that unless you're dealing with an e-publisher or a *very organized* company, you'll get marginally useful information out of the PDF metadata. So what it sounds like you're really after is a product that identifies and outputs the following from UNSTRUCTURED text: 1) Author(s) 2) Abstract 3) Bibliography information. This text can be easily extracted from a PDF (and often many other file formats), and there are many open source solutions for that. It seems cb2Bib might be a good starting point, as it should help in the bibliography arena.
Jason D
Sigh. That's why I asked the question. I develop one of the leading open source research paper repository tools, EPrints. I know what the problem is, it's just not been solved by an open source tool (yet).
Christopher Gutteridge
A: 

Just found pdftk... it's amazing, comes in a binary distribution for Win/Lin/Mac as well as source.

In fact, I solved my other problem (look at my profile; I asked and then answered another PDF question... can't link due to the 1-link limitation).

It can do PDF metadata extraction; for example, this will return the line containing the title:

pdftk test.pdf dump_data | grep -A 1 "InfoKey: Title" | grep "InfoValue"

It can dump the title, author, mod-date, and even bookmarks and page numbers (the test PDF had bookmarks)... obviously a bit of work will be needed to properly grep the output, but I think this should fit your needs.
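
The same idea in Python, pairing up the InfoKey/InfoValue lines that dump_data emits (assumes the pdftk binary is on the PATH):

# Sketch: collect pdftk dump_data's InfoKey/InfoValue pairs into a dict.
# Assumes the pdftk binary is on the PATH.
import subprocess

def pdftk_metadata(path):
    out = subprocess.check_output(["pdftk", path, "dump_data"])
    meta, key = {}, None
    for line in out.decode("utf-8", "replace").splitlines():
        if line.startswith("InfoKey: "):
            key = line[len("InfoKey: "):]
        elif line.startswith("InfoValue: ") and key:
            meta[key] = line[len("InfoValue: "):]
            key = None
    return meta

print(pdftk_metadata("test.pdf").get("Title"))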

If your PDFs don't have metadata (i.e., no "Abstract" entry), you can dump the text using a different tool like pdftotext and use grep tricks like the above. If your PDFs are not OCR'd, you have a much bigger problem, and ad hoc querying of the PDFs will be painfully slow (best to OCR them first).
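
For the abstract, one crude but workable trick is to search the pdftotext output for an "Abstract" heading. A sketch; the heading regex is an assumption and will miss unconventional layouts:

# Sketch: pull a rough abstract out of pdftotext output by grabbing the
# text between an "Abstract" heading and the next blank line. The regex
# is a heuristic assumption. Assumes the pdftotext binary (xpdf/poppler)
# is on the PATH; "-" tells it to write to stdout.
import re
import subprocess

def rough_abstract(path):
    text = subprocess.check_output(["pdftotext", path, "-"]).decode("utf-8", "replace")
    m = re.search(r"(?ims)^\s*abstract[\s:.]*\n(.+?)\n\s*\n", text)
    return m.group(1).strip() if m else None

print(rough_abstract("paper.pdf"))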

Regardless, I would recommend you build an index of your documents instead of having each query scan the file metadata/text.

r00fus
Only extracts the metadata embedded by the creating software. I need the bibliographic metadata. This can't get me the abstract. I know I have a big problem; that's why I asked the question. Looks like there's no solution available :( Google Scholar clearly has a way, but I've not got their resources.
Christopher Gutteridge
I'm pretty sure there's no pre-packaged solution for your problem. However, using tools like pdftk and pdftotext plus some Perl/shell scripting should give you that 80-90% coverage (assuming you don't have to OCR them first). I think it's a bit unfair to post this bounty without sample data, because there is no way to solve this without examining the corpus. Even commercial or pre-packaged solutions will likely need to know some details of what your content looks like, or you will need to configure and test repeatedly until you get good coverage.
r00fus
A: 

Try CiteULike. It is a website that lets you put together a library of papers, assign tags to them, search them, and attach comments. It also lets you add a button to your web browser, which will try to automatically extract the information you want, including the abstract. It doesn't really get much from a PDF itself, though. However, if you point it at a citation for a paper on IEEE Xplore, CiteSeer, or many journal sites, it is usually able to get all the BibTeX info.

The thing is that PDFs often don't have all the citation information to begin with. You would normally have the title and the authors, but not necessarily the name of the conference or the year of publication. It makes sense to first find a citation for the paper on CiteSeer, PubMed, or some other place, and extract the information from there.

In general, I have found CiteULike to be extremely useful for organizing papers. It is also useful for collaborating with other people. You can create groups, share papers, set up forums, etc.

Dima
A: 

Have a look at this research paper: Accurate Information Extraction from Research Papers using Conditional Random Fields.

You might want to use an open-source package like Stanford NER to get started on CRFs.

Or perhaps you could try importing them (the research papers) into Mendeley. Apparently, it should extract the necessary information for you.

Hope this helps.

Bart J
A: 

We ran a contest to solve this problem at Dev8D in London in February 2010, and a nice little GPL tool was created as a result. We've not yet integrated it into our systems, but it's out there in the world.

https://code.google.com/p/pdfssa4met/

Christopher Gutteridge