I'm working on a personal project that analyzes text stored in a database. My intent is to do something interesting and learn about SQL and SQLite along the way. So, with my novice abilities in mind, I'd like some advice on doing this more efficiently.
Say, for example, I want to pick out the types of food in an article `A`. I parse the article, and if I find a food `F`, I add `F` to the `items` table, then add `A.id` and `F.id` to `results`. If, while parsing, I find a food `G` that already exists in `items`, all I do is add `A.id` and `G.id` to `results`.
So my schemas look something like the following:
- `articles`: `id`, `article`
- `results`: `id`, `item_id`, `article_id`
- `items`: `id`, `foodtype`, `food`
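To make this concrete, here is a minimal sketch of the schema and the insert-or-reuse flow using Python's `sqlite3` module. The column types and the `record_food` helper are my own illustration; the real parser and tables may differ.

```python
import sqlite3

# In-memory database for illustration; column types are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, article TEXT);
CREATE TABLE items    (id INTEGER PRIMARY KEY, foodtype TEXT, food TEXT);
CREATE TABLE results  (id INTEGER PRIMARY KEY, item_id INTEGER, article_id INTEGER);
""")

def record_food(conn, article_id, foodtype, food):
    """Add the food to items if unseen, then link it to the article in results."""
    row = conn.execute("SELECT id FROM items WHERE food = ?", (food,)).fetchone()
    if row:
        item_id = row[0]
    else:
        cur = conn.execute(
            "INSERT INTO items (foodtype, food) VALUES (?, ?)", (foodtype, food))
        item_id = cur.lastrowid
    conn.execute(
        "INSERT INTO results (item_id, article_id) VALUES (?, ?)",
        (item_id, article_id))

conn.execute("INSERT INTO articles VALUES (1, 'sample article text')")
record_food(conn, 1, "fruit", "orange")
record_food(conn, 1, "fruit", "orange")  # repeat sighting: items keeps one row
```

A repeated food only adds a new row to `results`, never a duplicate row in `items`.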
If I want to find all the articles that talk about oranges and grapes and any vegetable, then I'd start with something like this:
```sql
SELECT *
FROM articles
INNER JOIN results ON articles.id = results.article_id
INNER JOIN items ON results.item_id = items.id
```
and add:
```sql
WHERE foodtype='vegetable' OR food='orange' OR food='grape'
```
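For reference, here is the full query run against a tiny invented dataset (the articles and foods below are made up purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, article TEXT);
CREATE TABLE items    (id INTEGER PRIMARY KEY, foodtype TEXT, food TEXT);
CREATE TABLE results  (id INTEGER PRIMARY KEY, item_id INTEGER, article_id INTEGER);
INSERT INTO articles VALUES (1, 'citrus piece'), (2, 'soup piece');
INSERT INTO items VALUES (1, 'fruit', 'orange'), (2, 'vegetable', 'carrot');
INSERT INTO results VALUES (1, 1, 1), (2, 2, 2);
""")

# DISTINCT collapses articles that match more than one food.
rows = conn.execute("""
    SELECT DISTINCT articles.id
    FROM articles
    INNER JOIN results ON articles.id = results.article_id
    INNER JOIN items ON results.item_id = items.id
    WHERE foodtype='vegetable' OR food='orange' OR food='grape'
""").fetchall()
# Both sample articles match one of the OR conditions.
```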
In reality, my database is much bigger: there are thousands of articles and over a hundred thousand extracted "foods." Most of these queries where I join the 3 tables never return, even if I limit the output to 100 results. I've tried creating an index on fields that commonly appear in my `WHERE` clauses, like `food` and `foodtype`, but have seen no improvement.
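This is roughly how I created the indexes and inspected what SQLite does with them via `EXPLAIN QUERY PLAN` (a sketch; the index names are mine, and the tables are left empty since the plan is what matters here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, article TEXT);
CREATE TABLE items    (id INTEGER PRIMARY KEY, foodtype TEXT, food TEXT);
CREATE TABLE results  (id INTEGER PRIMARY KEY, item_id INTEGER, article_id INTEGER);
CREATE INDEX idx_items_food     ON items(food);
CREATE INDEX idx_items_foodtype ON items(foodtype);
""")

# Each row of the plan describes one scan/search step; the last column
# is a human-readable description of how the table is accessed.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT *
    FROM articles
    INNER JOIN results ON articles.id = results.article_id
    INNER JOIN items ON results.item_id = items.id
    WHERE foodtype='vegetable' OR food='orange' OR food='grape'
""").fetchall()

for row in plan:
    print(row[-1])
```

Lines reading `SCAN` (rather than `SEARCH ... USING INDEX`) indicate a full-table scan on that step.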
Are there improvements that I can make to my database or query?