views: 94

answers: 4

Hello everyone, I am trying to process various texts with regular expressions and Python's NLTK, which is at http://www.nltk.org/book. I am trying to create a random text generator and I am having a slight problem. First, here is my code flow:

  1. Enter a sentence as input (this is called the trigger string and is assigned to a variable).

  2. Get the longest word in the trigger string.

  3. Search the whole Project Gutenberg database for sentences that contain this word (regardless of upper or lower case).

  4. Return the longest sentence that contains the word from step 2.

  5. Append the sentences from step 1 and step 4 together.

  6. Assign the sentence from step 4 as the new trigger sentence and repeat the process. (Note that I then have to get the longest word in the second sentence and continue like that, and so on.)

So far, I have been able to do this only once. When I try to keep it going, the program just keeps printing the first sentence my search yields. It should instead look for the longest word in this new sentence and keep applying the code flow described above.

Below is my code, along with a sample input/output:

Sample input

"Thane of code"

Sample output

"Thane of code Norway himselfe , with terrible numbers , Assisted by that most disloyall Traytor , The Thane of Cawdor , began a dismall Conflict , Till that Bellona ' s Bridegroome , lapt in proofe , Confronted him with selfe - comparisons , Point against Point , rebellious Arme ' gainst Arme , Curbing his lauish spirit : and to conclude , The Victorie fell on vs"

Now it should take the sentence that starts with 'Norway himselfe...', look for the longest word in it, and apply the steps above again, and so on, but it doesn't. Any suggestions? Thanks.

import nltk
from nltk.corpus import gutenberg

triggerSentence = raw_input("Please enter the trigger sentence: ")#get input str
split_str = triggerSentence.split()#split the sentence into words

longestLength = 0
longestString = ""
montyPython = 1

while montyPython:

    #code to find the longest word in the trigger sentence input
    for piece in split_str:
        if len(piece) > longestLength:
            longestString = piece
            longestLength = len(piece)


    listOfSents = gutenberg.sents() #all sentences of gutenberg are assigned -list of list format-

    listOfWords = gutenberg.words()# all words in gutenberg books -list format-
    # I tip my hat to Mr. Alex Martelli for this part, which helps me find the longest sentence
    lt = longestString.lower() # lower-case the longest word so the search below is case-insensitive

    longestSentence = max((listOfWords for listOfWords in listOfSents if any(lt == word.lower() for word in listOfWords)), key = len)
    # get the longest matching sentence (a list whose elements are the words of that sentence)

    longestSent=[longestSentence]

    for word in longestSent:#convert the list longestSentence to an actual string
        sstr = " ".join(word)
    print triggerSentence + " "+ sstr
    triggerSentence = sstr
A: 

You are assigning `split_str` outside of the loop, so it gets the original value and then keeps it. You need to assign it at the beginning of the while loop so that it changes each time.

import nltk
from nltk.corpus import gutenberg

triggerSentence = raw_input("Please enter the trigger sentence: ")#get input str

longestLength = 0
longestString = ""
montyPython = 1

while montyPython:
    #so this is run every time through the loop
    split_str = triggerSentence.split()#split the sentence into words

    #code to find the longest word in the trigger sentence input
    for piece in split_str:
        if len(piece) > longestLength:
            longestString = piece
            longestLength = len(piece)


    listOfSents = gutenberg.sents() #all sentences of gutenberg are assigned -list of list format-

    listOfWords = gutenberg.words()# all words in gutenberg books -list format-
    # I tip my hat to Mr. Alex Martelli for this part, which helps me find the longest sentence
    lt = longestString.lower() # lower-case the longest word so the search below is case-insensitive

    longestSentence = max((listOfWords for listOfWords in listOfSents if any(lt == word.lower() for word in listOfWords)), key = len)
    # get the longest matching sentence (a list whose elements are the words of that sentence)

    longestSent=[longestSentence]

    for word in longestSent:#convert the list longestSentence to an actual string
        sstr = " ".join(word)
    print triggerSentence + " "+ sstr
    triggerSentence = sstr
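
Along the same lines, longestLength and longestString also keep their values between iterations, so computing the longest word fresh from the current trigger each time avoids that carry-over. A small helper like this (illustrative, not part of the code above) could replace the inner for loop:

def longest_word(sentence):
    """Illustrative helper: return the longest word in a sentence string (the first one, on ties)."""
    return max(sentence.split(), key=len)

Inside the while loop, lt = longest_word(triggerSentence).lower() would then take the place of split_str and the for loop.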
Michael Patterson
+1  A: 

How about this?

  1. You find the longest word in the trigger.
  2. You find the longest word in the longest sentence containing the word found in step 1.
  3. The word from step 1 is again the longest word of the sentence from step 2.

What happens? Hint: the answer starts with "Infinite". To correct the problem, you might find a set of already-used words, kept in lower case, useful.
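
Here is a minimal sketch of that idea (the helper name and loop structure are illustrative, not taken from your code): keep the lower-cased words already used as triggers in a set, skip them when choosing the next longest word, and stop when nothing new is left.

from nltk.corpus import gutenberg

def pick_longest_unused(words, used):
    """Return the longest lower-cased word not in `used`, or None if all have been used."""
    candidates = [w.lower() for w in words if w.lower() not in used]
    return max(candidates, key=len) if candidates else None

used = set()                    # sketch only: words already used as triggers
trigger = "Thane of code".split()
sentences = gutenberg.sents()   # each sentence is a list of word tokens

word = pick_longest_unused(trigger, used)
while word is not None:
    used.add(word)
    matches = [s for s in sentences if any(word == w.lower() for w in s)]
    if not matches:
        break
    trigger = max(matches, key=len)
    print " ".join(trigger)
    word = pick_longest_unused(trigger, used)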

BTW, when do you think montyPython becomes False and the program finishes?

Tony Veijalainen
+1  A: 

Rather than searching the entire corpus each time, it may be faster to construct a single map from word to the longest sentence containing that word. Here's my (untested) attempt to do this.

import collections
from nltk.corpus import gutenberg

def words_in(sentence):
    """Generate all words in the sentence (lower-cased)"""
    for word in sentence.split():
        word = word.strip('.,"\'-:;')
        if word:
            yield word.lower()

def make_sentence_map(sentences):
    """Construct a map from each word to the longest sentence containing that word."""
    result = collections.defaultdict(str)
    for tokens in sentences:          # gutenberg.sents() yields each sentence as a list of tokens
        sentence = " ".join(tokens)   # work with the sentence as a plain string
        for word in words_in(sentence):
            if len(sentence) > len(result[word]):
                result[word] = sentence
    return result

def generate_random_text(sentence, sentence_map):
    while True:
        yield sentence
        longest_word = max(words_in(sentence), key=len)
        sentence = sentence_map[longest_word]

sentence_map = make_sentence_map(gutenberg.sents())
for sentence in generate_random_text('Thane of code.', sentence_map):
    print sentence
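
Since generate_random_text yields sentences forever, the final loop above never stops on its own; one way to take just the first few sentences instead is itertools.islice, for example:

import itertools

# take only the first five sentences from the otherwise endless generator
for sentence in itertools.islice(generate_random_text('Thane of code.', sentence_map), 5):
    print sentence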
Paul Hankin
+1 elegant indeed, but sarevok should take note of the `yield` statement and read up on generators.
msw
A: 

Mr. Hankin's answer is more elegant, but the following is more in keeping with the approach you began with:

import sys
import string
import nltk
from nltk.corpus import gutenberg

def longest_element(p):
    """return the first element of p which has the greatest len()"""
    max_len = 0
    elem = None
    for e in p:
        if len(e) > max_len:
            elem = e
            max_len = len(e)
    return elem

def downcase(p):
    """returns a list of words in p shifted to lower case"""
    return map(string.lower, p)


def unique_words():
    """it turns out unique_words was never referenced so this is here
       for pedagogy"""
    # there are 2.6 million words in the gutenberg corpus but only ~42k unique
    # ignoring case, let's pare that down a bit
    words = set()
    for word in gutenberg.words():
        words.add(word.lower())
    print 'gutenberg.words() has', len(words), 'unique caseless words'
    return words

print 'loading gutenburg corpus...'
sentences = []
for sentence in gutenberg.sents():
    sentences.append(downcase(sentence))

trigger = sys.argv[1:]
target = longest_element(trigger).lower()
last_target = None

while target != last_target:
    matched_sentences = []
    for sentence in sentences:
        if target in sentence:
            matched_sentences.append(sentence)

    print '=== target', target, 'matched', len(matched_sentences), 'sentences'
    longestSentence = longest_element(matched_sentences)
    print ' '.join(longestSentence)

    trigger = longestSentence
    last_target = target
    target = longest_element(trigger).lower()

Given your sample sentence though, it reaches fixation in two cycles:

$ python nltkgut.py Thane of code
loading gutenburg corpus...
=== target thane matched 24 sentences
norway himselfe , with terrible numbers , assisted by that most disloyall traytor , the thane of cawdor , began a dismall conflict , till that bellona ' s bridegroome , lapt in proofe , confronted him with selfe - comparisons , point against point , rebellious arme ' gainst arme , curbing his lauish spirit : and to conclude , the victorie fell on vs
=== target bridegroome matched 1 sentences
norway himselfe , with terrible numbers , assisted by that most disloyall traytor , the thane of cawdor , began a dismall conflict , till that bellona ' s bridegroome , lapt in proofe , confronted him with selfe - comparisons , point against point , rebellious arme ' gainst arme , curbing his lauish spirit : and to conclude , the victorie fell on vs

Part of the trouble with the response to the last problem is that it did what you asked, but you asked a more specific question than you wanted an answer to. Thus the response got bogged down in some rather complicated list expressions that I'm not sure you understood. I suggest that you make more liberal use of print statements and don't import code if you don't know what it does. While unwrapping the list expressions I found (as noted) that you never used the corpus wordlist. Functions are a help also.

msw
@msw, I must admit I started this Python business a bit hastily, because I am currently doing it alongside my summer internship. Thanks for the tips and the answer.
sarevok