This is my code:
from xgoogle.search import GoogleSearch, SearchError
import urllib, urllib2, sys, argparse

global stringArr
stringArr = ["string 1",
             "string 2",
             "string 3",
             "string etc"]

def searchIt(url):
    try:
        if(args.verbose>='1'): print "[INFO] Opening URL: "+url
        response = urllib.urlopen(url)
    except urllib2.URLError, e:
        print "[ERROR] "+e.reason
        return False
    except KeyboardInterrupt:
        print "Suspended by user..."
        sys.exit()
    if(checkForStr(response.read())):
        if(args.verbose=='0'): print "[INFO] String found in URL: "+url
    else:
        if(args.verbose>='1'): print "[INFO] No string found in URL: "+url

def checkForStr(html):
    global stringArr
    try:
        if any(checkStr in html for checkStr in stringArr):
            return True
        else:
            return False
    except KeyboardInterrupt:
        print "Suspended by user..."
        sys.exit()

def main():
    try:
        i = 0
        gs = GoogleSearch(args.keyword)
        gs.results_per_page = 100
        results = []
        while True:
            tmp = gs.get_results()
            i = i+1  # page number
            if not tmp:  # no more results (pages) were found
                break
            results.extend(tmp)
            for r in results:  # process results for page
                searchIt(r.url)  # check for string
            del results[:]  # clean results
        # finished
    except SearchError, e:
        print "[ERROR] Search failed: %s" % e
    except KeyboardInterrupt:
        print "Suspended by user..."
        sys.exit()

if __name__ == '__main__':
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument('-v', dest='verbose', default='0', help='Verbosity level', choices='012')
        group = parser.add_argument_group('required arguments')
        group.add_argument('-k', dest='keyword', help='Keyword to use on google query', required=True)
        args = parser.parse_args()
        main()
    except KeyboardInterrupt:
        print "Suspended by user..."
        sys.exit()
I've shortened it a little to make it easier to read, but it should still be functional. This code will be part of a bigger script.
I am using the xgoogle library to scrape the results from Google, and then I visit each result to check whether the page contains any of the strings from stringArr.
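In isolation, the string check is just a substring membership test over the list. A minimal self-contained version (same names as in my script, written so it runs on both Python 2 and 3):

```python
# Stand-alone version of the checkForStr helper from the script above.
stringArr = ["string 1",
             "string 2",
             "string 3",
             "string etc"]

def checkForStr(html):
    # True as soon as any needle from stringArr occurs in the page text.
    return any(checkStr in html for checkStr in stringArr)
```

`any()` short-circuits, so it stops scanning as soon as one needle matches.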
My first tests ran without any problem (I Ctrl+C'd the script after fewer than 10 results), but the first time I let it run unattended, it died with this error after about 100 URLs tested:
  File "./StringScan.py", line 99, in <module>
    main()
  File "./StringScan.py", line 83, in main
    checkForStr(r.url)
  File "./StringScan.py", line 39, in checkForStr
    response = urllib.urlopen(url)
  File "/usr/lib/python2.6/urllib.py", line 86, in urlopen
    return opener.open(url)
  File "/usr/lib/python2.6/urllib.py", line 205, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.6/urllib.py", line 344, in open_http
    h.endheaders()
  File "/usr/lib/python2.6/httplib.py", line 904, in endheaders
    self._send_output()
  File "/usr/lib/python2.6/httplib.py", line 776, in _send_output
    self.send(msg)
  File "/usr/lib/python2.6/httplib.py", line 735, in send
    self.connect()
  File "/usr/lib/python2.6/httplib.py", line 716, in connect
    self.timeout)
  File "/usr/lib/python2.6/socket.py", line 500, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno -2] Name or service not known
(The line numbers don't match my listing above because I modified the code to post it here.)
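One thing I notice from the traceback: the IOError propagated all the way out, even though I have an except clause around the urlopen call. My except names urllib2.URLError, and as far as I understand URLError is a subclass of IOError, so catching the subclass doesn't catch a plain IOError. A toy repro of that mismatch (URLErrorLike is a hypothetical stand-in for urllib2.URLError, and the openers are dummies, not real network calls):

```python
class URLErrorLike(IOError):
    # Stand-in for urllib2.URLError, which subclasses IOError.
    pass

def fetch(opener, url):
    try:
        return opener(url)
    except URLErrorLike:
        # Too narrow: a plain IOError is NOT a URLErrorLike,
        # so it slips past this handler, like in my traceback.
        return None

def bad_opener(url):
    # Simulates urllib.urlopen failing on a dead hostname.
    raise IOError("Name or service not known")
```

Here fetch(bad_opener, ...) still raises IOError, which would kill the script exactly like mine died.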
After that I got my Linux terminal back, as if the script had finished. But my PC wasn't behaving well; System Monitor showed the Python process using 1.3 GB of memory, and I had to kill the process to get the machine back to normal.
Is something in my code causing this, or why else could it happen? I know my code may have other errors, but right now I am mainly interested in whatever is causing the memory problem. Any help will be appreciated.
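One suspect I've considered is the unbounded response.read() in searchIt, which loads each page fully into memory. A capped chunked reader (just a sketch of what I might switch to, not what the script currently does; the names and limits are my own) would bound memory per page:

```python
import io

def read_capped(fileobj, max_bytes=1024 * 1024, chunk_size=64 * 1024):
    # Read at most max_bytes from a file-like object, in chunks, so one
    # huge (or never-ending) response cannot exhaust memory on its own.
    parts = []
    remaining = max_bytes
    while remaining > 0:
        chunk = fileobj.read(min(chunk_size, remaining))
        if not chunk:  # EOF reached before hitting the cap
            break
        parts.append(chunk)
        remaining -= len(chunk)
    return "".join(parts)

# read_capped(io.StringIO("x" * 100), max_bytes=10) -> "xxxxxxxxxx"
```

I'd then call checkForStr(read_capped(response)) instead of checkForStr(response.read()), on the assumption that my target strings appear early in the page.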