+3  Q: 

http checks python

Hi gentlemen, learning Python here. I want to check whether anybody is running a web server on my local network using this code, but it gives me a lot of errors in the console.

#!/usr/bin/env python

import httplib
last = 1
while last <> 255:
        url = "10.1.1." + "last"
        connection = httplib.HTTPConnection("url", 80)
        connection.request("GET","/")
        response = connection.getresponse()
        print (response.status)
        last = last + 1
+1  A: 

Remove the quotes from the variable names last and url. Python is interpreting them as strings rather than variables. Try this:

#!/usr/bin/env python

import httplib
last = 1
while last <> 255:
        url = "10.1.1.%d" % last
        connection = httplib.HTTPConnection(url, 80)
        connection.request("GET","/")
        response = connection.getresponse()
        print (response.status)
        last = last + 1
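A quick way to see the difference the quotes make (a small illustration, using the same %-formatting as above):

```python
last = 7
# quoting the name concatenates the literal text "last"
assert "10.1.1." + "last" == "10.1.1.last"
# %-formatting substitutes the integer's value instead
assert "10.1.1.%d" % last == "10.1.1.7"
```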
Graeme Perrow
+1  A: 

You're trying to connect to a URL that is literally the string 'url': that's what the quotes you're using in

    connection = httplib.HTTPConnection("url", 80)

mean. Once you remedy that (by removing those quotes) you'll be trying to connect to "10.1.1.last", given the quotes in the previous line. Set that line to

    url = "10.1.1." + str(last)

and it could work!-)
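The str() call matters because + won't mix strings and integers; a quick check:

```python
last = 7
# converting to str first makes the concatenation work
assert "10.1.1." + str(last) == "10.1.1.7"
# without the conversion, Python raises TypeError
try:
    "10.1.1." + last
except TypeError:
    pass
else:
    raise AssertionError("expected a TypeError")
```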

Alex Martelli
+1  A: 

I do suggest changing the while loop to the more idiomatic for loop, and handling exceptions:

#!/usr/bin/env python

import httplib
import socket


for i in range(1, 256):
    try:
        url = "10.1.1.%d" % i
        connection = httplib.HTTPConnection(url, 80)
        connection.request("GET","/")
        response = connection.getresponse()
        print url + ":", response.status
    except socket.error:
        print url + ":", "error!"

To see how to add a timeout to this, so it doesn't take so long to check each server, see here.
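One portable way, as a sketch, is socket.setdefaulttimeout(), which the sockets httplib creates will pick up (Python 2.6's HTTPConnection also accepts a timeout argument directly):

```python
import socket

# Set a process-wide default timeout (in seconds) for new sockets;
# httplib's connections inherit it, so dead hosts fail quickly
# instead of hanging for the OS default.
socket.setdefaulttimeout(2)
assert socket.getdefaulttimeout() == 2
```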

steveha
A: 

As pointed out, you have some basic quotation issues. But more fundamentally:

  1. You're not using Pythonesque constructs; you're coding things as simple imperative code. That's fine, of course, but below are examples of more fun (and better) ways to express things.
  2. You need to explicitly set timeouts or it'll take forever.
  3. You need to multithread or it'll take forever.
  4. You need to handle various common exception types or your code will crash: connections will fail (including timing out) under numerous conditions against real web servers.
  5. 10.1.1.* is only one possible set of "local" servers. RFC 1918 spells out that the "local" ranges are 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, and 192.168.0.0 - 192.168.255.255. The problem of generic detection of responders on your "local" network is a hard one.
  6. Web servers (especially local ones) often run on ports other than 80 (notably 8000, 8001, or 8080).
  7. The complexity of general web servers, DNS, etc. is such that you can get various timeout behaviors at different times (and affected by recent operations).
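As a sketch of point 5, here is a minimal membership test for those three RFC 1918 ranges (is_rfc1918 is a name invented for this illustration):

```python
def is_rfc1918(ip):
    # Check whether a dotted-quad IPv4 address falls in an RFC 1918
    # private range: 10/8, 172.16/12, or 192.168/16.
    a, b = [int(part) for part in ip.split(".")][:2]
    if a == 10:
        return True
    if a == 172 and 16 <= b <= 31:
        return True
    if a == 192 and b == 168:
        return True
    return False

assert is_rfc1918("10.1.1.5")
assert is_rfc1918("172.20.0.1")
assert is_rfc1918("192.168.1.1")
assert not is_rfc1918("209.131.36.1")
```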

Below is some sample code to get you started; it pretty much addresses all of the above problems except (5), which I'll assume is (well) beyond the scope of the question.

BTW, I'm printing the size of the returned web page, since it's a simple "signature" of what the page is. The sample IPs return various Yahoo assets.

import urllib
import threading
import socket

def t_run(thread_list, chunks):
    t_count = len(thread_list)
    print "Running %s jobs in groups of %s threads" % (t_count, chunks)
    for x in range(t_count / chunks + 1):
        i = x * chunks
        i_c = min(i + chunks, t_count)
        c = len([t.start() for t in thread_list[i:i_c]])
        print "Started %s threads for jobs %s...%s" % (c, i, i_c - 1)
        c = len([t.join() for t in thread_list[i:i_c]])
        print "Finished %s threads for job index %s" % (c, i)

def url_scan(ip_base, timeout=5):
    socket.setdefaulttimeout(timeout)
    def f(url):
        # print "-- Trying (%s)" % url
        try:
            # the print will only complete if there's a server there
            r = urllib.urlopen(url)
            if r:
                print "## (%s) got %s bytes" % (url, len(r.read()))
            else:
                print "## (%s) failed to connect" % url
        except IOError, msg:
            # these are just the common cases
            if str(msg) == "[Errno socket error] timed out":
                return
            if str(msg) == "[Errno socket error] (10061, 'Connection refused')":
                return
            print "## (%s) got error '%s'" % (url, msg)
    # you might want 8000 and 8001, too
    return [threading.Thread(target=f,
                             args=("http://" + ip_base + str(x) + ":" + str(p),))
            for x in range(255) for p in [80, 8080]]

# run them (increase chunk size depending on your memory)
# also, try different timeouts
t_run(url_scan("209.131.36."), 100)
t_run(url_scan("209.131.36.", 30), 100)
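The chunking arithmetic in t_run can be checked in isolation (written here with // so the integer division also behaves this way under Python 3):

```python
t_count, chunks = 510, 100
# each pass starts `chunks` more threads; the final slice may be short
groups = [(x * chunks, min(x * chunks + chunks, t_count))
          for x in range(t_count // chunks + 1)]
assert groups[0] == (0, 100)
assert groups[-1] == (500, 510)
# every job index is covered exactly once
assert sum(hi - lo for lo, hi in groups) == t_count
```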
Peter S Magnusson