How to remove text between <script> and </script> using Python?
views: 910
answers: 9
You can do this with the HTMLParser module (complicated) or use regular expressions:
import re

content = "asdf <script> bla </script> end"
x = re.search("<script>.*?</script>", content, re.DOTALL)
span = x.span()  # gives (5, 27)
stripped_content = content[:span[0]] + content[span[1]:]  # -> "asdf  end"
EDIT: re.DOTALL, thanks to tgray
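Note that re.search only finds the first occurrence; a quick sketch (mine, not part of the answer above) using re.sub removes every <script> block in one call:

import re

content = "asdf <script> bla </script> middle <script> more </script> end"
# re.sub drops every non-overlapping match, not just the first one
stripped_content = re.sub(r"<script>.*?</script>", "", content, flags=re.DOTALL)
print(stripped_content)  # "asdf  middle  end" (doubled spaces where the blocks were)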
I don't know Python well enough to give you a solution. But if you want to use this to sanitize user input, you have to be very, very careful. Removing stuff between <script> and </script> just doesn't catch everything. Maybe you can have a look at existing solutions (I assume Django includes something like this).
You can use BeautifulSoup with this method (and others):
from bs4 import BeautifulSoup  # with the older BeautifulSoup 3: from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(source.lower())
to_extract = soup.findAll('script')
for item in to_extract:
    item.extract()
This actually removes the nodes from the HTML. If you wanted to leave the empty <script></script> tags, you'd have to work with the item attributes rather than just extracting it from the soup.
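One way to get that effect (a sketch of my own, using bs4's Tag.clear() rather than the attribute manipulation hinted at above):

from bs4 import BeautifulSoup

source = "before <script> bla </script> after"
soup = BeautifulSoup(source, "html.parser")

for item in soup.find_all("script"):
    item.clear()  # drop the contents but keep the empty <script></script> tag

print(soup)  # before <script></script> after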
If you're removing everything between <script> and </script>, why not just remove the entire node? Are you expecting a Resig-style src and body?
import re

example_text = "This is some text <script> blah blah blah </script> this is some more text."
myre = re.compile("(^.*)<script>(.*)</script>(.*$)")
result = myre.match(example_text)

result.groups()
# ('This is some text ', ' blah blah blah ', ' this is some more text.')

# Text between <script> .. </script>
result.group(2)
# ' blah blah blah '

# Text outside of <script> .. </script>
result.group(1) + result.group(3)
# 'This is some text  this is some more text.'
If you don't want to import any modules:
string = "<script> this is some js. begone! </script>"
words = string.split(' ')
# build a new list instead of deleting from the one being iterated over,
# which can silently skip elements
words = [w for w in words if w not in ('<script>', '</script>')]
print(' '.join(words))
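Note that this only strips the space-separated tag tokens themselves. A rough module-free sketch (my addition, not part of the answer above) that also drops the text between the tags could use str.find() and slicing:

text = "keep <script> this is some js. begone! </script> keep too"

start = text.find("<script>")
end = text.find("</script>")
if start != -1 and end != -1:
    # slice out everything from the opening tag through the closing tag
    text = text[:start] + text[end + len("</script>"):]

print(text)  # "keep  keep too"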
Building on the answers posted by Pev and wr, why not upgrade the regular expression, e.g.:
import re

pattern = r"(?is)<script[^>]*>(.*?)</script>"
text = """<script>foo bar
baz bar foo </script>"""
re.sub(pattern, '', text)
(?is) adds the ignore-case and DOTALL flags, so the pattern also matches across new lines. This version should also support script tags with attributes.
EDIT: I can't add any comments yet, so I'm just editing my answer. I totally agree with the comment below: regexes are the wrong tool for such tasks in general, and BeautifulSoup or lxml are a lot better. But the question gave just a simple example, and a regex should be enough for such a simple task; using BeautifulSoup for a bit of text removal could be overkill (excuse my English).
BTW, I made a mistake; the code should look like this:
pattern = r"(?is)(<script[^>]*>)(.*?)(</script>)"
text = """<script>foo bar
baz bar foo </script>"""
re.sub(pattern, r'\1\3', text)  # raw string so \1 and \3 are backreferences, leaving the empty tags in place
Are you trying to prevent XSS? Just eliminating the <script> tags will not stop all possible attacks! Here's a great list of the many ways (some of them very creative) that you could be vulnerable: http://ha.ckers.org/xss.html. After reading this page you should understand why just eliminating the <script> tags using a regular expression is not robust enough. The Python library lxml has a function that will robustly clean your HTML to make it safe to display.
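A minimal sketch of that cleaning step (my own, assuming lxml's HTML cleaner, which in recent lxml releases has moved into the separate lxml_html_clean package):

from lxml.html.clean import Cleaner  # newer lxml: from lxml_html_clean import Cleaner

html = '<p>hello <script>alert("xss")</script> world</p>'

# the cleaner strips script tags, javascript: links, inline event handlers, etc.
cleaner = Cleaner(scripts=True, javascript=True)
print(cleaner.clean_html(html))  # e.g. '<p>hello  world</p>'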
If you are sure that you just want to eliminate the <script> tags, this lxml code should work:
from lxml.html import parse

root = parse(filename_or_url).getroot()
# materialize the iterator first so the tree isn't modified while iterating over it
for element in list(root.iter("script")):
    element.drop_tree()
Note: I downvoted all the solutions using regular expressions. See here why you shouldn't parse HTML using regular expressions: http://stackoverflow.com/questions/590747/using-regular-expressions-to-parse-html-why-not
Note 2: Another SO question showing HTML that is impossible to parse with regular expressions: http://stackoverflow.com/questions/701166/can-you-provide-some-examples-of-why-it-is-hard-to-parse-xml-and-html-with-a-rege
ElementTree is the best, simplest, and sweetest package to do this. Yes, there are other ways to do it too, but don't use any of them 'coz they suck! (via Mark Pilgrim)
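A minimal sketch of the ElementTree route (my own, and it assumes the input is well-formed XHTML, since ElementTree is an XML parser and will choke on arbitrary real-world HTML):

import xml.etree.ElementTree as ET

xhtml = "<html><body><p>keep me</p><script> drop me </script></body></html>"
root = ET.fromstring(xhtml)

# ElementTree elements have no parent pointers, so walk every element
# and remove any direct <script> children it has
for parent in list(root.iter()):
    for script in parent.findall("script"):
        parent.remove(script)

print(ET.tostring(root, encoding="unicode"))  # <html><body><p>keep me</p></body></html>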