I need to take a string and shorten it to 140 characters.

Currently I am doing:

if len(tweet) > 140:
    tweet = re.sub(r"\s+", " ", tweet)  # normalize whitespace
    footer = "… " + utils.shorten_urls(post['url'])
    avail = 140 - len(footer)
    words = tweet.split()
    result = ""
    for word in words:
        word += " "
        if len(word) > avail:
            break
        result += word
        avail -= len(word)
    tweet = (result + footer).strip()
    assert len(tweet) <= 140

So this works great for English and English-like strings, but it fails for a Chinese string because tweet.split() just returns a single-element list:

>>> s = u"简讯:新華社報道,美國總統奧巴馬乘坐的「空軍一號」專機晚上10時42分進入上海空域,預計約30分鐘後抵達浦東國際機場,開展他上任後首次訪華之旅。"
>>> s
u'\u7b80\u8baf\uff1a\u65b0\u83ef\u793e\u5831\u9053\uff0c\u7f8e\u570b\u7e3d\u7d71\u5967\u5df4\u99ac\u4e58\u5750\u7684\u300c\u7a7a\u8ecd\u4e00\u865f\u300d\u5c08\u6a5f\u665a\u4e0a10\u664242\u5206\u9032\u5165\u4e0a\u6d77\u7a7a\u57df\uff0c\u9810\u8a08\u7d0430\u5206\u9418\u5f8c\u62b5\u9054\u6d66\u6771\u570b\u969b\u6a5f\u5834\uff0c\u958b\u5c55\u4ed6\u4e0a\u4efb\u5f8c\u9996\u6b21\u8a2a\u83ef\u4e4b\u65c5\u3002'
>>> s.split()
[u'\u7b80\u8baf\uff1a\u65b0\u83ef\u793e\u5831\u9053\uff0c\u7f8e\u570b\u7e3d\u7d71\u5967\u5df4\u99ac\u4e58\u5750\u7684\u300c\u7a7a\u8ecd\u4e00\u865f\u300d\u5c08\u6a5f\u665a\u4e0a10\u664242\u5206\u9032\u5165\u4e0a\u6d77\u7a7a\u57df\uff0c\u9810\u8a08\u7d0430\u5206\u9418\u5f8c\u62b5\u9054\u6d66\u6771\u570b\u969b\u6a5f\u5834\uff0c\u958b\u5c55\u4ed6\u4e0a\u4efb\u5f8c\u9996\u6b21\u8a2a\u83ef\u4e4b\u65c5\u3002']

How should I do this so it handles I18N? Does this make sense in all languages?

I'm on Python 2.5.4, if that matters.

+3  A: 

Chinese doesn't usually have whitespace between words, and the symbols can have different meanings depending on context. You will have to understand the text in order to split it at a word boundary. In other words, what you are trying to do is not easy in general.
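
If a rough cut ends up being acceptable anyway, a character-level (not byte-level) slice at least avoids corrupting the text; a minimal sketch (the function name and the ellipsis choice are illustrative assumptions, not from this answer):

def naive_truncate(text, limit=140, ellipsis=u"\u2026"):
    # Slicing a unicode string cuts between characters, so the
    # encoding stays valid -- but it can still cut mid-word.
    if len(text) <= limit:
        return text
    return text[:limit - len(ellipsis)] + ellipsis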

Mark Byers
Does it make sense to substring a Chinese string? Like if I do `s[:120]` will that still be readable?
Paul Tarjan
You may end up with half a word which could totally change the meaning. Imagine splitting "assist" at the first three letters.
Mark Byers
OK, thank you. Does "..." mean the same thing in other languages, or is there an alternate "ellipsis" character?
Paul Tarjan
I'm not sure what character to use, but Wikipedia says something on the matter: http://en.wikipedia.org/wiki/Ellipsis#In_Chinese
Mark Byers
As far as I know, there is no special CJK ellipsis character. CJK characters are twice as wide ("full width") as Latin characters ("half width"), so it's probably better to use TWO ellipsis characters just as the Wikipedia article says: "In Chinese and sometimes in Japanese, ellipsis characters are done by entering two consecutive horizontal ellipsis (U+2026)." All of this presupposes that you have determined that the language in question is in fact Chinese, and not Japanese or Korean which also use the CJK characters and may well have different ellipsis conventions and ass/ist problems.
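
A minimal sketch of that suggestion, assuming unicodedata's East Asian width property is an acceptable proxy for "CJK text" (the function and the test are illustrative, not from this thread):

import unicodedata

def pick_ellipsis(text):
    # Doubled ellipsis when the text ends in a full-width CJK
    # character, per the convention quoted above; single otherwise.
    if text and unicodedata.east_asian_width(text[-1]) in ("W", "F"):
        return u"\u2026\u2026"
    return u"\u2026"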
John Machin
+4  A: 

For word segmentation in Chinese, and other advanced tasks in processing natural language, consider NLTK as a good starting point if not a complete solution -- it's a rich Python-based toolkit, particularly good for learning about NL processing techniques (and not rarely good enough to offer you a viable solution to some of these problems).
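
For a quick taste, a minimal sketch with NLTK's stock tokenizer (assuming NLTK and its tokenizer data are installed; this handles space-delimited languages, while Chinese word segmentation would still need a trained segmenter on top):

>>> import nltk
>>> nltk.word_tokenize("The quick brown fox jumps over the lazy dog.")
['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog', '.']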

Alex Martelli
"not rarely" == usually, sometimes, something else?
Laurence Gonsalves
@Laurence, depends on how bleeding-edge your typical NL tasks are, and how production-hardened and performance-tuned you need your code to be. If you're dealing with terabytes of text or need low-latency response, so you must deploy on a large, highly scalable parallel cluster, NLTK will at best let you sketch a prototype, not offer a viable solution for your requirements; for lower-volume and more time-tolerant tasks, esp. well-known ones such as segmentation, "usually" applies -- but there are all kinds of intermediate needs and special problem quirks!-)
Alex Martelli
I really don't want to train an NLP solution for word-break discovery. I'm sure someone has done this already, and I just want a pre-boxed word-break splitter.
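
One such pre-boxed option, sketched here as an assumption (the third-party jieba package, which is not part of NLTK or the standard library):

>>> import jieba
>>> print u" / ".join(jieba.cut(u"我来到北京清华大学"))
我 / 来到 / 北京 / 清华大学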
Paul Tarjan
A: 

This punts the word-breaking decision to the re module, but it may work well enough for you.

import re

def shorten(tweet, footer="", limit=140):
    """Truncate tweet at roughly the last word break before limit,
    then append footer.
    """
    lower_break_limit = limit / 2
    # below this length, assume word-breaking didn't work as expected

    limit -= len(footer)

    tweet = re.sub(r"\s+", " ", tweet.strip())
    m = re.match(r"^(.{,%d})\b(?:\W|$)" % limit, tweet, re.UNICODE)
    if not m or m.end(1) < lower_break_limit:
        # no suitable word break found, so cut at an arbitrary
        # location (for input already shorter than limit, this
        # branch returns the tweet unchanged, which is still correct)
        return tweet[:limit] + footer
    return m.group(1) + footer
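
For example, in an interactive session:

>>> shorten(u"The quick brown fox jumps over the lazy dog", u"\u2026", 20)
u'The quick brown fox\u2026'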
Roger Pate
Thanks. I added a check for when there are no word boundaries. For English strings this is working great, but for my Chinese example (doubled to make it long) I end up with a string that is 137 chars long, not 140. `len(shorten(s*2, "... end"))`
Paul Tarjan
That means it's working as expected, as it breaks at the last \b\W. However, I don't know Chinese, so I can't say whether that is actually a word break in that text. Try `shorten("abcde " * 3, "", 13)` for another example of how it breaks shorter than the limit.
Roger Pate
A: 

After speaking with some native Cantonese, Mandarin, and Japanese speakers, it seems that the correct thing to do is hard, but my current algorithm still makes sense to them in the context of internet posts.

Meaning, they are used to the "split on space and add … at the end" treatment.

So I'm going to be lazy and stick with it until I get complaints from people who don't understand it.

The only changes to my original implementation would be to not force a space onto the last word, since it is unneeded in any language, and to use the Unicode ellipsis character … (U+2026) instead of three dots, saving two characters.
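
A minimal sketch of that revision (same structure as the original snippet; utils.shorten_urls and post are the question's own helpers):

if len(tweet) > 140:
    tweet = re.sub(r"\s+", " ", tweet)  # normalize whitespace
    footer = u"\u2026 " + utils.shorten_urls(post['url'])
    avail = 140 - len(footer)
    result = []
    for word in tweet.split():
        cost = len(word) + (1 if result else 0)  # +1 for the joining space
        if cost > avail:
            break
        result.append(word)
        avail -= cost
    tweet = u" ".join(result) + footer
    assert len(tweet) <= 140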

Paul Tarjan
It's a named entity in HTML: `&hellip;`, horizontal ellipsis.
ephemient
A: 

The re.U flag makes \s match according to the Unicode character properties database.

The given string, however, apparently doesn't contain any whitespace characters according to Python's Unicode database:

>>> x = u'\u7b80\u8baf\uff1a\u65b0\u83ef\u793e\u5831\u9053\uff0c\u7f8e\u570b\u7e3d\u7d71\u5967\u5df4\u99ac\u4e58\u5750\u7684\u300c\u7a7a\u8ecd\u4e00\u865f\u300d\u5c08\u6a5f\u665a\u4e0a10\u664242\u5206\u9032\u5165\u4e0a\u6d77\u7a7a\u57df\uff0c\u9810\u8a08\u7d0430\u5206\u9418\u5f8c\u62b5\u9054\u6d66\u6771\u570b\u969b\u6a5f\u5834\uff0c\u958b\u5c55\u4ed6\u4e0a\u4efb\u5f8c\u9996\u6b21\u8a2a\u83ef\u4e4b\u65c5\u3002'
>>> re.compile(r'\s+', re.U).split(x)
[u'\u7b80\u8baf\uff1a\u65b0\u83ef\u793e\u5831\u9053\uff0c\u7f8e\u570b\u7e3d\u7d71\u5967\u5df4\u99ac\u4e58\u5750\u7684\u300c\u7a7a\u8ecd\u4e00\u865f\u300d\u5c08\u6a5f\u665a\u4e0a10\u664242\u5206\u9032\u5165\u4e0a\u6d77\u7a7a\u57df\uff0c\u9810\u8a08\u7d0430\u5206\u9418\u5f8c\u62b5\u9054\u6d66\u6771\u570b\u969b\u6a5f\u5834\uff0c\u958b\u5c55\u4ed6\u4e0a\u4efb\u5f8c\u9996\u6b21\u8a2a\u83ef\u4e4b\u65c5\u3002']
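
For contrast, \s with re.U does match CJK whitespace such as the ideographic space (U+3000) when one is present:

>>> re.compile(r'\s+', re.U).split(u'\u4e0a\u6d77\u3000\u6d66\u6771')
[u'\u4e0a\u6d77', u'\u6d66\u6771']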
ʞɔıu
Right, but "whitespace" in English means word separators, whereas there are no word separators in Chinese, only whitespace as sentence separators.
Paul Tarjan
A: 

Save two characters and use an ellipsis (…, U+2026) instead of three dots!

a paid nerd
In UTF-8 an ellipsis takes 3 bytes, so not much to be saved there :)
Adam Byrtek
I used the word "characters" instead of "bytes" on purpose. :)
a paid nerd
Adam meant: you save two Unicode characters, but in UTF-8, U+2026 takes 3 bytes and three dots take 1 byte each, so there's no saving when you store it. My note: conceptually it's better to use an ellipsis character.
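
A quick interactive check of characters versus bytes:

>>> len(u"\u2026"), len(u"\u2026".encode("utf-8")), len("...")
(1, 3, 3)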
John Machin
A: 

I tried out the solution with PyAPNS for push notifications and just wanted to share what worked for me. The issue I had is that truncating at 256 bytes in UTF-8 would result in the notification getting dropped. I had to make sure the notification was encoded as "unicode_escape" to get it to work. I'm assuming this is because the result is sent as JSON and not raw UTF-8. Anyway, here is the function that worked for me:

def unicode_truncate(s, length, encoding='unicode_escape'):
    """Truncate unicode string s to at most `length` bytes in the given
    encoding, dropping anything the cut leaves incomplete at the end."""
    encoded = s.encode(encoding)[:length]
    return encoded.decode(encoding, 'ignore')
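
For example, with a few characters from the question's string (the byte limit is illustrative; each non-ASCII BMP character costs 6 bytes in the escaped form):

>>> s = u"\u7b80\u8baf\uff1a\u65b0\u83ef\u793e"
>>> s.encode('unicode_escape')
'\\u7b80\\u8baf\\uff1a\\u65b0\\u83ef\\u793e'
>>> unicode_truncate(s, 20)
u'\u7b80\u8baf\uff1a'

The dangling '\u' left by the 20-byte cut is dropped by the 'ignore' error handler.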
justinjas