+4  A: 
import collections

data = [
  [1, 2, 3, 4, 5],
  [1, 9, 3, 4, 5],
  [1, 10, 8, 4, 5],
  [1, 12, 13, 7, 5],
  [1, 14, 13, 13, 6],
]

def sorted_by_count(lists):
  counts = collections.defaultdict(int)
  for L in lists:
    for n in L:
      counts[n] += 1

  return [num for num, count in
          sorted(counts.items(),
                 key=lambda k_v: (k_v[1], k_v[0]),
                 reverse=True)]

print sorted_by_count(data)
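With the sample data above, this should print (ties within a frequency group are broken by the larger number, since both elements of the sort key are reverse-sorted):

[1, 5, 13, 4, 3, 14, 12, 10, 9, 8, 7, 6, 2]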

Now let's generalize it (to take any iterable and loosen the hashable requirement), allow key and reverse parameters (to match sorted), and rename it freq_sorted:

def freq_sorted(iterable, key=None, reverse=False, include_freq=False):
  """Return a list of items from iterable sorted by frequency.

  If include_freq, (item, freq) is returned instead of item.

  key(item) must be hashable, but items need not be.

  *Higher* frequencies are returned first.  Within the same frequency group,
  items are ordered according to key(item).
  """
  if key is None:
    key = lambda x: x

  key_counts = collections.defaultdict(int)
  items = {}
  for n in iterable:
    k = key(n)
    key_counts[k] += 1
    items.setdefault(k, n)

  if include_freq:
    def get_item(k, c):
      return items[k], c
  else:
    def get_item(k, c):
      return items[k]

  return [get_item(k, c) for k, c in
          sorted(key_counts.items(),
                 key=lambda kc: (-kc[1], kc[0]),
                 reverse=reverse)]

Example:

>>> import itertools
>>> print freq_sorted(itertools.chain.from_iterable(data))
[1, 5, 4, 13, 3, 2, 6, 7, 8, 9, 10, 12, 14]
>>> print freq_sorted(itertools.chain.from_iterable(data), include_freq=True)
# (slightly reformatted)
[(1, 5),
 (5, 4),
 (4, 3), (13, 3),
 (3, 2),
 (2, 1), (6, 1), (7, 1), (8, 1), (9, 1), (10, 1), (12, 1), (14, 1)]
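
The key parameter also covers the unhashable-items case mentioned in the docstring. For example (an illustrative case, not from the original question), counting small lists by a hashable key of their contents:

>>> pairs = [[1, 2], [3, 4], [1, 2], [5, 6], [1, 2], [3, 4]]
>>> print freq_sorted(pairs, key=tuple, include_freq=True)
[([1, 2], 3), ([3, 4], 2), ([5, 6], 1)]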
Roger Pate
What was the downvote for?
Roger Pate
I can't decide which solution is the fastest. Two for loops is O(n**2), but how to make it faster.. :/
proxylittle
Beats me. Yours is the only one (except mine, but I copied yours) that applies the second sort order, which is a nice touch.
Robert Rossney
Two for loops is not what makes some of these answers O(n^2), which this one isn't. It's calling count(x) on each item in the chained list that makes them O(n^2).
Robert Rossney
ah i see. ok thanks
proxylittle
proxylittle: n being the sum of the lengths of each list in `lists` means my nested for loops are still only O(n), or "each item from the original data is only processed once" (gross simplification, not strictly true, but O(2n) is still O(n)). Also the sorting is O(m log m) and the final list comprehension is O(m) (where m is the number of unique items), so the whole function is O(n log n) (m cannot be greater than n). That said, the "fastest" solution still depends on exactly how it's implemented and the characteristics of the input, but algorithmic complexity is how you generalize that.
Roger Pate
A: 
def items_ordered_by_frequency(*lists):

    # get a flat list with all the values
    biglist = []
    for x in lists:
        biglist += x

    # sort it in reverse order by frequency
    return sorted(set(biglist), 
                  key=lambda x: biglist.count(x), 
                  reverse=True)
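
A quick check with three short lists (the most frequent values, 4 and then 1, come first; the relative order of the one-off values depends on set iteration order, so the tail may vary):

a = [1, 2, 3, 4]
b = [4, 5, 6, 7]
c = [4, 1, 8, 9]

print items_ordered_by_frequency(a, b, c)
# e.g. [4, 1, 2, 3, 5, 6, 7, 8, 9]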
twneale
Good ol' O(n**2), ah, how I've missed you.
Roger Pate
He didn't say he wanted it lightning fast. This was just meant to be simple.
twneale
True, but the problem with O(n^2) is that the leap from "not lightning fast" to "unusably slow" is really easy to make.
Robert Rossney
ah.. well... faster is better ofc.
proxylittle
A: 

Try this one:

def rank(*lists):
    d = dict()
    for lst in lists:
        for e in lst:
            if e in d: d[e] += 1
            else: d[e] = 1
    return [j[1] for j in sorted([(d[i],i) for i in d], reverse=True)]

Usage example:

a = [1,2,3,4]
b = [4,5,6,7]
c = [4,1,8,9]

print rank(a,b,c)
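
which should print (ties are ordered by descending value, because the whole (count, value) tuple is reverse-sorted):

[4, 1, 9, 8, 7, 6, 5, 3, 2]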

You can use any number of lists as input.

Andrea Ambu
+3  A: 

Combining a couple of ideas already posted:

from itertools import chain
from collections import defaultdict

def frequency(*lists):
    counter = defaultdict(int)
    for x in chain(*lists):
        counter[x] += 1
    return [key for (key, value) in 
        sorted(counter.items(), key=lambda kv: (kv[1], kv[0]), reverse=True)]

Notes:

  1. In Python 2.7, you can use Counter instead of defaultdict(int); a sketch of that variant follows these notes.
  2. This version takes any number of lists as its argument; the leading asterisk means they'll all be packed into a tuple. If you want to pass in a single list containing all of your lists, omit that leading asterisk.
  3. This breaks if your lists contain an unhashable type.
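
A minimal Counter-based sketch for note 1 (keeping the explicit sort, since Counter.most_common() on its own doesn't apply the secondary ordering on ties):

from itertools import chain
from collections import Counter

def frequency(*lists):
    # Counter builds the whole histogram in one pass over the chained lists.
    counter = Counter(chain(*lists))
    # Keep the explicit sort so ties are still broken by the larger key.
    return [key for (key, value) in
        sorted(counter.items(), key=lambda kv: (kv[1], kv[0]), reverse=True)]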
Robert Rossney
it broke for me... true!
proxylittle
i've learned something about the asterisk.. thanks :)
proxylittle
+1  A: 

You can count the number of appearances of each element (a histogram), then sort by it:

def histogram(enumerable):
  result = {}
  for x in enumerable:
    result.setdefault(x, 0)
    result[x] += 1
  return result

lists = [ [1,2,3,4], [4,5,6,7], ... ]

from itertools import chain

h = histogram(chain(*lists))
ranked = sorted(set(chain(*lists)), key = lambda x : h[x], reverse = True)
orip
You're iterating over the chain of lists twice here. If you sort the histogram, you only need to iterate over it once. (Also, you don't need to build a set, because you already did.)
Robert Rossney
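A minimal sketch of that single-pass variant (reusing the histogram function and lists from the answer above): rank straight off the histogram's items, so the chained input is only walked once and no separate set is built.

h = histogram(chain(*lists))
ranked = [x for x, count in
          sorted(h.items(), key=lambda xc: xc[1], reverse=True)]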
@Robert - you're correct, but efficiency isn't the only concern. I find using the histogram's keys confusing - of course if this was a bottleneck then sure, I wouldn't hesitate.
orip