Hello! I'm writing a little parser in Clojure for learning purposes. Basically it's a TSV file parser whose output needs to be put in a database, but I added a complication: in the same file there are multiple intervals. The file looks like this:

###andreadipersio 2010-03-19 16:10:00###                                                                                
USER     COMM               PID  PPID  %CPU %MEM      TIME  
root     launchd              1     0   0.0  0.0   2:46.97  
root     DirectoryService    11     1   0.0  0.2   0:34.59  
root     notifyd             12     1   0.0  0.0   0:20.83  
root     diskarbitrationd    13     1   0.0  0.0   0:02.84
....

###andreadipersio 2010-03-19 16:20:00###                                                                                
USER     COMM               PID  PPID  %CPU %MEM      TIME  
root     launchd              1     0   0.0  0.0   2:46.97  
root     DirectoryService    11     1   0.0  0.2   0:34.59  
root     notifyd             12     1   0.0  0.0   0:20.83  
root     diskarbitrationd    13     1   0.0  0.0   0:02.84

I ended up with this code:

(defn is-header? 
  "Return true if a line is a header."
  [line]
  (> (count (re-find #"^\#{3}" line)) 0))

(defn extract-fields
  "Return regex matches"
  [line pattern]
  (rest (re-find pattern line)))

(defn process-lines
  [lines]
  (map process-line lines))

(defn process-line
  [line]
  (if (is-header? line)
    (extract-fields line header-pattern))
  (extract-fields line data-pattern))

My idea is that in 'process-line' the interval needs to be merged with the data, so that I get something like this:

('andreadipersio', '2010-03-19', '16:10:00', 'root', 'launchd', 1, 0, 0.0, 0.0, '2:46.97')

for every row until the next interval, but I can't figure out how to make this happen.

I tried with something like this:

(def process-line
  [line]
  (if is-header? line)
    (def header-data (extract-fields line header-pattern)))
  (cons header-data (extract-fields line data-pattern)))

But this doesn't work as expected.

Any hints?

Thanks!

+1  A: 

I'm not totally sure based on your description, but perhaps you're just slipping up on the syntax. Is this what you want to do?

(defn process-line [line]
  (if (is-header? line) ; extra parens here over your version
    (extract-fields line header-pattern) ; returning this result
    (extract-fields line data-pattern))) ; implicit "else"

If the intent of your "cons" is to group together headers with their associated detail data, you'll need some more code to accomplish that, but if it's just an attempt at "coalescing" and returning either a header or detail line depending on which it is, then this should be correct.

mquander
Thanks for your response; fixing the syntax problem in the if form cleaned up the output, but I still need to find the correct way to merge both sequences (that's the first case you described). PS: Sorry for my description; I'm a beginner with Clojure and functional programming in general, so I may have used the wrong terms.
Andrea Di Persio
+4  A: 

A possible approach:

  1. Split the input into lines with line-seq. (If you want to test this on a string, you can obtain a line-seq on it by doing (line-seq (java.io.BufferedReader. (java.io.StringReader. test-string))).)

  2. Partition it into sub-sequences each of which contains either a single header line or some number of "process lines" with (clojure.contrib.seq/partition-by is-header? your-seq-of-lines).

  3. Assuming there's at least one process line after each header, (partition 2 *2) (where *2 is the sequence obtained in step 2 above) will return a sequence of a form resembling the following: (((header-1) (process-line-1 process-line-2)) ((header-2) (process-line-3 process-line-4))). If the input might contain some header lines not followed by any data lines, then the above could look like (((header-1a header-1b) (process-line-1 process-line-2)) ...).

  4. Finally, transform the output of step 3 (*3) with the following function:


(defn extract-fields-add-headers
  [[headers process-lines]]
  (let [header-fields (extract-fields (last headers) header-pattern)]
    (map #(concat header-fields (extract-fields % data-pattern))
         process-lines)))

(To explain the (last headers) bit: the only case where we'll get multiple headers here is when some of them have no data lines of their own; the one actually attached to the data lines is the last one.)
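To make steps 2 and 3 concrete, here's a toy REPL sketch (the line contents are hypothetical stand-ins, and `is-header?` is the predicate from the question):

```clojure
;; a miniature input: two headers, each followed by data lines
(def lines ["###h1###" "d1" "d2" "###h2###" "d3"])

;; step 2: split into runs of headers / non-headers
(partition-by is-header? lines)
;; => (("###h1###") ("d1" "d2") ("###h2###") ("d3"))

;; step 3: pair each header group with its data group
(partition 2 (partition-by is-header? lines))
;; => ((("###h1###") ("d1" "d2")) (("###h2###") ("d3")))
```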


With these example patterns:

(def data-pattern #"(\w+)\s+(\w+)\s+(\d+)\s+(\d+)\s+([0-9.]+)\s+([0-9.]+)\s+([0-9:.]+)")
(def header-pattern #"###(\w+)\s+([0-9-]+)\s+([0-9:]+)###")
;; we'll need to throw out the "USER  COMM  ..." lines,
;; empty lines and the "..." line which I haven't bothered
;; to remove from your sample input
(def discard-pattern #"^USER\s+COMM|^$|^\.\.\.")

the whole 'pipe' might look like this:

;; just a reminder, normally you'd put this in an ns form:
(use '[clojure.contrib.seq :only (partition-by)])

(->> (line-seq (java.io.BufferedReader. (java.io.StringReader. test-data)))
     (remove #(re-find discard-pattern %)) ; throw out "USER  COMM ..."
     (partition-by is-header?)
     (partition 2)
     ;; mapcat performs a map, then concatenates results
     (mapcat extract-fields-add-headers))

(With the line-seq presumably taking input from a different source in your final programme.)

With your example input, the above produces output like this (line breaks added for clarity):

(("andreadipersio" "2010-03-19" "16:10:00" "root" "launchd" "1" "0" "0.0" "0.0" "2:46.97")
 ("andreadipersio" "2010-03-19" "16:10:00" "root" "DirectoryService" "11" "1" "0.0" "0.2" "0:34.59")
 ("andreadipersio" "2010-03-19" "16:10:00" "root" "notifyd" "12" "1" "0.0" "0.0" "0:20.83")
 ("andreadipersio" "2010-03-19" "16:10:00" "root" "diskarbitrationd" "13" "1" "0.0" "0.0" "0:02.84")
 ("andreadipersio" "2010-03-19" "16:20:00" "root" "launchd" "1" "0" "0.0" "0.0" "2:46.97")
 ("andreadipersio" "2010-03-19" "16:20:00" "root" "DirectoryService" "11" "1" "0.0" "0.2" "0:34.59")
 ("andreadipersio" "2010-03-19" "16:20:00" "root" "notifyd" "12" "1" "0.0" "0.0" "0:20.83")
 ("andreadipersio" "2010-03-19" "16:20:00" "root" "diskarbitrationd" "13" "1" "0.0" "0.0" "0:02.84"))
Michał Marczyk
Thank you very much. It works like a charm and I learned two useful functions: mapcat and partition. Thanks again.
Andrea Di Persio
You're welcome! Note that I've made another edit to make it properly handle the case where some headers might not have data lines following them.
Michał Marczyk
Yes I noticed it! Thanks.
Andrea Di Persio
+2  A: 

You're doing (> (count (re-find #"^\#{3}" line)) 0), but you can just do (re-find #"^\#{3}" line) and use the result as a boolean. re-find returns nil if the match fails.
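A minimal REPL check of that point:

```clojure
;; re-find returns the matched string (truthy) or nil (falsy),
;; so the result can be used directly in a conditional
(defn is-header? [line]
  (re-find #"^#{3}" line))

(is-header? "###foo 2010-03-19###") ; => "###", which is truthy
(is-header? "root launchd")         ; => nil, which is falsy
```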

If you're iterating over the items in a collection, and you want to skip some items or combine two or more items in the original into one item in the result, then 99% of the time you want reduce. This usually ends up being very straightforward.

;; These two libs are called "io" and "string" in bleeding-edge clojure-contrib
;; and some of the function names are different.
(require '(clojure.contrib [str-utils :as s]
                           [duck-streams :as io])) ; SO's syntax-highlighter still sucks

(defn clean [line]
  (s/re-gsub #"^###|###\s*$" "" line))

(defn interval? [line]
  (re-find #"^#{3}" line))

(defn skip? [line]
  (or (empty? line)
      (re-find #"^USER" line)))

(defn parse-line [line]
  (s/re-split #"\s+" (clean line)))

(defn parse [file]
  (first
   (reduce
    (fn [[data interval] line]
      (cond
       (interval? line) [data (parse-line line)]
       (skip? line)     [data interval]
       :else            [(conj data (concat interval (parse-line line))) interval]))
    [[] nil]
    (io/read-lines file))))
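To see how the accumulator threads the current header through the fold, here is the same reducing function applied to an in-memory vector of lines instead of a file (the sample lines are made up; `clean`, `interval?`, `skip?` and `parse-line` are the functions defined above):

```clojure
(reduce
 (fn [[data interval] line]
   (cond
    (interval? line) [data (parse-line line)] ; remember the new header
    (skip? line)     [data interval]          ; drop noise lines
    :else            [(conj data (concat interval (parse-line line))) interval]))
 [[] nil]
 ["###bob 2010-03-19 16:10:00###"
  "USER COMM PID"
  "root launchd 1"])
;; => [[("bob" "2010-03-19" "16:10:00" "root" "launchd" "1")]
;;     ("bob" "2010-03-19" "16:10:00")]
```

`parse` then takes `first` of this pair, keeping the accumulated rows and discarding the last header.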
Brian Carper
This is very nice. At the moment it's the best solution I can think of, and also the shortest. Thanks a lot.
Andrea Di Persio
This may or may not have any bearing on the example at hand, but I don't agree with the statement about the suitability of `reduce` for tasks of this kind. In Clojure `reduce` is always strict in that it will always materialise the whole result in-memory before any part of it becomes available for processing (because Clojure's `reduce` is a left fold). This is in contrast to an approach where lazy transformations are layered on top of each other (with the input sequence at the bottom of the stack), where results can be produced in chunks.
Michał Marczyk
(And the tool of last resort it is, because virtually anything can be done with it, whereas the pipe-like approach has limitations... The latter don't come into play in this particular case, though.)
Michał Marczyk
`reduce` doesn't work if you want full laziness, yeah. `reductions` in clojure-contrib does though. Not sure you need laziness for this problem though. `->` and `->>` with more than a couple forms in them quickly become awkward to me, especially once you have to switch from one to the other due to order of arguments changing. YMMV, it's a matter of style. I like having a single table where you can see what happens for each different kind of line.
Brian Carper
That's a good point about having a single table... With a multimethod and a dispatch function to classify lines one could still have that and not sacrifice extensibility. As for laziness, I fully agree that it might not matter here (that would ultimately depend on the size of the log file and the type of processing to be applied to the data). On a final note, `reductions` is lazy, but in a way which would be awkward to take advantage of here (the intermediate results would be sequences from which you'd have to drop, at each step, the stuff which has already been processed).
Michał Marczyk
Anyway, somehow seeing these two approaches to this problem side-by-side makes me realise yet again how much I'm enjoying Clojure. I've made an exercise based on this question for the rubylearning.org Clojure 101 course... Can't wait to see how else this can be done. :-)
Michał Marczyk