Best reference / crib sheet for AWK
In a series of similar questions, what is the best AWK reference you've ever seen? If there isn't really one (I've yet to find the grail), perhaps we could compile one in a separate question. ...
I have a log file that looks like this: I, [2009-03-04T15:03:25.502546 #17925] INFO -- : [8541, 931, 0, 0] I, [2009-03-04T15:03:26.094855 #17925] INFO -- : [8545, 6678, 0, 0] I, [2009-03-04T15:03:26.353079 #17925] INFO -- : [5448, 1598, 185, 0] I, [2009-03-04T15:03:26.360148 #17925] INFO -- : [8555, 1747, 0, 0] I, [2009-03-04T15:03:26.367523...
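The question is cut off, but assuming the goal is to pull the bracketed numbers out of each log line, one sketch is to treat the square brackets themselves as field separators (the filename app.log is hypothetical):

    # split on "[" and "]" so each bracketed group becomes one whole field
    awk -F'[][]' '{ gsub(/, /, " ", $4); print $4 }' app.log

On the sample above this would print lines like 8541 931 0 0.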
I want to apply the following "awk" command on files with the extension "*.txt": awk '$4 ~ /NM/{ sum += $2 } END{ print sum }' But why doesn't this command work: for i in *.txt do echo awk '$4 ~ /NM/{ sum += $2 } END{ print sum }' $i; done Normally, awk '$4 ~ /NM/{ sum += $2 } END{ print sum }' file1.txt would work. ...
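For what it's worth, the loop shown only prints the command text because of the echo; it never runs awk. A minimal sketch of the intended loop:

    # drop the echo so awk actually executes, and quote the filename
    for i in *.txt
    do
        awk '$4 ~ /NM/{ sum += $2 } END{ print sum }' "$i"
    done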
I need to split the different codes in one file into many files. The file is apparently shared by AWK's creators at their homepage. The file is also here for easy use. My attempt at the problem: I can get the lines where each code is located by awk '{ print $1 }' However, I do not know how to get the exact line numbers so that I can use them to...
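One hedged sketch: awk's built-in NR variable holds the current line number, so printing it alongside the first field reveals where each code starts (the filename file is a placeholder):

    # print each line number next to the first field
    awk '{ print NR, $1 }' file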
I have a long document in LaTeX, which contains paragraphs. The paragraphs contain sentences, such that no sentence starts on a new line. How can I make each sentence start on a new line in my .tex file? My attempt at the problem: We need to put \n at the end of Sentence B where Sentence B has Sentence A before...
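A rough sketch along those lines, assuming GNU sed (BSD sed does not expand \n in the replacement) and that sentences end in ., !, or ? followed by a space:

    # start a new line after each sentence-ending punctuation mark
    sed 's/\([.!?]\) /\1\n/g' doc.tex

This is naive about abbreviations like "e.g.", so the result needs checking by hand.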
I need to find names which contain the number 7 three times, in any order. My attempt: First, we need to find the names which do not contain a seven: ls | grep [^7] Then, we could remove these matches from the whole space: ls [remove] ls | grep [^7] The problem is that my pseudo-code quickly starts to repeat itself. How can you find the names whic...
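A sketch without the repeated-exclusion problem: grep can demand three 7s directly, and awk's gsub returns the number of replacements it made, which doubles as a count:

    # names containing at least three 7s, in any positions
    ls | grep '7.*7.*7'

    # names containing exactly three 7s
    ls | awk 'gsub(/7/, "7") == 3'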
I am trying to clean up some data, and I would eventually like to put it in CSV form. I have used some regular expressions to clean it up, but I'm stuck on one step. I would like to replace all but every third newline (\n) with a comma. The data looks like this (one field per line): field1 field2 field3 field1 field2 field3 etc. I need it in field1,...
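A minimal sketch, assuming the input really is one field per line: use the line number modulo 3 to decide whether a comma or a newline follows each field (data.txt is a placeholder):

    # join every group of three lines with commas
    awk 'NR % 3 { printf "%s,", $0; next } { print }' data.txt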
I want to have -[space] as a field separator in AWK. For instance, awk -F'-[space]' '{ print $1 }' How can you have multiple characters as a field separator in AWK? [edit] The exact output of Vlad's command: $ echo /Users/Sam/Dropbox/Education/Chemistry/Other\ materials/*.pdf | sed -e 's: : - :g' /Users/Sam/Dropbox/Education/Chemistry/O...
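For reference, -F takes an extended regular expression, so a separator longer than one character is allowed; a sketch assuming "-[space]" means the literal two characters "- ":

    # split fields on the two-character sequence "- "
    awk -F'- ' '{ print $1 }' file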
I tried the following code unsuccessfully after running ls -1: awk -F '\n' '{ print $1 }' How can I get the first row in the terminal? ...
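A sketch of one way to do it: awk's NR variable numbers the input lines, so selecting NR == 1 keeps only the first row:

    # print only the first line of the listing
    ls -1 | awk 'NR == 1'

head -n 1 would do the same job.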
How can I call awk or sed inside a C program? I know I could use exec(), but I don't want to deal with fork() and all that other nastiness. ...
Hello all, I want to perform this awk -F, '$1 ~ /F$/' file.dat on a whole directory of gzipped files. I want to be able to loop through each file, unzip it, perform the above command (printing out any findings), rezip it, and move on to the next zipped file. How can this be done? Thanks ...
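One sketch that skips the unzip/rezip round trip entirely: gzip -dc (or zcat) streams the decompressed contents to stdout and leaves the archive on disk untouched:

    # scan each archive in place; nothing on disk is modified
    for f in *.gz
    do
        gzip -dc "$f" | awk -F, '$1 ~ /F$/'
    done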
I am having problems getting the permissions of each file in every folder, in order to find files which have 777 permissions and then print the filenames with their paths to a list. We can get permissions for the files in one folder with ls -ls. I do not know how to get the permissions of each file in every folder efficiently. How can you find files which have ...
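A hedged sketch: find walks every folder recursively, and -perm with a bare mode matches files whose permissions are exactly 777 (the output filename world_writable.txt is just a placeholder):

    # list all regular files with mode 777, paths included
    find . -type f -perm 777 -print > world_writable.txt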
I am trying to copy a command from my history. How can I copy the 510th command? Please see the data below. My best attempt is: history | grep 510 | sed '1q;d' | awk '{print $2-$10}' | pbcopy but the output is 0. I cannot understand why. What is wrong with the command? 505 find . -perm=750 -print0 | xargs -0 chmod 750 506 find . --perm=750...
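One observation: in awk, $2-$10 is arithmetic subtraction, not a field range, which is why the output is 0; and grep 510 would match any line containing 510 anywhere. A sketch that matches the history number itself:

    # select the entry numbered 510, strip the number, copy the rest
    history | awk '$1 == 510 { $1 = ""; sub(/^ +/, ""); print }' | pbcopy

fc -ln 510 510 | pbcopy would be an even shorter route in bash.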
I have data that looks like this: -1033 - 222 100 -30 - 10 What I want to do is capture all the numbers, excluding the "dash only" entries. Why does my awk below fail? awk '$4 != "-" {print $4}' ...
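One thing worth noting: the sample shows one value per line, so the value is in $1, not $4 (on lines with fewer than four fields, $4 is empty, and empty != "-" is true, which prints blanks). A sketch under that assumption:

    # keep lines whose first field is not a bare dash
    awk '$1 != "-" { print $1 }' data.txt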
I am working on a Unix system. I have a directory of files called MailHistory. Each file in the directory contains all of the emails for the previous day. The files are created at midnight and named with the timedatestamp. So, a typical filename is 20090323000100. I have a file that has a list of names. Using this file as input, I...
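The question is cut off, but assuming the goal is to check which of the listed names appear in a given day's mail file, grep can read its patterns from a file (names.txt is a guess at the list's name):

    # print every mail line containing any of the listed names
    grep -F -f names.txt MailHistory/20090323000100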
Running the code below gives me the following sample data: md5deep find * | awk '{ print $1 }' A sample of the output: /Users/math/Documents/Articles/Number theory: Is a directory 258fe6853b1bfb2d07f512ff6bec52b1 /Users/math/Documents/Articles/Probability and statistics: Is a directory 4811bfb2ad04b9f4318049c01ebb52ef 8aae4...
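The "Is a directory" lines are diagnostics emitted when a directory is passed as an argument. Assuming the intent is to hash the files inside those directories, md5deep has a recursive flag:

    # recurse into subdirectories and print only the hashes
    md5deep -r * | awk '{ print $1 }'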
What this question isn't asking is how to add a new line below or above every line which matches a pattern. What I'm trying to do is add a new line in the middle of a pattern that exists on one line. Here is an example. Before: Monday:8am-10pm After: Monday: 8am-10pm Thus, in this case, insert a new line after every 'Monday' patt...
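A sketch in awk, assuming the day names form a fixed set of patterns: sub() rewrites the first match on each line, and \n inside the replacement string is a real newline (hours.txt is a placeholder):

    # break the line after "Monday:"
    awk '{ sub(/Monday:/, "Monday:\n"); print }' hours.txt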
Hello everyone, I have a large collection of PHP files written over the years, and I need to properly replace all the short open tags with proper explicit open tags: change "<?" into "<?php". I think this regular expression will properly select them: <\?(\s|\n|\t|[^a-zA-Z]) which takes care of cases like <?// <?/* but I am not s...
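A hedged GNU sed sketch built on that idea; the added = in the excluded class protects the short echo tag <?=, and -i.bak keeps backups so the run can be checked before committing:

    # rewrite "<?" followed by a non-letter into "<?php " across all files
    sed -i.bak 's/<?\([^a-zA-Z=]\)/<?php \1/g' *.php

A "<?" sitting at the very end of a line has no following character and is missed, so a manual pass is still worthwhile.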
Hi all, I have a file with 10^7 lines, from which I want to choose 1/100 of the lines at random. This is the AWK code I have, but it slurps the entire file content beforehand. My PC's memory cannot handle such slurps. Is there another approach to doing it? awk 'BEGIN{srand()} !/^$/{ a[c++]=$0} END { for ( i=1;i<=c ;i++ ) { num=int(r...
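A sketch that avoids storing anything: decide line by line, since rand() < 0.01 keeps roughly one line in a hundred without ever holding the file in memory (big_file is a placeholder name):

    # stream the file and keep ~1% of the lines
    awk 'BEGIN { srand() } rand() < 0.01' big_file

This yields approximately, not exactly, 1/100 of the lines.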
Dear all, I have no problem using the following AWK command as a standalone command, without any error: $ awk '$9 != "NTM" && $9 != ""' myfile.txt | less -Sn But when I use it inside a Perl script for qsub (i.e. running a job on a Linux cluster), like this: use strict; use Data::Dumper; use Carp; use File::Basename; my...
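The script is cut off, but a common pitfall here is the quoting of the awk program once it passes through a Perl string, qsub, and the remote shell. One hedged workaround is to put the pipeline into a small wrapper script so no nested escaping is needed (run_filter.sh and filtered.txt are hypothetical names; the pager is dropped because a batch job has no terminal):

    #!/bin/sh
    # run_filter.sh - wrapper submitted via qsub; quotes stay single-layered
    awk '$9 != "NTM" && $9 != ""' myfile.txt > filtered.txt

Then the Perl script only needs to submit qsub run_filter.sh.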