awk

one line using sed and bc together?

I want to add one to the last value at the end of a string using sed. I'm thinking along the lines of cat 0809_data.csv | sed -e 's/\([0-9]\{6\}\).*\(,[^,]*$\)/\1\2/g' | export YEARS = $(echo `grep -o '[^,]*$' + 1` | bc) e.g. 123456, kjhsflk, lksjgrlks, 2.8 -> 123456, 3.8. Would this be more reasonable/feasible in awk? ...
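A one-pass awk sketch for the transformation in the example (0809_data.csv is the file named above; this assumes the wanted output is just the first field plus the incremented last field):

awk -F', *' -v OFS=', ' '{ print $1, $NF + 1 }' 0809_data.csv
# 123456, kjhsflk, lksjgrlks, 2.8  ->  123456, 3.8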

Optimize grep, awk and sed shell stuff

I'm trying to sum the traffic of different ports in the logfiles from "IPCop", so I wrote a command for my shell, but I think it's possible to optimize the command. First, a line from my logfile: 01/00:03:16 kernel INPUT IN=eth1 OUT= MAC=xxx SRC=xxx DST=xxx LEN=40 TOS=0x00 PREC=0x00 TTL=98 ID=256 PROTO=TCP SPT=47438 DPT=1433 WINDOW=16384 RES=...
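The summing itself usually fits in a single awk pass; a sketch assuming the log format shown, with LEN= and DPT= key/value pairs (logfile.log is a placeholder name):

awk '{
  len = dpt = ""
  for (i = 1; i <= NF; i++) {            # scan the key=value pairs on each line
    if ($i ~ /^LEN=/) len = substr($i, 5)
    if ($i ~ /^DPT=/) dpt = substr($i, 5)
  }
  if (dpt != "") sum[dpt] += len         # accumulate bytes per destination port
}
END { for (p in sum) print p, sum[p] }' logfile.log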

awk and/or sed: Only print lines where the second field matches some criteria

I have 1 LINUX param1 value1 2 LINUX param2 value2 3 SOLARIS param3 value3 4 SOLARIS param4 value4 and I need awk to print all lines where $2 is LINUX. Thanks. ...
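A minimal sketch (the file name is a placeholder):

awk '$2 == "LINUX"' file
# with no action block, awk prints every line whose second field is exactly LINUX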

AWK scripting: How to remove the field separator using awk

I need the following output ONGC044 ONGC043 ONGC042 ONGC041 ONGC046 ONGC047 from this input: Medium Label Medium ID Free Blocks =============================================================================== [ONGC044] ECCPRDDB_FS_43 ac100076:4aed9b39:44f0:0001 195311616 [ONG...
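One way to strip the brackets from the first column is to split on the brackets themselves; a sketch assuming every data line starts with a [label] as in the sample (input.txt is a placeholder):

awk -F'[][]' '/^\[/ { print $2 }' input.txt
# the field separator is "[ or ]", so $2 is the label without the brackets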

awk: access captured group from line pattern

If I have an awk command pattern { ... } and pattern uses a capturing group, how can I access the string so captured in the block? ...
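POSIX awk has no capture-group access, but GNU awk's match() accepts a third array argument; a minimal sketch (the regex and file name are made up for illustration):

gawk 'match($0, /id=([0-9]+)/, m) { print m[1] }' file
# m[1] holds the text matched by the first parenthesised group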

Using awk to print all columns from the nth to the last

Right now I have this line, and it worked until I had whitespace in the second field: svn status | grep '\!' | gawk '{print $2;}' > removedProjs Is there a way to have awk print everything in $2 or greater ($3, $4... until we don't have any more columns)? I suppose I should add that I'm doing this in a Windows environment with Cygwin...
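A common sketch is to blank out the first field and print what remains (note that runs of whitespace inside the filename collapse to single spaces with this approach):

svn status | grep '\!' | gawk '{ $1 = ""; sub(/^ +/, ""); print }' > removedProjs
# clearing $1 makes awk rebuild the line from $2 onward; sub() trims the leading blank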

Determine stale data

Say I have a file of this format 12:04:21 .3 12:10:21 1.3 12:13:21 1.4 12:14:21 1.3 ...and so on. I want to find repeated numbers in the second column for, say, 10 consecutive timestamps, thereby finding staleness. 12:04:21 .3 12:10:21 1.3 12:14:21 1.3 12:10:21 1.3 12:14:21 1.3 12:12:21 1.3 12:24:21 1.3 12:30:21 1.3 12:44...
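A counting sketch (data.txt is a placeholder; 10 here means 10 consecutive identical values, so adjust it to the real definition of stale):

awk '$2 == prev  { count++ }
     $2 != prev  { count = 1; prev = $2 }
     count >= 10 { print "stale value", $2, "at", $1 }' data.txt
# count tracks how many consecutive lines repeat the same second-column value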

Delete lines containing a range pattern in the 4th column

In a file, the 4th column contains floating point numbers: dsfsd sdfsd sdfds 4.5 dfsdfsd I want to delete the entire line if the number is between -0.1 and 0.1 (or some other range). Can sed or awk do that for me? Thanks. ...
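awk compares numerically, so a one-line sketch (file is a placeholder; adjust the bounds to the range you need):

awk '!($4 > -0.1 && $4 < 0.1)' file > filtered
# keeps only the lines whose 4th field falls outside the (-0.1, 0.1) range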

Killing a process

I have a for loop to get the list of PIDs and kill each PID. I want to display the entire line of ps output and write it to /tmp/outfile. But each field (PID, PPID, ...) of every ps line is written on its own line in /tmp/outfile. So if ps has three lines of output, I want to log those three lines into ...
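The line-splitting usually comes from word splitting in the for loop; a while read sketch keeps each ps line intact (the process name filter is a placeholder):

ps -ef | grep '[m]yprocess' | while read -r line; do
    echo "$line" >> /tmp/outfile                  # log the whole ps line, not its individual fields
    kill "$(echo "$line" | awk '{print $2}')"     # field 2 of ps -ef is the PID
done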

Using AWK, find the smallest number in the second column bigger than x

I have a file with two columns: sdfsd 1.3 sdfds 3 sdfsdf 2.1 dsfsdf -1 If x is 2, I want to print sdfsdf 2.1. How do I express that in awk (bash or sed is fine too)? ...
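A minimal sketch (file is a placeholder; x is passed in with -v):

awk -v x=2 '$2 > x && (min == "" || $2 < min) { min = $2; line = $0 }
            END { if (line != "") print line }' file
# remembers the line with the smallest second field that still exceeds x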

Uniq in awk; removing duplicate values in a column using awk

I have a large datafile in the following format: ENST00000371026 WDR78,WDR78,WDR78, WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 2, ENST00000371023 WDR32 WD repeat domain 32 isoform 2 ENST00000400908 RERE,KIAA0458, atrophin-1 like protein isoform a,Homo sapiens mRNA for KIAA0458 prote...
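A sketch for de-duplicating the comma-separated values inside one column (this assumes tab-separated columns and that column 2 is the one to clean; adjust -F and the column number to the real layout):

awk -F'\t' -v OFS='\t' '{
  n = split($2, parts, ",")
  out = ""
  split("", seen)                                # clear the lookup table for this line
  for (i = 1; i <= n; i++)
    if (parts[i] != "" && !(parts[i] in seen)) { # keep only the first occurrence
      seen[parts[i]] = 1
      out = (out == "" ? parts[i] : out "," parts[i])
    }
  $2 = out
  print
}' datafile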

rule-based file parsing

I need to parse a file line by line using given rules. Here is the requirement. The file can have multiple lines with different data: 01200344545143554145556524341232131 1120034454514355414555652434123213101200344545143554145556524341232131 2120034454514 The rules can be like this: if byte[0,1] == "0" then extract this line to /tmp/record...
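A dispatch sketch keyed on the first byte of each line (the output file names are illustrative placeholders, since the paths in the question are truncated):

awk '{
  c = substr($0, 1, 1)                   # the first byte decides where the line goes
  if      (c == "0") print > "/tmp/record0"
  else if (c == "1") print > "/tmp/record1"
  else               print > "/tmp/other"
}' inputfile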

search a string in a file while ignoring lines beginning with #

I want to find a string such as "qwertty=" in a file with awk or grep, but I don't want to see the lines with #. Please see the example: grep -ni "qwertty" /aaa/bbb 798:# * qwertty - enable/disable 1222:#qwertty=1 1223:qwertty=2 1224:#qwertty=3 I want to find line 1223. What should the search query be for this purpose? ...
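A sketch that skips commented lines (using the file path from the question):

grep -n '^[[:space:]]*qwertty=' /aaa/bbb                         # anchored at line start, so #qwertty=... is skipped
awk '!/^[[:space:]]*#/ && /qwertty=/ { print NR ": " $0 }' /aaa/bbb   # same idea in awk, with line numbers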

reformat text in perl

I have a file of 1000 lines, each line in the format filename dd/mm/yyyy hh:mm:ss I want to convert it to read filename mmddhhmm.ss I have been attempting to do this in perl and awk with no success and would appreciate any help. Thanks. ...
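An awk sketch of the reshuffle (assuming exactly the three whitespace-separated fields shown; file is a placeholder):

awk '{
  split($2, d, "/")                      # d[1]=dd, d[2]=mm, d[3]=yyyy
  split($3, t, ":")                      # t[1]=hh, t[2]=mm, t[3]=ss
  print $1, d[2] d[1] t[1] t[2] "." t[3]
}' file
# filename 25/12/2010 13:45:07  ->  filename 12251345.07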

Concatenating awk outputs

I'm using regex to parse Nmap output. I want the IP addresses which are up, along with the corresponding open ports. Right now I have a very naive method of doing that: awk '/^Scanning .....................ports]/ {print substr ($2,1,15);}' results.txt awk '/^[0-9][0-9]/ {print substr($1,1,4);}' results.txt | awk -f awkcode.awk where awkcode.awk con...
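The two passes over results.txt can often be merged into one awk program with two pattern blocks; a rough sketch (the patterns and substr calls are taken verbatim from the question, and the output format is only illustrative):

awk '/^Scanning .....................ports]/ { ip = substr($2, 1, 15) }      # remember the host being scanned
     /^[0-9][0-9]/                           { print ip, substr($1, 1, 4) }  # emit host plus port
    ' results.txt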

Extracting a character range from a long line with sed or awk or grep?

So I have one long line of characters, for example the numbers [1-1024] in one line (no "\n", "\t" or "\b"): 1 2 3 4 5 6 7 8 9 10 11 ... 1024 How do I extract and print, for example, exactly 55 characters after character 46? So the output would be: 47 48 49 ... 101 Thanks. ...
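For literal character positions, cut or awk's substr will do; if the example output actually means whitespace-separated fields rather than characters, cut -f works instead (all three are sketches; file is a placeholder):

cut -c47-101 file                        # characters 47 through 101 (55 characters after the 46th)
awk '{ print substr($0, 47, 55) }' file  # same thing with awk
cut -d' ' -f47-101 file                  # if "characters" really means the 47th..101st numbers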

matching files with regular expressions

Dear all, I have an input file with a list of movies (note that there might be some repeated entries): American_beauty__1h56mn38s_ As_Good_As_It_Gets As_Good_As_It_Gets _DivX-ITA__Casablanca_M_CURTIZ_1942_Bogart-bergman_ Capote_EN_DVDRiP_XViD-GeT-AW _DivX-ITA__Casablanca_M_CURTIZ_1942_Bogart-bergman_ I would like to find the corresponding...

awk + Need to print everything (all remaining fields) except $1 and $2

Hi, I have the following file and need to print everything except $1 and $2 with awk. File: INFORMATION DATA 12 33 55 33 66 43 INFORMATION DATA 45 76 44 66 77 33 INFORMATION DATA 77 83 56 77 88 22 . . . The desired output: 12 33 55 33 66 43 45 76 44 66 77 33 77 83 56 77 88 22 . . . ...
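A sketch in the same spirit as the nth-to-last question above (file is a placeholder; the remaining fields come out separated by single spaces):

awk '{ $1 = $2 = ""; sub(/^ +/, ""); print }' file
# emptying $1 and $2 makes awk rebuild the record from $3 onward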

How to pass a variable to an awk print parameter...

I'm trying to extract the nth + 1 and nth + 3 columns from a file. This is what I tried, which is useful pseudocode: for i in {1..100} ; do awk -F "," " { printf \"%3d, %12.3f, %12.3f\\n\", \$1, \$($i+1), \$($i+3) } " All_Runs.csv > Run-$i.csv which obviously doesn't work (but it seemed reasonable to hope). How can I do this? ...
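Passing the loop variable with -v keeps the quoting sane; a sketch using the names from the question:

for i in {1..100}; do
    awk -F',' -v n="$i" '{ printf "%3d, %12.3f, %12.3f\n", $1, $(n+1), $(n+3) }' \
        All_Runs.csv > "Run-$i.csv"      # n is the shell loop index, used inside awk as a variable
done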

awk + sorting a file according to values in the file and writing two different files

Hi, I have a file file_test with values for the right eye and the left eye. How can I split file_test into file1 and file2 with awk, so that records with equal values are written to file1 and records with different values to file2, as in the example below? file_test is: NAME: jim LAST NAME: bakker right eye: >|5|< left eye VALUE: >|5|< NAME: Jorg LAST NAME: mitch...
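A rough sketch, assuming each record sits on one line and both values appear inside >|...|< markers (if a record spans several lines, the same comparison can be done once per record after joining its lines):

awk '{
  n = 0; line = $0
  while (match(line, />\|[^|]*\|</)) {            # pull out every >|value|< marker on the record
    n++
    v[n] = substr(line, RSTART + 2, RLENGTH - 4)
    line = substr(line, RSTART + RLENGTH)
  }
  if (n >= 2 && v[1] == v[2]) print > "file1"     # equal right/left values
  else                        print > "file2"     # different (or missing) values
}' file_test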