views: 250

answers: 6
I'm looking for a way to remove lines from multiple CSV files, in bash using sed, awk, or anything else appropriate, wherever the line ends in 0.

There are multiple CSV files; their format is:

EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLElong,60,0
EXAMPLEcon,120,6
EXAMPLEdev,60,0
EXAMPLErandom,30,6

So the file will be amended to:

EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6

A problem which I can see arising is distinguishing between double-digit values that merely end in zero (such as 10) and 0 itself.

So any ideas?

+8  A: 

Using your file, something like this?

$ sed '/,0$/d' test.txt 
EXAMPLEfoo,60,6 
EXAMPLEbar,30,10 
EXAMPLEcon,120,6 
EXAMPLErandom,30,6
qor72
Exactly, thanks a lot. For some reason I had imagined it would be more complex than that.
S1syphus
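Since the question asks about multiple CSV files, the same filter can be applied to every file in place. A minimal sketch, assuming GNU sed (whose -i flag edits in place; BSD/macOS sed needs `-i ''`) and made-up sample filenames:

```shell
# Work in a scratch directory so the glob only sees our sample files
cd "$(mktemp -d)"

# Two hypothetical sample files for illustration
printf 'EXAMPLEfoo,60,6\nEXAMPLElong,60,0\n' > sample1.csv
printf 'EXAMPLEbar,30,10\nEXAMPLEdev,60,0\n' > sample2.csv

# Delete lines ending in ",0" from each CSV, editing the files in place
for f in *.csv; do
    sed -i '/,0$/d' "$f"
done
```

After this runs, each file keeps only the lines whose last field is not the literal 0.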
+2  A: 

Use sed to remove only the lines ending with ",0":

   sed  '/,0$/d' 
Jürgen Hötzel
+2  A: 

You can also use awk:

$ awk -F"," '$NF!=0' file
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6

This just checks the last field for 0 and doesn't print the line if it's found.

ghostdog74
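Unlike GNU sed, standard awk has no in-place flag, so to amend the files themselves you write to a temporary file and move it back. A sketch over multiple files, using a made-up sample filename:

```shell
# Work in a scratch directory so the glob only sees our sample file
cd "$(mktemp -d)"

# Hypothetical sample file for illustration
printf 'EXAMPLEfoo,60,6\nEXAMPLEdev,60,0\n' > sample.csv

# awk can't edit in place, so write to a temp file, then replace the original
for f in *.csv; do
    awk -F, '$NF != 0' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```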
+4  A: 

For this particular problem, sed is perfect, as the others have pointed out. However, awk is more flexible, i.e. you can filter on an arbitrary column:

awk -F, '$3!=0' test.csv

This will print the entire line if column 3 is not 0.

Dan Andreatta
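A side benefit worth noting: awk's `$3 != 0` is a numeric comparison, so it also filters variants of zero such as "00" or " 0", while values that merely end in 0 (like 10) survive, which addresses the double-digit concern from the question. The sample values below are made up for illustration:

```shell
# Numeric comparison: "00" and " 0" both equal 0 and are dropped; 10 is kept
printf 'a,60,6\nb,30,10\nc,30,00\nd,120, 0\n' | awk -F, '$3 != 0'
# prints:
# a,60,6
# b,30,10
```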
+2  A: 

This variant also tolerates whitespace before the trailing 0:

sed '/,[ \t]*0$/d' file
+2  A: 

I would tend toward sed, but there is an egrep (i.e. grep -E) solution too:

egrep -v ",0$" example.csv 
user unknown