Hello everybody,
I have to solve an exercise using awk. Basically, I need to retrieve from the 'ps aux' command the total memory usage for each user, formatted like this:
User Total%Mem
user1 3.4%
user2 1.5%
and so on.
The problem I can't seem to solve is: how do I know how many users are logged in? And how can I make a di...
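One possible approach: let an awk associative array keyed on the user name do the grouping, so you never need to know in advance how many users there are. A minimal sketch (NR > 1 skips the ps header):
ps aux | awk 'NR > 1 { mem[$1] += $4 }
END { print "User", "Total%Mem"; for (u in mem) printf "%s %.1f%%\n", u, mem[u] }'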
I want to run the following command from a C program to read the system's CPU and memory use:
ps aux|awk 'NR > 0 { cpu +=$3; ram+=$4 }; END {print cpu,ram}'
I am trying to pass it to the execl command and after that read its output:
execl("/bin/ps", "/bin/ps", "aux|awk", "'NR > 0 { cpu +=$3; ram+=$4 }; END {print cpu,ram}'",(char *) ...
I have a space-delimited file.
I need to write an awk command that receives a host name argument
and replaces the host name if it is already defined in the file.
It must be a full match, not a partial one: if the file contains the host name localhost,
searching for "ho" must fail, and the new host name will be added to the end of the file.
anothe...
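A minimal sketch, assuming the host name is the first field of each line and the whole replacement line is supplied by the caller ($host, $newline, and hosts.txt are placeholders); comparing with == gives the exact full-field match that a regex like /ho/ would not:
awk -v host="$host" -v newline="$newline" '
$1 == host { $0 = newline; found = 1 }   # == matches the whole field, so "ho" cannot match "localhost"
{ print }
END { if (!found) print newline }        # not defined yet: append at the end of the file
' hosts.txt > hosts.tmp && mv hosts.tmp hosts.txt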
I'm having trouble using the following code inside my Perl script; any advice on how to correct the syntax would be really appreciated.
# If I execute in bash, it's working just fine
bash$ whois google.com | egrep "\w+([._-]\w)*@\w+([._-]\w)*\.\w{2,4}" |awk ' {for (i=1;i<=NF;i++) {if ( $i ~ /[[:alpha:]]@[[:alpha:]]/ ) { print $i}}}'|head -n1...
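The usual culprit is Perl's own quoting: inside double quotes or plain backticks, the backslashes and the nested quote characters get reinterpreted before the shell ever sees them. A sketch that keeps the pipeline byte-for-byte intact in a single-quoted heredoc and runs it with qx:
my $cmd = <<'CMD';
whois google.com | egrep "\w+([._-]\w)*@\w+([._-]\w)*\.\w{2,4}" | awk '{for (i=1;i<=NF;i++) {if ($i ~ /[[:alpha:]]@[[:alpha:]]/) print $i}}' | head -n1
CMD
chomp(my $email = qx($cmd));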
Hi, I'm looking for an awk script that checks whether its input has proper bracket placement. The brackets used are {}, [], and ().
Every bracket must be closed, and bracket pairs can't be interleaved; an illegal example: ( [ ) ]
...
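A minimal sketch of a stack-based checker: every opener is pushed, and every closer must pair with the most recent opener, which also rejects interleaving like ( [ ) ]:
awk '
BEGIN { openers = "([{"; closers = ")]}" }
{
  for (i = 1; i <= length($0); i++) {
    c = substr($0, i, 1)
    if (index(openers, c))               # push an opening bracket
      stack[++top] = c
    else if (p = index(closers, c))      # a closer must match the top of the stack
      if (top == 0 || stack[top--] != substr(openers, p, 1))
        bad = 1
  }
}
END { print ((bad || top != 0) ? "improper" : "proper") }
' file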
This is one line of the input file:
FOO BAR 0.40 0.20 0.40 0.50 0.60 0.80 0.50 0.50 0.50 -43.00 100010101101110101000111010
And I need an awk command that checks whether a certain position in the bit string in column 13 is a "1" or a "0".
Something like:
awk -v values="${values}" '{if (substr($13,1,1)==1) printf values,$1,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13}' fo...
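If the bit position has to vary, it can be passed in with -v rather than hard-coded into substr; a sketch with a hypothetical pos variable (a bare pattern prints the matching lines, and the printf action above can replace it):
awk -v pos=5 'substr($13, pos, 1) == "1"' file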
I have two text files, and I want to insert the text of one into the middle of the other. I did some research and found information about appending single strings:
The second text file has a comment called STUFFGOESHERE, so I tried:
sed '/^STUFFGOESHERE/a file1.txt' file2.txt
sed: 1: "/^STUFFGOESHERE/a long.txt": command a expects \ followed by te...
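BSD sed (as on OS X) insists that a's text follow a backslash and newline, but for inserting a whole file there is a more direct tool: the r command reads a file in after every matching line. A minimal sketch:
sed '/^STUFFGOESHERE/r file1.txt' file2.txt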
I want to replace the date found at the end of the "datadir" line with the current date.
For example, my my.cnf file looks like this:
# head /etc/my.cnf
[mysqld]
#mount -t tmpfs -o size=102m tmpfs /mnt
#datadir=/mnt
read-only
datadir=/mysqlApr5
#datadir=/mysqlApr2
#datadir=/mysqlMar16
#datadir=/mysqlFeb25a
Most of the lines are comment...
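One approach, assuming GNU sed and GNU date, is to rewrite the date suffix of the one active (uncommented) datadir line in place; %-d, which drops the leading zero to match names like Apr5, is a GNU date extension:
sed -i "s|^datadir=/mysql.*|datadir=/mysql$(date +%b%-d)|" /etc/my.cnf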
The MySQL dump backup file has the following line...
# head -40 backup20-Apr-2010-07-32.sql | grep 'CHANGE MASTER TO '
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000068', MASTER_LOG_POS=176357756;
a) I need to complete the statement with parameters such as the master host, user, and password.
b) I do also need to remove the comment "...
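A sketch with sed that does both at once: strip the leading "-- " and splice the connection parameters into the statement (the host, user, and password below are placeholders):
sed -n "s/^-- CHANGE MASTER TO /CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret', /p" backup20-Apr-2010-07-32.sql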
I've got a file, which looks like:
Coding |2010-04-20 12:52|2010-04-20 14:11
Documentation|2010-04-20 22:56|2010-04-21 01:13
Coding |2010-04-21 09:51|2010-04-21 10:58
Coding |2010-04-21 13:11|2010-04-21 14:21
What's the best way - I'm thinking of awk - to do the time calculations?
As result I expect:
2010-04-20 Coding...
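A sketch using gawk's mktime() to convert each timestamp to epoch seconds, then summing minutes per date and task; this assumes gawk, and it books each interval to its start date (so the Documentation block that crosses midnight counts for 2010-04-20):
gawk -F'|' '
{
  task = $1; gsub(/ +$/, "", task)            # strip the padding after the task name
  split($2, s, /[-: ]/); split($3, e, /[-: ]/)
  from = mktime(s[1] " " s[2] " " s[3] " " s[4] " " s[5] " 00")
  to   = mktime(e[1] " " e[2] " " e[3] " " e[4] " " e[5] " 00")
  mins[s[1] "-" s[2] "-" s[3] " " task] += (to - from) / 60
}
END { for (k in mins) printf "%s %02d:%02d\n", k, mins[k] / 60, mins[k] % 60 }
' file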
I have a text file with contents as below:
1,A,100
2,A,200
3,B,150
4,B,100
5,B,250
I need the output as:
A,300
B,500
The logic here: sum all 3rd fields whose 2nd field is A, and likewise for B.
How could we do it using awk?
...
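An associative array keyed on the 2nd field does this in one pass; a minimal sketch (awk does not guarantee array order, hence the sort):
awk -F, '{ sum[$2] += $3 } END { for (k in sum) print k "," sum[k] }' file | sort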
For instance, I needed to remove column 25 and replace it with a copy of column 22 in a simple CSV file with no embedded delimiters. The best I could come up with was the awkward-looking:
awk -F, '{ for (x = 1; x < 25; x++) printf("%s,", $x); printf("%s,", $22);
           for (x = 26; x < 59; x++) printf("%s,", $x); print $59 }'
I would expect something like
...
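Since the file has no embedded delimiters, awk can simply reassign the field and let OFS rebuild the record. A sketch: assigning $22 to $25 overwrites column 25, and print re-emits all fields joined by commas.
awk -F, -v OFS=, '{ $25 = $22; print }' file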
I have a file containing data in a single column. I have to find the sum of every 4 lines and print it.
That is, I have to compute the sum of lines 0-3, the sum of lines 4-7, the sum of lines 8-11, and so on.
...
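A minimal sketch: accumulate, flush whenever the line number is a multiple of 4, and let END catch a final incomplete group:
awk '{ sum += $1 } NR % 4 == 0 { print sum; sum = 0 }
END { if (NR % 4) print sum }' file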
An enormous equation: the opening | of each pair needs to become \left|, and the corresponding closing | needs to become \right|.
Equation
\begin{equation}
| \Delta w_{0} | = \frac{|w_{0}|}{2} \left( |\frac{\Delta g}{g}|+|\frac{\Delta (\Delta r)}{\Delta r}| + |\frac{\Delta r}{r}| +|\frac{\Delta L}{L}| \right)
\end{equation}
[Pr...
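If the bars strictly alternate open/close with no nesting, as they do in this equation, one sketch is to split each line on | and re-emit \left| and \right| in turn:
awk '{
  n = split($0, part, /\|/)
  out = part[1]
  for (i = 2; i <= n; i++)
    out = out ((i % 2 == 0) ? "\\left|" : "\\right|") part[i]
  print out
}' equation.tex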
EDIT: I don't know in advance at which "column" my digits are going to be and I'd like to have a one-liner. Apparently sed doesn't do arithmetic, so maybe a one-liner solution based on awk?
I've got a string: (notice the spacing)
eh oh 37
and I want it to become:
eh oh 36
(so I want to keep the spacing)
Using awk I don't fi...
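One sketch that preserves the spacing: locate the number with match(), then rebuild the line from the untouched substrings around it (this decrements the first number on each line):
awk 'match($0, /[0-9]+/) {
  $0 = substr($0, 1, RSTART - 1) (substr($0, RSTART, RLENGTH) - 1) substr($0, RSTART + RLENGTH)
} 1' file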
Comment rows are counted in NR.
Is there some flag to ignore comments?
How can I limit which rows awk counts, without piping through sed -e '1d', so that comment rows are ignored?
Example
$ awk '{sum+=$3} END {avg=sum/NR} END {print avg}' coriolis_data
0.885491 // WRONG divided by 11, should be by 10
$ cat coriolis_data ...
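There is no built-in flag, but you can keep your own row count instead of relying on NR; a minimal sketch, assuming comment rows start with #:
awk '!/^#/ { sum += $3; n++ } END { if (n) print sum / n }' coriolis_data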
How can I get all the differences, not just one? I want to use the calculated average with each item in the third column. The dilemma: if I remove END, I can print every $3 but don't yet have ave; if I keep END, I have ave, but $3 then holds only the last row's value.
awk '{sum+=$3} END {ave=sum/NR} END {print $3-ave}' coriolis_data
-0.00964 // I want to see the r...
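A value printed in END sees only the last record, so a common idiom is to read the file twice: the first pass computes the average, the second prints every difference. A sketch:
awk 'NR == FNR { sum += $3; n++; next }
     { print $3 - sum / n }' coriolis_data coriolis_data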
$ cat read.sh
#!bin/bash
// how can I read the columnwise data to awk-script?
awk '{sum+=$1} END {print sum}' read
$ cat data
1
2
3
4
5
$ . ./read.sh <data
awk: cmd. line:1: fatal: cannot open file `read' for reading (No such file or directory)
...
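awk treats every non-option argument as an input file, so the trailing word read sends it looking for a file by that name. Drop it and awk reads the script's standard input; a corrected sketch (also fixing the shebang, which is missing its leading /, and using # for the comment, since bash has no //):
#!/bin/bash
# sum the first column of whatever arrives on stdin
awk '{ sum += $1 } END { print sum }'
$ ./read.sh < data
15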
When searching code for strings, I constantly run into the problem that I get meaningless, context-less results. For example, if a function call is split across 3 lines, and I search for the name of a parameter, I get the parameter on a line by itself and not the name of the function.
For example, in a file containing
...
someFuncti...
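One workaround is to carry context along: remember the most recent line that looks like a function header and print it next to each hit. A rough sketch (someParameter is a placeholder, and the header pattern is only a heuristic):
awk '/[[:alpha:]_][[:alnum:]_]*[[:space:]]*\(/ { context = $0 }
     /someParameter/ { print FILENAME ":" FNR ": " context }' *.c
If fixed-size context is enough, grep -n -B 3 someParameter *.c gets you the surrounding lines with less guesswork.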
I'm trying to convert clean column-wise data to tables in TeX. I am unable to get "\\" appended at the end of each line. Please see the command at the end.
Data
$ . ./bin/addTableTexTags.sh < .data_3
10.31 & 8.50 & 7.40
10.34 & 8.53 & 7.81
8.22 & 8.62 & 7.78
10.16 & 8.53 & 7.44
10.41 & 8.38 & 7.63
10.38 & 8.57 & 8.03
10.13 & 8.66 & 7.41
...
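Inside an awk string literal each backslash has to be written twice, which is usually what goes wrong here; a minimal sketch that appends the LaTeX row terminator \\ to every line:
awk '{ print $0 " \\\\" }' .data_3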