General :: Blocks Column In Edquota Output?
Aug 28, 2010

I know what hard and soft limits are, but what does the "blocks" column in the edquota utility mean?
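For reference, the buffer edquota opens looks roughly like this (values invented for illustration). The blocks column is the user's current usage in 1 KiB blocks, filled in by the quota system; only the soft and hard limits are meant to be edited:
Code:
Disk quotas for user alice (uid 1001):
  Filesystem    blocks   soft    hard   inodes   soft   hard
  /dev/sda3      24840   50000   60000    1732      0      0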
How do I sort the output of du by its first column? I issue the command this way: $ du. Is there a way?
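A sketch: du's first column is already a plain block count, so a numeric sort on it works directly:
Code:
du | sort -n       # smallest first
du | sort -rn      # largest first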
I was trying to redirect the output of two variables to different columns of a .csv file in MS Excel, like this:
Code:
echo "$a $b" > abc.csv
But I am getting both $a and $b in the same column. Is there anything I can use instead of the space to move the value of $b to the next column? Or is there a better approach?
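A sketch: Excel splits .csv columns on commas, so write a comma between the values; quoting each field guards against commas inside the values themselves:
Code:
echo "$a,$b" > abc.csv
# or, with each field quoted:
printf '"%s","%s"\n' "$a" "$b" > abc.csv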
What I need to do is extract one complete column (file size) from the output of ls -lS, but the size column is padded with a variable number of spaces: a 4000-byte size has one space before it, a 400-byte size two, and a 30-byte size three. So when I extract with | cut -d ' ' -f5, I only get the value preceded by a single space, i.e. 4000, and the others are missed. How do I get the file size column from the ls output?
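A sketch: awk splits fields on runs of whitespace by default, so the padding doesn't matter; NR > 1 skips the "total" line:
Code:
ls -lS | awk 'NR > 1 { print $5 }'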
I have a program that prints out lines like:
And I want to be able to pipe it to sort on that third column, by letter first, then number. But I keep getting files sorted like:
(field separations all start at the same place, so columns are not jagged like above.)
I have read the sort man pages and have tried -n for the numbers and -k for the position to start sorting, among other things. I also tried giving a second sort key, which sort should supposedly fall back on when two entries are identical at the first key, but it seems to just ignore the second one. I just can't get it to sort the numbers properly.
For now I am manually opening the file in emacs and moving lines around, which is, needless to say, very time consuming.
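Hard to be precise without the sample lines, but assuming the third column is a letter followed by a number (a2, b10, ...), splitting the key at the second character makes the letter part sort lexically and the number part numerically:
Code:
sort -k3.1,3.1 -k3.2,3n file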
I have an existing quota setup which works fine for 2 users.
Say I want to add a new user, user3. Why is using "edquota -u user3" alone insufficient? Do I need to run quotacheck after running edquota for the new user?
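A sketch of the usual approach: if quotas were already on when user3's files were created, the kernel has been tracking usage all along, so setting limits with edquota is enough, and quotacheck is only needed after accounting was off while the filesystem changed. Copying a prototype user's limits (user1 as prototype is an assumption) avoids the editor entirely:
Code:
edquota -p user1 -u user3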
How would I calculate the following values? Initial file:
10 3
20 4
How would I calculate a 3rd column that is the sum of the values in the 1st and 2nd columns? File after calculation:
10 3 13
20 4 24
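A sketch with awk, printing the sum of the first two fields as a third column:
Code:
awk '{ print $1, $2, $1 + $2 }' file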
I use this script to get the time and date of back and forth transactions for a particular execution ID. I use a substr command on the 5th column to cut the milliseconds off the time value; otherwise the times would look like 08:30:04.235.
grep <executionID> <auditfile> | awk '{ print $1, $2, $3, $4, substr($5,1,8) }'
FIX -> Mon 3/1/2010 08:30:04
FIX <- Mon 3/1/2010 08:32:36
FIX <- Mon 3/1/2010 08:35:08
Anyhow, I append two sed commands to further clarify the direction of the message:
awk '{ print $1, $2, $3, $4, substr($5,1,8) }' | sed -e 's/->/-> IN/g' | sed -e 's/<-/<- OUT/g'
FIX -> IN Mon 3/1/2010 08:30:04
FIX <- OUT Mon 3/1/2010 08:32:36
I tried using an awk gsub() command within the string instead of the two seds, but it did not work:
awk '{ print gsub(/<regex>/, <replace with>, $1), $2, $3, $4, substr($5,1,8) }'
The sed works OK, but it would be cooler to make the replacement within the awk command:
gsub(/->/, "-> IN", $1)
Is there a way I could replace the value of the $1 column in the awk print string?
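A sketch: gsub() edits the field in place and returns the number of substitutions, so call it before the print rather than inside it. Note that in the sample lines the arrow looks like field $2 rather than $1:
Code:
grep <executionID> <auditfile> | awk '{ gsub(/->/, "-> IN", $2); gsub(/<-/, "<- OUT", $2); print $1, $2, $3, $4, substr($5,1,8) }'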
I have multicolumn data, like
a1 b1 ... f1
a2 b2 ... f2
. . ... .
I would like to make a file with all this data in one column, like
a1
a2
.
.
Can it be done with awk or some other command? Also, is it possible then to add another column in front of this one with the line numbers (for every previous column), like
1 a1
2 a2
. .
. .
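A sketch in awk, assuming the file is rectangular (every row has the same number of fields) and is named data.txt: buffer the table, emit column 1's values, then column 2's, and so on, and let cat -n prepend line numbers:
Code:
awk '{ for (i = 1; i <= NF; i++) v[i, NR] = $i; cols = NF }
     END { for (i = 1; i <= cols; i++) for (j = 1; j <= NR; j++) print v[i, j] }' data.txt | cat -n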
I have a two-column file and I need to create a new column in between the first and second, where the new column adds a value to the first.
I thought I had figured out how to do it with the following but it just hangs:
awk -F '{print $0,$0+25,$1}' file_in > file_out
Also tried the following to no avail:
awk -F,-v OFS,'{print $0,$0+25,$1}' file_in > file_out
I can easily add the new column with the added value as the last column (awk '{print $0,$0+25}' file_in > file_out).
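For what it's worth, the first attempt hangs because -F was given no separator argument: awk takes '{print $0,$0+25,$1}' as the field separator, treats file_in as the program text, and then sits waiting for input on stdin. A sketch of the intended insertion (using $1 and $2, the actual fields):
Code:
awk '{ print $1, $1 + 25, $2 }' file_in > file_out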
I've got a new hard drive, formatted it to ext3, and made a check for bad blocks using e2fsck.
It gave me this:
I would just like to know where I can find how many bad blocks were found (perhaps one, if the singular in the sentence "Updating bad block inode." is significant?), and what the number(s) of the located bad block(s) are.
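The list e2fsck built is stored in the bad block inode, and dumpe2fs can print it (the device name below is an assumption):
Code:
dumpe2fs -b /dev/sdb1           # list the recorded bad blocks, one per line
dumpe2fs -b /dev/sdb1 | wc -l   # count them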
I have not defined a user vimrc; the OS is Red Hat 4.6. After a search and replace, the first column in the editor is highlighted yellow, and it stays that way as I close and reopen vim. Below is the /etc/vimrc that came with the system. Does anyone see a bug, or a reason it would do that?
if v:lang =~ "utf8$" || v:lang =~ "UTF-8$"
set fileencodings=utf-8,latin1
endif
set nocompatible " Use Vim defaults (much better!)
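A guess rather than a diagnosis, since the rest of the vimrc is not shown: the stock Red Hat /etc/vimrc usually turns on 'hlsearch', and the highlight from the last search or :s pattern lingers (viminfo can even restore it across sessions). Clearing or disabling it:
Code:
" clear the lingering highlight from the last search/substitute:
:nohlsearch
" or disable search highlighting entirely (e.g. in /etc/vimrc):
set nohlsearch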
1. When we run the od command it displays octal values, but the first column is always 000000, and after that the actual file contents are displayed. Can anyone tell me the meaning of that?
2. When we run the ls -l command, in the first line of the output we see an integer value. What is the significance of that value?
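An illustrative session for both questions (output details invented): od's first column is the byte offset within the file, printed in octal, and ls -l's "total" line is the disk space allocated to the listed files, in 1K blocks by default:
Code:
$ echo hi > f
$ od -b f
0000000 150 151 012
0000003
# 0000000 and 0000003 are octal byte offsets, not file data
$ ls -l
total 4
-rw-r--r-- 1 user user 3 ...  f
# "total 4" = blocks allocated to the listed files, in 1K units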
I have a series of files, which are actually named as below:
<day_month_yr>_<hh:mm:ss>.<host_name>.<IP.ip.ip.ip>.<log_name>.txt
I want to show these file names under columns, maybe like:
Date Time Host_Name IP_address Login_Name
----- ---- --------- ---------- ----------
<day_month_yr> <hh:mm:ss> host_name x.x.x.x log_name
...
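A sketch of one way to tabulate the names, assuming the host name itself contains no dots, so splitting on '.' yields exactly 8 parts (date_time, host, four IP octets, log name, txt):
Code:
ls *.txt | awk -F'.' 'NF == 8 {
    split($1, dt, "_")          # day, month, yr, hh:mm:ss
    printf "%-12s %-9s %-12s %-15s %s\n",
           dt[1] "_" dt[2] "_" dt[3], dt[4], $2, $3 "." $4 "." $5 "." $6, $7
}'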
I'm trying to use awk to match only in a specific column.
example.txt:
www.google.com www.example.com
www.google.com/search www.example2.com
I used:
awk '{ if ($1 == "http://www.google.com") print $2 }' example.txt
This awk statement only returns the first line, and I can't seem to make it match on keywords the way grep does. Is there any way to make it display the other lines that contain "google" as well?
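A sketch: use a regex match on the field instead of string equality, and awk behaves like grep restricted to column 1:
Code:
awk '$1 ~ /google/ { print $2 }' example.txt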
It is very important for my research work. For example mydata.txt:
id type x y z
1 6 0.474611 0.227223 0.583947
2 4 0.422894 0.22726 0.536791
3 5 0.448963 0.200148 0.560336
4 3 0.386478 0.207721 0.515293
5 6 0.371617 0.22361 0.582206
6 4 0.32123 0.222999 0.534782
How do I change second-column (type) values of 4 and 3 to the value 1, so that the mydata.txt file becomes:
id type x y z
1 6 0.474611 0.227223 0.583947
2 1 0.422894 0.22726 0.536791
3 5 0.448963 0.200148 0.560336
4 1 0.386478 0.207721 0.515293
5 6 0.371617 0.22361 0.582206
6 1 0.32123 0.222999 0.534782
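A sketch that skips the header row and rewrites the type column (note that awk rejoins edited rows with single spaces, which matches this data anyway):
Code:
awk 'NR > 1 && ($2 == 3 || $2 == 4) { $2 = 1 } { print }' mydata.txt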
I have a basic question about awk.
Code:
var=/test/build/create/sls
echo $var | awk '{ FS = ":" ; print $NF }'
/test/build/create/sls
I am trying to extract the last column with the above awk one-liner.
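Two problems in that attempt: FS assigned inside the main block only takes effect from the next record, and ":" never occurs in the path anyway, so the whole line stays one field. Setting the separator to "/" before the line is split:
Code:
echo "$var" | awk -F'/' '{ print $NF }'
# prints: sls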
I want to use awk to modify a file like this.
origin:
A[]A[][]A[]A
modified:
A[]a[][]A[]A
but when I use
awk '{$2="a"; print $0}' inputfile
the output is
A[]a[]A[]A
where [] means a space. This is not what I want. I guess that is because OFS is a single space by default, but I really don't know how to solve this.
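A sketch that preserves the original spacing; it relies on gawk's four-argument split(), which also hands back the separators between fields (a gawk extension, not portable awk):
Code:
gawk '{
    n = split($0, f, " ", seps)   # seps[i] = whitespace after f[i]
    f[2] = "a"                    # edit the field without touching $0
    out = seps[0]                 # leading whitespace, if any
    for (i = 1; i <= n; i++) out = out f[i] seps[i]
    print out
}' inputfile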
I need to replace a value in a file. For example the content of data.txt file is:
1 1 23
2 1 42
3 2 52
4 2 62
5 1 77
6 1 88
7 2 99
8 1 100
Could I substitute the 2s in the second column with 3, using awk and/or sed or another command, so that the data changes as follows?
1 1 23
2 1 42
3 3 52
4 3 62
5 1 77
6 1 88
7 3 99
8 1 100
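A sketch with awk, which handles the fields more simply than anchoring a sed pattern on the surrounding whitespace would:
Code:
awk '$2 == 2 { $2 = 3 } { print }' data.txt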
I'm working on a ~1 TB disk, loaded with all kinds of images and documents, that lost its HFS+ partition table. The person for whom I'm doing the favor of running scalpel says there's likely 90GB of stuff. Somehow, the disk got relabeled/the MBR changed to some FAT variant that spans the whole terabyte.
Attempts to recover the partition info failed. My first try with scalpel finds more than 90GB of image file headers alone, and that blows through all of my storage. Of the headers found and recovered as images, a simple test shows most of the image files are broken. The cluster size option does not work if I use it by itself; it errors out before it gets going. I want to speed things up and skip the countless broken image files.
Can someone explain how to determine the number of blocks, and from that the number of cylinders, for a new partition on a hard drive?
Why is block size divided by 1024?
I think I understand that unit size is the total bytes per cylinder; I get that. I understand the anatomy of the hard drive (i.e. heads, sectors, cylinders).
My problem is that I need to calculate the number of cylinders needed for, let's say, a 20G partition on a 120G drive.
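A worked sketch, assuming the classic 255-head, 63-sector geometry that fdisk reports: one cylinder holds 255 * 63 * 512 = 8225280 bytes, and block counts are shown in 1024-byte units, which is where the division by 1024 comes from. For a 20 GB partition:
Code:
echo $(( 20 * 1000 * 1000 * 1000 / (255 * 63 * 512) ))   # ~2431 cylinders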
How can I extract the 5th column from a file, without the header?
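A sketch: drop the header with tail, then let awk pick the field:
Code:
tail -n +2 file | awk '{ print $5 }'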
Ubuntu 10.10
Gnome desktop
In a text file (.txt), it is possible to highlight and copy a row of a table. But how does one highlight and copy a column?
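If a command-line route is acceptable, awk can pull a column and xclip can put it on the clipboard (column 2, the file name, and an installed xclip are assumptions here):
Code:
awk '{ print $2 }' table.txt | xclip -selection clipboard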
I have this piece of code with some template strings.
Code:
Big_L: $Big_L
$Big_R
$Lambda_tf
$Epsilon_1
$mu
$n_0
$ms
$Delta_R
$Epsilon_2
$Lambda_d
$Epsilon_3
$Small_N
$Small_Q
How can I insert exactly the same template string in front of each line, but without the '$' sign (see the first line for an example)?
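A sketch with sed, following the Big_L example (template.txt is an assumed file name): capture the name after the '$' and emit it again as a prefix:
Code:
sed 's/^\$\(.*\)/\1: $\1/' template.txt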
I have a text file and I need to replace the 3rd column of that file, from row 3 to the end of the file, with a column that I have stored in a different text file. E.g. the original file is as given below:
a.txt nobla 6 gadf 72.500 1.600 1.800 .850 5.250 8.540
A# rad ang ht prf bk sd dia type blade
1 0.3081 9.00 1.9235 -17.50 18.00 -3.00 0.6250 1613 1
2 0.6509 194.00 2.0316 -17.50 18.00 -3.00 0.6250 1613 4
3 1.0128 8.00 2.1457 -17.50 18.00 -3.00 0.6250 1616 1
4 1.3748 192.00 2.2598 -17.50 18.00 -3.00 0.6250 1616 4
5 1.6986 7.00 2.3619 -17.50 18.00 -3.00 0.6250 1616 1
6 1.9347 120.00 2.4364 -17.51 18.00 -3.00 0.6250 1616 5
7 2.1327 190.00 2.4988 -17.34 18.00 -3.00 0.6250 1616 4
So let's say I want to replace column 3, from row 3 onward, with the data from another file, given below:
54.00
239.00
53.00
237.00
52.00
165.00
235.00
So the final output file should be like this:
a.txt nobla 6 gadf 72.500 1.600 1.800 .850 5.250 8.540
A# rad ang ht prf bk sd dia type blade
1 0.3081 54.00 1.9235 -17.50 18.00 -3.00 0.6250 1613 1
2 0.6509 239.00 2.0316 -17.50 18.00 -3.00 0.6250 1613 4
3 1.0128 53.00 2.1457 -17.50 18.00 -3.00 0.6250 1616 1
4 1.3748 237.00 2.2598 -17.50 18.00 -3.00 0.6250 1616 4
5 1.6986 52.00 2.3619 -17.50 18.00 -3.00 0.6250 1616 1
6 1.9347 165.00 2.4364 -17.51 18.00 -3.00 0.6250 1616 5
7 2.1327 235.00 2.4988 -17.34 18.00 -3.00 0.6250 1616 4
I will post whatever code I have tried soon. I started with the awk and cut commands but never got it to work, and I also tried the paste command.
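A sketch with a two-file awk: the first pass loads the replacement column into an array, the second swaps it in from row 3 onward (the file names orig.txt and newcol.txt are assumptions; note that reassigning $3 rejoins the row with single spaces):
Code:
awk 'NR == FNR { new[FNR] = $1; next }
     FNR >= 3  { $3 = new[FNR - 2] }
     { print }' newcol.txt orig.txt > final.txt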
I want to sort the 3rd column of a table numerically (no actual borders, only tabs separating the columns).
It should be something like this, but I can't get it right:
sort -u +1 -3 results
The file is called results.
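A sketch: tell sort the separator is a tab and bound the key to field 3 alone, compared numerically (drop -u unless you really want lines with duplicate keys removed):
Code:
sort -t$'\t' -k3,3n results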
I have a file like:
ER- V67
ER+ V68
ER- V69
ER+ V70
I am using the code:
sort -k1
but it ends up sorted by the second column instead.
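The likely culprit: -k1 starts the key at field 1 but runs to the end of the line, and in many locales collation skips punctuation, so "ER-" and "ER+" compare equal and the second column decides. Restricting the key, and forcing byte-wise comparison so '+' and '-' count:
Code:
sort -k1,1 file
LC_ALL=C sort -k1,1 file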
I have 2 large text files; one looks like this:
<contact type="1">blahblah@hotmail.com</contact>
<contact type="1">blahblah2@hotmail.com</contact>
<contact type="1">blahblah3@hotmail.com</contact>
The other is a list of emails in single-column format, like this:
emailaddy@hotmail.com
emailaddy2@hotmail.com
emailaddy3@hotmail.com, etc.
Is there a command to delete all the blahblah emails from text file 1 and replace them with the ones from text file 2?
Or maybe is there a Linux version of 'CSVed', which has the ability to add, remove, and insert columns?
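A sketch along those lines, assuming the two files hold the same number of entries in the same order (emails.txt and contacts.txt are assumed names): read the new addresses into an array, then splice each one in between the > and < of the matching line:
Code:
awk 'NR == FNR { email[FNR] = $1; next }
     { sub(/>[^<]*</, ">" email[FNR] "<"); print }' emails.txt contacts.txt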
I ran fsck -c on the (unmounted) partition in question a while ago. The process was unattended and the results were not stored anywhere (except in the bad block inode). Now I'd like to get the bad block information, to know whether there are any problems with the hard drive. Unfortunately, the partition is used in a production system and can't be unmounted.
I see two ways to get what I want: run badblocks in read-only mode, which will probably take a lot of time and put unnecessary burden on the system, or somehow extract the information about bad blocks from the filesystem itself. How can I view the known bad blocks registered in a mounted filesystem?
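dumpe2fs reads the block device directly rather than going through the mount, so it can print the bad block inode's contents while the filesystem is in use (the device name is an assumption):
Code:
dumpe2fs -b /dev/sda3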
I'm trying to RMA a month-old SSD, and they're giving me a hassle about it. The drive currently seems to work just fine, but I'm 95% sure that a few blocks went bad and corrupted some data about a week ago. I was able to mostly recover the data and correct for the bad blocks, but I don't really trust the drive anymore.
I'm running an up to date Debian Squeeze install with ext4 on this drive. My system started doing some bizarre things, to the point that it was unusable, so I rebooted it. As it was booting up, it complained about needing an fsck, which found dozens of non-trivial errors that it was mostly able to fix. It then proceeded to boot normally, except the drive mounted itself as read only (due to errors). Another fsck turned up a similar number of problems. This happened a couple of times before I ran fsck with '-c', which is supposed to scan for and work around bad blocks. That seemed to fix the problem, it hasn't given any more problems since then.
The manufacturer is refusing to RMA the drive unless it's completely unmountable right now this minute, saying that it was a one time problem that could have been caused by anything. Am I right in thinking that the problem has to have been with the drive if 'fsck -c' fixed it, or could something else be going on? If it was the drive, am I somehow being unreasonable in asking for a new one while the current one is "working"?