General :: Redirect From A File Into An Actual For Loop?
Apr 11, 2011
I multi-boot several Linux distributions with an assortment of additional data partitions. I get frustrated whenever fsck is forced during boot. (It ONLY happens when I'm in a hurry, don't you know...) So I wrote a script to automate forced fscking when I do have the time (and/or while I'm doing something else in another workspace).
Because I multi-boot, I've learned that udev doesn't always assign the same device name to each drive for all distributions. I've had the same partition identified as hda5, sda5, and sdb5 by different distributions (without doing anything to affect the boot order). So my solution is to keep a list of partitions in a specific file on each distro, with valid device names according to that distribution's udev process. Actually I'd use LABEL= instead, but the labels don't show up in /etc/mtab, and I like to make sure a partition isn't mounted before I try to fsck it.
I can make this work in a for loop using cat. But I've seen so many things about NOT using cat that I wanted to rebuild my script. I can make this work with a redirect instead of cat via a while loop, but I "LIKE" old-style for loops, and I can't seem to find a way to make a redirect work with one. I thought this might make a good first LinuxQuestions.org question. I'm also open to any other suggestions on better/alternative methods... Is it possible to redirect from a file into an actual for loop?
My script is as follows:
Code:
#!/bin/bash
# FsckEm is a script to force file system checking on unmounted ext2/ext3/ext4
# partitions in a preselected list. FsckEm accepts no options. Partition
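A minimal sketch of the kind of loop in question, assuming the list file holds full device paths like /dev/sda5, one per line (the path /etc/fscklist is made up). bash can substitute a file's contents directly with $(< file) instead of cat:

Code:
#!/bin/bash
list=/etc/fscklist            # hypothetical per-distro partition list

# $(< "$list") is bash's built-in replacement for $(cat "$list")
for part in $(< "$list"); do
    # skip anything that is currently mounted
    if grep -q "^$part " /etc/mtab; then
        echo "$part is mounted, skipping"
        continue
    fi
    fsck -f "$part"
done

The while-loop equivalent would be while read -r part; do ... done < "$list".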
I have a large number of folders that each contain quite a few files of varying sizes (from a few bytes to 400kb or so), mostly smaller ones. I need to get the actual (not the disk usage) size of these folders. Is there any way to do this with a command like 'du'?
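For what it's worth, GNU du can report the apparent size (the sum of the file sizes) rather than the allocated disk blocks; a sketch, assuming GNU coreutils:

Code:
# apparent size, human readable
du -sh --apparent-size /path/to/folder
# apparent size in exact bytes (-b implies --apparent-size)
du -sb /path/to/folder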
I'm confused about the "hard link" feature. I've been learning from my UNIX Academy DVD training that a file can have many hard links, and each of them is an effective filename for the associated data. So let's assume we have some very sensitive data in a file that we want deleted, and the file has 20 links. I "delete" the file, but in fact I've deleted only one "name" for it. My understanding from the training is that the data is still there until we delete the last associated hard link. But how can I find the names of all of them? If we have the names, they can be removed one by one. Or maybe there's a command that can trace all the "names" and remove them at once?
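One hedged approach: every hard link to a file shares the same inode, so find can list all the names on that filesystem; a sketch:

Code:
# every directory entry pointing at the same data (stay on one filesystem with -xdev)
find / -xdev -samefile /path/to/sensitive_file 2>/dev/null

# or look it up by inode number
ino=$(stat -c %i /path/to/sensitive_file)
find / -xdev -inum "$ino" 2>/dev/null

# the current link count is visible with
stat -c %h /path/to/sensitive_file

Deleting every name found this way removes the last reference, though the blocks are only unlinked, not wiped.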
I use Amarok 2 and I have a lot of files that are titled "Track #.mp3". In Amarok I have changed the tags so they show up as the real songs, but the actual files are still named the same. Is there a way to change the actual file names using Amarok to match the tags I have inside of Amarok? The reasons why I'd want to do this:
1. If my home folder becomes corrupt I don't have to redo hundreds of songs (I have a backup, but nonetheless).
2. If I ever decide to use another program, or if I'm in W7 using Windows Media Player Classic, it'd be nice to have it recognize the correct files without having to double up on the tag editing.
If this isn't possible I'm going to wishlist it, because I think it's useful, and having a bunch of Track# files is a pain but impossible to get around when you have a lot of mix CDs.
I am trying to grep multiple numbers from a file; grep does have the -f option for that.
Code: grep -f <`seq 500 520` /etc/passwd
I know this could be done with
Code: for i in `seq 500 520`; do grep "$i" /etc/passwd; done
But my question goes far beyond this example: is it possible to redirect one command's output so that it is treated as the content of a file for another command?
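In bash this is exactly what process substitution does: <(command) is replaced by the name of a file-like object holding the command's output, which grep -f can read. A sketch (note the parentheses rather than backticks):

Code:
grep -f <(seq 500 520) /etc/passwd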
I was trying to redirect the output of two variables to different columns of a .csv file in MS Excel like this,
Code: echo "$a $b" > abc.csv
But I am getting both $a and $b in the same column. Is there anything I can use instead of the space to move the value of $b to the next column? Or is there a different, better approach to do it?
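A hedged sketch: Excel splits CSV columns on the field separator, not on spaces, so writing a comma (or a semicolon, in some locales) between the two values should land $b in the next column:

Code:
echo "$a,$b" > abc.csv      # $a in column 1, $b in column 2
# some localized Excel installs expect semicolons instead:
# echo "$a;$b" > abc.csv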
I am browsing our repository and I want to get this folder, but all of the files there have ",v" at the end of their filenames, and if you open each file, it has some version-control header data before the actual content of the file. I want to extract the actual content, i.e. turn my_file.c,v into my_file.c. Is there a command to do this?
I am having lock errors and permission errors, so I cannot check out manually using CVS.
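The ,v files are RCS archives (CVS stores its history in that format), so if the RCS tools are installed, co -p can print the head revision without the version-control headers; a sketch:

Code:
# check out the latest revision of every ,v file to a plain file
for f in *,v; do
    co -p "$f" > "${f%,v}"      # my_file.c,v -> my_file.c
done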
I have a python script that when run outputs to screen.
e.g.
Code: ./international_sms_check.py 0403000511 919227434827
TS 21 check ok
TS 22 check ok
sms successfully delivered from 61403000511 to 919227434827
But when I try: ./international_sms_check.py 0403000511 919227434827 > test
the file test is created but there is nothing in it. If I try ls > test, this works fine, with the output of ls redirected to the file test.
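Two hedged guesses, since the script itself isn't shown: the messages may be going to stderr rather than stdout, or Python may be buffering stdout when it isn't a terminal:

Code:
# capture stderr as well as stdout
./international_sms_check.py 0403000511 919227434827 > test 2>&1

# run with unbuffered output
python -u ./international_sms_check.py 0403000511 919227434827 > test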
I need to run ./pythonScript keyword one time for each keyword in a text file; how can I do this from a GNOME terminal (without having to modify the pythonScript)?
pseudo code:
for each keyword in file:
    ./pythonScript keyword
    waitfor(pythonScript to finish)
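A sketch of the loop in bash, assuming the keywords live one per line in a file called keywords.txt (the file name is a placeholder):

Code:
while IFS= read -r keyword; do
    ./pythonScript "$keyword"    # the loop waits here until the script exits
done < keywords.txt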
I have used this in the past for csh:
Code: ./a.out |& tee prints.txt
which will redirect all printfs in the C program to the prints.txt file and at the same time show them in the console. How do you do this in bash? I have seen this, [URL] but it does not work for my bash and sh shells. It says:
Code: -bash: syntax error near unexpected token `&'
and
Code: -sh: syntax error: unexpected "&"
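csh's |& pipes stdout and stderr together; the equivalent in bash/sh is to duplicate stderr onto stdout before the pipe (bash 4 also accepts |&, which is probably why the copied line fails in older shells):

Code:
# works in bash and plain sh
./a.out 2>&1 | tee prints.txt

# bash 4+ shorthand
./a.out |& tee prints.txt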
I'm working on some scheduled task script files to keep nightly backups of some of our database information in place, and it's a bit annoying when they blow up. I know how to redirect stdout and stderr to a flat file I can view when I come in, and I know that 2>&1 maps them both to the same file (whatever was named in 1). However, I'm running into some cron-time situations where it's easier to have the two streams together, and other cron-time situations where it's easier to have them separated. I can't really tell which is going to happen; is there some way I could create both kinds of output file for my scripts, so that I've got a std_err only file and an interleaved std_out/std_err file?
Note: I've looked at the 'tee' command, but I don't think it will work for what I'm after. 'tee' appears to only work with stdout; I'm trying to work with stderr.
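tee can in fact be pointed at stderr through process substitution; a sketch, assuming bash is the shell running the cron job and the file names are placeholders:

Code:
#!/bin/bash
: > combined.log              # start the interleaved log fresh
./nightly_backup.sh \
    >> combined.log \
    2> >(tee errors.log >> combined.log)

Both redirections append, so the two writers don't clobber each other; the interleaving in combined.log is only as exact as the programs' buffering allows. For cron, SHELL=/bin/bash is needed in the crontab for the >( ) syntax to be available.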
#
# insert the line '-A INPUT -i eth0 -j ACCEPT'
# in iptables
#
/-A INPUT -i eth0 -j ACCEPT/a -A INPUT -i edge0 -j ACCEPT
but when I run sed -f script iptables, it just echoes the file to the screen with my line added, and not into the actual file. Does anyone know what I am doing wrong?
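sed writes to stdout by default; to change the file itself the output has to go back into it, either with GNU sed's -i or via a temporary file. A sketch:

Code:
# in place, keeping a backup copy
sed -i.bak -f script iptables

# portable alternative
sed -f script iptables > iptables.new && mv iptables.new iptables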
I cannot redirect output from commands such as iptables, iptables-save, and ifconfig. For example, none of the following works (as root):
Code:
iptables > tmp
iptables-save > tmp
ifconfig > tmp
The file tmp is ALWAYS blank, that is, 0 bytes in size. Wackier things DO work, such as:
Code:
echo "`iptables-save`" > tmp
iptables-save | tee tmp
Other commands like:
Code:
ls > tmp
DO work as expected.
Note that this problem happens regardless of whether I log in remotely via ssh or locally on the computer in question. I am clueless as to what is causing this. Any ideas? The box is running 2.6.25-14.fc9.i686 and boots to runlevel 3. The modifications I've made to the box since installing the OS are things like compiling/installing the latest OpenSSH, OpenSSL, httpd, BerkeleyDB, subversion, zlib, etc. -- nothing really out of the ordinary, I'd say.
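Hard to say what the cause is from here, but two hedged checks: whether a shell function or alias is shadowing the real binaries, and whether the text is actually arriving on stderr rather than stdout:

Code:
type -a iptables iptables-save ifconfig   # alias/function shadowing the binary?
iptables-save > tmp 2> tmp.err            # does the output land in tmp.err instead?
ls -l tmp tmp.err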
I want to loop through the records in the below file (homedir.temp):
/home/user1
/home/user2
/home/user3
I want to do the following activities with each record:
1. du -s - to get the total usage for that directory (my variable name is SIZE)
2. divide SIZE by du -c for /home to get the percentage of usage (my variable name is PER)
3. write the directory, SIZE, and PER to a file
PROBLEM: I am using the below for loop:
for record in homedir.temp
...the mentioned activities...
done
The above is not looping through the records. It does the first record perfectly and exits the loop.
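A hedged sketch of the loop, reading homedir.temp with a while loop so every record is processed (the report file name is a placeholder). Note that "for record in homedir.temp" iterates over the literal word homedir.temp rather than the lines inside the file, which is why it runs only once:

Code:
#!/bin/bash
# grand total for /home, in KB, used for the percentage
TOTAL=$(du -s /home | awk '{print $1}')

while IFS= read -r record; do
    SIZE=$(du -s "$record" | awk '{print $1}')
    PER=$(( SIZE * 100 / TOTAL ))
    echo "$record $SIZE ${PER}%" >> homedir.report
done < homedir.temp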
I am again struggling to make a script work, but hey, it is fun, I am learning new things. I discovered the set -x option, which was, for me, like the second coming. Still, what I am not able to do is redirect ALL output to a (log) file, including what is produced by the -x setting. Let's assume a very simple script:
Code:
#!/bin/bash
set -x
source="/home/atelier/Bureau/"
ls -la $source
and I am running it as . test.sh >> /var/log/test.rmcb.log
The result of ls indeed goes into the log file, but the rest still shows on the console where I am running the script:
Code:
++ source=/home/atelier/Bureau/
++ ls --color=auto -la /home/atelier/Bureau/
Is there a way to redirect EVERYTHING to the log file?
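The trace that set -x produces goes to stderr, which is why it stays on the console; redirecting both streams captures everything:

Code:
# from the command line
. test.sh >> /var/log/test.rmcb.log 2>&1

# or inside the script itself, right after the shebang
exec >> /var/log/test.rmcb.log 2>&1
set -x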
I'm trying to find a way in Red Hat to see if there is a command that could tell me the actual memory used by the system. For example, when I do the free command, I want to see the Used value minus the cached value. Is there a way Linux can report the true used memory and not the cached/buffered amount? If there is no specific command for that, can someone tell me a bash script that could calculate used minus cached?
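One hedged way is to compute it from /proc/meminfo directly, which avoids depending on which column layout the installed free uses (older procps prints the same figure on its "-/+ buffers/cache" line):

Code:
# used memory excluding buffers and page cache, in MB
awk '/^MemTotal:|^MemFree:|^Buffers:|^Cached:/ {m[$1]=$2}
     END {print (m["MemTotal:"]-m["MemFree:"]-m["Buffers:"]-m["Cached:"])/1024 " MB"}' /proc/meminfo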
I have 50 files named NSSAVE0001.vtk to NSSAVE0050.vtk. Do I have to manually type an individual command to open each file, or can I use a loop to open them?
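A loop can generate the numbered names; the viewer below is a placeholder, since the question doesn't say which program opens the .vtk files:

Code:
for f in $(seq -f "NSSAVE%04g.vtk" 1 50); do
    your_viewer "$f"     # replace your_viewer with the actual program
done

If the program accepts several files at once, a simple your_viewer NSSAVE00*.vtk may be enough.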
I want to redirect the output of a command to a file, but not at the end of the file; I want it inserted after a specific line. Do you know how I can do it?
Something like:
cat file_a | grep some_text >> resulting_file
# in this file I need to place the output from grep, but not at the bottom of resulting_file like it would normally happen, but after line 3, for example
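GNU sed's r command reads a file in after a given line, so one hedged approach is to capture the grep output first and then splice it in after line 3:

Code:
grep some_text file_a > /tmp/grep_out
sed -i '3r /tmp/grep_out' resulting_file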
I'd like to redirect the output to a file and to the console. I know about tee, but the issue is that it waits until the first process finishes. E.g. echo "hello world" | tee test.txt first calls echo and then tee. Is there a way to redirect "on the fly"?
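tee itself writes as soon as data arrives; the delay usually comes from the first program buffering its stdout once it goes to a pipe instead of a terminal. GNU coreutils' stdbuf can force line buffering; a sketch (the command name is a placeholder):

Code:
stdbuf -oL ./long_running_command | tee test.txt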
I'm sure this is something simple, but I've googled all over and can't find the answer for the life of me. I've got an apache server and I need to redirect all requests to HTTPS in the same domain, except for 1 html file the load balancer hits and needs to get a 200 on. Can anyone point me to some documentation or show me what I need to add to the httpd.conf file to get it working properly?
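One common pattern, hedged since I don't know the exact setup, is a mod_rewrite block in httpd.conf (or the port-80 VirtualHost) that excludes the health-check page; the file name below is a placeholder:

Code:
RewriteEngine On
RewriteCond %{HTTPS} off
# don't redirect the page the load balancer polls for its 200
RewriteCond %{REQUEST_URI} !^/healthcheck\.html$
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This assumes mod_rewrite is loaded; a Redirect/RedirectMatch pair in the port-80 VirtualHost would be an alternative.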
I have a somewhat complex Makefile system. A parent Makefile calls dozens of Makefiles in subdirectories, and the subdirectory Makefiles call shell scripts to do the real building. I want to capture all the output this Makefile system generates, so I use "make 2>&1 > make.log", but not all output messages end up in make.log. The messages generated by the shell scripts called from the sub-makefiles are not recorded in make.log. And another curious thing is that if I launch "make 2>&1 > make.log" from a perl script, all output does get sent into make.log.
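The order of the redirections is the likely culprit: in "make 2>&1 > make.log", stderr is duplicated onto the terminal (the old stdout) before stdout is moved to the file, so anything the sub-makes and their shell scripts print on stderr never reaches the log, which would explain the missing messages if those scripts write to stderr. Putting the file redirection first captures both streams:

Code:
make > make.log 2>&1

# or watch it on screen and log it at the same time
make 2>&1 | tee make.log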