General :: Passing A List Of Variables To A For Loop
Mar 17, 2011
I'm trying to do substitutions to a file based on passed variables.
For example, I have a file called test.txt that has 5 lines:
What I want to do is to go through that file, line by line, and check each line for the presence of a passed variable. If there is a match, substitute and print; otherwise print the line as is. My problem is that the number of variables is, well, variable.
The code I started with was the following:
Code:
What I was hoping for was test.out to look like this:
What I get is a much longer file like this:
This makes sense after thinking about it, but is there any way to get an output like the first case?
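One way to get one output line per input line is to apply every substitution to the line before printing it, instead of printing once per variable. A minimal sketch, assuming each passed argument has the form OLD=NEW (that argument format is an assumption, not something from the original post):
Code:
#!/bin/bash
# Apply all substitutions to one copy of each line, so the output has
# the same number of lines as the input.
while IFS= read -r line; do
    for pair in "$@"; do
        old=${pair%%=*}        # text to look for
        new=${pair#*=}         # replacement text
        line=${line//$old/$new}
    done
    printf '%s\n' "$line"
done < test.txt > test.out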
Is there a command that can list the variables that I am using in a script? I mean the variables that I created in the script, not the environment or local variables. For example, if I have a script that has variables like name=Alex, age=20, postal_code=12345, how can I list them all at once WITHOUT using echo $name, $age and so on? Imagine I have a lot of variables and I can't echo them all.
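One approach that might help is to snapshot the existing variable names before the script defines its own, then compare afterwards. A rough sketch (compgen -v and comm are standard; the example variables are the ones from the question):
Code:
#!/bin/bash
# Record which variable names already exist.
before=$(compgen -v | sort)

name=Alex
age=20
postal_code=12345

# Print only names added since the snapshot (the bookkeeping variable
# "before" shows up too, so filter it out).
comm -13 <(printf '%s\n' "$before") <(compgen -v | sort) |
grep -vx 'before' |
while read -r var; do
    printf '%s=%s\n' "$var" "${!var}"
done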
I am trying to rename a list of variables in my script using a second list of variables. I want each variable in List 1 to be renamed after the variable in the same position in List 2: the first after the first, the second after the second, the third after the third, and so on.
For example:
I know how to rename each file individually, but I would like to run a do loop that can rename all my output files at once.
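A minimal sketch of one way to do this, assuming the old and new names are held in two bash arrays of equal length (the array names and file names below are made up for illustration):
Code:
#!/bin/bash
list1=(out1.dat out2.dat out3.dat)      # current names
list2=(alpha.dat beta.dat gamma.dat)    # desired names

# Rename each file in list1 to the name at the same index in list2.
for i in "${!list1[@]}"; do
    mv -- "${list1[$i]}" "${list2[$i]}"
done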
I have a bash variable where the content looks like this where ;f1; and ;f2; are delimiters: ;f1;field1value1;f2;field2 value1 ;f1;field1value2;f2;field2 value2 ;f1;field1value3;f2;field2 value3
So what I need is to extract each combination of f1 and f2 and put it into variables in a loop, something like this:
#first pass of the loop I need: f1=field1value1 f2=field2 value1
#second pass of the loop I need: f1=field1value2 f2=field2 value2
# third pass of the loop I need: f1=field1value3 f2=field2 value3
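A sketch of one way to parse it, assuming the values themselves never contain the ;f1; or ;f2; delimiters:
Code:
#!/bin/bash
data=';f1;field1value1;f2;field2 value1 ;f1;field1value2;f2;field2 value2 ;f1;field1value3;f2;field2 value3'

# Turn every ";f1;" into a newline so each record sits on its own line,
# drop the empty leading line, then split each record on ";f2;".
echo "$data" | sed 's/;f1;/\n/g' | sed '/^[[:space:]]*$/d' |
while IFS= read -r record; do
    f1=${record%%;f2;*}
    f2=${record#*;f2;}
    echo "f1=$f1  f2=$f2"
done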
I have set up pure-authd and pure-ftpd. They are both running, I have created the socket etc.
In my authentication module (a PHP script), for testing purposes I did a var_dump into a file, and realised that pure-authd is not passing any variables (the username and password of the person currently trying to log in via FTP) to the PHP script.
I am sure the authentication module itself works (I have tested it vigorously on the command line), but after 10 hours of wondering why it wouldn't work and messing about with the script, I realised that the variables were never even getting into the script in the first place!
I am running the processes like this:
Everything else seems to work. For instance, when testing the setup with a very basic auth module which doesn't require a username or password (the basic module just passes "auth_ok:1" to pure-ftpd and the user is then logged in), I can log in to the FTP server fine.
But like I say, a var_dump($argv) in my proper PHP authentication script suggests that no username or password is being passed to it.
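It may be worth checking how the credentials arrive: as far as I recall, pure-authd hands them to the authentication script through environment variables (names along the lines of AUTHD_ACCOUNT and AUTHD_PASSWORD) rather than as command-line arguments, which would explain an empty $argv. A throwaway shell module like the sketch below (the log path and the variable-name prefix are assumptions) can show exactly what the daemon provides:
Code:
#!/bin/sh
# Temporary auth module: log everything pure-authd gives us.
{
    echo "---- $(date) ----"
    echo "args: $*"
    env | grep '^AUTHD_'
} >> /tmp/authd-debug.log

# Refuse the login while debugging.
echo 'auth_ok:0'
echo 'end'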
"While ; do ; done" is very convenient for SH coding. However sometimes you may be annoyed by your computed variable within the "while do done" type loop. What to do how to pass it out of the loop to the outside of the bash code? A solution is to write it into the /tmp or on the disk... and to call it back after. - not elegant... really not... Anyone would know a trick another alternative that would look nicer?
Code:
# Count file total size
TOTAL_SIZE=0
LISTOFFILES=`cat "$HOME/.fvwmoscfg/fvwmburnerlist.lst"`
echo "$LISTOFFILES" | while read i ; do
    SIZE=`du -bs $i | cut -f 1`
    TOTAL_SIZE=`expr $SIZE + $TOTAL_SIZE`
    echo "$TOTAL_SIZE" > "$HOME/.fvwmoscfg/fvwmburnerlisttotalsize.lst"
done
TOTAL_SIZE=`cat $HOME/.fvwmoscfg/fvwmburnerlisttotalsize.lst`
echo "The total size of all files and folders is : $TOTAL_SIZE"
I have a C header file which contains arrays of a predefined (known) structure type, but I don't have the names of the arrays or their sizes. When I include that file and compile my application, I want to discover the names and sizes of those arrays.
The purpose of the application is to read the contents of those arrays and explain them in descriptive words instead of hex numbers. Of course this could be done with file pointers and reading, without including the header file at all, but since I am working in C, once the header file is included and the program is compiled, those arrays are in my address space.
mkvmerge -o <filename without extension>_TV.mkv -S <filename> && mkvextract tracks <filename> 3:<filename without extension>.*** && perl /home/brian/Desktop/ass2srt.pl <filename without extension>.*** && rm <filename without extension>.***
Doing these commands for multiple command-line file inputs is the goal, so I can just type ./script.sh *.mkv in my terminal. This is what I have so far, but it doesn't work whatsoever.
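A rough sketch of a loop over the files given on the command line. The subtitle track number 3 and the ass2srt.pl path are taken from the original command; the .ass extension is an assumption (the original post only shows .***):
Code:
#!/bin/bash
for f in "$@"; do
    base=${f%.*}    # filename without extension
    mkvmerge -o "${base}_TV.mkv" -S "$f" && \
    mkvextract tracks "$f" 3:"${base}.ass" && \
    perl /home/brian/Desktop/ass2srt.pl "${base}.ass" && \
    rm "${base}.ass"
done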
I am new to bash scripting. I want to know whether I can pass one variable to another. For example, $1 represents argument 1. Now, if I store the argument number in a variable like USER="1", I want $ of $USER to give me the value of $1. What should I do?
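What is being described sounds like indirect expansion. A minimal sketch:
Code:
#!/bin/bash
# USER holds the *name* of another variable (here the position "1"),
# and ${!USER} expands to that variable's value.
USER="1"
echo "${!USER}"        # prints the value of $1

# The older, more fragile alternative is eval:
# eval echo "\$$USER"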
In short: how do I make sudo stop flushing PATH every time?
I have some websites deployed on my server (Debian testing) written with Ruby on Rails. I use Mongrel+Nginx to host them, but there is one problem that comes when I need to restart Mongrel (e.g. after making some changes).
All sites are checked into a VCS (git, but that is not important) and have their owner and group set to my user, whereas Mongrel runs under the, huh, mongrel user, which is severely restricted in its rights. So Mongrel must be started as root (it can automatically change UID) or as mongrel.
To manage Mongrel I use the mongrel_cluster gem because it allows starting or stopping any number of Mongrel servers with just one command. But it needs the directory /var/lib/gems/1.8/bin to be in PATH; it is not enough to start it with an absolute path.
Modifying PATH in root's .bashrc changed nothing, and tweaking sudo's env_reset and env_keep didn't help either.
So the question: how to add a directory to PATH or keep user's PATH in sudo?
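Two common ways to get a directory into sudo's PATH, as a sketch (edit /etc/sudoers with visudo; the mongrel_rails call at the end is only an example invocation):
Code:
# 1) If sudoers uses secure_path, append the gem bin directory to it:
#    Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin"

# 2) Or pass an explicit PATH for a single command via env:
sudo env PATH="$PATH:/var/lib/gems/1.8/bin" mongrel_rails cluster::restart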
I'm using gdb to debug my program. My program requires arguments (e.g., ./prog -dfile). But if I invoke it as gdb ./prog -dfile, gdb tries to interpret the -d argument itself. How do I pass an argument to my program via gdb?
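A couple of standard ways to hand the arguments to the program rather than to gdb:
Code:
gdb --args ./prog -dfile     # pass the arguments on gdb's command line
# or, from inside gdb:
#   (gdb) run -dfile
# or set them once and reuse:
#   (gdb) set args -dfile
#   (gdb) run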
So I'm trying to teach myself to write programs for UNIX in C. I am currently creating a program, and I need to pass a struct through a socket.
The struct I want to pass has two types in it, one enum and one union of two other structs. These two other structs each contain an int and a char variablename[256] array.
gcc won't let me just pass the struct using write(pipefd[1], struct, size_of_struct), since the struct is not a char buffer. So that's my question: how does one go about passing a struct?
Is there any way I can pass commands to the CLI of a tool directly?
I would like to script some actions, for example:
./OpenBTS < "tmsis"
I do not need to retrieve the results (I watch them in the log file). How could I realize that? There is no way to do this using command-line parameters, at least not that I found. So it looks like I have to figure something out myself. Maybe I could automate screen in a way that detects the prompt and "pastes" my command there. Are there tools for this on Linux?
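If the tool runs inside a detached screen session, screen's "stuff" command can type into it non-interactively; otherwise piping into stdin sometimes works. A sketch (the session name "openbts" is an assumption):
Code:
# Send the command into a running screen session:
screen -S openbts -p 0 -X stuff $'tmsis\n'

# Or, if the tool simply reads commands from stdin:
echo 'tmsis' | ./OpenBTS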
I am writing a script to get the hardware information of a particular UNIX machine. To do this, I FTP a shell script (with the commands to gather hardware information) to the target machine and then use SSH to run the remote script. With FTP, I can pass a password accepted as input by the shell script. How can I pass the same password to SSH? I do not want the user to enter the password twice.
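One option, sketched below, is to read the password once and reuse it; sshpass is one tool that can feed a password to ssh non-interactively (key-based authentication is the cleaner alternative). The user, host, and remote script path here are made up for illustration:
Code:
#!/bin/bash
read -rs -p "Password: " PASS; echo

# ... run the FTP transfer here using $PASS as before ...

sshpass -p "$PASS" ssh user@target 'sh /tmp/hwinfo.sh'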
The script receives multiple files as parameters and it is supposed to count the number of lines in each of them and write that number in another file.
This is my script:
Code:
while [ -n "$1" ] do lines=`cat $1 | wc -l` echo "The number of lines in file $1 is $lines." >> lines.txt shift done
Is there any other way to do the same thing, without using shift?
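One alternative, sketched here, is to loop over the positional parameters directly, which avoids shift altogether:
Code:
#!/bin/bash
for f in "$@"; do
    lines=$(wc -l < "$f")
    echo "The number of lines in file $f is $lines." >> lines.txt
done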
Problem: I need a method to maintain the $i variable. In fact, this variable gets lost when the code is executed. I think that an escape could preserve the variable and allow it to be expanded inside the function, but I have no idea how.
I wrote a simple bash script to let me treat any set of programs like a daemon. For example, if I configure the script a certain way, I can start/stop/get the status of apache, mysql and php all from one command. I am having a bit of a problem, though. I am passing commands as strings to a function, and then, depending on the arguments to the script, it might run one of these commands or another. Some of these commands need to be run in the background, though, such as deluge-web. When I send "deluge-web &" to the function and it executes it, deluge-web does not start in the background. I can't figure out why this is. I have tried escaping the & with single quotes and with a backslash, but nothing seems to work. I know that this is some idiotic thing that I am overlooking, but I am a bit stumped. Here is the script configured to start/stop/get the status of deluged and deluge-web.
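A likely explanation, sketched below: when a command stored in a string is run as $cmd, the & is handed to the program as a literal argument instead of being parsed by the shell, so nothing is backgrounded. Re-parsing the string with eval makes the ampersand work again (deluge-web is the example from the question; the function name is made up):
Code:
#!/bin/bash
start_cmd() {
    # eval re-parses its argument, so "&" backgrounds the command.
    eval "$1"
}

start_cmd "deluge-web &"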
I've read a few books and a lot of tutorials on C but can't find this topic explained in a deliberate way. I can find bits and pieces, but nothing thorough.
I am running Red Hat Enterprise Linux 5; I always use the export command to set environment variables. Are there any other ways to set environment variables, and what are their advantages/disadvantages?
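A few other common ways, sketched briefly (the file names listed are the usual ones on Red Hat-style systems):
Code:
# Set a variable only for the environment of one command:
LANG=C sort file.txt

# declare -x is equivalent to export inside bash:
declare -x MYVAR="value"

# Per-user or system-wide, picked up at login (edit and re-login,
# or "source" the file): ~/.bash_profile, ~/.bashrc, /etc/profile,
# /etc/environment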