General :: How To Redirect The Output To Different Column In .csv File?
Apr 25, 2011
I was trying to redirect the output of two variables to different columns of a .csv file (opened in MS Excel) like this:
Code:
echo "$a $b" > abc.csv
But I am getting both $a and $b in the same column. Is there anything I can use instead of the space to move the value of $b to the next column? Or is there a better approach to do it?
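A .csv file separates columns with commas, so joining the two values with a comma (rather than a space) should put $b into the next spreadsheet column; a minimal sketch:
Code:
echo "$a,$b" > abc.csv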
I have a python script that when run outputs to screen.
e.g. ./international_sms_check.py 0403000511 919227434827 prints:
TS 21 check ok TS 22 check ok sms successfully delivered from 61403000511 to 919227434827
But when I try ./international_sms_check.py 0403000511 919227434827 > test, the file test is created but there is nothing in it. If I try ls > test, this works fine, with the output of ls redirected to the file test.
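One common cause is that the script writes its messages to stderr rather than stdout, in which case a plain > captures nothing; redirecting stderr as well is worth trying (if the script writes directly to the terminal via /dev/tty, even this won't help):
Code:
./international_sms_check.py 0403000511 919227434827 > test 2>&1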
I'm working on some scheduled task script files to keep nightly backups of some of our database information in place, and it's a bit annoying when they blow up. I know how to redirect stdout and stderr to a flat file I can view when I come in, and I know that 2>&1 maps them both to the same file (whatever was named in 1). However, I'm running into some cron-time situations where it's easier to have the two streams together, and other cron-time situations where it's easier to have them separated. I can't really tell which is going to happen; is there some way I could create both kinds of output file for my scripts, so that I've got a std_err only file and an interleaved std_out/std_err file?
Note: I've looked at the 'tee' command, but I don't think it will work for what I'm after. 'tee' appears to only work with stdout; I'm trying to work with stderr.
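One possible sketch, assuming the job runs under bash (process substitution is a bash feature, so the cron entry may need to invoke bash explicitly): tee can be fed stderr through a process substitution, giving a stderr-only file plus a combined file, though the interleaving in the combined file is only approximate because of buffering. The script name and log names here are placeholders:
Code:
#!/bin/bash
./nightly_backup.sh \
  > >(tee -a combined.log > /dev/null) \
  2> >(tee -a stderr_only.log >> combined.log)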
I am trying to grep multiple numbers from a file; grep does have the -f option for that.
Code:
grep -f <`seq 500 520` /etc/passwd
I know this could be done with
Code:
for i in `seq 500 520`; do grep "$i" /etc/passwd; done
But my question goes far beyond this example: is it possible to redirect one command's output so that it is treated as the content of a file for another command?
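Yes, process substitution does exactly that: <(command) makes the command's output appear as a file (a named pipe), so grep -f can read its pattern list from another command. A sketch:
Code:
grep -f <(seq 500 520) /etc/passwd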
I am again struggling to make a script work, but hey, it is fun, I am learning new things. I discovered the set -x option, which was, for me, like the second coming. Still, what I am not able to do is redirect ALL output to a (log) file, including what is produced by the -x setting. Let's assume a very simple script:
Code:
#!/bin/bash
set -x
source="/home/atelier/Bureau/"
ls -la $source
and I am running it as . test.sh >> /var/log/test.rmcb.log
The result of ls does indeed go into the log file, but the rest still shows on the console where I am running the script:
Code:
++ source=/home/atelier/Bureau/
++ ls --color=auto -la /home/atelier/Bureau/
Is there a way to redirect EVERYTHING to the log file?
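The trace lines produced by set -x go to stderr, not stdout, so they are not caught by >>. Redirecting stderr as well, either on the command line (. test.sh >> /var/log/test.rmcb.log 2>&1) or inside the script with exec, sends everything to the log; a sketch of the in-script variant:
Code:
#!/bin/bash
# send stdout and stderr (including the set -x trace) to the log
exec >> /var/log/test.rmcb.log 2>&1
set -x
source="/home/atelier/Bureau/"
ls -la "$source"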
I want to redirect the output of a command to a file, but not at the end of the file, but after a line. Do you know how can I do it?
Something like:
cat file_a | grep some_text >> resulting_file
# in this file I need to place the output from grep, not at the bottom of resulting_file as would normally happen, but after a given line, e.g. after line 3
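One way, assuming GNU sed: capture the grep matches in a temporary file and let sed's r command append that file's contents after line 3 (file names follow the example above):
Code:
grep some_text file_a > /tmp/matches.txt
sed -i '3r /tmp/matches.txt' resulting_file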
I'd like to redirect the output to a file and to the console. I know about tee, but the issue is that it waits until the first process finishes, e.g. echo "hello world" | tee test.txt first calls echo and then tee. Is there a way to redirect "on the fly"?
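When the left-hand command's output only shows up at the end, the usual culprit is block buffering: programs switch from line buffering to block buffering when stdout is a pipe rather than a terminal. Forcing line buffering with stdbuf (GNU coreutils) is one sketch; ./long_task is a placeholder for the real command:
Code:
stdbuf -oL ./long_task | tee test.txt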
I booted to command line only and entered the following command: sudo Xorg -configure > xorglog.txt
The command seems to run just fine and does create a new xorg.conf.new file, but I would like to see all the output of the Xorg -configure command; it just scrolls by too fast and I can't go back to see it. That is why I'm trying the > redirection, but it seems to be ignored.
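Xorg writes its diagnostics to stderr, so > alone (which only redirects stdout) appears to be ignored; redirecting stderr too, or paging the output, should work:
Code:
sudo Xorg -configure > xorglog.txt 2>&1
# or page through it instead of saving it:
sudo Xorg -configure 2>&1 | less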
I have got a script with an outer and inner loop. The inner loop issues loads of echo's which need to be redirected to a log file determined by the outer loop. The obvious solution is to redirect every echo to >$LOG and set LOG in the outer loop.
Code:
for f in $FILES ; do
  LOG=<logfile>
  for l in $LINES ; do
[code]....
Is it possible to map stdout to $LOG in the outer loop without having to redirect every subsequent individual command's output?
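Yes: a redirection placed after done applies to the whole inner loop, so every echo inside it goes to $LOG without touching the individual commands. A sketch with placeholder variables and a hypothetical log name derived from $f:
Code:
for f in $FILES; do
    LOG="/var/log/${f}.log"      # hypothetical per-file log name
    for l in $LINES; do
        echo "processing $l"
        echo "another message"
    done > "$LOG"                # one redirection covers the whole inner loop
done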
Is there one command that will let me record an entire terminal session (with any possible errors) to a text file while also seeing all output on screen too? I know it can be done for individual commands, but I'm looking to do this for an entire session where the individual commands will be normal (i.e., not piped into tee, etc.). It would be even better if the command prompt is captured too. The obvious utility of this makes me think someone surely has come up with a solution long ago (probably in the 60's).(I'm sure it goes without saying, but subsequent output in that session should be appended to the file. The file should contain the full history, with all output and errors, of the session.)
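The script(1) utility does exactly this: it records the whole interactive session, prompts, output and errors included, while still showing everything on screen; -a appends to an existing log:
Code:
script -a session.log
# ... work normally; everything displayed is also written to session.log
exit        # ends the recording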
What I need to do is to extract one complete column (the file size) from the output of ls -lS, but the fields are separated by a varying number of spaces: some rows have a single space before the size and others have two or three, because the sizes differ in width (30 bytes, 400 bytes, 4000 bytes). So when I extract the column using | cut -d ' ' -f5 I only get the value that is separated by a single space, i.e. 4000, because 400 is preceded by two spaces and 30 by three. How can I get the file size column from the ls output?
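awk splits fields on runs of whitespace, so the size column comes out the same no matter how much padding ls inserts; a sketch:
Code:
ls -lS | awk 'NR > 1 {print $5}'    # NR > 1 skips the "total ..." header line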
And I want to be able to pipe it to sort on that third column, by letter first, then number. But I keep getting files sorted like:
(field separations all start at same place, so columns are not jagged like above.)
I have read the sort man pages, and have tried -n for the numbers, and -k for the position to start sorting, among other things. I also tried inputting a second position to start sorting, which sort should supposedly refer to if the two entries are identical at the first place being compared, but it seems to just ignore the second one. I just can't get it to sort the numbers properly...
For now I am manually opening the file in emacs and changing them around, needless to say, very time consuming.
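A sketch assuming whitespace-separated fields whose third column is a single letter followed by digits (e.g. A12): give sort two keys on that field, the leading character compared as text and the rest compared numerically; GNU sort's version sort often achieves the same in one key (data.txt is a placeholder file name):
Code:
sort -k3.1,3.1 -k3.2,3n data.txt
# or, with GNU sort:
sort -k3,3V data.txt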
I'm running some commands on a remote server (using ssh to log on to the remote server; I did an ssh key swap). How do I redirect the output of a command back to the local server? The person who helps me out is my HERO; I'm really stuck on this and it would bring me a lot further if I get this to work.
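If the command is launched through ssh from the local machine, the redirection is performed by the local shell, so the output lands in a local file (remote_host and the command are placeholders):
Code:
ssh user@remote_host 'ls -l /var/log' > local_output.txt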
I have a process launched by another app, and I want to see its output (which goes to a console) in a terminal (gnome-terminal or tty); how can I capture the standard I/O of a process? My process (aria2) is launched by Firefox, and the output of ps is like:
...is running, but I can't see its output (download state). How can I capture or redirect its standard I/O to my terminal to get something like the output of:
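You can't retroactively re-point the stdout of an already-running process with plain shell redirection, but attaching strace to it and watching its write() calls on file descriptors 1 and 2 shows what it is printing; replace 1234 with the aria2 PID from ps (root or the owning user is needed to ptrace the process):
Code:
sudo strace -p 1234 -e trace=write -s 200 2>&1 | grep -E 'write\([12],'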
I am trying to automate an svnadmin dump command for a backup script, and I want to do something like this:
find /var/svn/* \( ! -name dir -prune \) -type d -exec svnadmin dump {} > {}.svn \;
This seems to work, in that it looks through each svn repository in /var/svn, and runs svnadmin dump on it.
However, the second {} in the -exec command doesn't get substituted with the name of the directory being processed; it basically just results in a single file named {}.svn.
I suspect that this is because the shell performs the > redirection itself before find ever runs, so the dump output of every repository ends up in the one file literally named {}.svn.
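One common workaround is to let a small sub-shell perform the redirection, so each directory found gets its own dump file; a sketch following the paths in the question:
Code:
find /var/svn/* \( ! -name dir -prune \) -type d \
    -exec sh -c 'svnadmin dump "$1" > "$1.svn"' _ {} \;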
I have a program that writes to stdout. Is there a way that I can redirect the output to the Linux diff command, or do I have to write the output to a file and then compare that? For example, I have a bunch of test input files for a program and the corresponding expected output in another set of files, and I'd like to do something like ./program < t1.input | diff t1.expected.
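Both of the following avoid the temporary file: diff can read one side from stdin (the -), or a process substitution can stand in for the file (file names follow the example in the question):
Code:
./program < t1.input | diff - t1.expected
# or, in bash:
diff <(./program < t1.input) t1.expected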
I have a file that stores employee login IDs, names, types, and permissions. Our software reads the information based on byte-columns, so it reads a column as any ASCII character (spaces, letters, numbers, punctuation, etc.). I want to create a web-interface for adding and removing users, and storing the data in a MySQL database. However, if I am creating the files from the MySQL output, I need a way to write to specific column locations in the file ...
User ID: columns 1-4
User Name: columns 6-30
Type: columns 32-40
Permissions: columns 42-45
I want to use a scripting language, preferably C-Shell, to call MySQL for the data and write the data to the correct columns of the file. I wrote a script that takes the data from the file, and dumps it into the MySQL table, so maybe I can pad the remaining space in the table column to fill with spaces ...
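A bash sketch rather than csh (printf's field widths make the fixed byte columns easy): each value is padded or truncated to its range, with a single space between fields. The query, database and column names are assumptions based on the description:
Code:
mysql -N -B -e 'SELECT id, name, type, perms FROM users' mydb |
while IFS=$'\t' read -r id name type perms; do
    # columns 1-4: ID, 6-30: name, 32-40: type, 42-45: permissions
    printf '%-4.4s %-25.25s %-9.9s %-4.4s\n' "$id" "$name" "$type" "$perms"
done > users.dat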