How can I take a series of values from the output of multiple grep commands and append them as a single row of a CSV file? I'm working in a Linux environment. The values from the grep output will be numeric.
Output should look like:
Each of these values will be obtained from a grep command piped into wc -l. Is it possible to update a single row of a CSV file? If so, please help me with the command to redirect the output into the CSV file.
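A minimal sketch of one way to do this, assuming hypothetical file names and grep patterns; each count goes into a shell variable and the variables are then printed as one comma-separated line appended to the CSV:
Code:
#!/bin/bash
# Hypothetical patterns and log file -- substitute your own grep commands.
count1=$(grep "ERROR" app.log | wc -l)
count2=$(grep "WARNING" app.log | wc -l)
count3=$(grep "OK" app.log | wc -l)
# Append all three numeric values as a single CSV row.
echo "$count1,$count2,$count3" >> results.csv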
I have a GPS device which sends data on port 5000, and I am able to capture the data into a pcap file using tcpdump. My problem is that I need to pipe the data into a text file continuously, as and when data arrives in the pcap file. I did an extensive search, but to no avail; I've been trying to solve this for the past three days. I use the following commands to capture and pipe the data, but that happens only once, when I issue the command. I want this to happen continuously, as and when the data arrives.
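One possible sketch of a continuous capture, relying on tcpdump's -l (line-buffered) and -A (print payload as ASCII) options; the interface name eth0 is an assumption:
Code:
# Runs until interrupted; tee appends the decoded payload to gps_data.txt
# as packets arrive, instead of capturing once and stopping.
tcpdump -i eth0 -l -A port 5000 | tee -a gps_data.txt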
I've searched high and low and can't seem to find a solution to this. I'm running Ubuntu 10.10 on a Dell Inspiron with an HDMI output to the TV. I want HDMI sound output for VLC, but I also want S/PDIF output (to the stereo) for Musicplayer. I can test and use the HDMI device in System > Preferences > Sound, but when I try to do the same for the internal card and click 'Test Speakers', the sound program closes itself. When the machine was a Windows machine, it had PowerDVD outputting to the TV and Mediaplayer outputting to the stereo. I'm aiming for a similar setup in Ubuntu.
I've searched and searched and can't find a straight answer about this. I want to send the same signal out of the digital output and one of the analog outputs on the soundcard (Intel HDA) on my motherboard. I'm using ALSA and PulseAudio.
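With PulseAudio, a combined sink that duplicates one stream to both outputs is the usual approach; a sketch, where the sink names are placeholders you would look up first (on older PulseAudio releases the module is called module-combine rather than module-combine-sink):
Code:
# Find the exact names of the digital and analog sinks:
pactl list short sinks
# Create a combined sink that copies its input to both of them:
pactl load-module module-combine-sink sink_name=both slaves=<digital_sink>,<analog_sink>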
I have a requirement like this: cut the characters from each line of a file at the following positions: 21-24, 25-34, 111-120. These fields then need to be placed in a tab-delimited output file. Currently this is how I am achieving it:
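For comparison, a single awk call can produce the same tab-delimited output (a sketch; input and output file names are placeholders, and positions 21-24, 25-34, 111-120 correspond to lengths 4, 10 and 10):
Code:
awk '{ print substr($0,21,4) "\t" substr($0,25,10) "\t" substr($0,111,10) }' input.txt > output.txt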
I have a network of 20 machines, all running Ubuntu 10.04.
Each machine has about 200[GB] of data that I'd like to share with all other 19 machines for READ ONLY PURPOSES. The reading should be done in the FASTEST POSSIBLE WAY.
A friend told me to look into setting up HTTP / FTP. Is that indeed the optimal way to share data between the machines (better than NFS)? If so, how do I go about it?
UPDATE: Just to clarify, all I want is to be able (from within machine X) to access one of machine Y's files and LOAD IT INTO MEMORY. All of the files are of uniform size (500 [KB]). Which method is fastest (SAMBA / NFS / HTTP / FTP)?
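A read-only NFS export is often the simplest baseline to benchmark against; a sketch, assuming hypothetical paths and a 192.168.0.0/24 LAN:
Code:
# On machine Y, in /etc/exports:
/data 192.168.0.0/24(ro,async,no_subtree_check)

# Then, still on machine Y, reload the export table:
exportfs -ra

# On machine X:
mount -t nfs machineY:/data /mnt/machineY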
I've got an external hard drive with one large data partition on it. I also have four computers to connect it to (individually, not at the same time). Three machines are running Slackware and one is running Ubuntu 9.10. I need to be able to just plug the drive into whichever machine, mount it (preferably to the same location each time) and not have to worry about user permissions and such. Do I just chmod 777 all the files and folders or is there a better method for different 'users' to access the same partition? And how about mounting to the same location each time?
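For mounting at the same location every time, one sketch is to label the partition once and refer to the label in each machine's /etc/fstab (assuming the data partition shows up as /dev/sdb1 and is ext3):
Code:
# Run once, on any machine, to label the partition:
e2label /dev/sdb1 extdata

# Line for /etc/fstab on each of the four machines:
LABEL=extdata  /mnt/extdata  ext3  noauto,users,rw  0  0

# After that, any user can mount it with:
mount /mnt/extdata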
Now for the second part of my question, which I'm pretty sure isn't possible, but just in case: is there any way to encrypt the information safely and make it compatible with a Windows XP machine?
I have a file which contains the data I retrieved through prstat, and an array that contains all the unique process IDs from that file. I want to compare each line in the file against each element of the array so that I can create a separate file for each value in the array.
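A sketch of one way to do this in bash, assuming the array already holds the PIDs and that the PID is the first field of each prstat line (both of which are assumptions):
Code:
#!/bin/bash
pids=(1234 5678 9012)            # placeholder values
for pid in "${pids[@]}"; do
    # Write every line whose first field matches this PID to its own file.
    awk -v p="$pid" '$1 == p' prstat_output.txt > "prstat_${pid}.txt"
done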
Kernel 2.6, Slackware 12.0, mkisofs 2.01. If I do 'ls --help | more', all is well. 'mkisofs --help' outputs its help screen, and I can use Shift+PgUp/PgDn to scroll through it. But I can neither pipe it to more or less, nor redirect it to a file. With more, the paging is simply ignored. With less, I get into less, but only the last screenful is shown. Redirection, i.e. 'mkisofs --help > john.txt', produces an empty file (size 0).
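Those symptoms usually mean the program writes its help text to stderr rather than stdout, so plain '>' or '|' (which only act on stdout) see nothing; worth trying:
Code:
mkisofs --help 2>&1 | less
mkisofs --help > john.txt 2>&1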
I want to attach an analog camera to an old Linux computer and directly pipe /dev/video0 to another computer, where I can use it as a device again (so /dev/video0 should go to /dev/remote0, for example).
(Reason for doing this is that the computer does not have enough power to encode the video)
Is that possible? I've seen that people can pipe the data directly from the device over ssh into mplayer, but I need to have some sort of reference point for Zoneminder.
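A very rough sketch of the idea, assuming the v4l2loopback kernel module is available on the receiving machine to provide the fake video device (host name, port and device numbers are placeholders; in practice a tool such as ffmpeg or GStreamer is normally needed to negotiate the pixel format on the loopback device, so treat this only as a starting point):
Code:
# On the receiving machine: create a loopback device (e.g. /dev/video1) and listen:
modprobe v4l2loopback
nc -l 5000 > /dev/video1

# On the camera machine: push the raw device data across the network:
cat /dev/video0 | nc receiver-host 5000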
I am sure this has been covered before; however, I do not know which terms to use when searching for this, so I will try to explain it.
I have a program that I run at startup to connect me to my work VPN, specifically the Cisco VPN client. When running the program, it prompts me for my username and password. I would like to automate the login process by piping the username and password into the program every time it starts up (the username and password cannot be passed as arguments to the program).
Something like echo username | echo password | vpn_script
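Chained echos won't work, because the second echo ignores its stdin. If vpn_script simply reads the username and then the password from standard input (an assumption; the Cisco client may insist on a real terminal, in which case an expect script is the usual fallback), something like this is the idea:
Code:
printf '%s\n%s\n' 'myusername' 'mypassword' | vpn_script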
The serial console is for debugging and will physically disappear when the product is mature. However, there are many background processes that may print out statuses/results; these go to /dev/console, i.e. the serial console. Telnet will be the only way to get a console. I tried netconsole (with netcat) and it works, but it only carries kernel printk messages. I also tried "program > /dev/pts/0" and that works as well. It would be better if I could just change or add /dev/pts/0 as a console alongside the existing /dev/console.
I'm on Ubuntu 11.04. I have read around about how to use curl to download a list of URLs from a text file, and everyone says to use Code: curl -K URLlist.txt. This is what the curl man page says as well. However, even for a simple file with one URL, this command outputs a bunch of weird symbols for me instead of downloading the file. For example, I have a text file "test.txt" with one line in the following format:
Code: url = "http://www.example.com/image.jpg"
I use the curl command to download this file:
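The 'weird symbols' are most likely the image's binary contents being written to the terminal: -K only tells curl where to read its options from, not where to save the download. Adding a save instruction to the config file (curl config files take long option names, one per line) should fix it; a sketch:
Code:
# test.txt -- one URL plus an instruction to save it under its remote name
url = "http://www.example.com/image.jpg"
remote-name

# then:
curl -K test.txt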
I have a folder which includes a bunch of folders, each having data files in them. (Folder A has folders F1, F2, F3, ..., F1000 in it, and F1, F2, F3, ... each have about 10 different files named FILE1, FILE2, FILE3, ... in them.)
I am interested in FILE1 of each folder, because that contains the data I need. More specifically, those FILE1s have a line "ANSWER=..." in them, and I need to get the value of ANSWER from each file. Doing it by hand is too hard, so I need to write a script that will scan all the folders and give me a list of the values of each ANSWER.
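A sketch of such a script, assuming the layout is FolderA/F*/FILE1 and the lines look exactly like ANSWER=value:
Code:
#!/bin/bash
for f in FolderA/F*/FILE1; do
    # Print the folder name alongside the value so you know where it came from.
    value=$(grep -m1 '^ANSWER=' "$f" | cut -d= -f2)
    echo "$(dirname "$f") $value"
done > answers.txt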
I want to create multiple CDs containing the data of one tar.gz file. Of course I can use dd to cut the tar.gz into 700 MB pieces, but then I must calculate the exact location where to cut and use the skip, seek and count parameters to proceed. Does anybody here know of software that automatically creates 700 MB parts of a single file?
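The standard tool for this is split, which does the size arithmetic for you; a sketch with a placeholder file name:
Code:
# Produces backup.tar.gz.part_aa, backup.tar.gz.part_ab, ... of 700 MB each:
split -b 700m backup.tar.gz backup.tar.gz.part_

# To restore, concatenate the pieces back together in order:
cat backup.tar.gz.part_* > backup.tar.gz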
Viewing any page source on the Web results in partial loading of data followed by multiple broken-up lines. It is occurring right now with this post. Paging down, I get three lines of "submit new thread" until, at the bottom, there are multiple blank lines (no characters) after "forum rules". I put it to my network provider, who cannot come up with any idea of what may cause it. It cannot be the hardware, as the same condition exists on two PCs. Both are on 11.2.
What are the advantages of multiple-partition setups, other than resistance to data loss in crashes? Is there any reason to have a special partition just for your boot directory (kernel files and config) other than surviving a major crash?
Also, is it possible to make the Debian installer accept an existing set of partitions? Or even to alter the size of the automatically created partitions? Does expert mode let you control the partitions? And how many other very detailed things would I have to know in order to use expert mode?
I'm faced with disk-bound issues on an FTP server with high traffic. I would like to set up multiple FTP server nodes with dedicated storage for each node, where all FTP access is managed by a master FTP server. So a user would FTP to a single externally visible IP address for the master FTP server and then get routed to the appropriate FTP node. Are multiple FTP nodes required, or is there a better way of doing this? Perhaps only one FTP server is required, and each node would then serve as a separate file server.
I have my .procmailrc file set up to pipe mail to a simple php script I've written. The only thing the script does at this point is echo back a "hello" message. However, procmail does not execute the script properly.
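For reference, a minimal recipe of the shape that usually works with a PHP handler (paths are placeholders; calling the php binary explicitly avoids depending on the script's execute bit or shebang line):
Code:
# ~/.procmailrc
:0
| /usr/bin/php -q /home/user/mailhandler.php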
I have a bash script that I want to port to Python, mainly just to see if I can or not. However, in the script I do use some piping of commands into sed to trim the output down to what I need. When I tried doing it with the os.system() call, it didn't work. The exact error is
I'm new to Ubuntu and shell scripting in general... Currently I have stored some data in a text file. Now I would like to read the data from the text file and store it in a variable. Here's what I have so far:
This definitely works, and READ_FILE has the necessary data. However, this command also triggers output to standard output and I see the data on the screen, which is not what I want. I tried:
cat $FILE_NAME | $READ_FILE
and various other variants of this. It does not print to standard output, but nothing gets stored in $READ_FILE either. I tried:
cat $FILE_NAME >> $READ_FILE
and that produced an "ambiguous redirect" error.
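Command substitution is the usual way to capture a file's contents into a variable without anything being echoed to the screen; a sketch:
Code:
READ_FILE=$(cat "$FILE_NAME")
# or, without the extra cat process:
READ_FILE=$(<"$FILE_NAME")
echo "$READ_FILE"    # the data only appears when you explicitly print it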
I'm trying to pipe from a text file to sendmail. The command I'm using on the sendmail server is:

[root@sendmail-server test]# sendmail to-email-address@relay_server-address < test2.txt

I'm doing this because I was doing it from an aliases file just fine until about three weeks ago. The aliases file suddenly stopped working after the relay server received an inordinate amount of email from the From: address and for the To: address.