For example, I have a text file that lists numerical values from two separate individuals:
Person A 100 200 300 400 500 600 700 800 900 1000 1100 1200
Person B 1200 1100 1000 900 800 700 600 500 400 300 200 100
How would I go about reading the values for each person and then performing calculations on them (finding each person's sum, for example)?
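One way to do this (a sketch, assuming the data sits in a file called data.txt with one person per line, as above) is to let awk sum the numeric fields of each line:

awk '/^Person/ { sum = 0; for (i = 3; i <= NF; i++) sum += $i; print $1, $2, "sum =", sum }' data.txt

The name is in fields 1 and 2, the values in fields 3 onwards; averages, maxima and so on follow the same pattern inside the loop.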
I want Samba to keep the Windows attributes exactly as the user set them in Windows. I mean, if a file is read-only on the Windows box and it is copied to the Samba share, Samba should keep it read-only, and the same for the other attributes. It does not do that now with my configuration:
[global]
    workgroup = DOMAIN
    server string = File Server
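The snippet above does not ask Samba to preserve DOS attributes at all. A sketch of what is usually added (assuming a reasonably recent Samba and a filesystem with extended-attribute support; verify the parameters against your version's smb.conf man page):

[global]
    workgroup = DOMAIN
    server string = File Server
    # keep the DOS read-only/hidden/system/archive bits by storing them
    # in extended attributes (the filesystem must support xattrs)
    store dos attributes = yes
    # older-style fallback that maps the bits onto Unix permission bits,
    # useful on filesystems without xattr support
    map readonly = yes
    map archive = yes
    map hidden = yes
    map system = yes

After changing smb.conf, reload Samba (for example with smbcontrol all reload-config, or by restarting the smbd service) and re-copy a read-only file to test.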
I have several text files that each list hundreds or thousands of words. I need to find the intersection of these sets (i.e. print only lines that occur in every file). Is there a CLI utility that can do this?
Until now I haven't had to dabble with bash scripts.
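comm can intersect two sorted files at a time; for several files a short loop over grep does it (a sketch, with example file names; -F matches fixed strings and -x whole lines):

cp file1.txt common.txt
for f in file2.txt file3.txt file4.txt; do
    grep -Fxf "$f" common.txt > common.tmp && mv common.tmp common.txt
done
cat common.txt

Each pass keeps only the lines of the running result that also appear in the next file, so what is left at the end occurs in every file.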
I have a program that reads in data files. These are named datafile01_R, datafile01_G, datafile01_B, and they then increment (datafile02_R, etc.); I have about 600 of these. The program reads in three data sets per run, i.e. the _R, _G, and _B files for 01.
The program then does its magic and outputs about 40 different files which, depending on the file, go to folders named R, G, B, psa, or tracking.
The program itself has configuration files that say where the output files should go when analysed, and there are also config files that tell it which data sets to read in.
At the moment I have to run one set of data, then go in and manually change the input file location, and run again. But even though it is a different data set, the new set overwrites the old set in one of the output folders. So I need a way to increment the output filenames after they are written and before the program is run again with the new data set.
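A hedged sketch of the renaming step (the directory names come from the description; the run number and file selection are assumptions to adapt): after each run, move whatever the program just wrote into a numbered subfolder so the next run cannot overwrite it:

run=01    # set this to the data set that was just processed
for dir in R G B psa tracking; do
    mkdir -p "$dir/run_$run"
    # move only the files sitting directly in the folder, not earlier runs
    find "$dir" -maxdepth 1 -type f -exec mv {} "$dir/run_$run/" \;
done

Wrapping this, the edit of the program's input config, and the program call itself in one loop over 01..600 would automate the whole sequence.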
Is there a way, besides writing a Perl program, to read file A line by line and tell whether each line also exists in file B? Can this be done via a shell script?
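Yes; a plain bash loop around grep is enough (a sketch; fileA and fileB are placeholders, -F treats the line as a fixed string, -x matches the whole line, -q suppresses output):

while IFS= read -r line; do
    if grep -Fxq -- "$line" fileB; then
        echo "in both:   $line"
    else
        echo "only in A: $line"
    fi
done < fileA

If you only need the common lines rather than a per-line report, grep -Fxf fileA fileB does it in one call.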
Is there any way to delete certain paragraphs from a text file and then insert them into another text file? I just cannot figure out how to remove the specific lines from the first file and then insert them into the new file at a certain line. Thanks again.
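A sketch with sed (the line numbers and file names are made up; adjust them to the real block): copy the block out, delete it from the source, then splice it into the destination with sed's r command:

sed -n '10,20p' source.txt > block.txt     # copy lines 10-20
sed -i '10,20d' source.txt                 # remove them from the source
sed -i '5r block.txt' dest.txt             # insert the block after line 5
rm block.txt

The r command appends the named file's contents after the addressed line, which is what gives you the insert at a certain line in the new file.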
I tried to modify the bashrc and environment files in the /etc directory, and now none of the commands work, such as sudo, gedit, nautilus, nano and some others! Now I want to edit the two files and delete the lines I inserted.
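If what broke is the PATH set in /etc/environment, the commands themselves are still installed; a hedged workaround is to export a sane PATH for the current shell, or to call the tools by full path (the locations below are the usual Ubuntu ones, not guaranteed), and then remove the added lines:

# temporary PATH for this shell only
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# or call the tools by absolute path
/usr/bin/sudo /bin/nano /etc/environment
/usr/bin/sudo /bin/nano /etc/bash.bashrc

After deleting the inserted lines, log out and back in and the normal commands should resolve again.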
Need to make a script to append a line to the bottom of multiple files (only certain files, but hundreds spread over directories). Doing a find-and-replace inside multiple files is easy; I use the following:
find /base/dir -name "*.txt" -exec perl -pi -w -e 's/FIND/REPLACE/g;' {} \;
So I tried doing the following:
find /base/dir -name "*.txt" -exec echo "Append this" >> {} \;
However, this just appends all the text into a file called "{}", whereas {} should be replaced with each file that's found.
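The redirection is performed by the shell that launches find, before find ever substitutes {}, which is why everything lands in a literal file named {}. Wrapping the append in a small shell makes the redirection happen once per file (this uses only standard find/sh behaviour):

find /base/dir -name "*.txt" -exec sh -c 'echo "Append this" >> "$1"' sh {} \;

Here "$1" is the filename that find found, and the bare sh is just a placeholder for $0.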
I've been trying to sort this out for several hours and I'm totally lost. I've been searching around, but haven't found the solution to my problem. I have a directory with 100 files. I need to copy about 10 lines of each file (let's say from line 45 to 55) into one single file. So I guess I could use sed's w command, but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) to concatenate them later on, but I only got 1 file, not 100.
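A sketch that avoids the 100 intermediate files entirely (the *.txt glob, the 45-55 range from your description, and the output name are placeholders):

for f in /path/to/dir/*.txt; do
    sed -n '45,55p' "$f" >> all_lines.txt   # append lines 45-55 of each file
done

The >> append is what makes everything accumulate into the one unique file; a plain > inside the loop would keep overwriting, leaving only the last file's lines, which sounds like what happened.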
I've just installed Kubuntu 11.04 and switched on the wobbly windows effect. It runs very smoothly on my Nvidia GeForce 7600 GS with dual-screen TwinView turned on. However, I get these lines when I drag/move a window upwards - see screenshot:
I am facing a problem while splitting a text file. I need to split a file into several parts, each with 2000 lines. When I do it with the "split" command the mother file is kept intact, but as per my requirement the mother file should be cut into parts, not kept intact.
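split never touches its input, so the usual approach is simply to remove the original after a successful split (the file name and the part_ prefix are examples):

split -l 2000 bigfile.txt part_ && rm bigfile.txt

The && makes sure the mother file is only deleted if split succeeded, leaving part_aa, part_ab, ... as the 2000-line pieces.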
My server was hit with an injection script which has placed code across many of my clients' files. I need a script that can remove a block of PHP code that spans multiple lines and multiple directories/files and is dynamic, meaning that part of the code changes. I think find/sed is what I need, but I cannot seem to figure out how to get it to work. The following is the script that is being injected everywhere. The catch is that they have generated dynamic code at the start/end of the script (I have commented the parts that change dynamically on EVERY instance). PLEASE NOTE: directly following this script is the start of a valid PHP script that I do not want to delete.
<?php //{{65281980 - DYNAMIC!! GLOBAL $alreadyxxx;
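A heavily hedged sketch of the find/sed combination (the search path and, above all, the end-of-block pattern are assumptions; open one infected file, note how the injected block really ends, adjust the second address, and drop -i for a dry run before trusting it):

find /var/www -name '*.php' -exec sed -i.bak '/<?php \/\/{{[0-9]\+/,/\/\/}}[0-9]\+/d' {} \;

This deletes whole lines from the dynamic start marker to the assumed matching end marker and keeps a .bak copy of every file. Because the delete is line-based, if the legitimate PHP that follows sits on the same physical line as the injected block it would be removed too, so spot-check a few files by hand.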
I tried using diff --GTYPE-group-format= with <%, but I'm not sure that's the right solution. Here's what I'm trying to do. I have two C source files, file1 and file2. file1 has a function in it that has been modified in file2. However, the function begins at a different line number in each of the files. Is there a way to specify a range of line numbers in file1 and file2 to compare, using diff or any other combination of utilities? I can always output the text from a range of lines of each file to two separate new files and then compare those, but that's tedious. I could also write a script to automate this type of solution, but I imagine there's an existing way of doing this.
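diff itself only takes whole files, but bash process substitution lets you hand it just the interesting ranges (the line numbers here are examples; sed -n 'M,Np' prints only lines M through N):

diff <(sed -n '120,180p' file1.c) <(sed -n '95,155p' file2.c)

The <(...) constructs look like files to diff, so this avoids creating the two temporary files by hand.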
I want to (from the command line) be able to count lines in a bunch of files of a specific type in a folder and all its sub-folders. How would I do this?
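find plus wc covers it (the *.txt pattern stands in for "a specific type"):

# per-file counts, with total lines added by wc
find . -name '*.txt' -exec wc -l {} +
# one grand total over all matching files
find . -name '*.txt' -exec cat {} + | wc -l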
I have been experiencing a problem where the screen loads and, after the initial first few lines, breaks up into multiple repetitions of lines. Reloading helps but has to be repeated when paging down. Mail is no problem; it is supplied by my network provider. The OS is openSUSE 11.2, which I update when advised. Below is a sample from the error console:
I want to remove duplicate or similar lines from multiple files, i.e. if I have four files file1.txt, file2.txt, file3.txt and file4.txt, I would like to find and remove similar lines from all these files, keeping only one copy of each such line. I only know that uniq can be used to remove duplicate lines from a sorted file.
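A sketch with awk (file names from your example; the originals are left alone and cleaned copies are written instead). awk remembers every line it has already printed across all the input files, so only the first occurrence survives anywhere:

awk '!seen[$0]++ { print > ("cleaned_" FILENAME) }' file1.txt file2.txt file3.txt file4.txt

Unlike uniq this needs no sorting, and it also removes repeats that live in different files, which matches the "keep only one line" requirement.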
A function named abc is called in many files. I want to copy all the lines with the function call to an output file. A simple grep on the function name doesn't help me, as the function call spans multiple lines, as follows:
abc(parameter1,
    parameter2,
    parameter3);
So I want to copy all three lines (up to the semicolon) to the output file. The problem is that there are more than 200 calls to the same function, so I cannot do it manually.
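A sketch with awk, which can carry state across lines (the *.c glob and the output name are assumptions; adjust them if the calls live in other files):

awk '/abc\(/ { in_call = 1 }          # a call starts on this line
     in_call { print }                # print every line while inside a call
     in_call && /;/ { in_call = 0 }   # the closing semicolon ends it
' *.c > abc_calls.txt

This prints each abc( ... ); call in full, whether it occupies one line or several, for all 200+ occurrences at once.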
I've come across an unusual requirement for a service on my Ubuntu system. Simply put, I need to find a way to search for all instances of a term in a file, delete the lines containing that term, and delete the four lines below each instance of that term. Either that, or copy the entirety of a file to a new file and skip over all lines containing the term plus the four below each one. This sounds kinda weird, I know. Without going too far into detail, I either have to change the logfile format for a server I'm running, which is a huge pain in the butt, or I can just run a script to edit an HTML report generated from said logs. (Said report is really just for managers to peruse, and I like my log format, so I'm pursuing option 2.)
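GNU sed's addr,+N form does exactly this in one pass (ERRORTERM and the file names are placeholders; the result goes to a new file so the original stays untouched):

sed '/ERRORTERM/,+4d' report.html > report_clean.html

Each line matching the term starts a range that also swallows the four lines after it, which is the "copy everything except the term plus four lines" variant of your two options.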
I have a PDF file (the NASM documentation) that used to be displayed perfectly by xpdf, but now all the code-example lines remain blank. On the terminal I get repeated lines such as:
Error: Couldn't create a font for 'Courier Bold'
Error: Couldn't create a font for 'Courier'
I tried to figure out the problem... the same problem occurs with both evince and okular. However, it does not occur when opening the same file as root (tested with xpdf), so it seems to be some permission problem. I tried searching on the error message but couldn't find a working solution.
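A hedged guess, given that root works and the normal user does not: a root-owned font cache in the user's home directory (this can happen after running a GUI app with sudo). Worth checking and, if it turns out to be the case, fixing (the paths are the typical fontconfig locations, not guaranteed):

ls -ld ~/.cache/fontconfig ~/.fontconfig 2>/dev/null
sudo chown -R "$USER": ~/.cache/fontconfig   # only if it is root-owned
fc-cache -fv                                 # rebuild the cache as the normal user

If the ownership is already correct, the permission problem lies elsewhere (the PDF itself or the font directories) and this sketch does not apply.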