Programming :: Script To Read File And Copy It To Other Destination?
May 15, 2010
I have a file in which I have some paths. I want to write a script that reads the file and copies the files from those paths to a destination folder that I will specify in the shell.
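A minimal sketch of one way to do this, assuming the list of paths is in a file called filelist.txt (a placeholder name) and the destination folder is passed as the first argument:

Code:
#!/bin/bash
# Usage: ./copyfiles.sh /path/to/destination
dest="$1"
while IFS= read -r path; do
    [ -z "$path" ] && continue      # skip blank lines
    cp -- "$path" "$dest"
done < filelist.txt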
How can I copy a read-only file in Linux and make the copy writable with a single cp command (Ubuntu 10.04)? The --no-preserve and --preserve options seemed to be good candidates, except that they appear to "and" the mode flags, while what I am looking for is something that will "or" them (add +w mode).
More details: I have to import a repository from Git into Perforce. I want all Perforce depot files to be read-only (that is how Perforce was designed), while all files derived/copied from depot files should be writable. Currently, if a Makefile copies a read-only file, the derived file is also read-only. This leads to build errors when cp tries to overwrite the read-only derived file a second time. Of course --force is a workaround here, but then the derived file is still read-only. I also do not want to mess with chmod after each cp command; I will do that only as a last resort.
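A hedged suggestion worth testing: with GNU cp, --no-preserve=mode does not "and" the source mode at all; the copy gets the default mode from the umask, which is normally owner-writable. Verify this against the coreutils version shipped with Ubuntu 10.04 (the file names below are placeholders):

Code:
# the copy's mode comes from the umask, not from the read-only source
cp --no-preserve=mode,ownership depot-file.c derived-file.c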
I have a .txt file with ~50,000 lines of numbers, generated by a mathematics program. From this file, I need lines ~1,100 to ~16,000 (these line numbers are always the same, by the way, which may make the solution easier) to be copied into another file, where lines ~500 to ~15,000 (also always the same) should be overwritten by the aforementioned lines. I haven't found or come up with anything that works yet; mostly I find solutions for copying everything from one file to another, but I can't find anything that specifically overwrites a part of a file with part of another.
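One possible way to splice the two files together with head, sed and tail (an untested sketch; the file names and exact line numbers are placeholders and would need to be adjusted):

Code:
#!/bin/bash
src=source.txt     # the ~50,000-line file from the mathematics program
dst=target.txt     # the file whose lines 500-15000 should be overwritten
{
    head -n 499 "$dst"              # keep lines 1..499 of the target
    sed -n '1100,16000p' "$src"     # insert lines 1100..16000 of the source
    tail -n +15001 "$dst"           # keep the target from line 15001 onwards
} > "$dst.new" && mv "$dst.new" "$dst"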
I am splitting a file based on values read from an input file. The script is below.
1) How do I add the header that is present in the original file to the new split files? (For example, pharmacyf contains a header with the table column names. The new files created (ODS.POS.$pharmacyid.$tablename.$CURRENT_DATE.dat) are missing that header.)
2) The script also creates 0-byte files for the pharmacyids that are not present in the initial file. Can this be avoided?
for pharmacyf in *
do
    tablename=`echo $pharmacyf | cut -f4 -d'.'`
    while read pharmacyid
    do
        grep -w $pharmacyid $pharmacyf >> $OUT/ODS.POS.$pharmacyid.$tablename.$CURRENT_DATE.dat
    done < inputfile
done
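One possible variation addressing both points (an untested sketch): take the header from the first line of each source file, and only create an output file when grep actually finds matching rows:

Code:
for pharmacyf in *
do
    tablename=`echo $pharmacyf | cut -f4 -d'.'`
    header=`head -1 $pharmacyf`
    while read pharmacyid
    do
        out=$OUT/ODS.POS.$pharmacyid.$tablename.$CURRENT_DATE.dat
        matches=`grep -w $pharmacyid $pharmacyf`
        if [ -n "$matches" ]; then
            [ -f "$out" ] || echo "$header" > "$out"   # write the header only once
            echo "$matches" >> "$out"
        fi
    done < inputfile
done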
I want to copy one file to multiple new files. I have an idea of how to write a script to do this, but here I am looking for a particular command that can do the operation.
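One single command that does this is tee (the file names below are placeholders):

Code:
tee copy1.txt copy2.txt copy3.txt < original.txt > /dev/null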
Alright, so I have been trying to resolve this issue for a while, but now feel like help is very necessary. I have a 128x128x128 array in a MAT file, and am using the following MATLAB script to convert it to a DAT file:
I want to read the content of a file inside a gawk script. I know that with "gawk -f filename" I can read a file, but I want to do that from inside the script.
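A small sketch of reading a second file from inside the gawk program itself with getline ("extra.txt" is a placeholder name):

Code:
gawk 'BEGIN {
    while ((getline line < "extra.txt") > 0)
        print "from extra.txt:", line
    close("extra.txt")
}'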
I am trying to read the contents of a file into something else. I have a file.txt that I am working with; I want to read the file, take the data, and run some commands with the data that was read. So if it reads www.yahoo.com, I want to be able to run nslookup on it. Does that make sense? I have been trying to use the read command, but that does not seem to work. I even tried "read filename | > filename" to see if I could read any of the data at all. Nothing is working.
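A minimal sketch of what this usually looks like in a shell script, assuming file.txt holds one host name per line:

Code:
#!/bin/bash
while IFS= read -r host; do
    [ -z "$host" ] && continue
    nslookup "$host"
done < file.txt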
I have been playing around with a script for a few hours and now I need to be able to output the lines in a text file one by one, to be used later in the script. What it is going to do is read a log file and grep the usernames, write them to a file, and then run one script for each user to search the log for more information about them. But I don't know how to output a single line from a file, and Google does not return any solution.
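Two hedged sketches, with placeholder file and script names: sed can print just one numbered line, and a while-read loop can hand the lines to another script one at a time:

Code:
# print only line 7 of users.txt
sed -n '7p' users.txt

# or feed each user name to a per-user script, one line at a time
while IFS= read -r user; do
    ./lookup_user.sh "$user"
done < users.txt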
I am trying to read from a file named matrixA.dat that contains a matrix formatted like this:
2 x 3
1 0 2 -1 3 1
I am reading the lines in, and this is the source that I have so far:
Code:
#include <stdio.h>
#include <string.h>

#define DATALIMIT 17
#define DIMLIMIT 5
#define NAMELIMIT 40
[Code]...
was stored in dataA, so when I print it all I get is a newline. Why didn't it grab the 1 0 2 -1 3 1 line?
I have a script that reads part of a line, delimited between the first and second intended parts by a colon. Then it "chops" the part after the colon, which consists of words separated by commas (counting them beforehand so as to catch every word in the string's second part), like this:
Code:
"COLORS.JPG:red,orange,yellow,green," (Returning) red
[code]....
What I want is a single script that parses/breaks both parts of a line like "COLORS.JPG:red,orange,yellow,green;blue,indigo,violet," so that the two parts, separated into single words (or two and three words, sometimes with spaces), can be used as single-line annotations and written to JPEG files using Exiv2. So far, I haven't been able to come up with a script that does this without one part of the total string (usually the part after the colon) becoming the first word in the second array. In other words, I look for this:
KEYWORDS:
[ ]red [ ]orange [ ]yellow
[code]....
Or vice versa (i.e., the second array winds up as a single-line "member" of the first). I think it's because I'm using a single while-read loop to read the text file in which the filenames and substrings happen to be. If there's some way of reading a file once and then going back to the beginning to read it again in another while loop, I haven't found it.
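An untested sketch of splitting such a line into two separate word arrays with parameter expansion, so neither half leaks into the other (the variable names are placeholders):

Code:
#!/bin/bash
line='COLORS.JPG:red,orange,yellow,green;blue,indigo,violet,'

file=${line%%:*}      # COLORS.JPG
rest=${line#*:}       # everything after the colon
part1=${rest%%;*}     # words before the semicolon
part2=${rest#*;}      # words after the semicolon

IFS=',' read -r -a keywords  <<< "$part1"
IFS=',' read -r -a keywords2 <<< "$part2"

printf '[ ]%s\n' "${keywords[@]}"    # [ ]red [ ]orange [ ]yellow [ ]green
printf '[ ]%s\n' "${keywords2[@]}"   # [ ]blue [ ]indigo [ ]violet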
I am struggling with Bash scripting at the moment (I can't see how anyone can write scripts in this language!!!). I have a need at home for a cron job that executes daily, looks up my downloads.txt file, reads each URL (one per line), and downloads content from that URL. Then that entry needs to be removed (well, I keep all the URLs in memory and clear the file afterwards). If an error occurs during the download process, the URL is written to a downloads.err file. I have all of the above working except for properly reading the URL from the text file without including newline characters. I am using the following to read:
while read url; do
    --Do whatever here--
done < downloads.txt
How can I prevent the url variable from containing newline characters?
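A hedged sketch: read -r keeps each line intact, and stripping a trailing carriage return covers a downloads.txt saved with Windows line endings, which is a common cause of a stray "newline" character in the variable (wget is only an assumption for the downloader):

Code:
#!/bin/bash
while IFS= read -r url; do
    url=${url%$'\r'}               # drop a trailing CR if present
    [ -z "$url" ] && continue
    wget -q "$url" || echo "$url" >> downloads.err
done < downloads.txt
: > downloads.txt                  # clear the file afterwards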
I am trying to read in a file one line at a time, and for some reason it stops printing out at about line 62,000.
I am doing this:
Code:
while (fgets(c0, 1085, fstream0) != NULL)
But after about 62,000 lines it stops printing. There is no seg-fault and no core dump; it just stops printing to the terminal and then returns me to the command line after a couple of minutes. As a hack I am doing split -l 50000 on the input and calling my program 5 times. Is there some limitation on fgets that I am not understanding?
I am working on a project that uses Bluetooth to send data byte by byte to external devices, but I'm not familiar with using arrays to read a file from another location before sending the data. If you could, please correct my code. Here's my code:
I have written a long piece of code above, in which main calls openFile(&fout, filename). filename contains the text file name in the form "data.txt". I want to read the data from the file and output it into fout for later use. The data in that file is an integer group that looks like a vector. I have the following code:
I'm writing a program which now accepts user input:
Code: echo "Enter a date in the format YYYY MM DD hh mm ss."
read gregyr gregmo gregdy greghr gregmn gregsc This lets the user input a date and time, such as 2011 06 21 15 12 45, and have each number assigned to their corresponding variable. Later in the program, these variables are put into an equation, and then the terminal spits out the answer. Now I have to have the program read all of the lines from a text file, which is in this format, assign the variables.
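A sketch of the same read, but driven from a text file instead of the keyboard (dates.txt is a placeholder name, one date per line in the same YYYY MM DD hh mm ss format):

Code:
#!/bin/bash
while read -r gregyr gregmo gregdy greghr gregmn gregsc; do
    echo "Read: $gregyr-$gregmo-$gregdy $greghr:$gregmn:$gregsc"
    # ...plug the six variables into the existing equation here...
done < dates.txt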
I am writing a script that involves reading the content of a file present in a directory and/or its subdirectories. I know readdir returns all the file and directory names in a directory, but how do I check whether readdir is returning a file or a directory?
What are the possible problems when Windows accesses a file from Ubuntu and gets it as read-only, even though it has full permission to read, write, and execute the file? When accessing the file from Ubuntu to Ubuntu there is no problem; only Windows has a problem.
I don't think this is a "perl one-liner" find-and-replace job. I'm trying to auto-fill some information in a listing of files. The simplest example is that the following exists in the files:
I would want the script to find this and populate it with something like -- Date : 20101004-1758
I have a few more similar fields to auto-fill, and I'd like to do this from within a larger Perl script I'm developing to process these files. So, how do I perform in-place file modification from within a Perl script?
I've been trying to sort this out for several hours and I'm totally lost. I've been searching around, but haven't found the solution to my problem. I have a directory with 100 files. I need to copy 10 lines of each file (let's say from line 45 to 55) into one single file. So I guess I could use sed's w command, but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) and concatenate them later on, but I only got 1 file, not 100.
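An untested sketch of the loop version with sed, appending lines 45-55 of every file in the directory to one output file (combined.txt is a placeholder name):

Code:
#!/bin/bash
: > combined.txt
for f in *; do
    [ -f "$f" ] || continue
    sed -n '45,55p' "$f" >> combined.txt
done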
I have files and folders with various permissions. I copied the files and folders to the X server, but I forgot to copy the permissions along with them. How can I restore the permissions?
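A hedged suggestion, assuming the original tree is still available: getfacl can record the permissions there and setfacl can replay them onto the copy (the paths are placeholders, and the .acl file has to be transferred to the destination machine):

Code:
# on the machine with the original files
cd /original/tree && getfacl -R . > /tmp/perms.acl

# on the X server, inside the copied tree
cd /copied/tree && setfacl --restore=/tmp/perms.acl

For future copies, cp -a or rsync -a preserve permissions in the first place.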
I ran into this while googling "Segmentation Fault". I'm writing a simple C program that reads a file, counts each line and numbers it, then writes to a file called sdout. I copied my program mostly from the textbook, but I'm still having problems. Here's my code:
Instead of importing a file, I would like to use the variable $x. I tried using pipes, but with no luck. My goal is to read one line at a time, but not have to export my data to another file; I would like to keep it all within one script.
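A small sketch using a here-string, which feeds the contents of $x to the loop without any temporary file (the contents of $x here are just an illustration):

Code:
#!/bin/bash
x=$'first line\nsecond line\nthird line'

while IFS= read -r line; do
    echo "got: $line"
done <<< "$x"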