General :: Renaming Files With Sed - Mv: Missing Destination File Operand After `$i'
Mar 20, 2011
Today I am trying to learn how to use sed. I set up a testing folder with the following files:
AAb.lol
AAc.lol
AAx.lol
test.sh
My goal is to create a script (test.sh) which renames all the files to their original name without AA. I want to end up with this:
b.lol
c.lol
x.lol
test.sh
sed seemed to be the perfect tool, so I went ahead and created a script which I think should do the job.
[Code].........
mv: missing destination file operand after `$i'
From that second line I can tell that $NewName is just empty. I also read something about sed needing the -e option for scripting purposes, but I just don't understand it.
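For reference, a minimal sketch of the kind of script described, assuming bash; the loop variable names follow the post, but the exact sed expression is an assumption:
Code:
#!/bin/bash
# strip the leading "AA" from every matching filename
for i in AA*.lol; do
    NewName=$(echo "$i" | sed -e 's/^AA//')
    mv "$i" "$NewName"
done
If $NewName ends up empty, a common cause is that the filename is never actually piped into sed (for example, sed 's/^AA//' $i would try to edit the file's contents rather than its name).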
How can I rename all files in a directory up to the first dot (thereby leaving the file extension alone) to the same thing? I'm trying to rename all my media files and associated files in a directory to (preferably) the name of the directory itself. If I have
Code:
A Clockwork Orange/
    wzzyfg.cd1.avi
    wzzyfg.cd2.avi
    wzzyfg.nfo
    ACO.fanart.jpg
    orange.tbn
I'd like to automatically mass rename them all to
Code:
A Clockwork Orange/
    A Clockwork Orange.cd1.avi
    A Clockwork Orange.cd2.avi
    A Clockwork Orange.nfo
    A Clockwork Orange.fanart.jpg
    A Clockwork Orange.tbn
I have rename on my server, which I used to remove underscores from file names, but I don't know how I would use it to rename everything up to the first period. Bonus points for renaming stuff to the name of the parent folder!
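A minimal sketch of one way to do this, assuming it is run from inside the movie's own directory; the parent folder's name replaces everything up to the first dot:
Code:
# use the current directory's name as the new base name
dir=$(basename "$PWD")
for f in *.*; do
    mv "$f" "$dir.${f#*.}"   # ${f#*.} keeps everything after the first dot
done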
I recently replaced my Windows fileserver with one running Ubuntu. One thing I've noticed (which is annoying) is that when I copy files between two Samba shares from my Windows machine, it copies the file through my PC to the new destination. Between Windows shares it just did some sort of local copy (i.e. it took about 2 seconds) rather than 3-4 minutes. Is this the normal behaviour, and is there any way around it on Linux?
I have just been bothered by a fairly small issue for some time now. I am trying to search (using find -name) for some .jpg files recursively. This is a Redhat environment with bash.
I get this job done, though I need to copy ALL of them and put them in a separate folder, BUT I also need to keep the directory structure intact after copying.
For example, if I find a JPG file under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment, I am simply putting all files under /test/old/, and I can't get the trailing /new/1/ folder path created under /test/old/.
I understand this could well be done using a while or if-else loop, but if someone can just guide me with a hint, I would be really grateful.
I will complete the rest of the steps myself and am asking here since I am still not comfortable with shell/bash scripts yet, and I'm planning to get really good at them over the next couple of months.
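As a hint, GNU cp's --parents option can recreate the directory structure during the copy; a minimal sketch, using the paths from the post:
Code:
# feed relative paths to find so --parents rebuilds them under /test/old
cd /home/usr
find . -type f -name '*.jpg' -exec cp --parents {} /test/old/ \;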
What's the command for renaming files? I thought it was "mv"--I typed "info" and read
Quote:
* mv: (coreutils)mv invocation.          Rename files.
So, desiring to give a .JPG extension to a JPEG file that had no extension (because I dug it out of my Firefox cache), I typed
I had a situation in which the path of the file to be copied is written in another file and I had to copy it using a shell script. I can use cp $(cat /home/robert/location.txt) /media/sda1 in a normal Linux shell, but I am using a buildroot script where $(cat /home/robert/location.txt) evaluates to nothing; it is just blank.
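A minimal sketch of a workaround, assuming the minimal shell in the buildroot environment understands the older backquote form of command substitution:
Code:
#!/bin/sh
# read the source path from the text file, then copy it
SRC=`cat /home/robert/location.txt`
cp "$SRC" /media/sda1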
I ran a script which generated about 10k files in a directory. I just discovered that there is a bug in the script which causes some filenames to contain a carriage return (presumably a '\r' character).
I want to run a sed command to remove the carriage return from the filenames.
Does anyone know which params to pass to sed to clean up the filenames in the manner described?
I am running on Linux (Ubuntu).
The character causing the filename to 'break up' across multiple lines appears to be a CR (carriage return) rather than a newline. The filename is displayed in the title of a text editor with %0D in the positions where the file name breaks up. So I need to remove the CR chars from my filenames.
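A minimal sketch of a cleanup loop; GNU sed understands \r, so the carriage return can be stripped from each name before renaming:
Code:
# rename every file in the current directory whose name contains a CR
for f in *; do
    new=$(printf '%s\n' "$f" | sed 's/\r//g')
    [ "$f" = "$new" ] || mv "$f" "$new"
done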
Objective: to move or back up all files that are 30 days old to another server within the LAN. I have tried testing it first within the server by running the command below:
find /usr/test1/* -mtime +30 -exec mv {} /usr/test2/ ;
But I'm getting a "mv: missing file argument" error when I try this.
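One common cause of that error is the terminating ';' of -exec being eaten by the shell; it has to be escaped. A minimal sketch, with the paths from the post:
Code:
# escape the semicolon so find receives it as the -exec terminator
find /usr/test1/ -mtime +30 -exec mv {} /usr/test2/ \;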
I have files whose names look like this:
Sim1-2_40.36.chr20_sb.foo.indel.novoalign.sam
Sim1-2_40.36.chr20_sb.foo.indel.bwa.sam
What I want to do is replace all "indel" with "snp" in the names, yielding:
Sim1-2_40.36.chr20_sb.foo.snp.novoalign.sam
Sim1-2_40.36.chr20_sb.foo.snp.bwa.sam
But why doesn't this unix command work?
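A minimal sketch of one way to do the rename in bash, using its pattern-substitution expansion rather than sed:
Code:
# replace ".indel." with ".snp." in every matching filename
for f in *.indel.*.sam; do
    mv "$f" "${f/.indel./.snp.}"
done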
Recently I installed Dropbox on a server to do file synchronization, and it added " (Case Conflict 1)" to a whole bunch of my files! I realize now that it was caused by case insensitivity, but I'm still left with hundreds of files in this renamed state. Is there a script in Linux that would allow me to recursively go through the directories and strip out this string?
i.e.
a (Case Conflict 1).jpg --> a.jpg
/myfolder/abc (Case Conflict 1).doc --> /myfolder/abc.doc
/myfolder/subfolder/mydoc (Case Conflict 1).pdf --> /myfolder/subfolder/mydoc.pdf
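A minimal sketch of such a script, assuming the suffix is exactly " (Case Conflict 1)" as in the examples:
Code:
# walk depth-first so renamed directories do not break deeper paths
find . -depth -name '* (Case Conflict 1)*' | while IFS= read -r f; do
    mv "$f" "$(printf '%s\n' "$f" | sed 's/ (Case Conflict 1)//')"
done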
Basically I need to rename a bunch of .doc files using the for structure and the mv command (with wildcards) in bash. I guess this would be a bit easier if I used the rename command, but since this is a school assignment of sorts, I need to use for & mv. The .doc files are named "1filename.doc", "2filename.doc" etc., and I've got to rename them to "aaa_1filename.doc", "aaa_2filename.doc", "aaa_3filename.doc" and so on. I tried to dabble quite a bit with the for and mv commands and basically just got a bunch of errors. Every damn time. For 2 hours. The most common error was "mv: missing destination file operand after ..."
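A minimal sketch of the loop, using only for and mv as the assignment requires:
Code:
# prefix every .doc file in the current directory with "aaa_"
for f in *.doc; do
    mv "$f" "aaa_$f"
done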
I have some random files in a folder. I want to rename all of the files in a batch process. I have a text file that contains the Currentname of every file in the folder, as well as a text file with the Newname of every file in the folder. I want to replace the Currentnames with the Newnames.
For example, here are the names of the files in the folder: 1.mp4 2.mp4 3.mp4
I have a text file with the Currentname of all the files in the folder: 1.mp4 2.mp4 3.mp4
I have a text file with the proper Newname of the file: a.mp4 b.mp4 c.mp4
I want to rename Currentname to Newname in the folder, so when I go to the folder the Newnames of the files are: a.mp4 b.mp4 c.mp4
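A minimal sketch, assuming the two lists live in files called current.txt and new.txt (the names are assumptions) and that line N of one corresponds to line N of the other:
Code:
# pair up old and new names line by line and rename
paste current.txt new.txt | while IFS=$'\t' read -r old new; do
    mv "$old" "$new"
done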
Suppose I have a tree structure like this:
/home/mahmood/sim/a/b/file1.cpp
/home/mahmood/sim/a/b/file2.h
/home/mahmood/sim/a/c/file3.txt
/home/mahmood/sim/d/file4.txt
How can I copy all of them to /home/mahmood/sim, so that when I run "ls" in /home/mahmood/sim, I see all the files: file1.cpp file2.h file3.txt file4.txt?
Can 'cp' search for all the files and copy them into another folder?
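A minimal sketch using find, which flattens the tree into /home/mahmood/sim (note that files with identical names would overwrite one another):
Code:
# copy every regular file found below sim/ directly into sim/
find /home/mahmood/sim -mindepth 2 -type f -exec cp {} /home/mahmood/sim/ \;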
I have recently purchased an external hard drive in order to back up my home partition. In my PC I have a "1.5T" drive with several partitions on it, containing OSes and the home partition. The home partition is 1.3T according to df; the external drive contains one partition that spans the entire disk, which df reports as 1.4T in size. Both partitions are ext3. When I use rsync to copy files from the home partition to the external partition, the external disk becomes full, despite the destination supposedly being larger than the source. I don't understand why copying files from one partition to a slightly bigger partition should need more space than on the source partition. Does anyone know what is happening?
Details: I created the partition on the external drive with GParted; GParted reported that it already had several gigabytes of used space immediately after the partition's creation. I thought at the time that this must be normal. The home partition contains many files of all sorts, including lots of big audio and video files. If you are wondering, this external disk is only a secondary backup for all my important files, as they are also backed up to the "internet".
These are the mount points:
/mnt/tmp/ : home partition, /dev/sdb6
/mnt/external/ : external partition, /dev/sdc1
I used rsync to copy the files; I know there are more efficient ways to do this, but I wanted to use the same command that I will subsequently run to sync the backup.
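A minimal sketch of the kind of rsync invocation described (the exact options are assumptions); -H preserves hard links, which are otherwise copied as separate files and can inflate the destination:
Code:
# mirror the home partition onto the external partition
rsync -aH --one-file-system --sparse /mnt/tmp/ /mnt/external/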
Next I tried adding the --sparse switch, as I was wondering if the problem might come from sparse files. I don't know, however, whether rsync would go back and shrink the sparse files just by adding the switch and re-running the command. I also added --one-file-system, for good measure. Here is what I ran next:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: write failed on "abcd.avi": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.6]
[code]....
Looking at the destination after a partial copy seems to indicate that the problem is not symbolic links being "expanded". I have not checked the source filesystem for sparse files, nor the destination to see if these files could be larger there, as this does not seem trivial.
Here is some additional info :
$ df /mnt/tmp/
Filesystem     1K-blocks       Used  Available Use% Mounted on
/dev/sdb6     1415342836 1414173740     369096 100% /mnt/tmp
We have an rsync cron job set up to mirror all the files in a "../dashtdocs/docs" folder to the same folder on another server. It copies all the files over correctly and deletes any files in the "docs" directory that aren't in the sending directory, but it also deletes any files we put in the target directory's parent folder (../dashtdocs, or other subfolders like ../dashtdocs/images) even though they've been excluded in the .rsync-filter file.
So for example server A has ../dashtdocs/docs and ../dashtdocs/images. Server B has ../dashtdocs/docs, but if I manually copy the images folder over to ../dashtdocs/images, the images folder gets deleted from the target directory every time rsync runs.
I'd like to keep just the docs directory synched and update the other folders manually, but they keep getting deleted. It looks to me like it's running with the --delete-excluded option, but that option wasn't used.
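A minimal sketch of an alternative that sidesteps the filter problem (paths and host are placeholders): sync only the docs directory itself, so sibling folders on the target are never candidates for deletion:
Code:
# only docs/ is mirrored and pruned; its parent folder is left alone
rsync -a --delete /path/to/dashtdocs/docs/ serverB:/path/to/dashtdocs/docs/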
We used to send files in the form of .jpg, .tif, and/or .pdf. Normally the file name will be in the form 08072011IE01CTYHUB.PDF (DDMMYYYY is the date, IE the publication, 01 the page number, CTY the edition name and HUB the destination in three characters). These files are stored in a common folder (say SOURCE). I need a script to move these files to their destination through FTP, reading the destination from the file name. At the destination these files should be moved to a folder meant for CTY. Please note that before a file is sent through FTP it should be compressed (zipped). In the SOURCE folder the files will be as:
etc., where the first 8 characters are the date in the form DDMMYYYY, the next 2 characters are the publication, the last 3 characters are the destination, the previous 3 characters are the edition, and whatever is left over in the middle is the page number in the form NN or a name. Presently I am zipping these files and sending them through FTP to the destination. At the destination my counterpart takes the file and stores it in the appropriate location (like a folder named CTY) for use. To automate the above process, I want a script.
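A minimal sketch of the kind of script described, assuming filenames like 08072011IE01CTYHUB.PDF; the FTP host naming, credentials and remote folder layout are all placeholders and assumptions:
Code:
cd /path/to/SOURCE
for f in *.PDF; do
    base=${f%.PDF}
    dest=${base: -3}          # destination code, e.g. HUB
    edition=${base:(-6):3}    # edition code, e.g. CTY
    zip "$base.zip" "$f"      # compress before sending
    ftp -inv "ftp.$dest.example.com" <<EOF
user ftpuser ftppass
cd $edition
put $base.zip
bye
EOF
done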
I have inherited a mixed bag of sorts: several XP users updating an Access mdb with the BE on a LAMP stack shared via Samba. I have a backup device which gets mounted at /media/disk... Each client record has a folder named after the companyname on the Samba share, and all related documents are placed there. When the backup script runs, it just copies newer or missing files.
Someone has been renaming folders and not matching the folder name to the related companyname from the mdb. So... the backup script captures and duplicates the data in the renamed folders. Some client records also have periods in the name (not required from a data point of view), such as 'Company Ltd.' instead of 'Company Ltd'. I can easily enough produce a list of company names as the folders should be named, but get a little stuck with the Linux scripting.
I can easily remove and further prevent any unwanted punctuation in the company name on the client record, and create the correct folder name on the Samba share with VBA, but would also like to:
- for each 'client activity' folder on the backup device:
- rename the folder by removing punctuation marks, or
- delete the folder if it is a dupe
I tried: ls -al | grep '&' - it properly returns only those lines with an ampersand in the folder name, but it returns all folders when I try that with a '.'.
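That is expected: in a regular expression a bare '.' matches any character, so every name matches. A quick sketch of two ways to match a literal period:
Code:
ls -al | grep '\.'     # escape the dot
ls -al | grep -F '.'   # or treat the pattern as a fixed string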
What would be the easiest method to do the renaming? I thought that if there was a way to change ownership of the mounted device, then the VBA code (easy to write) would keep things simple.
OK - I just ran chown -R on the external device, changing ownership to (me) instead of root. I didn't want to because it took too long, but I can now use the MoveFolder method of the FileSystemObject from my app to do the renaming instead of some sort of bash script (which I was dreading).
That works to disallow non-owners from renaming the file, but what I would like to do is disallow EVERYONE (including the owner of the file) from editing, moving, or changing the filename once it is created. The only person who should be able to make those changes is a special user.
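If the special user can be root, one approach is the immutable attribute; a minimal sketch (the path is a placeholder):
Code:
chattr +i /path/to/file    # nobody, not even the owner, can modify, rename or delete it
lsattr /path/to/file       # the 'i' flag shows the file is locked
chattr -i /path/to/file    # only root can clear the flag to allow changes again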
I have a dir (pub_html) with 45 sub dirs, and in each there is a file with the name file123.html. What command can I use to rename all files with this name in all sub dirs to file456.html? I'm on openSUSE 11.3.
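A minimal sketch using GNU find, run from the directory that contains pub_html:
Code:
# rename file123.html to file456.html inside every subdirectory
find pub_html -type f -name file123.html -execdir mv {} file456.html \;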
This seems to be a strange question, I know. I've a database sqlite file called "a.db". I need to copy it 20 times (one time for each letter of the alphabet) to have b.db, c.db, d.db, and so on.
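A minimal sketch in bash, assuming the copies should be named b.db through z.db:
Code:
# one copy of a.db per remaining letter of the alphabet
for letter in {b..z}; do
    cp a.db "$letter.db"
done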
Missing ifcfg-eth[2-5] files for a ZNYX 345Q Quad Port 10/100 card. The GUI network device list shows the ports for my ZNYX ZX345Q Quad Port card as Auto eth2, Auto eth3, etc. My motherboard and Intel cards show as System eth0 and System eth1.
There ARE corresponding entries for those in my /etc/sysconfig/network-settings/ directory, but there are no ifcfg-eth[2-5] files to correspond to these adapters. Can I just write my own files, and will that do it?
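For reference, a hand-written ifcfg file usually looks something like the sketch below (the MAC address and other values here are placeholders), saved alongside the existing ifcfg-eth0 and ifcfg-eth1:
Code:
# ifcfg-eth2 (values are assumptions)
DEVICE=eth2
HWADDR=00:11:22:33:44:55
BOOTPROTO=dhcp
ONBOOT=yes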
How does Fedora 12/13 load these drivers into the kernel without having these ifcfg files?
I'd love to know if there is another way Fedora controls NICs / other system resources.
Many years ago, I converted a portion of my files to an arbitrary format with a specific extension. I no longer desire to have them in this format, and I would like to begin the process of replacing them, because conversion is not an appropriate solution. Unfortunately, they are mixed, in separate folders of the same root folder, with files in my current format that have a different extension. I feel it would make this process easier if I were to move every folder that contains a file in the undesired format to a separate root folder. The files are stored on a Linux server and shared via Samba. How can I do this with a couple of commands or a script? I am open to other suggestions as well; I want to avoid time spent editing text files. Ultimately, a command that produced a list of full paths for the folders, sorted by the number of levels, would be a nice touch. A list of all of the files is clearly not what I'm looking for.
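A minimal sketch of the folder listing and the move, assuming the shared root is /srv/share, the unwanted extension is .old, and the new root is /srv/relocated (all placeholders):
Code:
# list each folder that contains at least one file in the old format,
# then move those folders; nested matches may need a second pass
find /srv/share -type f -name '*.old' -printf '%h\n' | sort -u |
while IFS= read -r dir; do
    mv "$dir" /srv/relocated/
done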
I have some source code written in C++. I found it is not supported by the newer version of GCC, so I wanted to install an older version. But it always fails with an error like "./read-rtl.c:653: error: lvalue required as increment operand". What should I do now?