Hey all, I have a folder with lots of random JPEGs, but they all have the word 'SOMETHINGRANDOM' in their names, which I want to remove. I'm trying something like this, but it just renames all the files to 'newname'? Code: for filename in *.jpg; do newname=`echo $filename | sed -e 's/SOMETHINGRANDOM//g'` mv $filename newname; done
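A minimal sketch of a fix, assuming bash and that the files sit in the current directory: the original line is missing a semicolon after the command substitution and passes the literal word newname to mv instead of the variable, so every file ends up named 'newname'. Quoting also protects names with spaces.
Code:
for filename in *.jpg; do
    # strip the unwanted text, then rename using the computed name
    newname=$(echo "$filename" | sed 's/SOMETHINGRANDOM//g')
    mv -- "$filename" "$newname"
done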
I have filenames like this: abc (e).doc, and I want to rename them to abc.doc. I have a directory full of file names like this. How can I do this using the sed command? I have looked online for about 2-3 hours now and am frustrated that I can't find an answer.
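A sketch, under the assumption that the unwanted part is always a space plus a parenthesised chunk just before the extension; the test avoids calling mv when nothing changed.
Code:
for f in *.doc; do
    new=$(echo "$f" | sed 's/ (.*)\.doc$/.doc/')
    [ "$f" != "$new" ] && mv -- "$f" "$new"
done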
I have just switched to banshee as my media player and imported my films and music. Problem is, the video list is quite hard to read because all the video files have spaces in their names which are replaced by % signs, numbers and letters. I'm wondering if there is a command I can use in the directory that will automatically remove all the spaces from the filenames or better still, replace the spaces with hyphens or underscores?
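A sketch, assuming bash and that the files are in the current directory; it swaps every space for an underscore (use a hyphen in the replacement instead if preferred).
Code:
for f in *\ *; do
    [ -e "$f" ] || continue      # skip if no names contain a space
    mv -- "$f" "${f// /_}"       # use "${f// /-}" for hyphens
done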
I searched the forum and didn't find any threads that seemed to answer this question. I have a large directory of files, and dozens of subdirectories on a remote box I have ssh access to. I need a subset of these files copied to another folder.
Example:
directories:
parent
 -sub1
 -sub2
 -sub3
files I want (the files are all the same format, but some have extensions and others don't):
1100 1215 1322 1442 1500 1512
Unfortunately, I need a lot of files, and plan to do this on a regular basis (the files I need will be different each time). I was thinking it would be nice to put the filenames in a text file (one filename per line) and use the find command to copy the files (I don't necessarily know which subdirectory each file will be in).
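A sketch along those lines, assuming a list file called wanted.txt with one base name per line; the paths are placeholders, and the trailing * lets find match the names whether or not they carry an extension.
Code:
while IFS= read -r name; do
    find /path/to/parent -type f -name "${name}*" \
        -exec cp -- {} /path/to/destination/ \;
done < wanted.txt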
I would like to remove a part from wiz_khalifa-black_&_yellow-(82_bpm).mp3. The part to be removed is -(*_bpm),
so that gives wiz_khalifa-black_&_yellow.mp3.
A further problem is that sometimes multiple "(" occur in a filename (e.g. wiz_khalifa-black_&_yellow-(remix)-(82_bpm)), so how can I remove only from the last "("?
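A sketch with sed: anchoring the pattern to the end of the name and forbidding ")" inside the parentheses means only the final "(..._bpm)" group is touched, so "(remix)" survives.
Code:
for f in *.mp3; do
    new=$(echo "$f" | sed 's/-([^)]*_bpm)\.mp3$/.mp3/')
    [ "$f" != "$new" ] && mv -- "$f" "$new"
done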
I just used dd to clone a Linux partition to a new hard drive. The old hard drive had 800 MB left; after dd, the new hard drive lists 1.29 of 1.3 terabytes full. Is this what happens by default with dd? How can I fix this?
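For what it's worth, dd is a bit-for-bit copy, so the clone inherits the source filesystem's size and usage figures. A sketch of the usual follow-up, assuming an ext3 filesystem on /dev/sdb1 (a placeholder device): check what is really used, then grow the filesystem if the new partition is larger than the old one.
Code:
df -h /mnt/newdrive        # mount point is a placeholder; shows real usage
sudo e2fsck -f /dev/sdb1   # resize2fs insists on a clean check first
sudo resize2fs /dev/sdb1   # expand the fs to fill the (enlarged) partition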
I'm trying to figure out how to access the local part and the domain part of an email address in Postfix's main.cf. For example, myname@mydomain.net has myname as the local part and mydomain.net as the domain part. I get the whole email address with %s. I want to speed up the lookups by writing better database queries. I've had no luck finding this in the otherwise well-documented Postfix.
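If the lookups go through one of Postfix's database maps (mysql_table, ldap_table), those table drivers document %u for the local part and %d for the domain alongside %s for the whole address. A sketch of a hypothetical MySQL map file using them; the file name, credentials, and schema are all made up.
Code:
# /etc/postfix/mysql-aliases.cf (hypothetical)
hosts = 127.0.0.1
user = postfix
password = secret
dbname = mail
query = SELECT destination FROM aliases WHERE localpart = '%u' AND domain = '%d'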
We have access to one domain name, one internet IP address, and many servers hosting different parts of the site. I want them all to be accessed via the same web site. Some of the servers in our network are embedded devices; they have their specific utility hosted on that machine, so the servers are bound to stay distributed. I just want to know how I can access all of them via the single IP address and domain name.
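One common pattern is a reverse proxy on the machine that owns the public IP, routing requests by path (or by subdomain) to the internal servers and embedded devices. A rough sketch with nginx; the domain, internal addresses, and paths are all placeholders.
Code:
server {
    listen 80;
    server_name example.com;

    location /app1/ {
        proxy_pass http://192.168.1.10/;   # internal web server
    }
    location /device1/ {
        proxy_pass http://192.168.1.20/;   # embedded device's utility page
    }
}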
In the Linux boot sequence, the first step is to check the CMOS RAM (64 bytes in size) setup for custom settings. I am just confused whether the CMOS RAM is part of the motherboard or part of the main RAM itself.
I have a txt file with a couple of comment lines:
Number of title = !num!
#line1
#line2
#line3
I wrote a script with sed to replace !num! in this file, which is very straightforward. However, based on the !num! value, I also want to remove that many of the "#" lines. Is there an easy way to do that with sed? Otherwise, I will have to write a script to loop through the file.
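A sketch, reading the request as "delete that many of the # comment lines"; the file names and the value of num are placeholders, and awk does the counting that plain sed makes awkward.
Code:
num=2   # hypothetical value that also replaces !num!
sed "s/!num!/$num/" input.txt |
    awk -v n="$num" '/^#/ && c < n { c++; next } { print }' > output.txt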
I have hundreds of directories in various subdirs that I need to remove. I want to remove all of these dirs, but can only find solutions on how to remove files (or how to remove subdirs from within the current dir).
I think I need something like
find -iname 'testfile*' | xargs rm -i
where I want to remove every directory that contains the word 'testfile' within the directory name. I know xargs won't work for dirs.
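A sketch with find alone, which handles directories fine: -prune stops find from descending into a matched directory before rm removes it. Run the first line to review the list before deleting anything.
Code:
find . -type d -iname '*testfile*' -prune -print                 # dry run
find . -type d -iname '*testfile*' -prune -exec rm -r -- {} +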
I have a directory (Linux user) with a number of files which have an added [!] at the end of each file name, so each file reads as: foo something [!].zip, bar something [!].zip, helloworld [!].zip, etc. What is the quickest way to batch rename these to remove the trailing [!] from the file names?
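A sketch, assuming bash and that the files end in a literal " [!].zip"; sed strips the bracketed marker and keeps the extension.
Code:
for f in *' [!].zip'; do
    new=$(echo "$f" | sed 's/ \[!\]\.zip$/.zip/')
    mv -- "$f" "$new"
done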
I was doing some data recovery with PhotoRec and by the time I was done I had over 700 folders (recup_dir.). The only solution I was able to apply was the one posted by pljvaldez on this site, dated 04-09-08, 09:01 AM. After doing the same thing at least 70 times, I decided to ask: does anyone know how to delete multiple folders at the same time?
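A sketch, assuming the recovered folders follow PhotoRec's usual recup_dir.1, recup_dir.2, ... naming and all sit under the current directory; echo first to see exactly what would be deleted.
Code:
echo rm -r recup_dir.*    # dry run: just prints the expanded command
rm -r recup_dir.*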
I want to be able to remove the first character of a line when I highlight multiple lines in gedit. Example:
%Example is
%Commented Code
%Uncomment using this shortcut
I would then highlight/select these lines, and remove the first character to make it look like this:
Example is
Commented Code
Uncomment using this shortcut
I'm pretty sure there is an actual shortcut for this. If there is another text editor on Linux that it would work in, it would be nice to know how to do it in that editor as well.
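If a shell alternative is acceptable, sed can strip a leading character from a whole file, and editors with a column mode (Vim, for instance) can delete the first column of a selection directly. A small sketch, with the filename as a placeholder:
Code:
sed -i 's/^%//' code.m    # drop a leading % from every line of the file
# in Vim: Ctrl-v to start a block selection, move down over the lines, then x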
Basically my laptop had a Windows 7 partition that came with the system (which I no longer need), an Ubuntu partition, and a separate partition for storage. That was until I formatted the separate partition using Windows; it did something that gave me a file system error and would not let me boot into any OS. Then, because I was in a rush and had lectures in an hour, I installed another Ubuntu partition of 3 GB just to reinstall GRUB so I could boot into my original Ubuntu and get my files. Now I would like to delete the 3 GB Ubuntu partition and the Windows partition, be left with my original Ubuntu partition, and then merge the drive into just that one partition. My main fear before I attempt this is destroying GRUB again. I know I have made a mess of this, but I would really appreciate being pointed in the right direction. I have done some searching and reading but struggled to find clear instructions on how to do it properly.
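A rough sketch of the usual GRUB rescue route in case the repartitioning does break booting, run from an Ubuntu live session; /dev/sda1 (the remaining Ubuntu root) and /dev/sda are assumptions to adjust to the real layout.
Code:
sudo mount /dev/sda1 /mnt
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt grub-install /dev/sda   # reinstall GRUB to the disk's MBR
sudo chroot /mnt update-grub             # regenerate the boot menu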
My server was hit with an injection script which has placed code across many of my clients' files. I need a script that can remove a block of PHP code that spans multiple lines, multiple directories/files, and is dynamic, meaning that part of the code changes. I think find/sed is what I need, but I cannot seem to figure out how to get it to work. The following is the script that is being injected everywhere. The catch is that they have generated dynamic code at the start/end of the script (I have commented the parts that change dynamically on EVERY instance). PLEASE NOTE: directly following this script is the start of a valid PHP script that I do not want to delete.
<?php //{{65281980 - DYNAMIC!! GLOBAL $alreadyxxx;
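A sketch of the find/sed direction, assuming each injected block starts with "<?php //{{" plus a changing number and ends with a matching "//}}...?>" marker; that closing marker is an assumption. Test on copies first, because if the injection and the legitimate code ever share a line, a line-range delete will take the good code with it.
Code:
# /var/www is a placeholder webroot; -i.bak keeps a backup of every changed file.
# If the closing marker is missing in a file, the range delete runs to end of
# file, so spot-check the diffs against the .bak copies afterwards.
find /var/www -type f -name '*.php' -exec \
    sed -i.bak '\%^<?php //{{[0-9]\+%,\%//}}[0-9]\+?>%d' {} +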
Is there any way to quickly remove multiple related packages from the command line instead of having to enter the name of every single one? I am trying to remove OpenOffice from my server running 10.04. It would work nicely if I could get a list of packages without line breaks, such as the list displayed by aptitude when upgrading. That way I could just paste the package list into the terminal. However, "aptitude search 'openoffice'" dumps a long list on many lines that cannot be used that way.
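A sketch with aptitude itself: print only the installed package names that match, one per line, then hand the whole list to remove in a single command. The openoffice pattern is taken from the post; review the dry-run output before running the second line.
Code:
aptitude search -F '%p' '~i ~n openoffice'                       # dry run: list matches
sudo aptitude remove $(aptitude search -F '%p' '~i ~n openoffice')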
I want to remove duplicate or similar lines from multiple files, i.e. if I have four files file1.txt, file2.txt, file3.txt and file4.txt, I would like to find and remove similar lines from all of these files, keeping only one copy of each. I only know that uniq can be used to remove duplicate lines from a sorted file.
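A sketch with awk, which remembers every line it has already printed across all the input files and writes a de-duplicated copy of each one (the .dedup suffix is a made-up name); unlike uniq it needs no sorting.
Code:
awk '!seen[$0]++ { print > (FILENAME ".dedup") }' file1.txt file2.txt file3.txt file4.txt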
I am currently working on a script which makes regular backups of some data I have, and I would like to name the compressed tar files with the date they were created; in short, I want to rename a file:
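A sketch of the date-naming part, with the archive prefix and source path as placeholders; adjust the date format string to taste.
Code:
stamp=$(date +%Y-%m-%d)
tar -czf "backup-$stamp.tar.gz" /path/to/data    # both names are placeholders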
I want to travel for a while and need Windows 7 for that. I want to copy my Linux Thunderbird profile, with many years of emails, across to Windows 7, then back to Linux when I'm finished with Windows 7. I copy the "profiles" folder at ~/.thunderbird/profiles over to Windows 7. Being thorough, I then run the Windows app "chkdsk" to see if Windows dislikes what I did in a filesystem context. Chkdsk finds three illegal filenames in the copied folder. The filenames contain colons.
They are as follows: a directory named "mailbox:", a directory named "mailbox:.sdb", and a file named "mailbox:.msf".
I try to manipulate them in Windows (e.g. rename, delete, open, whatever) and get error messages about invalid names. It sounds to me like the items really are corrupt. So now I have a partially corrupted Thunderbird that works in Linux, doesn't work in Windows, and has years of emails in it. How do I straighten out Thunderbird in Linux? (I'll worry about Windows later.)
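Colons are legal on Linux filesystems but not on NTFS, so the profile on the Linux side is probably intact; the trouble only appears once the files land on Windows. One hedged sketch is to rename the offending entries in a copy of the profile before moving it (Thunderbird may need the original names, so keep the untouched profile as well); the path is a placeholder.
Code:
# rename anything under the profile copy whose name contains a colon
find ~/thunderbird-profile-copy -depth -name '*:*' | while IFS= read -r p; do
    mv -- "$p" "$(dirname "$p")/$(basename "$p" | tr ':' '_')"
done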
List filenames one per line, in bash, without including directories. I think he was either wrong or making that up. There is a way to list just the names, one per line, but there aren't any arguments I can find that can be used to exclude directories.
Code:
IFS=', '; files=`ls -m`; for i in $files; do if [ -f $i ]; then echo $i; fi; done
That only uses ls as a command; however, he said his GSI thought he could do it without all that...
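For comparison, a sketch that skips ls entirely: let the shell expand the names and test each one, printing only regular files, one per line.
Code:
for f in *; do
    [ -f "$f" ] && printf '%s\n' "$f"
done
# or, with GNU find restricted to the current directory:
find . -maxdepth 1 -type f -printf '%f\n'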
I am using Red Hat Linux. I just wanted to know: is it possible to arrange or sort filenames numerically? I have saved several files with the following names: 1.png, 2.png, 3.png, 4.png ... 11.png, 12.png, and so on, but the containing folder sorts them alphabetically in the following manner: 11, 12, 13, ... 1, 2, 3, and so on.
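Two quick options from the shell, both part of GNU coreutils: sort -n orders the listing by numeric value, and ls -v does a "natural" version sort in one step.
Code:
ls | sort -n    # numeric sort of the listing
ls -v           # natural sort: 1.png, 2.png, ... 11.png, 12.png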
The (WD 320GB) drive has a single ext3 FS on it. It has had some problems in the past, but all were fixed with fsck -y. Now there are several directories with duplicate filenames. The files with duplicated names are hard links of each other, and the names are identical. I've run several diagnostics over them, looking for, e.g., non-printing characters in the name, but they are completely identical. Here are some examples:
[code]....
These are (obviously) from a directory of mp3s, but similar duplications occur throughout the fs - there are several thousand files affected. Some of the diagnostics were programmes I wrote that accessed the directory itself (through the dirent structure). I always thought duplicate filenames in the same directory were impossible in unix/linux; this appears to prove me wrong. Am I missing something? (Kernel version 2.4.20 with xfs extensions. The installation was originally Red Hat 7, but I've changed almost everything, so it's probably more accurate to call it a custom distro.)
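A small sketch for spotting the duplicated names and confirming the hard-link relationship; the directory path is a placeholder.
Code:
ls /path/to/dir | sort | uniq -d    # names that occur more than once
ls -i /path/to/dir | sort -k2       # inode numbers next to each name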