I've spent ages trying to build this and had a good look around for a way to do it. I have a directory tree which contains a set of folders and files. Some of the folders contain more than one file but most contain only a single one. I'm trying to move all of the files which are on their own in directories one level below the root into the root. E.g:
Root is: /volume3
Single file in a sub folder: /volume3/20110103/20110103.log
File should end up as: /volume3/20110103.log
I know how to flatten the entire structure fairly easily, but it's the conditional part that I can't figure out how to do.
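Here's the rough shape of the loop I'm imagining (untested; /volume3 is just the example root from above):
Code:
cd /volume3
for d in */; do
    # only move the file up if the directory holds exactly one file
    if [ "$(find "$d" -maxdepth 1 -type f | wc -l)" -eq 1 ]; then
        find "$d" -maxdepth 1 -type f -exec mv {} /volume3/ \;
    fi
done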
As I'm going to transfer a large number of data folders from one hard drive to another, I want to make sure that the transfer has not corrupted the data. How can I generate MD5 sums of an entire directory, including subdirectories, into a single file, and later verify that file against the data I've just transferred?
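This is the kind of two-step process I have in mind (untested; the source and destination paths are only placeholders):
Code:
cd /mnt/source
find . -type f -exec md5sum {} + > /tmp/checksums.md5   # one checksum line per file
# after the transfer, check everything on the destination against that list
cd /mnt/destination
md5sum -c /tmp/checksums.md5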
Relational databases usually have their data over in /var/lib/something. Users are in /home (with data in /var/www). How can I apply a single total disk space quota across all of these independent software systems (file systems, RDBMS, etc.)?
P.S. There's a bet going on around me as to just how awesome SU is. Let's see what you've got.
I am having issues getting into Ubuntu. Three hours ago I was using my Ubuntu system after entering my password at login. Now I cannot access the OS because it won't accept the password. Does anyone have any ideas why not? And can I reset the password without being in the OS?
I have generated a list of directories that I would like to use ls and grep on, but it is not working. I am using the command
Code:
cat directories.dat | xargs ls
and I get a whole lot of these errors:
Code:
ls: cannot access ./foo/bar/baz/grault/*: No such file or directory
but when I try the directories manually, one at a time, I find that they all exist and all have files in them. The same thing happens if I try to grep anything. What is going wrong?
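My guess (unverified) is that the lines in directories.dat contain shell globs like the * above, and xargs hands them to ls literally, with no shell around to expand them. Something along these lines might behave differently:
Code:
# run each line through a shell so the glob actually gets expanded (untested)
xargs -I{} sh -c 'ls {}' < directories.dat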
The current directory contains: a file called "original.txt", and many directories called "source_001", "source_002", "source_003" ... From the command line, how do you copy "original.txt" to "source_001" and "source_002" and "source_003" ...
The total number of these source directories is unknown, it changes every week.
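A simple loop like this is what I've been picturing (untested):
Code:
for d in source_*/; do
    cp original.txt "$d"
done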
I am in /var/www/upload and I want to extract a file with the tar command.
The output of tar xfvz /var/www/file.tar.gz is:
tar: /var/www/esempio.tar.bz2: Cannot open: No such file or directory tar: Error is not recoverable: exiting now tar: Child returned status 2 tar: Exiting with failure status due to previous errors
I am trying to write a script to pick the directory name from a list in a file. Here is a detailed picture. I have a file named LIST which contains the following, for example:
/apps/oracle/product/test1
/apps/oracle/product/test2
/apps/oracle/product/test3
I need a script that reads these lines from LIST and creates folders: /backup/date/test1 after reading the first line, /backup/date/test2 after reading the second line, /backup/date/test3, and so on.
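Something like this sketch is what I'm aiming for (untested; the backup root and date format are placeholders of mine):
Code:
backup_root=/backup/$(date +%Y%m%d)
while read -r line; do
    mkdir -p "$backup_root/$(basename "$line")"   # test1, test2, test3, ...
done < LIST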
I have a dir (pub_html) with 45 subdirs, and in each there is a file named file123.html. What command can I use to rename all files with this name, in all subdirs, to file456.html? I'm on openSUSE 11.3.
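Something along these lines is what I have in mind (untested), run from inside pub_html:
Code:
for f in */file123.html; do
    mv "$f" "${f%file123.html}file456.html"   # keep the directory part, swap the name
done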
I need to temporarily store a file containing sensitive data on a public server, in a secure way. I think that encrypting the whole file would be much more secure than creating a password-protected encrypted .zip, because that could be subject to brute-force attacks. Attacking a whole file of unknown format is harder, I think. I thought of something like the command:
Code:
$ programidontknow --encrypt mysensitive.file --output-file mumblerumble.file
(then the program asks interactively for a password)
$ ls
mysensitive.file mumblerumble.file
So I get one file that may look like junk. I tried to search for how to do it with GnuPG, but it seems that GnuPG needs a lot of configuration I don't want to do. I simply want to type the password once to get the file back. It doesn't need to retain any configuration for what I want to do. In a similar scenario, I would want to do this on a machine/account that is not mine.
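From what I can tell (not verified), gpg's symmetric mode may already do what I describe with no key setup, though it still creates a ~/.gnupg directory:
Code:
gpg --symmetric --output mumblerumble.file mysensitive.file   # prompts for a passphrase
# later, to get the plaintext back:
gpg --decrypt --output mysensitive.file mumblerumble.file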
I am in need of Linux help. I am at college and I need this backup/restore script to pass the final part of an assessment. I require a backup script that will not only back up but also restore files to the relevant directories. E.g. users are instructed to store all word-processor files in a directory named wp. So I need to create a backup directory and 3 directories within that, and some files within the 3 directories, and then back them up and restore them. I know I should do this myself; I have been trying to find and understand info for the last few days and came up with zero.
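This is the bare-bones shape I've pieced together so far (untested; the directory and file names are made up to match the example):
Code:
mkdir -p work/wp work/spreadsheets work/misc          # three dirs inside one tree
touch work/wp/letter.doc work/spreadsheets/budget.csv work/misc/notes.txt
tar czf backup.tar.gz work                            # back the tree up
mkdir -p restore && tar xzf backup.tar.gz -C restore  # restore it elsewhere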
I want to make a web server with multiple users allowed to log in through SFTP to a specific folder, www. Multiple users are added, let's say user1 and user2, all of them belonging to the www-data group. The www directory has owner www-data and group www-data.
I have used chmod -R 775 on the www folder, but after I create a folder test over SFTP (using FileZilla), the group of the created directory has only r and x permissions, and I am not able to log in with the second user, user2, and create a directory within www/test, due to the lack of w permission for the group.
I also tried using chmod 2775 on the www directory, but without luck. Can somebody explain how I can make a newly created directory inherit the parent directory's group permissions?
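For reference, this is the combination I've been experimenting with (unverified; the path and the internal-sftp umask line are assumptions on my part):
Code:
chgrp -R www-data /var/www     # everything owned by the shared group
chmod -R 2775 /var/www         # setgid bit: new entries inherit the group
# in /etc/ssh/sshd_config, force a group-writable umask for SFTP sessions:
#   Subsystem sftp internal-sftp -u 002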
I'm not sure if this is possible or even where to start. I assume that this can be done with an sh script using tar or similar. I have several very large zip files that contain images for all of the products in my online store. Each image is named after its 13-digit SKU (for example, 9987788000012.jpg). In order to import products into my store, all images are placed into a media directory. Unfortunately, there are over 100,000 images.
So I would like to break the images into sub-folders based on file name. For example, when I extract store_images.zip (or tar or whatever), my extract script would create directories (if they don't already exist) based on the first three digits of each image name, placing each image into the appropriate bottom-level directory. For example, "9987788000012.jpg" would be placed in the directory "media/9/9/8", with media as the root and "8" as the directory that holds any images that start with "998". Perhaps two levels of sub-folders would be less cumbersome. Assuming this requires a script, particularly since it involves scanning image names, creating folders, and saving images to specific directories, which language would serve my needs best? PHP? Has anyone had to do something similar?
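In bash, the sorting step I have in mind looks roughly like this (untested; it assumes the images have already been extracted into the current directory):
Code:
for img in *.jpg; do
    d1=${img:0:1}; d2=${img:1:1}; d3=${img:2:1}   # first three digits of the SKU
    mkdir -p "media/$d1/$d2/$d3"
    mv "$img" "media/$d1/$d2/$d3/"
done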
I want to copy a big file from my hard drive to a removable drive with rsync. For other reasons, the operation cannot complete in a single run, so I am trying to figure out how to use rsync to resume the file copy from where it left off last time.
I have tried the options --partial and --inplace, but together with --progress I found that rsync with --partial or --inplace actually starts from the beginning instead of from where it left off. Manually stopping rsync early and checking the size of the received file also confirms what I found.
But with --append, rsync does start from where it left off. I am confused, as the manpage suggests --partial, --inplace and --append all relate to resuming a copy. Can someone explain the difference? Why do --partial and --inplace not resume the copy? Is it true that to resume a copy, rsync has to be given --append?
Also, if a partial file was left behind by mv or cp rather than by rsync, will rsync --append correctly resume the file copy?
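For what it's worth, this is the invocation I've ended up testing with (the paths are just my example); --append-verify, where the rsync version supports it, is supposed to re-check the already-transferred part before appending:
Code:
rsync --append-verify --progress /home/me/bigfile.iso /media/removable/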
I have 10 text files (edited in vi). These files contain some system-related information. I need to combine the contents of all these files into a single file. The final file should contain the contents of all 10 files, and the output should be in a tabular format.
Is there any command in vi that I can use to create a table?
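Outside of vi, something like this is what I've been considering (untested; the file names are placeholders):
Code:
cat sysinfo_*.txt > combined.txt      # concatenate the 10 files
column -t combined.txt > report.txt   # align whitespace-separated fields into columns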
I have been using Linux for less than 6 months. Now I have come across a problem with PDF files in Linux. I want to join different pages from different PDF files into a single PDF file. I have come across software that does this, but it works using page numbers from the PDF files; I need to do this based on keywords on the different pages. For example, there are 3 PDF files.
Now I have to create a PDF file, languages.pdf, combining the topic "languages" from the three PDF files america.pdf, india.pdf and china.pdf. How can I do it? Is there any open-source software for doing this?
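The closest I've come up with is a two-tool approach (untested; the keyword and the page range below are only illustrative): pdfgrep to locate the pages that mention the topic, then pdftk to pull those pages out and join them.
Code:
pdfgrep -n "languages" america.pdf                     # prints the matching page numbers
pdftk america.pdf cat 12-14 output america_lang.pdf    # 12-14 is a hypothetical range
pdftk america_lang.pdf india_lang.pdf china_lang.pdf cat output languages.pdf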
I can do: mkdir messages and then: touch messages/hello.txt. Is there a command that will do both: create the directory if it doesn't exist, and then the empty file? Something like: touch -p messages/hello.txt
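Two variants I've seen suggested but haven't verified myself:
Code:
install -D -m 644 /dev/null messages/hello.txt   # creates the parent dirs and an empty file
mkdir -p messages && touch messages/hello.txt    # same result in one line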
I'm trying to convert all file extensions, for files in many sub-directories, from uppercase to lowercase. I have two problems: first, how to list the absolute paths to the files recursively over many sub-directories, for which so far I have this:
Code:
find ~/Photos -print
which would be fine, except it also prints the directories on their own when it finds them, rather than just the files with absolute paths. I couldn't find a switch for the "ls" command to do this, so I had to improvise with "find". And second, once I grab each absolute file name, how to change just the file extension rather than the entire name, which is all I can manage at the moment.
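Here's the rough direction I'm trying to go in (untested; it assumes reasonably simple file names with no embedded newlines):
Code:
find ~/Photos -type f -name '*.*' | while read -r f; do
    ext=${f##*.}                                      # text after the last dot
    lower=$(printf '%s' "$ext" | tr 'A-Z' 'a-z')
    [ "$ext" != "$lower" ] && mv "$f" "${f%.*}.$lower"
done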
I need to use wget (or curl or aget etc) to download a file to two different download destinations by downloading it in two halves:
First: bytes 0 to 490000 of the file. Second: bytes 490001 to 1000000 of the file.
I will be downloading these to separate download destinations and will merge them back together to speed up the download. The file is really large and my ISP is really slow, so I need help from friends to download this in parts (actually in multiple parts).
The question below is similar but not the same as my need: How to download parts of same file from different sources with curl/wget?
aget seems to download in parts, but I have no way of controlling precisely which part (either as a percentage or in bytes) I wish to download.
Just to be clear: I do not wish to download from multiple locations, I want to download to multiple locations. I also do not want to download multiple files (it is just a single file). I want to download parts of the same file, and I want to specify the parts that I need to download.
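With curl, range requests look like they might cover this (untested; the URL is a placeholder, and the server has to support byte ranges):
Code:
curl -r 0-490000       -o part1.bin http://example.com/bigfile   # first half
curl -r 490001-1000000 -o part2.bin http://example.com/bigfile   # second half
# later, on one machine, stitch the halves back together:
cat part1.bin part2.bin > bigfile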
I'd like to copy a file, say widgets/water.txt, to all subfolders in the folder widgets using a single command. So if the folder widgets has 10 subfolders like widgets/blue, widgets/green, etc. I'd like to copy water.txt to all of them with one command.
I tried the commands
cp water.txt ./*/water.txt
cp water.txt ./*/
However, these don't seem to work. The latter gives 'cp: omitting directory' errors.
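If cp really only accepts a single destination, a short loop may be the nearest thing to one command (untested):
Code:
for d in widgets/*/; do
    cp widgets/water.txt "$d"
done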
How can I take a series of values from the output of multiple grep commands and append them as a single row of a CSV file? I'm working in a Linux environment. The values from the grep output will be numeric.
The output should look like:
Each of these values will be obtained from a grep command piped into wc -l. Is it possible to update a single row of a CSV file? If so, please help me with the command to redirect the output into the CSV file.
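The kind of one-liner I'm picturing (untested; the patterns and log file name are just placeholders):
Code:
a=$(grep -c 'ERROR'   app.log)    # grep -c counts matching lines, same as grep | wc -l
b=$(grep -c 'WARNING' app.log)
c=$(grep -c 'INFO'    app.log)
echo "$a,$b,$c" >> report.csv     # appends the values as one comma-separated row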
I am using Xfce as the desktop environment and Mozilla Firefox as the web browser. Within the browser window, I do File > Save Page As. I save it, and the result is almost always foo.html plus a directory foo_files. But I think under KDE I could choose the format, one of them being something like "Single page" (only one file; the collection of .png etc. is embedded into that file). And this is the format I want Xfce (or Firefox) to use when saving to the hard disk.