General :: Move Large Amounts Of Music Within A File Structure?
Dec 20, 2009
I have a car stereo that reads a USB drive with all my music on it; however, to sort through the music it finds every folder containing music and then displays them all in one list. I find this interface annoying because, in order to sort the music by artist, I have to manually move the files out of the album folders by hand. That takes a long time for 11+ GB of music, so I was trying to use the Linux CLI to speed up the process, with a command like this:
Code:
mv /media/usb/music/*/*/* /media/usb/music/*/
but for some reason this moves all my music into the last folder alphabetically on my drive. The music is all pre-arranged like this: /media/usb/music/artist/album/song
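That behaviour is expected: when the destination wildcard expands to many directories, mv treats only the last argument as the target and moves everything there. One possible way to flatten album folders into their artist folders (a sketch only, assuming GNU find and mv and that the layout really is artist/album/song):
Code:
for artist in /media/usb/music/*/; do
    # move every song up out of its album folder into the artist folder (-n: never overwrite)
    find "$artist" -mindepth 2 -type f -exec mv -n -t "$artist" {} +
    # then remove the now-empty album directories
    find "$artist" -mindepth 1 -type d -empty -delete
done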
Are there any tools out there that let me select a bunch of data and burn it to multiple CDs or DVDs? I'm using k3b, but I have to manually split the selection into CD- or DVD-sized amounts.
I am new to Linux and not sure how to explain what I want to do, but I will give it a try. I have a system running CentOS 5.x on hardware that is dying. Is there an easy way to migrate the system over to a brand new machine that I recently purchased? I only have / and swap partitions, so nothing fancy. I have read that Linux is nothing like Windows when it comes to applications and that I could simply drag and drop files onto the new server, but I suspect there is more involved than that. I hope I can just move the files over and the system will boot; however, I am worried about the new hardware on the new system. I am looking for recommendations on this issue. I am not sure if I have described it correctly, so just point out anything I need to clarify.
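A frequently suggested starting point (only a sketch; the target mount point and exclude list here are assumptions, and you would still need to reinstall the bootloader, rebuild the initrd, and review /etc/fstab for the new hardware) is to copy the root filesystem with rsync while excluding the pseudo-filesystems:
Code:
rsync -av --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/mnt --exclude=/media --exclude=/lost+found / /mnt/newroot/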
I am trying to move a large number of files (over 30k files, 86 GB) to another HDD, but I get an "Argument list too long" error. I tried rsync, cp, and mv and still get the same error.
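That error comes from the shell expanding a wildcard into more arguments than one command line can hold. A workaround sketch (paths are hypothetical; -t assumes GNU mv) is to let find feed the files to mv in batches, or to point rsync at the directory itself instead of at a glob:
Code:
find /media/olddisk/files -maxdepth 1 -type f -exec mv -t /media/newdisk/files/ {} +
# or, with no wildcard for the shell to expand:
rsync -a /media/olddisk/files/ /media/newdisk/files/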
We've been trying to become a bit more serious about backups. It seems the better way to do MySQL backup is to use the binlog. However, that binlog is huge! We seem to produce something like 10 GB per month. I'd like to copy the backup somewhere off the server, as I don't feel there is much to be gained by just copying it to another location on the same server. I recently made a full backup which, after compression, amounted to 2.5 GB and took me 6.5 hours to copy to my own computer, so that solution doesn't seem practical for the binlog backup. Should we rent another server somewhere? Is it possible to find a server like that really cheaply? Or is there some other solution? What are other people's MySQL backup practices?
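One common pattern (a sketch only; the binlog path, file prefix, and remote host are made up, and it assumes binary logging is already enabled) is to rotate the binlog and then ship just the new log files incrementally, so each transfer is far smaller than a full dump:
Code:
mysqladmin flush-logs
rsync -az --partial /var/lib/mysql/mysql-bin.* backupuser@offsite.example.com:/backups/mysql/binlog/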
We're load testing some of our larger servers (16 GB+ RAM), and when memory starts to run low they kick off the OOM killer instead of swapping. I've checked swapon -s (which says we're using 0 bytes out of 16 GB of swap), I've checked swappiness (60), and I've tried upping the swap to 32 GB, all to no avail. If we pull some RAM and configure the box with 8 GB of physical RAM and 16 (or more) GB of swap, sure enough it dips into it and is more stable than a 16 GB box with 16 or 32 GB of swap.
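A diagnostic sketch rather than a fix: it can help to look at what the kernel logged when the OOM killer fired and how memory overcommit is configured, since those settings influence whether allocations swap, fail, or trigger the killer:
Code:
dmesg | grep -i -A 20 "out of memory"
sysctl vm.swappiness vm.overcommit_memory vm.overcommit_ratio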
I tried to move 2.7 TB of data from my /var/webroot/ partition (4.5 TB total in size). I left it to run overnight; this morning when I came to check, I saw that all space on the / partition is used up and no operations can be done because of the "no space left on device" message.
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p7     911G  911G     0 100% /
tmpfs                 7.9G     0  7.9G   0% /lib/init/rw
[code]....
I freed up several hundred MB from /, but the usage is still at 100% and I can't free up any more space or complete the transfer.
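Two diagnostic commands that often help here (a sketch assuming GNU du and that lsof is installed): the first shows which top-level directory on / is actually holding the space, and the second lists deleted-but-still-open files, which keep their space allocated until the process holding them is restarted:
Code:
du -xm --max-depth=1 / | sort -n
lsof +L1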
I have been bothered by a fairly small issue for some time now. I am trying to search (using find -name) for some .jpg files recursively. This is a Red Hat environment with bash.
I can get that job done, though I need to copy ALL of them into a separate folder, BUT I also need to keep the directory structure intact after copying.
For example, if I find a JPG file under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment, I am simply putting all files under /test/old/, and I can't work out how to get the trailing /new/1/ folder path created under /test/old/.
I understand this could well be done using a while or if-else loop, though if someone can just guide me with a hint, I would be really grateful.
I will complete the rest of the steps myself; I'm asking here since I am still not comfortable with shell/bash scripts yet and am planning to get really good at them over the next couple of months.
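One possibility that avoids an explicit loop (a sketch assuming GNU cp, and that /home/usr is the base you want stripped from the destination path): run find from that base directory and let cp --parents recreate the relative path under the target.
Code:
cd /home/usr && find . -type f -name '*.jpg' -exec cp --parents {} /test/old/ \;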
I have used Linux for a web server, but only installed a couple of items on top of the OS, and I would like to begin using Linux more often on my own home machine. However, I also like to keep things clean and organized and to know what is going on when I do something. I have some sample C programs for network programming; they came as a downloadable package with a readme giving the make/configure instructions to get it all set up, and then I can compile individual programs as needed.
I was wondering - when I run make and those first few commands - where does it all go? Will all the new activity be confined to the folder I am in, meaning I can easily remove it all by simply deleting the folder when I am done (I won't want all this sample networking stuff forever, you know). Or, does it get placed into other directories throughout the file system?
I know when installing some apps that files are placed in directories such as /usr/bin and the like. My assumption is this happens when running the make and make install commands; if so, how do we get rid of them when finished?
I just want to keep the system somewhat clean if possible, and the very least like to know what is being installed and to where - and have the option to remove it easily at a later date if I choose to do so.
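For a typical configure/make package (a sketch assuming the sample code uses the usual autotools layout; the prefix path is made up), plain make only writes inside the source directory, and make install is the step that copies files elsewhere, which you can both preview and contain:
Code:
./configure --prefix="$HOME/sandbox/netdemo"   # install everything under one throwaway directory
make                                           # builds only inside the source tree
make -n install                                # dry run: prints where files would be copied
make install                                   # copies them (here, under the chosen prefix)
rm -rf "$HOME/sandbox/netdemo"                 # deleting the prefix removes the installed files
Many autotools-based packages also support make uninstall from the same source tree, though not all do.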
jump into a Linux class in college with only 3 weeks left in the course. I thought I would be able to catch on, and go figure, it didn't exactly happen that way. I was given an assignment to do, and I am so far lost it isn't even funny. I need to create a directory structure, set up file security, create a step by step instruction manual on how to copy/delete said files, and create a guide to common Linux commands. How would I create these files in root and share them with the other users? and where can I find a list of common commands and their functions?
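As a starting sketch only (the group and directory names here are invented for illustration), a shared structure is usually built with mkdir, a common group, and group-writable permissions with the setgid bit so new files inherit the group; man -k and the individual man pages are the usual source for a guide to common commands.
Code:
groupadd students
mkdir -p /srv/classwork/{manuals,scripts}
chgrp -R students /srv/classwork
chmod -R 2775 /srv/classwork    # owner/group rwx, setgid on the directories so the group is inherited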
I am using RHEL 5. I have a very large text file which cannot be opened in vi. The file has some 8000 lines, and I need to view the lines from 5680 to 5690. How can I view these particular lines in a large file? What command and options do I need to use?
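Two equivalent ways to print just that range without opening the file in an editor (the file name is hypothetical):
Code:
sed -n '5680,5690p' bigfile.txt
# or
head -n 5690 bigfile.txt | tail -n 11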
After installing Slackware 13.1, I start up Amarok, and when I go in, configure the settings, and it starts to scan the folder, it either hangs at 10%, stops responding altogether, or crashes. The library is about 130 GB of MP3s. I do not know where to start on this one. Amarok version 2.3.0.
We recovered a large number of files from an HD I messed up. I am attempting to move large numbers of files of a given type, e.g. .txt or .jpg, into a folder by type so I can more easily sort through them.
Here are the commands I have mainly been trying with various edits:
Code:
Code:
So far the most common complaint I have gotten is "missing arguments to execdir".
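That find error usually means the command after -execdir was not terminated with \; or +. A minimal sketch of the sort-by-type idea (the directory names are hypothetical, and -t assumes GNU mv):
Code:
mkdir -p /recovered/jpg /recovered/txt
find /recovered/raw -type f -iname '*.jpg' -exec mv -t /recovered/jpg/ {} +
find /recovered/raw -type f -iname '*.txt' -exec mv -t /recovered/txt/ {} +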
I'm working with a dual-boot laptop running Ubuntu 10.0/Windows 7 and a Debian 5 VPS, although the OSes shouldn't have much impact on my question.
What I would like to do is create an HTML page that I can upload to my VPS which lists all of the files/folders on my local 2 TB hard drive (specifically media such as movies, music, and TV shows). The media obviously will not reside on the server, but I would like to at least have a list which will allow me to select, for instance, a band's folder so that it takes me to the albums in the directory below.
Ultimately, I'm looking for open directory browsing without actually having the media on my server. I have been attempting to create something to this effect using lynx; however, I'm not sure if it can be done with that command, or if it's even possible for that matter.
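One way to generate such a listing locally (a sketch assuming the tree package is installed; the media path, depth, and output file are made up) is tree's HTML output, which can then be uploaded to the VPS like any static page:
Code:
cd /media/2tb && tree -H '.' -L 3 --dirsfirst -T 'Media index' -o ~/media-index.html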
Trying to move from squeeze to unstable -- my downloads add up to some 700 MB or so, so I am trying to batch the upgrade. Some of the big fellas are openoffice and texlive, so I did:
sudo aptitude hold '?name(openoffice)'
sudo aptitude hold '?name(texlive)'
Is that fine, or are there some pitfalls to this?
I want to copy all files with the name XYZ* into one folder. The problem is that the files are in different subfolders, and not even the depth of the folder structure is the same for all of them. Luckily, at least each file has a unique name.
Of course, I thought about the cp command, but I guess the depth of the folder structure would need to be the same for that to work.
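find handles the recursion regardless of depth (paths are hypothetical; -t assumes GNU cp):
Code:
find /path/to/source -type f -name 'XYZ*' -exec cp -t /path/to/destination/ {} +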
I've recently installed 11.04 and am giving Banshee a shot. It seems pretty good, although it has crashed a few times. But when I import my music folders (about 900 GB, 175,000 items .. and growing..) it takes days. That's not such a big problem because it only needs to be indexed once, I assume, but the UI is very slow now, so clicking on an album can take several seconds to bring up the track list. Also, when typing into the search box there is a large delay before the matches are shown. Is this just to be expected for such a large DB? I recall Google Desktop indexing and returning results almost immediately back on Windows. Other ones I have tried include Rhythmbox, Amarok, and Songbird, but I have not found any of them to be stable and simple enough for my liking.
Can anyone recommend a good player? My essential requirement is fast and efficient indexing; tag support would be grand, but just going by filenames is OK too. Drag and drop into a play queue, like Rhythmbox has, would be nice.
I'm trying to copy a 6 GB file across from my laptop to an external USB drive, but it quits at about 4.2 GB every time with a "file size limit exceeded" error. I have checked the output of ulimit -a and there is no limit there on the file size. I'm using the Slax live CD for this, as it always gets the job done.
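It may be worth checking which filesystem the USB drive uses (the mount point below is a guess), since FAT32 caps a single file at 4 GiB regardless of ulimit settings:
Code:
df -T /media/usbdrive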
I want to move all my music from my Mac to my Ubuntu laptop and change the format to Ogg. I am looking for advice on the best (best = easy) way to do this. My music is in MP3 and AAC formats now.
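For the conversion half, a hedged sketch for the MP3s (assuming ffmpeg with Vorbis support is installed; it writes each .ogg next to the original, and bear in mind a lossy-to-lossy transcode loses some quality):
Code:
find ~/Music -type f -name '*.mp3' -exec sh -c 'ffmpeg -i "$1" -codec:a libvorbis -qscale:a 5 "${1%.mp3}.ogg"' _ {} \;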
I need to send large files from one Linux machine to another using cryptography. The sender machine knows the recipient's IP but not vice versa. I don't need strong cryptography and prefer higher-speed, less-secure solutions.
There is no problem with pre-sharing crypto keys, but I'd prefer not to deal with creating SSH users.
I'm thinking of HTTP PUT over TLS, but I have never had any experience with it and would prefer to hear what the possible solutions are. I know that it can listen as a daemon, but I don't know anything about cryptography, so piping through OpenSSL may be a solution.
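Since pre-shared keys are acceptable, one minimal sketch is to pipe a tar stream through openssl and netcat, with the recipient listening and the sender connecting to its IP. The port, key path, and file paths are made up; netcat option syntax differs between the traditional and OpenBSD variants; and plain openssl enc gives confidentiality but no integrity checking.
Code:
# on the recipient (listens):
nc -l -p 9000 | openssl enc -d -aes-256-cbc -pass file:shared.key | tar -xf -
# on the sender (knows the recipient's IP):
tar -cf - /path/to/files | openssl enc -aes-256-cbc -salt -pass file:shared.key | nc recipient_ip 9000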
I have a large number of log files on a Linux box that I need to cleanse of sensitive data before sending them to a third party. I have used the below script on previous occasions to perform this task, and it has worked brilliantly (the script was built with some help from here :-).
However, now one of our departments has sent me a CLIENT_FILE.txt with 425,000+ variables! I think I may have hit some internal limit. I have tried splitting the client file into four with around 100,000 variables in each, and this still doesn't work. I'm loath to keep splitting, though, as I have 20 directories with up to 190 files in each directory to run through. The more client files I make, the more passes I have to do.
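Without seeing the original script it is hard to say which limit is being hit, but if the substitutions are currently passed on the command line, one hedged workaround (assuming CLIENT_FILE.txt holds one plain string per line with no regex metacharacters) is to generate a sed script file and feed it with -f, which sidesteps command-line length limits and keeps it to one pass per log file:
Code:
awk '{print "s/" $0 "/REDACTED/g"}' CLIENT_FILE.txt > cleanse.sed
sed -i -f cleanse.sed /path/to/logs/*.log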
Is there a clever way to monitor the progress (as a percentage or progress marks) of copying a large file (using pv could be an option)? Like monitoring the progress of a copy command such as this:
Code:
cp linux.iso /tmp/
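pv does exactly this when used in place of cp (same file as the example above; the redirect writes the copy while pv prints the percentage, throughput, and ETA):
Code:
pv linux.iso > /tmp/linux.iso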
I work for a school consulting company. We helped a school deploy about 1,500 computers. The computers have Windows XP, but we have been using G4L for the restore partition on the drives. So far the software works great. We did, however, run into a problem in that many of the computers we deployed are missing the restore partition. The reason they are missing is long and convoluted and not really that important. What I have been charged to do is try to fix the restore partition problem. One solution I had, which I'm not even sure will work, was to back up the recovery file that G4L created to DVD and write a basic script to recreate the partition and then copy the file over. This process would need to be as automated as possible, since this disc will be inserted by the end users (the students). The backup file that G4L created is 5.9 GB, so it won't fit on just one disc, and dual-layer discs are too expensive to use for this project, so the file will either need to be compressed again (not sure if that's a good idea or not) or split across two DVDs.
I have searched the forums here and was not able to find anything to fix this problem. I was able to find some info on splitting files across two discs, but I'm not sure how to use that to fix my problem.
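The splitting itself is straightforward with split (the image name is hypothetical; 4300 MB leaves headroom on a 4.7 GB single-layer DVD), and the restore script can reassemble the pieces with cat before handing the image back to G4L:
Code:
split -b 4300m backup.img backup.img.part_
# later, after copying both parts off the DVDs:
cat backup.img.part_* > backup.img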
After untarring a MySQL file (very large), I'm trying to find where the file listed below has gone. I did a search on the file name: find / -name 'mysql-qui-tools-5.0' -print, but can't find the file.
-rwxr-xr-x root/root 9651820 2007-05-02 11:46:01 mysql-gui-tools-5.0/mysql-query-browser-bin
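Note that the tar listing spells the directory mysql-gui-tools-5.0 (with a g) while the search pattern uses qui, so a retry with the listed spelling and a wildcard may be all that is needed (errors are discarded so permission noise doesn't drown the result):
Code:
find / -iname 'mysql-gui-tools-5.0*' -print 2>/dev/null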
Am I able to move my Music folder to a different hard drive without moving my whole /home, or do I just make a folder there, call it Music, and change all the programs that use the default Music folder?
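A common approach (a sketch; the mount point is made up) is to move the folder and leave a symlink in its old place, so programs that expect ~/Music keep working without reconfiguration:
Code:
mv ~/Music /mnt/bigdrive/Music
ln -s /mnt/bigdrive/Music ~/Music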