I pressed the Browse button and selected the first picture, but I was unable to select all of the pictures at once. The same thing happened when I tried to attach them to an email.
In the GNOME environment I can select the first picture, hold down Shift, and click the last one, and everything in between is selected too. I expected the same behavior when selecting files for upload, but it didn't happen; I had to select and upload each picture one at a time.
I also tried putting them in a folder so I could upload the whole folder, but the folder could not be selected at all.
OS: Ubuntu 10.04 LTS running on the latest Oracle VirtualBox.
What works: I have opened an Ubuntu One account and I can log into it, but only by clicking 'Ubuntu One' in the top bar and then 'Account' in the prompt that appears. Shouldn't logging in happen automatically?
Once logged in, I am able to create new folders, and apparently able to enter them (they are empty, I guess). If I try to upload a file by clicking 'Upload file' in Ubuntu One, a prompt appears; I choose the file and click 'Continue'. The prompt says "uploading", but nothing happens. If I right-click a documents folder and choose 'Sync with Ubuntu One', it reports that it is syncing the folder, but again nothing happens.
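A way to see what the sync daemon is actually doing, assuming the standard ubuntuone-client tools are installed:
Code:
# report the daemon's connection and sync state
u1sdtool --status
# ask it to connect (this should also raise the credentials prompt)
u1sdtool --connect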
How does one transfer music from an MTP device to the Music folder, or import it into programs such as Rhythmbox, Banshee, etc.? I have tried using Gnomad2 to no avail, and I have exhausted myself searching the Ubuntu community forums and the wider web on this subject, reading through countless articles from users trying to get a Linux system to recognize or mount their MTP devices.
My system does recognize and mount my MTP device (a Creative Zen Micro), and the music players (Rhythmbox, Banshee, Amarok, VLC, XBMC) all access and play the music without any problems. I just want to transfer or copy the music from the MTP device to my system.
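One approach that may work, since the device is already recognized: mount it with the FUSE filesystem mtpfs and copy the files off with ordinary shell tools. The folder layout under the mount point depends on the device, so the Music path below is an assumption:
Code:
sudo apt-get install mtpfs
# mount the Zen on an empty directory
mkdir -p ~/zen
mtpfs ~/zen
# copy the music across, then unmount
cp -r ~/zen/Music/* ~/Music/
fusermount -u ~/zen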
I have set up a very basic Apache server to host my own website (I have not set up SQL, a database, or PHP yet), and I am trying to find out how to FTP or copy my website to it. I am creating the site in Windows and need to know how to transfer it to the server, preferably straight into the /var/www folder.
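A sketch of one route (usernames, hostnames, and paths here are placeholders): copy from Windows with pscp, part of the PuTTY suite, or a graphical client like WinSCP, into your home directory, then move the files into the root-owned /var/www on the server.
Code:
:: run on the Windows machine
pscp -r C:\mysite\* user@server:/home/user/mysite/
Code:
# run on the server, since /var/www is normally owned by root
sudo cp -r ~/mysite/* /var/www/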
I've got several large files sitting in my Linux hosting account that I need to upload to my S3 account. I don't want to download them first and then upload them to S3. Is there any way I can "upload" them from the Linux command-line environment, or reach S3 through a website that works with lynx?
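Yes, this can be done entirely from the remote shell. A sketch using the s3cmd tool (the bucket and file names are placeholders):
Code:
# one-time setup: prompts for your AWS access keys and stores them
s3cmd --configure
# upload straight from the hosting account to S3
s3cmd put bigfile.tar.gz s3://my-bucket/bigfile.tar.gz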
Whenever I started up, a window used to open asking me for my password. No problem. One day I clicked Cancel, and it has never come up again; now when I start up, the Ubuntu One logo switches to "updating folders" for a bit, then back to the default but with a little red X on it. I haven't been able to get it to connect. I can place files into the folder on my computer, but they don't show up in my Ubuntu One account folder either. Also, I heard that soon we'd be able to upload whole folders instead of going file by file; is this true?
Firefox grays out and doesn't unfreeze until the entire upload is complete, and I can't do anything else on the internet while this is happening. This didn't bother me until now, because I need to upload some fairly high-quality pictures to Flickr, and that takes about fifteen or twenty minutes. I'd like the pictures to simply upload in the background while I do other things, instead of completely freezing my internet use in the process. I don't have any problem downloading files quietly in the background, not even large ones, so why can't I upload files without everything freezing up? Is this normal, or is there a fix?
I am looking for a file-sharing program to install on my dedicated server that will allow me to upload large MP3 files and allow my clients to download them. These files are recordings of counseling sessions for families who are seeking help for their children.
What I am looking for is similar to the system this company uses [URL].
I'm having difficulties uploading files to a CentOS server with vsftpd. I have the exact same configuration on a Fedora 10 box, and there I have no problems...
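A first thing to compare between the two boxes is whether uploads are enabled at all in /etc/vsftpd/vsftpd.conf. A sketch of the settings that matter, assuming local, non-anonymous users:
Code:
# permit write commands (STOR, DELE, MKD, ...)
write_enable=YES
# allow local system accounts to log in
local_enable=YES
# permissions applied to files local users upload
local_umask=022
On CentOS it is also worth checking whether SELinux is denying the writes, since Fedora and CentOS ship different policy defaults.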
Regularly I find myself cloning a machine using rsync. I find it understandable, reliable, and fast (faster than dd), and I don't have to worry about different partition sizes and the like. However, I usually split my hard disk into a number of partitions:
Code:
/
/home
/usr
/var
When I start with a new, empty machine, I boot from a USB stick or live CD, and my new, empty hard disk becomes /dev/sdb. After creating the four partitions I have /dev/sdb1, /dev/sdb2, etc. My root directory is on the disk I booted from, usually /dev/sda. So, in order to access my newly created partitions, I mount them under the /mnt directory of my root:
Code:
mounted now    later
/mnt/sdb1      /
/mnt/sdb2      /home
/mnt/sdb3      /usr
/mnt/sdb4      /var
In other words, I now mount /dev/sdb1 on /mnt/sdb1, while after copying /dev/sdb1 will become my root directory, /dev/sdb2 my /home directory, etc. When I start the rsync process to copy the image from a remote machine, I have to copy all four partitions separately: first the root directory, excluding /home, /usr, and /var, then /home, then /usr, then /var, like this:
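The commands run along these lines (a reconstruction; the remote host is a placeholder):
Code:
# root filesystem, minus the trees that live on their own partitions
rsync -aHx --exclude=/home --exclude=/usr --exclude=/var root@remote:/ /mnt/sdb1/
rsync -aH root@remote:/home/ /mnt/sdb2/
rsync -aH root@remote:/usr/  /mnt/sdb3/
rsync -aH root@remote:/var/  /mnt/sdb4/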
That is a lot of typing and waiting. Sometimes I have a different partition scheme, so it is not really feasible to write a script to reuse every time. Now the question: is there a smarter way of mounting the newly formatted disk (/dev/sdb1, /dev/sdb2, etc.) in my root tree so I can perform the rsync copy in one pass, without all the excludes, while ensuring that the correct source partitions end up on the correct destination partitions?
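One approach that fits what is being asked: mount the new partitions nested inside a single tree first, so each subtree of a single rsync run lands on the right partition and no excludes are needed. A sketch using the device names from the post (the remote host is a placeholder):
Code:
mkdir -p /mnt/target
mount /dev/sdb1 /mnt/target
mkdir -p /mnt/target/home /mnt/target/usr /mnt/target/var
mount /dev/sdb2 /mnt/target/home
mount /dev/sdb3 /mnt/target/usr
mount /dev/sdb4 /mnt/target/var
# one pass, no excludes: the mounts route each tree to its partition
rsync -aH root@remote:/ /mnt/target/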
I often use the rpl command to make changes to multiple HTML files at once. For example:
Code:
rpl -R '<br />' '<br /><br />' mydirectory
However, I haven't been able to figure out how to change multiple lines. For example, let's say I want to change all occurrences of:
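I'm not certain rpl can match across lines; a common fallback is perl in slurp mode, where the pattern may span newlines. A sketch with placeholder patterns, since the example above was cut off:
Code:
# -0777 reads each file whole; -i edits in place; the \n lets the
# pattern cross a line boundary (here, collapsing doubled <br /> lines)
perl -0777 -pi -e 's{<br />\n<br />}{<br />}g' mydirectory/*.html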
My source folder contains 424.8 GB in 502,474 files. My destination folder was created fresh, and after the copy it contains 394.0 GB in 486,514 files. I am running grsync with root authority. The only options set are to preserve time, permissions, owner, and group, and to produce verbose output and transfer progress. There are no exceptions specified to skip any files.
I have run it again to give it a chance to get it right; same result. The source is in an rsnapshot folder, but this is the first backup, the original, containing only whole files, not links.
I'm trying to use rsync to back up some files, about half a TB. It's now in a state where it keeps sending the same files every time it runs. For example:
Code:
rsync -av /data/source/* user@host:/data/dest
sending incremental file list
source/file1.txt
source/file2.txt
I then verify that those files were copied over. Then the next time it runs, it does the same thing:
Code:
rsync -av /data/source/* user@host:/data/dest
sending incremental file list
source/file1.txt
source/file2.txt
Any idea why it's getting stuck on these files? I've tried wiping the whole destination directory out and starting over, but no luck.
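One way to find out is rsync's itemize flag, which prints, for each transferred file, which attribute triggered the transfer (the same command as above plus -i):
Code:
# -i explains why each file is re-sent: size, mtime, perms, owner, ...
rsync -avi /data/source/* user@host:/data/dest
If the output points at timestamps or permissions, the destination filesystem may simply be unable to store them exactly as sent.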
I have two directories: dirA, which contains N GB of data, and dirB, which is supposed to contain only the newest M GB of data from dirA. When files are added to dirA, they should also be added to dirB, while the oldest files in dirB should be deleted. Is that possible with rsync, or with any other software?
So I just used rsync to back up about 400 GB of data to my NAS. It took just over a day to complete, which is what I figured. I decided I should run rsync again to see how it would handle comparing the directories and only adding new files to the remote location. So I added a few new files and ran the backup again. Well, rsync is trying to do a complete copy of all of my original data, even though the files have not changed.
Is there a way I can tell rsync to compare the two directories, only add the new files, and delete the ones that are no longer in the original location?
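For reference, the standard way to make rsync mirror a tree, skipping unchanged files and pruning files deleted at the source (paths and host here are placeholders):
Code:
# -a preserves mtimes, so later runs skip unchanged files;
# --delete removes files on the NAS that no longer exist locally
rsync -av --delete /data/ user@nas:/backup/data/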
I am running Ubuntu 10.04 and am transferring roughly 62 GB of data libraries to my 84 GB /home partition. I'm using rsync because scp kept stalling and I had to restart it over and over. Things were going great until it began to show an error: "failed: No space left on device (28)". These are the things I've checked so far:
- The GUI's count of what has been copied: 5,149,552 files, taking up 30.2 GB.
- df -h says my /home partition is 56% used, with 33 GB available (42 GB used out of 78 GB). None of my other partitions are anywhere near 100%; /home is the most-used, and it's only a little over half full.
- du -s in the directory I'm copying into also returned 42206500.
Additionally, when I try to save screen captures, that sometimes fails with a "device full" error too. What's going on? Am I really out of space? If so, why doesn't df show it?
Is there a hidden temp file that rsync uses that just got too full? I did a little research on Wikipedia, and it says ext4 has a 64,000-subdirectory limit; could I somehow have hit that limit with all of these files? Solution: there were not enough inodes for the vast number of subdirectories on the drive. This wasn't an rsync problem at all, but a partition-configuration issue. To check inode usage: df -i. If you need more inodes, you have to back up the partition and re-create the filesystem with mke2fs (see man mke2fs), changing the relevant inode setting.
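A sketch of those two steps (the device name is a placeholder, and re-running mke2fs destroys everything on the partition, hence the backup first):
Code:
# IUse% at 100% means "No space left on device" even with free blocks
df -i
# after backing up, rebuild with more inodes: -i sets bytes-per-inode,
# so a smaller value yields more inodes for the same partition size
sudo mke2fs -t ext4 -i 4096 /dev/sda6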
Code:
rsync -r -v -e ssh root@nn.nn.nn.nn:/usr/local/websites/* /usr/local/websites
and each time I run it, it copies everything, all of the files. I thought rsync was only supposed to copy files that had been added or modified.
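One thing worth checking: -r alone does not preserve modification times, and rsync's quick check compares size plus mtime, so without -t every file looks changed on every run. A sketch of the same command with -a, which implies -r, -t, and the other preserve flags:
Code:
# -a = -rlptgoD; the -t part keeps mtimes, so unchanged files are skipped
rsync -av -e ssh root@nn.nn.nn.nn:/usr/local/websites/ /usr/local/websites/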
I have 15 web servers (on a private network) running RHEL and Apache, and I need to sync web files between them. Each server is accessible to the others via public key (with passphrases).
1) The main server is web1 (where the devs upload files initially). I could make all the other servers accessible from web1 without a password or passphrase and run rsync periodically to update the files between them. But security is an issue here, as all the servers would become easily accessible.
2) Run an rsync daemon on all the other servers (except web1) on a designated port, and run the rsync command from web1 to sync the files. This would do the job, but running a daemon on every server adds overhead, and making sure the daemon is running all the time, etc., are my concerns with this implementation.
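For reference, the per-server footprint of option 2 is small: one module in /etc/rsyncd.conf on each target (the names, paths, and web1 address below are placeholders):
Code:
# /etc/rsyncd.conf
[web]
    path = /var/www/html
    read only = no
    # only accept pushes from web1
    hosts allow = 10.0.0.1
and then, from web1:
Code:
rsync -av /var/www/html/ rsync://web2/web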
I keep some packages synchronized with my remote server, but I have a problem that is very hard to pin down: sometimes it works, sometimes it doesn't. When it doesn't work, the following happens:
During my backups I'm finding that rsync is copying all files, instead of just what's changed.
I'm rsyncing between two USB external hard drives. One is FAT32 and one is NTFS. I've examined some of the files and believe the difference is that a 1-second modtime discrepancy is somehow developing in some of the files.
Here's an example. These duplicity files were synced from /media/BACKUPHD (the NTFS drive) to /media/VIDEOHD (the FAT32 drive) only a few hours ago this morning. They have not been touched or changed since then, but that 1-second difference in their time stamps has appeared:
Code:
tim@localhost:~> stat /media/BACKUPHD/backups/duplicity/duplicity-full.20110107T145955Z.vol10.difftar.gpg
  File: `/media/BACKUPHD/backups/duplicity/duplicity-full.20110107T145955Z.vol10.difftar.gpg'
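For what it's worth, FAT stores modification times with only two-second resolution, so mtimes cannot round-trip exactly. rsync has a flag for precisely this situation; a sketch using the paths from the post:
Code:
# treat mtimes that differ by at most 1 second as unchanged
rsync -av --modify-window=1 /media/BACKUPHD/backups/ /media/VIDEOHD/backups/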