Inside my Linux system I am using VirtualBox to launch Windows XP. I have one shared folder between the two operating systems so I can share my files. Since I can't fully trust VirtualBox and Windows XP, can you suggest an easy way to take daily backups of one of the folders inside that shared folder? The files are mostly HTML files, so file size is not much of a problem (at least I think!). How can I take daily backups so I don't lose anything?
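One possible approach (a sketch, not a drop-in solution) is a small script driven by cron; the paths below are placeholders for wherever the shared folder is mounted and wherever the backups should live:

Code:
#!/bin/sh
# Daily backup of one folder inside the VirtualBox shared folder.
# SRC and DEST are hypothetical paths -- adjust to your setup.
SRC=/mnt/shared/mysite
DEST=$HOME/backups/mysite-$(date +%Y-%m-%d)

mkdir -p "$DEST"
rsync -a "$SRC"/ "$DEST"/

A crontab entry such as "0 2 * * * /home/user/bin/backup-shared.sh" would run it every day at 02:00; since the files are small HTML, keeping one dated copy per day should stay cheap.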
I have an HDD that cannot boot (probably a boot file problem). I want to get the data back from it, so I plugged it into another server and tried to copy the data from the failing HDD to that server. Now the server has rebooted into maintenance mode, because the HDD cannot pass the system check, and when I try to copy data it reports that the filesystem is read-only. Either: 1. How can I write data to the server in this state; or 2. How can I boot the server up normally, so that I can mount the HDD and copy the data onto the server?
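If the filesystem is merely mounted read-only by the maintenance shell (rather than the server's own disk failing), remounting it read-write is often enough; a hedged sketch, with the rescue device name as a placeholder:

Code:
# Remount the root filesystem read-write from the maintenance shell:
mount -o remount,rw /

# Then mount the failing disk and copy what you need off it:
mkdir -p /mnt/olddisk /root/rescued
mount /dev/sdb1 /mnt/olddisk
cp -a /mnt/olddisk/important-data /root/rescued/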
What would be the best way to automatically copy all of the data off a library of CDs to a specific folder on the computer? I was thinking of running a bash script, but I've run into a few snags figuring out the correct way to do it, mainly because the CD drive is mounted in a different folder under /media each time I insert a new disc. The mounting and unmounting process also causes its own problems, but I think that could be covered by a loop that checks /etc/mtab every few seconds or so.
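A rough sketch of such a loop, assuming the disc always appears somewhere under /media and that mount points contain no spaces (both assumptions worth checking):

Code:
#!/bin/bash
# Poll for a mounted disc, copy it off, then eject for the next one.
# DEST is a hypothetical destination folder.
DEST=/home/user/cd-archive

while true; do
    # The mount point changes per disc, so look it up on each pass.
    CDMOUNT=$(awk '$2 ~ /^\/media\// {print $2}' /etc/mtab | head -n 1)
    if [ -n "$CDMOUNT" ]; then
        mkdir -p "$DEST/$(basename "$CDMOUNT")"
        cp -r "$CDMOUNT"/. "$DEST/$(basename "$CDMOUNT")/"
        eject    # unmounts and opens the tray for the next disc
    fi
    sleep 5
done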
I want to run rsync on server A to copy all files from server B that are newer than 7 days (find . -mtime -7). I don't want to delete the files on server B.
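One way to express that, sketched under the assumption that server B is reachable over ssh and the tree of interest is /data (a placeholder): build the file list remotely with find, then feed it to rsync, which copies but never touches the source unless told to:

Code:
#!/bin/bash
# Run on server A. No --delete option is passed, so nothing on
# server B is ever removed.
ssh serverB 'cd /data && find . -type f -mtime -7' > /tmp/recent.lst
rsync -av --files-from=/tmp/recent.lst serverB:/data/ /backup/serverB/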
I am attempting to copy files from one server to an external USB drive on a second server. Both servers are running custom Red Hat Linux, kernel 2.6, and both are set up as Check Point SecurePlatform (one is a log server, the other a management server). I am trying to archive files from one (HP DL380 G5) to the second (HP DL380 G6). I am not able to archive directly to the external USB drive when it is connected to the HP DL380 G5 (the data gets corrupted and the drive switches to read-only access). The external USB drive has no issues when connected to the HP DL380 G6 server, hence my attempt to copy the files across servers.
When I attempt to use scp to copy files between the servers, I am prompted for the password. Once it is entered, the debug output shows that authentication is successful, but then no files are copied (see log below). I have tried searching for potential answers, but none have panned out. Unfortunately I can't yet post the full scp debug log output, as some of the text is treated as a URL, which I am not yet allowed to post; I have changed all the at-sign symbols to '(a)' to avoid the URL inference.
Output log from the scp attempt follows:

[Expert(a)server1]# scp -p -r -v /var/fromhere/* copyfile(a)server2:/media/disk/tohere/
Executing: program /usr/bin/ssh host server2, user copyfile, command scp -v -r -p -d -t /media/disk/tohere/
OpenSSH_3.6.1p2, SSH protocols 1.5/2.0, OpenSSL 0x0090707f
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Rhosts Authentication disabled, originating port will not be trusted.
debug1: Connecting to server2 [192.168.10.3] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/identity type -1
.....
debug1: Exit status 1
lost connection
[Expert(a)server1]#
I have ssh access to a Red Hat computer on my network which has Internet connectivity. I have downloaded a particular tarball to that computer using wget. Now how can I copy that file to my own system?
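Since ssh access is already in place, scp can pull the file down in one command, run from your own machine (host name and path below are placeholders):

Code:
scp user@redhat-box:/home/user/downloads/package.tar.gz .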
I'm connected remotely with PuTTY to a Linux server and I need to get the files from a directory on the server onto the hard drive of my laptop. I don't know what the secure shell command is to download them, or what exactly I need to do to get these .root files copied from the server onto my local hard drive.
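PuTTY ships with a command-line copy tool, pscp.exe, which speaks the same protocol as the PuTTY session; a sketch, run from a Windows command prompt on the laptop, with host, user, and paths as placeholders:

Code:
pscp -r user@server:/home/user/analysis/*.root C:\Data\root-files\

Some pscp versions reject the server-side wildcard unless the -unsafe flag is added; copying the whole directory with -r and no wildcard avoids that.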
I want to write a shell script which will copy files from a user's Mac machine to a UNIX server without prompting for a user ID and password. I do not want to use the ssh or rcp commands, as they prompt for a password.
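ssh itself can be made non-interactive with key-based authentication, which removes the password prompt rather than avoiding ssh; a sketch of the one-time setup on the Mac (server name and paths are placeholders):

Code:
# Generate a key pair with an empty passphrase, then install the
# public key on the server:
ssh-keygen -t rsa
ssh-copy-id user@unix-server   # or append ~/.ssh/id_rsa.pub to the
                               # server's ~/.ssh/authorized_keys by hand

# After that, scp in the script runs without prompting:
scp /Users/someuser/files/* user@unix-server:/destination/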
We are running RHEL 5.5. When I try to copy files from another server using scp, using the command below:

scp -r -q oracle@oracle2:/home/oracle/* .

After the password prompt it prints "access control disabled, client can connect any host", and after this it exits without copying any files.
I have bought a Lian Li EX-50 RAID case with eSATA; I have 5 1 TB disks in it, configured as RAID 3 and formatted ext4.
Until now everything was OK, but after upgrading to Fedora 14 (or 15, which I have also tried), when I try to copy files from or to it, the RAID collapses and rebuilds its data (I don't lose data).
I want to take the data off it so I can format it with ext3 or NTFS and use it with my server on the network, unless there is some solution that keeps the ext4 format.
I am unable to use the ncftp command. I have defined all the variables used. I have to copy the data to another server over FTPS. When I execute the command it throws this error:
ncftp -u : option unknown
I am pasting the complete script that I am executing on my server. Please, can someone tell me whether there is any mistake in my use of the ncftp command, or suggest some other commands to copy data to a remote server?
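For scripted uploads, ncftpput (from the same ncftp package) is often simpler than driving interactive ncftp; a sketch with placeholder names below. Note, though, that classic ncftp speaks plain FTP only, so if the server truly requires FTPS (FTP over TLS), a client such as lftp may be needed:

Code:
# Plain FTP upload with ncftpput:
ncftpput -u myuser -p mypass remote.server.com /remote/dir /local/data/*

# FTPS upload with lftp, if TLS is required:
lftp -u myuser,mypass -e 'set ftp:ssl-force true; mput -O /remote/dir /local/data/*; bye' remote.server.com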
I have to copy files from an HP-UX server to a SUSE Linux server. I have tried rcp, but it does not work. Can someone help me with the commands to use, or the files to be changed, in order to copy the files?
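If both machines have OpenSSH (standard on SUSE, and usually installed or installable on HP-UX), scp or sftp avoids the rhosts configuration that rcp depends on; host names and paths below are placeholders:

Code:
# From the HP-UX side, push the files over:
scp /data/export/*.dat user@suse-server:/incoming/

# Or the same with sftp in batch mode:
echo 'put /data/export/*.dat /incoming/' > /tmp/sftp.batch
sftp -b /tmp/sftp.batch user@suse-server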
I have a server that I want to migrate to a newer one. Both run CentOS, but the newer one has a more up-to-date kernel. I want to know whether it is possible simply to copy the contents of some directories exactly onto the new machine in order to transfer the server data (for example /var, /usr, /bin, /home, /etc). I have one website on my server, along with its MySQL database.
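Copying data directories usually works, but copying /usr, /bin, or /etc wholesale onto a machine with a different kernel and package set is risky, since those must match the installed packages. A hedged sketch of the safer route, run on the new server, with placeholder names:

Code:
# Pull user and web data across:
rsync -aHv old-server:/home/ /home/
rsync -aHv old-server:/var/www/ /var/www/

# Move MySQL with a dump rather than raw datadir files
# (-t gives the remote mysqldump a tty for its password prompt):
ssh -t old-server 'mysqldump -u root -p --all-databases > /tmp/all.sql'
scp old-server:/tmp/all.sql .
mysql -u root -p < all.sql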
I am working on Ubuntu 8.04.3, and I am having a problem: daily my server goes down at the same time, 4:00 PM. It seems the server is brought down by the "kswapd0" process, though I am not sure. When I run the top command, I get the output below.
The reason I ask is that we have a home server running Fedora 12. I would like to let anyone just plug in a flash drive and have Fedora automatically grab and copy all the files on that flash drive to a specific share.
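One possible mechanism is a udev rule that fires when a USB partition appears and hands the device to a copy script; this is a rough sketch only (the rule file, script path, and share location are all placeholders), and udev expects RUN programs to finish quickly, so a production version should background the copy:

Code:
# /etc/udev/rules.d/99-usb-copy.rules
ACTION=="add", SUBSYSTEMS=="usb", KERNEL=="sd[a-z][0-9]", RUN+="/usr/local/bin/usb-copy.sh /dev/%k"

Code:
#!/bin/sh
# /usr/local/bin/usb-copy.sh -- mount the partition, copy everything
# to the share, then unmount.
DEV="$1"
MNT=$(mktemp -d)
mount "$DEV" "$MNT" && cp -r "$MNT"/. /srv/share/incoming/ && umount "$MNT"
rmdir "$MNT"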
If I send data via scp, will scp automatically verify that the received data matches the sent data? I can do it manually with md5sum, but is that necessary?
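For context, the ssh transport that scp rides on already includes per-packet integrity checks (MACs), so corruption in transit is very unlikely; an explicit end-to-end check is still cheap if wanted (host and file names are placeholders):

Code:
# Compare checksums on both ends; reading via stdin makes the two
# md5sum outputs directly comparable:
[ "$(md5sum < backup.tar.gz)" = \
  "$(ssh remote-host 'md5sum < /dest/backup.tar.gz')" ] && echo OK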
I have a folder, and its contents, with the following permissions: 2750 (setgid bit). With this I ensure that a newly created file or folder inside that folder will inherit the group via setgid (SGID). The problem is that if I copy files into that folder, the copied files don't adopt the group. So I have to run chgrp -R thegroup nameofthefolder every time I copy or uncompress something into that folder. Is there any way to make this happen automatically (forcing the group even for copy and uncompress commands)?
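One commonly suggested approach (sketched here, assuming the filesystem has POSIX ACL support enabled) is a default ACL, which grants the group access to new files however they arrive, even when cp -p or an unpacker preserves a different owning group:

Code:
# Grant the group rwX on existing content, and make it the default
# for anything created inside later; the path is hypothetical,
# 'thegroup' is from the question:
setfacl -R    -m g:thegroup:rwX /path/to/nameofthefolder
setfacl -R -d -m g:thegroup:rwX /path/to/nameofthefolder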
I am running on a laptop, and cron.daily is set to run at 06:25, so I wonder what happens if my machine is not turned on at that time. At that rate it could also be off for the other periods as well (weekly, monthly). Is there a solution that will allow the jobs to run once the machine is back online after the appointed time, perhaps using a cron entry that runs every 15, 5, or 1 minutes?
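This is exactly the case anacron was designed for: it records when each job last ran and catches up shortly after boot, with no need for a frequent polling cron entry. A sketch of the usual /etc/anacrontab layout (fields are period in days, delay in minutes after boot, job id, command; details vary by distribution):

Code:
# period  delay  job-id        command
1         5      cron.daily    run-parts /etc/cron.daily
7         10     cron.weekly   run-parts /etc/cron.weekly
30        15     cron.monthly  run-parts /etc/cron.monthly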
I've been using the spreadsheet in OpenOffice (the Excel equivalent) with moderate success for the last year. Now I've just tried to open a new spreadsheet and I'm suddenly getting a highly condensed view: the cells are about 1 mm x 4 mm. If I expand them by selecting and dragging, then try to copy data from another spreadsheet onto the new one, the data shrinks back to the small scale, unreadable.
I've got a CentOS 5.4 box and the following disks connected:

# parted /dev/sda print
Model: ATA WDC WD1600BEKT-0 (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  107MB  107MB  primary  ext3         boot
 2      107MB   160GB  160GB  primary               lvm

# parted /dev/sdb print
Model: ATA WDC WD1200BEVT-0 (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  107MB  107MB  primary  ext3         boot
 2      107MB   120GB  120GB  primary  ext3         lvm

The OS, data and programs are on /dev/sda.
I'd like to copy the full set of directories and files to the newly added /dev/sdb which, as you can see, has less space. Also note that /dev/sda has only about 3.6 GB used, so it will no doubt fit easily onto /dev/sdb. How can I do the full copy and yet make /dev/sdb bootable just like /dev/sda (as if it had been cloned by Ghost)? I've looked at dd, but AFAIK it requires that the source and target devices be the same size.
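Instead of a block-level copy, a file-level clone plus a bootloader reinstall sidesteps the size mismatch; this is only a rough sketch, since the LVM on sda2 means the real device paths will differ (run from a live CD, with the target partitions as placeholders):

Code:
mkdir -p /mnt/target
mount /dev/sdb2 /mnt/target
mount /dev/sdb1 /mnt/target/boot

# -x keeps rsync on one filesystem, so /proc, /sys and other
# separate mounts are skipped:
rsync -aHx /      /mnt/target/
rsync -aHx /boot/ /mnt/target/boot/

# Adjust /mnt/target/etc/fstab for the new partitions, then:
grub-install --root-directory=/mnt/target /dev/sdb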
I have a hard drive with a bad PCB board. It stays on when not under heavy load, but it will restart if I copy too much data off it. So far I have had good luck with folders under 500 MB in size: I copy one folder to my good hard drive, wait five minutes, copy another, and so on.
If I mount the bad drive and try to copy a folder several GB in size, the copy will start and then stop as the hard drive restarts. When I try to mount the drive again, Linux says it can't read the superblock. I have several folders with over 30 GB of data, spread across many subfolders.
What I am looking for is a way of copying a folder in Linux such that the commands grab the whole folder in chunks, with a timed break in between.
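A sketch of that chunked copy as a loop over subfolders, with a sync and a pause between chunks (the source, destination, and pause length are placeholders; for a drive this fragile, GNU ddrescue is also worth a look as a more robust recovery tool):

Code:
#!/bin/bash
SRC=/mnt/baddrive/data
DEST=/mnt/gooddrive/rescue

for dir in "$SRC"/*/; do
    cp -a "$dir" "$DEST"/ && echo "copied: $dir"
    sync         # flush writes before resting the drive
    sleep 300    # five-minute break between folders
done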
I have a PC with openSUSE 11.1. Besides root there are two other users on the system. Now I have installed a new PC with openSUSE 11.2; only one user is set up on it so far. I installed the hard disk from the openSUSE 11.1 machine into the new PC as IDE primary slave, because I wanted to copy some files from the old system. openSUSE 11.2 has mounted the old disk automatically at /media/disk and /media/disk-1. The problem is that I can't find any files or directories belonging to the users; I could find only one file from root, in /media/disk-1/root/Desktop. Why can't I see the files? Does it have anything to do with UID or SUID?
Alright, I made a simple script that tarballs my SQL databases weekly and saves them to a backup hard drive. If possible, I would like to have the backups uploaded to a remote server for storage, but the script must delete the previous upload because of size constraints. I can only use rsync, scp, or sftp, and I haven't really used any of them before. Here's my basic tarball-backup script:
Code:
#!/bin/sh
# Dates the new tarballs of current builds.
DATE=`date +%m_%d_%Y`
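A possible continuation of the script (a sketch only; the host, user, and remote path are placeholders): upload the new tarball with scp, then delete last week's copy on the remote side to respect the size constraint:

Code:
LASTWEEK=`date -d "7 days ago" +%m_%d_%Y`
scp /backup/sql_$DATE.tar.gz backupuser@remote:/backups/
ssh backupuser@remote "rm -f /backups/sql_$LASTWEEK.tar.gz"

Key-based ssh authentication, as sketched earlier, keeps this from prompting for a password when run from cron.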
I'm used to using PuTTY on a Windows machine. With PuTTY, whatever you select is automatically on the clipboard without having to right-click and select copy, and right-click just pastes.