I have an Ubuntu machine running an NFSv4 server and a PlugApps (Arch Linux) machine connecting as the client. The plugbox runs an rsync job to back up the home directory from the Ubuntu machine to a local USB HDD.
All of the files at the destination end up with owner nobody and group nobody.
How can I maintain the file owners? I have the UIDs and passwords synced between the two machines for both root and the user whose home directory is being backed up.
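From what I've read, NFSv4 maps owners through rpc.idmapd, and both ends need the same Domain configured or everything shows up as nobody; that's the first thing I plan to check. A sketch (the domain name is just an example):
Code:
# /etc/idmapd.conf on BOTH server and client - the Domain lines must match
[General]
Domain = home.lan
# then restart the idmapping service on each machine, e.g.
sudo service idmapd restart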
I have been bothered by a fairly small issue for some time now: I am trying to search (using find -name) for some .jpg files recursively. This is a Red Hat environment with bash.
I can get this job done, but I need to copy ALL of them into a separate folder AND keep the directory structure intact after copying.
For example: if a JPG file is found under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment I am simply putting all the files under /test/old/, and I can't work out how to get the trailing /new/1/ folder path created under /test/old/.
I understand this could well be done using a while or if/else loop, but if someone can just guide me with a hint, I would be really grateful.
I will complete the rest of the steps myself; I am asking here because I am still not comfortable with shell/bash scripts yet and plan to get really good at them over the next couple of months.
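In case it helps, this is the kind of one-liner I have in mind, assuming GNU cp (its --parents option recreates each file's relative directory path at the destination):
Code:
# run from the source root so the paths recreated under /test/old are relative
cd /home/usr && find . -name '*.jpg' -exec cp --parents {} /test/old/ \;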
I have a Samba share set up on a SUSE server and users connect to the share via Windows XP workstations. On SUSE, if I create a file, grant ownership to "administrator", and give it 770 permissions for example, then when someone modifies that file they become the owner as soon as they save it, and the permissions change to 470 (r--rwx---+) with an access control list. I want to maintain ownership of the file myself, and I don't understand why someone changing the file changes the permissions on it. This is driving me insane, because every time someone saves something I have to go in and chmod 770 it before they can save it again.
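From what I can tell, the usual way to pin this down is per-share options in smb.conf; a sketch of what I mean (the share name is an example, and I haven't tested this yet):
Code:
[shared]
   path = /srv/shared
   # force the mode bits on newly saved files
   create mask = 0770
   force create mode = 0770
   # make every saved file end up owned by one account
   force user = administrator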
I need to install a script into my GIMP folder, which is owned by root. I tried "chown my name usr/ share/ gimp2.0/scripts" in a terminal, but it tells me the folder does not exist. I know I'm missing something, but I haven't done this in a while, so I'm not sure what it is.
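For comparison, I believe the command should have a leading slash, no stray spaces, a single owner argument, and a slash between gimp and the version (the username here is a placeholder):
Code:
sudo chown myname /usr/share/gimp/2.0/scripts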
I installed Ubuntu from the alternate CD a few days ago to save space and resources on a very old laptop (install the command line, then add what I wanted). But I have struck an interesting problem with file permissions. Various programs like Synaptic, Leafpad, PCMan, and Banshee all require me to enter the root password to execute them (or a sudo command from the terminal). I want to change Synaptic from root ownership to sudo, and Leafpad etc. to execute without using the sudo command in the terminal. Could I get comments on these commands before I execute them, and on whether I am introducing a security problem, as I am still learning bash?
Code:
$ sudo chown sudo:sudo synaptic
Would I still be asked for my sudo password before being able to open Synaptic, as in standard Ubuntu, instead of the root password?
Code:
$ sudo chmod 777 leafpad pcman Banshee
Could all users then open these programs from the menu? I have my admin account and a general account which I use for everyday things like surfing the net and listening to music.
I've created a share using Nautilus on an Ubuntu 11.04 machine and can access it OK from both my Win7 PC and my partner's WinXP machine. We both have Ubuntu accounts and use those to access the share. When an Excel spreadsheet is saved on the WinXP machine the ownership changes, and it can then only be opened read-only on the Win7 machine. A further complication could be that the Win7 machine has OpenOffice and the WinXP machine has MS Office. I'm guessing that XP + Office doesn't really care about or see the permissions, but Win7 + OpenOffice does. Should I be using the share with the same username from both PCs? Is my whole approach misguided?
I have been VERY lucky and managed to restore from a formatted ext3 /home/ partition. I used testdisk to reset the original partition, which had had nothing done to it since formatting(!). However, some of the file permissions are altered and I cannot change them. I have tried "su chmod", and even temporarily enabled the root account itself and tried to alter the ownership/permissions from root 'proper', without it helping.
Here is an example of the output of ls -l:
Code:
drwxr-xr-x 2 martyn martyn 4096 (date) (time) sponsors
?-----S--T 63231 92820383 44090688 4286824785 (date) (time) order.xls
The first line looks like normally formed output and is indeed readable. The second line looks corrupted, and I don't have a clue how I can reclaim this file, or even whether it is possible. Should I count my blessings that most of my files are intact and leave those be?
It seems I had some kind of intrusion: I found 6 files whose ownership had changed to user 1035 and group 1035. I don't know how, but I need to change them back to their original owner (root), because one of them is the ls command and another is ifconfig. How can I revert them to their original state? I can't do it with chown.
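One possibility I've come across: if an intruder set the immutable attribute on the binaries, even root can't chown them until the flag is cleared. A sketch of the check (assuming the usual /bin/ls and /sbin/ifconfig locations):
Code:
# list the extended attributes; an 'i' flag means immutable
lsattr /bin/ls /sbin/ifconfig
# clear the flag, then the ownership can be changed back
chattr -i /bin/ls /sbin/ifconfig
chown root:root /bin/ls /sbin/ifconfig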
I am running a shell script as the user "redhatuser01", and this script creates a file in the home directory of another user, "redhatuser02" (/home/redhatuser02/sample.txt), but the ownership of this file is currently "redhatuser01". How can I change the ownership of this file to the user "redhatuser02"? (My constraint is that I cannot sudo as redhatuser02 and create the file.)
I am in the process of migrating my server to Ubuntu Server 11.04 after my Server 2003 installation suffered an HDD failure. All my data is on an NTFS drive (not ideal, but not much I can do about that). I can currently only read the disk as a user; root has ownership of everything on the disk. Whenever I try to change ownership of a file it doesn't bring up any errors, but running ls -l shows that nothing has actually changed.
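If it matters, my understanding is that NTFS has no Unix ownership, so on an ntfs-3g mount the owner is fixed by mount options rather than per file, which would explain the silent no-op. A sketch of an fstab line (device, mount point, and IDs are examples):
Code:
# /etc/fstab - present the whole NTFS disk as owned by uid/gid 1000
/dev/sdb1  /srv/data  ntfs-3g  uid=1000,gid=1000,umask=022  0  0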
Why would I need to be root to change the ownership of a file? Example: I'm logged in as dwadmin and I've created a file:
-rw-rw---- 1 dwadmin dgw 0 Jun 17 07:46 testing.txt
I want to change the ownership to another user, but am getting the following error:
Code:
$ chown 511 testing.txt
chown: changing ownership of `testing.txt': Operation not permitted
Is it possible to let users create directories or files, but have only the user "yat" able to delete them? Suppose the other users are geller, ross, and joe from the group FH, who have privileges. Whenever these users create a file or directory, they should not be able to delete it. Bottom line: group users should be able to create files but not delete them. By the way, using the sgid bit didn't help.
I've been given a project to design a program that will interface with a hardware device through the parallel port, and so far it's not going well. I managed to write the program and compile it, but when running it I get 'changing ownership of', then the file name, then 'operation not permitted'.
I export a folder via the NFS service and am able to mount this NFS share on another Linux machine. The folder has many files, and their ownership does not belong to a single user: there are files from more than 10 different users in the folder. I am trying to migrate all these files to another folder, but when I use "cp -a", the new files' ownership is all reset to the logged-on user.
Both the NFS server and the client machine have exactly the same set of users/groups, as the two machines refer to the same LDAP directory service. When I use "ls -al" to list the NFS share on the client, I can see the file ownership is exactly the same as on the NFS server. Is it possible to preserve the ownership of files while doing such a migration? "cp -a" fails to deliver the job.
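My working theory is that only root may give a file away with chown, and NFS squashes root on the client by default, so cp -a quietly falls back to the logged-on user. Two options I can see, sketched with example paths and hostname:
Code:
# option 1: run the copy as root directly on the NFS server
sudo cp -a /export/shared/src/. /export/shared/dst/

# option 2: let client root through - /etc/exports on the server
/export/shared  client.example.com(rw,sync,no_root_squash)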
I am trying to use ffmpeg to split a number of videos of different types (WMV, MPG, AVI). I do not want to change anything else, just split them into smaller chunks. The video is split, but the quality of the output file is terrible. I would describe it as "blocky" (I think the correct term is "pixelated"). When I make the player window (KMPlayer) much smaller, the problem naturally goes away.
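I suspect ffmpeg is re-encoding at a default bitrate, which would explain the blockiness. What I intend to try is stream copy, which cuts without re-encoding (the file name and times are examples):
Code:
# cut a 10-minute chunk starting at 20:00, copying the streams untouched
ffmpeg -ss 00:20:00 -t 00:10:00 -i input.wmv -vcodec copy -acodec copy chunk2.wmv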
I just joined the group to post this question, as I can't find a good answer for it. I have an RPM that has the following in the spec:
%defattr(-,someuser,someuser)
/opt/myapplication
When I go to install the converted package, the ownership of everything in /opt/myapplication is set to root:root. The RPM itself installs properly on RPM-based distros and maintains the correct ownership.
When I run alien -i --scripts --veryverbose myapplication.rpm as root, I can see it chmod'ing everything to 755. Has anyone else had this problem? I tried --fixperms as well and that did nothing.
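The workaround I'm considering, since dpkg doesn't seem to honour %defattr, is to restore the ownership in the converted package's postinst script (a sketch, assuming --scripts keeps the hook in place):
Code:
#!/bin/sh
# postinst: reapply the ownership the RPM spec declared
chown -R someuser:someuser /opt/myapplication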
I've set up an Ubuntu server at home with the intention of sharing files with Windows clients, so I've installed Samba. I have no security concerns, so I've allowed public access to the shares, and I can access them fine from all Windows machines. I also need to preserve the DOS attributes for files and folders using 'map hidden', 'map system', and 'map archive', which works great for files but not for folders. I've got a number of folders from my Windows box which I would like to keep hidden (for tidiness more than anything), but when I transfer them to the Samba share they become visible again, and I can't seem to control their visibility at all from Windows or from Ubuntu. Do I take it from this that Samba can only manage to maintain DOS flags on files and not on directories?
This is the relevant part of the smb.conf file: Code:
I have set up a home network using a modem/router, which my devices connect to via Ethernet and wireless. I have got it working, but I'm still not happy (stick with me...)!
I have the settings configured to use DHCP, so IP addresses for the different machines are automatically assigned by the modem/router (as I understand it). I then obtained these auto-assigned IPs by running ifconfig on each device, and tested the connections between the devices by pinging each one using those IPs (i.e. ping 192.168.2.2).
BUT I want to be able to use hostnames instead (i.e. ping dandelion), and the only way I can make this work is to add hosts and their corresponding IPs to the /etc/hosts file.
I have made it work this way, but doesn't this method defeat the point of DHCP, as I will now presumably have to manually maintain the /etc/hosts file on each device?
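For reference, the entries I'm maintaining by hand look like this (the second hostname is made up for illustration):
Code:
# /etc/hosts on each device
192.168.2.2   dandelion
192.168.2.3   foxglove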
We (like many people) have a QA environment and a production environment consisting of several servers. We want to be able to make sure, before we do any yum updates on the production machines, that we've tested everything in QA. Unfortunately, yum gets the latest software when you run it, so you could run it two hours apart and end up with different releases of some software. We need a solution.
I *think* what I have to do is create my own yum repository. There are a variety of articles on how to do that. But before I go through all that, I wanted to make sure I was on the right track. So the process would probably end up being:
1. Create a fresh repository
2. Upgrade QA from that repository, test, etc.
3. Upgrade production from that repository
Can someone verify this is the correct approach, or is there something easier? It also seems like step 1 is going to take a significant amount of time, plus it will continue to take a significant amount of time every cycle.
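For concreteness, here is my rough sketch of step 1, assuming the createrepo and yum-utils (for reposync) packages are available, with example paths:
Code:
# mirror the packages once, then freeze that snapshot
reposync -p /srv/repo/2011-06
createrepo /srv/repo/2011-06

# then point QA and production at the snapshot with a .repo file:
# [frozen]
# name=Frozen snapshot 2011-06
# baseurl=http://repohost/repo/2011-06
# enabled=1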
I have successfully backed up my files using a script to a remote server, with a log file output. However, the log file is appended to each time. I wish to have a different log file each time, named with the date and time, and have yet to figure that part out.
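I think the piece I'm missing is command substitution with date; a sketch (script and directory names are placeholders):
Code:
# build a unique log name per run, e.g. backup-20110614-020001.log
LOGFILE="$HOME/logs/backup-$(date +%Y%m%d-%H%M%S).log"
/home/me/backup.sh > "$LOGFILE" 2>&1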
Thought I'd post it here because it's more server related than desktop... I have a script that does:
[Code]....
This is used to sync my local development snapshot with the live web server. There has to be a more compact way of doing this. Can I combine some of the rsyncs? Can I make rsync set or keep the user and group affiliations? Can I exclude .* yet include .htaccess?
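Something like the following is what I'm hoping is possible: one rsync, with -a keeping owner and group (which I gather only sticks when the receiving side runs as root), and a filter that lets .htaccess through while excluding other dotfiles (host and paths are placeholders):
Code:
# include rules are tested before the exclude, so .htaccess survives the .* filter
rsync -av --delete --include='.htaccess' --exclude='.*' ~/dev/site/ user@webhost:/var/www/site/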
When I run rsync --recursive --times --perms --links --delete --exclude-from='Documents/exclude.txt' ./ /media/myusb/
where Documents/exclude.txt is
- /Downloads/
- /Desktop/books/
the files in those directories are still copied onto my USB.
And...
I used fetchmail to download all my Gmail emails. When I run rsync -ar --exclude-from='/home/xtheunknown0/Documents/exclude.txt' ./ /media/myusb/ I get the first image at url.
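Is a dry run the right way to debug which rules are matching? Something like:
Code:
# -n = dry run: list what would be copied without writing anything
rsync -arn -v --exclude-from="$HOME/Documents/exclude.txt" ./ /media/myusb/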
I have a backup .sh file that I have been using for a long time. It has always worked. Two days back I switched to a different PC, and now suddenly the script doesn't work. If I run it manually in the terminal it works, but when it executes with cron it doesn't copy any files to the backup destination: it starts but doesn't copy anything. Can someone help me work out why it works manually but not with cron?
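My first suspicion is cron's minimal environment (its PATH and working directory differ from a login shell's). A sketch of the crontab entry I plan to try, with everything spelled out and the output captured (paths are placeholders):
Code:
# crontab -e
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 2 * * * /home/me/bin/backup.sh >> /home/me/backup-cron.log 2>&1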
Using Ubuntu Server 10.04 LTS. I'm new to Ubuntu and testing rsync. I successfully copied 3TB of data from a Win7 machine to an MDADM RAID5 array, and all appears to be fine (I used a Windows app for the copy). I then deleted a 250GB folder on the RAID5 array and recopied the data using rsync, executed via a PuTTY session from a WinXP machine. The source was an eSATA-attached drive (the same drive used for the big 3TB copy) and the destination was the same RAID5 array. That copied just fine; I bit-verified it with a Win7 app. Perfect.
I then used the following rsync command to copy a single 26GB file from that same eSATA drive back to (what I intended to be) the RAID5 array:
Code:
neil@ANTECUBSV:/mnt$ rsync -r -a -v -e ssh --delete /mnt/disk1/Test/ /mnt/Test
sending incremental file list
created directory /mnt/Test
./
C_VOL-S300-b001.spf
sent 3020267622 bytes  received 34 bytes  50760800.94 bytes/sec
total size is 3019898880  speedup is 1.00
neil@ANTECUBSV:/mnt$ cd raid
Note that only about 3GB copied, and no error messages were posted to the PuTTY session. I made a mistake in the rsync command, creating the Test folder directly in the mount folder rather than on the RAID array as I intended. That is a little strange, yes, but I would not think it would cause a partial copy? The /mnt folder is on my system drive, which had about 34GB of available space before the copy, so it would comfortably have had 6GB or so left after. The eSATA disk is mounted as /mnt/disk1 and the RAID5 array as /mnt/raid.
I then recopied the file to the correct intended destination on the RAID5 array, which has about 400GB of free space (plenty):
Code:
neil@ANTECUBSV:/mnt/raid$ rsync -r -a -v -e ssh --delete /mnt/disk1/Test/ /mnt/raid/Test
sending incremental file list
./
deleting 2010-07-05 Backyard Birds/Thumbs.db
deleting 2010-07-05 Backyard Birds/
C_VOL-S300-b001.spf
sent 11105775462 bytes  received 34 bytes  50366328.78 bytes/sec
total size is 11104419840  speedup is 1.00
neil@ANTECUBSV:/mnt/raid$ df -h
Note that only about 11GB was copied, and this was confirmed with an ls -l command. Now I am copying the file to the correct destination on the RAID array, but it is still incomplete. I then copied the file back to the /mnt folder to see if the problem reproduces:
Code:
neil@ANTECUBSV:/mnt/raid$ rsync -r -a -v -e ssh --delete /mnt/disk1/Test/ /mnt/Test
sending incremental file list
created directory /mnt/Test
./
C_VOL-S300-b001.spf
sent 26327927554 bytes  received 34 bytes  56558383.65 bytes/sec
total size is 26324713984  speedup is 1.00
neil@ANTECUBSV:/mnt/raid$ cd /mnt/test
This time I got my full 26GB file. Why might I be getting inconsistent results? This is quite troubling, of course. I'd also be interested in a basic command-line Linux diff app (one that does directory comparison as well as bit-level checking), if one is available.
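On the last point, it may be that the stock tools already cover it; a sketch using the paths from above:
Code:
# recursive directory comparison
diff -r /mnt/disk1/Test /mnt/raid/Test
# bit-level comparison of the single file
cmp /mnt/disk1/Test/C_VOL-S300-b001.spf /mnt/raid/Test/C_VOL-S300-b001.spf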
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file includes all the data, but during the transfer it stops working. How can I troubleshoot the matter? And is there a better way than this to transfer a huge 14GB file over a network/VPN/WAN link? The speed is 1 Mbps; the rest of the files copy fine.
Code:
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
...
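What I am thinking of trying next is keeping partial transfers so an interrupted run can resume instead of starting over (host and path are placeholders):
Code:
# --partial keeps the partly transferred file; rerunning resumes from it
rsync -av --partial --progress --timeout=120 bkup_1.tar.gz user@remote:/backup/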
I have 2 different mounts. One points to a local Windows share (NTFS -> Samba) and the other points to a PPTP VPN connection share (I believe that is NTFS too). I use the "cifs" scheme in my fstab to mount these, and I use my Debian box to copy between the two mounts. I have started using rsync for that purpose, and I think it works fine for now. My main problem is that rsync seems unable to figure out whether the files in the source and target folders are the same when I use these mounts: most of the time rsync copies the same files and folders over and over again, even though they are already on the target.
I am wondering if there is a way to make this scheme work. Being on a slow VPN connection to a Windows box, rsync could save me a lot of time if it could recognise the files and folders that are the same on both ends.
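My guess is that rsync's quick check is being defeated by CIFS timestamp resolution and fake permissions; these options might be worth a try (paths are examples):
Code:
# compare size and mtime only, tolerating coarse CIFS timestamps
rsync -rtv --modify-window=2 /mnt/winshare/ /mnt/vpnshare/
# or, cruder: ignore times entirely and trust file sizes
rsync -rv --size-only /mnt/winshare/ /mnt/vpnshare/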
I've just noticed a small problem with my company file server. When making weekly backups to an external NTFS drive, I have noticed that file names with Thai characters are not getting backed up. I receive the error below:
Code:
rsync: recv_generator: failed to stat (to the file name...): Invalid or incomplete multibyte or wide character (84)
There are thousands of files on the server that contain Thai characters in their names, so how do I get around this problem so that it backs up all files, not just the ones with English characters? I read somewhere that each file would need to be converted to a different character set, but this would take years as there are so many files.
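If the names really are in a legacy Thai encoding, the conversion doesn't have to be file-by-file; convmv can rename recursively in bulk (a sketch, assuming TIS-620-encoded names and an example path; convmv only previews until you add --notest):
Code:
# preview the renames (dry run is convmv's default)
convmv -f tis-620 -t utf-8 -r /srv/share
# apply them for real
convmv -f tis-620 -t utf-8 -r --notest /srv/share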