I recently replaced my Windows fileserver with one running Ubuntu. One thing I've noticed (which is annoying) is that when I copy files between two Samba shares from my Windows machine, it copies the file through my PC to the new destination. With Windows shares it just did some sort of local copy (i.e. it took about 2 seconds) rather than 3-4 minutes. Is this the normal behaviour, and is there any way around it on Linux?
How can I copy a read-only file in Linux and make the copy writable with a single cp command (Ubuntu 10.04)? The --no-preserve and --preserve options seemed like good candidates, except that they appear to "and" the mode flags, while what I am looking for is something that will "or" them (add the +w mode bit).
More details: I have to import a repository from Git to Perforce. I want all Perforce depot files to be read-only (that is how Perforce was designed), while all other files that are derived/copied from depot files should be writable. Currently, if a Makefile copies a read-only file, the derived file is also read-only. This leads to build errors when cp tries to overwrite the read-only file a second time. Of course --force is a workaround here, but then the derived file is still read-only. I also do not want to mess with chmod after each cp command; I will do that only as a last resort.
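The closest thing I have found so far is telling cp to ignore the source mode entirely, so the copy picks up default permissions from my umask (which normally include owner write). I have not yet run this against all my Makefiles, and the paths are just examples:
# overwrite even a read-only target, and give the copy umask-based
# permissions instead of the depot file's read-only mode
cp -f --no-preserve=mode depot/config.mk build/config.mk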
I inherited a machine that's set up with 3 disks, managed by LVM (v2.02.46-RHEL5). I just got a new box with the same hardware configuration, and I would like to clone the disk setup by copying an LVM config file from the first box to the second, so that LVM on the new box sets up the disks according to that same configuration. Is there a way to do this?
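What I was picturing is something along the lines of dumping the volume group metadata on the old box and replaying it on the new one. I have not tried this yet, and the VG name, device and file paths below are only placeholders:
# on the old box: dump the volume group metadata to a file
vgcfgbackup -f /tmp/myvg.conf myvg
# on the new box: recreate the PV with the UUID recorded in that file,
# then restore the VG layout from it and activate it
pvcreate --uuid <uuid-from-backup-file> --restorefile /tmp/myvg.conf /dev/sdb1
vgcfgrestore -f /tmp/myvg.conf myvg
vgchange -ay myvg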
I am unable to copy files on my machine. For instance, if I have two folders on the desktop and I want to move one folder into another, I get the error message: "Not on the same file system". I am using SL 5.4.
I have been trying to scp a couple of files from my Ubuntu 10.10 machine to a Fedora 12 machine. Before today I did it without any problems; it always worked. Today, however, after the scp completes from my machine, the file on the other machine is zero bytes, an empty file. The only thing I can remember changing was the new kernel that was in the update I did today, but I don't think that would have changed how scp works.
I want to write a shell script that will copy files from a user's Mac machine to a UNIX server without prompting for a user ID and password. I do not want to use the ssh or rcp commands, as they prompt for a password.
I have some file tools on a Mint machine that I would rather not install on my Mac laptop, mainly because of the vastness of apt-get and the low risk of installation failure. Anyway, every so often I have a file that I want to process in place using some remote tool. Both machines can ssh right into each other, so I was figuring there must be some script or tool out there that would allow me to type something like remote [file] [tool & args] to send my file to the other machine, get it processed, then get it back.
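To illustrate, the kind of wrapper I am imagining would look roughly like this (completely untested; "mintbox" is just a placeholder hostname):
#!/bin/bash
# rough sketch of the "remote" wrapper I have in mind
# usage: remote <file> <tool and args>
file=$1; shift
base=$(basename "$file")
scp "$file" mintbox:/tmp/"$base"        # push the file to the other machine
ssh mintbox "cd /tmp && $* \"$base\""   # run the tool on it over there
scp mintbox:/tmp/"$base" "$file"        # pull the processed file back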
I need a command-line method of copying files from a Linux box to a Windows machine that is in a domain and requires authentication. I cannot install additional software or services on the Windows XP machine. I can install any software on the Linux machine. I've tried scp, but the connection failed; if my understanding is correct, that is because scp requires the target (the Windows machine) to be running an SSH service. Is there a command-line Linux utility that can pass a Windows domain user and password and then copy a file from the Linux machine to a share on the Windows machine?
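For what it's worth, the direction I am currently looking at is the Samba client tools, roughly like this (server, share, domain and file names are placeholders, and I have not verified the exact syntax):
# copy report.txt from the Linux box to a share on the Windows machine,
# authenticating with a domain account
smbclient //winbox/shared -U 'MYDOMAIN\myuser%mypassword' -c 'put report.txt report.txt'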
I want to learn how to build a Linux network from scratch that includes file and printer sharing and an intranet. I have an intermediate-level knowledge of Windows networking. Can anyone suggest a book or online tutorial that I can learn from? Now let me be clear: I am finding no shortage of tutorials on the web. However, too many are old or incomplete.
A little extra info: I am a teacher/network admin for a small private school with about 50 student computers (which I hope will become Linux machines in the future) and about 10 staff computers (mostly Windows laptops; I do not expect the staff to convert to Linux as readily). I currently do not have an intranet implemented.
I have been bothered by a fairly small issue for some time now. I am trying to search (using find -name) for some .jpg files recursively. This is a Red Hat environment with bash.
I can get this job done, though I need to copy ALL of them and put them in a separate folder, BUT I also need to keep the directory structure intact after copying.
For example, if I find a JPG file under /home/usr/new/1/, then the destination also needs to be /test/old/new/1/.
At the moment, I am simply putting all files under /test/old/, and I can't somehow get the trailing /new/1/ folder path created under /test/old/.
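To show what I mean, I imagine there is a one-liner roughly like this, though I am not sure about the exact options:
# run from the parent of the tree being searched, so the relative path
# "new/1/..." is what gets recreated under /test/old/
cd /home/usr
find new -iname '*.jpg' -exec cp --parents {} /test/old/ \;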
I understand this could well be done using a while or if/else loop, but if someone can just guide me with a hint, I would be really grateful.
I will complete the rest of the steps myself; I am asking here since I am not yet comfortable with shell/bash scripts and am planning to get really good at them over the next couple of months.
I have a situation in which the path of the file to be copied is written in another file, and I have to copy it using a shell script. I can use cp $(cat /home/robert/location.txt) /media/sda1 in a normal Linux shell, but I am using a buildroot script where $(cat /home/robert/location.txt) evaluates to nothing; it is just blank.
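One thing I was going to try next, in case the buildroot shell simply does not understand the $( ) form, is the older backtick substitution:
# same command, but using backquote command substitution,
# which even very old or minimal shells accept
cp `cat /home/robert/location.txt` /media/sda1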
I have an embedded Linux system (Debian 'Lenny') which boots from a microSD card. If I make a copy of a file on the flash file system (cp test test1) and then power off (disconnect power abruptly), then reconnect power and let the system come up, the file test1 is gone. How can I make sure that test1 does NOT disappear if power is lost? If I copy the file and then restart the system with the reboot command, the file test1 does not disappear.
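So far the only idea I have is to force the write cache out to the card straight after the copy, something like this (I do not know whether this alone is enough on my setup):
# copy, then flush all dirty buffers to the microSD before power can be cut
cp test test1 && sync
# or, alternatively, remount the filesystem with synchronous writes
# (slower, and harder on the flash)
mount -o remount,sync /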
I have a C function that creates a file and then makes a copy of it in the same directory, but something is wrong with the permissions or owners. The program runs as the root user. The file created by the program:
-rwxrwxrwx 1 root staff 199680 Oct 18 10:58 test
OK, but after copying, the permissions are not the same. The file after copying (with the new name) by the program:
-rw-r--r-- 1 root root 199680 Oct 18 10:58 test_copy
I want the copy to have the same full permissions. How can I do that?
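For what it's worth, from the shell the mode can be copied across afterwards with GNU chmod, but I want the program itself to create the copy correctly:
# copy the mode bits of the original onto the copy (GNU chmod)
chmod --reference=test test_copy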
I have installed Red Hat Linux 7.3 and selected to install all options, but after the system reboots it comes to a black screen requesting an intranet login username/password. I have the machine connected to my home network via Cat5 for DSL. I have reinstalled many times using different setups that I thought would resolve my issue.
When I do cp filename destinationfolder, nothing happens. I don't get any error messages or any indication that the copy didn't happen, but when I go to the destination folder and run ls, the file(s) are not there. I tried it with sudo as well and I get the same results. When I first did the copy it actually copied the file somewhere, but not where I wanted it: it copied it to a folder named Desktop. So I tried copying it from Desktop and again got the same results.
I am trying to take a backup of a file using ssh. I have written a command like the following, which should take a backup of vm.cfg as backup.cfg... How would I modify my command?
I'm using Ubuntu and I'm trying to copy a .iso file to a DVD. I have K3b at my disposal. My problem is that the DVD is not empty, and I'd like to overwrite it or wipe it before copying the new .iso, but K3b displays an error if I try to overwrite it: it says it doesn't have the necessary rights and proposes that I use K3bSetup. I have that too, but I don't know how to use it.
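I would also be happy doing it from a terminal if that sidesteps the rights problem. From what I have read so far, something like the following should blank a rewritable DVD and then write the image, but I have not dared to try it yet (/dev/sr0 and the image name are just examples):
# blank the rewritable DVD, then burn the image onto it
dvd+rw-format -blank /dev/sr0
growisofs -dvd-compat -Z /dev/sr0=ubuntu.iso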
I am developing a Web-based application and have some folders that will generally reside outside of the Web-accessible area of the server. However, since some people will not be able to store those folders outside of the "public_html" folder, I am looking to put a blank "index.php" inside every folder within that section of the application. To make things easier, I would like to know if there might be a way to recursively copy one file into every folder in a certain location.
In other words, is there a command that might do something like: Code: > cp -R index.php /home/user/public_html/source-files/* Basically, I want every directory inside of "source-files" to get a copy of "index.php". The directory hierarchy within "source-files" can go at least three or four levels deep, so the command would need to be recursive. I am looking for a command-line statement that I can type to perform this action.
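In case it clarifies what I am after, my best guess so far is to have find walk the directories and run cp once for each one, roughly like this (untested):
# drop a copy of index.php into every directory under source-files,
# including source-files itself
find /home/user/public_html/source-files -type d -exec cp index.php {} \;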
I'm trying to copy a 6 GB file from my laptop to an external USB drive, but it quits at about 4.2 GB every time with a "file size limit exceeded" error. I have checked the output of ulimit -a and there is no limit on the file size there. I'm using the Slax live CD for this as it always gets the job done.
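In case it is relevant, I can check what filesystem the external drive uses with something like the following (the mount point is just an example):
# show the filesystem type of the drive the copy is going to
df -T /mnt/usbdrive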
I am new to shell scripting, and I want to copy only the newest file in a directory to some other location. I am able to find the newest file using "ls -ltrh | tail -1", which shows it, but I don't know what to add to the shell script to copy that new file to the other location.
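What I have pieced together so far is something like this, though I am not sure it is the right way to do it (the two paths are placeholders):
#!/bin/bash
# copy the most recently modified file from $src to $dst
src=/path/to/source
dst=/path/to/destination
newest=$(ls -t "$src" | head -n 1)   # ls -t lists newest first
cp "$src/$newest" "$dst/"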
Firstly, I did search for this problem in these forums but didn't quite find what I was looking for, so I hope I don't get yelled at for making a duplicate post. I used rsync to back up my webroot to another *nix machine. du -hs gave me 1.3 G on the source machine and 1.1 G on the backup machine. I tried to compare individual files and noticed a trend: the files on the backup machine were always smaller than the files on the source machine. The source uses a SATA drive, the destination uses IDE. So this time I rsynced locally to another folder on the source machine. Same size anomaly. Then I did a simple cp file ~/file and got the same size anomaly, so it's not an rsync issue.
I took a file and ran md5sum on both the source file and the destination file. To my surprise, even though the file sizes were different, they had the same md5sum. Now, let it be known that the source machine is a production server and the directory I rsynced was in use, serving pages to the web. I googled this and came up with things like open file descriptors and holes. I don't understand this stuff and was wondering whether that is really what is going on here. What are those, if so? And is my backup copy 100% identical? There are thousands of files and I ran md5sum on only a couple. Can I take comfort that, when the time comes, I can restore from my backup without any problems?
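If it helps with diagnosing this, I understand I can compare a file's logical size with the disk blocks it actually occupies, which I gather is what would reveal holes/sparse regions; something like this is what I was planning to run (the file name is just an example):
# logical (apparent) size vs. blocks actually allocated on disk
du --apparent-size -h somefile.log
du -h somefile.log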