Server :: Move File System Content Between 2 Disks?
Oct 14, 2010
I have to move all the files and directories between two file systems. Is it good practice to move them all at once, or to copy them first and then remove the originals? How do I do this while preserving permissions and the directory structure?
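A sketch of the usual approach: copy with attributes preserved, verify, then delete the source (the mount points are placeholders):

Code:
# -a preserves permissions, ownership, timestamps and symlinks; -H keeps hard links
rsync -aHv /mnt/olddisk/ /mnt/newdisk/
# plain cp works too:
cp -a /mnt/olddisk/. /mnt/newdisk/
# remove the source only after verifying the copy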
If you have the value 100 in File1 and the value 5 in File2, how do you write a script to divide the 100 in File1 by the 5 in File2 in the Linux Bash shell? The operating system I am using is Ubuntu 10, and the objective is to write a script to accomplish this task.
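A minimal sketch, assuming each file contains a single number:

Code:
#!/bin/bash
# read one value from each file and divide
# (bash arithmetic is integer-only; pipe to bc for fractional results)
a=$(cat File1)
b=$(cat File2)
echo $(( a / b ))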
I am new to Linux and wget. What would be the syntax to use wget to move content from one local directory to an SVN repository (svn commit)? For instance, if I have a directory c:\dir1 and I want to move its content onto an SVN repo, is this possible using wget? If so, how do I get this done?
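wget only downloads files over HTTP/FTP; it cannot commit to a repository. Adding a local directory to SVN is done with the svn client itself (the URL and path below are placeholders):

Code:
svn import /path/to/dir1 http://svn.example.com/repo/dir1 -m "initial import"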
I want to move all the files inside the folder 'moving' to the folder 'public_html'. Which command should I use? I'm using CentOS 5 64-bit. Tell me the full command I should type in my SSH client so that all my files are moved from the 'moving' folder to 'public_html'.
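Assuming both folders live in your home directory (adjust the paths otherwise), mv with a glob does this; note that * does not match hidden dot-files:

Code:
mv ~/moving/* ~/public_html/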
I am new to Linux and not sure how to explain what I want to do, but I will give it a try. I have a system running CentOS 5.x on hardware that is dying. Is there an easy way to migrate the system over to a brand new machine that I recently purchased? I only have / and swap partitions, so nothing fancy. I have read that Linux is nothing like Windows when it comes to applications, and that I could simply drag and drop files onto the new server, but I suspect there is more involved than that. I hope I can just move the files over and the system will boot; however, I am worried about the new hardware on the new system. I am looking for recommendations. I am not sure if I have described this correctly, so just point out anything I need to change.
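One common approach is to copy the whole tree with rsync, excluding pseudo-filesystems, then fix up the bootloader. A sketch with a placeholder hostname and mount point; on new hardware you may also need to rebuild the initrd and adjust /etc/fstab:

Code:
# run on the old box; /mnt/target on newhost is the new root partition,
# already formatted and mounted
rsync -aHx --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp / root@newhost:/mnt/target/
# afterwards, chroot into /mnt/target on the new machine and reinstall GRUB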
I currently have a server with the default VolGroup00 that contains logical volumes for the root file system and swap, using logical volumes LogVol00 (root) and LogVol01 (swap). I need to take space from LogVol00 and move it to LogVol01. I have found documentation for increasing the swap and for resizing the logical volumes. However, the documentation and the man pages say that I have to reduce the size of the file system on the logical volume I am going to shrink. I have found documentation on resizing the logical volumes, but not the file systems.
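For ext3, the file system is shrunk with resize2fs before lvreduce, and the root FS has to be offline (boot rescue media). A sketch with hypothetical sizes; the key safety rule is that the resize2fs target must be no larger than the size you reduce the LV to:

Code:
# from a rescue environment, with the root FS unmounted
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00 18G      # shrink the file system first
lvreduce -L 20G /dev/VolGroup00/LogVol00    # then shrink the LV (still >= FS size)
resize2fs /dev/VolGroup00/LogVol00          # regrow the FS to fill the LV exactly
lvextend -L +2G /dev/VolGroup00/LogVol01    # hand the freed space to swap
mkswap /dev/VolGroup00/LogVol01             # re-initialize swap at the new size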
We have moved to Linux platforms now. Can I move my Windows file server to a Linux server? The main job of this Windows server is sharing folders with secure permissions for the respective groups and users. I know we can share folders on Linux. Before formatting my Windows server, I want to know: can we create new users who can access only a particular folder? Users should have no access other than to their own data folder. Our current scenario: users named a, b, c, d, and e, each with their own folder to access. Users a and c are in the same group and have common access to a shared folder. Please help me migrate this server; I have planned to migrate to Ubuntu Server 10.04 (64-bit).
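On Ubuntu this maps naturally onto Samba. A minimal smb.conf sketch for one private share and one group share, using the names from the scenario; it assumes the Unix users and group already exist and have been given Samba passwords with smbpasswd:

Code:
# private share: only user "a" may connect
[a_data]
   path = /srv/shares/a
   valid users = a
   read only = no

# group share: any member of group "groupac" (here, users a and c)
[common]
   path = /srv/shares/common
   valid users = @groupac
   read only = no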
How do I move a file over SSH to a server? I can copy it there, but I can't get it back via the terminal.
Code:
scp Downloads/test8 user@host:home/user/Documents
That copies it to the server; now I need to transfer it back. I know I can do it with Nautilus, but I want to learn the terminal. I have googled but can't find what I'm looking for.
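To go the other way, swap the source and destination arguments; the remote path here mirrors the one used above:

Code:
scp user@host:home/user/Documents/test8 ~/Downloads/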
I have a file, say abc.txt, with some text lines. Then I have a second file, say 123.txt, where at a certain point one can read "WORD". I would like to append the whole content of abc.txt (as it appears in abc.txt) on the line after "WORD".
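sed's r command does exactly this: it queues a file's contents to be inserted after each matching line:

Code:
# insert abc.txt after every line containing WORD
sed '/WORD/r abc.txt' 123.txt > 123.new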
I'm using RHEL 6. Using the file browser Nautilus 2.28.4, I can easily locate any file I'm interested in by its name. I'd like to use this file browser to locate a file based on its content, e.g. based on some word in a text file. It doesn't work for me that way. My question: does Nautilus support searching for a file based on its content, or only based on the name of the file itself?
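As far as I know, the stock search in that Nautilus version is name-based; full-text search needs an indexer such as Tracker or Beagle. From a shell, grep does a content search:

Code:
# list files under ~/docs whose contents contain "someword"
grep -rl 'someword' ~/docs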
My employer issues PDF files with everyone's work schedules. I copy the content and save it as plain text in a file called unformatted (I hope to be able to automate this step someday). I'm working on a sed script that reduces unformatted to only what I want to see and saves the result in a file I've named formatted. After that I have to manually copy formatted and save it with that day's date as a filename, e.g. 2011-02-25 or whatever day is scheduled in the PDF, for use on a mobile device (Nokia N900). I noticed that the date occurs on certain lines in the file, so I added a line like:
Code:
sed -n 's/^Date: \(201[1-9]\)\/\([0-1][0-9]\)\/\([0-3][0-9]\).*/\1-\2-\3/p' < unformatted > theDate

That creates a file theDate containing the date I wish to use as the filename for this particular instance. I would like to skip the file formatted altogether and have the sed script write to a new file using the content of theDate as the filename, but how do I make that happen? And of course it would be more elegant if I could skip the intermediate theDate file as well.
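One way to skip both intermediate files: capture the date with command substitution and use it directly as the output filename. A sketch, assuming the date appears on a line like "Date: 2011/02/25" and that format.sed holds whatever commands your existing script uses to produce formatted:

Code:
#!/bin/bash
d=$(sed -n 's/^Date: \(201[1-9]\)\/\([0-1][0-9]\)\/\([0-3][0-9]\).*/\1-\2-\3/p' unformatted)
sed -f format.sed unformatted > "$d"   # output lands in e.g. ./2011-02-25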
I wanted to copy one file to multiple new files. I have an idea to write a script to do the operation, but here I am looking for a single command that does it.
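tee handles this in one command, since it writes its input to every file named:

Code:
# make three copies of original.txt at once
tee copy1.txt copy2.txt copy3.txt < original.txt > /dev/null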
I have already implemented file-type filtering through Squid. Now I want to deal with the content-filtering side. What tools are available for that on Linux?
I'm planning to add a 1 TB SATA disk to my lovely file server running Ubuntu 10.10. What I want is to use this disk as additional storage for network users on both Windows and Ubuntu. I mean that when my Ubuntu server goes down (worst case), I can easily take the disk out of the Ubuntu machine and plug it into a Windows machine.
Can Windows read files from a home file server with an ext4 file system, or do I have to partition the drive so the server lives on ext4 and the files sit on an NTFS partition?
I would like to set up disclaimer-like content on my mail server, so that all users automatically get that content appended to their outgoing mail.
I am using find to search for .tgz files modified more than 7 days ago and delete them:

Code:
find /directory/ -iname 'backup*.tgz' -daystart -mtime +7 -exec rm -f {} \;

My problem is that find goes through everything below the directory as well and matches all of that content. I want it to match only the main tarballs and delete those older than 7 days.
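find itself never looks inside a tarball, but it does recurse into subdirectories by default, and an unquoted pattern can be expanded by the shell before find even runs. A sketch that restricts the search to the top level:

Code:
# -maxdepth 1 stops find from descending into subdirectories;
# the quoted pattern reaches find intact
find /directory/ -maxdepth 1 -iname 'backup*.tgz' -daystart -mtime +7 -exec rm -f {} \;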
I'm going to buy a new system, and I have 2 SATA hard disks from my old system. One is installed with a Linux OS, the other with Windows Vista. For the Linux hard disk, will I be able to simply use these 2 hard disks on the new system and boot up, retaining all my data? If not, how do I transfer the data from my old hard disks to my new hard disks? My old system is faulty (no signal to the monitor), so I can't just copy everything directly using, say, a portable hard disk.
Then I run cnf=`ifconfig`, which gives me the configuration of the NICs. After that I want to compare $cnf to see whether its value is listed in a file, and if it is, do things. There might also be something better to use than ifconfig, but it worked, so I just stuck with it. At first I had just one subnet, but now it's starting to grow, and I want to keep a list instead of having them all enumerated in the if-statement.
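A minimal sketch, assuming a hypothetical subnets.txt that holds one subnet prefix per line (e.g. 192.168.1.):

Code:
#!/bin/bash
cnf=$(ifconfig)
while read -r subnet; do
    if echo "$cnf" | grep -q "$subnet"; then
        echo "matched $subnet"    # do things here
    fi
done < subnets.txt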
I am somewhat stuck while providing a solution for the above problem. I have achieved the failover using keepalived, but I am not sure how we can replicate the data from one server to the other seamlessly and keep them in sync. My prime requirement for this project is that the end user should not notice the failover, and a replicated copy of the data should be available on the secondary as well.
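DRBD (block-level replication) is the usual tool for keeping a failover pair in sync transparently; a simpler, though not fully seamless, alternative is periodic rsync from the primary. A sketch, assuming SSH keys are in place and /data is the replicated tree:

Code:
# run from cron on the primary, e.g. every minute
rsync -az --delete /data/ backup-server:/data/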
I have a weird performance issue with a CentOS 5 machine running an NFS server and an RH8 client. I think the fact that it is an RH8 client should be downplayed; it just makes the performance degradation more visible. See test details below. The OS on the server is CentOS 5 x86_64, kernel 2.6.18-92.1.22.el5.
There is a 1 Gb connection between the machines, and the file used to test over NFS is a 1 GB file. First of all, I wanted to measure how the network alone performs under NFS. On the server side I ran "cat" on the 1 GB file to /dev/null; note that the disk read speed is about 98 MB/s. At this point the file system has the 1 GB file cached in memory. On the client side, a "cat" on the same file gives me a speed of about 113 MB/s. It seems then that the bottleneck in this instance is the network, and it is very close to nominal speed, so the network performance is really good. (BTW, I know the server served that file from cache because vmstat and iostat show no disk activity.)
The second test is reading from disk with no caching involved. On the server I flushed the 1 GB file from memory, for instance by reading another 5 GB file, and I repeated the same thing as above on the client (a cat on the 1 GB file). Now the server has to go to disk (vmstat and iostat show the disk activity). However, the performance now is about 20 MB/s; I was expecting something closer to 90 MB/s, since the read speed on the server in the first test showed 98 MB/s.
This second test was repeated for ext2, ext3, and xfs with no significant differences. A similar test using an RH8 NFS server and client gets me close to 60 MB/s for a 1 GB file not cached by the file system on the server. Since network speed and disk read speed are not the bottlenecks, what or where is the limiting factor?
I have given up (for now, at least) on the idea of a RAID solution, but I will still have 2-3 hard disks available for my workstation. If I choose to reinstall from scratch, can I have essentially two different homes?
I have implemented LVM to expand the /home partition. I would like to add 2 more disks to the system and use RAID 5 across those two disks plus the disk currently used for /home. Is this possible? If so, do I use type fd for the two new disks and type 8e for the existing LVM /home disk? Or do I use type fd for all of the RAID disks?
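For what it's worth, a common layout is: mark every partition that will join the array as type fd, build the md device, and make that the LVM physical volume (type 8e only matters for plain, non-RAID PVs). A sketch with hypothetical device and volume-group names; note that the existing /home data must be moved off its disk first, since building the array is destructive:

Code:
# sdb1/sdc1 are the new disks, sdd1 the old /home disk (data already moved off)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
pvcreate /dev/md0                 # the md device becomes the LVM physical volume
vgextend VolGroup00 /dev/md0      # hypothetical VG name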
We are trying to define an appliance for an application server, so I would like to know what the best file system type would be for this kind of use. Basically, our web applications use libraries of about 50 KB, and they create temp and log files no bigger than 3 MB.
I am trying to play embedded mp3 content in FF. When I click on the link to play the mp3, a new tab opens up and all I get is a grey screen. Does anybody out there have any ideas on this? I have run Autoten and installed all of the necessary codecs.
I've been puzzling over this for a while and have not been able to reach a solution, so I'm turning to your good selves for advice! I currently have two files; let's say they look like this: *File A*