General :: BASH Script To Copy Contents Of Directory So Long As Average System Load Low?
May 26, 2010
For reasons I won't get into, I need to copy directories so long as the average system load is low. Can someone help me write a BASH script that will copy the contents of a directory, but check to make sure the average system load is below X before copying each file, and if not, wait Y seconds and try again?
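A minimal sketch of such a loop, assuming a GNU userland. The threshold, wait time, demo paths, and the retry cap (added so the sketch cannot hang forever) are all arbitrary choices to adjust:

```shell
#!/bin/sh
# X: copy only while the 1-minute load average is below this
MAX_LOAD=4.0
# Y: seconds to sleep before re-checking
WAIT=5
SRC=/tmp/loadcopy_src
DEST=/tmp/loadcopy_dest

# demo setup (stand-ins for the real directories)
mkdir -p "$SRC" "$DEST"
echo hello > "$SRC/a.txt"

for f in "$SRC"/*; do
    [ -e "$f" ] || continue
    tries=0
    while [ "$tries" -lt 10 ]; do   # cap retries so this sketch terminates
        load=$(cut -d' ' -f1 /proc/loadavg)
        # awk does the floating-point comparison that /bin/sh cannot
        awk -v l="$load" -v m="$MAX_LOAD" 'BEGIN{exit !(l+0 < m+0)}' && break
        tries=$((tries + 1))
        sleep "$WAIT"
    done
    cp -p "$f" "$DEST/"
done
```

In real use you would drop the retry cap (or raise it) so the copy genuinely waits out the busy periods.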
I would like to copy a directory into another directory. I don't want to copy the directory and all the files and directories under it, just the directory itself, as if it were a regular file. Doing cp -r target dest copies the directory and the entire hierarchy rooted in it, and I get an error if I do not include the -r option. (I am calling cp from within a C program.)
Is there a way to copy a directory (retaining the permissions and owners) without copying the contents of the directory?
If there is no such thing... then I need a way to determine if a target path is a file or a directory, and if it is a directory I need to make a new directory elsewhere that has the same name, owner and permissions.
Basically, I'm trying to write a script to copy 200 GB of files over a network to a new server, and I'd like to do it by generating a list with the find command. That way, I can migrate large chunks of the files over the course of a week, and on the day of the migration generate a new list of files that changed in the last week and copy just the changed files over, minimizing the downtime. However, the list will contain directories that I can't just use the 'cp' command on, because it would copy all the contents of the directory.
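One sketch of the "test whether it's a directory, then recreate it" idea. It assumes GNU stat (for -c); copying ownership only works as root, so the chown is allowed to fail here, and the demo paths are made up:

```shell
#!/bin/sh
# copy_dir_entry SRC DEST: recreate SRC at DEST with the same mode and
# timestamp (and owner, if root) but none of SRC's contents; plain copy
# for regular files.
copy_dir_entry() {
    src=$1; dest=$2
    if [ -d "$src" ]; then
        mkdir -p "$dest"
        chmod "$(stat -c %a "$src")" "$dest"                          # same permissions
        chown "$(stat -c %u:%g "$src")" "$dest" 2>/dev/null || true   # same owner (root only)
        touch -r "$src" "$dest"                                       # same timestamp
    else
        cp -p "$src" "$dest"
    fi
}

# demo: a directory whose contents must NOT come along
mkdir -p /tmp/cde_src/inner
chmod 750 /tmp/cde_src
copy_dir_entry /tmp/cde_src /tmp/cde_dest
```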
I've created other users on my machine. Now I want to add all of my home directory's contents and settings to the home directories of the other users. How can I do that? Can I do it from the /etc/skel directory?
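/etc/skel only seeds accounts at creation time, so existing users need an explicit copy. A sketch with temporary directories standing in for the two homes; for real users substitute /home/olduser and /home/newuser and run as root so the final chown works:

```shell
#!/bin/sh
SRC=/tmp/home_olduser
DEST=/tmp/home_newuser
mkdir -p "$SRC" "$DEST"
echo 'alias ll="ls -l"' > "$SRC/.bashrc"   # a sample dotfile

# "$SRC/." makes cp include dotfiles; -a preserves modes and times.
cp -a "$SRC/." "$DEST/"

# then, as root, hand ownership to the new user:
#   chown -R newuser:newuser /home/newuser
```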
A server of mine previously running Ubuntu 9.10 used to sit between 0.01 and 0.10 load average under the normal load of users using various server programs on it (mostly Apache with PHP scripts).
Now, after upgrading to 10.04 (which went smoothly for the most part), the load average is much greater under the same user workload, hovering between 0.1 and 0.3 under very light work and up to and over 1 regularly when more users are accessing the same scripts.
Are there any known issues that would cause greater usage of the same resources in lucid, or are there any ways I can trace what's causing the higher load? Downgrading or starting with a fresh install are last resorts, as there are a lot of customized options specifically set up for this server and I'd rather not go through backing them all up and restoring them after a complete wipe.
Is there a way to copy a directory without copying the contents, but preserving ownership, timestamp etc of that directory?
I've looked at the cp man page, but I don't think it supports this. I'm thinking one would have to write a script to gather the info, and then mkdir, chown and touch. Does this seem right?
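That gather-then-recreate approach does work; a sketch of it for a whole tree of directories, assuming GNU stat. chown is omitted because it needs root, and the demo paths are placeholders:

```shell
#!/bin/sh
SRC=/tmp/skel_src
DEST=/tmp/skel_dest
mkdir -p "$SRC/a/b"
chmod 750 "$SRC/a"

# -depth lists children before parents, so each parent's mode and
# mtime are applied after its children have been created.
( cd "$SRC" && find . -depth -type d -print ) | while IFS= read -r d; do
    mkdir -p "$DEST/$d"
    chmod "$(stat -c %a "$SRC/$d")" "$DEST/$d"   # copy the mode
    touch -r "$SRC/$d" "$DEST/$d"                # copy the mtime
done
```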
Is there a way to recreate all the folders from one directory to another without copying over the contents of the folder? I've been trying to do something like this,
Code: for i in `ls $X`; do mkdir $PATH/$i; done
Unfortunately the loop splits on the whitespace inside the folder names, so $i ends up holding word fragments rather than the actual folders.
$X contains only other folders so I don't have to worry about regular files, but any kind of more "advanced" solution would work.
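A fixed version of that loop: iterate with a glob instead of parsing ls output, and quote the variables, so folder names containing spaces survive. ($PATH is also avoided as a variable name, since assigning to it clobbers the shell's command search path.) The demo paths are made up:

```shell
#!/bin/sh
X=/tmp/mkdirs_src
DEST=/tmp/mkdirs_dest
mkdir -p "$X/one" "$X/two words" "$DEST"

for i in "$X"/*/; do                      # trailing / matches only directories
    mkdir -p "$DEST/$(basename "$i")"
done
```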
I have a server that I want to transfer to a newer one. Both of them run CentOS, but the newer one's kernel is more up to date. I wanted to know whether it is possible just to copy some directory contents exactly to the other machine to transfer the server data (for example /var, /usr, /bin, /home, /etc). I have one website on my server, with its MySQL database.
For the life of me I cannot figure out what I am doing wrong with scp to copy a directory and its contents from a remote machine to my local host. I have no issues getting a single file, but I would like to save time and get the whole folder in one command.
Here is what I have tried:
scp user@remoteMachine:/home/username/folderIwant user@localMachine:/folderIwant
This gives me a permission denied error; when I try again I receive a disconnect from localhost for too many authentication failures.
scp user@remoteMachine:home/username/folderIwant .
This says it cannot find the file or folder.
I am sure this is something easy that I can't remember. Searches give me local-to-remote instructions rather than remote-to-local, and adapting the local-to-remote suggestions I read has not worked.
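The usual fix: run scp on the local machine and pull with -r, giving only one remote address, e.g. `scp -r user@remoteMachine:/home/username/folderIwant .` (the second attempt above likely failed because a path after the colon with no leading slash is relative to the remote home directory, so "home/username/..." pointed at the wrong place). Below, the same -r syntax exercised purely locally as a runnable demo:

```shell
#!/bin/sh
# scp copies locally when no host is given; -r recurses into the folder.
mkdir -p /tmp/scp_src/sub
echo hi > /tmp/scp_src/sub/f.txt
scp -q -r /tmp/scp_src /tmp/scp_dest
```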
I read the man page for batch and it says that commands or scripts scheduled using the batch command will only execute when the load average goes below 0.8. As a newbie I hardly understand what that means. Does it mean that the system resources are busy? And since I want to see the output of the commands I have scheduled using batch, is there any way I can bring the system load average below 0.8?
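Roughly, the load average is the average number of processes either running or waiting for the CPU (and disk); you cannot force it below 0.8 other than by stopping work or simply waiting, though atd can be started with a different threshold via its -l option. The figures batch consults live in /proc/loadavg:

```shell
#!/bin/sh
# The first three fields are the 1-, 5- and 15-minute load averages.
cut -d' ' -f1-3 /proc/loadavg

# To see a batch job's output, redirect it inside the job, e.g.:
#   echo 'uptime > /tmp/batch.out 2>&1' | batch
# (otherwise the output is mailed to you when the job finally runs)
```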
How do I get the load average for each CPU core in a multi-core environment (e.g. a dual-core machine)? I tried using,
Code:
1. cat /proc/loadavg
2. uptime
3. top
But all of those commands give the load average for the whole system, not a particular CPU core. Is there a way to get the load average per CPU core (or any mechanism that can be used programmatically)?
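The load average is a single system-wide figure by definition, so there is no per-core load average; what can be measured per core is utilization. /proc/stat exposes per-cpu tick counters (cpu0, cpu1, ...), and sampling them twice and diffing gives busy% per core; the sysstat package's `mpstat -P ALL` reports the same thing ready-made. A sketch:

```shell
#!/bin/sh
# Take two one-second-apart samples of the per-cpu counter lines.
grep '^cpu[0-9]' /proc/stat > /tmp/stat1
sleep 1
grep '^cpu[0-9]' /proc/stat > /tmp/stat2

awk '
    # first sample: remember idle ticks ($5) and total ticks per cpu
    FNR == NR { idle1[$1] = $5; for (i = 2; i <= NF; i++) tot1[$1] += $i; next }
    # second sample: diff against the first to get busy% over the interval
    {
        tot = 0; for (i = 2; i <= NF; i++) tot += $i
        busy = 100 * (1 - ($5 - idle1[$1]) / (tot - tot1[$1]))
        printf "%s %.1f%% busy\n", $1, busy
    }
' /tmp/stat1 /tmp/stat2
```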
How do I reduce the load average? I have two Linux machines with the same configuration and the same processes running on both. But on one machine the load average is more than 3, while on the other it is 0.5 (when I use the top command). The machine with the higher load average is slow when I run applications on it.
I wrote a script to extract and get the names of the *.gz files in a folder. Since I started running that script every 10 minutes, the load average on my server has increased to more than 10. I checked with 'top' and it showed many processes in the D state.
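Many "D" (uninterruptible sleep) processes usually mean jobs stuck waiting on disk I/O, so the load spike is likely I/O pressure rather than CPU. Running the extraction at the lowest CPU and I/O priority softens the impact; ionice needs an I/O scheduler that supports it (e.g. CFQ). The gzip call below is just a runnable stand-in for the real extraction script:

```shell
#!/bin/sh
# demo: run a gzip listing under the lowest CPU priority
echo demo > /tmp/nice_demo.txt
gzip -f /tmp/nice_demo.txt
nice -n 19 gzip -l /tmp/nice_demo.txt.gz > /tmp/nice_out.txt

# real use (path is a placeholder for your script):
#   nice -n 19 ionice -c3 /path/to/extract-script.sh
```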
I have high load on my server and my investigation shows nothing (so I believe that my investigation is wrong). The load average at this moment is 10.13, 9.47, 8.24. Details below:
- The disk utilization (across all disks) is near 0, according to iostat.
- There are no blocked processes, according to vmstat.
- I have two dual-core processors, so the maximum load average should be something around 4.
- The server always has a load average above 8, over all time intervals.
By the way, my OS is RHEL AS release 4 (Nahant Update 7); kernel: Linux 2.6.9-78.ELhugemem #1 SMP i686 i686 i386 GNU/Linux.
We have a server that is running RHEL4 that occasionally spikes in load average above 10 and we have no idea what is causing it. We would like to know if there are any free tools or a script that when the load average hits a certain point it will trigger the system to start logging the processes to see what is happening. Usually by the time we get logged into the system the load average is on its way down. If someone has a better idea please let me know.
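A minimal watchdog sketch for that: drop it in cron to run every minute, and when the 1-minute load average reaches THRESHOLD it appends a timestamped process snapshot to LOG. THRESHOLD is set to 0 here so the demo always fires; something like 10 fits the case described, and the log path is an arbitrary choice:

```shell
#!/bin/sh
THRESHOLD=0
LOG=/tmp/highload.log

load=$(cut -d' ' -f1 /proc/loadavg)
# awk handles the floating-point comparison
if awk -v l="$load" -v t="$THRESHOLD" 'BEGIN{exit !(l+0 >= t+0)}'; then
    {
        date
        uptime
        ps auxww --sort=-%cpu | head -25   # top CPU consumers right now
        echo '----'
    } >> "$LOG"
fi
```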
Is it possible to use the keyboard in order to select some text in the terminal windows that is not in the currently edited line? (for example, in order to copy part of previous command output).
I've been looking high and low for a utility program or perl script or something that can take a linux directory structure as input and convert it to MS-DOS 8.3 directory structure.
The purpose of this is to conform to the path format that is expected on my rather old Creative Zen Neeon MP3 player for m3u play lists.
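A naive converter for a single name, as a starting point: uppercase, strip characters DOS disallows, truncate the stem to 8 characters and the extension to 3. Real 8.3 generation also appends ~1, ~2, ... to resolve collisions, which this sketch does not attempt:

```shell
#!/bin/sh
to_83() {
    name=$(basename "$1")
    stem=${name%.*}
    ext=${name##*.}
    [ "$stem" = "$name" ] && ext=''   # name had no extension at all
    # uppercase, keep only DOS-safe characters, then truncate
    stem=$(printf %s "$stem" | tr 'a-z' 'A-Z' | tr -cd 'A-Z0-9_' | cut -c1-8)
    ext=$(printf %s "$ext" | tr 'a-z' 'A-Z' | tr -cd 'A-Z0-9' | cut -c1-3)
    if [ -n "$ext" ]; then printf '%s.%s\n' "$stem" "$ext"; else printf '%s\n' "$stem"; fi
}

to_83 "My favourite song.mp3" > /tmp/name83.txt   # MYFAVOUR.MP3
```

Applying it across a tree (and rewriting the paths inside the .m3u files to match) would build on this per-name function.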
I have a directory and sub-directories (4 or 5 levels deep). There are several file types in them (*.mp3, *.wma, *.jpg, etc). I would like to copy the whole directory tree to another location recursively, but only the *.mp3 files.
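One way, assuming GNU cp: find the *.mp3 files and let `cp --parents` rebuild each file's path under the destination. rsync users can get the same with `rsync -a --include='*/' --include='*.mp3' --exclude='*' src/ dest/`. The demo tree stands in for the real music directory:

```shell
#!/bin/sh
SRC=/tmp/music_src
DEST=/tmp/music_dest
mkdir -p "$SRC/album" "$DEST"
touch "$SRC/album/song.mp3" "$SRC/album/cover.jpg"

# --parents recreates ./album under $DEST; -p preserves times and modes
( cd "$SRC" && find . -type f -name '*.mp3' -exec cp --parents -p {} "$DEST" \; )
```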
How do I copy the contents of 3 directories into 1 directory with crontab? Suppose I want to copy /home/ftp1/*, /home/ftp2/* and /home/ftp3/* to /ftpdata (three FTP users' data into one folder, every minute, via crontab), so it goes like:
*/1 * * * * /bin/cp -rf ??? /ftpdata
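One manual run of that copy as a sketch, with temp dirs standing in for the real ones. Under cron the entry would be `*/1 * * * * /bin/cp -rf /home/ftp1/* /home/ftp2/* /home/ftp3/* /ftpdata/` (cron hands the line to sh, which expands the globs). Beware that this flattens all three trees into one directory, so files with the same name from different FTP dirs overwrite each other:

```shell
#!/bin/sh
mkdir -p /tmp/ftp1 /tmp/ftp2 /tmp/ftp3 /tmp/ftpdata
touch /tmp/ftp1/a /tmp/ftp2/b /tmp/ftp3/c

# all three sources, one destination (the last argument)
cp -rf /tmp/ftp1/* /tmp/ftp2/* /tmp/ftp3/* /tmp/ftpdata/
```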
When I open one of the web browsers I use and try to load a web site, it takes too long to respond and sometimes doesn't load the website at all. I have tried Firefox, Epiphany and Opera, all with the same results. I am sure that this is not a problem with my internet connection, because I don't have these problems in Windows. Also, the network manager connection settings are correct.
I also tried booting from the old kernel (2.6.32.24), but with no success; the problem is the same as with the 2.6.32.25 kernel. The strange thing is that I can download packages from Synaptic at full speed. One last thing: I recently installed the recommended updates from the update manager, but I don't remember what was updated.
I was struggling to find the information, so I thought it would be easier to ask here. What does the load average as reported by top mean? To me those are 3 mysterious values, which most likely refer to some average CPU usage, and that is all I know about it (not sure if it's true). What is the maximum value for this parameter (I guess the minimum is 0.00) and what does it mean?
The load average is almost 1.06, but the CPU is not 100% utilized... I am just wondering why the load average is still 1.06 in that case (I monitored the CPU for a long time, and it never exceeded 40%). Can anyone explain the reason behind this? Also, is the system over-utilized in this case?
I am running Red Hat Enterprise Linux 5 Server and facing a problem with very, very high server load (the load average goes up to 60-70), due to which the server hangs.
I have two servers. One is in production (let's call it the OLD ONE) and the other (let's call it the NEW ONE) is in tests to replace the OLD ONE. This is the basic hardware of each one (I can post more detailed info if you need, but besides the errors in dmesg, look at the L2 cache of the NEW ONE):
Old one: 2 quad-core processors that Linux recognizes as 8 x Intel(R) Xeon(R) CPU E5440 @ 2.83GHz (32KB L1, 6MB L2), 48GB RAM
New one: 8 quad-core processors that Linux recognizes as 32 x AMD Opteron(tm) Processor 6136 @ 2.4GHz (64KB L1, 512KB L2), 128GB RAM
The scenario: We run a Dataflex system on the old one, with an average of 3000 users, peaks at 3300 users, and sometimes fewer than 2000. On the old one, we have a load average from 2 to 6 with the 3000 users, depending on the type of application (sometimes we run reports to txt files that take more than 6 hours to complete, and the load average can rise to 12). 85% of these connections are from remote links. Dataflex is a language derived from C that has its own database engine (if we can call it that), with a limit of 2GB per table. Five of our tables are almost at this size, and we use a Dataflex feature to compress the data.
The problem: We are migrating (or trying to) the tables to Oracle, so we bought new machines for the DB, plus the new one to replace the old one, because we thought it could not handle the job with 2 Oracle instances (load balanced). In some tests we could see (or supposed) that the Oracle database was not so fast with more than 1000 users (opening the same table and doing the same task), so we decided to test the new one with the system that is in production right now, with Dataflex tables, to confirm whether the problem was Oracle. We swapped the HDs and IP, started the system on the new one, and began to monitor as the real users started their jobs. At 800 users the load average rose to 26, and with 1300 users we had a load average of more than 115. As more users logged in, even top became slow, reporting a load average of 400. From there we started to get some "lock timeout" errors and had to switch back to the old one to prevent corruption of the tables. I'm analyzing every performance and hardware reporting tool I know of, and I can't see anything. I saw some errors in dmesg, but I can't say whether they are related to this problem.
I have two text files on Linux. One contains a list of valid IDs, e.g.:
abcd
efgh
ijkl
etc.
The other contains a list of invalid IDs. But some of these also appear on the list of valid IDs, in this example "efgh":
mnop
qrst
efgh
etc.
How can I easily construct a text file that contains all the lines from the invalid list that do not appear in the valid list? That is, I want to end up with a text file that has:
mnop
qrst
etc.
I'd like either some Linux command-line magic or some clever Vim trickery.
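The command-line magic is a one-liner with grep. A sketch that rebuilds the two lists from the post, then keeps only the invalid IDs absent from the valid list (-x matches whole lines, -F treats them as fixed strings, -f reads the patterns from valid.txt, -v inverts the match):

```shell
#!/bin/sh
printf '%s\n' abcd efgh ijkl > /tmp/valid.txt
printf '%s\n' mnop qrst efgh > /tmp/invalid.txt

grep -vxFf /tmp/valid.txt /tmp/invalid.txt > /tmp/result.txt
cat /tmp/result.txt   # mnop and qrst remain; efgh is filtered out
```

If both files are sorted, `comm -23 invalid.txt valid.txt` gives the same result.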