General :: "Nicing" Long-Running, Computationally Intensive Executables?
Sep 10, 2010
I've just noticed that "nicing" long running computationally intensive, I/O unintensive, single-threaded executables on my system increases the CPU run time of those executables (as reported by /usr/bin/time as well as by wall clock) by a factor of 2-3 even in the absence of any load, i.e., "top" tells me that my program is getting 100% of a CPU. For example:
[code]....
My system is running Ubuntu 7.10. If I run the same executable on two other machines I have access to -- one running Fedora 5 and the other Ubuntu 9.10 -- I don't see any discrepancy between the runtimes using nice and not using nice.
This behavior is executable independent, compiler independent, and language independent -- I'm seeing it across the board. I'm assuming I've somehow configured my system to behave this way, but I have no idea what I may have done. Also, this was the first time I'd ever done timing runs with "nice" (actually, "at"), so I'm not sure how long my system has been configured (if configured is the right word) this way.
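One plausible culprit, assuming the machine uses the "ondemand" cpufreq governor: that governor can be configured to ignore niced load, so a niced job never triggers a clock-speed ramp-up and runs at the CPU's lowest frequency -- which would produce exactly this 2-3x slowdown with top still showing 100%. A quick check (the sysfs paths below can vary slightly between kernel versions):

```shell
# Current frequency governor for CPU 0 (e.g. "ondemand", "performance")
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# If this prints 1, niced processes are excluded from the governor's
# load calculation and never cause the CPU to speed up
cat /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load

# To count niced load again (root required):
# echo 0 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice_load
```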
I'm looking for a way to time how long a program runs in the terminal. I didn't have any luck searching around but I'm sure it's possible. Does anyone know of the easiest way to do this?
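The usual answer here is the shell's built-in `time` keyword, or the stand-alone `/usr/bin/time` binary, either of which reports elapsed (wall-clock), user, and system time for any command:

```shell
# Shell builtin: prints real/user/sys to stderr when the command finishes
time sleep 2

# External GNU time; -v (GNU-specific) adds peak memory, page faults, etc.
/usr/bin/time -v sleep 2
```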
I suspect this has been posted before, but I'm new enough to the OS to lack the proper vocab for a proper search.
In any case, I'm basically trying to run programs remotely at work (from home) that will run for a long time. I first ssh into the appropriate network, then ssh into the individual machine. This works great and I can run programs no problem. The issue is that these programs can run for days or weeks, and the network kicks me out after some period of time due to inactivity.
Is there a way to start a program on a remote machine then terminate the connection but have it keep running?
What do you do if a job takes a long time to finish and you don't want to wait? Say I ssh to a remote server from my laptop and start a long-running job. Then a few hours later I ssh in again and inspect how the job ran, its output, etc.
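A common pattern for both of the last two questions: detach the job from the ssh session so it survives the disconnect. `nohup` plus output redirection is the minimal version; `screen` (or `tmux`) additionally lets you reattach later and inspect the output interactively. A sketch, with `./longjob` as a placeholder for the real program:

```shell
# Minimal: the job ignores the hangup signal when the ssh session dies;
# stdout/stderr go to job.log for later inspection
nohup ./longjob > job.log 2>&1 &

# Reattachable: start a named screen session, run the job inside it,
# detach with Ctrl-A d, and later reattach from any ssh session:
screen -S myjob
# ... later ...
screen -r myjob
```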
i'm thinking of getting either an ati radeon 4670, 4770, or 5750. i dual-boot, linux 90% of the time (currently fedora 10, might switch to f12 if advisable), 10% of the time i boot into windows xp to play games. currently my card is an 8600gts, and it's ok for me in linux. i actually don't do any 3d stuff in my desktop during linux - no compiz, no 3d gaming, nothing.
aside from my software development work and r&d stuff, i also just use it for watching the occasional video (sometimes unwittingly, like getting rickrolled, heh) and watching movies in totem (mostly .avi ones). oh, and it has to be able to do a "correct" resolution, not stuck on low-res 800x600 or 1024x768 (as was my experience having to use mesa on a low-end ati igp, not my box though). my resolution is currently 1280x1024, but i will also be upgrading to a new monitor which will do either 1440x900 or 1600x900 (haven't decided yet on the monitor model)
for these purposes, will the aforementioned ati cards work out of the box? basically, it just has to be able to support the proper resolution of my monitor, and not screw up video playback. that's it. no 3d involved as far as i can gather.
I need to identify the right network structure for a data-intensive website built on LAMP. I'm thinking of a load-balanced web tier with a MySQL master/slave setup behind it. I'm no expert in this area, so any online resources are welcome. You can check out the website [URL] -- but it will have 10 million items once the hardware can support it.
I had just started Ubuntu 9.04 when it said: "File system check failed. A log is being saved in /var/log/fsck/checkfs if that location is writable. Please repair the file system manually. A maintenance shell will now be started. CONTROL-D will terminate this shell and resume system boot. Give root password for maintenance (or type Control-D to continue)." I pressed Ctrl+D, and after login it said that it cannot find /home. I started with the live CD:
i am trying to upgrade to ubuntu 10.04 from 8.04, and am getting this warning: "Upgrading may reduce desktop effects, and performance in games and other graphically intensive programs. This computer is currently using the AMD 'fglrx' graphics driver. No version of this driver is available that works with your hardware in Ubuntu 10.04 LTS. Do you want to continue?" should i continue? i have no idea what a 'fglrx graphics driver' is
I was running grsync (rsync gui) to do a backup of my root and home partitions to a local external hdd. Home is currently 21.2 gb and root is about 60 gb, but the backup ran for nearly 24 hours before I canceled it, without finishing. How long should an 80ish gb backup take to do?
I made sure to disconnect other externals so they wouldn't be backed up as well. It was just my root and home being backed up.
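For a local disk-to-disk copy of that size, hours rather than a day would be more typical. A sketch of an archive-mode run that shows per-file progress (to see whether it has stalled) and stays on one filesystem -- destination paths below are placeholders:

```shell
# -a  archive mode (permissions, times, symlinks, recursion)
# -x  don't cross filesystem boundaries (avoids pulling in other mounts)
# --progress  shows each file as it transfers
rsync -ax --progress / /media/backup/root/
rsync -ax --progress /home/ /media/backup/home/
```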
I am running CentOS 5.3. I ran no updates, performed no installs, and changed no configuration immediately prior to this issue. My problem is this: when I run the command startx (default runlevel 3), it is a long time (5-10 minutes) before Gnome starts, and once it does start, applications will not run. Also, when I try to use sudo (from any environment, even ssh), it is a long time (5-10 minutes) before the command is executed.
I cannot say for sure, but it seems like this is an intermittent problem. Sometimes X takes a long time to start, but once it starts it will launch programs. Sometimes X takes a long time to launch, but once it starts it will only launch certain programs. Though presently X always takes a long time to start, and I cannot successfully launch any programs.
A while back I had a similar problem to this (X taking a long time to start, sudo taking a long time to execute) and it ended up being a DNS problem. Unfortunately, I cannot remember exactly what it was, and I stupidly did not document it. Maybe this is also DNS related; I don't know.
I don't know what log files to look at for problems with X, Gnome, and sudo taking a long time to start.
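If the DNS theory is right, a frequent cause of exactly this pair of symptoms is the machine's own hostname failing to resolve: both sudo and the X startup path look it up and sit through a DNS timeout when it's missing. Worth checking, assuming nothing else has changed:

```shell
# The hostname should resolve quickly; a multi-minute hang on these
# commands points at a name-resolution timeout
hostname
getent hosts "$(hostname)"

# The hostname should normally appear in /etc/hosts, e.g.:
# 127.0.0.1   localhost
# 127.0.0.1   myhost.example.com myhost
grep -n "$(hostname)" /etc/hosts
```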
I'm always hesitant to use /var/tmp/, because I never quite know exactly how long the files are kept there for, or even what the directory is used for. What determines when a file gets removed from /var/tmp/, and how is the directory intended to be used?
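By convention, /var/tmp is for temporary files that should survive a reboot (unlike /tmp); how long they last is distribution policy, not a kernel rule. Two places to look, depending on distro -- either may be absent on a given system:

```shell
# Debian/Ubuntu of this era: TMPTIME in /etc/default/rcS controls
# boot-time cleaning of /tmp; /var/tmp is usually left alone at boot
grep TMPTIME /etc/default/rcS 2>/dev/null

# Red Hat/CentOS: a daily tmpwatch cron job may prune old files,
# typically removing anything unused for ~30 days under /var/tmp
cat /etc/cron.daily/tmpwatch 2>/dev/null
```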
Suppose I am almost sure that from last Thursday, 3.00pm up to the same day at 10.00pm I was away from the machine, but not absolutely sure. Linux probably knows better than me. Maybe there will be a text file from which I could infer the keyboard was idle from Thu 2.40pm up to 11.10 pm. In this case, I would reach absolute certainty. But where could such file be in the /. tree or what could its name be (for in the latter case an updatedb followed by locate would do)?
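There is no single file that records keyboard idle time, but the login accounting database comes close: `last` reads /var/log/wtmp and shows session start and end times, and the authentication log timestamps every sudo/login event, which together bracket any activity at the console:

```shell
# Session history (logins, logouts, reboots) with timestamps
last -x | head -20

# Authentication events (sudo, su, ssh logins) on Debian/Ubuntu;
# on Red Hat systems the equivalent file is /var/log/secure
tail -50 /var/log/auth.log
```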
I managed to, in a bit of an "Oh shit, I'll lose my data" panic, be stupid enough to xkill the Ubuntu 10.04 installer while it was making an ext3 file system on my external hard drive. This drive was formatted ext2, I had not set it to format (at least not consciously), and it had a LOT of data on it. Only half an hour after killing it did I realize it might simply have been converting rather than formatting.
Anyway. Because this was my own fault I started searching for possible solutions. I tried e2fsck, ran testdisk and gpart, as well as several data recovery programs both from Windows and Linux. All I have been able to get back with that is a whole bunch of corrupted files and some music.
Now I have unleashed e2salvage on the beast, which so far has been looking promising, other than some "directory inside Inode table" errors. It found almost 20000 directories, and 174 directory beginnings.
I use a long mount command to mount a NAS drive but have to retype it every time I need to mount the drive. Because it is on my laptop, I only need to mount the drive from time to time.
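Two standard ways to avoid retyping a long mount command -- the server path and mountpoint below are placeholders, assuming an NFS-exporting NAS: an /etc/fstab entry with `noauto,user` so it mounts on demand with a short command, or a shell alias wrapping the full invocation.

```shell
# /etc/fstab line -- afterwards "mount /mnt/nas" suffices, and the
# "user" option lets a non-root user mount it; "noauto" skips it at boot:
#   nas:/export/share  /mnt/nas  nfs  noauto,user,soft,intr  0  0

# Or an alias in ~/.bashrc wrapping the full command:
alias mountnas='sudo mount -t nfs -o soft,intr nas:/export/share /mnt/nas'
```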
I have two minor problems with Ubuntu which I've been running on my aging Fujitsu-Siemens Lifebook for a couple of years now.
First, I recently upgraded to v10.04 with no problems. However, I've just applied the latest updates via Update Manager and the laptop will now hang after the welcome screen.
There are no error messages, just a black screen, and the case fan runs at full tilt until I force a shutdown. I've waited 5 or so minutes to see if it's actually doing anything, but it would appear that it isn't.
The only way to boot the laptop is to choose an older Grub menu option, then it boots up fine. It may very well be a hardware issue because another (newer) laptop in the household has updated no problem.
Next, I tried to change the password of the admin account using "Users and Groups". It appeared to work, but then I had to use the old password to log in again. On logging in I am prompted for the new password, with an error message saying that the "keyring" password (I think it's keyring; I'm doing this from memory) doesn't match.
Again I can live with this quirk but it would be nice to put it right.
I'm having trouble with Vim in any terminal emulator I use. I have a link (vi) to vim. Occasionally it will take very long to load, whether I use 'vi' or 'vi file'. Before, if I could I would restart X, and then it would load instantly again, but I waited this time and it did load, after a minute or so. Is this a problem with X or vim?
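One classic cause of this exact symptom: console Vim built with X support tries to contact the X server at startup (for clipboard and window-title integration) and blocks until the connection attempt times out -- which would line up with restarting X making it fast again. Assuming that is the cause here, `-X` skips the attempt:

```shell
# Skip the X-server connection entirely (clipboard integration is lost)
vi -X file

# Check whether this vim was built with X clipboard support at all:
# "+xterm_clipboard" means yes, "-xterm_clipboard" means no
vim --version | grep xterm_clipboard
```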
I am changing the password of a truecrypt file container. This takes around 1 minute. Why?
time truecrypt --text --change /tmp/user1.tc --keyfiles= --new-keyfiles= --password=known --new-password=known --random-source=/dev/null
If I use strace I see that it basically does not do much: it simply reads lots of random data from /dev/urandom (even though I specified /dev/null as the random source) and finally changes the password:
The find command is taking too long to complete on my machine. When I use the time command, I find that sys time and user time are very small compared to real time. Is my find process not getting scheduled properly?
I interrupted the neverending find command and got the following statistics:
Real time: 5 min
Sys time: 1.1 sec
User time: 3 sec
How do I break a long command into multiple lines in crontab? e.g. Code: # the following is a very long and gruesome command to be run at 09:59, Monday to Friday. 59 09 * * 1-5 source $HOME/some-definitions; sh /usr/local/my/long/name/application/bin/hello $(date +\%Y\%m\%d) >>/var/log/my/long/name/application/log/hello.log
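Standard crontabs don't support continuation across lines (and `%` is special to cron, hence the backslash escaping); the usual fix is to move the whole pipeline into a wrapper script and call that from cron. A sketch using the paths from the post, with the wrapper location an assumption:

```shell
#!/bin/sh
# /usr/local/bin/hello-cron.sh -- wrapper so the crontab line stays short;
# inside a script, % needs no escaping
. "$HOME/some-definitions"
sh /usr/local/my/long/name/application/bin/hello "$(date +%Y%m%d)" \
    >> /var/log/my/long/name/application/log/hello.log
```

The crontab entry then shrinks to: `59 09 * * 1-5 /usr/local/bin/hello-cron.sh`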
I have about 200k data entries in an XML file. I wrote a PHP script (using php-xml) to read the XML file and insert the entries into MySQL. At first the inserts went really quickly, then after about 100k entries it slowed right down, as if it was not doing anything at all. I have CentOS with 512M RAM on VirtualBox running as the server.
I want to source this file but get the error message "word too long". set path=(/user/lib/usr/bin /bin/usr/ucb/etc/usr/ccs/bin/$path)
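csh's "word too long" here is likely caused by the run-together directories: without spaces, most of that list is parsed as one giant word. Assuming /user/lib is a typo for /usr/lib and the intended components are the usual Solaris-style directories, the line would read:

```shell
# csh/tcsh syntax -- note the space separating each directory
set path = (/usr/lib /usr/bin /bin /usr/ucb /etc /usr/ccs/bin $path)
```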
I'm a Windows admin who does part-time Linux server installs. Most of the time I'm asked to deploy a generic Linux server, install a few basic applications, and if needed some other applications like Nagios or Zabbix. My question is about long-term support and patching: should I be deploying applications from repositories or compiling from source? In the Windows world you can patch and update from Windows Update, but are there problems with relying on 3rd-party repositories for future updates? Could one of these locations go offline?
I'm trying to do a find /photos/* -type f -mtime +365 to find all my pictures that are over a year old, but I keep getting argument list too long. How can I view what all the results are, even if it just dumps it to a file that I have to open?
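The "argument list too long" comes from the shell expanding `/photos/*` into one enormous argument list before find ever runs. Give find the directory instead and let it recurse, redirecting the results to a file:

```shell
# The shell no longer expands anything, so there is no argv limit;
# matches land in a file you can open or page through later
find /photos -type f -mtime +365 > old-photos.txt
wc -l old-photos.txt
```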