OpenSUSE :: Have To Start An Analysis Program, And This Analysis Takes About 4 To 6 Hours?
Mar 14, 2010
I remotely access my openSUSE 11 machine using TightVNC. There is nothing wrong with the access itself. But I have to start an analysis program, and this analysis takes about 4 to 6 hours. Therefore, I want to disconnect and let my analysis continue. But after disconnecting from SUSE, the program I started stops working. Does anybody know how to keep the program from being killed while I am not connected?
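A likely cause is that the job is attached to the terminal or session that dies on disconnect. A sketch of three common workarounds, with ./analysis standing in for the actual program:

```shell
# 1) nohup shields the job from the hangup signal sent on disconnect:
nohup ./analysis > analysis.log 2>&1 &

# 2) setsid (util-linux) detaches it into its own session entirely:
setsid ./analysis > analysis.log 2>&1 &

# 3) screen (or tmux) keeps a reattachable terminal around it:
screen -dmS analysis ./analysis   # start detached
screen -r analysis                # reattach after reconnecting
```

Note that a program launched from the VNC desktop itself also dies when that desktop session ends; starting it with nohup or inside screen over SSH avoids that.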
I am looking for an MP3 file analysis program (shell preferred, or X) - something that would give me output similar to what LAME does during the encoding phase.
Quote:
Frame            | CPU time/estim | REAL time/estim | play/CPU |  ETA
9342/10156 (92%) |   0:06/0:06    |    0:06/0:06    |  39.940x | 0:00
32 [ 80] %***
Basically, any app that can analyze VBR/ABR files - not just output a bogus average bitrate, but return more detailed info. LAME has a '-g' (run graphical analysis) option which has to be enabled at compile time - I've tried several ways and -g is still disabled; there is not much info on -g, and I don't even know if it is what I am looking for.
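One option, assuming ffprobe (part of ffmpeg) is available and with input.mp3 as a placeholder filename: dump the size and duration of every audio packet and compute the per-frame bitrate, which makes VBR/ABR variation visible instead of a single averaged number:

```shell
# print one bitrate (kbps) per MP3 frame, then summarise as a histogram
ffprobe -v error -select_streams a:0 \
        -show_entries packet=size,duration_time -of default=nw=1 input.mp3 |
awk -F= '$1 == "duration_time" { d = $2 }
         $1 == "size" && d > 0 { printf "%.0f\n", $2 * 8 / d / 1000 }' |
sort -n | uniq -c            # count of frames at each kbps value
```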
To analyse a core dump, I need to specify the program name/path in GDB/KDevelop. Since the program name, along with its arguments, is also stored within a core dump, I wonder: does it not keep the proper path of the program that crashed, and is that why it asks for it?
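For what it's worth, the core does record which executable produced it, and file will print that without gdb; gdb still asks because it needs the binary itself (for its symbols), not just its name. The paths below are illustrative:

```shell
file core          # prints something like: "... core file ... from './myapp'"
gdb ./myapp core   # passing both explicitly avoids the prompt
```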
I have RHEL4 running on an IBM x3550 server. Whenever we contact IBM support about issues with this server, they ask for IBM DSA logs. The logs are quite extensive, cover almost all of the server configuration, and can identify hardware issues, drivers, etc. I want to know if there is a way to analyze those logs offline without sending them to IBM support.
Is anyone aware of a utility that can show all the files in a partition and their sector locations? This seems like something that file recovery software would have to figure out in the process of recovering deleted files, but I'm not sure any utility actually displays the information in that manner. Before anyone asks: I don't think I really NEED to know this; I'm just curious to see how the sectors are allocated as the partition fills up and files are created and deleted. Prior to shrinking a partition, it would also be interesting to see whether data has to be moved from the end of the partition because it occupies sectors that will be lost in the resizing operation.
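I don't know of a single ready-made map viewer, but filefrag (from e2fsprogs) prints the physical extents of any file, so a sketch over a whole mount point (the paths are placeholders) gets close:

```shell
filefrag -v /home/user/bigfile   # extent list: logical -> physical blocks

# survey every file on one filesystem (-xdev keeps find from crossing mounts):
find /home -xdev -type f -exec filefrag -v {} + 2>/dev/null | less
```

hdparm --fibmap and debugfs give similar views at lower levels; multiply block numbers by blocksize/512 to convert to sectors.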
I know that there are other threads about performance problems ("10.10 runs slow and jerky" and "way too slow", for instance), but none has a solution, at least for me. I have a normal laptop (ASUS F5 with 2 GB). I used Ubuntu 9.x, and it was fine; there were no performance issues. But with Ubuntu 10.10 most applications are very slow (Nautilus, OpenOffice, Eclipse ...). I have GNOME and no visual effects (hardware driver for the graphics card). It is almost impossible to work with this system. Can I use a profiling tool for analysis? What tool do you recommend?
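Before reaching for a full profiler, a quick triage sketch with the stock tools often pinpoints whether the bottleneck is CPU, memory, or disk (iostat is in the sysstat package, perf in linux-tools):

```shell
vmstat 1 5               # watch si/so (swapping) and wa (I/O wait)
top -b -n 1 | head -20   # which processes eat CPU or memory right now
iostat -x 1 3            # per-disk utilisation
perf top                 # live per-function profile (needs root)
```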
Intel WiFi Link 5300 AGN. I think I'll want a dual-boot and to use rdesktop or something once I get more knowledge about what I'm doing. However, my first priority is to get a not-completely-mystifying version of Linux up, with my Wi-Fi card working in RFMON (monitor mode) so I can start collecting packets. I'll no doubt want to avoid builds (though they may not exist) that will not play nice with the dual-core while I'm running analysis.
If there's some Linux variant that will make diagnosing hardware problems and/or running Kismet/AirSnort/crack/peek easier, then I'm all ears. I'm a CS major and am aware I really should have gotten my feet wet before now, but better late than never, and I'm told I'll "never go back." However, I'm going to need just a bit of handholding in these early stages, before I get a success, gain some confidence, and start experimenting so I don't have to ask as many silly questions. As a college student and Linux-user-to-be: "the freer the better."
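On the distro question: the 5300 uses the in-kernel iwlagn/iwlwifi driver, so most mainstream distros can do RFMON out of the box. A sketch with the modern iw tool, where wlan0/mon0 are placeholder interface names (needs root):

```shell
iw dev wlan0 interface add mon0 type monitor   # add a monitor-mode interface
ip link set mon0 up
tcpdump -i mon0 -w capture.pcap                # or point Kismet at mon0

# the aircrack-ng suite wraps the same steps:
airmon-ng start wlan0
```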
I have just started with some basic analyses in GRASS GIS on Linux, and was wondering which commands I could use to select a certain percentage of raster map values - for instance, filtering the 5% highest values of an elevation raster map. I hope there are some GRASS GIS users among you.
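A sketch of one way, run inside a GRASS session and assuming GRASS 6.4+ where r.quantile is available (elevation is a placeholder map name, and the 1523.7 threshold is made up):

```shell
# report the 95th-percentile break of the raster values:
r.quantile input=elevation percentiles=95

# suppose it reports 1523.7; keep only the top 5% of cells:
r.mapcalc "top5 = if(elevation >= 1523.7, elevation, null())"

# r.univar -e also prints extended stats including a chosen percentile:
r.univar -e map=elevation percentile=95
```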
Fedora 15 uses scidavis (Scientific Data Analysis and Visualization) taken from Fedora 14. If you try to build the native rpm package, an error occurs while building the documentation. A one-line sed in the spec file solves the problem:
Code:
#fix spurious-executable-perm
find . -type f -exec chmod 0644 {} \;
#fix docbook to adapt to different versions of fedora
sed -i "s/VER-REL/`rpm -q docbook-dtds | sed "s/^[^0-9]*//;s/\.noarch//;s/\./\\\\./g"`/" manual/docbook-en/index.docbook
#
# ---> sed line to fix Fedora 15 building:
sed -i -e 's/xsl-stylesheets-1.75.2/xsl-stylesheets-1.76.1/' manual/scidavis_html.xsl
#
#fix default path for fitPlugins
sed -i "s|/usr/lib/%{name}/plugins|%{_libdir}/%{name}/plugins|g" %{name}/src/ApplicationWindow.cpp
sed -i -e 's/Qt;Science;Physics;Math;Graphics;/Education;Science;DataVisualization;Qt/' %{name}/%{name}.desktop
I am making a comparison between TORA, DSR, DSDV and AODV. There is some error in the tora.tcl protocol script; the rest works fine. Why is tora.tcl producing the error?
The error is given below:
Code:
num_nodes is set 6
INITIALIZE THE LIST xListHead
(_o22 cmd line 1)
invoked from within "_o22 cmd port-dmux _o37"
invoked from within "catch "$self cmd $args" ret"
invoked from within "if [catch "$self cmd $args" ret] {
    set cls [$self info class]
    global errorInfo
    set savedInfo $errorInfo
    error "error when calling class $cls: $args" $..."
.....
I have over 500 hours of audio recordings of my server room. There's a lot of background noise from the fans, but it's mostly constant. Is there a way to automate an analysis of these audio files (MP3, 30 minutes each) and display changes - like frequency or decibel changes?
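A batch sketch assuming ffmpeg is installed: print the mean level of each file with the volumedetect filter, so level changes between recordings stand out (ffmpeg's silencedetect filter, or sox's stat and spectrogram, go deeper on frequency content):

```shell
# one line per file: filename and its mean volume in dB
for f in *.mp3; do
    vol=$(ffmpeg -i "$f" -af volumedetect -f null - 2>&1 |
          awk '/mean_volume/ { print $(NF-1) }')
    printf '%s\t%s dB\n' "$f" "$vol"
done
```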
Can anyone recommend some really good performance analysis tools? top is not good enough because it has problems. I am looking for products like collectd or collectl, or something else comparable. I need something that will look at tasks, CPU, memory, disk usage, interrupts, and priorities - and ideally something that can display the results graphically. If I am missing a tool, let me know.
I am developing an application whose executable is generated inside a certain folder hierarchy (say /DevPath/MyProject/bin). My source code is located in a different branch of this hierarchy (say /DevPath/MyProject/src). When my app crashes, its core files are stored in /DevPath/MyProject. I develop the app on a PC but run it on a separate platform on which I can only execute it. The folder hierarchy is the same on both computers. Usually, when a new executable version is ready, we update both the executable and the source code on the target platform, transferring the whole new /DevPath/MyProject folder onto it. But sometimes that is a real bother, so we update only the executable.
1) In the case where we update only the executable, keeping an old source code version, and the app generates a core file, can I trust the backtrace produced by gdb? I.e., does gdb need the latest source files, or just the debugging information?
2) (More radical question.) Do I really need to keep the source code on the target platform for core dump analysis, or do I just need the executable?
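As I understand it, the backtrace is built from the debug info compiled into the executable, not from the source tree, so a stale source copy does not corrupt bt - it only makes source listings misleading. A sketch (paths from the post above):

```shell
gdb /DevPath/MyProject/bin/myapp /DevPath/MyProject/core
# (gdb) bt      <- needs only the executable's debug info
# (gdb) list    <- this is the step that consults the .cpp files,
#                  and the one that lies if the sources are out of date
```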
I am currently looking for static/dynamic code analysis tools for embedded Linux system development (both device drivers and user-space apps). We will use the Eclipse IDE and the C++ language. I hope for tools that are easy to use, reliable, popular, ideally well supported, and not too expensive. I have already found a list of tools on a wiki; however, I don't have time to try them all. Could anybody recommend a few? If you can tell me briefly about their pros and cons, that would be best.
It is vital to get a useful server performance monitoring tool that prevents growth related performance issues. Moreover, it should offer long term capacity planning and trend analysis along with detecting performance issues and unwanted outages.
Second: I need a rapid way to extract some simple raw information from any hard disk - something like ls -latr /*.* | sort (by extension/type). The goal I'm looking for is output something like:

.exe  1034  last creation date
.jpg  2437    "
.xxx   365    "
etc....
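A sketch of that report with find and awk, where /mnt/disk is a placeholder mount point: count files per extension and show each extension's newest modification date (creation time generally isn't stored on Linux filesystems, so mtime is the usual stand-in):

```shell
find /mnt/disk -type f -printf '%f\t%TY-%Tm-%Td\n' 2>/dev/null |
awk -F'\t' '{
    n = split($1, p, ".")
    ext = (n > 1) ? tolower(p[n]) : "(none)"
    cnt[ext]++
    if ($2 > last[ext]) last[ext] = $2   # ISO dates compare as strings
}
END { for (e in cnt) printf ".%-7s %6d  %s\n", e, cnt[e], last[e] }' |
sort -k2 -rn
```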
I just noticed the results of the Honeynet Project's Challenge 7: Forensic Analysis of a Compromised Server have finally been posted today. Just got done reading one of the submissions and it's pretty good if anyone is interested in how to analyze a Linux incident involving evidence from memory and the file system.
Does anyone know of, or have, a book on the advanced uses of Kickstart and its deployment methods? I have a challenge: to write a report with a full and detailed analysis of the two methods (remote installation methods of Linux and Windows), their differences, and a comparison of features and performance.
I am also going to have two servers (one Windows, one Linux) that deploy a virtual network of VMs with different scopes and policies. What can I really do to go beyond the scopes and policies?
I synthesized a seismogram using Fortran code, and I need to plot the synthesized seismogram and the data together so I can verify the accuracy of the code. Now I have hit a question: how do I read the SAC data from Fortran code? I have searched for some code on the Internet; the details follow. velr12a.sac is my data file.
Code:
c     read sac file (RSAC1 comes from SAC's sacio library; link with -lsacio)
      PROGRAM RSAC
      PARAMETER (MAX=1000)
      DIMENSION YARRAY(MAX)
c     widened from CHARACTER*10 so 'velr12a.sac' (11 chars) fits
      CHARACTER*32 KNAME
      KNAME = 'velr12a.sac'
      CALL RSAC1(KNAME, YARRAY, NLEN, BEG, DEL, MAX, NERR)
      END
I noticed that about every 5 times I boot CentOS 5.4, a find search is initiated that takes several hours - for example: find . -name rd=rmdir -print. I'm not sure if it's related, but I do have "alias rd=rmdir" in my .aliases. Would changing it to "alias rd=/bin/rmdir" avoid this problem? I'm using zsh. Is this search necessary?
I have a hardware device with two Ethernet ports, eth0 and eth1, running CentOS 5. Basically, my goal is to forward packets from eth0 to eth1 and from eth1 to eth0, and also to get a copy of these packets for analysis. If I set up IP routing to do the forwarding, I won't get a copy of the packets for analysis.
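One sketch: since the box is purely forwarding, a layer-2 bridge lets the kernel do the eth0-eth1 forwarding, and tcpdump on the bridge sees every frame that crosses it. On CentOS 5 that means bridge-utils (needs root, and both NICs lose their own IPs):

```shell
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig eth0 up ; ifconfig eth1 up ; ifconfig br0 up
tcpdump -i br0 -w capture.pcap   # a copy of everything crossing the bridge
```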
I've just installed a fresh copy of 11.4 and the updates are driving me crazy. My Internet connection is up, but the download stops and goes, and when I try using Firefox, it takes a minute even to start loading any page. I have the system monitor up, and it tells me that the network is not receiving anything except bursts of data approximately every 30 seconds.
I know there's a bunch of ways to start a program on boot, but I'm wondering if there are pluses/minuses to the different ways and what they are. My requirements: 1. I want it to be started as the user I log in as (NOT root).
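Two user-level sketches, neither involving root (/home/me/bin/myprog is a placeholder). The first runs at desktop login, the second at actual boot, before any login:

```shell
# 1) freedesktop autostart entry -- runs when you log in to the desktop:
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/myprog.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=myprog
Exec=/home/me/bin/myprog
EOF

# 2) per-user cron @reboot job -- runs at boot as your user:
( crontab -l 2>/dev/null; echo '@reboot /home/me/bin/myprog' ) | crontab -
```

Note the cron job gets a minimal environment and no display, so it suits daemons; the autostart entry suits GUI programs.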