I need to ensure that the "writer" to named shared memory (small size, 160 bytes) completes its update such that the many "readers" always get the latest data. The machine is a multi-processor (8 CPU, 24 GB RAM), non-real-time system. The system has multiple processes that are also multi-threaded. Thankfully, there is only one "writer" and many "readers". There is no semaphore or mutex locking. "Readers" must not block each other and especially must not block the "writer".
By design, it is expected that from time to time a "reader" will be in the middle of reading when the "writer" begins to update the data. I want to protect against the case where the "writer" is interrupted and a "reader" completes its read before the "writer" wakes up and finishes its changes. In that case the "reader" would get corrupt data, some new and some old.
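What I have in mind is essentially a sequence counter (seqlock-style) sitting next to the data: the writer bumps a counter before and after its copy, and a reader retries whenever it sees an odd or changed counter. A minimal sketch of the idea, assuming GCC's __sync builtins for the memory barriers and a made-up layout for the 160-byte block:

Code:
#include <string.h>

#define PAYLOAD_SIZE 160

/* Hypothetical layout of the shared segment. */
struct shm_block {
    volatile unsigned seq;      /* even = stable, odd = write in progress */
    char data[PAYLOAD_SIZE];
};

/* Single writer: mark the block "in progress" around the update. */
void shm_write(struct shm_block *blk, const char *src)
{
    blk->seq++;                 /* becomes odd */
    __sync_synchronize();       /* full memory barrier */
    memcpy(blk->data, src, PAYLOAD_SIZE);
    __sync_synchronize();
    blk->seq++;                 /* becomes even again */
}

/* Readers: retry until the sequence is even and unchanged across the copy. */
void shm_read(struct shm_block *blk, char *dst)
{
    unsigned s1, s2;
    do {
        s1 = blk->seq;
        __sync_synchronize();
        memcpy(dst, blk->data, PAYLOAD_SIZE);
        __sync_synchronize();
        s2 = blk->seq;
    } while ((s1 & 1) || s1 != s2);   /* a write was in flight: retry */
}

The writer never blocks and readers never block each other; a reader only spins for the short moment a write is actually in progress.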
I have two shared variables, a and b, which are related to each other. When multiple applications share these variables, access to them needs to be an atomic operation, otherwise the relation may break. So to ensure mutual exclusion, I'll put their modification in a critical section protected by a lock.
Code:
critical_code {
    P(mutex)
    a := something
    b := something
    V(mutex)
}
Let's say my hardware/OS/compiler supports atomic variables. Then I modified my above code as follows.
Code:
code {
    atomic a := something
    atomic b := something
}
Can this code ensure mutual exclusion, when accessed by multiple applications?
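To make the question concrete, this is roughly how I picture the two versions in C (pthreads for the lock; GCC's __sync exchange builtin standing in for "atomic"; all names are made up):

Code:
#include <pthread.h>

/* Two related shared variables (hypothetical). */
int a, b;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Variant 1: both stores happen inside one critical section, so nobody
 * else taking the same lock can ever see a half-updated pair. */
void update_locked(int new_a, int new_b)
{
    pthread_mutex_lock(&m);
    a = new_a;
    b = new_b;
    pthread_mutex_unlock(&m);
}

/* Variant 2: each store is individually atomic, but the scheduler can
 * still run another application between the two stores, which may then
 * observe the new a together with the old b. */
void update_atomic(int new_a, int new_b)
{
    __sync_lock_test_and_set(&a, new_a);   /* atomic store of a alone */
    __sync_lock_test_and_set(&b, new_b);   /* atomic store of b alone */
}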
I'm only somewhat new to Linux and I still don't have a real grasp of its deep innards, and I had a fairly outlandish idea that I'm wondering whether it is possible/plausible or not. I want to run a game server on CentOS that depends heavily on fast writes to disk. Disk writes are pretty much the single bottleneck in this server. First I looked at allowing a high queue of writes to pile up before flushing them to disk, but I read that this causes fsync, which is still commonly used, to take a very long time.
I've been thinking about the possibility of running the server on a RAM disk, but I still want changes to be saved to non-volatile storage; not all at once, but by actively writing the changes out to disk as they accumulate. The hope is that this would smooth out the peaks and valleys of write activity and improve overall performance, but I have not seen this idea discussed anywhere.
So my question is: is there any plausible way to continuously copy writes made to a RAM disk over to a physical drive without slowing the writes to the RAM disk below RAM speed? Or is there a better way to get this sort of performance, short of investing in expensive equipment?
Is there any way in Linux to achieve an atomic increment/decrement of an integer variable without being interrupted? That is, no other thread should get a chance to run until the increment/decrement is completed.
According to [URL], gcc 4.3 supports atomic built-in function "GCC can now use load-linked, store-conditional and sync instructions to implement atomic built-in functions such as __sync_fetch_and_add" However, I am still using gcc 3.4 on my Redhat EL4. How do I get this built-in function installed?
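For reference, the usage I am after is something like the sketch below; as far as I know this needs a GCC that ships the __sync builtins (roughly 4.1 or later), so it does not build as-is with my 3.4:

Code:
#include <stdio.h>

static int counter = 0;   /* shared between threads */

void bump(void)
{
    /* Atomically adds 1 and returns the previous value; the whole
     * read-modify-write cannot be interleaved with another thread. */
    int old = __sync_fetch_and_add(&counter, 1);
    printf("counter was %d, is now at least %d\n", old, old + 1);
}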
I have this HDD and another with XP on it, but whenever I change to Ubuntu the clock goes out of whack and makes me change it manually, whereas in XP I have an atomic clock utility installed where I just click 'Ping' and she's right. I've looked at heaps of downloads for something similar on here, but I haven't found a free one that works, other than a trial type that I'm not interested in. So why is Linux so difficult in this regard? I will accept all the help I can get on here, because I like the thought of being able to get away from Microsoft.
Introduction: We have a C++ application on the RHEL 5.4 platform. We also use TCP/IP socket programming to send and receive messages, using the socket write and read calls, and we are getting intermittent run-time write issues. Through various debugging and strace sessions, we came to the conclusion that the issue happens in some write attempts, as follows.
Detailed description: In the simplest case, consider that I have a server and a client. The server writes some messages using the write call and the client is supposed to read the same data. The major code snippets on the server side are as follows [it is not feasible to extract the actual files and application code as a whole; below are just the major calls used on the server side]:
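(This is not our actual server code, but for context, the generic pattern we are comparing our writes against, since write() on a socket may legitimately send fewer bytes than requested or fail with EINTR, is roughly:)

Code:
#include <errno.h>
#include <unistd.h>

/* Keep calling write() until the whole buffer has been sent,
 * retrying on EINTR. Returns 0 on success, -1 on a real error. */
int write_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal, retry */
            return -1;             /* genuine error, e.g. EPIPE */
        }
        sent += (size_t)n;
    }
    return 0;
}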
I wasn't sure if this was the right place to ask or comment on this, but since it's about the Apache web server I thought it should work. I finally figured out how to set up and bring up the site using virtual hosts in Apache, though at the moment it's just for my localhost install.
I set them up so I would have a place to play with possible new themes and/or test the Drupal 7 alpha/beta releases without messing up my current configuration. I decided to look at the error logs for the currently configured site, and it had a lot of messages similar to the following:
[Sat Mar 06 09:45:39 2010] [error] [client 127.0.0.1] ModSecurity: Unable to retrieve collection (name "ip", key "127.0.0.1"). Use SecDataDir to define data directory first. [hostname "site.local"] [uri "/"] [unique_id "ZnUHgsCoAAEAABdzR2QAAAAB"]
I don't know why but sometimes zypper (v1.4.5) removes some of the installed packages before upgrading them.
Actually it removes them even before downloading the newer version. I just wonder what stupid f**k implemented this behavior. I wish something awful happens to him.
An hour ago I tried to update a few packages, among them sysconfig, firefox and yast. Guess what? Zypper removed them before downloading the newer versions. Due to a temporary network connection loss, zypper could not download the packages and quit, leaving me without firefox, without yast and, what's even worse, without the sysconfig package, which contains essential system scripts such as ifup and ifdown (so my network went down completely with no way to restore it). It's pure luck there was no new rpm package (or zypper would have removed that too), so I could just download the packages with my second PC and reinstall them from a memory stick.
Now I'm interested in where I should file this bug, because without a doubt it's a critical one.
I'm running 32-bit 10.10 on a Lenovo N200 laptop. I have 3 other Ubuntu computers on my home network and they don't suffer from this problem: one is 10.10 32-bit, one is 10.10 64-bit and one is 10.04 32-bit. They are on the same network and I cannot see a difference.
However, when the update manager says there are updates available, I can install the critical ones but not the "other updates"; it tells me that they would require installation from unauthenticated sources, and then, after I agree, it will not install them and goes back to the start of the update process.
The error message lists ALL the "other updates". If I deselect them, then the critical upgrades download and install OK.
I started to use the smart plugin for munin to monitor my HDD. I changed the script a bit to report the raw value instead of the normalized value for Power_On_Hours and temperature. It all works.
However, it now reports the values for these 2 sensors as critical, even though Power_On_Hours is 928 with a threshold of 999 (I even tried 9999) and the temperature is 51 with a threshold of 60.
Why does GNOME fail to start correctly after compiling the latest version of Zlib?
I have analyzed the situation. Zlib does not change or delete any file on the file system.
I have asked this on three occasions over the last 8 months and no one has been able to give me an explanation, so it should be a great challenge for the Ubuntu lovers. I am also EXTREMELY interested to find out why, because I have held a strongly negative opinion of GNOME ever since this occurred.
Usually when my battery gets critically low, the system warns me to plug it in and then it hibernates. But recently it displays no warning, and instead of hibernating it just shuts off. It still displays the "20 minutes remaining" libnotify warning, but that's not useful if I'm doing something fullscreen.
I recently installed Ubuntu and wanted to remove the Windows XP installation it was dual-booted with (I used Wubi to install Ubuntu), but I ended up just editing boot.ini to remove it from the boot menu and automatically boot into Ubuntu. But then Ubuntu started playing up and keeps popping up the Critical Temperature warning, saying it's 80-100C, even though it's the first time it has been turned on in 14 hours... And to add to my confusion, when I try to boot from ANY disk, it comes up with "NTLDR is missing" and a restart message.
I am using Nagios to monitor VMs on blade servers, and everything is working fine. The RAM size of the VM is 4 GB, and whenever the machine starts, the status of the RAM check is critical, showing only a few kB free.
I set the critical limit at 5%, and it comes back as 3% and shows as critical. When I run top, I get the following:
As shown in the %MEM column, the total memory used is only about 23%, whereas it reports used memory of about 3.7 GB.
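My working guess is that the check counts buffers and page cache as "used" memory, which Linux deliberately keeps as full as possible. This little test program (only a sketch; it reads the MemFree, Buffers and Cached lines from /proc/meminfo) shows what is effectively reclaimable:

Code:
#include <stdio.h>

/* Sum MemFree + Buffers + Cached: the kernel can give buffers and page
 * cache back to applications at any time, so counting them as "used"
 * makes the box look far more loaded than it really is. */
int main(void)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    char line[128];
    long value, avail_kb = 0;

    if (!fp)
        return 1;
    while (fgets(line, sizeof line, fp)) {
        if (sscanf(line, "MemFree: %ld", &value) == 1 ||
            sscanf(line, "Buffers: %ld", &value) == 1 ||
            sscanf(line, "Cached: %ld", &value) == 1)
            avail_kb += value;
    }
    fclose(fp);
    printf("effectively free: %ld kB\n", avail_kb);
    return 0;
}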
SITUATION/411: Trying to avoid some ultimate data loss here. - Bad superblock reported by fsck.
SIZE-UP: Make & Model: Maxtor DiamondMax Plus 9 80 GB USB at /dev/sdg (external).
Host operating system version: F11. Running kernel: Linux **********.*** 2.6.30.10-105.2.23.fc11.x86_64 #1 SMP Thu Feb 11 07:06:34 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
MITIGATING FACTORS: Was a dual-boot drive. Partitions as follows:
I got the same error while installing (even though the install works fine!), both during installation and after it. It's "Critical temperature reached, 3564 C, shutting down." The Celsius value is not made up; it really is along those lines (four digits).
I had another problem with the installation, but I managed to install using the nomodeset setting. My PC is a Rock laptop (which is actually supposed to work well with Vista instead... oh well...).
Any suggestions? I have tried installing with acpi=off, but it doesn't make any difference during installation or after it.
Also, another error I got during some of my tests was: "init: Failed to spawn rc-sysinit main process: unable to open console: input error".
Fascinatingly weird one here. First, this issue isn't on my computer; it's from someone I am helping, and I don't have first-hand access to the machine. Some background: it originally had Ubuntu Hardy, which we upgraded to Lucid a couple of weeks ago. Earlier this week, he gave me a call that Ubuntu wasn't booting up; it dropped to the command line. Some tinkering later, I figured out that libgthread-2.0.so had become corrupted, so X wasn't starting. It gave an error complaining that it had an invalid ELF header. I figured that this was just an odd freak occurrence; there had been a bad kernel panic previously, so maybe the library was being upgraded and the system was writing to the disk at that time. Fixed via sudo aptitude reinstall libglib.
Ubuntu then started and everything ran perfectly. Today he gave me a call: after he had restarted the computer, Ubuntu again dropped to a terminal at the same point while booting. I had him open a new tty and run startx, which failed with a different shared library but the same error: libXext.so.6 has an invalid ELF header!
We had run updates, but I don't recall whether X's shared libraries were touched. Even if they were, though, that shouldn't affect anything. There were no hard resets between my fixing libgthread and libXext breaking. I'm going to try a clean install; I'm really just hoping we can figure out why this is, because it's an amazing little problem.
Since last year I have had this problem and it seems nobody knows how to fix it. Here is my post from last year describing the same problem: [URL]. Well, this is what happens to me: I insert my Ubuntu / Ubuntu Studio DVD into my PC. It starts from the DVD and the first screen appears. I select "Install Ubuntu" and then it starts to load for about 5-10 seconds.
Then another screen pops up; this one tells me to choose the main language, BUT I can't use my keyboard on this screen. It just... dies. I have tried with about 5 keyboards, both USB keyboards and serial keyboards, and it happens every time. I was an avid user of Ubuntu Studio about 2 years ago with my laptop, but with this PC (Gateway GT4230m) I just can't do it anymore. It's very frustrating.
When I try to log in to my Ubuntu Server, I get the following error: "PAM Failure, aborting: Critical error - immediate abort". This occurs whether I try to log in locally or via SSH. I have tried logging in with every account and all produce the same error. Ubuntu Server edition 11.
1- vsftpd is running on the CentOS server. The server is on the same home network. SELinux and the firewall are disabled. I can use FileZilla from a Windows PC on another computer, connect, and see the directory listing. There is a user named adminftp who is a member of the root group. When I try to upload or download, I get an FTP critical error message.
2- How can I install Adobe Acrobat Reader, Flash Player and Adobe Shockwave on this Linux box?
I accidentally deleted a folder containing a VMware Server virtual machine that holds very critical information. The host OS is CentOS 5.5, which I believe uses ext3 by default. I shut down the PC immediately after noticing this. Is there any chance of recovering the files? Could they be mounted in the same or another virtual machine? I need to get this information somehow; there are no backups. Which software can I use?
The following two pieces of code share printing to stdout via a POSIX semaphore /dev/shm/sem.abcd
sema1.c:
Code:
int j;
sem_t *sem = sem_open( "/abcd", O_CREAT, S_IRUSR|S_IWUSR, 1 );
j = 0;
while (j < 100) {
[Code].....
If started at the same time, the first will finish in about 10 seconds; the second in about 20 seconds.
What I want to ask is: if the first program crashes at Checkpoint A, then B never gets to continue. How do programmers normally avoid this kind of deadlock caused by a crash inside the critical section?
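One answer I have seen (for mutexes rather than POSIX semaphores) is a robust, process-shared pthread mutex: if the holder dies, the next locker gets EOWNERDEAD instead of blocking forever and can repair the shared data. A sketch, assuming a glibc recent enough to provide pthread_mutexattr_setrobust() and pthread_mutex_consistent() (older versions spell them with an _np suffix):

Code:
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

/* Initialise a mutex that lives in shared memory so both programs can use
 * it (PTHREAD_PROCESS_SHARED) and so a crashed owner is detectable
 * (PTHREAD_MUTEX_ROBUST). */
void init_robust_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
}

void enter_critical(pthread_mutex_t *m)
{
    int rc = pthread_mutex_lock(m);

    if (rc == EOWNERDEAD) {
        /* The previous owner died inside the critical section. We now hold
         * the lock: repair any half-written shared state here, then mark
         * the mutex usable again instead of staying deadlocked. */
        fprintf(stderr, "previous owner died while holding the lock\n");
        pthread_mutex_consistent(m);
    }
}

With plain semaphores, the usual fallbacks are a sem_timedwait() with a sanity check, or a watchdog that re-posts the semaphore, but neither is as clean.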
I have two questions. 1- How can I set up an FTP server for the first time on CentOS? 2- I want to give the FTP user full root access to the directory /var/www/html so he can upload or download files and folders without getting the "FTP Critical file transfer error". From the command prompt, how can I give the user test root access to /var/www/html, including all the folders, sub-folders and files, in one shot?