I had a RAID 5 + LVM2 array and lost a disk. While the array was rebuilding, the power went out and the recovery stopped. When power was restored and I started the machine, the array would not start: it was degraded, and the reported states differed between disks. Each disk saw the array in a different way. Here are the states:
Quote:
/dev/sdd1:
      Number   Major   Minor   RaidDevice   State
this       0       8      33            0   active sync   /dev/sdc1
[code]....
The first part of the output is the same for all the devices, so I'm only posting the states. Another thing is that all the devices say there is no superblock. It seems that 3 disks are "active sync", but the states of the others don't match each other. And /dev/sdd1 is the spare, the disk I added manually at first to start the recovery process.
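For reference, comparing the superblocks and event counts on each member is usually the first step in this situation; a hedged sketch (device names are assumptions based on the output above):
Code:
# Compare event counts and states across all members
mdadm --examine /dev/sd[a-d]1 | grep -E 'Events|State|/dev/'
# If the event counts are close, a forced assemble often recovers the array
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1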
We had to reboot a server in the middle of a production day due to 99% iowait (lots of processes in deep sleep waiting for disk I/O). That's never happened to us before. It had been 363 days since the last fsck, so one started automatically on reboot. It hung at 4.8% on a 2TB LVM2 volume for about an hour. I killed the fsck and rebooted the server. The second time, it got past that point and is currently at about 62%. First, what causes e2fsck to hang like that? Second, is there any danger in killing e2fsck, rebooting, and starting it again?
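For anyone hitting the same thing, the settings that trigger the boot-time check can be inspected and tuned; a hedged sketch (the device path is an assumption):
Code:
# Show the mount-count / time-based check settings
tune2fs -l /dev/VolGroup00/LogVol00 | grep -iE 'mount count|check'
# Run the check manually with a progress bar instead of at boot
e2fsck -f -C0 /dev/VolGroup00/LogVol00
# Optionally disable the time-based check and rely on scheduled manual runs
tune2fs -i 0 /dev/VolGroup00/LogVol00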
I am learning Linux and documenting troubleshooting procedures for disk-related issues. I would like to know the procedure for replacing a failed hard drive on a system running Red Hat 5.1 with LVM2.
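A hedged outline of the usual LVM-level steps (volume group and device names are assumptions; any hardware RAID or md steps would come first if present):
Code:
pvdisplay                            # identify the failed/missing PV
vgreduce --removemissing VolGroup00  # drop the missing PV from the VG
# physically replace the disk, partition it (type 8e), then:
pvcreate /dev/sdb1
vgextend VolGroup00 /dev/sdb1
lvs -o +devices                      # verify where the LVs now live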
Right now I have an HP DL180 server with a 130 GB disk and 8 GB of RAM after RAID 0+1. I want to configure a domain controller for my office for 200 to 300 users. How should I size the partitions on the 130 GB disk, and will that be sufficient for me?
I am a bit confused about the /usr, /var, and /boot partitions, as I need to manage them carefully within 130 GB.
If I go with 4 GB of swap and the rest for /, will that be fine? Or should I specify separate partition sizes for /tmp, /var, and /usr? One possible layout is sketched below.
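A hedged example layout for a 130 GB disk; the sizes are assumptions, not rules, and a single large / plus swap also works fine for many setups:
Code:
/boot     500 MB
swap        4 GB
/          20 GB
/var       30 GB    # logs and spools grow with 200-300 users
/usr       20 GB
/home   remainder   # or give it to /var if no user home dirs live here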
I split the Apache log with cronolog (version 1.6.2, installed from a tarball). Apache configuration:
Code: CustomLog "|/usr/local/cronolog logs/access_log_%Y_%m_%d" combined
I stopped Apache, deleted all the log files, restarted Apache, and visited a URL, but no access_log_YYYY_MM_DD file was created. There is also no error message in the error_log file.
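Two things worth checking: the default tarball install puts the binary at /usr/local/sbin/cronolog rather than /usr/local/cronolog (verify with `which cronolog`), and the log template is resolved by cronolog itself, so an absolute path is safer. A hedged variant (the logs directory is an assumption):
Code:
CustomLog "|/usr/local/sbin/cronolog /usr/local/apache/logs/access_log_%Y_%m_%d" combined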
I'm about to start trying to write a script to back up / with something like tar or dd, split into 24 MB files, and email it to myself. What could possibly go wrong? I'm amazed I couldn't find anything in my searches on this specifically. What I want is to go from a Gentoo rescue CD to a restored system as easily as possible. I tried rsync over gmailfs, but that just froze; I can't really see that working too well anyway.
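For the tar-and-split half, a hedged sketch (the mount point and exclusions are assumptions; the email step is left out):
Code:
# Archive the root filesystem and cut it into 24 MB pieces
tar czpf - --one-file-system --exclude=/proc --exclude=/sys / \
    | split -b 24m - /mnt/backup/rootfs.tar.gz.
# Restore from the rescue CD:
cat /mnt/backup/rootfs.tar.gz.* | tar xzpf - -C /mnt/gentoo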
Our system uses email to send fairly time-sensitive status messages between programs running on various servers on a WAN. Each email message is sent to two addresses (different servers). The problem occurs when one of the destination mail servers is off the network. I think because it's trying to send one email to two addresses, sendmail attempts delivery to the first address, then to the second address (i.e., serially). When this happens, it hangs for two connect-timeout (CONNECT_TO) periods trying to connect to the offline destination, and only after the timeout does it deliver to the other destination. I'm trying to figure out how to work around that connection delay so it doesn't hold up delivery to the other destination.
I'm working with the network guys to enable the right ICMP messages that signal when a network is unavailable, but I would also like to try having sendmail split the emails into two envelopes, then use parallel, independent connections for delivery.
After days of reading through the docs (the O'Reilly sendmail book plus the sendmail docs), I think one way to do this is to use multiple mail queues, but I can't decipher exactly how to do that from the docs.
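For reference, a hedged sketch of the queue-group approach in sendmail.mc (queue name and path are assumptions); the lowercase r= equate caps recipients per envelope, which is what forces the split into independent deliveries:
Code:
dnl Select queue groups per recipient via the access map
FEATURE(`queuegroup')dnl
dnl One-recipient envelopes, several parallel queue runners
QUEUE_GROUP(`fastsplit', `P=/var/spool/mqueue/fastsplit, R=4, r=1, F=f')dnl
The matching access-map entry would look like `QGRP:example.com fastsplit`, and the queue directory has to exist before sendmail is restarted.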
There might be other, more elegant ways to do the same thing, but again, trying to decipher the docs has my head swimming. (This is my first experience with sendmail.)
I have an OpenSuse 11.2 system that is running 2 BBS systems independently, both of which are capable of receiving SMTP mail on port 25. What I would like to do is set up Postfix on the OpenSuse OS to receive all mail for both domains and then hand the relevant mail to the correct BBS. Postfix would listen on port 25 externally, and the 2 BBS applications would listen on different ports on the localhost address. At least that is the plan.
How do I do this? I want to set it up and still make sure Postfix is secure, without accidentally opening up any nasty relay holes.
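A hedged sketch using Postfix transport maps (domain names and ports are assumptions); listing the domains in relay_domains rather than widening mynetworks keeps the relay closed to everything else:
Code:
# /etc/postfix/main.cf
relay_domains = bbs1.example.com, bbs2.example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport
bbs1.example.com    smtp:[127.0.0.1]:2525
bbs2.example.com    smtp:[127.0.0.1]:2526

# then:
postmap /etc/postfix/transport && postfix reload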
Does squid automatically split bandwidth between connected clients? I'm wondering, if someone was downloading a lot of data and someone else connected, whether it would split the access 50:50 between them. I have 1 user who is using a lot of bandwidth, but the server doesn't seem to split it up between all connected clients, so others are getting slow access. I don't have this client's IP address, but I do have NCSA auth enabled. Will delay_pools work with an NCSA username?
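Squid does not balance bandwidth between clients on its own, but delay_pools can throttle specific traffic, and a proxy_auth ACL matches NCSA usernames; a hedged sketch (the username and rates are assumptions):
Code:
# squid.conf (class 1 = a single aggregate bucket for whatever matches)
acl heavyuser proxy_auth bob
delay_pools 1
delay_class 1 1
delay_parameters 1 64000/64000   # ~512 kbit/s restore/max for this pool
delay_access 1 allow heavyuser
delay_access 1 deny all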
I have installed Rocks Cluster and Ganglia to monitor the system. When I try http://localhost/ganglia it works fine, but when I go to http://myipaddress/ganglia it does not work.
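The usual culprit is the Apache config shipped with Ganglia, which often restricts the /ganglia location to localhost; a hedged example of loosening it (the file path and subnet are assumptions; Apache 2.2 syntax):
Code:
# /etc/httpd/conf.d/ganglia.conf
<Location /ganglia>
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
    Allow from 192.168.0.0/24   # your LAN
</Location>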
I've set up a two-node cluster using Pacemaker/OpenAIS. I have only one network, and if I break it, communication between the nodes is interrupted. With a ClusterIP resource, when the network is broken, each node starts the ClusterIP, so the same IP is up twice. Is there a way to define the preferred location of a resource when the connection between these nodes is broken?
A <==> B
If one of the nodes loses the network, a previously written rule should apply: start the resource on A (for example). B will know that it's not the preferred node and will stop serving. Is that possible? The issue is that if both nodes are up but the link between them fails while clients can still reach both nodes, the split brain is problematic. A better solution would be a rule for when split brain occurs: any node that can't reach the gateway stops all its resources. That way, if it's A that loses the network, the service will start on B, and only on B, without any problem. A hedged sketch of that approach follows.
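Pacemaker's ping resource agent implements exactly this: each node pings the gateway, the result lands in the pingd node attribute, and a location rule banishes resources from nodes that lost connectivity; a hedged sketch in crm shell syntax (the gateway IP and names are assumptions):
Code:
primitive p_ping ocf:pacemaker:ping \
    params host_list="192.168.1.1" multiplier="100" \
    op monitor interval="10s"
clone cl_ping p_ping
location loc_ip_needs_net ClusterIP \
    rule -inf: not_defined pingd or pingd lte 0
On a two-node cluster, proper fencing (STONITH) is still the only real protection against split brain.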
I'm trying to get my simple home web server on the internet, but I cannot seem to make it work. I've set up a LAMP stack to host my website, and it works perfectly on my local network (accessing from [URL].. but not from the internet. To test it for now, before I set up a dynamic DNS service, I am trying to access my website via my WAN IP address from inside and outside my home network (i.e. http://69. When I do this, I get a "taking too long to respond" message instead of a host-not-found or 404 or something of that nature. My box has a Pentium 4 and 2 GB of RAM and is on a DSL line, so I have a hard time believing anything is actually "taking too long". Here are the software packages I've installed:
All of these packages work perfectly fine from within my home LAN, but NONE work outside of my network.
Other configs:
- My router forwards traffic on port 80 to my server
- My iptables allow incoming traffic on port 80
- My ISP, AT&T, does not block port 80 (or any port, according to various online sources)
Perhaps Apache is not configured correctly? What apache config options would be related to this problem?
I've previously tried a similar setup with the dyndns service fully configured (I followed a very thorough guide to the letter; wish I had the link, it was excellent), but to no avail: I got the same "too long to respond" when accessing via my domain name. I understand that there are a multitude of causes for this problem, so what can I do to narrow down the source? I've followed more than one "How to set up a LAMP server" guide, and all of them have led me to where I am now.
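Two hedged notes. First, many consumer routers don't do NAT loopback, so hitting the WAN IP from inside the LAN can time out even when the setup is correct; always test from a host outside the network. Second, a quick checklist to narrow things down (paths are assumptions):
Code:
netstat -tlnp | grep :80        # is Apache on 0.0.0.0:80, not just 127.0.0.1?
sudo iptables -L -n -v          # does the ACCEPT rule for port 80 see packets?
grep -R '^Listen' /etc/apache2/ /etc/httpd/ 2>/dev/null
tail -f /var/log/apache2/access.log   # do outside requests arrive at all?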
I have a LAMP server set up with AjaXplorer that runs locally with a static IP address. All you have to do, if you're connected to our network, is go to code...
Now I want to take the next step and make it available on the internet, so that I can access my files from anywhere in the world.
How do I go about doing this, and what precautions do I need to take?
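A hedged outline of the usual steps (package names assume a Debian/Ubuntu-style Apache layout):
Code:
# 1. Give the server a static LAN IP and forward TCP 80/443 to it on the router
# 2. Point a dynamic DNS name at your WAN IP, e.g. with ddclient:
sudo apt-get install ddclient            # then edit /etc/ddclient.conf
# 3. Serve AjaXplorer over HTTPS and enforce strong passwords:
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo service apache2 reload
Once the box is reachable from the internet, keeping Apache, PHP, and AjaXplorer patched matters more than anything else.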
I am troubleshooting one server that has three 73 GB disks in a 140 GB RAID 5 array. When I check the array, the logical volume shows one disk as not online (of HDD 0, HDD 1, and HDD 2, disk 0 is not showing online). When we set it online and save, it goes offline again after a restart.
I tried to do a search for this, but it's a bit of a tricky thing to search for. Basically what I'm after is a solution for work so I can stream an online radio station down to 1 server, and then have the clients stream the audio from that. Anyone know if that's possible?
so: Online Radio >> Server >> Clients
Any Linux or winblows solution would be fine; I'm just trying to find something that would cut down on internet bandwidth usage but still allow users to listen to online radio.
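Icecast can do this on the server side: it pulls the upstream stream once and re-serves it to local clients; a hedged snippet for icecast.xml (the host, port, and mount names are assumptions):
Code:
<relay>
    <server>stream.example.com</server>
    <port>8000</port>
    <mount>/live</mount>
    <local-mount>/radio</local-mount>
    <on-demand>1</on-demand>   <!-- only pull upstream while someone listens -->
</relay>
Clients then point their players at http://yourserver:8000/radio, so the upstream stream is fetched only once regardless of how many people listen.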
Some time ago, while I was still using Ubuntu 8.10 I began accessing Internet through a proxy server. Even though I had set my system proxy settings - including authentication - to pass the proxy, my online backup service (Jungle Disk) stopped connecting. I finally figured out that I had to include proxy port in the Jungle Disk Desktop tool's configuration (like this: 192.168.0.1:8080). That worked fine until I recently upgraded to Ubuntu 10.04 - with the identical proxy setup. This time I can't get any connection at all. Here's the Jungle Disk report:
I want to turn an old desktop into a small file server for the usual movies, music, pictures, etc. I want to be able to access this not only on my home network; I also want to be able to access it online so there is a way to get to my files remotely if needed. I was wondering if Ubuntu Server is what I must use, or would the desktop edition do the trick?
Recently I set up a Debian server following this guide:
However, I wasn't able to connect to the internet after the clean install of Debian.
I found a fix:
(Here x is a number not in use by any other machine, and y is the number on the gateway.)
These commands are appended to /etc/rc.d/rc.local and work fine.
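The actual commands were lost in quoting; hypothetically, a fix matching that description would look something like this (the interface, subnet, and the x/y placeholders are assumptions):
Code:
ifconfig eth0 192.168.1.x netmask 255.255.255.0 up
route add default gw 192.168.1.y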
The server is virtual and running on a VMware installation.
The problem I have appears randomly after the server has been started, sometimes after 20 hours and sometimes after only 9.
I can't connect to the server through the web (server.mydomain.com). However, I can reach the computer running VMware at (host.mydomain.com). In VMware's console window I can access the Debian installation, which is running fine with no errors; I can even ping remote IP addresses from the Debian server without problems. Restarting the server fixes the problem, but it then reappears after some time.
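Since outbound traffic still works while inbound dies, a stale ARP entry or a bridged-NIC hiccup is a common suspect with VMware guests; some hedged checks to run while the problem is happening (the interface name is an assumption):
Code:
ip neigh show                  # stale/incomplete ARP entries on the guest?
dmesg | tail -n 20             # NIC resets or link flaps logged?
tcpdump -ni eth0 port 80       # do outside requests reach the guest at all?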
Is there a way I can install a Windows program to webspace and run it from a Linux client computer? To be specific: I'm a student at the University of Minnesota. I have access to Linux machines running Ubuntu; however, the space allocated to us is too small for the program I would like to run on these computers. I do have webspace I can use, though. The thing I'd like to run on the Linux computers is a Windows application requiring installation. So is there a way to put/install/(whatever else it might be called) this program onto my webspace and be able to run it from the Linux computers? I know it's probably unlikely, but maybe?
How can I increase the size of a server partition that shows up as /dev/loop0?
Disk information:
Device        Mount point   Usage
/dev/loop0    /var/tmp      2%  (11,070 of 495,844)
/dev/sda1     /             46% (100,819,056 of 233,872,292)
/usr/tmpDSK   /tmp          3%  (11,070 of 495,844)
I use WHM 11.30.1 (build 4) on CentOS 5.6 i686 standard, on ds-59085.
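On a cPanel box, /dev/loop0 is usually the /usr/tmpDSK file loop-mounted over /tmp and /var/tmp, so growing it means recreating the backing file; a hedged sketch (the 2 GB size is an assumption; note that the current contents of /tmp are lost, and services holding files open there should be stopped first):
Code:
umount /var/tmp /tmp                   # stop mysql/apache first if needed
dd if=/dev/zero of=/usr/tmpDSK bs=1M count=2048   # new 2 GB backing file
mkfs.ext3 -F /usr/tmpDSK               # -F: it's a file, not a block device
mount -o loop,noexec,nosuid /usr/tmpDSK /tmp
mount --bind /tmp /var/tmp
chmod 1777 /tmp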
I am away from the office and have a laptop with Windows and an internet connection. I want to test one small program on Linux. Is there a free Linux server available on the internet where I can upload my program, compile it, and execute it? The program is generic, and there are no restrictions regarding the Linux version.
I have two identical 160 GB hard drives and I'm planning on setting up a server, probably Ubuntu, for Glassfish, MySQL, and Subversion. Since I'm using those applications, I'm assuming I should have a large /var partition for MySQL and /opt for Glassfish, and I'm not sure about Subversion. Is there a good partition layout you can suggest for my 2 drives?
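A hedged suggestion: mirror the two drives with md RAID 1 and put LVM on top, so the sizes stay adjustable later; the figures below are assumptions, not rules:
Code:
# md0 = RAID 1 of sda + sdb, with LVM on top
/boot (outside LVM)      500 MB
swap                       4 GB
/                         15 GB
/opt                      20 GB   # Glassfish
/var                      60 GB   # MySQL data lives under /var/lib/mysql
/srv/svn                  30 GB   # Subversion repositories
free space in the VG   remainder  # grow whichever LV fills up first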
When my PC is connected directly to the LAN modem, my server can go online, the IP connects to the domain, and everything is cool.
When my PC is connected to the router, which is connected to the modem, I can't make my server go online; it asks for some Linksys authorisation.
How can I teach my server to ignore my router and go directly to the modem? Unfortunately I can't just connect the server to the modem, because 3 more PCs are connected through the hub.
When the system was set up, it was set up with one LVM physical volume and one logical volume (aside from swap and boot) as the "default" partitioning. Now I want to make /var sit on its own partition.
1. I booted with a live CD and reduced LogVol00 from 1.5T to 100G (I plan to split out more than just /var, but I'll start with /var).
2. I created a new 100G ext3 logical volume, LogVol02 (steps 1 and 2 done with the LVM GUI utility), which seems to have worked just fine.
3. I did the following (from the live CD):
[Code]...
Right now, if I had to reinstall and define each partition at install time, it wouldn't be a problem; the system hasn't been fully configured. But the point of the exercise was to work with LVM (I've never used it before; I always used just fdisk) and learn to split directories off into their own partitions.
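For comparison, a hedged sketch of the usual sequence from a live CD once the new LV exists (the VG/LV names are assumptions; the poster's actual commands were in the elided code block above):
Code:
mkfs.ext3 /dev/VolGroup00/LogVol02      # skip if the GUI already made the FS
mkdir -p /mnt/root /mnt/newvar
mount /dev/VolGroup00/LogVol00 /mnt/root
mount /dev/VolGroup00/LogVol02 /mnt/newvar
cp -a /mnt/root/var/. /mnt/newvar/      # copy preserving ownership/permissions
mv /mnt/root/var /mnt/root/var.old      # keep until the new mount is verified
mkdir /mnt/root/var
echo '/dev/VolGroup00/LogVol02 /var ext3 defaults 1 2' >> /mnt/root/etc/fstab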
The server has an extended partition of around 490 GB. I want to split the extended partition into two partitions (ut0 and ut1) of 100 GB each. How do I split the extended partition in Red Hat Linux 5?
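Logical partitions are carved out inside the extended one, so fdisk can do this; a hedged sketch (the device and partition numbers are assumptions):
Code:
fdisk /dev/sdb        # n -> l (logical), +102400M  => /dev/sdb5 (100 GB)
                      # n -> l (logical), +102400M  => /dev/sdb6 (100 GB)
                      # w to write the table
partprobe /dev/sdb
mkfs.ext3 /dev/sdb5 && mkfs.ext3 /dev/sdb6
mkdir -p /ut0 /ut1
mount /dev/sdb5 /ut0 && mount /dev/sdb6 /ut1   # add fstab entries to persist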
Is there any way to split the partition my Ubuntu 11.04 is on? I don't want to lose data, but I don't want to reinstall either. P.S. I now have a 750 GB HDD, and I want to split off ~100 GB for dual-booting Win7.
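The safe route is to boot a GParted live CD/USB, since the root filesystem can't be shrunk while mounted; a hedged command-line equivalent for reference (the device name and target size are assumptions, and a backup first is strongly advised):
Code:
# From the live environment, with the partition unmounted:
e2fsck -f /dev/sda1            # filesystem must be clean before resizing
resize2fs /dev/sda1 550G       # shrink the filesystem first...
# ...then shrink the partition itself to match (GParted does both steps
# safely in one go), leaving ~100 GB unallocated for the Win7 install.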