I have a server running Ubuntu that is running out of space. At one point the lack of space killed the MySQL server, but we got it back up after clearing some room and rebooting. However, I would like to extend the space we have on the server. Can I connect a USB drive to the server and just use that? The server supports about 8 web developers. My other thought was to add a NAS device to the network and use that instead. I just need the pros and cons so I can tell the head honcho why we should go one way versus the other.
The current system is running without a GUI at this time. I wouldn't mind some good suggestions for a backup solution that works well without a GUI, too.
I am trying to set up mutt with fetchmail and procmail on server space that's not mine. I have access to /home/myusername but not to /var/spool/myusername. Everything seems to work well, but I have no idea where fetchmail (or procmail) is dropping off my mail.
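By default fetchmail hands mail to the local delivery agent, and procmail without a DEFAULT set delivers to the system spool you can't read. A minimal pair of config sketches that keeps everything under your home directory (the poll host is a placeholder; the procmail variables are standard):

```
# ~/.fetchmailrc -- poll host is an assumption, adjust to your provider
poll mail.example.com protocol IMAP user "myusername" mda "/usr/bin/procmail -d %T"

# ~/.procmailrc -- deliver into $HOME instead of /var/spool
MAILDIR=$HOME/Mail
DEFAULT=$HOME/Mail/inbox
LOGFILE=$HOME/Mail/procmail.log
```

Then point mutt at the same place with `set spoolfile=~/Mail/inbox` in ~/.muttrc; the LOGFILE also tells you exactly where each message went.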
I have to create a webhost on a running Fedora server that serves multiple web pages plus a ColdFusion server, and I need to add a ColdFusion virtual host to these. What I would do:
* create a new user and group
* edit vhosts.conf, copy an existing host entry, and modify it for the new one
* create a new folder and copy the main files (phpstarter and webroot)
* chown the files to the right user
I think an Apache graceful restart would be needed after that.
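The copied vhosts.conf entry might look roughly like this; the names and paths are placeholders modeled on a typical setup, not your actual config:

```
<VirtualHost *:80>
    ServerName newsite.example.com
    DocumentRoot /var/www/newsite/webroot
    ErrorLog logs/newsite-error_log
    CustomLog logs/newsite-access_log common
</VirtualHost>
```

After editing, `apachectl configtest` catches syntax mistakes before `apachectl graceful` picks up the change without dropping active connections.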
Running CentOS 5 x64, and today my httpd is running very slowly and I can't find a fix. I've looked all over different forums.
When starting httpd I get the message: /var/lock/subsys/httpd': No space left on device. I checked the directory above and there is no file called httpd; I also tried rebooting the server.
I can't do updates either:
[root@u15438957 ~]# yum update
Loaded plugins: fastestmirror, priorities
rpmdb: unable to join the environment
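"No space left on device" can mean the filesystem is out of inodes even when block space looks free, which would also explain the rpmdb failure. A quick check of both for the filesystem holding /var:

```shell
#!/bin/sh
# Either blocks or inodes at 100% produces "No space left on device".
df -h /var    # block usage
df -i /var    # inode usage -- often the culprit with many tiny files
```

If inodes are exhausted, look for directories full of small files (mail queues, session files, old logs) rather than large files.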
I'm trying to re-install the OpenERP server (5.0.14) on a remote server running the latest version of Ubuntu (10.04). I installed the OpenERP server:
sudo apt-get install openerp-server
But when I try sudo apt-get remove openerp-server, I get an error saying the user is still logged in:
Reading database ... 27385 files and directories currently installed.)
Removing openerp-server ...
userdel: user openerp is currently logged in
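The userdel failure just means processes are still running as the openerp user. A sketch to find them before retrying the removal; stopping them would then be something like `sudo /etc/init.d/openerp-server stop` or `sudo pkill -u openerp` (both assumptions about your setup):

```shell
#!/bin/sh
# List any processes still running as the given account; if some show up,
# stop them and then rerun: sudo apt-get remove openerp-server
user=openerp
pgrep -a -u "$user" 2>/dev/null || echo "no processes running as $user"
```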
I am using Postfix with MailScanner as a spam gateway to protect my mail server running Sendmail. The problem is that when I forward an email, MailScanner mails me back with the following error:
<postmaster@localhost.@mydomain.com.>... Real domain name required for sender address (in reply to MAIL FROM command))
Jul 27 13:15:59 smtp postfix/local[28465]: C68AC1000001: to=<root@smtp.mydomain.com>,
I'm running the current release of Debian with the 2.6.26-2 kernel. This is an upgrade from an older (2.4 kernel series) Red Hat release. One of the things I had working on the older system was a DNS server with an accompanying monthly update of the root hints file. I tried working through a DNS how-to to set this up again, but it seems much has moved around since I last played with this: none of the files listed in the how-to are where it says they should be. I am looking for a better reference on keeping the DNS server running with current root server information.
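On Debian the BIND configuration now lives under /etc/bind rather than the paths older how-tos use. For the monthly root-hints refresh, a cron sketch; the db.root path and the use of dig are assumptions (recent BIND packages ship a reasonably current hints file, so this step is less critical than it used to be):

```
# /etc/cron.d/roothints -- refresh the BIND root hints once a month
0 4 1 * * root dig @a.root-servers.net . NS > /etc/bind/db.root && rndc reload
```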
I am not seeing what I am doing wrong here, but here goes:
From my server I need to run a command for backup on 25 remote servers (through a script). I have pushed the public keys for remote SSH connectivity to all of them and it works (I can push files using rsync without needing to enter passwords on the remote servers); however, I need to run the following command:
ssh odsadmin@10.139.111.1 'cp -a /var/www/life /var/www/life-v4'
When I run this command, I keep getting asked for the password. I even tried putting sudo in front of the cp, but I still get the password prompt.
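To see why key auth is being skipped for that one command, verbose mode plus BatchMode (which fails fast instead of prompting) usually points at the cause, most commonly over-permissive modes on the remote ~/.ssh (should be 700) or authorized_keys (600). ConnectTimeout is added here so the command returns quickly if the host is unreachable:

```shell
#!/bin/sh
# -v prints which keys are offered and why the server rejects them;
# BatchMode=yes makes ssh fail instead of falling back to a password prompt.
ssh -v -o BatchMode=yes -o ConnectTimeout=5 \
    odsadmin@10.139.111.1 'cp -a /var/www/life /var/www/life-v4' \
  || echo "key auth failed -- check perms on remote ~/.ssh (700) and authorized_keys (600)"
```

Note also that putting sudo in front of cp introduces a second, unrelated password prompt (sudo's own) unless NOPASSWD is configured in the remote sudoers.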
I currently have CentOS 4.4 (I believe) running with a 250GB hard drive, and I want to make an image of that hard drive. I tried removing the drive and connecting it to my Windows PC using an adapter that lets the machine treat it as a regular external hard drive; of course, Windows doesn't recognize the drive since it is Linux-partitioned. I am now thinking that I need to keep the source drive in the box, put a blank drive in alongside it, boot from a live CD, and use cat or dd to copy it. I have seen the commands before, but I am thinking this is the only way. Basically I want a duplicate of the drive so I can build a whole new server that is already set up; I will just change the host name and assign it another public-facing IP. Is this correct? Oh, and the new server will have different hardware, and might even be AMD where the source is Intel (or vice versa).
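The dd step itself, as a sketch: from the live CD you would copy whole-disk device to whole-disk device, where /dev/sdX (source) and /dev/sdY (target) are placeholders you must confirm with fdisk -l first, since swapping if= and of= destroys the source. The same steps are shown on files here so they are safe to try:

```shell
#!/bin/sh
# Clone one disk to another and verify the copy. On the live CD this would be:
#   dd if=/dev/sdX of=/dev/sdY bs=4M     # placeholders -- verify with fdisk -l!
# Demonstrated with files:
dd if=/dev/zero of=/tmp/source.img bs=1M count=4 2>/dev/null   # stand-in for the source disk
dd if=/tmp/source.img of=/tmp/clone.img bs=1M 2>/dev/null      # the actual clone step
md5sum /tmp/source.img /tmp/clone.img                          # checksums must match
```

Since the target hardware differs, expect to adjust drivers and network configuration on first boot (on CentOS 4, kudzu will prompt for most hardware changes), but the approach itself is sound.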
I have a WordPress blogging server up and running, and I've also got Nagios monitoring webpage download speed, etc. The thing is, a couple of weeks ago Nagios alerted me that the blog was returning pages really slowly; when I went to the blog homepage it was very slow for me as well. After about 30 minutes of HTTP connections, some finally loading and some not, Nagios stopped reporting issues. But that's not the end of the story: the speed graphing I've set up in Nagios shows quite clearly that ever since that big slowdown, pages take an average of an extra 2-3 seconds to load. However, nothing drastic has changed, and the data size of the page hasn't really changed at all (that is also monitored).
During that weird period I carried out checks on the server itself: top, free -m, netstat (looking for a possible DoS attack via the connection count), looked at MySQL to see if it was running slowly and what it was processing, checked whether the number of httpd processes had ramped up, and checked whether PHP and web server errors had increased. None of these turned up anything noticeable that would cause such a slow blog response. The average is still elevated and I'm lost as to why this could be. It's niggling at me that something may have gotten in, but I've taken several security steps to try to lock down the WordPress install.
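One way to see where the extra 2-3 seconds go is curl's timing breakdown, which splits a request into DNS, connect, first byte, and total phases; a large gap between connect and first byte points at the server/PHP rather than the network. The URL is a placeholder:

```shell
#!/bin/sh
# Break a page load into phases during a slow period and compare with a
# fast period; run from both the Nagios host and elsewhere if possible.
URL=http://blog.example.com/      # placeholder -- use the real blog URL
curl -s -o /dev/null \
  -w 'dns:       %{time_namelookup}\nconnect:   %{time_connect}\nfirstbyte: %{time_starttransfer}\ntotal:     %{time_total}\n' \
  "$URL" || echo "request failed"
```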
A qmail server is running in the secondary office and it was working fine until yesterday, when qmail suffered some problem. I only came to know about the problem today. After troubleshooting and looking into the server, qmailctl stat shows that all services are working and nothing is going crazy on the server. The maillog is showing errors, which I am pasting here for convenience.
I hope the server forum is the best match for my post; otherwise please redirect me. I have a Fedora 12 desktop machine running at a remote site with no keyboard or screen connected to it. After rebooting it remotely, I was surprised to learn the machine did not come back up again. (It did when I tested it while I was on site.)
I installed Ubuntu 10.10 with Wubi and have been enjoying my Ubuntu experience a lot. I've installed quite a few programs and spent a couple of hours customizing my machine. The problem is I'm running out of disk space. Any ideas on how I can add more space? I have GParted, but I don't know where to move the free space to, because Wubi did the install.
Linux printing appeared to be working fine up until yesterday. Today, typing lpq gives the following:
lpq: Printer 'sdst@other.domain' - cannot open connection - Connection timed out
Make sure the LPD server is running on the server
The /etc/cups/printers.conf file is properly set up, the printers appear in localhost:631, and they are printing test pages. However, all command-line print commands seem to be trying to print to sdst@other.domain. I don't know why printers.conf is being ignored, or why and how sdst@other.domain was added. Seems like it might have been auto-discovered?
Edit: sdst@other.domain was mentioned in /usr/local/etc/lpd.conf. I'm not sure why lpd.conf is being used instead of /etc/cups/printers.conf.
I've just (finally) upgraded to 10.04 desktop, and when I boot I get a login screen, which is quite usual, but once I log in the machine drops to a terminal instead of the usual GUI. I've tried running startx, but I get this error message:
Fatal server error: Server is already active for display 0. If this server is no longer running, remove /tmp/.X0-lock and start again.
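Before deleting the lock file, it's worth checking whether an X server really is still running, since a live server and a stale lock call for different fixes. A sketch:

```shell
#!/bin/sh
# If no X server owns display :0, the lock file is stale and safe to remove.
if pgrep -x Xorg > /dev/null; then
    echo "an X server is still running; log out of it (or kill it) first"
else
    echo "no X server found; /tmp/.X0-lock is stale"
    rm -f /tmp/.X0-lock
fi
```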
I have a RHEL server, and its /boot has only 7MB free out of 122MB total. Below is what's in the folder. Is there anything I should do to clean it up?
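A quick way to see what's eating /boot, largest first; on RHEL the usual culprits are old kernel images (vmlinuz-*, initrd-*) left behind by updates, which can be removed with rpm/yum as long as you keep the kernel that `uname -r` reports as currently running:

```shell
#!/bin/sh
# List the contents of /boot sorted by size, biggest first.
du -ah /boot 2>/dev/null | sort -rh | head -n 10
```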
Before I start a flame war, I'd like to qualify my question: I have a boatload of RAM and a VERY thin install (CLI openSUSE 11.4 x64). If I'm running the most baseline, text-only install, the whole system install is 2GB or less, and I have 8GB of RAM (which I could easily upgrade to 16GB), do I really need a swap partition at all at install time? What purpose could swap serve if I have that much RAM in such a trimmed-down environment?
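One middle ground: skip the swap partition at install time, since a swap file can be added in minutes later if the box ever needs one (without swap, the OOM killer fires instead of the system paging). A sketch, with a small demo file under /tmp; in practice you would likely use something like /swapfile with count=2048 for 2GB:

```shell
#!/bin/sh
# Add swap after the fact with a swap file -- no partition needed.
dd if=/dev/zero of=/tmp/swapfile bs=1M count=64 2>/dev/null
chmod 600 /tmp/swapfile
mkswap /tmp/swapfile            # format the file as swap
# swapon /tmp/swapfile          # enable (needs root); add to /etc/fstab to persist
```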
I'm building an Ubuntu 9.10 home server to essentially back up all my PCs, serve media, and store other large data (I record music and film). Here's what I have as far as storage goes:
4GB CompactFlash: for the OS
2x 500GB WD drives: intended for RAID-1 for backup (which I will in turn back up to an external drive weekly)
3x TB Hitachi drives: intended for RAID-5 for media and storage
Both RAIDs will be software-driven. Now, a few questions: From what I've read, I can benefit from using LVM on top of the RAID. Is this true, and besides the complexity and potential difficulties in recovery in case of disaster, is there a downside to LVM? Would I benefit at all from using smaller logical volumes on the RAID-5, or should I just make one at the full size of the drive?
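The main upside of LVM on top of the RAID-5 is being able to grow, shrink, and snapshot volumes later; the main downside is the extra layer to reassemble during disaster recovery. As a non-runnable sketch of the stack with placeholder devices (/dev/sdX1 etc.) and assumed names:

```
# assemble the RAID-5 from three placeholder partitions
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1
# put LVM on top of it
pvcreate /dev/md1
vgcreate media /dev/md1
lvcreate -L 500G -n video media     # smaller LVs leave unallocated room to grow later
mkfs.xfs /dev/media/video
```

Starting with smaller logical volumes and leaving the rest of the volume group unallocated is the usual hedge: growing a volume later is cheap, shrinking is not.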
Also from what I've read, it seems that XFS may be the best filesystem to use from a stability and performance standpoint. Should I go that route, then? I suppose that if it IS beneficial to have multiple smaller logical volumes, there may come a point where I need to shrink and grow them, and in that case it appears XFS is out of the question (it can grow but not shrink). What's the runner-up, ReiserFS? I currently have swap and /home partitions on the CF card. I'd at the very least like to remove the swap partition and just create a swap file on the RAID-5. Should I move my /home partition to the RAID-5 as well?
Some time ago, my /usr partition's used space started to increase rapidly, and it has currently reached 17.5GB. We have /usr as a separate partition (/dev/sda2).
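To find what is actually growing, one approach is to look for large, recently modified files while staying on the /usr filesystem; the 50MB and 30-day thresholds are arbitrary starting points to tighten or loosen:

```shell
#!/bin/sh
# Large files on /usr modified in the last 30 days; -xdev keeps find
# from descending into other mounted filesystems.
find /usr -xdev -type f -size +50M -mtime -30 -exec ls -lh {} + 2>/dev/null
```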
Sorry to waste the group's time on this one. After killing that and biting my fingers, I re-examined the ps list and saw the tar and bzip2 still running fine (so even though I killed the backup .sh, it was still going along); I simply removed those and all was well again.
Feel free to reply to #3 though. I have 30G of mail, and from reading I know gzip is faster but bzip2 compresses better. So:
a) should tarring that mail actually drop it to 3G total?
b) a rough estimate (I know it's tough): for backing up 30G, is 10 hours longer than expected? (I will run some tests on a smaller folder.)
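On b), one rough way to estimate is to time both compressors on a small sample first and scale up; the sample below is synthetic (an assumption), but the same two tar invocations work on a real mail subfolder. On a), whether 30G drops to ~3G depends entirely on the mail: plain text compresses very well, attachments barely at all.

```bash
#!/bin/bash
# Compare gzip (-z) and bzip2 (-j) on a few MB of compressible sample text;
# scale the timings up to estimate the full 30G run.
mkdir -p /tmp/mailsample
yes "Subject: sample message body line" | head -n 100000 > /tmp/mailsample/msgs
time tar -czf /tmp/sample.tar.gz  -C /tmp mailsample
time tar -cjf /tmp/sample.tar.bz2 -C /tmp mailsample
ls -l /tmp/sample.tar.gz /tmp/sample.tar.bz2   # compare the two sizes
```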
############### end update
This is on an old RH9 box, backing up mail. I started the job last night in a shell script; it's around 30G of mail, and it was a tar using bzip2 since there was only 20 or so gig of free space. The old backup script was in Perl and just tarred the mail folder, and the total was 3G, so I figured I was safe.
Well, it started last night and this morning it was still running. I did a kill -9 on the PID, and ps now shows it as:
root 9143 0.0 0.0 0 0 ? Z Feb23 0:00 [backup.sh <defunct>]
Disk space was down to 1.3G free (so all that space is being held somewhere temporarily). I removed the old backup (3G, which gave me a little breathing room of 4.5G), but the backup is still running somehow, as the 4.5 is now 4.3. I tried pkill and kill -9 on the PID; I read someone suggesting to restart the job, which I did and then killed, but nothing.
I really can't reboot this production box, so on RH9 I need to:
1. kill that defunct backup
2. remove the temp storage it has made
3. figure out why it's taken 10+ hours and not finished.
#!/usr/bin/perl
use Term::ANSIColor;
#### TIME DETAILS ######
print colored("PLEASE SPECIFY HOURS FOR THE FILESIZEREPORT TO RUN ", 'bold green on_blue');
$hrstorun = <STDIN>;
[root@linux root]# fdisk -l
Disk /dev/hda: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot    Start   End    Blocks     Id  System
/dev/hda1 *    1       13     104391     83  Linux
/dev/hda2      1276    4864   28828642+  f   Win95 Ext'd (LBA)
/dev/hda3      14      395    3068415    83  Linux
/dev/hda4      396     526    1052257+   82  Linux swap
/dev/hda5      1276    3187   15358108+  7   HPFS/NTFS
/dev/hda6      3188    3249   497983+    8e  Linux LVM
/dev/hda7      3250    3311   497983+    8e  Linux LVM
From this, how much capacity does /dev/hda5 (the NTFS partition) have? I need the space in MB.
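The Blocks column in fdisk output is in 1 KiB units, so the size of /dev/hda5 works out directly:

```shell
#!/bin/sh
# /dev/hda5 has 15358108 one-KiB blocks (the trailing "+" means slightly more)
echo "$((15358108 / 1024)) MB"      # -> 14998 MB
```

That is about 14.6 GB.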
What is the best way to create a hard (OS-level) disk quota on folders? In my web root /var/www/lighttpd I have a folder called domains, and I want to set a quota on each domain folder. The quota sizes will vary per folder. Is there a way to do this without creating a user for every domain? Currently every folder is owned by the lighttpd user and group.
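Standard Linux quotas are per-user or per-group rather than per-directory, so without per-domain users one common workaround is to give each domain folder its own fixed-size filesystem image: the domain can then never exceed the image size. A sketch; the domain name is a placeholder, the image is created under /tmp and kept small for illustration, and the mount step needs root:

```shell
#!/bin/sh
# Create an image, format it, and (as root) loop-mount it at the domain folder.
dd if=/dev/zero of=/tmp/example.com.img bs=1M count=32 2>/dev/null
mkfs.ext3 -F -q /tmp/example.com.img
# as root, something like:
#   mount -o loop /tmp/example.com.img /var/www/lighttpd/domains/example.com
#   chown lighttpd:lighttpd /var/www/lighttpd/domains/example.com
```

Per-folder sizes then just mean per-folder image sizes, and /etc/fstab entries make the mounts persistent.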