Server :: Loading 4 Versions Of Ubuntu To Server / Gets Hung On AGP Chipset
Jan 23, 2010
Have a McFee server: SuperMicro 370DER, dual P3 1 GHz, 256 MB RAM, two 18.2 GB SCSI (LVD) drives. I have tried loading four versions of Ubuntu on my server, from 6.06 to 9.10, and each one gets hung on the AGP chipset. I don't know where to go from here to get it to work.
I have several Xen virtual machines, but the issue occurs on only one of them. Sometimes (about 4 times per month) the server hangs. I can ping the server on the network, but I am not able to log in using the local console or an SSH session.
[Code]...
It sounds like the issue is caused by some Java application, but I am not sure whether this message points to the root cause or whether Java is only affected as a side effect of the issue.
Could you help me with this? I need more details about the error message described below, and I also need to identify the root cause and resolve the issue. Is there any way to determine which Java application is causing the problem?
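When a box still answers ping but SSH is dead, a serial or console session during the next hang is the usual way in; once you reach a shell, thread dumps narrow down which JVM is stuck. A minimal sketch, assuming standard JDK tools are installed (none of the commands or PIDs below come from the post itself):

```shell
# List candidate Java processes (pgrep is portable; jps ships with the JDK).
pgrep -a java || echo "no Java process found"

# For a suspect PID, take a few thread dumps ~10s apart and compare them;
# threads parked in the same stack every time point at the culprit:
#   jstack <pid> > /tmp/td.$(date +%s).txt
#   kill -3 <pid>   # alternative: dump goes to the JVM's stdout/log
```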
When power was restored after an outage, my server (running Ubuntu Server 9.10) started back up and got stuck at the GRUB menu ("Version 1.97~beta4", I think): it didn't do the countdown and auto-select the top item like I'm used to. It just sat there, and since the server is headless, I had to dig out my monitor and hook it up to see what was wrong.
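This is likely GRUB 2's "recordfail" behaviour: after a boot that wasn't cleanly recorded (e.g. a power cut), it waits at the menu indefinitely so a broken default entry can't boot-loop. A hedged sketch of the usual workaround for headless boxes; whether the variable is honoured depends on the release (on 9.10 the equivalent edit lives in the recordfail block of /etc/grub.d/00_header):

```
# /etc/default/grub -- force a finite menu timeout even after a failed boot
GRUB_TIMEOUT=5
GRUB_RECORDFAIL_TIMEOUT=5   # honoured on later Ubuntu releases (assumption
                            # that yours supports it; otherwise edit the
                            # recordfail block in /etc/grub.d/00_header)
# then regenerate the config:
#   sudo update-grub
```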
My CUPS server has hung. It no longer asks for a login user/password; it goes straight to the admin page, and when I try to add a printer I get an authorisation error. So I'm unable to do any admin functions: add printers, disable Kerberos authentication, etc.
My sites on the server are loading very slowly, though my server load is under 1.00. What could the problem be? Whenever I restart httpd, the sites start loading instantly again.
I currently have a kickstart server working with RHEL 5.5 and wanted to add a RHEL 6 installation. So I added a RHEL6 directory to my NFS share and put the contents of the DVD in it. I also added a RHEL6 directory to my tftp directory and put the initrd.img and vmlinuz from RHEL 6 in it. In ks.cfg I put: nfs --server 10.0.1.1 --dir /kick (where /kick is the NFS-exported directory). In my pxelinux.cfg directory, I created a file corresponding to the IP address and put in:
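The post cuts off before the pxelinux entry itself. For reference, a hypothetical entry matching the layout described above (the label name is my invention; the RHEL6 paths, NFS server, and /kick export come from the post):

```
default rhel6
label rhel6
  kernel RHEL6/vmlinuz
  append initrd=RHEL6/initrd.img ks=nfs:10.0.1.1:/kick/ks.cfg
```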
I'm running XAMPP 1.7.2 on Ubuntu 8.10 (Linux dt19.im.local 2.6.27-14-generic #1 SMP Tue Aug 18 16:25:45 UTC 2009 i686 GNU/Linux) and am using the PHP 5.3.0 Apache module as standard. For one virtual host I'd like to use PHP 5.2.X, as it is part of a project with a lot of legacy code that is not compatible with PHP 5.3.0. The virtual host configuration block and the applicable directory directive are as follows:
Code:
Checking phpinfo() output on the above virtual host (or using the default virtual host directive and accessing it via http://localhost/[SNIP]/[SNIP]/phpinfo.php rather than [url]) shows PHP 5.3.0 is running. After applying minor tweaks such as adding ScriptAlias or SetEnv options, the problem persists. I've Googled for a good while, checked the permissions and the like, and tried the advice of other users (XAMPP or otherwise), either resulting in PHP 5.3.0 being used or in an HTTP 400 bad request/invalid URI error. I've stuck with the configuration above as this is correct according to the PHP manual.
FYI, cgi-bin/php-5.2.6 is a symbolic link to /opt/lampp/bin/php-5.2.6 (I've added the FollowSymLinks option to the cgi-bin directory directive in httpd.conf). I've tried installing php5-cgi from the Ubuntu repos and setting it up in a similar way, to no avail. I've also tried copying the executables into the cgi-bin directory, pointing the Action line directly at bin/php-5.2.6, and dropping the -c /opt/lampp/etc/php.ini-pre1.7.2 option from the Action line. I've even tried commenting out the LoadModule lines for PHP, which results in an HTTP 400 bad request/invalid URI error. This suggests that the PHP CGI setup is being ignored.
I've checked httpd.conf and the extra/httpd-*.conf files and ensured all required includes are being loaded. I know it's probably something stupid on my part causing this! Given that I've also tried the PHP CGI builds in the Ubuntu repos, I don't think this is an XAMPP-specific issue.
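For comparison, a minimal per-vhost setup that runs .php through a CGI binary while mod_php serves everything else. This is a sketch: the DocumentRoot path is an assumption (only the cgi-bin/php-5.2.6 name comes from the post). The key detail is switching mod_php off inside the vhost so the Action handler is actually consulted:

```apache
<VirtualHost *:80>
    DocumentRoot "/opt/lampp/htdocs/legacy"   # assumed path
    ScriptAlias /php52-cgi /opt/lampp/cgi-bin/php-5.2.6
    <Directory "/opt/lampp/htdocs/legacy">
        php_admin_flag engine off             # stop mod_php grabbing .php first
        AddHandler application/x-httpd-php52 .php
        Action application/x-httpd-php52 /php52-cgi
        Options +ExecCGI +FollowSymLinks
    </Directory>
</VirtualHost>
```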
I am running my web and game server on Ubuntu 8.04 LTS and am considering reinstalling with a new OS. I would like to try a different OS (most probably CentOS or Debian; I saw a lot of good comments about them), but I'm not sure which version to install. I searched the websites of companies that rent dedicated servers and noticed that they mainly use Debian 4 or 5 and CentOS 5 or 4.7. Which versions do you prefer for CentOS and Debian servers?
I installed Ubuntu Server 8.10, 9.04, and 10.04, and discovered that the packages these versions need for upgrades and for installing graphics settings are no longer available. I then installed the current version, 11.04, and found that its repositories responded and set things up correctly. So are the repositories for previous versions no longer available? When I install the current version, I want to download all the packages I need and save them in case I need them later.
How do I fetch the packages and put together a repository of my own, so that it still operates if the official ones go away? And what maintenance would be needed?
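One way to do this (a sketch, assuming the dpkg-dev package is installed for dpkg-scanpackages; the package names are placeholders) is to download the .debs you care about and index them as a trivial flat local repository:

```shell
# Build a flat local apt repository from downloaded .deb files.
mkdir -p "$HOME/localrepo"
cd "$HOME/localrepo"
# apt-get download <package>   # fetch each .deb without installing it
dpkg-scanpackages . /dev/null 2>/dev/null | gzip -9c > Packages.gz
echo "repo index written: $(ls Packages.gz)"
# Then point apt at it in /etc/apt/sources.list:
#   deb [trusted=yes] file:///home/<you>/localrepo ./
```

Maintenance is mostly re-running the scan whenever you add or update .debs in the directory.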
I have a page which is going to be internationalised and available in several languages. It contains PHP scripts that load, say, the current user's data from the database, plus the internationalised content itself, like a "Welcome, user" message. The problem for me is that the internationalised content is not contiguous; it's all over the page, mixed in with the PHP scripts.
I don't want to use eval(). I've got two ideas, but neither is good enough. 1. One file per language version, with the scripts included: there will be many languages, so there would have to be many files with redundant data, and if I wanted to change the structure of the scripts I would have to change it in all the pages. 2. Load the international data from the database, while the scripts stay on the page: I'm not sure about a good database structure. I mean, how would I get the right content from the database? Would content be split into rows, columns, or something else?
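For option 2, a common shape (a sketch only; the table and column names below are my assumptions, not from the post) is one row per language/key pair, so each page fetches its strings by key and the scripts never change per language:

```sql
-- One translation per (language, key) pair.
CREATE TABLE i18n (
  lang    VARCHAR(5)  NOT NULL,  -- e.g. 'en', 'de'
  msg_key VARCHAR(64) NOT NULL,  -- e.g. 'welcome_user'
  msg     TEXT        NOT NULL,
  PRIMARY KEY (lang, msg_key)
);

-- Page code then does lookups like:
-- SELECT msg FROM i18n WHERE lang = 'en' AND msg_key = 'welcome_user';
```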
I tried hard to stay away from version control, but almost all good job specifications list version control as a requirement, so I figured I had to start somewhere. I've tried to read up on it but haven't had much luck. So I have a few questions: I am confused, and I really want to know how I can use version control in my context and how my working environment will change with it.
I have a Linux VPS server and use cPanel/WHM to create sites in PHP/Joomla. Is version control a piece of software or a script that I can install on my Linux box (like ./configure), or do I have to install it for every site, like a framework? I use Dreamweaver to edit files via FTP. If I install version control, do I still edit files the same way, or does the method change? What about the database, like MySQL: does it stay the same, or is it version-controlled too? Will version control make my system slow, and how much space does it use on my server?
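To make this concrete (a sketch; git is just one version-control system, and the directory and identity below are placeholders, not from the post): it is a normal package installed once system-wide, then initialised per project directory. It does not change how files are served or edited, only how their history is recorded, and it stores that history in a .git subdirectory whose size grows with the changes you commit:

```shell
# Install once system-wide (e.g. yum install git / apt-get install git),
# then track any site directory:
mkdir -p /tmp/demo-site && cd /tmp/demo-site
echo "<?php echo 'hello'; ?>" > index.php
git init -q .
git add index.php
git -c user.email=demo@example.com -c user.name=Demo commit -q -m "first commit"
git log --oneline        # one commit recorded; files on disk are unchanged
```

Databases like MySQL are not covered by this: you would version-control schema dumps or migration scripts, not the live data files.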
I have installed Ubuntu Server 10.04 on a Dell PC without issue, but on reboot, or on startup from being off, it doesn't load the OS. It flashes "error: no such disk" and then leaves me at a blank screen with a flashing cursor. For some reason it's not seeing the drive.
I cannot make PHP 5.3.3 load the Zend Guard Loader: I extracted the tar.gz, moved the .so file into the right place, edited php.ini in the [php] section, and rebooted the server, but php -v says I don't have the optimiser. Quote:
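For reference, the lines Zend Guard Loader expects in php.ini on PHP 5.3 (a sketch: the path is an assumption; match it to wherever the .so was copied). Also note PHP only reads the php.ini reported by `php --ini`, so editing a different copy silently does nothing:

```ini
; zend_extension must be an absolute path to the copied .so
zend_extension=/usr/local/zend/ZendGuardLoader.so
zend_loader.enable=1
```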
I have a Linux server that I cannot really reconfigure. I would like to upload a fairly big file (~100 MB) via a web page, but the server disconnects after 10 minutes. Is there any kind of software that can keep the connection up without getting into server configuration details? If not, what could be causing the disconnection? What I need is a simple web site where 50 people can upload short movies and then watch them. It needs to run locally (no .....)
I am trying to set up a RAID set on my computer, but I have run into a small problem: it seems that the sata_promise driver is not loaded until after the md: bind has been performed. This means my RAID set will be missing some of its discs and fail to start.
Is there any way to have the sata_promise driver load earlier in the boot process? (More details can be found in my other post.)
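On initramfs-based distros, the usual fix (a sketch; the file paths follow the Debian/Ubuntu convention and differ on other distros) is to bake the module into the initramfs so it loads before md assembly runs:

```
# /etc/initramfs-tools/modules -- one module name per line
sata_promise

# then rebuild the initramfs:
#   update-initramfs -u
```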
I upgraded my Ubuntu 8.04 LTS server to 10.04 LTS, rebooted, and things ran well. I knew I had a few problems, but DHCP and DNS were booting up and working fine for me. I went to take a look at Webmin, but apparently support for that has been removed on Ubuntu 10.04, so I thought I'd use eBox. I realized that eBox didn't have the network module turned on, so I tried to turn it on. I found out the installation was missing scriptaculous, so I got that installed and then turned on the network module. I then rebooted the machine, and now nothing: it hangs when it gets to loading Apache. I would like to disable the network module for eBox, but I can't find documentation for how to do that from the command line, which I have to use because the damn box won't boot and I'm running from a Live CD.
I tried to load my system today and nothing came up, only a terminal window. Whatever I type in works: just now I typed firefox in the terminal, and that is how I ended up here. I typed thunderbird in the terminal and my mail client started with no problem. But nothing else appears on my screen; I don't know where to go from here.
I am trying to set up my Apache server with FastCGI as the CGI engine. It will mostly be running PHP, but I may add Perl or something else later. I started from a completely fresh install of Ubuntu 11.04 with every update available, and used the install CD to install a LAMP server as part of the system install. I am using the tutorial at URL... to add the FastCGI functionality, but I am having some problems getting everything to work. The first problem was that apache2-suexec-custom wouldn't accept a different document root; I fixed that by compiling apache2-suexec from source with the document root changed to the correct path.
The next problem is that every PHP page I load throws a 500 error. Apache's error logs show that FastCGI isn't returning the page data correctly to Apache.
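A minimal mod_fastcgi + php-cgi arrangement for comparison (a sketch: the paths and wrapper name are my assumptions, not from the tutorial). A wrapper that isn't executable by the suexec user, or that prints anything before the CGI headers, produces exactly this kind of 500:

```apache
# httpd.conf / vhost fragment
<IfModule mod_fastcgi.c>
    AddHandler php-fcgi .php
    Action php-fcgi /fcgi-bin/php-wrapper
    ScriptAlias /fcgi-bin/ /var/www/fcgi-bin/
</IfModule>

# /var/www/fcgi-bin/php-wrapper (chmod 755, owned by the suexec user):
#   #!/bin/sh
#   exec /usr/bin/php-cgi
```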
I managed to successfully install LAMP on my VPS and proceeded to install WordPress, only to run into an odd error: when I click on index.php, it starts to load but nothing happens.
The details:
LAMP = CentOS 5.5, Apache 2.2.18 (compiled from source), MySQL 5.1.57 (compiled from source), PHP 5.3.6 (from source too). My site is ottomatic.org; as you can see in the index of files, index.php is what I'm trying to load, and if you check phpinfo.php you can see that PHP was installed correctly too. I added an .htaccess file with 644 permissions to my web root directory, and that only seemed to produce an error. This is what I put in the .htaccess:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
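For comparison, the stock WordPress rewrite block is the following (this is the standard upstream snippet, not taken from the post). It needs mod_rewrite loaded and AllowOverride enabled for the web root, otherwise Apache errors on the .htaccess:

```apache
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
```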
I built a home server (NAS/WWW/SSH/media server, etc.) and chose CentOS 5 as the OS (stability, ease of configuration). I was just about to start tuning the power consumption when I realised that the kernel CentOS uses is so "old" that it does not support the latest reduced-power-consumption enhancements that Linux has made in big strides in the recent past (we are probably still talking 6-12+ months ago, e.g. the tickless kernel).
So my questions: 1) I know CentOS was maybe not meant for home servers (it's certainly not its primary purpose), but if it's used as one, any idea what kind of power consumption to expect (I know it's relative), and whether there are particular power optimisations that are worthwhile?
2) Do you recommend compiling my own 2.6.21+ kernel from kernel.org, or am I just likely to have compatibility issues (I really did not want to do that)? Or when is CentOS 5.4 supposed to get a newer 2.6.21+ kernel?
Was it wrong of me in principle to choose CentOS for a home server when I am power-conscious? (I don't have a low-power VIA processor either, but a P4, so I am really just hoping to make do with software changes.)
I'm using Fedora 13 and installed a web invoice application on my server. Whenever I open the site, the "connection to the server was reset..." problem is persistent. I was advised to clear the cache, which I did; it worked for a while, but after just a few page loads the same problem occurred and I needed to clear the cache again. I then found a suggestion on the internet to add post_max_size = 48M, file_uploads = On, and upload_max_filesize = 192M to the php5.ini. I did this on my server too, but the same problem occurs.
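For reference, a consistent variant of those directives as they would appear in the php.ini PHP actually reads (check phpinfo() for "Loaded Configuration File"; a user-level php5.ini is only honoured in some shared-hosting setups, so it may simply be ignored here). Note one inconsistency in the suggested values: post_max_size should be at least as large as upload_max_filesize, since uploads arrive inside the POST body:

```ini
file_uploads = On
upload_max_filesize = 192M
post_max_size = 200M   ; must be >= upload_max_filesize (48M would cap uploads)
```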
I'm trying to change some configuration, but when I change the default options in main.cf, it doesn't actually update the running configuration. I've even restarted the server altogether, but it still doesn't update. In main.cf, here is the configuration I've added:
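Assuming this main.cf is Postfix's (an assumption; the post doesn't name the service), two checks are worth running: postconf shows the configuration the daemon will actually use, and reload forces main.cf to be re-read. A sketch:

```shell
# Show which settings differ from defaults, as Postfix sees them:
if command -v postconf >/dev/null 2>&1; then
  postconf -n | head -n 5
  postfix reload      # re-read main.cf without a full restart
else
  echo "postfix tools not installed"
fi
# If postconf -n doesn't show your edit, you are probably editing a
# different main.cf than the one Postfix reads (see: postconf config_directory).
```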
I am running CentOS 5.2 with PHP 5.3 and MySQL. I am running a CRM program called Sugar on the server, and I am having a problem where the web pages don't always load correctly. I am not sure if this is something in the program itself or a problem with some setting on the web server. I am accessing the server over a local network, so I don't think it is a connection issue. The server has a static IP.
Recently I came across this error while booting up our RHEL 5 server:
Starting sendmail: /usr/sbin/sendmail: error while loading shared libraries: libdb-4.3.so: cannot open shared object file: Error 27
sm-client: /usr/sbin/sendmail: error while loading shared libraries: libdb-4.3.so: cannot open shared object file: Error 27
Also, while creating a new squid user, I get the same error: htpasswd /etc/squid/squid_passwd alok
htpasswd: error while loading shared libraries: libdb-4.3.so: cannot open shared object file: Error 27
The libdb-4.3.so file exists in my /usr/lib. When I looked up error code 27, it said "File too large". I thought maybe it was because of the number of users, but even after removing some duplicate user accounts, I still can't figure out why the error remains.
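Error 27 is errno EFBIG ("File too large"); with a shared library it points at the library file itself (corrupt or truncated) or at a file-size ulimit, not at how many users exist in the htpasswd file. A hedged diagnostic sketch (the rpm check assumes an RPM-based system, which matches RHEL):

```shell
# A small fsize limit would explain EFBIG when the loader maps the library:
echo "fsize limit: $(ulimit -f)"     # expect "unlimited"

# Compare the on-disk library against what the package manager shipped:
# ls -l /usr/lib/libdb-4.3.so
# rpm -Vf /usr/lib/libdb-4.3.so      # 'S' or '5' flags mean size/checksum drift
```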
We have been struggling with a problem with the mysql-server package. This might be in the wrong category because our problems are with it on x86_64. On the affected machines, variables that should populate from my.cnf don't; on other servers they do, and the my.cnf files are identical except for the server-id.
The problem occurs on some of our machines but not others; all are using mysql-server-5.0.45-7.el5.x86_64. The ones that work, when you connect with a client, respond with server version 5.0.45-log source distribution; the ones that don't work respond with 5.0.45 source distribution.
It's unclear what the difference is between the two reported versions and how we wind up with both. One thing that might be involved: the mysql package is installed for both i386 and x86_64.
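The "-log" suffix just means logging options took effect, i.e. my.cnf was read; the no-suffix servers likely never parse the file. A check worth running on both kinds of machine (a sketch, assuming mysqld is on the PATH): mysqld will report which option files it reads and in what order, and an i386/x86_64 dual install can leave two binaries with different ideas:

```shell
# Ask mysqld which option files it actually reads, in order:
if command -v mysqld >/dev/null 2>&1; then
  mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"
else
  echo "mysqld not installed"
fi
```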
I installed squid on the CentOS 5 server. When I try to start squid I receive the following error:
# service squid start
init_cache_dir /var/spool/squid... Starting squid: [FAILED]
The logs indicate the following:
$ sudo tail /var/log/squid/squid.out
squid: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: Permission denied .....
Although all the libraries shown as missing are present, I still see the following.
$ ldd /usr/sbin/squid
linux-vdso.so.1 => (0x00007fffb95ff000)
libcrypt.so.1 => not found
libssl.so.6 => not found
libcrypto.so.6 => not found
libdl.so.2 => not found
libz.so.1 => not found
librt.so.1 => not found
libpthread.so.0 => not found
libm.so.6 => not found
libnsl.so.1 => not found
libc.so.6 => not found
I have tried setting the environment variable LD_LIBRARY_PATH:
$ echo $LD_LIBRARY_PATH
/lib64:/usr/lib64:/lib:/usr/lib
But still no use.
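When ldd reports every library as "not found" and the runtime error is "Permission denied", the dynamic loader usually cannot traverse the library directories at all (directory permissions, a noexec mount, or SELinux), so LD_LIBRARY_PATH won't help. A hedged diagnostic sketch (the SELinux check assumes an RHEL/CentOS-style system):

```shell
# The loader needs r-x on each directory in the search path for all users:
ls -ld /lib64 /usr/lib64 2>/dev/null || ls -ld /lib /usr/lib

# Other common culprits (commented; run as root on the affected box):
# mount | grep noexec          # library filesystem mounted noexec?
# getenforce                   # SELinux enforcing? try "setenforce 0" as a test
# ldconfig -p | grep libcrypt  # is the library in the loader cache?
echo "diagnostics done"
```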