My web server accesses a backend MySQL server, and the box runs CentOS 5.
For the last week I have been getting a "Page Load Error" on my web server, while others tell me they get a "broken link" error when they try to access my web site. It had been working fine for the previous 12 months.
The ADSL line, modem and router are all okay according to the service provider (Verizon).
I can ping my IP address and my domain name.
# netstat -tap
shows that both the http and https processes are running.
# service httpd restart runs with no issues.
I shut down the firewall and tried again, but got the same "Page Load Error".
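For completeness, here are the next checks I plan to run, to separate an Apache problem from a network/DNS one (the log path assumes the default CentOS 5 Apache layout, which may not match your setup):
Code:
# fetch the page locally, bypassing DNS and the router
curl -v http://127.0.0.1/
# watch Apache's error log while a remote client retries the page
tail -f /var/log/httpd/error_log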
I have a script that works fine on a commercial server and on my CentOS box at home, but it doesn't work on another commercial server (where we need it to). I have pulled phpinfo.php from both and am wondering if someone could tell me which parameters to compare when the following happens: the script seems to do the HTML/Java part but outputs the PHP source onto the page, even though this site has many other PHP scripts working fine.
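These are the phpinfo values I was planning to compare first, though I'm only guessing at the usual culprits for PHP source showing up in the page (the Apache config path below is the Red Hat/CentOS default and may not match the commercial servers):
Code:
# compare how PHP itself is configured on the two servers
php -i | grep -i short_open_tag
# check how (or whether) Apache maps .php to the PHP handler
grep -ri "AddHandler\|AddType" /etc/httpd/conf.d/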
I got my version of ClamAV working and it's fully updated. The problem is I can't seem to get LibClamAV to function correctly. Running # freshclam works, but when I do # service clamd.amavisd restart, this is the error:
LibClamAV Warning: ***********************************************************
LibClamAV Warning: *** This version of the ClamAV engine is outdated. ***
LibClamAV Warning: *** DON'T PANIC! Read http://www.clamav.net/support/faq ***
LibClamAV Warning: ***********************************************************
My version is up to date and I am unsure why I get the error. I am using amavisd-new-2.5.2 (20070627) and ClamAV 0.95.3.
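In case it matters, here is how I've been comparing the engine version the daemon actually uses against the CLI tools, to see whether an older libclamav from a previous install is still being picked up (a sketch, assuming clamd and clamscan are on the PATH):
Code:
# compare engine versions reported by the daemon and the CLI tools
clamd -V
clamscan -V
freshclam -V
# look for duplicate libclamav copies from an older install
ldconfig -p | grep libclamav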
I can't get Ubuntu 10.04 to boot for install (to my desktop): it displays the logo, hangs for about a minute, then displays an alert dialogue saying "Boot disk error." I've tried multiple CD-ROM drives, discs burned from multiple computers, and both the desktop and server downloads. I'm down about 10 CDs and 2 DVDs here. I've checked their integrity (md5) as well as checked the disc integrity from another computer (a laptop), and everything passed. I got Xubuntu to install effortlessly, so I don't know what's causing the problem for Ubuntu. I can even look at the contents of the Ubuntu Server CD from within Xubuntu, so I know the disc really should work.
It hangs after mounting my root partition and switching to the framebuffer. Ctrl-Alt-Del causes a normal shutdown; everything gets told to exit.
This is where it hangs:
[URL]
This is my config:
[URL]
lspci output:
[URL]
The config is based on the mainline defaults. I made sure ext4 is compiled in, along with SCSI, SATA and PATA support. sda1 is my root partition; sdb1 is a data drive. The drives are SATA. I need to rebuild from source to test some stuff for Wayland.
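Before the rebuild I'll probably sanity-check the .config like this (the CONFIG_SATA_AHCI entry is only an example; the right SATA driver option depends on the lspci output above):
Code:
cd /usr/src/linux   # or wherever the kernel tree lives
grep -E "CONFIG_EXT4_FS=|CONFIG_SCSI=|CONFIG_BLK_DEV_SD=|CONFIG_ATA=|CONFIG_SATA_AHCI=" .config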
I have a weird problem with PostgreSQL + nginx + PHP5 on Ubuntu 11.04. PostgreSQL works just fine when I run "php test.php" from the shell (I assume PHP runs from the shell are the CLI runs?). When I open the page from the browser at [URL] I get the following error (I assume PHP runs from the browser are the CGI runs?): Fatal error: Call to undefined function pg_connect() in /var/www/mydomain/htdocs/test.php on line 7. When I print phpinfo, the only references to PostgreSQL are the pgsql.ini and pdo_pgsql.ini files listed as parsed.
[code]....
I installed PostgreSQL by running the following commands:
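The exact install commands didn't make it into the post above, so this is only a guess at the usual fix for pg_connect() being undefined in the web context: the pgsql extension is available to the CLI but the PHP that nginx talks to never picked it up. Installing the extension package (if it isn't already there) and restarting the PHP FastCGI backend is what I'd try; the php5-fpm service name below is an assumption, it might be php5-cgi or a spawn-fcgi setup instead:
Code:
sudo apt-get install php5-pgsql
# restart whichever PHP backend nginx uses so it re-reads pgsql.ini (php5-fpm assumed here)
sudo service php5-fpm restart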
I've been trying to install Ubuntu Server onto Microsoft Virtual Server at the request of my boss, and I've been having an issue I cannot seem to work around. My background in Ubuntu and Linux in general isn't amazing; I have configured a server at home to act as a file-sharing platform and a media server, but that's about it.
Now I've gone to install it on the MVS at work, and once the install completes I receive the following error: Hypervisor error.JPG
I've tried running the install with a limited resolution, but from what I can remember the server edition doesn't install a GUI to start with, so it should just be showing me the standard CLI.
I just added an SSL vhost. PHP files are not running on it; they just get downloaded when I click them. All my non-SSL vhosts were added by my hosting company's admin software, but I had to manually add the vhost entry for my SSL vhost. Here is the entry I added for my SSL vhost:
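(The vhost entry itself seems to have been cut off above.) What I understand is usually missing in this situation is the directive that hands .php to the PHP module: the admin-generated plain-HTTP vhosts have it, a hand-written SSL vhost often doesn't. What I plan to do, as a sketch (paths are Debian/Ubuntu-style assumptions and may differ on a hosting panel):
Code:
# see how the working non-SSL vhosts hand .php to PHP, then copy those lines
# into the <VirtualHost *:443> block for the SSL site
grep -ri "php" /etc/apache2/sites-available/
# after editing, check syntax and reload
apachectl configtest && service apache2 reload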
So when I try to connect to my Ubuntu server with ssh like this:
X@dell-desktop:~$ ssh servername -P 1934
X@servername's password:
Permission denied, please try again.
But when I use PuTTY (yes, with the same port number and the same settings) it works fine. This didn't start happening until my friend did a reinstall on the server; he doesn't have a clue why it would do this.
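For reference, the command form I will try next, since I noticed the OpenSSH client takes the port with a lowercase -p rather than -P, so the line above may actually have been hitting port 22 while PuTTY correctly used 1934:
Code:
# lowercase -p is the port option for the OpenSSH client
ssh -p 1934 servername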
Been using CentOS for a couple of weeks and have a few quirks I need help with. This is a fresh install of CentOS 5.5. I'd love for the VNC server to start up as soon as the computer reboots. It seems my VNC server only works when I log in using the GUI at the computer itself. After a reboot I can remotely SSH into it successfully, but cannot VNC to it. I then have to physically get to the computer and log into the GUI, and voilà, I can VNC to it. I have not edited any conf files, seeing as my last attempt at getting this working got me nowhere. I have only enabled Remote Desktop through the GUI.
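For reference, the route I'm considering: my understanding is that the GUI "Remote Desktop" option is vino, which only runs inside a logged-in GNOME session, which would explain the behaviour. The sketch below assumes the separate vnc-server package approach on CentOS 5 instead (user name and geometry are placeholders):
Code:
yum install vnc-server
# define a display for the user in /etc/sysconfig/vncservers, e.g.:
#   VNCSERVERS="1:youruser"
#   VNCSERVERARGS[1]="-geometry 1024x768"
# set the VNC password as that user
su - youruser -c vncpasswd
# start the service now and at every boot
service vncserver start
chkconfig vncserver on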
I log in as a normal user. I can 'su root' fine; the password authenticates. However, if I try to run System->Administration->Users/Groups, when it asks for the root password it is rejected. When I run the updater, it reports a failure to authenticate but doesn't even ask for the root password beforehand. Is there a cached password someplace?
I'm trying to set up a server at home so I can work on a LAMP-based web app from a Windows 7 laptop. I downloaded Ubuntu Server 9.10 and installed it on one of my desktop PCs. I also installed phpMyAdmin and everything seemed fine (tested with lynx). I started noticing the problem when I tried to connect from the laptop to the server on the same LAN: when I tried to access [URL] the browser kept loading forever and nothing appeared. The default [URL] "It works" page displays just fine. I then created a testing.php file containing just <?php phpinfo() ?> in the /var/www directory; it works on the server, but from the laptop it shows the same hanging behaviour. PHP code doesn't seem to be properly executed when Apache serves other machines. I'm guessing it's some sort of PHP configuration causing this, and it might be obvious to people with more experience with server setups, but I've never set up a server before and I'm having a hard time solving this.
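To narrow it down, my plan is to compare what the laptop actually receives with what the server logs at the same moment, instead of relying on the browser (the server IP below is a placeholder; the log paths are the Ubuntu defaults):
Code:
# from the laptop, or any other LAN machine with curl installed:
curl -v http://192.168.1.10/testing.php
# on the server, watch Apache while the request comes in
tail -f /var/log/apache2/access.log /var/log/apache2/error.log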
My network is working and shows a connection to my wireless network (connected at 100%), but Firefox shows "Problem loading page" whenever I enter a URL, so I can't access the internet. Synaptic doesn't work either: when I try to download, I get a warning "Could not download all repository indexes - (-5 No address associated with hostname)". This is a dual-boot machine and the internet connection on the Windows side works just fine. The connection also worked fine with Ubuntu 9.10, but since I partitioned the drive and did a fresh install of 10.04 I cannot access the internet.
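Since the repository error points at name resolution rather than the link itself, this is what I plan to try next to tell the two apart (8.8.8.8 below is just a well-known public IP used for the connectivity test):
Code:
# does raw connectivity work without DNS?
ping -c 3 8.8.8.8
# does name resolution work at all?
nslookup ubuntu.com
# which nameserver did the connection actually get?
cat /etc/resolv.conf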
I just finished installing Squid on Linux 10 SP1. I set up my squid.conf file and the access lists (ACLs). When I try to browse the internet to any page, my browser returns a plain page with the words "It works". What do I do?
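For reference, the checks I intend to run: as far as I understand, the "It works" page comes from Apache rather than Squid, which makes me suspect the browser isn't actually going through the proxy or is pointed at the wrong port (the commands assume Squid's default port of 3128 and a default log path):
Code:
# is squid listening, and on which port?
netstat -tlnp | grep squid
# test the proxy directly, bypassing the browser settings
http_proxy=http://127.0.0.1:3128 curl -v http://www.google.com/
# then confirm the request shows up in squid's log
tail /var/log/squid/access.log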
I want to know how the ProxyPass and ProxyPassReverse directives work. I have an application on an internal web server running on port 8080 on the LAN, but I want it to be accessible from the internet via Server A, which has a public IP but sits behind a firewall (which I do not control) that blocks everything except port 80.
Code:
Server A ----------------------------------------- Server B
Public IP                                           LAN (port 8080 blocked)

If I write:

Code:
ProxyPass /application http://192.168.1.5:8080
ProxyPassReverse /application http://192.168.1.5:8080
Will the application be accessible from outside, or do I need to contact the sysadmin to open port 8080 on A in the diagram above? Is there any other way to do the same thing in Apache2?
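For reference, my current understanding from the mod_proxy documentation, which I'd like confirmed: clients only ever talk to Server A on port 80, and A makes the outbound connection to 192.168.1.5:8080 inside the LAN, with ProxyPassReverse rewriting redirects so they point back at /application on A. If that's right, 8080 never needs to be opened externally. The extra steps I think are needed on a Debian/Ubuntu-style Apache2 (an assumption about the layout):
Code:
# ProxyPass needs the proxy modules loaded
a2enmod proxy proxy_http
# put the two ProxyPass/ProxyPassReverse lines above inside Server A's port-80 vhost,
# then check the config and reload
apachectl configtest && service apache2 reload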
Everything works, except that on Fedora port 110 cannot be opened no matter how hard we try. We used to run RHEL (Red Hat Linux) on a colocated server; now we run Fedora in a cloud.
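For reference, the checks we have been running on the Fedora box to tell a missing listener apart from a firewall drop, since both look the same from outside:
Code:
# is any POP3 daemon actually listening on 110?
netstat -tlnp | grep :110
# is iptables dropping it?
iptables -L -n | grep 110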
I've made a bash script to scp a file from another server and tested it successfully by executing it manually. However, when I scheduled it with cron, I received a mail from root saying permission denied.
The script is at:
It's supposed to secure copy a file from a remote host to:
The script's content is (no need to supply a password, as I've done the ssh-keygen thing):
Code:
From what I can make out of the mail, it appears that it has a problem saving the file to the /home/backup directory.
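Two things I understand commonly differ between "run by hand" and "run by cron" and could produce exactly this: cron may run the job as a different user than the one the key was generated for, and that user may not be able to write to /home/backup. A rough sketch of what I plan to check (the user name and script path below are placeholders):
Code:
# who owns the destination, and can the cron user write there?
ls -ld /home/backup
# run the script exactly as the cron user would, with a minimal environment
su - backupuser -c '/path/to/your/script.sh'
# if the key lives under another user's ~/.ssh, point scp at it explicitly inside the script:
# scp -i /home/backupuser/.ssh/id_rsa user@remotehost:/remote/file /home/backup/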
I just put a pfSense firewall between my ADSL router and my LAN. It's configured to run a caching DNS server and a DHCP server, among other things. For reasons beyond this post, the address range served by DHCP changed from 10.0.0.x to 192.168.1.x. The new DHCP server hands out 192.168.1.1 as the gateway and DNS server, not the public IP addresses of our internet provider.
After reconnecting our client machines, everything worked just fine on the Win-XP boxes, but the Debian Squeeze servers and Ubuntu 10.04 clients all started getting network timeouts. Pinging public websites works, but browsing to the same servers fails. Other services like POP3 and IMAP also fail. All machines use WiFi to connect, and the access point is the same as before.
What could it be that makes the Debian boxes fail? My laptop runs Squeeze too and also fails, but when connecting to various other access points, at hotels and such, I do not get this problem.
Another weird thing is that the Debian server running VirtualBox cannot do things online, but the virtual Windows boxes running on that machine can. Weird! Where should I start looking? How are networking/DHCP clients on Debian different from Windows XP?
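To see where the Debian and Windows machines actually diverge, I plan to compare what each got from DHCP rather than guessing. A quick sketch of what I'd look at on one of the failing Debian boxes:
Code:
# what gateway and DNS did this box end up with?
ip route
cat /etc/resolv.conf
# does resolution through 192.168.1.1 itself work?
nslookup debian.org 192.168.1.1
# ping with a full-size, unfragmentable packet to rule out an MTU problem on the new path
ping -c 3 -M do -s 1472 debian.org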
After a Christmas morning scramble trying to get Sims 3 working for the kids, I ended up pulling an Nvidia 6200 AGP card out of a perfectly good Ubuntu box and threw it into the kids' PC. I replaced it in the Ubuntu system with an old Nvidia GeForce 2 Ti AGP. From the start I was unable to get any resolution higher than 800x600 with the GF2. I tried removing xserver-xorg, reinstalling, reconfiguring, etc., and installed the Nvidia legacy drivers, all with no luck. I tried booting with a 9.10 live CD and it works perfectly - various resolutions, refresh rates, etc. So the card is capable. I've checked, rechecked, restored, and modified xorg.conf with no success.
I'm at a text login now; startx returns (among other info) the following:

dlopen: /usr/lib/xorg/modules/drivers//nvidia_drv.so: undefined symbol: AllocateScreenPrivateIndex
(EE) Failed to load /usr/lib/xorg/modules/drivers//nvidia_drv.so
(EE) Failed to load module "nvidia" (loader failed, 7)
(EE) No drivers available
Fatal server error: no screens found
Now, I've checked, and nvidia_drv.so is where it's supposed to be (in the drivers directory). What concerns me is the "//" in the directory path in the (EE) lines above, preceding the driver name - shouldn't this be a single "/"? Is the command not able to find the driver correctly? Regardless, at this point my goal is simply to get the system to use whatever process the live CD uses that results in a working GUI. I don't need fancy 3D, etc., I just want a working system.
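If nothing else works, my fallback plan is to reproduce what the live CD does, i.e. drop back to the open driver and let X autodetect. A rough sketch (the package name below is only an example; I'd check dpkg -l first, and the double slash in the path is, as far as I know, harmless):
Code:
# see which nvidia packages are actually installed, then remove the legacy ones
dpkg -l | grep nvidia
sudo apt-get remove --purge nvidia-glx-legacy-96xx
# move the hand-edited config aside so X autodetects the open nv/nouveau driver, as the live CD does
sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.broken
startx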
I'm having an odd problem running mysqldump via crontab. I have the script running on other servers and they work fine, so I'm not sure how to troubleshoot this, but the script looks like the following:
If I run it as a cron job as root, it finishes in a second and a 20k file is there. If I run it from the command line as root, it does the backup (it takes a few minutes), completes, and the result can be unzipped and read successfully.
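For reference, what I plan to check next: a 20k file that appears instantly presumably contains an error message rather than a dump, and as I understand it cron's minimal PATH or an unescaped % in the crontab line are the usual suspects. The paths below are placeholders, since the script isn't shown here:
Code:
# see what actually ended up in the small file
zcat /path/to/backup.sql.gz | head
# in the script, use full paths and capture mysqldump's errors, e.g.:
#   /usr/bin/mysqldump --all-databases 2>/tmp/dump.err | /bin/gzip > /backups/db.sql.gz
# and remember that a literal % in a crontab line must be escaped as \%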
A customer is running an old DOS application to run their business, using Windows 98/XP workstations that run the DOS app. I configured a Red Hat 7.1 Samba server for them (with a fault-tolerant backup server) back in the early 2000s; it runs Samba 2.2, and I eventually made it work with all of their systems for printing and secure file sharing ("secure" for them, anyway). Fast forward to 2011 and they would like to replace the aging Red Hat servers with Ubuntu 10.10. So I set that up and got the following components working with the default install plus patches of Ubuntu (using apt-get install samba).

Windows XP:
1.) Password-protected share access.
2.) Can browse Samba from Network Neighborhood.
3.) Print from XP (Windows side) to the HP network printer through Samba.
4.) Print from an XP DOS terminal to the HP network printer through Samba.
XP is pretty much ready to go.

Now, Windows 98:
1.) Password-protected share access.
2.) Can browse Samba from Network Neighborhood (this was tough!).
3.) Print from Windows 98 (Windows side) to the HP network printer through Samba.

Note: I CANNOT print from a Windows 98 DOS terminal to the HP network printer through Samba! When I set up LPT1 with net use in DOS to point to the Samba share, I test by running DOS Edit, typing some test text, and then File->Print. It states that it can't print to LPT1 and asks to Retry, Cancel or Exit. No errors are given on the Samba side, and Windows 98 doesn't seem to have an Event Viewer to tell me what's wrong. The whole thing works with the Red Hat 7.1 Samba though. Good old Samba 2.2. Sigh.

So what I did was use my VMware Workstation to build an Ubuntu 10.10 workstation with Samba, an XP workstation, and a Windows 98 VM image. I didn't have network printing, so I set up CUPS-PDF to print to PDF files via Samba. As with the "production" installs at their business, I got everything working just fine except DOS LPT1 on Windows 98. So I can reproduce the same error in a test environment, and it's still puzzling....
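What I plan to do next is raise Samba's logging and push a test job at the print share from outside DOS Edit, so the failure at least shows up somewhere I can read it. A sketch (share, host and user names below are placeholders):
Code:
# in smb.conf, raise logging and keep a per-client log file, then reload Samba:
#   [global]
#   log level = 3
#   log file = /var/log/samba/log.%m
# send a test file straight to the Samba print share from another box
echo "test page" > /tmp/test.txt
smbclient //ubuntuserver/hpprinter -U someuser -c "print /tmp/test.txt"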
I've got a problem getting sudo to work for mounting things (e.g. USB pen drives or optical discs). Details: the OS is Slackware 13.0. The response to the sudo -l command:
Code:
User user1 may run the following commands on this host:
    (root) /sbin/shutdown -h now, /sbin/shutdown -r now
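My current guess is that mount simply isn't in that sudoers list, so only the two shutdown commands are allowed. What I plan to add via visudo, as a sketch (the device and mount point below are just placeholders):
Code:
# run visudo as root and extend the entry to something like:
#   user1 ALL = (root) /sbin/shutdown -h now, /sbin/shutdown -r now, /bin/mount, /bin/umount
# then, as user1:
sudo /bin/mount /dev/sdb1 /mnt/usb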