I'm having problems with my PHP 5.3.3 installation when trying to access remote domains. The problem only happens when the PHP scripts are run via Apache 2.2.15. It appears to affect all functions that might access a remote domain (e.g., gethostbyname, file_get_contents, curl, etc.). Basically, PHP scripts can't resolve a remote host when run via Apache.
So I thought I might try to reinstall curl (I'm on OS X 10.5.8, btw). configure and make ran fine, and then make test reports the following:
Code:
TESTDONE: 485 tests out of 485 reported OK: 100%
TESTDONE: 586 tests were considered during 283 seconds.
TESTINFO: 101 tests were skipped due to these restraints:
Is there any curl API to configure only the required protocols? If I have a proper OpenSSL installed, the installed curl will have all the protocols (HTTP, HTTPS, FTP, FILE, etc.) supported by default. Is there any way to allow or disallow only some of the protocols at runtime? Say I need to support only HTTPS and FILE, and I don't want to allow HTTP. Is there any way to do this?
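To illustrate the kind of runtime restriction I mean (the URL below is just a placeholder): I gather the command-line flag for this is --proto, and the corresponding libcurl option is CURLOPT_PROTOCOLS, added somewhere around curl 7.19/7.20, so this is only a sketch for reasonably recent builds:
Code:
# permit only HTTPS and FILE; everything else, including plain HTTP, is refused
curl --proto -all,+https,+file https://example.com/

# or just deny HTTP and keep the remaining defaults
curl --proto -http https://example.com/
In a program you'd set the same policy once on the handle via CURLOPT_PROTOCOLS and it would apply to every transfer at runtime.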
I know very little about MySQL, but I've got some users who need it for testing on a Linux server. So I set it up a while back, but now I'm running into some small problems. Right now, each user has his own database that I created and can do whatever he likes with it. Each user only sees his own database. I didn't want them to be able to create new databases at all, but they can, and when they do, anyone can see them.
EDIT: (Apparently they can only create databases whose names begin with the word "test".)
I need to either:
1) Stop them from creating new databases (without affecting their ability to interact with the existing database)
OR
2) Make it so that when they create a database, only they have privileges on it and only they can see it (except mysql root of course).
Anybody know the statements to set these kinds of privileges up?
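For context, this is how each user got his own database in the first place; as far as I can tell, a database-level grant like this gives no global CREATE privilege, which is why the test databases surprised me (user and database names are made up):
Code:
shell> mysql -u root -p
mysql> CREATE DATABASE alice_db;
mysql> GRANT ALL PRIVILEGES ON alice_db.* TO 'alice'@'localhost' IDENTIFIED BY 'secret';
mysql> FLUSH PRIVILEGES;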
EDIT: pfft... I've read a bit more and realize that this is an intended part of the installation.
EDIT2: I'd still like to remove the ability to make test databases.
EDIT3: OK, for reference, this is how you prevent users from making and using test databases:
Code:
shell> mysql -u root -p
Enter password: (enter root password here)
mysql> DELETE FROM mysql.db WHERE Db LIKE 'test%';
mysql> FLUSH PRIVILEGES;
I am trying to install MySQL 5.1.44, so I downloaded the binary package, extracted it, and then followed the instructions in the manual, but I keep getting this error when running this command:
Code:
Installing MySQL system tables...
100315 20:07:27 [Warning] Can't create test file /var/lib/mysql/mosty.lower-test
100315 20:07:27 [Warning] Can't create test file /var/lib/mysql/mosty.lower-test
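From what I've read, this warning usually means the server user can't write to the data directory. Something like this is supposed to fix it, assuming the data dir really is /var/lib/mysql and the server runs as user mysql:
Code:
shell> chown -R mysql:mysql /var/lib/mysql
shell> scripts/mysql_install_db --user=mysql --datadir=/var/lib/mysql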
I have created mobility for 20 nodes and VBR traffic using the attached script. I executed the file as ns234 vbr.tcl and got vbr.tr and vbr.nam, but I was unable to load the graph using matlab <trgraph>. I thought the problem was with the vbr.tcl script.
At first login as user all was fine yesterday. Today I must have rebooted for the first time, because it was all different. I tried to log in to KDE this morning and it failed with some weird error messages that I won't repeat here. It seems it all boils down to the fact that the /tmp folder has the wrong permissions. Whatever I try, it always has permission 755.
It is an encrypted /tmp partition, mounted in fstab like this:
I tried to go into rescue mode and change the permissions of the folder, but nothing helps. In rescue mode it behaves just like my openSUSE 11.2 installation on another computer.
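For reference, a normal /tmp is world-writable with the sticky bit, i.e. mode 1777; this is what I keep setting, and what keeps getting reverted:
Code:
chmod 1777 /tmp
ls -ld /tmp    # should show drwxrwxrwt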
I got out of KDE and used soma, a CLI radio-station player; when I got back into KDE the sound was screwed up, I couldn't restart sound, and I had to reboot. I thought it was a soma problem. Then I got out of KDE, did nothing for several hours, got back into KDE, and the sound was screwed up again; I couldn't restart sound and had to reboot. After KDE had been running a couple of days, xmms stopped playing music; no matter what I tried with the preferences I kept getting no sound or "sound card not configured properly", and had to reboot. Is there a sound log or something that might hint at what's going on?
I also found posts that say to rmmod snd_atiixp_modem because it clobbers the regular snd_atiixp module, but snd-atiixp-modem is not loaded. Hardware: Toshiba Satellite M55-S1001 (Celeron M 1.6 GHz, 2 GB RAM, 60 GB HDD).
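These are the generic places I'd expect sound trouble to leave traces (standard paths, nothing specific to this laptop), in case someone can tell me which is worth watching:
Code:
dmesg | grep -i -E 'snd|audio|atiixp'   # kernel-side ALSA messages
lsmod | grep snd                        # which sound modules are actually loaded
tail -n 50 ~/.xsession-errors           # desktop/KDE-side errors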
I currently have a RAID0 setup with two disks and I have no problems whatsoever. I recently bought 3 brand new 1 TB drives and set up RAID5. I created the array and everything worked fine. I formatted the array and mounted it. When I tried to copy 750 GB of data over to my new array, it copies fine for a while and then at some point it fails. After a reboot, the array shows as inactive. I tried everything and could not get the RAID back to active status. I removed the array, recreated it, and formatted it again. When I started copying my files over, exactly the same thing happened.
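For the record, this is roughly the sequence I've been attempting to reactivate it with; /dev/md0 and /dev/sd[bcd]1 stand in for the real device names:
Code:
cat /proc/mdstat                      # shows the array as inactive
mdadm --examine /dev/sd[bcd]1         # compare event counts across the members
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[bcd]1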
I removed the hard disk from my EeePC and ran Debian Squeeze from a 16 GB SD card instead. It wasn't fast, but it was all solid-state, and that's important to me. However, lately it has started misbehaving. When recovering from sleep it would lose all fonts (everything was displayed as squares), files could not be found, programs would stop working, and in a short time the system would become completely useless. Then after a while it started doing it regardless of sleep mode: I'd turn it on, do something, and it would work for a while (or maybe not) and then screw up. As an example, I noticed that unmounting partitions from gparted would trigger it almost always, but unmounting them from a shell wouldn't.
Sometimes simply opening the browser would cause this weird crash. When shutting the system off it would complain about the ext4 partition, though I don't remember exactly what it said. I thought it was Debian that had somehow screwed itself up (or I had...), so I wiped the card and installed Ubuntu Netbook on it. I was quite surprised to see that it, too, failed in the exact same way. I'd blame the SD card, but the strange thing is, the data on it is perfectly fine. In fact, I temporarily fixed the problem by reinstalling the hard drive, dd-ing the whole SD card onto it and then expanding the partition over the unused space. From this setup the system works perfectly, using the same data from which it used to fail when running from the SD card.
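If anyone still suspects the card, this is the only non-destructive check I know of (assuming the card shows up as /dev/mmcblk0; dmesg tells you the real name):
Code:
dmesg | grep -i -E 'mmc|error'   # controller/card errors logged around a failure
badblocks -sv /dev/mmcblk0       # read-only surface scan of the whole card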
I just migrated from Fedora to Debian, and I already love this system. It's just that I am facing a serious problem with my graphics card. I'm running Debian 8.3. Booting the system, I get this message:
[8.167619] [drm:radeon_pci_probe] *ERROR* radeon kernel modesetting for r600 or later requires firmware-linux-nonfree.
After that the system seems to run normally, except that it doesn't look right and I cannot set the appropriate resolution for my monitor.
lspci -nn | grep VGA output:
Code:
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Barts PRO [Radeon HD 6850] [1002:6739]
I already tried both the open source and the proprietary drivers, and both cause serious problems.
I tried to install the proprietary driver following these instructions: URL...
After restarting the system I get a black screen with a blinking cursor in the top left corner, and that's it.
Then I tried to install the open source driver following these instructions: URL...
But after restarting the system my monitor goes into some sort of "monitor sleep mode", and that's it. In both cases I wasn't able to boot the system through recovery mode, so I didn't know what else to do and re-installed the whole system each and every time I tried a driver. Let's say I have installed the system like 5 times in the past 12 hours.
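For what it's worth, the boot message itself names the missing piece: the firmware-linux-nonfree package, which lives in Debian's non-free section. My understanding is that the steps are roughly these (the mirror URL is just an example):
Code:
# add "contrib non-free" to the jessie line in /etc/apt/sources.list, e.g.:
#   deb http://httpredir.debian.org/debian jessie main contrib non-free
apt-get update
apt-get install firmware-linux-nonfree
update-initramfs -u    # so the firmware is available at early boot
reboot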
My laptop: Windows 7 64-bit, Intel Core i5, 4 GB DDR3 RAM, 500 GB HDD. Hard drive partitions: C: drive 150 GB, D: drive 150 GB, and the remaining 160 GB not allocated. But when I install Fedora 12, it says to create a /boot/efi partition and a root partition, and I don't get it.
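If I understand the installer right, it just wants a conventional Linux layout carved out of the unallocated 160 GB, something like this (sizes are illustrative, and /boot/efi only applies if the machine boots via EFI):
Code:
/boot   500 MB     ext4    (or /boot/efi, FAT32, on EFI systems)
swap    4 GB               (roughly the size of RAM)
/       remainder  ext4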
I'm using a RHEL5/CentOS5 variant. After struggling with various 2 TB hard drive failures, I started using the Hitachi Deskstar and Ultrastar models, and had none of the earlier green-feature or QA problems I had with all the others. However, I began seeing SATA controller freezes under heavy loads, like large rsync mirror operations. My controller is an on-board ICH7R running in AHCI mode. The symptoms are that the drive goes offline and begins to log thousands of SMART errors for Spin-Up Retries. Analysis confirms that the drive is 100% OK afterwards, except for the huge SMART log. This happens on two servers and multiple drives. Changing the drive to the 1.5 Gbps interface speed reduces the problem greatly but does not eliminate it.
I'm working with Hitachi engineering on this, and their only theory is that it has to do with "non-zero buffer offset". Apparently this is a feature of NCQ for receiving data out of sequence, and Hitachi says that the ICH7R supports NCQ but not this feature. I am told that the BIOS is supposed to filter offending commands, but BIOS vendors sometimes err. An alternative is to filter them in the driver, which I believe is libata. Does anyone know anything about this issue? Where might be a good place to make a query about this with regard to libata? I've contacted the motherboard vendor about the BIOS issue, but I'd like to look into libata also.
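The most promising lead I've found so far is the kernel's libata.force= parameter, which can disable NCQ outright or pin the link speed without waiting on a BIOS fix; a sketch of what I mean:
Code:
# appended to the kernel line in /boot/grub/grub.conf:
#   libata.force=noncq       # disable NCQ on all ports
#   libata.force=1.5Gbps     # or pin the link speed, as I now do per-drive
# after booting, confirm the queue depth collapsed to 1:
cat /sys/block/sda/device/queue_depth
As for where to ask about libata itself, development traffic goes through the linux-ide mailing list (linux-ide@vger.kernel.org), which seems like the right venue.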
I'm running Kubuntu 9.10, and I like it (yay!), but I've been having issues with sound. Especially under system load I get an error saying that the device "analog 9xxx" has failed, falling back to digital; then almost immediately it says the device "digital 9xxx" has failed, falling back to analog. At this point any programs already playing sound seem to retain that ability as long as I don't close them, but any programs I open have no sound. This applies to webpages and Flash as well as local programs. I have also been having issues with video: no errors, but the video skips and freezes. Flash video occasionally crashes, and any Flash games that submit data (e.g. high scores) to a remote system crash my browser.
I end up having to pause the video and move it back several frames, then wait as if it had to load from ..... for a while, and it will run again for some time. My hardware is a Dell Inspiron E1505, on which the only things I have done are to replace a broken monitor and upgrade the RAM (that is, after I broke Windows too many times and converted to Linux). I don't have more error data for you; I simply haven't had the foresight to stick it into a text file yet. The error is displayed in the Notifications and Jobs button in the system tray, which is not well suited to copying and pasting, and I'm admittedly unsure how else to find the error. I learned bash on Sun UNIX at my local college and I still haven't quite learned all of how to speak Ubuntu's terminal.
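For next time it fires, these are the generic places I plan to capture the error from (standard paths, nothing Kubuntu-specific):
Code:
tail -f ~/.xsession-errors | tee ~/sound-errors.txt   # desktop-side messages, saved to a file
dmesg | tail -n 30                                    # kernel-side audio errors after a failure
lspci | grep -i audio                                 # identify the sound hardware for the report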
I am trying to disable accounts after 5 unsuccessful login attempts. I am following the guidelines in this article:
[URL]
This is on an Oracle Enterprise Linux 5.4 box, which is essentially RHEL 5.4. Here is what my /etc/pam.d/system-auth looks like:
--------
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
[code]....
Unfortunately, the account does not seem to be locked or disabled. As root, running 'su test2 -c <some-command>' always successfully runs <some-command> and leaves the failed-attempt count at 6. /etc/shadow does not have an * or ! anywhere in the encrypted password for the 'test1' user.
What am I doing wrong? I thought that with the max attempts set to 0 in faillog, the deny= parameter would be used. I also thought I should be using su <user> -c <command> from the root account to test whether the disable feature is working.
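For reference, the lines I understood the article to mean look like the sketch below (RHEL5-era pam_tally; deny=5 is my target). Two things I've since read that might explain my test: /etc/pam.d/su typically starts with pam_rootok.so, so su from root bypasses the auth stack where the tally check lives, and pam_tally keeps its counter in /var/log/faillog rather than marking /etc/shadow, which would explain why I see neither a lock nor a ! in the password field.
Code:
# in /etc/pam.d/system-auth (sketch, not my exact file):
auth     required   pam_tally.so onerr=fail deny=5
account  required   pam_tally.so
# inspect / reset the counter:
faillog -u test2
faillog -u test2 -r
If that's right, the real test is a failed ssh or console login as the user, not su from root.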
I need to scrape logfiles and do some pattern matching for a series of hardware and system faults, for example:
- Network interface down / up
- IO errors
- Out of inodes
- Out of disk space
- Memory errors
- Power failure
When the appropriate strings appear in the log (assuming /var/log/messages), a trap will be sent (customer-specific SNMP solution). So what I need (I think) is a list of the strings to match. Has anybody any idea where I can find a list of strings which will definitely appear in a RHEL5 log?
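To make the question concrete, this is the shape of the thing I'm building; the patterns below are illustrative guesses, not a verified RHEL5 list, and the verified list is exactly the part I need:
Code:
# watch the log and flag candidate fault strings (patterns are guesses, verify each)
tail -F /var/log/messages | grep -E -i \
  'link (is )?(up|down)|i/o error|no space left on device|inode|edac|machine check|power'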
I am currently having problems with my RAID partition. First, two disks were having trouble (sde, sdf). Through smartctl I noticed there were some bad blocks, so first I set them to fail and re-added them, so that the RAID array would overwrite the bad blocks. Since that didn't work, I went ahead and replaced the disks. The recovery process was slow, and I left things running overnight. This morning I found out that another disk (sdb) has failed. Strangely enough, the array has not become inactive.
Does anyone have any recommendations as to the steps to take with regard to recovery/fixing the problem? The disk is basically full, so I haven't written anything to it in the interim.
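For anyone answering, here is the read-only information-gathering I assume is safe to run first (md0 and the member names are placeholders for my real devices):
Code:
cat /proc/mdstat
mdadm --detail /dev/md0            # array state and which members are marked failed
mdadm --examine /dev/sd[bef]1      # per-member event counts
smartctl -a /dev/sdb               # check whether sdb has really died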
Until now I've found what I need by searching this site, so I haven't had a reason to sign up, since I can't really help anyone as of yet. With that said, here is my problem: I'm running a VPS with CentOS RHEL 5 host-in-a-box. I just did a rebuild of the server, and after a day or two pure-ftpd and sshd unexpectedly started closing out any incoming connections. I am the only one who uses ssh and ftp, so I'm not sure what the problem could be. I checked the logs and there is nothing about not being able to bind on the address.
I tried connecting through ssh in verbose mode and it connects to the server just fine, but drops the connection before it asks me for my key passphrase. If I enable password access, it drops before it asks me for the password. I've tried restarting sshd and ftpd. I've tried rebooting the machine. I've tried Google, but this problem seems to need a little more specific troubleshooting. I can get in through console access, but that doesn't help me much when I need to transfer files.
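In case it points somewhere, my next step is a one-shot debug instance plus a check of the tcp_wrappers files, since a ban there (e.g. from denyhosts) would hit both sshd and pure-ftpd the same way:
Code:
grep -v '^#' /etc/hosts.deny /etc/hosts.allow   # look for bans covering my IP
/usr/sbin/sshd -d -p 2222                       # debug sshd on a spare port, watch where it drops
tail -f /var/log/secure                         # server-side log while I connect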
I've been unable to successfully build a viable Celestia package under -current with the 13.1 SlackBuild. First off, to build Celestia in the "default" SlackBuild manner, that is to say with the GTK front-end, you need gtkglext. (The Lua package built without issue.) The problem is, gtkglext will not build. It gets to the part of the build where it builds the "scanner" library (I have no idea what this library does specifically), which compiles successfully but then fails to link. The newest release of gtkglext is 1.2.0, which is from 2006, so I think that some of the symbols it is looking for when linking against -current's updated GTK libraries are deprecated. Here's the last bit of output from the failed build of gtkglext:
[Code]...
"error: 'abs' is already declared in this scope"?? There's a function or variable being declared twice that can't be right. Maybe two headers use the same variable or function name, and they're winding up in the same scope Like I said, I don't know. Hence, this post. I really like Celestia. I could live with the mystery of why these things won't build, but I can't seem to find any pre-built binary packages out there.
Of course, you can't try the KDE front end, since it will only work with KDE 3.
I tried Ubuntu for a few weeks, but I couldn't get the NVIDIA drivers to work; I tried everything. So I got sick of that and went to download openSUSE, which seems to be even nicer (..... ). I downloaded the 11.1 liveCD (GNOME) from here. I nicely got the welcome screen and the options screen (liveCD, check errors, etc.). I chose liveCD and it started booting; I got the splash screen with the loading bar, and after it completely loaded, the screen turned black with a lot of messages, and at the end:
Code:
GdmLocalDisplayFactory: Maximum number of X display failures reached: Check X server log for errors.
Great, I hoped for better luck after Ubuntu... My sys specs: Asus F3Sc, 32-bit Intel Centrino Duo.
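If someone can tell me what to look for: from a text console (Ctrl+Alt+F1) I can at least read the X log, and I gather the boot prompt takes extra kernel options; both of these are guesses on my part:
Code:
less /var/log/Xorg.0.log
# options to try at the liveCD boot prompt:
#   x11failsafe      (openSUSE's fallback X configuration)
#   nomodeset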
A few weeks ago, I upgraded one of our machines to the 64-bit server edition of 10.04. All went reasonably well (after some hoops to get it to properly recognize/mount the RAID + LVM partitions), but one of the things that confounded the installation process was login timeouts. From doing some research, I thought I'd gotten it solved by removing the checks for updated packages on login, but either I didn't get it done, or there's something else going on here. The machine is headless and is actually a file & application server, but I do need to connect to it via ssh on a regular basis. However, the issue isn't specific to SSH, as it also happens when I try to log in through the physical console.
What happens is that every so often, login attempts will time out for a period of minutes, then they will suddenly start working again, instantly giving me my expected shell. Again, I suspect it is something with the way the motd junk is being generated, but having a server that prevents you from logging in is totally unacceptable. I've no idea how to go about debugging what happens, because I don't see anything useful in any of the logs, and I don't have a way to trace the login process from outside. I'd love to hear what the problem actually is and how to prevent it from happening.
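The only two tests I know how to formulate so far: rule out reverse-DNS stalls in sshd, and switch off the dynamic motd scripts entirely. The second would fit both symptoms, since pam_motd runs those scripts for console logins as well as ssh on 10.04:
Code:
# 1) rule out reverse-DNS lookups stalling sshd (cheap to try, even if it can't explain the console):
echo 'UseDNS no' >> /etc/ssh/sshd_config && service ssh restart
# 2) disable the dynamic motd scripts; they call out to apt and can block:
chmod -x /etc/update-motd.d/*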
I have been having problems with OpenOffice and wanted to try uninstalling and re-installing it to correct the problem, but when running the uninstall or re-install I get the error below. Any idea what is causing it?
So far I've been able to get Samba to connect to my WORKGROUP, and I can see my Vista PC as well as my 3 HDDs, but when I try to open a folder, any folder (C$, D$, or E$), I'm confronted with a user name and password prompt. No user name or password combination associated with either machine, openSUSE or Vista, will grant me access. Why am I seeing this prompt, and, what I would really like to know, can it be disabled altogether? Otherwise, what user name and password does SUSE want? Do I need to tell SUSE a user name and password in a terminal?
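After more digging, I suspect the share names themselves: C$, D$ and E$ are Windows' hidden administrative shares, and Vista by default refuses remote administrative-share access for local accounts, so no password will ever work there. My plan is to share a folder normally on the Vista side and test like this (machine and account names are placeholders):
Code:
smbclient -L //VISTAPC -U vistauser        # list what Vista is actually sharing
smbclient //VISTAPC/Shared -U vistauser    # connect to a normally shared folder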
This script is intended to allow you to simultaneously run a command on a set of remote hosts in a single gnome-terminal tabbed window. It runs it through screen so if it's a long process it's immune to network failures.
The command string gets built ok, but when it comes to executing the gnome-terminal command, it chokes with
Quote:
Argument to "--command/-e" is not a valid command: Text ended before matching quote was found for ". (The text was '"ssh')
If you copy/paste the "Running: gnome-terminal" line, it works as expected.
I'm pretty sure the problem is with the command-line variable expansion, but I don't know what else to try.
Another thing I'd like to be able to do is keep the shell open after the command finishes. Right now I just get "screen terminated" and the gnome-terminal tab says bye-bye.
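For the record, here's a cut-down sketch of the fix I'm considering: build the argument list as a bash array instead of a flat string, so nothing gets re-split, and tack exec bash onto the end so the tab survives the command. HOSTS and REMOTE_CMD stand in for what my script really builds, and the ssh/screen invocation is simplified:
Code:
#!/bin/bash
HOSTS=(web1 web2 web3)      # placeholders
REMOTE_CMD='uptime'

args=()
for h in "${HOSTS[@]}"; do
    # each -e argument stays a single word; gnome-terminal re-parses it itself
    args+=(--tab -e "bash -c 'screen ssh $h $REMOTE_CMD; exec bash'")
done
gnome-terminal "${args[@]}"
That would also explain why the copy/pasted line works: the interactive shell re-parses the quoting that the unquoted variable expansion destroyed.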
I am setting up a cluster of servers which use CentOS Directory Server for control of logins, etc., and Kerberos for authentication. The basic setup is working fine; I have been able to manually create accounts using the directory console and these accounts seem to work. Now what I want to do is automate the process of creating new accounts. I am writing a Perl script which can be run by one of the server administrators; they supply a small number of arguments and it should create a new user in the directory server, and also create a principal in Kerberos.
I want them to be able to do this using their logged-in Kerberos credentials, i.e., without having to enter and re-enter their passwords. My first attempt was to use the Perl modules Net::LDAP and Authen::SASL. I could not get this working, so I fell back to using the LDAP command-line tools, but even these I cannot seem to get working! When using the mozldap tools, as specified in the admin manual, I get the following:
Using the openldap tools I strike exactly the same problem:
Code:
$ ldapmodify -Y GSSAPI -H LDAP://ldaphost.mycompany.com -D uid=eharmic,ou=mydept,dc=mycompany -U eharmic < ../ldapmod.txt
SASL/GSSAPI authentication started
ldap_sasl_interactive_bind_s: Invalid credentials (49)
        additional info: SASL(-14): authorization failure:
I believe I have set up the mapping correctly:
Code:
dn: cn=MyMapping,cn=mapping,cn=sasl,cn=config
objectClass: top
objectClass: nsSaslMapping
cn: MyMapping
nsSaslMapRegexString: ^(.+)@MYCOMPANY.COM
nsSaslMapBaseDNTemplate: ou=mydept,dc=mycompany
nsSaslMapFilterTemplate: (uid=1)
It must be getting reasonably far because after doing the above I can see the LDAP service ticket in my "klist" output.
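One thing I'm now second-guessing in that entry: the Directory Server examples use a backreference in the filter template, i.e. (uid=\1) with a backslash, so the regex capture actually lands in the search filter. If the entry really holds a literal (uid=1), every bind would map to a user named "1" and fail with err=49, which would fit what I'm seeing. The fix would be something like:
Code:
dn: cn=MyMapping,cn=mapping,cn=sasl,cn=config
changetype: modify
replace: nsSaslMapFilterTemplate
nsSaslMapFilterTemplate: (uid=\1)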
I've got this weird problem: when I reboot my Debian 8.3 server, I have to run through the crypto unlocking processes for my encrypted volumes a few times before I actually get to a login screen. The operation times out 85% of the time, leaving me to reboot and try over and over until the system is happy.
Here's my partitioning setup (manually partitioned at install):
Code:
/boot:  500 MB,  EXT2, nodev, nosuid, noexec
/tmp:   2 GB,    EXT2, AES-256/xts-plain64 with RANDOM KEY
swap:   2.5 GB,        AES-256/xts-plain64 with RANDOM KEY
/:      35 GB,   EXT4, AES-256/xts-plain64 with PASSPHRASE
/var:   35 GB,   EXT4, AES-256/xts-plain64 with PASSPHRASE
/home:  45 GB,   EXT4, AES-256/xts-plain64 with PASSPHRASE
Here's the output from journalctl -b -p 3:
Code:
Date and time | server name | systemd[1]: Timed out waiting for device dev-sda5.device
Date and time | server name | systemd[1]: Dependency failed for Cryptography Setup for sda5_crypt
Date and time | server name | systemd[1]: Dependency failed for Encrypted Volumes
Date and time | server name | systemd[1]: Dependency failed for dev-mapper-sda5_crypt.device
Date and time | server name | systemd[1]: Dependency failed for /tmp
[Code] ....
I had the same problem in previous builds where I chose Twofish instead of AES, and I was hoping the timeouts would be fixed by switching to AES, since my CPU has the AES instruction set. Obviously that didn't make a damn bit of difference.
What am I doing wrong, or what should I change in my setup? The encryption is a requirement. Could the problem be caused by something as stupid as using a RANDOM KEY instead of a PASSPHRASE on /tmp and swap?
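One more thought on my side: the journal points at the device (dev-sda5.device) rather than at the key, so I doubt RANDOM KEY itself is at fault. What I'm wondering is whether naming the volumes by stable links instead of sdaN in /etc/crypttab would stop systemd racing device enumeration at boot. A sketch of what I mean (the by-partuuid names are placeholders for whatever ls -l /dev/disk/by-partuuid/ really shows, and I'm not certain jessie's systemd honours them in crypttab); note also that AES-256 in XTS mode needs size=512:
Code:
# /etc/crypttab (sketch)
sda5_crypt   /dev/disk/by-partuuid/xxxxxxxx-05   /dev/urandom   tmp,cipher=aes-xts-plain64,size=512
cswap        /dev/disk/by-partuuid/xxxxxxxx-06   /dev/urandom   swap,cipher=aes-xts-plain64,size=512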