CentOS 5 Server :: Removal Of Gconfd Directories Causes Reboot?
Jul 27, 2011
I run a CentOS 5.6 guest (a webserver) in a KVM VM on a CentOS 5.6 host. When I lost my GNOME desktop, I received the suggestion to remove the gconfd stuff and reboot. I found two gconfd directories in /tmp and removed them. Now the webserver VM reboots continuously. I forced down the VM, rebooted the base system etc., but that did not help at all.
I need some kind of step-by-step process to restrict my users to access only the directories that I specify. For example, user joe should be able to access only his home directory, with read access to /tmp and read access to /var/log/httpd.
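One way to do that with nothing but standard groups and permissions, sketched below, is to put the restricted directory behind a supplementary group (the group name "logreaders" is just an example; /tmp is already world-readable by default, so it needs nothing):

    # create a group for log readers and add joe to it
    groupadd logreaders
    usermod -aG logreaders joe
    # members of the group get read access to the Apache logs,
    # everyone else is shut out
    chgrp -R logreaders /var/log/httpd
    chmod -R g+rX,o-rwx /var/log/httpd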
I noticed in Fedora, under Authentication Configuration -> Advanced, that there is an option to "Create home directories on the first login". I'd like to know if it's possible to enable that through a text config file on a CentOS box that has LDAP authentication enabled. Right now it complains that the home folder does not exist when logging in with an LDAP account.
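What that Fedora checkbox toggles is the pam_mkhomedir module, and it can be enabled the same way on CentOS; assuming LDAP auth already works, either of these should do it:

    # one-shot, via authconfig (this flag exists on CentOS 5)
    authconfig --enablemkhomedir --update

    # or by hand: add this line to the session block of
    # /etc/pam.d/system-auth
    session     optional      pam_mkhomedir.so skel=/etc/skel umask=0077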
I have one server with JBoss and Tomcat installed, and I have to start these servers manually every time I reboot the server. How can I get JBoss and Tomcat to start automatically when the server reboots?
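The quick way on CentOS is /etc/rc.local, which runs last at boot; the paths and service accounts below are assumptions, so adjust them to your install (the cleaner long-term route is proper init scripts registered with chkconfig --add):

    # appended to /etc/rc.local
    su - tomcat -c '/opt/tomcat/bin/startup.sh'
    su - jboss -c 'nohup /opt/jboss/bin/run.sh > /var/log/jboss-boot.log 2>&1 &'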
I started an upgrade from Ubuntu 8.04 to 10.04 and it stopped with the message "Ubuntu desktop is listed to be removed but is on the removal blacklist", then restored back to 8.04. I don't know how to resolve this - it would be alright to remove the old Ubuntu desktop.
'service xend status' returns nothing - it simply moves to the next line. 'service xend start' does the same thing. 'xm list' gives:
ERROR: Internal error: Could not obtain handle on privileged command interface (2 = No such file or directory)
Error: Unable to connect to xend: No such file or directory. Is xend running?
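That particular error usually means xend cannot reach the hypervisor, which most often means a non-Xen kernel was booted; worth checking before digging deeper (a quick sketch):

    # the running kernel must be a xen build for xend to start
    uname -r                  # should end in "xen", e.g. 2.6.18-238.el5xen
    # confirm xend is enabled, then look at its log after a start attempt
    chkconfig --list xend
    service xend start
    tail /var/log/xen/xend.log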
I'm running a CentOS 5.2 VM on a W2K8 Hyper-V core server. The problem I have is that when I reboot, the system time always resets itself to 6:00 am. I tried the /sbin/hwclock --systohc command, but it hasn't taken hold.
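Until the drift itself is solved, syncing from NTP at boot works around it; ntpd and ntpdate ship with CentOS (the NTP server below is just an example):

    # step the clock once, then keep it disciplined across reboots
    ntpdate pool.ntp.org
    chkconfig ntpd on
    service ntpd start
    # and push the corrected time back into the hardware clock
    /sbin/hwclock --systohc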
I just posted about this in this thread, but as the other thread was started by a KDE user then I thought I'd post here as well. I've had high CPU usage for a few months now - probably since trying the 0.9 branch of Compiz then dropping back to the default openSUSE builds (XOrg and gconfd-2 running a Core i5 at about 30% on every core*). I've now finally found a solution after deciding I wanted to fix it once and for all.
Once again, the Ubuntu forums come to the rescue with this thread (I don't like the distro as a whole, but I do find the forums useful!). I'm using Compiz, but it turns out that Metacity was running as well. A quick "killall -9 metacity" and the gconfd-2 process has vanished and XOrg settled down to its normal 1-2% (which is reasonable when I've got a Conky config refreshing every fraction of a second to repaint a sound visualiser!). Now I just need to find out why Metacity starts when I'm using Compiz...
* according to Conky's per-core graphs, although top only reported 15% overall and the Conky "top 3 procs by CPU" reported a measly 3% for each process, so someone's maths was out somewhere!
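For anyone hitting the same thing: in GNOME 2 the window manager the session launches is recorded in GConf, so it is worth checking what the session thinks it should start (key path as in GNOME 2.2x; treat this as a sketch):

    # which WM does the session ask for?
    gconftool-2 --get /desktop/gnome/session/required_components/windowmanager
    # pointing it at compiz should stop metacity being spawned at login
    gconftool-2 --type string \
        --set /desktop/gnome/session/required_components/windowmanager compiz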
I am running a Dell PowerEdge 2600 on which I just did a fresh install of CentOS 5.5 with KDE. The install went perfectly, with no hiccups or errors. I reboot the server and CentOS does its normal startup checks, giving a green "OK" or red "Error" for everything, and it all seems to be fine. I then get to a black screen with an X mouse cursor. I can move the cursor freely, I can turn the Num Lock key on and off, and I can do a Ctrl+Alt+F5 and then reboot with Ctrl+Alt+Del smoothly, but I just can't get CentOS to boot all the way into the GUI.
We have a CentOS 5.6 server mounting two iSCSI volumes from an HP P2000 storage array. multipathd is also running, and this has been working well for the few months we have been using it. The two volumes presented to the server were created in LVM and worked without problem. We had a requirement to reboot the server, and now the iSCSI volumes will no longer mount. From what I can tell, the iSCSI connection is working OK, as I can see the correct sessions, and if I run 'fdisk -l' I can see the iSCSI block devices, but the OS isn't seeing the filesystems. No LVM command shows the volumes at all, and 'vgchange -a y' only lists the local boot volume, not the iSCSI volumes. My concern is that the output of 'fdisk -l' says 'Disk /dev/xxx doesn't contain a valid partition table' for all the iSCSI devices. Research suggests that running 'vgchange -a y' should automatically activate any VGs that aren't showing, but it doesn't work.
There's a lot of data on these iSCSI volumes, and I'm no LVM expert. I've read that some people have had problems where LVM starts before iSCSI and things get a bit messed up. I can't tell whether that is the case here, but if there's a way of switching the order around, I'm prepared to give it a go. There was absolutely no indication of any problems with these volumes before the reboot, so corruption is highly unlikely.
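Before assuming the worst, it is worth forcing LVM to rescan once the iSCSI sessions are up, and making sure the mounts wait for the network at boot; a sketch follows (the VG name is illustrative). Note that 'doesn't contain a valid partition table' is normal if the PVs were created on whole disks rather than partitions:

    # rescan now that the iSCSI block devices exist
    pvscan
    vgscan
    vgchange -a y vg_iscsi        # "vg_iscsi" stands in for the real VG

    # /etc/fstab entries for iSCSI-backed volumes should use _netdev
    # so the netfs service mounts them after iscsi starts:
    /dev/vg_iscsi/lv_data  /data  ext3  _netdev  0 0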
CentOS 5.3 includes ext4 and improved support for encrypted file systems, but the encryption support appears to be aimed at laptop/desktop systems, in that a password must be entered at boot time.
Is it possible to have a server with an encrypted root file system boot up without entering a password?
Mandos (http://wiki.fukt.bsnet.se/wiki/Mandos) will do it, by serving up the password from another server (http://packages.debian.org/squeeze/mandos) to a client loaded into the initial RAM disk environment (http://packages.debian.org/squeeze/mandos-client) - but it's not available on CentOS, and is only in Debian unstable.
Is there a similar (or any) solution for CentOS?
In particular, I'm envisaging encrypted virtual machines being served passwords from their virtual host.
Alternatively, the data that *really* needs to be protected could be encrypted while the system core remains unencrypted. But then the keys to decrypt the file system must be stored in the unencrypted portion, so on its own this is not an effective method.
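That said, a rough DIY version of what Mandos does is feasible for a non-root data volume: leave the core unencrypted and have a boot-time script fetch the key from the virtual host over the network, so the key never lives on the guest's disk. Everything below is illustrative (the IP is libvirt's default host address), not a hardened design:

    # appended to /etc/rc.local on the guest
    wget -q -O /dev/shm/lukskey http://192.168.122.1/keys/guest1.key \
      && cryptsetup luksOpen --key-file /dev/shm/lukskey /dev/sdb1 data \
      && rm -f /dev/shm/lukskey \
      && mount /dev/mapper/data /data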
I recently installed what I thought was going to be the Ubuntu client edition onto my laptop, but it turned out to be server 10.04. I kept meaning to uninstall it and install the client version, but have left it so long that I have forgotten the username and password. Is there any way around this, and how do I go about uninstalling the server edition to install the client?
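The forgotten password is recoverable without a reinstall: boot into recovery mode and reset it from the root shell. And rather than uninstalling, a 10.04 server install can be turned into a desktop one in place:

    # reboot, hold Shift for the GRUB menu, choose "Recovery mode",
    # then "Drop to root shell prompt" - no password is needed there
    passwd youruser           # "youruser" stands in for the forgotten account
    # then, from a working login:
    sudo apt-get update && sudo apt-get install ubuntu-desktop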
We are slowly migrating from a predominantly Windows house to a 50/50 Win/RHEL operation, and will move even further in that direction in the future. Currently, we have a LOT of Windows folders that are created by custom applications which, upon creation of a new folder set, apply the corresponding ACL so that only the associated groups are able to access the folders. Now for the problem: we are migrating the applications to a RHEL 5.5 environment, and they are creating the folders on that system now, but the groups still reside in Windows AD. Is there an "easy" (I know, a very relative term) way to give the Windows groups permission to the Linux shares without much manual intervention?
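The usual answer is Samba plus winbind: join the RHEL box to AD so that AD users and groups resolve as Linux users and groups, after which ordinary permissions and ACLs can reference them directly. A sketch of the moving parts (realm, workgroup and ID ranges are placeholders; /etc/nsswitch.conf also needs winbind added to the passwd and group lines):

    # /etc/samba/smb.conf (fragment)
    [global]
        security = ads
        realm = EXAMPLE.COM
        workgroup = EXAMPLE
        idmap uid = 10000-20000
        idmap gid = 10000-20000
        winbind use default domain = yes

    net ads join -U Administrator     # join the domain
    service winbind start
    getent group "domain admins"      # AD groups should now resolve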
I am a Red Hat user from the old days (since 6.x), just switching back from Debian/Ubuntu to CentOS on some servers, but I cannot understand the kernel update strategy currently enabled in CentOS. There are two boxes with almost identical installations, but recently there was an automatic kernel update on one box. This auto update also seems to have triggered an automatic reboot of the machine, which is unacceptable on servers.
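Stock CentOS does not reboot after a kernel update, so that behaviour is worth tracing to yum-updatesd (if it was configured with do_update = yes in /etc/yum/yum-updatesd.conf) or to a local cron job. Two common ways to rein it in:

    # stop unattended updates entirely
    chkconfig yum-updatesd off
    service yum-updatesd stop

    # or keep updates but never touch the kernel: add to the
    # [main] section of /etc/yum.conf
    exclude=kernel*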
I can log in as root and create directories fine, but when I FTP in, or when I try to use the file manager in Plesk, I get a permission error when I try to create a directory. Any ideas why it does this? I also have a WordPress blog, and when I try to add a new theme, the theme won't add because it is unable to create the folder to put the new theme into - this seems to be the same issue. I've tried altering the folder permissions, but this doesn't make any difference. Is there a way to let my FTP user and WordPress create directories?
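In Plesk this is almost always ownership rather than mode bits: the web content is expected to belong to the domain's FTP user and the psacln group, so that both FTP and Apache can write. Something along these lines, with the username and path adjusted to the real domain (treat it as a sketch):

    # reset ownership under the domain's docroot
    chown -R ftpuser:psacln /var/www/vhosts/example.com/httpdocs
    # give the group (which Apache belongs to under Plesk) write access
    chmod -R g+w /var/www/vhosts/example.com/httpdocs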
Problem: I need to map directories to a user's home directory when they log in.
For example, I need to map /school/homework/ to user "steve" in his home directory when he logs in. I'm guessing I could use a logon script, but I can't figure out what command I should be putting in the script. I've been searching for hours through man pages and googled it a ton and can't find anything on it.
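A bind mount is probably the missing command: it makes one directory appear at a second path. Since mounting needs root, a single /etc/fstab entry mounted at boot is often simpler than a logon script (paths as in the example above):

    # one-off, as root:
    mount --bind /school/homework /home/steve/homework

    # or permanently, one line in /etc/fstab:
    /school/homework  /home/steve/homework  none  bind  0 0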
I'm trying to rsync files and directories from a Red Hat Linux host (v4.5 & 4.7) to a Windows Server 2003 R2 Standard Edition machine running Cygwin. I'm executing the rsync command from the Cygwin shell. The transfer involves rsync'ing approximately 1 TB of data from the Linux server to the Windows server. After about 280+ GB has transferred, the transfer just dies.
There seems to be no particular file or directory that the transfer stops at. I'm able to rsync GBs of data from other Linux hosts to this Cygwin server with no problem - files and directories rsync fine. The network infrastructure is essentially the same regardless of the server being rsync'ed, in that it is gigabit Ethernet running through Cisco gigabit switches. There appear to be no glitches or hiccups across the network path.
I've asked the folks at rsync.samba.org if they know of any problems or issues. Their response has been neutral: if the version of rsync that Cygwin has ported is within standards, then there is no rsync reason this problem should happen. I've asked the Cygwin support site if they know of any issues, and they have yet to reply. So my question is whether the version of rsync ported to Cygwin is standard. If so, is there any reason Cygwin & rsync keep failing like this?
I've asked the local rsync-on-Linux gurus and they can't see any reason this should fail from a Linux perspective. Apparently I am our company's Cygwin knowledge base by default.
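While the root cause is being hunted down, rsync can at least be made to survive the failure and resume where it left off; a retry loop like this is a common workaround (host and paths are placeholders, the flags are standard rsync):

    # keep partial files, time out dead connections, retry until clean
    until rsync -a --partial --timeout=300 \
            user@linuxhost:/data/ /cygdrive/d/backup/
    do
        echo "rsync died, retrying in 60s" >&2
        sleep 60
    done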
I was wondering if anyone else has noticed the following behavior lately. Sometimes, when I enter a directory, during the generation of thumbnails nautilus freezes up with 100% cpu use. It has to be killed. It will then happen whenever I try to enter that specific directory with nautilus (although access with Konqueror or command-line is fine). Turning off the previews allows nautilus to enter that directory (usually... once even that didn't work). If I create a new directory and move all the files from the troublesome directory into the new directory, I can enter the new directory with nautilus and it works fine, even with thumbnails turned on. I can even rename that directory to the same name as the problematic one had and it will work. Long ago this behavior used to happen very rarely, then it seemed to be fixed, and now it is happening again but fairly frequently.
I was just wondering if anyone else has experienced this and, if so, do they know what is causing it? A simple solution is to turn off thumbnails but I'd prefer another solution.
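For anyone wanting to dig rather than disable previews: GNOME caches thumbnails, including failed ones, so clearing the cache is a reasonable first step (paths per the GNOME 2 cache layout), and bisection finds the offending file:

    # clear cached (and cached-as-failed) thumbnails, then retry
    rm -rf ~/.thumbnails/normal ~/.thumbnails/large ~/.thumbnails/fail
    # if it still hangs, move half the files out, retry, and repeat
    # to narrow down the one file whose thumbnailer loops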
I'm using CentOS 5.5. I am trying to write a script that will find recently created directories (touched within 30 days) and create a symbolic link to those directories in another folder. Here is the script:
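For comparison, a minimal sketch of that logic, with placeholder paths /data and /data/recent; note that find's -mtime tests modification time, which is the closest proxy for "recently created" on ext3:

    #!/bin/bash
    # link every top-level directory under /data that was touched
    # in the last 30 days into /data/recent
    find /data -mindepth 1 -maxdepth 1 -type d -mtime -30 | while read d
    do
        ln -sfn "$d" "/data/recent/$(basename "$d")"
    done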
I am in need of Linux help. I am at college and I need this backup/restore script to pass the final part of an assessment. I require a backup script that will not only back up but also restore files to the relevant directories. E.g. users are instructed to store all word-processor files in a directory named wp. So I need to create a backup directory, and 3 directories within that, and some files within the 3 directories, and then back them up and restore them. I know I should do this myself, and I have been trying to find and understand info for the last few days, but came up with zero.
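A skeleton that matches the assignment as described, with all names invented for the example (three directories under ~/work, tar as the archiver):

    #!/bin/bash
    # set up the test structure once
    mkdir -p ~/work/{wp,db,mail}
    touch ~/work/wp/letter.txt ~/work/db/notes.db ~/work/mail/inbox.txt

    case "$1" in
      backup)  tar czf ~/backup.tar.gz -C ~ work ;;
      restore) tar xzf ~/backup.tar.gz -C ~ ;;
      *)       echo "usage: $0 backup|restore" ;;
    esac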
I want to make a webserver where multiple users are allowed to log in through SFTP to a specific folder, www. Multiple users are added, let's say user1 and user2, all of them belonging to the www-data group. The www directory has owner www-data and group www-data.
I have used chmod -R 775 on the www folder, but after I create a folder "test" through SFTP (using FileZilla), the group of the created directory has only r and x permissions, and I am not able to log in as the second user, user2, and create a directory within www/test, due to the lack of w permission for the group.
I also tried using chmod 2775 on the www directory, but without luck. Can somebody explain how I can make a newly created directory inherit the parent directory's group permissions?
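The setgid bit (the 2 in 2775) is half the answer: new directories then inherit the group, but their group permissions come from each user's umask, and SFTP sessions typically run with umask 022, which strips group write. With OpenSSH 5.4 or later the in-process SFTP server can force a umask; on older versions the umask has to be set in the users' shell startup files instead:

    # /etc/ssh/sshd_config - force group-writable uploads
    Subsystem sftp internal-sftp -u 0002

    # and keep the setgid bit so new dirs stay in group www-data
    chmod 2775 /var/www/www        # path illustrative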
I'm planning an NFS share for a small enterprise (25 NFS clients). I need to create a directory structure, but I'll need to set up different permissions (rw/ro) on some directories of the tree. I wonder if it's possible to grant access using group IDs, as that would be ideal for this application. Is it possible? I was also thinking that I would need some kind of centralized user info, such as NIS or LDAP. Is that necessary?
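Group-based permissions work fine over NFS, but the server only sees numeric UIDs and GIDs from each client, so the same IDs must mean the same users and groups everywhere; that is exactly why centralized accounts (NIS or LDAP) are the usual companion for more than a handful of clients. The export itself stays simple, and the per-directory work is done with ordinary permissions (all names illustrative):

    # /etc/exports on the server
    /srv/share  192.168.1.0/24(rw,sync,root_squash)

    # per-directory access via groups:
    chgrp engineering /srv/share/projects    # rw for this group
    chmod 2770 /srv/share/projects
    chgrp staff /srv/share/docs              # read-only for this one
    chmod 2750 /srv/share/docs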
I'm having trouble setting up a vsftpd server correctly. What I want to do is allow a number of users to log on (no anonymous user), with each of them taken to their own "top level directory" from which they cannot escape.
I've got most of this working, but I can't find a way to automatically transfer each user to *their* working area. The "local_root" directive doesn't quite do what I want as everybody has to share the same working area (potentially users could interfere with each other). On the other hand I don't want each user to work from their home directory because there are loads of special files there that I don't want users playing with.
To add one extra complication, I'm also running an HTML server on the same machine. One of the directories the web server can see is one of the FTP area root directories. (So what I'm trying to do is give one special user the ability to FTP files onto the web server; other users must *NOT* have this ability.)
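vsftpd supports this directly through per-user config files: chroot everyone, then override local_root per user so each lands in, and cannot leave, their own tree; the one special user simply gets a local_root inside the web tree. Paths below are illustrative:

    # /etc/vsftpd/vsftpd.conf
    chroot_local_user=YES
    user_config_dir=/etc/vsftpd/users

    # /etc/vsftpd/users/joe
    local_root=/srv/ftp/joe

    # /etc/vsftpd/users/webadmin   (the one special user)
    local_root=/var/www/html/uploads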