I'm trying to set up a web server at home. I have a static IP address from my ISP (o2). I've built a server (2.8 GHz P4 HT, 2 GB RAM, 500 GB HDD) just to test. But I don't know what to enter in the IPv4 address settings (I think that's what it is). Do I put in the static IP address my ISP gave me, or the local IP of the server (which is 192.168.1.72)? I'm really confused. I've tried everything I can think of and have reinstalled the OS about 40 times so far.
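As a rough sketch of how this is usually laid out (assuming the server runs a Red Hat-style Linux, the interface is eth0, and the router at 192.168.1.1 is what actually holds the public static IP), the NIC on the server only ever gets the local address:

Code:
# /etc/sysconfig/network-scripts/ifcfg-eth0  (illustrative example)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.72        # the server's LOCAL address
NETMASK=255.255.255.0
GATEWAY=192.168.1.1        # the router/modem, which owns the public static IP

In a typical NAT setup the public address never appears on the server itself; the router forwards port 80 to 192.168.1.72.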
I just installed Webmin on this CentOS 5.5 client and would like to know if it's possible to remotely administer the Squid server. The machine I'm using for Webmin is different from the one where the Squid proxy is running.
Can anyone here point me to a walk-through or discussion of how to use Webmin to set up port forwarding/NAT on a dual-NIC CentOS 5.3 box? The layout will be simple:
Internet --- NIC1 [CentOS Box] NIC2 --- Switch to other PCs
We have a BUNCH of exposed services that are on special ports -- for example, to connect to one machine, you go in with [IP_Address]:12000, and to connect to another, [IP_Address]:12002, etc., etc. We're currently using OpenSuse 10.3 on this box, and YaST makes this criminally easy (you give it the incoming port number and the destination IP/port numbers and it just works). But OpenSuse 10.3 is nearing EOL, we're buying a new machine, and I'd like to use CentOS on the new one.
I've read the sparse Webmin documentation in their Wiki, and it leads one to believe that you simply insert a "NAT" rule. But there's obviously something they're leaving out. I *am* opening the ports in the firewall. But when I log in to [IP_Address]:port, it just times out. The port forwarding never occurs. The test in this case is SSH, and I know that SSHD is working properly because I can log into that machine just fine from another PC on the same internal subnet.
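For what it's worth, the NAT rule Webmin writes boils down to iptables, and the piece that's easy to miss is that the forwarded traffic also has to be allowed through the FORWARD chain and IP forwarding has to be switched on. A sketch from the shell (the NIC name, internal address and ports are placeholders for your setup):

Code:
# let the kernel route between NIC1 and NIC2
echo 1 > /proc/sys/net/ipv4/ip_forward

# rewrite the destination of traffic hitting the public port...
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 12000 \
         -j DNAT --to-destination 192.168.0.10:22

# ...and let the rewritten packets through the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.0.10 --dport 22 -j ACCEPT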
This might not be a CentOS-related issue, but since I'm using CentOS I guess it doesn't hurt to ask; I've used Ubuntu before and haven't encountered this. So, I've just installed CentOS and Webmin and now I'm trying to configure the server. The problem is that on the Apache configuration page I don't have the option (it should be there) to configure the Apache modules. I've attached a file to show where the modules option should have appeared (as it did before).
While communicating with cPanel, they said they don't support any NAT/router-based network. To host a website with cPanel, the internet should be connected directly to the modem (no router). I have a Dell PowerEdge server and recently bought a PCI modem. I have 8 static IP addresses from my ISP. As they said, cPanel doesn't support NATed networking.
I have a few mail servers, a mail log server and a web server running on CentOS 5. Now I have a task: to avoid accidental crashes on the production servers while installing updates, my boss asked me to make clones of the servers (excluding the actual e-mails and log contents) and run those clones as VMware virtual machines on VMware Server. That way, I will first install and test updates on the clones and, if they run without crashes, apply the updates to the real production servers.
I have already installed VMware Server 2.0. I have a few questions: How do I build the virtual machines so they exclude the actual mail files and mail logs? Can I use VMware Converter for this purpose, or do I have to use another program? How do I actually do this cloning? Is there a tutorial on how to do this?
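One way of handling the "exclude the mail and logs" part (just a sketch: the paths are guesses at a typical CentOS mail-server layout, and /mnt/clone-root stands for the mounted disk of a freshly installed VM) is to build plain VMs from the same CentOS release and copy the system over with rsync, skipping the data directories:

Code:
# pull the production system into the clone's mounted root,
# leaving out mail spools, logs and pseudo-filesystems
rsync -a --exclude='/var/spool/mail/*' --exclude='/var/log/*' \
      --exclude=/proc --exclude=/sys --exclude=/dev \
      root@production-server:/ /mnt/clone-root/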
I wrote a script to wake up my Windows machine and do an rsync backup of some of my files. I wanted to make this command accessible through the local bin, so I made it executable. The problem is that when it copies files, it copies them with root permissions and I can't edit or delete them. How can I make the files transfer with the proper permissions for my Ubuntu user?
Code:
#!/bin/bash
# Description: This script first wakes up the client machine and syncs the appropriate folders.
# Finally the script shuts down the client if it was off to begin with.
if [ "$(whoami)" != "root" ]; then
    echo "Permission Denied"
    exit 1
fi
.....
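If the script has to stay root (for the wake-on-LAN and shutdown parts), one workaround is to hand ownership of the copied files back to your normal account afterwards; a sketch that assumes the script is started with sudo, so $SUDO_USER contains your Ubuntu user name, and the paths/hosts are placeholders:

Code:
SRC="windowsbox::backup/"            # hypothetical rsync source
DEST="/home/$SUDO_USER/backup/"

rsync -av "$SRC" "$DEST"

# give the synced files back to the invoking (non-root) user
chown -R "$SUDO_USER:$SUDO_USER" "$DEST"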
I have created an FTP user on CentOS 5, but it has permission to delete files in other locations, view the entire directory tree, and create folders anywhere. How do I deny these permissions for that particular user? Please help me grant permissions only on a specific location designated by root.
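What you can do depends on which FTP daemon is in use; if it happens to be vsftpd (common on CentOS 5), a sketch of locking a local user into a single directory looks like this (the user name and path are examples):

Code:
# jail local users into their home directories
echo 'chroot_local_user=YES' >> /etc/vsftpd/vsftpd.conf

# make that user's home the one directory root wants to expose
usermod -d /srv/ftp/uploads ftpuser

service vsftpd restart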
I'm using ssh2, a PHP extension, to create a new folder on my server; however, if I try to set the permissions of that folder to anything above the default of 0755, it still creates the folder with the default setting.
It seems like there is some setting preventing me from creating a folder with higher permissions, e.g. 0777.
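For what it's worth, this is usually the umask of the SSH/SFTP session on the server rather than anything on the PHP side: whatever mode is requested gets ANDed with the inverse of the umask. A quick shell illustration of the same masking effect (directory names are just examples):

Code:
umask 022            # a common server default
mkdir demo1
stat -c '%a' demo1   # prints 755: the requested 777 minus the 022 mask

umask 000
mkdir demo2
stat -c '%a' demo2   # prints 777 once the mask no longer strips the bits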
I am currently trying to replace my Windows server with a CentOS 5.3 box running nfsd for file serving. I have it all up and running; however, I can't see any way of securing user access rights to the shares, since all you need to access them is to clone the user ID of an authorized user onto any Linux system, which seems a bit insecure to me. I was wondering if there is any advice on securing access to server shares in CentOS.
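Plain NFSv3 really does trust whatever UID the client presents, so the usual first steps are to export only to known hosts and to squash root; a sketch of an /etc/exports line under those assumptions (the path and subnet are made up):

Code:
# /etc/exports -- restrict to known clients and never trust a remote root
/srv/share  192.168.1.0/24(rw,sync,root_squash)

That only guards against a remote root user, though, not against a cloned UID; for real per-user authentication the usual options are Kerberised NFS or moving those shares to Samba with user-level security.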
I was working on a shell script to change the permissions of large directories and subdirectories because of an exploit discovered in the programs that run in those directories, which allows a client to upload and download files to the server. Lo and behold, I accidentally added a space and ended up running something along the lines of "chmod -R 770 ." on / while logged in as root.
Yes, it was an incredibly noob move on my part, but nothing ventured, nothing gained, and I am surprisingly calm about this. I tried sliding in my CentOS installation disk and "upgrading" CentOS, but that only made it worse; beforehand I had made everything owned by root so I could at least log into GNOME. That doesn't really work, for obvious reasons, namely having to change the permissions back for every user and every group, which is far beyond feasible.
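Before writing the install off completely: on an RPM-based system a large part of the original ownership and modes can be pulled back out of the RPM database. It won't cover /home or anything created outside of packages, but it often gets the system usable again. A sketch:

Code:
# reset owner/group first (chown clears setuid bits), then the modes
for pkg in $(rpm -qa); do
    rpm --setugids "$pkg"
    rpm --setperms "$pkg"
done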
Don't ask me why, but I need to back up a website with its complete structure to a Windows machine (so no tar/gzip - just an identical copy). I'm experienced with rsync, so I thought to do it that way. However, in the process I'm bound to lose my ownership/permission settings for each file, and that will cause problems when placing certain files back. Is there a way to either:
1. save those settings on the Windows machine?
2. have an easy way to save the file tree with the relevant information, plus a shell script to re-apply that information when uploading the files again? (One approach is sketched below.)
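A sketch of option 2, assuming getfacl/setfacl are installed and the site lives under /var/www/site (the path is an example): dump the ownership and permission metadata into a file that travels with the backup, then replay it after the files come back.

Code:
# before backing up: record owner, group and mode of every file
cd /var/www/site
getfacl -R . > permissions.acl

# ...rsync the tree (including permissions.acl) to the Windows box...

# after copying the tree back onto the Linux server:
cd /var/www/site
setfacl --restore=permissions.acl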
I encountered a dependency issue when trying to install Webmin on Ubuntu Server Edition 10.04 Beta 1.
When you try to install webmin, libmd5-perl is not available in any of the lucid repositories:
I resolved the dependency problem by adding the following repository to my /etc/apt/sources.list: deb [url]
Then I did a sudo apt-get update, then sudo apt-get install, and libmd5-perl installed fine along with Webmin. BTW, I got a GPG error when doing apt-get update because I did not import the public key for the Debian repos I used to get libmd5-perl; that doesn't matter to me, as I commented out the repos once libmd5-perl was installed.
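If anyone would rather silence the GPG warning than comment the repo out, importing the repository's signing key is usually enough; a sketch (the key ID is a placeholder, not the real one):

Code:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEYID
sudo apt-get update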
I've installed Webmin on many servers under my control and decided to automate the job. Here is a shell script to automatically add the Webmin repo to yum and install it. You can use "nano -w webmin_install.sh" to create the script somewhere, copy the source into it, save, allow execution with "chmod u+x webmin_install.sh", and then run it with "./webmin_install.sh".
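The source isn't reproduced here, but a minimal version of such a script (using the repo location and signing key published in Webmin's own install instructions; double-check them before trusting this sketch) looks something like:

Code:
#!/bin/bash
# webmin_install.sh -- add the Webmin yum repo and install the package

cat > /etc/yum.repos.d/webmin.repo <<'EOF'
[Webmin]
name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
enabled=1
gpgcheck=1
EOF

# import the package signing key, then install
rpm --import http://www.webmin.com/jcameron-key.asc
yum -y install webmin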
I want to install some software which needs PHP 5.2 to run, but I currently run PHP 5.1. I have been going through http://wiki.centos.org/HowTos/PHP_5.1_To_5.2, but when I get to the bit that says
Problem: permissions for rsync and BackInTime. Setup: Ubuntu 11.04, two internal HDs; #1 = main, single boot, #2 = backup drive. Question: How do I set up my 2nd HD with the correct permissions? Background: I previously had a dual boot of XP + 10.04 with the 2nd HD formatted as NTFS. With that I was able to use rsync and BackInTime against the 2nd HD with no issue. My new setup is EXT4 on both HDs.
(I even tried to reformat my 2nd HD as NTFS, but that didn't fix the issue.) I followed [URL] to mount the 2nd HD and set permissions. But now when I run BackInTime I get this error: [E] Error: rsync: opendir "/home/myhome/.ssh" failed: Permission denied (13). I did my requisite reading for a newbie, and am stuck. I ran BackInTime as root and it backed up OK. How do I run my user version of BackInTime? (i.e. how do I fix the permission issue?)
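If the EXT4 drive was formatted and mounted as root, the mount point itself is probably still owned by root, which is enough to break a per-user BackInTime profile; one thing worth checking (a sketch, assuming the drive is mounted at /media/backup) is:

Code:
# who owns the backup mount point?
ls -ld /media/backup

# hand it to your own user so the non-root BackInTime profile can write there
sudo chown -R $USER:$USER /media/backup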
A tool called ktune has appeared in the repository; its purpose is to adjust some sysctl.conf settings to improve performance on heavily loaded servers. What is this tool for if one can achieve the same with a configuration file added to system startup? Or is ktune just such a file?
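As far as I can tell, ktune is essentially that idea packaged up: a set of tuning values shipped as config files plus an init script that applies them, so it can be enabled, disabled and updated like any other service. A quick look:

Code:
# ktune is just an init service; enable it like any other
chkconfig ktune on
service ktune start

# list the files it ships (its tuning profile and defaults live there)
rpm -ql ktune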
I installed CentOS 5.4 on a virtual server to host OpenVAS, a tool used to run vulnerability assessments. This tool is based on a master engine and thousands of plugins, each doing a specific, small job. I wrote a cron job that, every Sunday, updates the list of these plugins to keep the system current; this cron job uses one of the tool's own functions, which downloads the new plugins from a site using rsync. I was using rsync 2.6.8-3.1 until last week, when I decided to install ALL the updates proposed by the operating system itself (more than 170), and the list included rsync 3.0.7-1.el5.rf (from rpmforge).
This Monday, the log of the tool's function reported that it was impossible to establish a connection with the remote server due to a connection timeout. Before rolling the system back, I tried to understand what had happened, so I started the function manually in a terminal and watched the results with netstat -tuvnap. I saw that there was a connection attempt between my server and the remote one, but it never completed the three-way handshake; the connection state was stuck at SYN_SENT. I tried setting the $RSYNC_PROXY variable, because there is a proxy between my server and the outside world, but the result was exactly the same, so I was forced to roll the system back, and it works very well now!
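For reference, this is how I set the variable for the test (host, port and paths are placeholders; rsync only honours it for rsync:// URLs, and the proxy must allow CONNECT to port 873):

Code:
export RSYNC_PROXY=proxy.example.com:3128
rsync -av rsync://feed.example.org/plugins/ /var/lib/openvas/plugins/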
I'm making my own yum repository - firstly so that all the machines I administer can be updated via the internal network, secondly so that I can test any updates on a spare machine before passing them on, and thirdly so that I can add my own repo for internal software.
I've created the necessary folders under my webserver, and used rsync to update them from my local CentOS mirror, following the instructions at [URL]
I notice it says to run "createrepo" on the base repository, which is created by copying the RPMs from the release DVD.
When I rsynced the updates repo, I noticed that the files in repodata are very small. In fact, looking inside them, filelists.xml contains no file details. But if I run createrepo in the updates directory, filelists.xml gets lots of file details.
I wondered if maybe the local mirror hadn't been updated properly, but checking against mirror.centos.org shows that it has the same files.
So how does the (real, live CentOS) updates repo work when there is nothing in the filelists?
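For what it's worth, the only step that seems to matter for repos you build yourself is re-running createrepo after each sync; yum only reads whatever repomd.xml points at, which on the upstream mirrors is the compressed/sqlite metadata rather than a bare filelists.xml. A sketch (the path is an example):

Code:
# rebuild the metadata so yum clients see the current package set
createrepo /var/www/html/repo/centos/5/local/x86_64/

# on the clients, after the repo changes:
yum clean metadata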
I'm looking for some way to keep track of all the users logging in to my server (CentOS 5). What I mean is this: at our firm we outsource some of our work (programming), and all of the development is done on our servers. What I'd like to find is a way to take every user's logon time and display it by day/week/month, so I could see how much time everyone has put in. Another thing I'm looking for is a way to monitor an ongoing session and record user activity; I've seen ObserveIT, but it doesn't support Linux agents as of today.
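The stock process-accounting tools already cover the "hours per user per day" part, even if they're nothing like ObserveIT; a sketch assuming the psacct package (the names are the CentOS 5 ones):

Code:
yum -y install psacct
chkconfig psacct on && service psacct start

# total login hours per user, broken down by day
ac -d -p

# raw login history, if you want to slice it yourself
last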
My question is about setting up SMB and AFP failover between two servers. The plan is to have two servers, both running CentOS, with one acting as the primary node and one as the secondary failover node. I have never set anything like this up before. In the past I have always worked with SANs, primarily Xsan/StorNext, both of which handle failover pretty much automatically. Unfortunately there isn't the budget on this job to install a SAN. Also, this is only for temporary use, for a week, in a production office.
My thought was to run the two servers and use rsync from a cron job to keep the data synchronised between the two. In an ideal world, clients would log on to the primary and, if that fails, be seamlessly moved over to the secondary. I'm guessing, however, that this is not possible outside of a SAN environment. So keeping the two servers synced and having the clients move over to the secondary manually is, I'm guessing, my only real option.
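For the "keep them in sync" half, a crontab sketch (host name, path and interval are assumptions, and it presumes passwordless SSH keys between the nodes; it's one-way, so clients shouldn't write to the secondary while the primary is up):

Code:
# on the secondary node's root crontab: pull from the primary every 10 minutes
*/10 * * * * rsync -a --delete root@primary:/srv/shares/ /srv/shares/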
I believe rsync was the tool. I have a box running CentOS 5 with a 250GB IDE hard drive in it, and I have another drive of the exact same model. I want to shut down the machine, install this second hard drive, and set up a cron job that will back up my main drive at times I set. That way, if the main drive fails, I don't lose all the data and have to rebuild the server from scratch, as I have been custom-configuring it for years. I can't remember if there was an issue with the main drive being mounted while backing it up. The rsync how-tos I have looked at only seem to talk about using another server for this. If I shut down the box, install the new drive, and the box boots back up, is it going to ask about that drive, or what do I need to do to get rsync going, and does it partition the drive as such? Can I do it this way? Then, if the main drive fails, I can just swap the drives and be on my merry way.
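rsync itself won't partition anything, so the second drive needs to be partitioned, formatted and mounted once, after which a cron entry keeps it in sync; a sketch assuming the new disk shows up as /dev/hdb and gets mounted at /backup (a mounted, in-use source drive is fine for rsync):

Code:
# one-time setup (destructive to /dev/hdb -- double-check the device name!)
fdisk /dev/hdb                      # create a single partition, hdb1
mkfs.ext3 /dev/hdb1
mkdir -p /backup
echo '/dev/hdb1 /backup ext3 defaults 0 2' >> /etc/fstab
mount /backup

# nightly mirror at 02:30 (add with: crontab -e)
30 2 * * * rsync -a --delete --exclude=/backup --exclude=/proc --exclude=/sys --exclude=/dev / /backup/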