I'm trying to secure the CentOS servers on our company network as the current situation is, shall we say, less-than-ideal: remote root logins with the same password across several servers (behind a firewall, on non-standard ports, but still) and several key processes running as root. My proposal to amend this consists of the following:
- set up an SSH gateway that is as bare as possible, with only normal user accounts, to handle remote access
- disable root login from anywhere but the local console, and create dedicated accounts with root permissions for our ~4 system administrators (e.g. admin.foo, admin.bar) that can only log in from inside the company network, using SSH keys.
So far my biggest obstacle seems to be creating the administrative users. How do I go about that? When I simply create a user adminfoo with uid=0, it shows up in my shell as root, which makes it useless as a way to hold our admins accountable for their actions. By the way, my initial proposal to use sudo unfortunately met with strong resistance, because it supposedly compromises usability.
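For what it's worth, a sketch of how this is often done (account names here are examples): useradd's -o flag allows a second uid-0 account with its own name, home directory, and authorized_keys. The reason adminfoo displays as root is that tools mapping uid back to a name (ls, ps, the shell prompt) return the first uid-0 entry in /etc/passwd; the login name used is still recorded in the auth logs, which is what matters for accountability.

```shell
# Create a second uid-0 account with its own name, home, and keys.
# (-o permits a non-unique uid; names/paths are examples.)
useradd -o -u 0 -g 0 -d /home/admin.foo -m -s /bin/bash admin.foo
passwd admin.foo

# Tools that map uid 0 to a name will usually still display "root"
# (first matching /etc/passwd entry wins), but SSH logins are logged
# under the name actually used:
grep 'Accepted publickey for admin.foo' /var/log/secure
```

The display-name quirk is cosmetic; the audit trail in /var/log/secure is per login name, which is the accountability you are after.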
We have a remote Linux server whose /var/log/secure file is completely filled with unauthorized SSH login attempts. Of course the attackers cannot log in successfully, but they keep making continuous SSH requests, which sometimes results in the server going down. How can we secure our server against these SSH attempts? I know that blocking the unauthorized IP addresses can solve this, and we can also change the SSH port number, but what are the other possible ways of solving this?
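Automating the IP blocking is the usual next step. A hedged sketch using fail2ban, which tails the auth log and inserts temporary firewall bans after repeated failures (thresholds below are examples, and the jail section name varies across fail2ban versions; older releases ship a jail named ssh-iptables instead of sshd):

```shell
# Install fail2ban (on CentOS it comes from the EPEL repository).
yum install fail2ban

# Minimal jail; on CentOS/RHEL sshd logs to /var/log/secure.
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
port     = ssh
logpath  = /var/log/secure
maxretry = 5
findtime = 600
bantime  = 3600
EOF

service fail2ban restart
```

Disabling password authentication entirely (keys only) in sshd_config removes most of the incentive for these scans as well.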
I'm an Oracle DBA and started working for my current employer about 4 months ago. This past weekend an alert about filesystem space brought my attention to /var/spool/clientmqueue (full of mail about cron jobs) and the fact that sendmail is not running on our Linux servers. I'm told that the IT security team deemed sendmail too vulnerable, so we don't run it. Aside from the filesystem filling up and missed notifications about crontab entries, I'm concerned that we may be missing notification of other potential issues. In other Unix/Linux environments I've seen emails from the print daemon when it experienced problems with specific jobs.
Are there other Linux facilities, aside from cron and lpd, that use email to advise users of possible issues? Are there ways to secure sendmail, or secure alternatives to it? My primary need is to make sure that emails regarding issues on the server get to the appropriate users; a secondary goal would be the ability to use mailx to send mail out. There is no need or desire to receive mail from outside.
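A common compromise for the "send only, never receive" requirement is to run the MTA as a null client: it listens on loopback only (so nothing outside can attack it) and forwards everything to a central relay. With sendmail itself that is mostly two lines in sendmail.mc; a sketch, where the relay host name is an example:

```shell
# /etc/mail/sendmail.mc fragment (null-client sketch; relay name is an example):
#   DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl   <- loopback only
#   define(`SMART_HOST', `mailrelay.example.com')dnl          <- forward all mail

# Rebuild the config and restart:
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf && service sendmail restart

# Verify the delivery path cron/lpd notifications will use:
echo "test body" | mailx -s "test" root
```

With only loopback exposed, the attack surface the security team objected to is largely gone; Postfix configured the same way is another frequently suggested alternative.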
I've written a shell script that, among other things, restarts network services. As such, I'd like to keep those who are remoting in via PuTTY, etc., from executing the script. Is there a way to detect this, and to restrict running the script (by adding additional code to the script itself) unless you are logged in directly at the machine? It's written in bash.
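One common heuristic is to check the SSH_* environment variables that sshd sets for every session it spawns. A sketch; note this is a speed bump rather than a security boundary, since a user can unset those variables:

```shell
#!/bin/bash
# Heuristic guard: refuse to run the risky part over SSH.
# sshd exports these variables into every session it creates.
is_remote() {
    [ -n "$SSH_CONNECTION" ] || [ -n "$SSH_TTY" ] || [ -n "$SSH_CLIENT" ]
}

main() {
    if is_remote; then
        echo "refusing: run this script from the local console" >&2
        return 1
    fi
    echo "local session detected; restarting network services..."
    # service network restart   # <- the real work would go here
}
```

Call `main "$@"` at the bottom of your script. For something closer to enforcement, checking that `tty` reports a /dev/ttyN console, or gating the restart behind a local-only sudo rule, is more robust.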
I would like to detect every login on my server: not only SSH logins (virtual terminals) but also physical logins. One option is to use Nagios or a script to watch log files, but I would like to catch that information one step earlier. I thought about watching /dev/pts for changes, but that is no different from log watching, and not everything appears in /dev/pts anyway; an SSH tunnel (ssh -N user@server) is only visible in the logs, because SSH tunnels do not open terminals. I would like to be able to catch these at login time.
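The hook "one step before" the logs is PAM itself: as far as I know, every login that passes through PAM opens a PAM session, including console logins and (with UsePAM yes in sshd_config) SSH sessions without a tty such as ssh -N. pam_exec can run a script at exactly that moment. A sketch; the script path is my invention:

```shell
# Append to /etc/pam.d/sshd and /etc/pam.d/login, after the existing
# session lines:
#   session optional pam_exec.so /usr/local/sbin/login-notify.sh

# /usr/local/sbin/login-notify.sh (mark executable) -- PAM exports
# PAM_USER, PAM_RHOST, PAM_SERVICE, and PAM_TYPE to the script:
#!/bin/sh
if [ "$PAM_TYPE" = "open_session" ]; then
    echo "login: user=$PAM_USER from=$PAM_RHOST service=$PAM_SERVICE" \
        | mail -s "login on $(hostname)" root
fi
```

Using `optional` keeps a broken notify script from locking everyone out; test on a second open session before relying on it.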
Is there a way to lock out logins at the console? I ask because I cannot log in at the console but can still log in to the system remotely via SSH. I'm guessing I blindly implemented a security option without knowing what I was doing.
How do I monitor who is SSHing into a box (SLES), as well as failed attempts? How can I log their IP addresses, even if they're not in DNS? In /var/log/messages I see their hostname but no IP address.
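Two things usually help here. sshd records the client address on every attempt, but tools that read utmp/wtmp (last, who) show the resolved hostname when reverse DNS succeeds; setting `UseDNS no` in /etc/ssh/sshd_config keeps raw IPs in those records. You can also pull the addresses straight out of the existing log lines, since the "from" field carries them; a sketch (log path is an example):

```shell
# Count SSH failures per source address from the log
# (on SLES sshd typically logs to /var/log/messages).
failed_sources() {
    awk '/sshd/ && /Failed password/ {
        for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)
    }' "$@" | sort | uniq -c | sort -rn
}

# Usage: failed_sources /var/log/messages
# "lastb" (which reads /var/log/btmp) gives another view of failed logins.
```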
I have been trying to get pam_tally2 to block failed SSH logins. No matter how many failed attempts I make, I can still log in with the correct password over SSH. Has anyone got this working?
Here is the configuration I am using; I have put it in both sshd and password-auth-ac.
auth     required pam_tally2.so deny=3 file=/var/log/tallylog lock_time=180 unlock_time=1200 magic_root
account  required pam_tally2.so magic_root

In /var/log/secure I do see messages related to pam_tally2 and the counter going up.
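For what it's worth, a sketch of the gotchas most often reported with this setup; treat them as hypotheses to check rather than a diagnosis. The pam_tally2 auth line has to come before pam_unix in the stack, the account line belongs in the account section, and sshd only consults PAM for authentication when UsePAM is on:

```shell
# /etc/pam.d/password-auth-ac fragment (order matters; values are examples):
#   auth     required  pam_tally2.so deny=3 unlock_time=1200 file=/var/log/tallylog
#   auth     ...       (the existing pam_unix.so line comes after)
#   account  required  pam_tally2.so
#   account  ...       (existing account lines)

# /etc/ssh/sshd_config must hand auth decisions to PAM:
#   UsePAM yes
#   ChallengeResponseAuthentication yes

# Inspect and reset the counter while testing:
pam_tally2 --user someuser
pam_tally2 --user someuser --reset
```

Since your counter is incrementing but logins still succeed, the account-phase line (which is what actually denies the login once the tally exceeds deny=3) is the first thing I'd verify.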
Failed login attempts are logged to syslog with the user ID or login ID set to UNKNOWN_USER or UNSET. Does anybody know if this is configurable? I would rather it just pass the actual ID the user supplied. It doesn't matter whether the ID exists or not; I just want to know if someone is guessing at user names, and what those user names are.
I was wondering what the best way to secure remote desktop access would be. What's the best tool to use? I'd prefer a method that isn't always active, so maybe something that I need to enable via SSH first?
I have recently set up a RHEL 5.3 server, primarily to be used as an Apache web server. I now also have a requirement for this server to service SFTP requests for uploading/downloading files.
1. By default RHEL 5.3 allows SFTP (over TCP port 22). However, while researching SFTP site setup I've come across the fact that Red Hat recommends using vsftpd. If I configure vsftpd, what happens to the default SFTP service and to the ability to remotely SSH into the server with something like PuTTY? I'm really trying to work out whether SFTP or vsftpd is best. Also, is vsftpd as secure as, or more secure than, FTP over SSH?
2. I've set aside a separate disk partition (kept away from the system partition to help lock down security) for the SFTP site, and I want to use it as the default SFTP root directory structure. How can this be achieved?
3. My requirements dictate three separate directories, each with its own associated SFTP user. Each user may only read/write its own directory structure and must not be able to navigate out of it. There will also be an SFTP super user able to navigate through each of the three directory structures, but not out of its own home directory. Can this be done, and if so, how?
There will be no SSL certificates in play at the moment; I'm more concerned about getting things set up and working correctly first. However, there may be a requirement to use them later. The site will be accessed over the Internet from the start, hence I'm looking to make it as secure as possible while getting it up and running quickly.
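For what it's worth, requirements 1-3 can all be met with OpenSSH's own SFTP subsystem, with no vsftpd needed. A sketch, assuming a group named sftpusers and the dedicated partition mounted at /sftp (both names are my inventions); one caveat is that ChrootDirectory and Match need a reasonably recent OpenSSH, and the stock RHEL 5.3 package may be too old, so check your version first:

```shell
# /etc/ssh/sshd_config fragment.
# ChrootDirectory requires the chroot path and all its parents to be
# root-owned and not group/world-writable; users write inside a subdir.
#   Subsystem sftp internal-sftp
#   Match Group sftpusers
#       ChrootDirectory /sftp/%u
#       ForceCommand internal-sftp
#       AllowTcpForwarding no

# Per-user layout (repeat for each of the three users):
mkdir -p /sftp/user1/upload
chown root:root /sftp/user1
chown user1:sftpusers /sftp/user1/upload
```

The "super user" can get a second Match block chrooting it to /sftp itself, with read access into each user's tree via group permissions. Normal SSH logins for everyone else are unaffected, since only members of sftpusers hit the Match block.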
I seem to be missing a secure.log or security.log file. I have Ubuntu 10.04 and can't find this file; I looked in /var/log and ran a search command, to no avail. Does anyone know where this file is, or whether it's called something else? I'm looking for a file that logs any change to the security settings of the system.
They are running Kubuntu. How can I access their desktop from my home or office over the Internet? Naturally I thought of krfb and x11vnc, but both of them need some way to provide security. I'd appreciate some advice on choosing the simplest and best approach:
To secure krfb or x11vnc, is it simpler or better to set up a VPN or to use an SSH tunnel? Is there any other solution? My parents' ISP uses DHCP, so I think it would require some service like DynDNS or similar...
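Of the two options, the SSH tunnel is usually the simpler: run the VNC server bound to loopback only, so it is never exposed to the Internet, and forward a local port over SSH. A sketch (the dynamic-DNS host name stands in for whatever name you register):

```shell
# On the parents' machine: share the running desktop, loopback only.
x11vnc -display :0 -localhost -rfbport 5900

# From your machine: tunnel local port 5901 to their loopback port 5900.
ssh -L 5901:localhost:5900 user@parents.dyndns.example.org

# Then point a VNC viewer at the local end of the tunnel:
vncviewer localhost:5901
```

Only port 22 needs forwarding on their router, and the VNC traffic rides inside the encrypted SSH connection; a VPN achieves the same but is more moving parts for a single service.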
For a secure remote desktop on Ubuntu 9.10, here is how I did it using OpenSSH, FreeNX, and a router running DD-WRT v24. There is a picture of it in use at the bottom of the post, transferring a file and running a remote desktop session at the same time.
For the purposes of this guide I will use a desktop as the server (host), which is at home. The client will be a laptop that I can use to control the desktop remotely. First, you should already be familiar with the terminal, which is where you enter commands (anything in a "Code:" box).
In Ubuntu it is under Applications > Accessories > Terminal; in Kubuntu it is usually on the lower-left taskbar and is called Konsole. I am using Ubuntu, so you may have to make some adjustments to this guide if you are not. Installing OpenSSH (for the rest of this guide I will refer to it simply as SSH):
I need to securely control my home computer from work. My home computer is connected to the Internet via a wireless router running Tomato, and I'm using the latest Ubuntu on both ends. Is the Remote Desktop that's installed by default good enough? What settings do I need on my home router to safely allow my remote connections without leaving it open to intruders?
I'm running Ubuntu Server 9.10 and I'm looking to set up an FTP server. I have SSH running beautifully, and it's accessible from any computer, whether inside the network or coming in from the Internet (provided you have the administrator username and password). I've tried ProFTPD and vsftpd and have failed miserably so far. Which FTP server application do you think I should go with, and how could I go about setting it up through my SSH connection?
My current setup is this:
- Ubuntu Server 9.10 with a fixed IP of 192.168.1.100
- 500GB hard drive:
  - SDA1 = 512MB ext2 /boot
  - SDA2 = 2GB swap
  - SDA3 = 20GB ext4 /
  - SDA5 = 438GB ext4 /home
- One user (username: administrator)
- Full SSH capabilities
- IP address mapped to DNS by www.dyndns.org
- WRT120N router with remote access and port 22 open
I basically want to set up a secure FTP server that anyone on the internal network can access, as well as anyone from the Internet (as long as they have a username and password). I want to set up a username and password for each user, so that they all have read/write access to the same folder in my /home partition (I'll call it FTPSHARE).
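A sketch of one way to get that shared folder with vsftpd (directory, group, and user names are examples; and since this crosses the Internet, note that classic FTP sends passwords in clear text, so FTPS or plain SFTP is worth considering):

```shell
# Shared folder owned by a common group; every FTP user joins that group.
mkdir -p /home/FTPSHARE
groupadd ftpusers
chgrp ftpusers /home/FTPSHARE
chmod 2775 /home/FTPSHARE        # setgid bit: new files inherit the group

# One account per person, all homed in the shared folder:
useradd -M -G ftpusers -d /home/FTPSHARE -s /usr/sbin/nologin alice
passwd alice

# /etc/vsftpd.conf fragment:
#   listen=YES
#   anonymous_enable=NO
#   local_enable=YES
#   write_enable=YES
#   chroot_local_user=YES        # keep users inside their home (FTPSHARE)
#   local_umask=002              # uploads come out group-writable
```

The setgid bit plus the umask is what lets every user read and write each other's uploads in the one folder.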
I'm tasked with creating a base image of Ubuntu (one for server, one for workstation) that is locked down and has all the fluff taken out (naturally the workstation will have more fluff left in than the server). The task list looks about like this:
1. Create a list of "allowed" deb packages, and write a script to list/uninstall everything else.
2. Hook the logins into either enterprise Kerberos or Active Directory (yuck).
3. Write scripts to check things like setuid/setgid files, disabling su, checking sudo permissions, configuring iptables, etc.
4. Use a scanner to scan the system from outside (I was thinking of using BackTrack).
5. Custom-compile the kernel to strip out all the unneeded modules.
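Step 1 above can be sketched in a few lines: dump the installed package names, diff against a sorted allowlist, and review (then purge) the remainder. The helper below is generic; the file names are placeholders:

```shell
# extra_packages ALLOWED INSTALLED
# Prints installed package names that are not on the allowlist.
extra_packages() {
    comm -23 <(sort -u "$2") <(sort -u "$1")
}

# Typical use on the image being trimmed:
#   dpkg-query -W -f='${Package}\n' > installed.txt
#   extra_packages allowed.txt installed.txt          # review first!
#   extra_packages allowed.txt installed.txt | xargs apt-get -y purge
```

Running the review step by hand at least once matters: a too-short allowlist will happily queue essential packages for removal.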
Before embarking on this awesome task, I figured I'd check with you guys to see if you know of some resources that would make it easier/quicker. I'm sure someone out there has already headed down this path.
PS: My boss *loves* Ubuntu and isn't too keen on going with a Debian-based (or other) distro that is already "security-trimmed" without some serious convincing. I'm sure there are some out there, and if you want to pass along a couple for consideration, I'll check them out, but no guarantees he'll let me use one.
I'm confused about the sendmail/SSL combination. So confused, I'm not even sure what I'm confused about. :) I want to have email sent from our server to the rest of the world in a 'secure' manner. I just downloaded and installed CentOS 5.4: Linux rh5 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:03:03 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux. The /etc/mail/sendmail.mc file has the instructions
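For outbound mail, "secure" usually means STARTTLS: once a certificate is configured, sendmail opportunistically encrypts the connection to any receiving server that advertises support. The stock CentOS sendmail.mc already carries commented TLS lines much like this sketch (the certificate paths are examples):

```shell
# /etc/mail/sendmail.mc fragment to enable STARTTLS (paths are examples):
#   define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
#   define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
#   define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
#   define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl

# Regenerate sendmail.cf and restart:
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
service sendmail restart

# Verify TLS is offered (look for STARTTLS in the EHLO response):
# openssl s_client -starttls smtp -connect localhost:25
```

One caveat to keep expectations straight: STARTTLS only protects the hop to the next server, and only when the far end supports it; it is not end-to-end encryption of the message itself.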
I would like to set up remote desktop access to my Ubuntu computer so I can use it from a Windows computer that is on a different network. How can I do this?
Does anyone know how to go about setting up a secure IMAP email server that can be accessed from outside the network? Similar to how you can access your Google email account from your computer using Thunderbird.
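A hedged sketch of the usual stack for that: any MTA for delivery, plus Dovecot serving IMAP over SSL on port 993, with plaintext logins refused on unencrypted connections. The config keys below are Dovecot 2.x's; the certificate paths and host name are examples:

```shell
# /etc/dovecot/dovecot.conf (or the conf.d/ fragments in Dovecot 2.x):
#   protocols = imap
#   ssl = required
#   ssl_cert = </etc/ssl/certs/mail.example.com.pem
#   ssl_key  = </etc/ssl/private/mail.example.com.key
#   disable_plaintext_auth = yes

# Open only the IMAPS port on the firewall/router, then point
# Thunderbird at mail.example.com, port 993, SSL/TLS.
```

Exposing only 993 (and never plain 143) keeps credentials off the wire, which is the main thing that makes outside access to the mailbox safe.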
Please guide me on how to perform the activities below:
- Make sure the guest account is disabled or deleted.
- Disable or delete anonymous access.
- Set stronger user-ID policies.
- Case-sensitive user IDs (enabled by default in Linux).
- Require passwords with a combination of numbers, letters, and special characters (*, !, #, $, etc.).
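As a sketch covering the account and password-complexity items (module names vary by distro and age: older systems ship pam_cracklib, newer ones pam_pwquality; all values below are examples):

```shell
# Lock the guest account if one exists:
passwd -l guest

# /etc/pam.d/common-password (Debian) or system-auth (RHEL) fragment;
# negative credits force at least one upper, lower, digit, and other char:
#   password requisite pam_cracklib.so retry=3 minlen=10 \
#       ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1

# /etc/login.defs fragment for password aging:
#   PASS_MAX_DAYS 90
#   PASS_MIN_DAYS 1
#   PASS_WARN_AGE 7
```

User names on Linux are case-sensitive out of the box, so that item needs no configuration; anonymous access is a per-service setting (e.g. anonymous_enable=NO in vsftpd).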
I set up my Ubuntu server with iptables rules that allow only SSH in the INPUT chain (and, of course, established connections), with only my laptop's MAC address allowed to connect. I set up a key with a long passphrase and installed the pam_abl plugin. ICMP echo is blocked by default.
The only problem is that I log all other attempts to connect to the server, and I see a lot of traffic going to ports 445 and 5900.
My question is: is there any chance these attempts could succeed, and is there any way to secure this server further?
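On the specific question: if the INPUT chain drops everything except SSH, the traffic aimed at 445 and 5900 is being dropped at the firewall and cannot succeed; it is ordinary background Internet scanning, and logging it is optional noise. One caveat worth knowing: the MAC match only works on the local network segment (a remote host's hardware address never survives routing), so the key and the port filter are what actually protect you from the Internet. A sketch of further tightening:

```shell
# /etc/ssh/sshd_config: key-only, no root (restart sshd after editing):
#   PasswordAuthentication no
#   PermitRootLogin no
#   AllowUsers youruser

# Rate-limit new SSH connections instead of (or as well as) logging scans:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```

The recent-module pair throttles brute-force attempts at the packet level, complementing what pam_abl does at the auth level.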