I need to allow a user to tunnel an SSH session but disallow them a bash shell. # chsh -s /sbin/nologin {username} won't cut it...? Would permissions be the way to go? Set up a group and add the user to that group? Or add all other users to that group... I'm confused
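One approach often suggested for this is to keep the forwarding but refuse a usable session in sshd_config; the account name below is a placeholder:

```
# /etc/ssh/sshd_config — "tunneluser" is a hypothetical account name
Match User tunneluser
    AllowTcpForwarding yes
    X11Forwarding no
    AllowAgentForwarding no
    PermitTTY no
    ForceCommand /usr/sbin/nologin
```

The user then connects with `ssh -N` (in PuTTY, "Don't start a shell or command"), which sets up the tunnel without ever requesting a shell.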
I am trying to put the SSH remote-forwarding command in a shell script. I should be able to do 2 tasks, but I can't get it going. 1) I have 3 servers: Server 1, Server 2, Server 3. My database runs on Server 1 and my script runs on Server 2, which should be able to do port forwarding from Server 1 to Server 3. So, for example, on Server 2: ssh -i $ssh_key -R 9000:Server1:3333 root@Server2.
I need to be able to stick this in a shell script, something like: getTunnel() {
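A minimal sketch of such a function; the key path, host names and ports are placeholders to be replaced with the real ones:

```shell
#!/bin/sh
ssh_key="$HOME/.ssh/id_rsa"   # placeholder key path

getTunnel() {
    # -f: go to background after authentication
    # -N: no remote command, forwarding only
    # -R: on the remote side, listen on 9000 and forward back to Server1:3333
    ssh -i "$ssh_key" -f -N \
        -o ExitOnForwardFailure=yes \
        -R 9000:Server1:3333 root@Server2
}
```

ExitOnForwardFailure makes the script fail loudly if the remote port is already taken, instead of backgrounding a useless ssh.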
I have an Ubuntu 11.04 instance running on Amazon EC2. I am currently using it as an SSH tunnel/SOCKS proxy. Most of my Net activity is on a Windows 7 machine running PuTTY. This setup is working very well. So well that a few of my friends have expressed interest in accessing it. Question is, how do I share this proxy, without giving away my private key and root access? I would like to limit users to only being able to set up an SSH tunnel/SOCKS proxy, with no shell access. What other security measures would you recommend for such a setup? I googled a bit and saw references to rbash and chroot. I have already changed the SSH port, and set the EC2 firewall to allow inbound SSH only from my ISP's address range. My friends use the same ISP. They would probably be running Windows 7/Vista, and PuTTY too.
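A common pattern for exactly this setup is a dedicated group with its own sshd_config Match block, so each friend gets a separate key and account but no shell; the group name is an assumption:

```
# /etc/ssh/sshd_config — "proxyusers" is a hypothetical group
Match Group proxyusers
    AllowTcpForwarding yes
    PermitTTY no
    X11Forwarding no
    AllowAgentForwarding no
    ForceCommand /usr/sbin/nologin
```

In PuTTY they would enable a dynamic (SOCKS) forward and tick "Don't start a shell or command" under Connection > SSH.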
I don't know if this is possible... I want only some users of a Windows domain (Samba) to be able to log in on a machine. For example: the user Peter of the domain WORKSPACE can connect to PC1, but the user Charly of the domain WORKSPACE cannot connect to PC1. How can I implement this?
I'm using PostgreSQL 8.4.2-2. I'm trying to remote into my server securely. I figured I could do so with ssh — apparently I figured correctly, as per [URL] and [URL]. I set up the ssh tunnel: ssh -L 5432:serverip:5432 Then I set up pgadmin3 to connect as follows:
An error has occurred: Error connecting to the server: server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request.
I'm not sure what the problem is. I can connect with psql from the CLI after connecting to the server via ssh, so I know that I'm using the correct password.
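One likely culprit here is the tunnel colliding with a local postgres already bound to 5432; a sketch that sidesteps it (host names are placeholders):

```
# Bind a different local port so a local postgres can't interfere:
ssh -N -L 5433:localhost:5432 user@serverip

# Then point pgadmin3 at host 127.0.0.1, port 5433.
```

Note the forward target is localhost:5432 as seen from the server, so postgresql.conf only needs listen_addresses = 'localhost' and pg_hba.conf an entry for local/127.0.0.1 connections.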
I'm working remotely at the minute, but have several 'incoming' automatic reverse shells connecting to a dedicated server. This dedicated server does not have X, but several of the 'incoming' shell servers do. Basically, take three machines, laptop, server, client. Laptop and client have X, server does not. All three machines have password-less logins to each other (laptop > server, server > client) and can password-lessly establish a shell.
I've tried ssh -X user@server "ssh -X user@client gui-application" and, no surprise, I'm getting "Cannot open Display" messages. Does anyone know a nice one-liner for this kind of tunnelling?
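One commonly used one-liner for this hop, assuming the middle server has X11Forwarding enabled in sshd_config and xauth installed even though it runs no X itself:

```
# -t gives the inner ssh a tty; each -X hop re-exports DISPLAY
ssh -X -t user@server ssh -X user@client gui-application
```

Without xauth on the middle server, its sshd cannot set up the forwarded DISPLAY for the inner hop, which produces exactly the "Cannot open Display" symptom.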
I'm trying to tunnel an SSH connection through another server. For the tunnel I ran: ssh -L 8112:yy.yy.yy.yy:22 -N user@xx.xx.xx.xx But when I try to ssh to localhost -p 8112 I get an immediate error saying "exited: remote closed the connection"
I am building up a site-to-site OpenVPN tunnel between two locations. I am setting this up in two CentOS 5.4 boxes each containing two NIC's. I can get the tunnel up and running, and I can ping across the tunnel, however, from the client end of the tunnel I can not ping anything behind the server end of the tunnel. In other words, I can't ping anything on the server's LAN. On both servers, eth0 is the WAN side and eth1 is the LAN side.
OpenVPN server: eth1 - 10.10.202.2/24
OpenVPN client-server: eth1 - 192.168.204.1/24
I have IP forwarding enabled in the kernel on both machines. Code: [root@vpn01 openvpn]# cat /proc/sys/net/ipv4/ip_forward
[Code]...
I'm sure that the answer is right in front of me, but I can't seem to get it cleared up. I can't hit anything on the 192.168.1.0/24, 192.168.2.0/24, 10.10.4.0 or 10.10.202.0 networks from the client server.
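In this topology the usual missing pieces are an iroute for the client-side LAN and a pushed route for the server-side LAN; a sketch using the subnets from the post:

```
# server.conf on the OpenVPN server
route 192.168.204.0 255.255.255.0
client-config-dir ccd
push "route 10.10.202.0 255.255.255.0"

# ccd/<client-common-name>
iroute 192.168.204.0 255.255.255.0
```

Hosts on each LAN also need a return route (or NAT on the VPN boxes) pointing the other side's subnet at their local OpenVPN machine; otherwise replies go out their default gateway and are dropped.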
I currently have a GUI running on port 8000 on some of my remote servers. Unfortunately I do not control the firewall, so I cannot open that port to access it from here. Is there a way with an ssh tunnel to redirect it to another port so I can access it from here?
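Yes — as long as outbound ssh is allowed, a local forward does exactly this; host names are placeholders:

```
# Map the remote GUI's port 8000 to local port 8080
ssh -N -L 8080:localhost:8000 user@remote-server

# then browse http://localhost:8080
```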
Does anyone know the best and simplest way to do this? I'd like the share to be mounted over the tunnel on boot with as little scripting as possible and be as secure as possible without exposing more than one port to the outside. I will be trying this method: [URL]... once the tunnel is established and 'always on' NFS would take care of the file system mount obviously. Lots of the information I have been reading is not up to date it seems. Does anyone have any experience with this?
I use two Ubuntu machines, one at home and one at work. In order to connect to the machine at work from home I need to connect through a "tunnel server" that controls all the traffic to the machines at work. I am able to connect with ssh to the tunnel server and from the tunnel server ssh to my own machine at work. My question is how do I retrieve files from my work machine to the home machine? How do I sync folders between the machines using rsync when the "tunnel server" is in between?
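A sketch of one way to do this with a recent OpenSSH client (host names are placeholders); ProxyJump needs OpenSSH 7.3+, and older clients can use -o ProxyCommand='ssh user@tunnel-server -W %h:%p' instead:

```
# Pull a folder from the work machine, hopping via the tunnel server
rsync -av -e 'ssh -o ProxyJump=user@tunnel-server' \
    user@work-machine:/path/to/project/ ~/project/
```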
I'm currently tunnelling to my Ubuntu pc at home from my laptop in order to bypass my schools false-positive prone filter. Is there a way to record traffic that both comes to and is delivered by my pc?
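A minimal capture sketch on the home PC (the interface name is a placeholder; tcpdump needs root):

```
# everything except the ssh tunnel itself = the proxied traffic in clear
sudo tcpdump -i eth0 -w ~/school-traffic.pcap not port 22
```

Capturing the tunnel itself (port 22) would only record encrypted SSH; the cleartext requests exist on the home PC's outbound side, where the proxy exits.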
I am using ubuntu10.04-server 64-bit AMD with fluxbox. After I ran Matlab in a shell (without GUI), the shell no longer displays characters, but it will execute any command — I just can't see the characters that I'm typing. I use aterm and xterm. Does anybody know why that is? Am I missing a package?
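This is usually the program exiting with terminal echo left turned off rather than a missing package; a blind fix typed into the affected shell:

```
stty sane    # restore echo and other terminal modes
# or, more thorough:
reset
```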
1. Web server (CentOS 5.5)
2. Mail server (CentOS 5.5)
We have configured autossh successfully to create/manage the ssh tunnel into the mail server in order to dump all emails to a localhost port.
To auto-start autossh at boot time we have included the following in /etc/rc.d/rc.local:
Quote:
So whenever our web application wants to send out emails it dumps them all to the localhost:33465 port. Easy peasy, all working great.
Now we have a requirement that logwatch reports should get delivered via the same ssh tunnel rather than installing postfix and configuring as a relay.
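One way to do that without a local MTA is to give logwatch a sendmail-compatible mailer that speaks SMTP to the tunnel; msmtp here is an assumption, not something the post mentions:

```
# /etc/logwatch/conf/logwatch.conf
Output = mail
MailTo = admin@example.com
mailer = "/usr/bin/msmtp -t"

# /etc/msmtprc — deliver through the autossh tunnel
account default
host 127.0.0.1
port 33465
from webserver@example.com
```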
When I try to open a connection to start querying I get this message:
Cannot Connect to Database Server Cannot start SSH tunnel manager
1 Check that mysql is running on server 127.0.0.1
2 Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
3 Check that root has rights to connect to 127.0.0.1 from your address (mysql rights define which clients can connect to the server and from which machines)
4 Make sure you are both providing a password if needed and using the correct password for 127.0.0.1, connecting from the host address you're connecting from
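The first two of those checks can be done from a shell on the server, for example (sketch):

```
ss -ltn | grep 3306              # is anything listening on 3306?
mysqladmin -h 127.0.0.1 ping     # does mysqld answer over TCP?
```

If nothing is listening, look for skip-networking or a restrictive bind-address in my.cnf — with those set, the SSH tunnel reaches the box but mysqld refuses the TCP connection, which matches the "Cannot start SSH tunnel manager" symptom.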
What is the best way to disallow new ssh connections for the duration of my session ?
I want to avoid read/write collisions. Things work like this: one session puts files on the server, another copies those files and then deletes them. So in order to avoid a collision:
- I check if there are no established connections.
- Then I deny new connections temporarily.
- Do the job.
- Allow new connections again.
Maybe there are better ways to achieve the same result ?
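A sketch of those steps with iptables; it needs root, and established sessions survive because only NEW connections are rejected:

```
# wait until only our own established ssh session remains
while [ "$(ss -tn state established '( sport = :22 )' | sed 1d | wc -l)" -gt 1 ]; do
    sleep 5
done

RULE='-p tcp --dport 22 -m conntrack --ctstate NEW -j REJECT'
iptables -I INPUT $RULE                      # deny new connections
trap "iptables -D INPUT $RULE" EXIT          # allow again on exit

do_the_job                                   # placeholder for the actual work
```

That said, a simpler fix for the stated collision is an atomic-rename protocol: the uploader writes to file.part and mv's it into place, and the consumer only touches complete names — mv is atomic within one filesystem, so no connection juggling is needed.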
I want to disallow a USB port when a device is connected... I tried sys/bus/usb/devices/power/level=suspend but I can still see that USB device and have access to it. How do I stop access to a particular USB device connected to a Linux system? I also tried ioperm(), but I cannot find the USB port number it requires. And I want to do this with C++ programming.
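For per-device blocking (rather than suspending the port), the kernel exposes an authorized flag per device; the device path below is a placeholder, and from C++ it is just an open()/write() of "0" to the same sysfs file:

```
lsusb                                           # identify the device
echo 0 > /sys/bus/usb/devices/1-4/authorized    # "1-4" is a placeholder
```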
I have a requirement to implement SSH services in such a way that the oracle user is disallowed from everywhere other than one host, while there are no restrictions for other users.
I worked with DenyUsers, but it disallows oracle logins from all hosts.
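DenyUsers has no clean per-host conditional form for this; one approach that does work is pam_access, assuming sshd runs with UsePAM yes (the IP is a placeholder for the one allowed host):

```
# /etc/security/access.conf
+ : oracle : 10.0.0.5
- : oracle : ALL

# /etc/pam.d/sshd must include:
account required pam_access.so
```

Rules are matched first-hit, so oracle from 10.0.0.5 is accepted before the blanket deny, and users not named in the file are unaffected.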
I'm trying to devise a new sudoers configuration while building a new SOE, and I would like to force everyone (including system administrators) to use rootsh instead of doing things like sudo -s, sudo bash, sudo tcsh and so forth — effectively, to use sudo to deny any shell other than rootsh. Is there a way to allow users to run anything they want except shells? I realise this is a default-permit policy, which is inherently defective, but I'm not convinced it's worth going through the 1559 executables of my (as yet incomplete) build to decide on the 1000+ commands I would genuinely want to allow. As I said, this is for system administrators first, and I'd like to forcibly instil the habit of sudo <command>, or of using rootsh to get an audited shell. But I know people are already not doing enough sudo <command> as it stands; rather, they switch to bash.
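The usual (admittedly bypassable) sketch for "everything except shells" is a negated command alias; the group name and shell list are assumptions for the local system:

```
# /etc/sudoers (edit with visudo)
Cmnd_Alias SHELLS = /bin/sh, /bin/bash, /bin/dash, /bin/tcsh, \
                    /bin/csh, /usr/bin/ksh, /usr/bin/zsh, /bin/su
%sysadmin ALL = (ALL) ALL, !SHELLS
```

Anyone determined can cp /bin/bash /tmp/b and sudo that — the known defect of any deny-list — so combined with rootsh this mainly enforces the habit rather than providing hard security.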
I'm using Ubuntu x64 (I don't know which version, but I don't think it matters) and I'm concerned about security with PHP. I remember using lighttpd with some mystic configuration, and the security was perfect for me: if one website got hacked, the others were still safe (kinda). Now with apache2, even if I enable safe mode I'm still able to go outside the web directory — in fact I can go really far, until the user/group no longer matches. I tested the system with r57shell and I was able to mess up other websites. Is there a way to disallow access to other websites?
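safe mode was never a real boundary and is deprecated; per-vhost open_basedir is the usual mod_php-level fix (paths and names are placeholders):

```
<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
    php_admin_value open_basedir "/var/www/site1:/tmp"
</VirtualHost>
```

For isolation that survives a PHP bug, run each site as its own user (suPHP, mpm-itk, or separate php-fpm pools) so ordinary filesystem permissions do the enforcing.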
I have Windows 7 and Ubuntu 10.04 installed on the same harddrive. I'm using grub to boot both. I would like to deny access to the windows partitions, but allow access to removable drives and shared drives.
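A sketch via /etc/fstab: mount the Windows partitions explicitly with permissions only one chosen uid can pass, which also stops GNOME offering them for automount; the UUID and uid are placeholders:

```
# /etc/fstab
UUID=XXXX-XXXX  /mnt/windows  ntfs-3g  umask=077,uid=0,gid=0  0  0
```

Removable drives are untouched because they have no fstab entry and keep the normal automount path.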
I am stuck in a weird situation and could definitely use some help from gurus in security area.
I have categorized my users into 3: 1. root user 2. other local users 3. LDAP users
I want to setup following 2 usecases:
a) 1. Allow keybased ssh and scp to root users 2. Allow ssh but disallow scp service to other local users 3. Disallow ssh and scp to LDAP users
b)
1. Allow keybased ssh and scp to root users 2. Disallow both ssh and scp to other local users 3. Disallow ssh but allow scp to LDAP users
For 1. in both cases, I think PermitRootLogin in sshd_config could work. For 3. I am thinking of deploying rssh to control scp service access, since ssh will be restricted anyway.
Problem area is 2. primarily.
i) How to allow ssh but disallow scp to 'other local users'
ii) How to disallow both ssh and scp to 'other local users'
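Case (i) is the awkward one: scp rides the same exec channel as ssh, so sshd_config alone cannot split them; a wrapper forced on the group can, by refusing any remote command. Group names and the wrapper path are assumptions:

```
# /etc/ssh/sshd_config
DenyGroups ldapusers                 # case (ii) for use case a)

Match Group localusers               # case (i): interactive only
    ForceCommand /usr/local/bin/interactive-only

# /usr/local/bin/interactive-only:
#   #!/bin/sh
#   [ -z "$SSH_ORIGINAL_COMMAND" ] && exec "$SHELL" -l
#   echo "remote commands (scp/sftp) disabled" >&2; exit 1
```

For use case b)'s "scp but no ssh", rssh or scponly as the login shell is the standard tool, as already anticipated above.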
To allow the services I need, am I missing anything? I assume allowing ssh will also allow scp? (Heck, I will allow sftp as well anyway.) However, my problem is that I am connecting remotely, so the only way I can do what I want is to actually do a
Code:
sudo ufw default allow
then use a list of the services provided by
Code:
less /etc/services
and deny each service individually? This seems a pain, and if I turn on the firewall with default deny, won't it boot me out of my ssh connection?
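There is no need to enumerate /etc/services; the usual remote-safe order is to allow ssh first and then flip the default, since ufw rules persist and established connections are not cut:

```
sudo ufw allow 22/tcp          # or: sudo ufw allow OpenSSH
sudo ufw default deny incoming
sudo ufw enable
sudo ufw allow 80/tcp          # then add each service you actually run
```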
I have a system where I want only my sudoer account to show and automount NTFS partitions under 'Places' in Ubuntu. Simply put, other users shall not have access to mount them. Only my main sudoer user account shall take advantage of this show-and-possibly-automount feature of GNOME, not anyone else.
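On GNOME this mounting is mediated by udisks via PolicyKit, so a local authority file can scope it to one account; the file name, user name, and exact action id are assumptions to verify with pkaction on the actual release:

```
# /etc/polkit-1/localauthority/50-local.d/10-ntfs-mount.pkla
[Deny internal-disk mounting to everyone]
Identity=unix-user:*
Action=org.freedesktop.udisks.filesystem-mount-system-internal
ResultAny=no
ResultInactive=no
ResultActive=no

[Allow it for my account]
Identity=unix-user:myuser
Action=org.freedesktop.udisks.filesystem-mount-system-internal
ResultActive=yes
```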
I need to disable file access (fopen, freopen, open etc) for application which is running under chroot jail and with restrictions (rlimit) via execv. Before that I redirected stdin/out to files within jail. I tried this:
Code:
// Redirect stdin/stdout to files
int fd = open (file_input, O_RDONLY);
if (fd < 0)
    fatal_error ("input open failed!");
Working with an Amahi server and the VPN app. Whenever I want to activate a VPN tunnel through SSL, the VPN server starts up the "Adito" agent. Normally in Windows, the agent pops up in a browser and basically lets you surf inside the VPN. But when I use my Ubuntu machine, it says it's starting the agent and then it just sits there and stalls out with "failed to sync". I checked the logs and this is all I got:
I need to have a group of computers that connect to a remote site and run lynx to view some php pages that interface with mysql (that's a mouthful). For version control, I would like to keep only one central copy of the web files.
Personal data is sent, so rather than setup https server or SSL mysql encryption, I decided to create a "tunnel" to a Terminal Server using SSH.
I flirted with the idea of setting up VPN tunnels between the clients and a DMZ network but I don't want to add a bunch of complexity.
I just wanted to make sure that I wasn't creating a gaping security hole.
I am upgrading my server and I have a lot of sites. Since I cannot take my server down for a few days, maybe a week, until I manage to migrate all the sites to the new machine, I figured I could migrate them one by one. After migrating one, I would somehow tunnel the requests for that name-based virtual host to my internal machine. When everything is migrated, I would switch the machines, update IPs and so on, and everything would work just fine.
However, I cannot seem to find a way to do this tunneling. Is this at all possible? If not, what alternatives do I have?
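This doesn't need a tunnel as such: the old server can reverse-proxy the already-migrated vhosts to the new machine with mod_proxy (the module must be enabled; names and the internal IP are placeholders):

```
<VirtualHost *:80>
    ServerName migrated-site.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.20/
    ProxyPassReverse / http://10.0.0.20/
</VirtualHost>
```

As each site moves, it gets such a vhost on the old box; when all are moved, swap the public IP/DNS over to the new machine.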