The system in question is running Mint 9, but the forums there aren't very active, and since it's Ubuntu-based I didn't think it would be a concern bringing my problem here. If this is an issue, apologies; lock the thread and I'll head over to the Mint forums.

I have an SSH server running on the system, but I only like to have it running at certain times, so I removed it from the rc scripts using the command: sudo update-rc.d -f ssh remove.
Later I found that SSH was starting at boot time anyway. I checked the rc scripts manually and couldn't find any reference to it. I then tried to stop the process with: sudo /etc/init.d/ssh stop, which reported success, but after checking the processes and consulting syslog I found that it was respawning after I had told it to stop.

I found two ways to stop the process without it respawning:

sudo initctl stop ssh

and

sudo service ssh stop

So while I can turn it off after each boot, or script it to shut down at login, I'm still wondering why update-rc.d isn't working.
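On releases of this vintage, ssh is typically managed by Upstart rather than the SysV rc scripts, which would explain why update-rc.d has no effect: init itself respawns the daemon. A minimal sketch of disabling the Upstart job, assuming the stock job file at /etc/init/ssh.conf:

Code:
# if this prints a status line, Upstart owns the ssh job
status ssh
# on newer Upstart (1.3+), an override file disables the job without editing it
echo manual | sudo tee /etc/init/ssh.override
# on older Upstart, comment out the "start on ..." line in /etc/init/ssh.conf instead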
This is the second fresh install of 10.04 on the same machine with the same issue. After boot, as soon as a user logs into the desktop, the system monitor shows the CPU at 100 percent and a steady climb in RAM usage. Several processes are spawned continuously until all RAM is consumed, and then it moves on to using swap space.
Using top, the process count climbs to over a thousand total processes. Some investigation using top, ps, and digging into /proc shows a PPID of 1. If the machine is booted to a shell, top shows 120 processes and the count is stable. Some of the processes respawning repeatedly are the GNOME toolbar and nautilus; I wish I had been clear-headed enough to write the others down before I left work. I can certainly get a more complete list in the morning.
I have swapped out the RAM and the processor with no success. I have also tried apt-get purge ubuntu-desktop and then reinstalling with apt, but this did not resolve it. As mentioned at the top of the post, this is the second install with these symptoms. The first install started showing the issue about 10 hours after first boot. On this second install, everything worked fine for a couple of days before this started.
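For anyone debugging something like this, one way to get that "more complete list" is to log process snapshots for a minute or two right after login; this is purely a diagnostic sketch, not a fix:

Code:
# log the newest processes every few seconds so the respawning commands
# and their parent (ppid) can be identified afterwards
while true; do
    date >> /tmp/spawn.log
    ps -eo pid,ppid,etime,comm --sort=-pid | head -n 20 >> /tmp/spawn.log
    sleep 5
done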
I need to spawn 2 processes in parallel and each takes an hour or so to finish. Is the following one of the correct ways of using `at` in a script run by crontab?
Code:
#!/bin/bash
# define the env vars, cd, etc.; assume everything is OK up to this point
date +"The start time is %H:%M:%S"
rm -f a.fin
at now <<END_OF_AT
do_a &> a.log
touch a.fin    # presumably marks completion, matching the rm -f above
END_OF_AT
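For the parallel case, a second job can presumably be queued the same way, with the caller then polling for both flag files; a rough sketch under those assumptions (do_b, b.fin, and the one-minute poll interval are made up):

Code:
rm -f b.fin
at now <<END_OF_AT
do_b &> b.log
touch b.fin
END_OF_AT

# wait until both hour-long jobs have dropped their completion flags
until [ -f a.fin ] && [ -f b.fin ]; do
    sleep 60
done
date +"The end time is %H:%M:%S"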
Nautilus starts up at login and then spawns new instances until the machine runs out of memory. At one time I had checked "Automatically remember running applications when logging out" in Startup Applications Preferences; I unchecked this thinking it was the problem, but the issue has not gone away. I have to kill nautilus before the spawning will stop. Also, I cannot close nautilus; another instance will run, sometimes starting the spawning all over.
I have the latest openssh-server. You know the classic start/stop scripts:
sudo /etc/init.d/ssh start/stop
But when I issued the stop command, everything looked fine, except sshd was still running. I looked into the script: it uses start-stop-daemon to kill the process by PID. The script always kills the process, but immediately a new sshd process appears (by itself, with a new process ID)! I don't get it. I'm sick of not understanding this problem! The new sshd process has parent ID 1 (init). How is this possible? How can it be that ssh cannot be turned off and nobody has noticed or complained about it?
After 2 hours of googling I managed to find this command:
sudo service ssh stop
and ssh finally got killed. Yeah! After issuing this command, /etc/init.d/ssh start/stop works correctly, but only until the system restarts. Is this some kind of super-uber command, and should we not use the /etc/init.d/ scripts anymore?
The strange thing is, ssh starts by itself after system start-up (without being in /etc/rc...).
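What's described here matches ssh being managed by Upstart: init (PID 1) supervises the daemon and respawns it when the old SysV script kills it, while service and initctl talk to Upstart directly. A quick way to confirm, assuming the stock job file location:

Code:
# if ssh is an Upstart job, this prints something like "ssh start/running, process NNNN"
status ssh
# the boot-time start condition lives here, not in /etc/rc*.d
cat /etc/init/ssh.conf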
I see in /etc/init/ssh.conf that sshd is designed to:

start on filesystem
stop on runlevel S

I understand that runlevel S is single user, which says to me that sshd is not stopped on shutdown (runlevel 0). Also, sshd is in /usr/sbin/sshd, and furthermore

sudo lsof -p <sshd_pid>

shows that it uses lib files in /usr/lib. So my question is: if sshd is not stopped on shutdown, yet sshd uses files in /usr, then how can

/etc/rc0.d/S40umountfs

ever successfully umount /usr during shutdown when /usr is on its own partition? sshd should still be using those files, meaning the filesystem is busy, right? Yet I'm pretty sure that my shutdowns used to complete successfully. (Edit: I guess they didn't - see next post)
I installed it, but every time I start my system sshd is running, and I can't find out where its startup entry is located.

Code:
ls /etc/rc*.d | grep ssh

returns nothing, which indeed is true, because I removed the links from the runlevel directories. I checked the rc.local file, which is empty, and I never added a line anywhere to start the ssh daemon explicitly. I'm using the latest Ubuntu with updates and everything (at least I think so). In my GNOME Startup Applications preferences the only thing related to ssh is the SSH key agent, but if I deactivate it, sshd still starts up.
Code:
sudo update-rc.d -f ssh remove

didn't help either.
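As in the earlier posts, the startup entry is most likely the Upstart job rather than an rc link; assuming a stock install, it can be located like this:

Code:
# the job file that starts sshd at boot on Upstart-based Ubuntu
ls /etc/init/ssh.conf
initctl list | grep ssh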
I have just built my first ever Linux desktop, using VMware, and it is running Ubuntu 10.10. I wish to use SSH to contact the machine, but I don't believe sshd is running.
I have done a grep for sshd, which shows nothing, and I have checked the Synaptic Package Manager and can see that openssh-client version 1:5.5p1-4ubuntu is currently installed.
On Solaris you can start SSH by typing /etc/init.d/ssh start, but when checking /etc/init.d on this Linux machine there is nothing in there called ssh, so I am unable to start it.
I just want to have SSH running on the machine.
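If only openssh-client shows up in Synaptic, the server half simply isn't installed; the daemon ships in a separate package. A minimal sketch for Ubuntu 10.10:

Code:
# install the server; the package creates the host keys and starts the daemon
sudo apt-get install openssh-server
# on 10.10 the daemon is managed by Upstart, so check it with:
sudo service ssh status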
I've started to get segfaults in sshd when trying to connect. There has been no reboot and (until I restarted the sshd to try to fix the problem) there was still another ssh session connected.
Not sure if this belongs here or in the upgrade forum, but since the upgrade went OK I figured I would post this here. I come from a Red Hat background and am not too familiar with the Ubuntu/Debian way of dealing with service startup, so this may be simple. I was running 10.04 LTS on a Sony Vaio and did a distribution upgrade to 10.10. When running 10.04 LTS the sshd daemon started on boot and I was able to ssh into the PC. Now that I have done the update, I can start the daemon using the command "sudo /etc/init.d/ssh start", but it doesn't start when the system boots.
I even tried creating entries in the /etc/rcN.d directories like this: /etc/rc0.d/K20ssh -> ../init.d/ssh, /etc/rc1.d/K20ssh -> ../init.d/ssh
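Two things worth noting here: K-prefixed links are kill links, so they would stop the service rather than start it, and on 10.10 ssh is normally an Upstart job, so rc links are ignored anyway. A sketch of what to check, assuming stock paths:

Code:
# the Upstart job that should start sshd at boot; if the upgrade lost it,
# reinstalling the server package restores it
ls -l /etc/init/ssh.conf
sudo apt-get install --reinstall openssh-server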
I have been trying for weeks to solve this one and have researched everywhere I know to look. Nothing has helped. I am trying to ssh to my other machine (machine1=galla, machine2=cachin). Both run Maverick Meerkat 10.10. I get the following error when trying to ssh to galla:

ssh: connect to host galla port 22: Connection refused
uname -a outputs:

Linux galla 2.6.35-27-generic #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 x86_64 GNU/Linux

Also, sshd does not stay running. I can start it, but ps tells me it is never running. I imagine herein lies the problem, but why won't it stay running? I am not running any firewall on galla (iptables -L told me that).

P.S. I can successfully ssh out of galla to cachin. And even if I just ssh localhost on galla, the same thing happens.
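When sshd exits immediately like this, running it in the foreground usually prints the reason; a couple of safe things to try, assuming the stock Ubuntu paths:

Code:
# run sshd in debug mode in the foreground; it prints why it refuses to start
sudo /usr/sbin/sshd -d
# a common cause is missing or corrupt host keys; this regenerates them if absent
sudo dpkg-reconfigure openssh-server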
I can't ssh into my Dad's machine. He can ssh in from another computer on his network, but I can't get in across the internet. I thought we had port forwarding set up correctly on his router (Westell 327W running Verizon software; sshd application, port 22 to port 22, TCP).
I can exchange keys with his server, but I get "Permission denied, please try again" when I try to log in. An nmap scan (with the -PN option) on his IP shows the port open.
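The usual next step is to see which authentication methods are actually being tried and what the server logs for the denial; a diagnostic sketch (the username and host are placeholders):

Code:
# on the client: verbose output shows which keys are offered and why they fail
ssh -vv user@dads-public-ip
# on the server: the reason for each denial lands in the auth log
sudo tail -f /var/log/auth.log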
I've got Fedora 14 running on an EBS volume on Amazon EC2. I've created a few users and opened port 22. When I set a password for these users, they can successfully ssh into the instance, even if they log out and log in again... until:
If I reboot the machine, they can no longer ssh into it (permission denied). If I issue the passwd <user> command and change their passwords, they can log in again... until I reboot the machine, at which time they cannot log in again until I change their passwords. The problem exists even from the machine itself: if root attempts to ssh into 127.0.0.1 using their username/password, the same problem (and the same fix) applies.
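Since a password reset fixes it until the next boot, something at boot time is apparently altering the credentials or the auth configuration; one way to pin that down, as a diagnostic sketch:

Code:
# before rebooting, snapshot the password hashes
sudo cp /etc/shadow /root/shadow.before
sudo reboot
# after reboot, see whether anything rewrote the hashes at boot
sudo diff /root/shadow.before /etc/shadow
# and check what sshd/pam actually complain about
sudo tail -n 50 /var/log/secure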
I'm having trouble trying to understand this problem. Until yesterday my home server had a public IP, sitting on the network with sshd running, and all was fine. This evening I changed the IP, giving it a local LAN address, and when I tried to connect to it by ssh I got an error: "Connection closed by remote host". Google helped me find that this related to the hosts.deny file, which actually contained the line

ALL: ALL

I commented it out, and all was fine. My question is: why was hosts.deny (which has never changed) observed only with the local IP? I tried switching back to the public IP, leaving ALL: ALL in place, and it connected without any problem.
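For context, sshd built with TCP wrappers consults /etc/hosts.allow before /etc/hosts.deny, and the first match wins; so one plausible explanation is an entry in hosts.allow that matches the public address but not the LAN one. A hypothetical pair of entries that would behave exactly as described (the address is made up):

Code:
# /etc/hosts.allow - matched when connecting via the public address
sshd: 203.0.113.5
# /etc/hosts.deny - hit only when nothing in hosts.allow matched
ALL: ALL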
Updated two SSH packages today and now sshd won't start. It worked fine an hour ago, and I'm still logged onto the server via pre-existing SSH sessions, but I obviously cannot start new ones.
Code:
paine@pandora:~$ sudo /etc/init.d/ssh start
 * /dev/null is not a character device!
paine@pandora:~$
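That message comes from a sanity check in the init script: something has replaced /dev/null with a regular file. If that is what happened here, recreating the device node should let sshd start again:

Code:
ls -l /dev/null        # a healthy /dev/null starts with "crw-rw-rw-"
sudo rm /dev/null
sudo mknod /dev/null c 1 3
sudo chmod 666 /dev/null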
Somehow the -D option got tacked on to my sshd at startup. How do I remove the -D option when sshd is started at boot? I'm guessing I need to edit something in /etc/init.d, but I'm not sure what. I checked System -> Preferences -> Startup Applications and the SSH server daemon isn't listed there. And since it is a command-line option, /etc/ssh/sshd_config is no help.
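On Upstart-based Ubuntu the boot-time command line for sshd comes from the Upstart job, not from /etc/init.d, and if the job file reads exec /usr/sbin/sshd -D, that is stock behaviour: -D keeps sshd in the foreground so Upstart can supervise it, and removing it would likely break the job's process tracking. To see where the flag comes from on a given system:

Code:
# show the exec line that determines sshd's boot-time options
grep -n sshd /etc/init/ssh.conf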
I've been using ssh for a LONG time to connect my laptop to my desktop with no problems. I use a non-standard port (nnnnn) and keys. After a power outage that caused a shutdown and reboot, I can no longer ssh into the desktop. The only changes I've made are updates (laptop and desktop both running ubuntu 10.04).
$ ssh -p nnnnn Desktop
ssh: connect to host Desktop port nnnnn: Connection refused

No messages are generated in any of the logs on Desktop!

$ /usr/sbin/sshd -T
port nnnnn
protocol 2
addressfamily any
listenaddress 0.0.0.0:12023
listenaddress [::]:12023
.....
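Connection refused with nothing in the logs usually means no daemon is listening on that port at all, so the first thing to confirm is whether sshd is actually running on the desktop and which port it is bound to; a quick diagnostic sketch:

Code:
# is sshd running, and which address:port is it bound to?
ps aux | grep '[s]shd'
sudo netstat -tlnp | grep sshd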
How do I separate sftp and ssh and run them on different ports?
i.e.
a) sftp on port x
b) ssh on port 22
I searched the web, and there are no detailed instructions. Some suggested separating sshd_config into two files (file A and file B) and running two instances, each instance pointing to its own configuration file.
However, they didn't write down the detailed procedure for:
a) how to modify file A and file B (i.e. on which lines to insert the specific directives)? A sketch of this follows below.
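For what it's worth, here is a minimal version of the two-instance approach, with port 2222 standing in for "port x" and Debian/Ubuntu paths assumed:

Code:
# file B: a separate config for the sftp-only instance
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config_sftp

# in /etc/ssh/sshd_config_sftp change at least these directives:
#   Port 2222                                  (the "port x" instance)
#   PidFile /var/run/sshd_sftp.pid             (so the two daemons don't clash)
#   Subsystem sftp /usr/lib/openssh/sftp-server

# file A (/etc/ssh/sshd_config) keeps Port 22 for interactive ssh;
# commenting out its Subsystem sftp line disables sftp on port 22

# start the second instance against its own config
sudo /usr/sbin/sshd -f /etc/ssh/sshd_config_sftp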
I have a Red Hat Fedora Core release 6 (2.6.22.9-61.Ns4) server, and from time to time ssh fails, although I am still able to ping the device, and a reboot makes the device work correctly again. Upon further investigation it appears the sshd daemon fails. Not knowing a great deal about Linux, I thought I would ask some advice on the path I am thinking of taking. The first idea would be to put an entry in cron to try to start sshd every hour or so. Would this cause issues in the long run, running it multiple times when the sshd daemon was still running?
The second thought I had was a bash script to check whether the process is running: if not, restart it, and if it is, just exit. That would seem like a neater way to do it, but this is where my limited Linux knowledge hits a wall, so I was looking for suggestions on how to implement it.
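A minimal watchdog along those lines, assuming the Fedora init script lives at /etc/init.d/sshd (adjust the path if not); running it from cron is harmless because it only acts when sshd is absent:

Code:
#!/bin/bash
# restart sshd only if no sshd process is currently running
if ! pgrep -x sshd > /dev/null; then
    /etc/init.d/sshd start
fi

And a matching crontab entry, with the script path assumed:

Code:
*/10 * * * * /usr/local/bin/check_sshd.sh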
I have OpenSSH installed and wish to log on to my CentOS container (hosted by switchlinck.co.uk) from my Windows PC using PuTTY. I can log on fine by entering my username and password, but I wish to use an RSA key to log on without a password. I have managed to create the keys with PuTTY and amended them to work with OpenSSH. However, I am unable to find the authorized_keys file to put the key into. SSH is running, but that file does not exist in /etc/ssh. The various how-to sites I have read all point towards ~/.ssh, but I do not have a .ssh directory anywhere on the system. I have tried creating different users but still cannot find this directory.
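~/.ssh is not created by the SSH server; it only exists once something (ssh-keygen, a first outbound ssh, or you) creates it, so making it by hand is the normal procedure. A sketch, run as the user you log in as:

Code:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# paste the OpenSSH-format public key (one line) into this file
nano ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys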
When I log in on localhost with pubkey auth, I get the following in my log:
Code:
Sep 20 12:42:27 aldebaran sshd[19745]: Accepted publickey for root from 127.0.0.1 port 37520 ssh2: RSA 45:4e:27:4d:30:f5:3d:25:10:d0:92:88:53:77:1a:3b
Sep 20 12:42:27 aldebaran sshd[19745]: pam_unix(sshd:session): session opened for user root by (uid=0)
Sep 20 12:42:27 aldebaran systemd[19757]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Sep 20 12:42:27 aldebaran systemd-logind[585]: New session 70 of user root.
Sep 20 12:42:27 aldebaran systemd[19757]: Starting Paths.
I set up a Debian Lenny VM in VMware on my Windows machine. The network interface is set to bridged, so the virtual machine is connected directly to the university network I am connected to. I want to be able to ssh into the VM. I installed sshd via "apt-get install ssh", generated a key pair with PuTTYgen, copied the public part to /home/user/.ssh/authorized_keys, set its permissions to 600, and then tried to disable password authentication completely, following the "Securing Debian" documentation. This is how my /etc/ssh/sshd_config looks now:
# Package generated configuration file
# See the sshd(8) manpage for details

# What ports, IPs and protocols we listen for
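For comparison, the directives in sshd_config that actually control key-only logins are just a few; a minimal sketch of the relevant lines, to check against the file above:

Code:
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no

# apply the change afterwards, on Lenny, with:
#   /etc/init.d/ssh restart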
I tried recompiling a new kernel yesterday (2.6.34) on my Debian Sarge box, but I ran into multiple difficulties. These difficulties forced me to do a double dist-upgrade to Lenny. The new kernel was (seemingly) compiled without any hiccups, and I ran dpkg -i on both the image and the header debs. They didn't install properly into GRUB, but I think I managed to fix that manually. The next thing I did was reboot the server. It refused to come back up. Luckily my ISP has recovery tools, so I managed to switch back to the old kernel. It boots just fine with that kernel, but the problem is that there is no ssh daemon running! I can access it through FTP and do limited jobs through PHP, but nothing big, as I have no root access. Now, enough backstory. My question is: how can I install openssh-server onto the server remotely? I cannot access the server personally, as the server is in a completely different country.
I have searched the web for an answer, and there are some suggestions. I tried those suggestions but was not successful. I would appreciate it if anyone could help resolve this. I'm running Fedora 11 and using NX server for remote access.
I set up my old laptop for my mom with F13 and have sshd running. My dad set up their DD-WRT router so that it forwards port 22 to the laptop's IP address. Yet I get "No route to host" when I try to ssh in from my house. Is there anything that would prevent F13 from accepting the SSH connection?
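One thing worth knowing: Fedora's default firewall rejects unknown inbound connections with icmp-host-prohibited, which the client reports as exactly "No route to host", so the laptop's own iptables rules are a prime suspect even with the port forward in place. A quick check on the laptop:

Code:
# is sshd listening on all interfaces?
sudo netstat -tlnp | grep sshd
# look for a REJECT rule with icmp-host-prohibited before any ACCEPT for port 22
sudo iptables -L -n --line-numbers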