I'm trying to use ssh-keyscan to populate a known_hosts file, but I have a ton of hosts to scan, all with multiple aliases in /etc/hosts. Is there a way to feed my current /etc/hosts file to ssh-keyscan instead of building the special list of hosts that (from what I've read) ssh-keyscan needs?
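One hedged approach, sketched with sample data rather than the real /etc/hosts: pull the name and alias columns out of a hosts-format file, deduplicate them, and hand the resulting list to ssh-keyscan's -f option. The file names and the sample entries below are illustrative.

```shell
# Sketch: extract hostname/alias columns from a hosts-format file and feed
# them to ssh-keyscan. A sample file stands in for the real /etc/hosts.
cat > /tmp/sample_hosts <<'EOF'
127.0.0.1   localhost
10.0.0.5    web01 web01.lan www
10.0.0.6    db01 db01.lan
EOF

# Drop comments and localhost, blank out the address column, and flatten the
# remaining aliases into one name per line.
awk '!/^[[:space:]]*#/ && !/localhost/ && NF > 1 { $1 = ""; print }' /tmp/sample_hosts \
  | tr -s ' \t' '\n' | sed '/^$/d' | sort -u > /tmp/scan_list
cat /tmp/scan_list

# Then, against the real file and reachable hosts:
# ssh-keyscan -f /tmp/scan_list >> ~/.ssh/known_hosts
```

ssh-keyscan's -f option reads one host per line, so no special format is needed beyond the flattened list.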
Probably an easy (which means stoopid) question... I am trying to reroute a website using my hosts file so that it matches my server's certificate file, for testing without affecting DNS or the live site. When I went to edit my /etc/hosts file, it was non-existent. I have, I am assuming in its place, hosts.allow and hosts.deny. Can anyone explain why I do not have a hosts file?
Well, like many proxy applications, GNOME Network Proxy Preferences only allows you to ignore hosts. What I want to do is exactly the opposite: I only want to use the proxy for a few sites. Is there any way to define only the allowed hosts?
PS: I know the FoxyProxy add-on for Firefox does this, but 1) I don't use Firefox and 2) I want the proxy settings system-wide, not just for the browser.
I share a computer with my brother. It runs Lucid Lynx. I want to add an entry to the hosts file that will affect him negatively. Is there a way I can add the entry without it affecting him, i.e., is there a user-specific hosts file?
I am trying to add subdomains on Ubuntu 9.10 desktop edition, and I am not sure whether I need to add some info (such as 127.0.0.1 sub1.example.com and so on) to the /etc/hosts file, like Windows' windows/system32/drivers/etc/hosts file. When I used wamp-server (on Windows 7), I needed to edit 3 files: httpd.conf, httpd-vhosts.conf and hosts, and almost every edit was made in the httpd-vhosts.conf file. Which files should be edited here, or what else should be done that I didn't mention?
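For what it's worth, the Ubuntu analogue involves the same two pieces as on Windows/WAMP; a minimal sketch, where sub1.example.com and the paths are assumptions:

```
# /etc/hosts — point the subdomain at this machine
127.0.0.1   sub1.example.com

# /etc/apache2/sites-available/sub1.example.com.conf — a minimal name-based
# virtual host (enable it with: sudo a2ensite sub1.example.com)
<VirtualHost *:80>
    ServerName sub1.example.com
    DocumentRoot /var/www/sub1
</VirtualHost>
```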
Tell me a way to password-protect the hosts file in Ubuntu so that when I block certain websites, the other person cannot unblock them. Important: I do not want the hosts file to be protected by the root password, as the other person knows it.
I would like to lock the /etc/hosts file somehow in a way that only someone else can unlock it, possibly using a lock code. I would then give the passcode to someone else. I'm running Ubuntu 10.10.
I often manually add a troublesome domain (e.g., advertisements, fake virus alerts, etc.) to my /etc/hosts file on Ubuntu 10.04 Lucid, but the effect isn't immediate. My hosts file is already fifteen thousand lines long (having combined all the hosts files I could find on the net, including the MVP one), but I still, almost daily, find a new irritant to add. My problem is that I do not understand when the /etc/hosts file is next read after a change. I've been rebooting to make sure it is re-read, but there must be a simpler way. My questions:
- When is the /etc/hosts file reconsidered in Ubuntu?
- Is there a way to have it re-read sooner?
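For context, a quick check one can run (hedged: this assumes the usual glibc setup): /etc/hosts is consulted on every name lookup, in the order given by the "hosts:" line of /etc/nsswitch.conf, so changes normally take effect immediately and no reboot is needed unless a caching daemon such as nscd sits in front of the resolver.

```shell
# Show which sources the resolver consults, in order ("files" means
# /etc/hosts); the file is re-read on each lookup, not cached at boot.
grep '^hosts:' /etc/nsswitch.conf || true

# Resolve a name through the normal NSS path to confirm a hosts entry works:
getent hosts localhost

# Only if a resolver cache such as nscd is installed does anything need
# flushing after an edit:
# sudo service nscd restart
```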
Is it possible to have a different /etc/hosts file for different network connections without having to go in and change it every time? The why: I have dyndns and port forwarding to get to my desktop. My laptop is sometimes on the same network and sometimes not. Also, sometimes the dyndns doesn't update properly, or the outside connection is down, but I still want to get to my desktop (and I'm too lazy to walk up the stairs). I'd like to keep one set of bookmarks, ssh command aliases, etc. that always reaches it the fastest, most reliable way possible.
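One hedged way to automate this on a NetworkManager system: a dispatcher hook that swaps hosts files whenever a connection comes up. The SSID name, the two hosts variants, and the use of iwgetid are all assumptions about the setup; the script is written to /tmp here for illustration.

```shell
# Sketch of a NetworkManager dispatcher hook. Scripts placed in
# /etc/NetworkManager/dispatcher.d/ run on every connection change and
# receive the interface and action as arguments.
cat > /tmp/99-swap-hosts <<'EOF'
#!/bin/sh
# $1 = interface, $2 = action (e.g. "up", "down")
[ "$2" = up ] || exit 0
if iwgetid -r 2>/dev/null | grep -qx 'HomeWifi'; then
    cp /etc/hosts.home /etc/hosts   # LAN addresses for the desktop
else
    cp /etc/hosts.away /etc/hosts   # dyndns names only
fi
EOF
chmod +x /tmp/99-swap-hosts

# Real install would be:
# sudo cp /tmp/99-swap-hosts /etc/NetworkManager/dispatcher.d/
```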
I run a few virtual servers at home behind NAT, including an e-mail server, with dynamically updated DNS records pointing to each server. Consequently, I suffer from the loopback problem when working with these servers from my desktop PC (e.g., I ping one of the DNS hostnames and the ping goes to my router instead of the server). I fixed this by manually adding the in-home IP address and name pairs to my /etc/hosts, and then setting /etc/host.conf to a "hosts, bind" order.
This seems to work for every application on my desktop except one: the postfix installation on my desktop PC (used for mailing smartctl messages and so forth) cannot communicate with my in-home e-mail server (it times out). I checked the logs, and it looks like it is using the IP address from the actual A record, rather than the address in my hosts file.
So I'm not quite sure what to do. There seems to be a "proxy_interfaces" parameter in main.cf which might be relevant, but I think it only deals with received mail. I'd prefer to have the mail go to that e-mail server, rather than also having to check the spool on my local desktop accounts.
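One possible explanation, offered tentatively: Postfix's SMTP client resolves destinations with DNS directly by default, bypassing /etc/hosts and the host.conf order. If that is the cause here, the relevant main.cf parameter is smtp_host_lookup; a sketch, not a confirmed fix:

```
# /etc/postfix/main.cf — let the SMTP client use the system resolver
# (and therefore /etc/hosts) instead of querying DNS directly:
smtp_host_lookup = native

# then reload: sudo postfix reload
```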
I'm having problems configuring my virtual hosts file properly. The site [URL]... opens on both http and https. The site 10.0.1.3/myapp/ works. I am trying to redirect all traffic from [URL].... to [URL].... while maintaining access to [URL]....
I have some settings in the hosts file of my Windows Vista machine. They help me bypass some limitations and get online better. I would like to migrate some of these settings to openSUSE 11.4. Does anyone know how I can tune my openSUSE this way? FYI, a hosts-file setting is a line of <IP address> <spaces or tabs> <hostname or alias>.
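For reference, the format is identical on openSUSE, so the entries can be copied into /etc/hosts nearly verbatim (the addresses and names below are placeholders):

```
# /etc/hosts on openSUSE — same <IP> <whitespace> <name> [aliases] layout
# as Windows' drivers/etc/hosts:
93.184.216.34   example.com   www.example.com
10.0.0.7        fileserver    fs
```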
I've been trying to get dnsmasq working as a combined DNS and DHCP server. It's infuriating so far... In short, DNS works fine for anything added to /etc/hosts, and DHCP works fine, but DHCP is not updating DNS with hostname information from clients.
The outcome of this is that I can only ping a node by hostname if I already know its address, which means setting a static DHCP allocation and putting the hostname into /etc/hosts manually. That is very annoying and kind of defeats the point of DHCP. There must be a way to get dnsmasq to update the hosts information, surely. The clients aren't using FQDNs, if that matters, and I think I've tried every combination of "expand-hosts" and "domain=". The following is the dnsmasq config file contents:
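Since the config itself didn't survive here, a hedged sketch of the pieces that usually matter (the domain name and range are assumptions about the setup): dnsmasq answers DNS queries for DHCP clients straight from its own lease table, so nothing needs to be written back into /etc/hosts, but the client must actually send a hostname in its DHCP request.

```
# /etc/dnsmasq.conf — illustrative fragment
domain=lan           # internal domain (assumption)
expand-hosts         # qualify bare /etc/hosts names with the domain above
dhcp-range=192.168.1.50,192.168.1.150,12h

# Names from DHCP leases are served automatically; if a client sends no
# hostname, pin one to its MAC address instead:
# dhcp-host=11:22:33:44:55:66,myclient
```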
When I converted to openSUSE 11.2 and went through the YaST HTTP Server Configuration, creating my virtual hosts under the Hosts tab, YaST combined them all into one file, "/etc/apache2/vhosts.d/ip-based_vhosts.conf". I did google and read [URL] for further assistance. I'd like each virtual host to have its own file under vhosts.d, and I'm wondering why YaST did not do that. The file /etc/apache2/httpd.conf lays out the file structure, and all vhosts.d/*.conf files are included. Is there a way to tell YaST to create separate files for each vhost, or does the user have to do it manually?
I was having a discussion with someone who asked me whether a Linux OS has to be rebooted when the hosts file is modified. From personal experience on Windows, I change the file without rebooting, and I've seen others do the same. I assume Linux is no exception, but is there a reason why a reboot is not required (to at least justify my claim)?
I run a local Apache server with some virtual hosts. Now I want to connect to these virtual hosts locally, but when I try, the browser puts www and .com around the name and says it can't find it. On Windows I know the equivalent: editing the hosts file. Is there something similar in Linux?
I need a script, but I am not good at programming. What the script has to do:
- Every minute, check whether an IP address is reachable (ping).
- If the IP answers, nothing happens.
- If the IP does not answer: /etc/hosts is replaced by the one stored in /home/user/hosts, and an xterm notification asks me to restart some program.
- When the IP finally answers again, /etc/hosts is replaced by the one stored in /home/user2/hosts.
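A minimal sketch of such a watchdog, with the monitored IP and the exact xterm message as assumptions (it must run as root to overwrite /etc/hosts); written to a file here so it can be inspected before use:

```shell
cat > /tmp/hosts_watchdog.sh <<'EOF'
#!/bin/sh
# Ping an address once a minute and swap /etc/hosts on state changes.
TARGET_IP=192.168.1.1          # assumption: the address to monitor
DOWN_HOSTS=/home/user/hosts    # used while the IP is unreachable
UP_HOSTS=/home/user2/hosts     # restored when the IP answers again
state=up

while true; do
    if ping -c 1 -W 2 "$TARGET_IP" > /dev/null 2>&1; then
        if [ "$state" = down ]; then
            cp "$UP_HOSTS" /etc/hosts
            state=up
        fi
    elif [ "$state" = up ]; then
        cp "$DOWN_HOSTS" /etc/hosts
        state=down
        # pop up a terminal so the user knows to restart the program
        xterm -e sh -c 'echo "IP down - restart the program"; read dummy' &
    fi
    sleep 60
done
EOF
chmod +x /tmp/hosts_watchdog.sh
# run as root, e.g.: sudo /tmp/hosts_watchdog.sh &
```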
I am trying something a bit tricky. Suppose there is a website URL... Now suppose I open a file /var/www/test.php, which connects to the above website to gather some info and then lets me go further in the process; I want it to instead be directed to a file, say /var/www/test_done.php. How do I edit my hosts file for such a scenario? Is there any better option than using a hosts file?
Working fine:
==> scp my_log-bin.01393[0-9] root@192.168.103.66:/backup/
Error - No such file or directory:
==> scp my_log-bin.0139[30-99] root@192.168.103.66:/backup/
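The likely culprit: [30-99] is a character class (the characters 3 and 0 through 9), not a numeric range, so it matches exactly one character and the glob finds no file. A demonstration with throwaway sample files:

```shell
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch my_log-bin.013935 my_log-bin.013978 my_log-bin.014002

# One bracket per digit position matches the intended 013930..013999 span:
ls my_log-bin.0139[3-9][0-9]

# In bash, brace expansion also works, but it generates every name whether
# or not the file exists, so scp would complain about the missing ones:
#   scp my_log-bin.0139{30..99} root@192.168.103.66:/backup/
```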
If I need to append a set (or sets) of data to a file (or files) on remote hosts, what is the best mechanism to do that? My first thought was ssh, but the command syntax to append to a remote file isn't clear to me. Can anyone point me in the right direction?
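One common pattern, sketched tentatively (host and paths are placeholders): pipe the data over ssh into a remote `cat >>`. Shown below with a local stand-in so it can be tried without a server.

```shell
# The remote form (hypothetical host and path):
#   cat local_data.txt | ssh user@remotehost 'cat >> /path/to/remote_file'

# Same pattern demonstrated locally: the subshell plays the remote side.
printf 'existing\n' > /tmp/append_dst
printf 'line one\n' > /tmp/append_src
cat /tmp/append_src | sh -c 'cat >> /tmp/append_dst'
cat /tmp/append_dst
```

The quoting matters: without the single quotes, >> would be interpreted by the local shell and the data would be appended to a local file instead.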
I have some files with .sh extensions that run some software. Now, how do I stop those running programs? I know we run ./start_tomcat.sh to start Tomcat. Is there a command to stop that file/process, or do I just kill the process?
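For what it's worth, Tomcat ships a stop script (shutdown.sh) alongside its start script; failing that, killing the launched process is the normal route. A small stand-in demo, where sleep plays the role of the long-running server:

```shell
# Start a long-running process in the background, as a start_*.sh would:
sleep 300 &
bgpid=$!

# Stop it by PID; pkill -f 'start_tomcat' would do the same by name match.
kill "$bgpid"
wait "$bgpid" 2>/dev/null || true

# Confirm it is gone:
kill -0 "$bgpid" 2>/dev/null || echo "process stopped"
```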
<rant>10.04 has been a disaster for my Acer laptop. NVidia must hate Ubuntu because any time I try to load into 2.6.32.22, the damn thing black-screens and dies. 2.6.31.21 "works", but I get flooded with errors related to the generic nvidia drivers. And I've specifically NOT installed the proprietary drivers because I know from past experience that they kill my computer, regardless of which Ubuntu version I use. Plus 10.04 is just... why did they change everything? The default color scheme makes my eyes bleed, and even the close/minimize/maximize buttons are in the wrong place. But that's minor... The real problem is that my graphics capabilities are shot to hell, and since this is a computer used for work-related image processing, I need 9.10 back.</rant>
Reinstalling from disk is not an option since any install disc (alt or otherwise) newer than 8.04 fails. (A fresh install of 9.10 involved installing 8.04 => 8.10 => 9.04 => 9.10) Is there a simple apt-get type command I can use to revert it?
I basically accidentally installed GNOME 3 via Ubuntu Tweak (I didn't notice the box was checked), and the next time I started up I could only use the GNOME 3 environment, which is massively buggy. I wish to revert back to 11.04 (without a clean install, since I have a lot of files I need to keep). I have tried purging the GNOME 3 team PPA, but I get the following error:
"sudo ppa-purge ppa:gnome3-team/gnome3/ Updating packages lists PPA to be removed: gnome3-team/gnome3 ppa
I recently installed KDE via the terminal using sudo apt-get install kde-standard, and I'm now having difficulty logging in; when I do finally log in, all I get is a blank screen. I have tried booting Ubuntu in safe mode and uninstalling KDE from the terminal, but I am still prompted with the KDE login screen after rebooting. I was just wondering how to get back to Unity.
Tried "publish file" in ubuntu one (web interface) to show a file to a friead, it worked great, then i selected stop publishing file, and according to ubuntu one it was not published any more. But my friend can still download the file from the url given, and so can I. Then I tried publishing it again via nautilus, it gave me the same URL, and stop publishing via nautilus, and the URL still works. Now I have the feeling that maybe all files in ubuntu one is accessible to the world by guessing the right URL. How can I know for sure? At least this one file now is world accessible even though ubuntu one says it isn't.
I found one bug report on Launchpad, which is closed because it supposedly works. Has anyone else tried unpublishing files? [URL]