Here's my situation. I have around 30+ systems running Karmic, with more coming online every day. I'd like to have these updated via the local network to conserve internet bandwidth. Is it possible to set up a single system which will download the updates I specify, then make them available to the other systems on my network via some kind of local repo?
Of course, I'm not looking to mirror every repo, just the essential updates to the core OS. I really just want to eliminate that 45-minute, 256-file download that takes place each time I install a new system.
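One common way to do this is a caching proxy: packages are fetched from the internet once and served from the LAN afterwards. A minimal sketch using apt-cacher-ng (one of several such proxies; the hostname `apt-cache` and the client setup below are assumptions, not from the original post):

```shell
# On the designated cache server:
sudo apt-get install apt-cacher-ng      # listens on port 3142 by default

# On each client, point APT at the cache (hostname is a placeholder):
echo 'Acquire::http::Proxy "http://apt-cache:3142";' | \
    sudo tee /etc/apt/apt.conf.d/01proxy
```

After that, every client's normal `apt-get update && apt-get upgrade` pulls through the cache, so the second and later machines download from the LAN.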
One of Konqueror's unique features is that I can name a local process as the action in a form. When I submit that form, the local process is executed, which is very helpful for certain offline tasks. What would make it even better is if I could find a way to pass some data to that local process from the HTML page, such as the content of a hidden input element. Alternatively, if there is a way for Konqueror to create or update a local file with data from the HTML page, that would achieve the same end.
I have just installed 10.04 LTS Server on my system. I set up a file server for my Windows computers and can connect to the server. However, when I went to update and install a couple of packages, my system could not find the update or download server. Is anyone else experiencing this problem? If not, any ideas on how to fix it?
I have been having a little trouble with my FTP server. I'm using ProFTPD and have been able to connect to the box on the local LAN, but it stops with an error about a TLS packet.
I have been following a guide on setting up my server [URL]... and have attached my config and error details.
I had problems with my Ubuntu Server 10.04, so I reformatted and reinstalled (I only chose OpenSSH from the install menu), then after it rebooted I installed Apache. I can access the machine via web, SSH and SCP from within my local network, but strangely, from outside my network I can access it via web but not SSH or SCP. Nothing has changed on my router.
I am trying to set up a DNS server on my local network. When I set Linux clients to use it, it works as expected. However, when I set Windows clients to it, the root name doesn't resolve. For example, I have a zone called daniel. On Linux, "anything.daniel" resolves to the correct IP, as does "daniel", which is the behavior I want. However, on Windows 7, "anything.daniel" resolves correctly but "daniel" doesn't. I am new to BIND9, so my config is mostly copied and pasted. Here is my zone file for daniel (where #.#.#.# is the IP I want daniel to resolve to):
@ IN SOA ns1.daniel. admin.daniel. ( 2007031001 28800 3600
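For comparison, a sketch of what a complete zone file for this setup might look like; the IP address, the NS name, and the expire/negative-cache values below are illustrative assumptions, not taken from the post (only the serial, refresh and retry values above are original):

```
$TTL 86400
@    IN SOA ns1.daniel. admin.daniel. (
         2007031001  ; serial
         28800       ; refresh
         3600        ; retry
         604800      ; expire (assumed)
         86400 )     ; negative-cache TTL (assumed)
@    IN NS  ns1.daniel.
@    IN A   192.168.1.10   ; bare "daniel"
*    IN A   192.168.1.10   ; anything.daniel
ns1  IN A   192.168.1.10
```

One caveat worth knowing: Windows generally does not send single-label names like "daniel" to DNS at all; it falls back to NetBIOS/LLMNR unless a DNS suffix is configured on the client. So the `@` A record alone may not fix Windows 7, and a client-side suffix setting (or a two-label name like daniel.lan) is often the real answer.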
Starting with Ubuntu Server 9.04, when I log in the OS helpfully reports the number of updates available. I have a couple of questions about how to extend this functionality:
1) How could I back-port that functionality to 8.04 servers? Does that happen somewhere in the /etc/cron.daily/apt script? (I just need the check on the number of packages needing updates; it does not need to appear on the login screen.)
2) Then, with that information, if the number is greater than 0, use mailx to send the admins an email. Since 9.04 and higher already do the first item, in my mind #2 is the only thing needed for 9.04 and higher servers, while #1 is also needed for the 8.04 boxes.
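A minimal cron-able sketch covering both items: it counts pending updates by simulating an upgrade (which works on 8.04 as well as 9.04), then mails the admins if the count is non-zero. The admin address is a placeholder:

```shell
#!/bin/sh
# Count pending package updates and mail the admins if there are any.
ADMIN_MAIL="admin@example.com"   # placeholder address

# "apt-get -s" only simulates; lines beginning with "Inst" are the
# packages that would be upgraded.
COUNT=$(apt-get -s upgrade 2>/dev/null | grep -c '^Inst')

if [ "$COUNT" -gt 0 ]; then
    echo "$COUNT package update(s) pending on $(hostname)" | \
        mailx -s "Updates pending: $(hostname)" "$ADMIN_MAIL"
fi
```

Drop it in /etc/cron.daily/ (or call it from a crontab entry) and it runs on both release series without touching the login banner.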
I'm trying to set up a NAS for my network; the only problem is that I can't figure out how to do it. On my network I have about 3 computers, but I only use 2 of them, so I thought that maybe I could use the third one as a server for all of my files (school papers, music, etc.) that I could access 24/7 from the internet. The only problem is that I don't know where to get started. Both of the computers I'm currently using run Windows, and the one that I hopefully can turn into a server is running Ubuntu.
I have a home network set up that includes Windows XP and a local server. DNS resolution allows both machines to access the internet; I know this because Firefox works on the XP machine and yum works on the Fedora machine. I can see the server index page through Firefox by IP, but I can't see it by URL.
So, I have a website that is hosted at GoDaddy, and I've set up a database with the ability to access it remotely. I was wondering if there is an easy way to set up an automated backup of said database from GoDaddy to my local server.
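Since remote access is already enabled, one simple approach is a nightly mysqldump pulled from the local server via cron. A sketch; the host, user, database name and paths are all placeholders (the real hostname comes from the hosting control panel):

```shell
#!/bin/sh
# Pull a remote MySQL database down to the local server, one gzipped
# dump per day. All names below are placeholders.
DBHOST="mysql.example.com"
DBUSER="dbuser"
DBNAME="mydb"
OUTFILE="/var/backups/${DBNAME}-$(date +%F).sql.gz"

mysqldump -h "$DBHOST" -u "$DBUSER" -p'PASSWORD' "$DBNAME" | gzip > "$OUTFILE"

# Example crontab entry to run it at 03:00 daily:
#   0 3 * * * /usr/local/bin/db-backup.sh
```

Putting the password in the script is crude; a ~/.my.cnf with mode 600 holding the credentials is the usual cleaner option.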
I have a fast computer in my office, and I want the person using the slow computer in the same office to boot up, see the login window (GDM), log in from there to the fast computer, and use their session on the fast computer at the same time I am locally logged in to the fast computer as a different user with my own session. Is this best done through XDMCP? Where is a good tutorial on how to set this up?
I am trying to set up a web server to develop a website locally. I followed the server guide to install PHP and MySQL, but I get this error when setting up the MySQL password: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2). I'm a noob in Linux, so I don't know how to resolve it.
I have a ClarkConnect (CentOS-based) box running as my home router on a RR connection. I had the DNS servers set up to use Google's DNS. I want to change them back to the ISP's local DNS servers, but I can't find an obvious/easy way to get those addresses short of a) reconfiguring the router's network to DHCP them (I would rather not interrupt everyone) or b) calling their tech support (kill me now!). Is there a command-line tool I can use to query the DHCP server on the external NIC to see what DNS servers it would hand me, without munging my existing setup?
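Two tools can send a DHCP DISCOVER and print the offer without applying it, if either can be installed on the box; the interface name below is an assumption:

```shell
# Option 1: dhcpcd's test mode prints the offered options, including
# domain_name_servers, without reconfiguring the interface.
dhcpcd -T eth1        # eth1 = the external NIC (assumption)

# Option 2: nmap ships a broadcast DHCP probe script that does the same.
nmap --script broadcast-dhcp-discover -e eth1
```

Either way, the existing lease and running config are left alone, so nobody on the LAN is interrupted.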
I have a Postfix mail server on Ubuntu 10.04 LTS behind a router, and all local users fetch and send mail through MS Outlook using the local IP. Sometimes, when the internet connection goes down and a mail is sent, it bounces back immediately saying the domain was not found. Can you please tell me how to configure Postfix to hold all mail rather than bounce it when the internet fails, and to send it through once the connection is restored, say within 15-30 minutes?
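Two main.cf settings address this; a sketch, with the smarthost name being a placeholder:

```
# /etc/postfix/main.cf additions

# Treat permanent (5xx) failures as temporary (4xx), so mail is queued
# and retried instead of being bounced back to the sender.
soft_bounce = yes

# Optional: relay all outbound mail through the ISP's smarthost; the
# square brackets suppress the MX lookup, so a dead DNS does not
# immediately produce a "domain not found" failure.
relayhost = [smtp.isp.example.com]
```

Run `sudo postfix reload` after editing; queued mail is then retried on Postfix's normal schedule once connectivity returns. Note that `soft_bounce = yes` defers genuinely bad addresses too, so some admins enable it only as a safety net.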
I have a lab of 10 computers with IPs from 192.168.1.120 to 192.168.1.130; the server's IP is 192.168.1.116. When I am on a client computer, I type the server's IP address into the browser and it works. All I want is that, instead of entering my server's IP, I could just enter an address like example.lan.
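For 10 machines, the simplest fix is one line in each client's hosts file; for something centralized, a small DNS server such as dnsmasq on the server works. Both shown as a sketch (the name example.lan is from the post; everything else is standard):

```
# Per-client: add one line to the hosts file
# (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows):
192.168.1.116   example.lan

# Centralized alternative: install dnsmasq on 192.168.1.116 and add to
# /etc/dnsmasq.conf, then point the clients' DNS at 192.168.1.116:
address=/example.lan/192.168.1.116
```

The hosts-file route needs no extra software; the dnsmasq route means new machines only need their DNS server set.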
How do I set up a local mail server for internal testing of my PHP development work? For example, if I send an email from a PHP script to [url]...., I should be able to check the mailbox of 'someone' in either Outlook or SquirrelMail. I have done some reading about this; all I know is that I need Postfix with Courier, but I just don't know how to get it working.
How do I make a local mail server that is itself a client of a WAN mail server? I want the local mail server to fetch new mail from the WAN server every 30 minutes.
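This is the classic fetchmail arrangement: fetchmail runs as a daemon on the local server, polls the WAN server on a timer, and hands the mail to the local MTA for delivery. A sketch of ~/.fetchmailrc (hostnames, user and protocol are placeholders; the file must be chmod 600):

```
# ~/.fetchmailrc on the local mail server
set daemon 1800              # poll interval in seconds: 30 minutes

poll mail.example.com protocol pop3
    user "wanuser" password "secret"
    smtphost localhost       # deliver via the local MTA (e.g. Postfix)
    keep                     # optional: leave copies on the WAN server
```

Start it with plain `fetchmail`; in daemon mode it keeps running and re-polls every 1800 seconds.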
I have several file servers in our offices, and I am relatively new to Ubuntu/Linux. I get notices from time to time that there are updates for the server software. Is it typical to update everything when available, or should I follow an "if it ain't broke, don't fix it" mentality? I would hate for everything to be working fine and then have an update throw me a curve.
After the upgrade from 8.10 to 9.04, all was well. But after the upgrade from 9.04 to 9.10, I lost the MySQL server. Now, I recall during the upgrade, I was asked if I wanted to keep the existing my.cnf file or replace it with a newer one. I did as suggested and kept the original as I had edited it before. The same question was asked with a couple other config files. I kept the original in each case. After the first step, I checked the server was running and the websites were up, all was well. After the update to 9.10, when I checked the server, I get the following error:
Code:
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Can anyone point me in the right direction to getting this resolved?
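That error only says the client cannot reach the socket, which usually means mysqld never started. A first round of diagnostics (paths are the Ubuntu defaults; a my.cnf kept across the 9.04 to 9.10 upgrade pointing at stale paths is a plausible culprit here, but that's a guess):

```shell
# Is mysqld running at all?
ps aux | grep '[m]ysqld'

# Try to start it, then read the error log for the reason it dies.
sudo service mysql start          # or: sudo /etc/init.d/mysql start
tail -n 50 /var/log/mysql/error.log
```

If the log complains about paths or unknown options, diffing the kept my.cnf against the packaged version (the upgrade usually saves it alongside as my.cnf.dpkg-dist) is the next step.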
How do I upgrade from MySQL 5.1 to MySQL 5.5? Please guide me briefly. I did the upgrade with the help of this URL:
http://www.ovaistariq.net/490/a-step...-to-mysql-5-5/ but I got an error like this: error: 2002: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) when trying to connect. FATAL ERROR: Upgrade failed
In /etc/my.cnf it doesn't look like this. Is this the error?
user = mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
tmpdir = /tmp
log_error = /var/log/mysql/error.log
I'm the lead dev of GnackTrack, and we're having issues running MySQL on the LiveDVD. Once installed, everything works fine and MySQL can be connected to, but when using the LiveDVD we get the following error:
Code:
root@root:~# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
The contents of /etc/mysql/my.cnf point to /var/run/mysqld/mysqld.sock, but because this is a LiveDVD the actual file is located at: /rofs/var/run/mysqld/mysqld.sock
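If the socket really does live under /rofs in the live session, one possible workaround (a sketch based only on the paths in the error above, untested on GnackTrack itself) is to bridge the expected path to the real one, or to bypass my.cnf for a one-off connection:

```shell
# Make the path my.cnf expects point at the socket that actually exists.
mkdir -p /var/run/mysqld
ln -s /rofs/var/run/mysqld/mysqld.sock /var/run/mysqld/mysqld.sock

# Or connect once without touching anything:
mysql --socket=/rofs/var/run/mysqld/mysqld.sock
```

A more permanent fix would be adjusting the socket path in the my.cnf baked into the live image, so installed systems and the LiveDVD agree.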
In the office there is a local network with a Samba + OpenLDAP PDC. The local domain name is company.net. The company decided to create a corporate website on remote hosting, and decided that the site's domain should be company.net, the same as the local network's domain name. So now it is not possible to reach that corporate website from within the company's local network because, as I guess, the BIND9 installed on the above-mentioned PDC looks for company.net on a local web server. Is there a way to let people on this local network browse the remote site?
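Since the internal BIND9 is authoritative for company.net, it will never forward queries for that zone upstream; the usual fix is to add records inside the internal zone that point at the hosting provider's public IP (a split-horizon setup). A sketch; 203.0.113.10 is a placeholder for the real hosting IP:

```
; In the internal company.net zone file, point the website names
; at the external hosting IP (placeholder address):
www  IN A  203.0.113.10
```

Adding an A record at the zone apex (`@`) would also make bare company.net resolve to the hosting IP, but that can collide with services the PDC itself provides under that name, so starting with just www (and bumping the SOA serial, then reloading BIND) is the safer move.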
I don't understand this error nor do I know how to solve the issue that is causing the error. Anyone care to comment?
Quote:
Error: Caching enabled but no local cache of //var/cache/yum/updates-newkey/filelists.sqlite.bz2 from updates-newkey
I know JohnVV. "Install a supported version of Fedora, like Fedora 11". This is on a box that has all 11 releases of Fedora installed. It's a toy and I like to play around with it.
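That message typically appears when yum is run in cache-only mode (the `-C` flag) but the local cache was never populated. Either drop `-C`, or rebuild the cache while the box has network access; a sketch:

```shell
# Discard the stale/partial metadata the cache-only run tripped over,
# then re-download it (run as root):
yum clean metadata
yum makecache
```

After that, cache-only runs have a real filelists.sqlite to read. On an EOL release like Fedora 9/10, the repo URLs themselves may also need repointing at the archive mirrors before makecache can succeed.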
I've been using Ubuntu 10.10 (Maverick) for some time now as the OS for a home NAS box, along with Webmin. It's been great, and I've been able to SSH into the box without any issues in the past. Last night, I ran apt-get update and apt-get upgrade to bring my system more current.
Now I can't seem to log in over SSH any longer, and I get a "Connection failed, Windows error 10061" error. I'm going to upgrade the OS to 11 as well, in hopes things fix themselves.
I don't have much experience in Linux administration, but I needed to create a simple WWW server. It's installed on a dedicated machine running Ubuntu Server 10.04.1 x64. I've got a problem with creating and setting up an FTP account to access /var/www. I've read that I should not change the owner or the access permissions of that folder. I've also tried changing the /var/www permissions, but even when I had access I could not modify, overwrite or delete the files which I'd uploaded (I could upload and download/view them). Currently it seems to work, but I've chown'ed /var/www to my wwwuser account (as I remember, it was originally owned by root). It works, but wwwuser must have a shell assigned (/bin/sh or /bin/bash); without it (/bin/false) he cannot log on to the FTP account. So here goes my question: who should own /var/www, what permissions should this folder have, and how do I make it work using vsftpd? And what is local_umask=022 in vsftpd.conf?
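On the umask question: local_umask=022 is subtracted from the default creation modes, so uploaded files arrive as 644 and new directories as 755 (owner-writable, group/world readable). For the ownership question, one common arrangement is group-based write access rather than handing the whole tree to the FTP user; a sketch, assuming the FTP user is "wwwuser" and Apache runs as www-data (the Ubuntu default):

```shell
# Put the FTP user in Apache's group, keep root as the owner,
# and make group write access inherit down the tree.
sudo adduser wwwuser www-data
sudo chown -R root:www-data /var/www
sudo chmod -R g+w /var/www
sudo find /var/www -type d -exec chmod g+s {} \;   # new files keep the group
```

With local_umask=002 in vsftpd.conf instead of 022, uploads then come in group-writable too, so both Apache's tools and the FTP user can manage the same files. For the shell issue, listing /bin/false in /etc/shells (or using vsftpd's own chroot login, which does not require a real shell) is the usual workaround.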
I'm running Ubuntu 10.04 LTS. Last night I was doing some admin on this box; it's running Apache and ASSP for spam filtering. Once I finished, I started some updates. I checked for updates and applied them, but fell asleep, and this morning my session had timed out at a "continue" prompt. I reconnected and saw a message stating a reboot was required. I've rebooted, and my usual services are running, e.g. Apache and ASSP: I can view pages on Apache and the admin page for ASSP. I'm remote from the system, so I'm connecting over the internet, and when I try to connect, it fails.
This is quite urgent, though at least my services are working. Still, I'm not happy that I can't access the system myself. I don't know if this is my own fault for leaving the updates unattended, or if an update caused the problem. Thanks.
When I log into my Ubuntu server via SSH, it usually gives me the number of available updates. But a while ago it started saying this multiple times, with different numbers. I've tried installing the updates, but one of the messages always says there are updates available, and its numbers seem static and never change.
Example:
Code:
login as: user
user@domain's password:
Linux Inceptor 2.6.32-31-generic #61-Ubuntu SMP Fri Apr 8 18:25:51 UTC 2011 x86_64 GNU/Linux
Ubuntu 10.04.2 LTS
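On 10.04 that banner is assembled from the scripts in /etc/update-motd.d/, and a common cause of a duplicated, never-changing line is old banner text left behind in /etc/motd.tail (which gets appended verbatim). A sketch for comparing what the checker actually reports against the stale line:

```shell
# apt-check ships with update-notifier-common and produces the
# "N packages can be updated" text the banner uses.
/usr/lib/update-notifier/apt-check --human-readable

# Compare with a real simulated upgrade count:
sudo apt-get update && apt-get -s upgrade | grep -c '^Inst'

# Then inspect motd.tail for a stale copy of the message:
cat /etc/motd.tail
```

If the frozen line turns up in /etc/motd.tail, deleting it there (and logging in again so the motd regenerates) should leave only the live count.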