Fedora :: Get BackupPC On A F12 64 Bit Machine As The Server?
May 4, 2010
I am trying to get BackupPC going on a Fedora 12 64-bit machine as the server. I have Apache running and can see the test page. I can log on to the BackupPC web page at http://127.0.0.1/BackupPC, but that is about as far as it goes. The web page does not have anything on it that I can configure, or any useful information. A number of menu items are missing: Admin Options, LOG file, Old Logs, Email summary, Config file, Host file, and Current queues are all absent.
This is according to the article I have showing a BackupPC web page screen. I have edited the /etc/hosts file to include the localhost and the backup server IP address. I have also added these to the /etc/BackupPC/hosts file. The web page that is displayed not only has the missing menu items, but the colour of the highlighted menu item is blue, not green as in the typical pictures I have.
I have tried to use the article "Back Up Linux and Windows Systems with BackupPC" from HowtoForge as the basis. As I indicated, I can get the web page up but cannot do anything. Clearly I have missed something.
Can anyone assist me in getting this going? Is there a definitive article somewhere that sets up BackupPC on Fedora 12?
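I can't tell from the post whether this is the cause, but the admin-only menu entries (Admin Options, LOG file, Config file, and so on) normally appear only for users listed in $Conf{CgiAdminUsers}. A minimal sketch, assuming the Fedora config lives at /etc/BackupPC/config.pl and the CGI login name is "backuppc" (both assumptions); the demo below edits a scratch copy, so it is safe to run anywhere:

```shell
# Demo on a scratch file; on a real system you would edit
# /etc/BackupPC/config.pl (path assumed) and restart the backuppc service.
conf=$(mktemp)
printf '%s\n' '$Conf{CgiAdminUsers} = "";' > "$conf"
# Grant CGI admin rights to the web user "backuppc" (name is an assumption).
sed -i 's/^\$Conf{CgiAdminUsers}.*/$Conf{CgiAdminUsers} = "backuppc";/' "$conf"
grep CgiAdminUsers "$conf"
```

The user name granted here must match the name you log in with via htpasswd, otherwise the reduced non-admin menu is exactly what the CGI shows.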
Here is the thing. I've deployed BackupPC on a server at work, and everything is working fine. Now, here is what I need to do. I have a remote server with a website and a Postgres DB running. I've been able to set everything up to be backed up using rsync. But I would like the same process to restore the backup immediately, through rsync, to a local server that has to be ready in case the remote server fails. What I've tried to do is run a DumpPostUserCmd, so that whenever a dump is performed on the remote server, my local server gets updated. What I've found out is that when you perform a restore through the web interface, the command that performs it is (using ps ax on the BackupPC server):
So, what I could work out is: /usr/bin/rsync is the command that the process runs on the server where I want to put the restore. Then the options, and finally, the source and destination of the restore. But, as you can see, the source is '.' and I cannot guess where that points!
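One workaround seen in similar setups (a sketch, not a confirmed fix: the host name "webhost", dump number -1 for the most recent backup, and the standby paths are all placeholders) is to sidestep the CGI's internal rsync invocation entirely and stream the latest dump with BackupPC_tarCreate:

```shell
# Printed as a dry run so nothing touches a live server; drop the echo to
# use it for real. Flags per the BackupPC_tarCreate man page: -h host,
# -n -1 (latest dump), -s share name, then the file list ('.').
cmd="sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_tarCreate -h webhost -n -1 -s / . | ssh standby-server 'tar -xpf - -C /srv/restore'"
echo "$cmd"
```

Run from a DumpPostUserCmd this pushes the freshly finished dump to the standby machine without having to reverse-engineer what rsync's '.' resolves to inside the pool.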
I now have BackupPC up and running on Ubuntu 9.10. I have a couple more issues. These relate to the fact that I am trying to use BackupPC not only to back up files, but also to provide a central, up-to-date store of network-accessible files, so that, for example, media files can be viewed over the network rather than downloaded through the web interface. Thus I have the following problems:
1. Access the files through the network. I want to have /var/lib/backuppc as a browsable folder on the network. I have managed to change permissions on the folders so they are browsable, but I need to set the file permissions so I can open the files. How do I do this, and how do I set it so that all future backup files are given accessible permissions?
2. A single store of files. I want one folder with all the incremental backups copied into it, so that to see the most up-to-date version of a file I just browse to it in Nautilus, or through shared folders from another PC on the network. At the moment I have a 0 and a 1 folder, and I am sure I will end up with loads more. I just want a single folder, no matter how many times the backup is run.
3. Change the naming convention. Also, is there a way to change the naming convention so the folders are given their normal names (so Documents rather than fDocuments)?
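On problem 1, a sketch of the permission change (demonstrated on a throwaway tree so it can be run anywhere; on the real server the target would be /var/lib/backuppc, and note that opening it up weakens BackupPC's security model):

```shell
# Build a throwaway tree that mimics BackupPC's restrictive permissions.
root=$(mktemp -d)
mkdir -p "$root/pc/host1"
touch "$root/pc/host1/fattrib"
chmod -R u=rwX,go= "$root"     # simulate owner-only perms, like the pool
# Open directories for traversal and files for reading, group-wide.
find "$root" -type d -exec chmod g+rx {} +
find "$root" -type f -exec chmod g+r {} +
stat -c '%a %n' "$root/pc/host1/fattrib"   # file mode is now 640
```

For future backups, the BackupPC config has a $Conf{UmaskMode} setting that controls the permissions of newly written files; relaxing it (e.g. to 022) is worth checking against your version's documentation, as the default keeps the pool private to the backuppc user.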
So I'm setting up BackupPC but do not want Apache to run as the backuppc user. To get round this I need to set up suEXEC so that CGI scripts are run as the backuppc user. This seems fine and I do have the module loaded.
1. I have configured my config files as described here. 2. I have read that DOC_ROOT for suEXEC is set to /var/www; I need to change this to /home/www. As a quick fix I have a symbolic link from /var/www to /home/www.
3. To confirm what DOC_ROOT is, and to check where the log file will be, as suggested on many sites I run "/usr/sbin/suexec -V", but I get nothing back; it does not list any config.
4. Group and owner for "/usr/share/BackupPC" is backuppc.
I'm trying to set up a central backup server with BackupPC installed on CentOS 5.6 x86_64. My CentOS has Samba3x / Winbind integrated with Active Directory. I found this nice wiki, http://wiki.centos.org/HowTos/BackupPC, to get BackupPC installed. After installing RPMForge's repo and setting up the priorities of the repos (http://wiki.centos.org/PackageManagement/Yum/Priorities), I get the following error about Samba3x conflicts. I don't want to mess up my Samba configuration to install BackupPC, and even the --skip-broken option does not work for me.
I need to access a Windows Server 2000 machine from a Linux machine via KDE (which will later migrate to GNOME). When the Linux user connects to the Windows machine, an application 'XYZ' should open automatically, and only that application, denying any unauthorized access. When application 'XYZ' is closed, the connection (RDP?) should be terminated. I also need a log of accesses and of any attempts to circumvent the system and access other applications.
I have been struggling to get BackupPC running and finally today figured out what I was doing wrong. BackupPC is running on my Ubuntu 10.04 server. I have it backing up my Mac, but I can't get it to work on my Ubuntu 10.04 desktop. I followed the same steps I used to get it working on my Mac, so I can't really see why there is a discrepancy. This is the error log I'm getting now:
Code:
full backup started for directory /
Running: /usr/bin/ssh -q -x -l root 192.168.1.120 /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /
Xfer PIDs are now 14404
Read EOF: Connection reset by peer
Tried again: got 0 bytes
Done: 0 files, 0 bytes
Got fatal error during xfer (Unable to read 4 bytes)
Backup aborted (Unable to read 4 bytes)
Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)
The dreaded 4-bytes error. I was getting that problem before on my Mac, but from what I've managed to find, this error usually shows up when you don't have SSH keys set up properly. Once I got that figured out, it started working on my Mac. I can confirm that that part is set up, because when I execute the following as the backuppc user on the server:
BackupPC usually just works. It backs up the localhost and another PC, both running Debian Unstable. However, it stopped backing up the remote machine after the 22nd of March. This correlates with updating OpenSSH. All I get is "Unable to read 4 bytes from Server". As suggested on the BackupPC website, I ran
Code: sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -v -f backupclient
I was asked for the sudo password and then for a password for each directory that was to be backed up. The backuppc password was not accepted; the root password was. Could somebody point me towards a solution? Do I have to recreate the SSH keys?
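For the record, the usual first check for the "Unable to read 4 bytes" class of error is the key-based login test from the BackupPC documentation; a dry-run sketch, with "backupclient" as a placeholder host name:

```shell
# Printed rather than executed, since it needs the real server.
check='sudo -u backuppc /usr/bin/ssh -q -x -l root backupclient whoami'
echo "$check"
# On a healthy setup this prints "root" with no password prompt. A password
# prompt means the backuppc user's key is no longer accepted (an OpenSSH
# upgrade can deprecate old key types, for example), so regenerate the key
# pair and append the new public key to /root/.ssh/authorized_keys on the
# client.
```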
I'm trying to set up BackupPC for access through Apache. I followed this guide: [URL]. Apache2 is running correctly on my PC. I used this command: htpasswd /etc/backuppc/htpasswd user and, as the guide says, I tried to access http://localhost/backuppc, but I get nothing there. Is there anything else I should do to access BackupPC?
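A common gotcha with that setup is that nothing is actually serving /backuppc. A minimal Apache fragment for it looks like the following (a sketch: the paths assume the Debian/Ubuntu package layout, so adjust to your install):

```apache
Alias /backuppc /usr/share/backuppc/cgi-bin/

<Directory /usr/share/backuppc/cgi-bin/>
    AllowOverride None
    Options ExecCGI FollowSymLinks
    AddHandler cgi-script .cgi
    DirectoryIndex index.cgi
    AuthType Basic
    AuthName "BackupPC admin"
    AuthUserFile /etc/backuppc/htpasswd
    Require valid-user
</Directory>
```

Without the Alias and the CGI handler, http://localhost/backuppc returns nothing even though the htpasswd file exists; remember to reload Apache after adding it.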
In a moment of temporary insanity I tried to install BackupPC using the Synaptic package manager, but started to panic when the installer tried to set up an Apache web server on my PC (which I do not want). At that point I tried to cancel the installation, but got an error message. I then marked the package for complete removal in the package manager, but every time I click Apply I get an error message, "E: backuppc: subprocess pre-removal script returned error exit status 2", and the package, which did not install correctly in the first place, refuses to go. When I ran the update manager today to install the latest updates, it tried to continue with the Apache stage of the BackupPC installation. How do I get rid of it?
"E: backuppc: subprocess installed post-installation script returned error exit status 1"whenever I upgrade my system I get the above message Anyone know how to fix it? or if it's anything important ?
I'm making a clever backup system based on NFS and rsync. Basically, I export folders from the clients to a backup server, and the backup server processes them and makes backups. The backup server mounts the folders during startup, but if a client restarts, then I guess the export would be unmounted from my backup server, right? What can I do to make it automount the folder whenever the client comes back up again? All the clients are static servers with little interference, with no risk of external people tampering with them and without internet access. Security is not an issue, and any kind of shady compromising scripts will do. However, installing software on them is tricky, as I have to download package by package and transfer them via USB manually.
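Since installing software on the clients is hard, one server-side approach is a cron-driven remount check. A sketch (client name and paths are placeholders; the script is only written to /tmp here so the example is safe to run anywhere):

```shell
# Write a small remount check, to be run from cron every few minutes.
cat > /tmp/remount-backup-nfs.sh <<'EOF'
#!/bin/sh
# If the export dropped because the client rebooted, mount it again.
mountpoint -q /mnt/backup/client1 || \
    mount -t nfs client1:/srv/export /mnt/backup/client1
EOF
chmod +x /tmp/remount-backup-nfs.sh
ls -l /tmp/remount-backup-nfs.sh
# root crontab entry:  */5 * * * * /tmp/remount-backup-nfs.sh
```

The cleaner alternative on the backup server itself is autofs, which mounts each export on demand and retries automatically, with no script at all and nothing to install on the clients.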
Has anyone got experience connecting a Linux machine to a Microsoft VPN server using RSA authentication? What puzzles me perhaps most about this topic is the absolute dearth of information. If it is not possible, can anyone tell me why?
I'm trying to set up and configure a server entirely in text-only runlevel 3 on a virtual machine, so I can redo my current live server. I'm now trying to set up the firewall of the system using iptables. I've read up on it and came up with the following:
- clear all rules
# iptables -F
- set default policy rules
# iptables --policy INPUT DROP
# iptables --policy FORWARD DROP
[Code]....
Everything above worked for me but just out of interest I looked at my live server which was configured using a GUI. I ran iptables-save and it was pretty much the same but its port open lines read like this:
#iptables -A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT
So finally, my question is: do I really need the "-m state --state NEW"? Wouldn't having that drop established connections on those ports? I'm just confused as to what exactly the NEW state is doing, and whether it would make a difference if I didn't include it.
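As a point of comparison, the usual stateful pattern pairs the NEW rule with a blanket ESTABLISHED,RELATED accept, which is why NEW does not drop established traffic; a dry-run sketch (the rules are printed, not applied, since iptables needs root):

```shell
# NEW matches only the first packet of a connection; all reply traffic is
# then matched by the ESTABLISHED,RELATED rule, so nothing established on
# the port is ever dropped by the NEW rule.
rules='iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW --dport 80 -j ACCEPT'
echo "$rules"
```

Leaving out "-m state --state NEW" also works (the rule then accepts port-80 packets in any state); including it is just more precise, and makes it possible to later block fresh inbound connections without cutting existing ones.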
In Ubuntu I can easily transfer packages from an online machine to an offline machine using the APTonCD feature. Is there anything similar in Fedora by which I can transfer the packages of an online machine to an offline machine?
I have an issue with the manner in which NetworkManager configures the network and, short of ditching NetworkManager, I can see no solution. The issue: getting a machine to update its machine name in the DNS server. Sounds simple, doesn't it? I operate a FreeBSD-based firewall / DHCP / DNS server; using a default NetworkManager DHCP configuration, the Fedora clients do not register their names with the DNS server when they obtain an address.
I have traced the communications with Wireshark, and the Fedora clients are NOT supplying the PC's hostname as part of the exchange, so this is NOT a DNS server configuration issue. If I uncheck the option 'Automatically obtain DNS information from provider' under the DHCP settings, the Fedora clients DO register the hostname that is put into the Hostname (optional) box. They do NOT, however, store the DNS server IP address or any other records defined by the DNS server.
Are there some hidden settings, or is this a bug? It isn't acceptable DHCP behaviour if it isn't possible to automatically set the DNS server IP addresses and at the same time register the hostname during the DHCP negotiation. Before anyone suggests it: I know I can use a fixed DNS IP address, but I am not prepared to long term; I am also not prepared to give the Fedora clients a static IP. I am similarly not interested in playing around with scripts or other such 'frigs' to achieve what should be a standard activity: registering a host with DNS during the DHCP negotiation.
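One thing worth checking (an assumption about the setup, since NetworkManager of that era hands DHCP off to dhclient on Fedora): ISC dhclient only includes the hostname in its request when told to, via a send host-name line:

```
# /etc/dhcp/dhclient.conf (path assumed for Fedora 12)
send host-name "myfedorabox";   # or, on newer dhclient: send host-name = gethostname();
```

NetworkManager's ifcfg files also accept a DHCP_HOSTNAME=myfedorabox line for the same purpose, which may explain why the hostname is only sent when it is typed into the Hostname (optional) box.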
I have been trying to SCP a couple of files from my Ubuntu 10.10 machine to a Fedora 12 machine. Before today it worked every time, without any problems. Today, however, after the SCP completes from my machine, the file on the other machine is zero bytes: an empty file. The only thing I can remember changing was the new kernel in the update I did today, but I don't see how that would have changed how SCP works.
I want to make my machine PXE boot Windows from another machine running RHEL 5.2. I know the procedure to PXE boot Linux, but I want to know whether it is possible to PXE boot a client machine with Windows XP.
I am having problems gaining access to the BackupPC admin web page. The error:
Quote:
[Tue Feb 22 16:43:59 2011] [error] [client 192.168.0.2] (13)Permission denied: Could not open password file: /etc/BackupPC/htpasswd
[Tue Feb 22 16:43:59 2011] [error] [client 82.30.227.113] access to /backuppc failed, reason: verification of user id 'myhtaccessuser' not configured
Why is this occurring, and is there any way of getting around it? I just can't remember how I originally set this up on my old system.
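Reading the two log lines: "(13)Permission denied" usually means the Apache user cannot read /etc/BackupPC/htpasswd, and the "not configured" line means no AuthUserFile/Require block covers /backuppc. A sketch of the permission half, demonstrated on a scratch file (the Fedora web-server user is typically "apache", which is an assumption to verify with ps aux | grep httpd):

```shell
f=$(mktemp)        # stands in for /etc/BackupPC/htpasswd
chmod 640 "$f"     # owner read/write, group read, no access for others
stat -c '%a' "$f"  # prints: 640
# On the real system (requires root):
#   chown backuppc:apache /etc/BackupPC/htpasswd
#   chmod 640 /etc/BackupPC/htpasswd
```

With the file group-readable by the web server, the first error goes away; the second needs an AuthUserFile pointing at that file plus a Require directive in the Apache config for /backuppc.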
I am trying to sync file server data to a backup server machine with the command rsync -avu path/of/data ipaddress-of-backup-server:/path/where/to/save. After running, it asks for the root password, and done manually it is successful, but I want to make it automatic. For that I also tried a cron job and generated an authentication key, but I have not succeeded in logging in automatically. Does anybody know how to authenticate root to log in for storing data on the backup server?
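The standard route is key-based SSH authentication, so cron's rsync never prompts for a password. A sketch with placeholder host and paths (the key is generated in a temp dir here so the example can run anywhere; on the real machine it would live in the calling user's ~/.ssh):

```shell
keydir=$(mktemp -d)
if command -v ssh-keygen >/dev/null 2>&1; then
    # Empty passphrase so cron can use the key unattended
    # (use -t rsa instead of ed25519 on older OpenSSH).
    ssh-keygen -q -t ed25519 -N "" -f "$keydir/backup_key"
    ls "$keydir"
fi
# One-time copy of the public key to the backup server (placeholder host):
#   ssh-copy-id -i "$keydir/backup_key.pub" root@backup-server
# Cron entry, nightly at 02:00:
#   0 2 * * * rsync -avu -e "ssh -i $keydir/backup_key" path/of/data backup-server:/path/where/to/save
```

A dedicated non-root account on the backup server, or a key restricted with a command= prefix in authorized_keys, is safer than an unattended root key if that is an option.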
I have installed CentOS in VMware Workstation, and on that CentOS I also installed VMware Server (successfully) and set up a guest OS there. But when I start this guest OS, an error appears: "You may not power on a virtual machine in a virtual machine"...
I want to set up the following servers in openSUSE: DHCP, OpenLDAP, and NFS (to allow users to mount their home directories from the server). I started off with the OpenLDAP server. I configured it with dc=localdomain,dc=local as its domain, as the server machine has no internet. But when I go to add an .ldif file with the following command
Code: ldapadd -x -D 'cn=Administrator,dc=localdomain,dc=local' -f /home/base.ldif -W
it returns this
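For reference, a minimal base.ldif for that suffix might look like the following (attribute values are illustrative; a mismatch between the file's dn and the configured suffix, or a wrong -D bind DN or password, are the usual causes of ldapadd errors here):

```
dn: dc=localdomain,dc=local
objectClass: top
objectClass: dcObject
objectClass: organization
o: localdomain
dc: localdomain

dn: cn=Administrator,dc=localdomain,dc=local
objectClass: organizationalRole
cn: Administrator
```

The base entry must be added before any entries underneath it, and the rootdn/rootpw in slapd.conf must match what is passed to -D and -W.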
I've got 4 identical 1 TB drives and would like to use them in a software RAID configuration on my home server. I'm running Debian Linux, using the mdadm utility to manage the software RAID. I don't know how much of what I've read is fact, dated, or even false, so I decided I would ask here to get help from people who know more about this than I do. This is essentially just a file server machine to store all my data, so given that I've got four identical SATA hard drives, I was thinking about doing RAID level 5. I guess I'll start here and ask whether that is the recommended level of RAID; I think RAID level 5 will be fine for my general server usage. My second issue is partitioning the four individual drives to get maximum performance and space from them. Basically I'm asking how you would recommend I partition the drives. I was thinking about three separate partitions per drive:
/dev/sda1 = 4 GB (swap), /dev/sda2 = 1 GB (/boot), /dev/sda3 = 995 GB (/). Now from that partition scheme, obviously all the types will be 'fd' for RAID, and the partition for /boot is going to be bootable. My confusion is that I read GRUB doesn't support booting from RAID 5, since GRUB can't handle disk assembly. If /dev/sdx2 (sda2, sdb2, sdc2, sdd2) are partitioned for /boot (bootable), how would you configure this RAID to match up? I can't just do a RAID level 1 across 4 identical partitions, right?
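A common answer to the GRUB limitation (a sketch under the partition scheme above, not a definitive layout) is RAID1 for /boot across all four second partitions, since legacy GRUB can read a RAID1 member as if it were a plain filesystem, and RAID5 across the third partitions for /. A 4-way RAID1 is perfectly legal; it just means four mirrored copies. Printed as a dry run because mdadm needs root and the real disks:

```shell
# Dry run: print the mdadm commands rather than executing them.
cmds='mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3'
echo "$cmds"
# md0 becomes /boot (RAID1), md1 becomes / (RAID5, ~3 TB usable from 4x1 TB).
# The four sdX1 partitions can remain plain swap areas with equal priority.
```

GRUB then gets installed to the MBR of each drive so the machine can still boot if any single disk fails.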
I have configured a Samba server on a Fedora machine and I am trying to authenticate a WinXP machine through the Samba server, but the issue is that the WinXP machine does not become part of the domain. The error is: "A domain controller for the domain HOMEDOMAIN could not be contacted. Ensure that the domain name is typed correctly. If the name is correct, click Details for troubleshooting information."
Here is the configuration file text:
# Samba config file created using SWAT
# from UNKNOWN (8)
# Date: 2010/01/31 18:51:36
[global]
workgroup = HOMEDOMAIN
server string = Samba as Domain Controller.
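For comparison, "a domain controller could not be contacted" typically points at a [global] section that lacks the domain-controller settings. A hedged sketch of the lines a Samba 3 NT4-style PDC usually carries in addition to the two above (values are illustrative, and the XP client also needs a machine account on the Samba side):

```
[global]
    workgroup = HOMEDOMAIN
    server string = Samba as Domain Controller.
    security = user
    domain logons = yes
    domain master = yes
    preferred master = yes
    os level = 65
    logon path =

[netlogon]
    path = /var/lib/samba/netlogon
    read only = yes
```

Without "domain logons = yes" Samba never announces itself as a domain controller, so the XP join dialog cannot find HOMEDOMAIN no matter how the name is typed.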