Debian Configuration :: Exclude Directories From Bootstrap Copy?
May 24, 2011
I have successfully created an ISO of my currently running system using live-build with the --bootstrap copy option. As expected, the image is gigantic. I would like to use live-build to create copy-of-host ISOs, but with options to exclude specific paths (e.g. music folders, picture folders, etc.). Is there a way to do this? I did run a configuration and build using an option similar to tar's (something like --exclude=/home/user/music) and it ran through without any apparent errors; however, there was no ISO image to be found.
I'm trying to create a backup/archive of my Ubuntu 10.04 system files (so I can restore them in case my system gets corrupted). More specifically, I'm trying to zip the important files in my root directory, not including my home directory (which holds my documents, which I back up separately and more frequently), to an external hard drive attached via USB (called 'My Book'). Since File Roller didn't give me quite the level of control I was looking for, I created a script that I could execute to back up and archive regularly.
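As a rough sketch of what such a script could look like, assuming tar is acceptable in place of zip and that the drive is mounted at /media/My Book (the mount point and exclude list are examples, not taken from the post):

Code:
#!/bin/bash
# back up the root filesystem, excluding /home and volatile paths,
# to the external drive (mount point is an assumption)
DEST="/media/My Book/system-backup-$(date +%F).tar.gz"

sudo tar czpf "$DEST" \
    --exclude=/home \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/tmp --exclude=/media --exclude=/mnt \
    /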
I'm trying to configure auditd to monitor "strange" events with the apache2 web server on Wheezy (though the same problem occurs on Jessie). I've tried both with the "vanilla" 3.2 kernel and with the 3.16 backports kernel I am actually using.
Here are the auditd rules I have a problem with:
Code:
-a exit,never -F arch=b64 -S stat -F path=/var/www/server-status -k web
-a exit,always -F arch=b64 -S stat -F uid=www-data -F success=0 -k web
So to recap: I want to log stat syscall failures for the www-data user, but exclude some "known" issues, such as "/var/www/server-status" (after a2enmod status, the /server-status path can be accessed for statistics, though apache2 still tries to find a physical file for that path and fails).
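One thing that may be worth trying, as a sketch rather than a confirmed fix: audit exit rules are checked in the order they are loaded, so a "never" rule is normally placed before the "always" rule, and it may need to match the same fields (here the uid) as the events it is supposed to suppress:

Code:
# a sketch: suppress the known server-status failure first, then log the rest
-a exit,never -F arch=b64 -S stat -F uid=www-data -F path=/var/www/server-status -k web
-a exit,always -F arch=b64 -S stat -F uid=www-data -F success=0 -k web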
I am trying to exclude multiple directories when using tar. I can do it for just one directory with --exclude=directory. I can also do it for multiple directories by typing that option again and again. As you can see, I'm trying to pass a variable that holds an arbitrary number of directories separated by spaces, but when run it doesn't work! It will, however, work if I just put one directory in the variable. Any ideas?
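A likely explanation, sketched out: a single --exclude="$VAR" passes the whole space-separated list as one pattern, so each directory needs its own --exclude flag. A bash array can build those flags (the directory names and archive path below are examples):

Code:
#!/bin/bash
# build one --exclude flag per directory, then pass them all to tar
EXCLUDES=(/home/user/music /home/user/pictures /home/user/videos)

args=()
for dir in "${EXCLUDES[@]}"; do
    args+=(--exclude="$dir")
done

tar czf backup.tar.gz "${args[@]}" /home/user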
I found a script on WebmasterWorld that mostly does what I need it to, and I have been making modifications to tailor it to my specific needs. I know that //..*/ tells awk to ignore hidden directories; how do I define more directories to ignore (i.e. temp, var, etc.)? I've tried playing with prune before the awk command with limited success. I know that there are many ways to do the same thing, and I keep running into brick walls.
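If prune is an option, here is a sketch of pruning several directories in find before the output ever reaches awk; the starting path and directory names are examples, not taken from the original script:

Code:
# skip hidden directories plus temp and var, print everything else
find /var/www \( -name '.*' -o -name temp -o -name var \) -type d -prune -o -type f -print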
In Midnight Commander, is it possible to exclude some directories/patterns/... when doing a search (M-?)? I'm specifically interested in skipping the .hg subdirectory.
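This depends on the mc version, and the exact key name is an assumption on my part, but newer releases appear to support an ignore list for the Find File dialog set in the ini file, roughly like this:

Code:
# in ~/.config/mc/ini (older versions: ~/.mc/ini) --
# a colon-separated list of directories the M-? search skips
[Misc]
find_ignore_dirs=.hg:.git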
I'm running proftpd 1.3.3a standalone on Squeeze with a MySQL backend. It all works fine, with one little problem: when I log in to a directory, there is nothing in it, even though I know there is a shedload of files. Directories are simply not being listed. I'm logging in as a group member and the directory permissions are rwxr-xr-x. In addition, when I try to create a new directory the connection is lost and the client attempts to reconnect. After a successful reconnection, the connection is lost again, over and over, until I abort.
[code]...
I find "Failed binding to 0.0.0.0, port 201: Address already in use. Check the ServerType directive to ensure you are configured correctly."
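That error usually means something else (often inetd/xinetd, or a second proftpd instance) is already holding the port, which could also explain the dropped data connections. A sketch of how the conflict might be checked, assuming standard tools are installed:

Code:
# see what is already listening on the port proftpd complains about
netstat -tlnp | grep :201
# and check whether inetd/xinetd is also configured to run an ftp service
grep -R ftp /etc/inetd.conf /etc/xinetd.d/ 2>/dev/null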
I am setting up a mail server on Debian. Once it's done, I'd like to have an identical server on another machine, where Debian will also be installed. The solution has to be hardware agnostic, since the source machine is different from the destination machine. I was reading on a wiki page that one can simply copy the root filesystem via rsync to the computer one would like to install the system on, then chroot into it and test whether everything works. I'm guessing I'd have to change a couple of things before that, like:
- The network config
- The /etc/fstab file (disks and partitions may be different)
This article is about using rsync to transfer a copy of your "/" tree, excluding a few select folders. This approach is considered to be better than disk cloning with dd since it allows for a different size, partition table and filesystem to be used, and better than copying with cp -a as well, because it allows greater control over file permissions, attributes, Access Control Lists (ACLs) and extended attributes. [1]
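As a sketch of the kind of rsync invocation the article describes, assuming the destination root is mounted at /mnt/target (the mount point and exclude list are examples):

Code:
# copy the whole / tree, preserving ACLs and extended attributes,
# while skipping virtual filesystems and mount points
rsync -aAXHv --numeric-ids \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
    --exclude=/tmp --exclude=/mnt --exclude=/media --exclude=/lost+found \
    / /mnt/target/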
I have been searching for a solution to the following problem:
When my distro of choice updates the Firefox web browser, the directory name is '/usr/lib/firefox-<version>'. The problem here is that the directory name is dynamic by nature and doesn't allow a simple static solution, e.g. 'cp -rf /usr/local/files/bookmarks.html /usr/lib/firefox/defaults/profile'.
The same quandary applies when adding extensions, changing prefs, etc. I have looked at the following commands: find, sed, xargs, grep, awk, fprint. Unfortunately my grasp of syntax and programming is very basic at best.
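A sketch of one way to resolve the versioned directory at run time before copying; the bookmarks source path is taken from the post, the rest is an example:

Code:
#!/bin/bash
# pick the newest /usr/lib/firefox-* directory, then copy the bookmarks into it
FFDIR=$(find /usr/lib -maxdepth 1 -type d -name 'firefox-*' | sort -V | tail -n 1)

if [ -n "$FFDIR" ]; then
    cp -f /usr/local/files/bookmarks.html "$FFDIR/defaults/profile/"
fi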
I have 2 massive duplicate dirs of the same format as below:

dir1
  subdir1
    file1
  subdir2
    file1
  subdir3
    file1
  ...
Dir2 is the same, but it has some newer files of the same name. I want to copy all the file1's from Dir2 to the same names and folders in dir1. So basically something like: cp -pr bkpDir1/*/*-big.gif Dir2/*/*-big.gif
This works for singular cases: cp -pr bkpDir1/uniquesubdir/*-big.gif Dir2/uniquesubdir/*-big.gif
But not for wildcards: cp -pr bkpDir1/subdir*/*-big.gif Dir2/subdir*/*-big.gif
Anyway, the aim is to do the first cp above. I have tried a few options using find. While trying to put together an example I stumbled upon a way that worked, run from inside dir2: find */*-big.gif | xargs -i cp -rp {} ../dir1/{} I'm sure there are better ways as well...
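For what it's worth, here is a sketch of a slightly more robust variant using GNU cp's --parents option, which recreates the subdirectory part of each path under the destination (run from inside dir2; assumes GNU coreutils):

Code:
# copy every *-big.gif into the matching subdirectory of ../dir1,
# creating the subdirectories if they do not exist yet
find . -name '*-big.gif' -exec cp -rp --parents {} ../dir1/ \;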
I have a question which has in part been answered many times, but nothing I found relates completely to my situation. I am sure there will be people who will say RTFM, but believe me, I did, and searched as well, to no avail. I have a situation where I want to copy files created within the last hour in one directory into another one. The problem is that the directories are on different levels in the directory tree, so the absolute path is different. But I want to keep the relative path the same.
I want to copy new files from /mnt/path_to_webdav/user to /home/user. So if there is a new file /mnt/path_to_webdav/user/doc/xy.txt, I want it to be copied to /home/user/doc/xy.txt. Also, if there is a new dir, say /mnt/path_to_webdav/user/newdir, I want a new dir to be created at /home/user/newdir with all the files in it, should there be any. I can do find with exec and copy all the files into one directory. This is not what I want, though. How do I preserve the relative path and get the files copied into their corresponding directories?
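A sketch using the paths from the post: run find from the source directory so it emits relative paths, and let GNU cp's --parents rebuild the directory structure on the other side:

Code:
# copy files modified within the last hour, preserving their relative paths
cd /mnt/path_to_webdav/user
find . -type f -mmin -60 -exec cp -p --parents {} /home/user/ \;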
I have created a folder on my server to which some regular state files are uploaded. I want users to be able to modify or upload the already stored files, but they should not upload any unwanted files or folders. For that I want to use the "rm" command as an automatic scheduler (putting it in the crontab), so that all files will be removed except the required files/folders for which this upload facility is activated. Users are using secure shell for uploading the data.
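A sketch of what such a cron job could look like; the upload path and the whitelist of file names to keep are assumptions to be adapted:

Code:
# runs hourly from cron: delete every file in the upload folder
# except the ones users are allowed to keep updating
0 * * * * find /srv/upload -type f ! -name 'state1.dat' ! -name 'state2.dat' -delete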
I am attempting to copy a set of sub folders from their multiple parent directories to a new location.
For example, I have three folders to copy:
I would like them to be copied to:
In actuality there are many folders besides folder1, folder2, and folder3, and no numerical order exists. So the folder named 'photos' would be copied under its parent folder's name in a new location. I would need this to occur for all folders in the '/home/user' directory.
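A sketch of one way to do this with a shell loop; the /home/user source comes from the post, while the destination root /mnt/new_location is an assumption:

Code:
#!/bin/bash
# for every directory under /home/user, copy its 'photos' subfolder
# to the new location, under the parent directory's name
for dir in /home/user/*/; do
    name=$(basename "$dir")
    if [ -d "$dir/photos" ]; then
        mkdir -p "/mnt/new_location/$name"
        cp -a "$dir/photos" "/mnt/new_location/$name/"
    fi
done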
I've finally managed to get a local install of WP up and running, but for some reason, when I copy new theme directories into the themes folder, WP-Admin doesn't see them. I've spent about 4 hours working my way through this, and this is the last problem I've got to deal with.
I was thinking of migrating my apt-mirror repository to the recommended ftpsync scripts: [URL] .....
I pre-populated my pool with already downloaded files and set up the scripts.
However, if I run bin/ftpsync and monitor rsync with lsof -p, I can see that it is still downloading files from oldstable (wheezy) despite the exclude options.
I'm guessing it's a configuration error, but I can't seem to figure it out. Any thoughts? My etc/ftpsync.conf is as follows:
Actually, I don't think it works like I thought it did. A few guides I found listed the exclude options, but the sample config file has this:
Code:
## If you do want to exclude files from the mirror run, put --exclude statements here.
## See rsync(1) for the exact syntax, these are passed to rsync as written here.
## DO NOT TRY TO EXCLUDE ARCHITECTURES OR SUITES WITH THIS, IT WILL NOT WORK!
#EXCLUDE=""
So it looks like it doesn't exclude the suites at all.
I'd like to back up my whole system to a 2nd disk using rsync (other tools are not possible). Which paths should I exclude from the backup? I was thinking about /proc, /dev, the lost+found directories... What other paths am I forgetting?
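As a sketch, the usual candidates are the virtual and volatile filesystems; one way to keep the list manageable is an exclude file (the paths below are the commonly excluded ones and the file name/destination are examples):

Code:
# /root/rsync-excludes.txt -- one pattern per line
/proc/*
/sys/*
/dev/*
/run/*
/tmp/*
/mnt/*
/media/*
/lost+found

# then: rsync -aAXH --exclude-from=/root/rsync-excludes.txt / /mnt/backupdisk/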
Sometimes, randomly, when turning on my Ubuntu laptop (HP 6730b, Ubuntu 10.04) I get the wrong monitor setting (a much lower resolution than normal), and it is not possible to set it correctly because the menu buttons are wrongly placed and some are not present (probably there is not enough room for them). I have no other way than to restart the system to get the right resolution. Can someone explain this (to me) inexplicable behaviour? Of course I didn't change anything in the monitor settings...
There is this bug in the latest version of Ubuntu, which also affects Jessie:
Can't copy a file from SMB share to the local file system: Software caused connection abort
The problem, apparently, is that newer versions of Samba hit servers with multiple requests at the same time, and for some reason the Zyxel and Iomega boxes can't handle this. The best solution they've come up with is to modify the smb.conf file on your server to include this setting: "max mux = 1".
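A sketch of where that setting goes, assuming a stock smb.conf on the NAS or server:

Code:
# in smb.conf, [global] section -- limit outstanding multiplexed requests to one
[global]
    max mux = 1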
Here is the reference material on this bug: [URL] ....
The Samba developers have fixed it in the latest version, but neither Ubuntu nor Debian has released the fixed version of nautilus as of yet. Here is the reference: [URL] ....
I am using Windows XP and Debian Linux. In Windows XP I have around 25 GB of free space, but in Linux, if I copy anything, it says there is not enough space to copy.
I am facing a problem with my AT91SAM9260 customized board. The board is almost the same as the evaluation kit.
I could successfully download the binaries (Bootstrap-v1.16, u-boot-1.3.4, Linux kernel 2.6.20) to the DATAFLASH/NAND flash on my board using the Atmel SAM-BA tool over USB/serial port/J-Link.
Here I describe the problem.
When I power up the board, the bootstrap does not jump to the U-Boot location in the normal boot sequence, and the board gets stuck in the bootstrap.
But when I disconnect/reconnect the JTAG USB cable (provided with the SAM-BA ICE), it jumps to the U-Boot location and boots the board properly. I'm getting the same error with NAND flash as well.
I have tried one more test case. I copied the bootstrap binary to the flash location specified for the U-Boot binary, instead of U-boot.bin (location 0x8400 in DataFlash), and I got continuous bootstrap debug messages on my console. [So can I conclude the SDRAM doesn't have any problem?]
I inherited a machine that's set up with 3 disks, managed by LVM (v 2.02.46-RHEL5). I just got a new box with the same hardware configuration, and would like to clone the disk setup by copying an LVM config file from the first box to the second, and have LVM on the new box set up the disks according to that same configuration. Is there a way to do this?
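A sketch of one way this is commonly done with LVM's own metadata tools; the volume group name vg0 and the device name are assumptions, and the PV UUIDs have to be taken from the backup file rather than made up:

Code:
# on the old box: dump the volume group metadata to a file
vgcfgbackup -f /tmp/vg0-backup.conf vg0

# on the new box: recreate each PV with the UUID recorded in that file,
# then restore the VG layout and activate it
pvcreate --restorefile /tmp/vg0-backup.conf --uuid <UUID-from-backup-file> /dev/sdb1
vgcfgrestore -f /tmp/vg0-backup.conf vg0
vgchange -ay vg0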
I have 2 PCs, one running Ubuntu and the other Kubuntu. Both are Lucid 10.04. I have a network printer. On Ubuntu I cannot find my printer model listed, but Ubuntu gives the option to search for drivers online, and this method succeeds. On Kubuntu there's no such option, so I cannot install my printer driver.
I have a folder that contains a bunch of source code tarballs and some shell scripts that compile it all. I've been using git to keep track of it all. It works very well. It makes it easy to distribute recent changes to the computers I look after - just git pull and all the recent changes are updated without me having to remember everything or redownload the whole thing.
The problem I have is that when I update a package (e.g. gnumeric-1.8.4 => 1.10.0) git keeps a copy of the old tarball, so the whole repository grows ever larger. At the moment the only solution I have is (when the folder gets over 3GB) to delete .git and start over. This is less than ideal: I want git to remember all the changes I've ever made to the bash scripts so that I can go back and review the code, but I don't want it to keep a copy of all the old tarballs. I.e., after I've used git rm ${FILE} it should delete its backup copy of ${FILE}, but I still want it to keep track of all the changes to the current files (the bash scripts never get removed, only changed).
How do I tell git to remember all the changes to ${CURRENT_FILES} but not keep a copy of ${REMOVED_FILES}?
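Git always keeps whatever its history references, so the only way to drop an old tarball completely is to rewrite the history that mentions it. A sketch with git filter-branch (the tarball file name is an example based on the version mentioned above), followed by the cleanup needed before the space is actually reclaimed:

Code:
# remove one superseded tarball from every commit, keeping all other history
git filter-branch --index-filter \
    'git rm --cached --ignore-unmatch gnumeric-1.8.4.tar.gz' \
    --prune-empty -- --all

# expire the old references and repack so the blobs really go away
git reflog expire --expire=now --all
git gc --prune=now --aggressive

Note that this rewrites commit hashes, so the other machines would need to re-clone or reset; keeping tarballs out of git entirely (e.g. via .gitignore) avoids the problem in the future.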
I am in need of Linux help. I am at college and I need this backup/restore script to pass the final part of an assessment. I require a backup script that will not only back up but also restore files to the relevant directories, e.g. users are instructed to store all word-processor files in a directory named wp. So I need to create a backup directory and 3 directories within that, put some files within the 3 directories, and then back them up or restore them. I know I should/have to do this myself, but I've been trying to find and understand the info for the last few days and came up with zero.
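Not a finished answer, but a minimal sketch of the shape such a script could take; the directory names (wp plus two others) and the archive location are assumptions to be adapted to the actual assessment:

Code:
#!/bin/bash
# simple backup/restore of three data directories under $HOME
DIRS="wp spreadsheets db"
ARCHIVE="$HOME/backup/backup.tar.gz"

case "$1" in
    backup)
        mkdir -p "$HOME/backup"
        tar czf "$ARCHIVE" -C "$HOME" $DIRS
        echo "Backed up $DIRS to $ARCHIVE"
        ;;
    restore)
        tar xzf "$ARCHIVE" -C "$HOME"
        echo "Restored $DIRS into $HOME"
        ;;
    *)
        echo "Usage: $0 {backup|restore}"
        exit 1
        ;;
esac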
I want to make a web server with multiple users allowed to log in through SFTP to a specific folder, www. Multiple users are added, let's say user1 and user2, all of them belonging to the www-data group. The www directory has owner www-data and group www-data.
I have used chmod -R 775 on the www folder, but after I create a folder test over SFTP (using FileZilla), the group of the created directory has only r and x permissions, and I am not able to log in with the second user user2 and create a directory within www/test due to the lack of w permission for the group.
I also tried using chmod 2775 on the www directory, but without luck. Can somebody explain to me how I can make a newly created directory inherit the parent directory's group permissions?
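A sketch of the usual two-part fix: the setgid bit (the 2 in 2775) only makes new entries inherit the group, while the missing group write bit comes from the SFTP session's umask, which can be forced for OpenSSH's internal-sftp subsystem (the /var/www path is an assumption; OpenSSH 5.4 or newer is required for the -u option):

Code:
# make new files/dirs under www inherit the www-data group
chown -R www-data:www-data /var/www
chmod -R 2775 /var/www

# in /etc/ssh/sshd_config: force a group-writable umask for SFTP sessions,
# then restart sshd (this affects all SFTP users)
Subsystem sftp internal-sftp -u 0002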
I think I've learned that these directories are 'predefined' and get recreated at each login (even when deleted they appear again and again, which is pretty annoying indeed...)
[URL] .... [URL] ....
Now I would like to avoid the default creation of those dirs, but I did not understand how to edit the local and global configuration files controlling this behaviour (I think):
Code:
gedit ~/.config/user-dirs.dirs
# This file is written by xdg-user-dirs-update
# If you want to change or add directories, just edit the line you're
# interested in. All local changes will be retained on the next run
# Format is XDG_xxx_DIR="$HOME/yyy", where yyy is a shell-escaped
# homedir-relative path, or XDG_xxx_DIR="/yyy", where /yyy is an
# absolute path. No other format is supported.
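For what it's worth, the file above only sets the paths; the recreation at each login is done by xdg-user-dirs-update, which has its own config switch. A sketch, assuming a standard xdg-user-dirs install:

Code:
# copy the global default and turn the updater off for this user
cp /etc/xdg/user-dirs.conf ~/.config/user-dirs.conf
# then edit ~/.config/user-dirs.conf so it contains:
enabled=False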