I am using secure delete to remove files from a Debian Linux PC. However, secure delete does not remove folders. This has led me to look at writing a script that would move files to a predetermined folder for deletion. My plan is as follows: I have a folder on my desktop called shredder into which I move the contents of the waste bin. The script needs to identify all files within the folders and subfolders inside the shredder folder, move each file up into the shredder folder itself, and then delete the now-empty folders. At that point secure delete can be run on the shredder folder with a command like shred -v -u *.* The problem I have is in writing the code that moves the files out of the different folders and then deletes those folders. Note that the names of the files, folders and subfolders will not always be known.
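A minimal sketch of such a script, assuming the shredder folder lives at ~/Desktop/shredder and GNU coreutils/findutils are available (adjust the path as needed):
Code:
#!/bin/bash
# Flatten the shredder folder, then securely delete its contents.
SHREDDER="$HOME/Desktop/shredder"

# Move every regular file found in subfolders up into the shredder folder itself;
# --backup=numbered avoids silently overwriting files that share a name.
find "$SHREDDER" -mindepth 2 -type f -exec mv --backup=numbered -t "$SHREDDER" {} +

# Remove the now-empty subfolders.
find "$SHREDDER" -mindepth 1 -type d -empty -delete

# Securely overwrite and remove the flattened files.
find "$SHREDDER" -maxdepth 1 -type f -exec shred -v -u {} +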
I want to restrict users to a particular folder, say /var/lib/tomcat/webapps. The users should see all subfolders inside webapps and be able to work with them (read and edit, but not delete). I understood that chroot is the way to do this, and I read this [URL] community discussion, but as far as I can tell that approach gives the user a complete working Ubuntu installation inside a directory, which is not what I want.
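One possible sketch, under the assumption that the users only need SFTP access: OpenSSH's internal-sftp can chroot a group into a directory without any binaries installed inside it. The group name and user below are placeholders, and the chroot target must be owned by root and not writable by group or others. This only confines users to the tree; the read-and-edit-but-no-delete part still has to come from the filesystem permissions themselves.
Code:
# Chroot members of a dedicated group into /var/lib/tomcat over SFTP only.
groupadd webappusers
usermod -aG webappusers someuser          # placeholder user name
cat >> /etc/ssh/sshd_config <<'EOF'
Match Group webappusers
    ChrootDirectory /var/lib/tomcat
    ForceCommand internal-sftp
    AllowTcpForwarding no
EOF
service ssh restart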
I have been asked by my company's management to look into moving our file share server from Windows Server 2003 to Ubuntu 10.04 using Samba. I have successfully configured Active Directory authentication using winbind, configured Samba, and can access my file share successfully.
The complication arises from implementing ACL mappings on Linux, as I need fine-grained control over specific subfolders and files. From what I have read, I can't map all 13 NTFS special permissions onto the respective Unix rwx permissions. I have a use case where a certain group, A, has read/write/execute rights on a folder/file but should not be allowed to delete that specific folder/file. On Windows, all I have to do is set the security permissions to deny 'delete subfolders and files' and 'delete', and it works well. In the Linux world I understand I can't do this: once the user has rwx permissions on the folder/file he can do whatever he likes.
I googled around this issue a lot and found that if I set the sticky bit on the directory, users can still read and write the files but cannot delete them. It works for most document types, but not for MS Office. From the Samba documentation I gathered that "Word does the following when you modify/change a Word document: MS Word creates a new document with a temporary name. Word then closes the old document and deletes it, then renames the new document to the original document name." (from the Samba HOWTO) So if the sticky bit is set on a directory containing Word files, Linux won't allow Word to delete the old file (which its save operation requires) and the save fails with an error. I would be highly obliged if someone could shed light on this issue. Alternatively, I would love to learn about other solutions for the use case described.
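For reference, a minimal sketch of the sticky-bit setup described above (the share path is a placeholder): group A keeps rwx on the tree, but the sticky bit means only a file's owner or root may delete or rename entries in each directory, which is exactly what trips up Word's save-via-rename behaviour.
Code:
SHARE=/srv/fileshare/projects               # placeholder path
chgrp -R groupA "$SHARE"
chmod -R g+rwX "$SHARE"                     # rwx on dirs, rw on files for the group
find "$SHARE" -type d -exec chmod +t {} +   # sticky bit on every directory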
I want to list my folders and subfolders recursively and also show the size of the files in the terminal. I started using this:
Code:
ls -h -R > /test.txt
I got everything but not the size of the folders. Then I tried this:
Code:
du -h --max-depth=1 > test.txt
It's supposed to show me everything, but I can't see the subfolders, and this command does not take a recursive option. How can I show the size of the files and folders as the second command does, but including the subfolders?
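A possible sketch: du is already recursive once the --max-depth limit is dropped, and -a adds the individual files to the report.
Code:
du -ah > test.txt      # every file and every directory, with cumulative sizes
du -h > test.txt       # directories only, but fully recursive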
I am trying to write a simple backup script in Python in which I list the files that are 24 hours old in specific directories of my choosing. I read the find manual and used
find . -mtime 1 > log.dat
to get the list of files into log.dat; however, the entries in that list also include the leading path information.
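A possible way to get bare file names, assuming GNU find (note that -mtime 1 matches files modified between 24 and 48 hours ago, while -mtime -1 matches the last 24 hours):
Code:
find . -mtime -1 -type f -printf '%f\n' > log.dat    # file names only, no ./path/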
I receive a lot of email daily, and with the ever-growing number of maildirs I need to structure and optimize browsing in mutt. My maildirs follow this naming scheme: .domain.category.sub_category
My goal is to break domain, category and sub_category into nested levels when browsing through the mailboxes. This is sort of achieved through the use of IMAP, but I have stumbled upon a few snags, so my questions are:
Is this nested mailbox view possible by accessing ~/Mail directly instead of using IMAP, e.g. with set folder="~/Mail" and set spoolfile="~/Mail/.INBOX"? When I start mutt I'm presented with all available mailboxes, which is what I want to get away from; I want to be dropped directly into my default/main inbox, as I am when accessing ~/Mail directly. How?
When hitting c (a defined macro, see the configuration below) I am again presented with all available mailboxes, not the mailboxes at the current browsing level, e.g. the mailboxes within a specific category. To get that view I need to hit c+TAB, which I've solved by adding a <tab> to the c macros. When I finally get mutt to present the mailboxes in nested levels they are only enumerated, not annotated with N to indicate new mail, or better still, the total number of new mails in or under a folder. I know it's possible to define a format for the different views, but is there one for this view? If so, which?
My mutt configuration:
Code:
set autoedit
set edit_headers
set reverse_name
set from='blapp'
set realname='Blapp'
set use_from
.....
# Automatic viewing of html mail, but always prefer text/plain
set implicit_autoview
alternative_order text/plain text/html
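Regarding questions 1 and 3, a small sketch of muttrc settings that may help; the folder_format string is my assumption, not taken from the configuration above. With folder pointing at ~/Mail and spoolfile at the INBOX maildir, mutt starts in the main inbox, and the %N escape in folder_format flags folders holding new mail in the browser view.
Code:
set mbox_type = Maildir
set folder    = ~/Mail
set spoolfile = +.INBOX
set folder_format = "%2C %t %N %f"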
I am new to bash scripting. I have 2 files, and I need to create a new file by reading from these 2 files. I tried working with 2 for loops, but with nested loops the script runs the 1st outer iteration, then the whole inner loop, then the next outer iteration and the whole inner loop again. How can I get the result paired instead: 1st line of the outer file with the 1st line of the inner file, then the 2nd with the 2nd, and so on? Below is my program:
#!/bin/bash
rm d
for i in `cat a`
do
    echo "dn:$i" >> d
    for j in `cat b`
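A minimal sketch of paired (lockstep) reading, which avoids the nested-loop behaviour entirely; the "dn:" prefix is kept from the script above, and the exact output format of the combined line is an assumption:
Code:
#!/bin/bash
rm -f d
# Read file a on fd 3 and file b on fd 4, one line from each per iteration.
while IFS= read -r i <&3 && IFS= read -r j <&4
do
    echo "dn:$i $j" >> d
done 3<a 4<b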
I'm trying to remember how to use the output of a nested Bash command substitution. `which prog` gives me the path to the program I'm interested in; I would then like to take the directory part of that path and plug it into 'cd', so I end up in the directory containing the program.
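One possible one-liner, combining dirname with nested command substitution:
Code:
cd "$(dirname "$(which prog)")"
# or, using the shell builtin instead of the external which:
cd "$(dirname "$(command -v prog)")"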
I want a list of all my mp3 files (or any other kind of file, actually) that also tells me HOW MANY OF THEM I have on my computer. I tried both the find and locate commands in the terminal, but they don't tell me how many files I have.
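A possible sketch: pipe either command through wc -l to count the matches, or use locate's built-in counter.
Code:
find ~ -iname '*.mp3' | wc -l     # count .mp3 files under the home directory
locate -ci '*.mp3'                # same idea, using the locate database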
I have an external hard drive mounted at /media/exthdd/. On that hard drive I have folders: Music, Pictures, Videos, etc. Can I make a symbolic link from /media/exthdd/Music/ to, say, the root-level directory /_ ? The directory /_ is empty; I just want a quick way of typing that gets me there, much like "cd ~" gets me to my home/username folder. I have my music organized as Artist/Year-Album/Track.Title.mp3, and I want to be able to run "cd /_" then "ls" and see all the Artist folders.
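A possible sketch (paths taken from the question): remove the empty placeholder directory and make /_ itself a symlink to the Music folder.
Code:
sudo rmdir /_                        # only works because /_ is empty
sudo ln -s /media/exthdd/Music /_    # /_ now resolves to the Music folder
cd /_ && ls                          # lists the Artist folders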
Let's say I want to get the size of each folder of a Linux file system. When I use ls -la I don't really get the summarized size of the folders. If I use df I get the size of each mounted file system, but that doesn't help me either. And with du I get the size of each subfolder and the summary of the whole file system. But I want only the summarized size of each folder within the ROOT folder of the file system. Is there any command to achieve that?
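A possible sketch: du with -s gives one summarized total per argument, so applying it to every entry in / yields one line per top-level folder.
Code:
du -sh /* 2>/dev/null      # one human-readable total per folder in /
du -shx /* 2>/dev/null     # same, but do not cross into other mounted file systems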
I have scripts in the folder /opt/apache2/tools/, and I also have another folder called IDM under /opt/apache2/tools. I tried to configure htpasswd for just the IDM folder, as below.
bash-3.00# pwd
/opt/apache2/tools
bash-3.00# ls -al
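For reference, a minimal sketch of protecting only the IDM subfolder with basic auth; the password-file path and user name are placeholders, and it assumes AllowOverride AuthConfig is enabled for this directory in the Apache configuration.
Code:
htpasswd -c /opt/apache2/conf/idm.passwd someuser    # prompts for a password
cat > /opt/apache2/tools/IDM/.htaccess <<'EOF'
AuthType Basic
AuthName "IDM"
AuthUserFile /opt/apache2/conf/idm.passwd
Require valid-user
EOF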
I'd like to copy a file, say widgets/water.txt, to all subfolders in the folder widgets using a single command. So if the folder widgets has 10 subfolders like widgets/blue, widgets/green, etc. I'd like to copy water.txt to all of them with one command.
I tried the commands
Code:
cp water.txt ./*/water.txt
cp water.txt ./*/
However, neither of these works; the latter gives 'cp: omitting directory' errors.
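A possible sketch, run from inside the widgets folder, that loops over each subfolder instead of passing them all to a single cp:
Code:
for d in ./*/ ; do cp water.txt "$d" ; done
# or, equivalently, with find:
find . -mindepth 1 -maxdepth 1 -type d -exec cp water.txt {} \;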
Let's say I have a project that has generated lots of XML files, and all of these XML files point to a location with the text name TEXT15. I want to change every file that contains TEXT15 so that it says TEXT16 instead. The following works for the files in a single folder, but not recursively across all of them: perl -pi -e 's/TEXT15/TEXT16/g' ./* However, I have many subfolders, and within those more sub-subfolders; I just want to do this recursively.
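A possible recursive variant, restricted to XML files via find:
Code:
find . -type f -name '*.xml' -exec perl -pi -e 's/TEXT15/TEXT16/g' {} +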
I want to record an internet radio station starting at 2:00am tomorrow morning. The specific program on the radio station lasts until 6:00am. The command I need to run to record the station is:
Code:
mplayer http://wjcu.jcu.edu:8001/listen.pls -ao pcm:file=indie_heat_of_the_night.wav -vc dummy -vo null
I'd use cron, but 1. I'm not sure how to and 2. it seems unnecessarily complicated for something that I only want to run once. If cron is the only/easiest solution, I guess I'll just have to resort to that, but I'd rather not.
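A possible one-shot alternative to cron, assuming the at daemon (atd) is running and GNU coreutils timeout is available; timeout 4h stops the recording when the program ends at 6:00am.
Code:
echo 'timeout 4h mplayer http://wjcu.jcu.edu:8001/listen.pls -ao pcm:file=indie_heat_of_the_night.wav -vc dummy -vo null' | at 2am tomorrow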
Code:
std::map<QString, std::vector<std::pair<QString, QString> > > configFileDataVector;
How should I insert data into it? All the examples I have looked up on Google are for plain maps!