I would like to ask if there is a way of preventing a file from being deleted while still retaining the option of editing it. I know that I can turn write access off with chmod, but that would also mean I couldn't edit the file any more. What I would like to achieve is to make it impossible to remove a file I am working on.
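To illustrate with a sketch (filename hypothetical), this is the behaviour I keep running into:

chmod a-w notes.txt   # removing write permission blocks my editing...
rm notes.txt          # ...yet rm merely prompts and can still delete the file, since deletion is governed by the directory's write permission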
I'm setting up an Ubuntu computer lab at my kids' school and am trying to find a way to lock certain icons to the desktop. For example, I need each user to have Firefox on the desktop and not be able to delete it. I've tried doing a chown on the Firefox shortcut, changing ownership to root or to myself, but the student account can still delete it with no problems. I've also tried Sabayon, and while it does bring the shortcut back at each logon, it's not "Marked as Trusted", which produces an annoying popup every time. How can I ensure that each user, when logged on, has a Firefox shortcut along with a couple of other mandatory ones? Some logon script, maybe?
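Something like this is what I have in mind for a logon script (a rough sketch; the launcher path is an assumption and may differ by release, and I believe the executable bit is part of what the desktop's "trusted" check looks at):

#!/bin/sh
# copy the mandatory launchers onto the user's desktop at every login
cp /usr/share/applications/firefox.desktop "$HOME/Desktop/"
chmod +x "$HOME/Desktop/firefox.desktop"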
I used the ext3 format when I formatted my partition prior to installing Ubuntu 10.10. I accidentally deleted a file and began the process of getting it back. It wasn't critical, but it would have been helpful to recover the file. To make a long story short, I ran into some unexpected roadblocks. I tried to use PhotoRec to get the job done, but with no success.
I'm just looking down the road in case I ever have to recover something important. If it would be better to go back to the FAT32 file system, I would rather do it sooner than later. Just as a side note, I am dual booting between Linux and Windows.
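For reference, this is roughly how I ran PhotoRec (the device name here is an assumption; PhotoRec ships in the testdisk package):

sudo apt-get install testdisk   # PhotoRec is part of testdisk
sudo photorec /dev/sda1         # interactive: pick the partition and file types, and write output to a DIFFERENT disk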
In /etc/default/rcS I have set FSCKFIX=yes. This solves a recurring "no init found" problem that prevents my machine from booting. Occasionally, however, the setting somehow reverts by itself to FSCKFIX=no, and then my machine cannot boot. Is there a way I can prevent this file from being changed?
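One thing I am considering is the immutable attribute (a sketch; note it blocks all changes, including my own, until the flag is removed):

sudo chattr +i /etc/default/rcS   # make the file immutable so nothing can modify or delete it
sudo chattr -i /etc/default/rcS   # remove the flag again when I need to edit it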
How can you turn off unwanted file uploads in aMule? In LimeWire you can decide whether or not to share files; how can you do this in aMule? I've turned the settings down to 0 or 1 and reduced the bandwidth, time, etc., but how do I actually prevent uploads? Also, on Windows there are all kinds of file splitters and joiners; what is recommended on Ubuntu?
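On the splitter/joiner question, the standard command-line tools already cover it (filenames hypothetical):

split -b 100M bigfile.iso part_   # split into 100 MB pieces: part_aa, part_ab, ...
cat part_* > bigfile.iso          # join the pieces back together in order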
A tutorial says that in order to add new fonts, you take your font files and put them into a newly created directory inside your home folder named ".fonts". Restart, and they will be ready to use.
I'm not able to do this from Nautilus because of an error saying that I do not have permission to do so.
I made the directory in the terminal using sudo mkdir, but I don't know how to move the actual fonts from my desktop into that folder.
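Here is what I think the whole sequence should look like (a sketch; it assumes the fonts are .ttf files sitting on the Desktop, and using sudo for mkdir is probably the source of the permission error, since it makes the folder owned by root):

sudo chown -R "$USER" ~/.fonts   # fix ownership if the directory was already created with sudo
mkdir -p ~/.fonts                # otherwise create it without sudo, so it belongs to me
cp ~/Desktop/*.ttf ~/.fonts/     # copy the font files in
fc-cache -f -v                   # rebuild the font cache instead of restarting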
How do I prevent a file from being copied?
I want someone to be able to see the contents of a directory and open the relevant document, but for viewing purposes only: they should not be able to copy the file, either through copy and paste or through File > Save As.
Is there a way to undelete a just-deleted file on JFS? I can't seem to find any information on it. I could have sworn I did this before, but I didn't save the steps.
A friend had a 320 GB HDD he wanted me to back up. I saved all the files in a folder, "Documents & Settings", and even made a 7z archive out of it. I used a 1 TB MyBook, copied the files to it, and then tried to delete the originals. I had recovered the files using Ubuntu but moved them to my Windows partition. When I got onto Windows 7 and tried to delete the directory "Documents & Settings", I got an error saying some files had names that were too long or something like that. So I went into Ubuntu and deleted the files from my Windows partition without moving them to the Ubuntu partition.
Well, my 1 TB drive just broke, so I lost the files on there. Now I'm trying to get the files back using Ubuntu. I am running scalpel at the moment, and it hasn't found anything at all. I really don't know if I set up the configuration file right. I have just started scanning my other HDD, the one that contains the Linux boot, and since it has only just started I am not sure whether it will find anything.
But in case it doesn't, how do I set up the config file to find the files? There are two things I deleted off my Windows partition: a 7z archive of the 9 GB "Documents and Settings" directory, and the actual "Documents and Settings" folder itself.
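From reading the stock scalpel.conf, I believe the line format is extension, case sensitivity, maximum size in bytes, and header bytes. So a 7z entry might look like this (37 7A BC AF 27 1C is the standard 7z magic number, but treat the exact line as my assumption):

# added to /etc/scalpel/scalpel.conf: carve 7z archives up to 10 GB by their magic bytes
7z    y    10000000000    \x37\x7a\xbc\xaf\x27\x1c

Then run scalpel against the partition and collect the results in a separate directory:

sudo scalpel /dev/sda1 -o recovered/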
I have recently been messing about with a few of my own files (I realise this was not very smart), and I managed to delete them from the trash as well. Is there any possible way that I am going to be able to recover the folder that I deleted?
The title says it all. I'm pretty new to Ubuntu and was messing around; I honestly don't remember what I was trying to do, but it ended up with me deleting the /usr/bin/ld file. It didn't really change anything, and everything performed as normal until I tried to compile some C++ code a few days later. Now it's giving me the error "collect2: cannot find 'ld'". I've been searching all over for how to get it back or reinstall it; it seems no one else was dumb enough to do what I did. I forgot to mention: it's not in the trash, because I overwrote the file and then deleted it.
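Since ld comes from the binutils package, reinstalling that package should put the file back (the package name is worth verifying with dpkg first):

dpkg -S /usr/bin/ld                        # confirm which package owns the file
sudo apt-get install --reinstall binutils  # reinstall it to restore /usr/bin/ld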
I have, for example, a folder called "MyFolder" that contains three files: MyFile1, MyFile2, MyFile3. The only file that I do NOT want a particular user/group to even know exists is, for example, MyFile2. So when they do a directory listing on MyFolder, they should see only MyFile1 and MyFile3. How can this be done in Linux?

The important thing is not just preventing them from "executing" MyFile2, but preventing them from even knowing that it exists by excluding it from a directory listing. This is a simplified example using one file; in reality I have lots of files, and some of the items I want to block are subfolders.

It is very important for me to hide the existence of certain files/folders when the user does a directory listing. It's also important that the files stay in their current folder (that is, I can't use a workaround that requires moving all the files into a separate folder and then securing that folder).
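For what it's worth, standard permissions seem to be all-or-nothing at the directory level, which is exactly what I am trying to avoid (a sketch of the behaviour I mean):

chmod o-r MyFolder   # other users can no longer list ANY names in MyFolder...
chmod o+x MyFolder   # ...though with execute permission they can still open files whose exact names they already know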
Running CentOS 5.5 64-bit. Sometimes I boot this installation on the real machine, sometimes in VMware Workstation. The problem is that these environments have different network interface cards: as soon as kudzu detects that the network device changed, it renames ifcfg-eth0 to ifcfg-eth0.bak and puts a new default ifcfg-eth0 in its place.
Is it possible to tell kudzu to leave ifcfg-eth0 as it is?
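One workaround I am considering is simply keeping kudzu from probing at boot at all (a sketch; it assumes I don't rely on kudzu for any other hardware detection):

chkconfig --list kudzu   # check whether the service is enabled
chkconfig kudzu off      # stop kudzu from running at boot, so ifcfg-eth0 is left alone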
I recently made the dumb mistake of using tar to make a backup of my "/" on my "/" when my "/" didn't have nearly enough space to store the backup. I received a warning message, so I cancelled the terminal process and used Nautilus to delete the part of the backup that had already been written. That didn't seem to free up any space on my "/" like it should have, though. In an effort to find any hidden trash files that needed to be deleted, I used this terminal command:
I recently accidentally (permanently) deleted a bunch of files off my computer. I used foremost to recover all my images, but there are still a bunch of videos that need to be recovered. The problem is that foremost seems to have also recovered a crapload of files from before I switched to Ubuntu (I just removed windoze today), so I have a LOT of jpg images right now (over 400,000), and I don't want to deal with that many video files! How do I recover my recently deleted videos without getting a bunch that I don't want? (Can I specify the folder they were deleted from, or something?)
PS: I used this command to recover my pictures:
sudo foremost -t jpg -i /dev/sda1
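For the videos, I was planning to run foremost again with only video types and a separate output directory (a sketch; the type list is based on what I understand foremost supports as built-in types):

sudo foremost -t avi,mpg,mov,wmv -i /dev/sda1 -o recovered_videos   # carve only video formats into their own directory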
I am running into a very strange problem where my .htaccess file keeps getting deleted. Scenarios attempted:
- FTP: upload file.txt, then rename it to .htaccess
- FTP: upload .htaccess directly
- SSH: wget url/.htaccess
- SSH: wget url/htaccess.txt, then rename it to .htaccess
I updated my computer yesterday, and ever since, when I delete something from my second hard drive it no longer goes into the trash. Not only that, but the disk space is not released either, so now I have a hard drive that is completely full. I tried deleting .trash-1000 as root, but that only created a folder called .trash-0 and did not free anything. I am running Kubuntu Lucid Lynx, kernel 2.6.32-31-generic.
I've recently had my file system irrecoverably corrupted after a hard lock-up. This happened to me before, a couple of weeks ago, and both times I've had to reinstall. I don't know what is causing the lock-ups (possibly the binary NVIDIA driver), but I would like my file system to be intact, or at least recoverable, the next time it happens. What is the "safest" way to mount things?
I'll be using ext3 with data=journal, but I've read that disk write caching combined with a kernel crash or power loss can still corrupt things. Supposedly, mounting with barrier=1 helps some, but doesn't work with LVM? Would it be wise to turn off disk write caching completely? I understand there's a performance penalty, but it would be worth it if it helps keep the file system consistent in case of disaster.
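Concretely, this is the setup I have in mind (a sketch; the device names are assumptions):

# /etc/fstab entry: full data journaling plus write barriers on the root file system
/dev/sda1  /  ext3  defaults,data=journal,barrier=1  0  1

sudo hdparm -W0 /dev/sda   # turn the drive's write cache off entirely, at a performance cost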
After giving one of my folders the wrong name, I thought I didn't need it any more and pressed Delete while holding Shift. Is there any way I could get that folder back? (I'm actually looking for a file inside that folder, my .conky config file, to be precise.) I've tried scalpel and extundelete, but neither of them worked.
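In case it helps to show what I tried, the extundelete attempt looked roughly like this (device and path are assumptions; the partition should be unmounted, or mounted read-only, first):

sudo umount /dev/sda1                                            # work on an unmounted file system
sudo extundelete /dev/sda1 --restore-directory home/me/myfolder  # restores into ./RECOVERED_FILES; path is relative to the filesystem root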
The man page for rm says: "Note that if you use rm to remove a file, it is usually possible to recover the contents of that file." Do you know of a way to recover a file deleted with rm?
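One trick I've seen for plain-text files is grepping the raw device for a phrase you know was in the file (a rough sketch; the device name is an assumption, the output needs manual cleanup, and it should be written to a different disk so it doesn't overwrite the very blocks being recovered):

sudo grep -a -B 5 -A 100 'some phrase from the file' /dev/sda1 > /mnt/otherdisk/recovered.txt   # -a treats the device as text; -B/-A grab surrounding lines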
I just noticed on my Ubuntu machine (ext3 file system) that removing write permissions from a file does not keep root from writing to it. Is this a general rule of UNIX file permissions, specific to Ubuntu, or a misconfiguration on my machine? Writing to the file fails (as expected) if I do it from my normal user account. Is this normal behavior? Is there a way to prevent root from accidentally writing to a file (preferably using normal file-system mechanisms, not AppArmor, etc.)?
I understand that root has total control over the system and can, e.g., change the permissions on any file. My question is whether the currently set permissions are enforced on code running as root. The idea is the root user preventing her/himself from accidentally writing to a file. I also understand that one should not be logged in as root for normal operations.
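This is the behavior I'm seeing, plus the one candidate I know of for blocking even root (a sketch; the immutable flag has to be removed before the file can be changed again):

touch test.txt && chmod a-w test.txt
echo hi > test.txt                 # normal user: "Permission denied", as expected
sudo sh -c 'echo hi > test.txt'    # root: succeeds despite the missing write bit
sudo chattr +i test.txt            # immutable flag: now even root's writes fail until chattr -i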
I am using BackTrack 4 Final, a Linux distro that is Ubuntu-based. I had a directory that contained around 5 files. I deleted one of the files, which sent it to the trash. I then zipped up the directory (now containing 4 files), using this command:
zip -r directory.zip directory/
When I then unzipped directory.zip, the file I had deleted was in there again. I couldn't believe this, so I zipped up the directory a second time, and the file reappeared again, but this time it could not be opened because the operating system said it didn't exist or something. I don't remember the exact error, and I cannot make it happen again. Why would a file that was deleted from a directory reappear in that directory after the directory was zipped up?
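If it happens again, one thing worth checking is what zip actually stored versus what is really on disk (straightforward commands, nothing assumed beyond the names above):

unzip -l directory.zip   # list exactly which files the archive contains
ls -la directory/        # compare against the directory's real contents, including hidden files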
I had been uploading pictures from my camera to my Downloads folder and had a lot in there. I thought right-click > Delete would just delete the pictures, but it deleted the folder too.