General :: Security - Safely Zero Fill A File In A Compressed Filesystem?
Aug 29, 2011. I had read that shred doesn't work safely on compressed filesystems when shredding a file. How can this be accomplished on a compressed fs?
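On compressed (and copy-on-write) filesystems, overwriting a file in place is unreliable, so one sketch of an alternative is to delete the file and then wipe the filesystem's free space; random data does not compress, so it genuinely occupies the underlying blocks (the mount point below is a placeholder):
Code:
$ rm secret_file
$ dd if=/dev/urandom of=/mnt/data/fill bs=1M   # runs until the filesystem is full
$ sync
$ rm /mnt/data/fill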
I am going crazy with a gzip file. I can decompress the file in Windows using WinRAR, but it is impossible on any UNIX operating system, even though the file seems to be OK. If I run file the_name_of_the_file.gz
I get: the_name_of_the_file.gz: gzip compressed data, from Unix, last modified: Sun Jan 30 14:10:21 2011
But if I do gunzip -f the_name_of_the_file.gz I always get: gzip: the_name_of_the_file.gz: unexpected end of file. The same problem happens when I try to extract the file using the GUI tools in Ubuntu or MacOSX.
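As a sketch worth trying: gzip can test the archive, and zcat still writes out everything that decompresses before the error, which often salvages most of a truncated file:
Code:
$ gzip -t the_name_of_the_file.gz                  # integrity test; reports where it fails
$ zcat the_name_of_the_file.gz > salvaged_output   # keeps the partial data despite the EOF error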
I have a 500 GB external USB HD. By mistake a folder with 13 GB was deleted; using some Windows recovery tools I got it back, but the deleted folder still takes up space on the HD!!
Any idea how to remove or edit it under Linux?
I need to resize an ext4 filesystem partition. How can I do it while being sure it won't get f#@ked up?
Is it safe to do it using a GParted live CD?
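A GParted live CD is generally the safe route, since it resizes the filesystem offline. Done by hand, a minimal sketch would be (sdXN is a placeholder; back up first, and shrink the filesystem before shrinking the partition, or grow the partition before growing the filesystem):
Code:
$ sudo umount /dev/sdXN          # the filesystem must not be mounted
$ sudo e2fsck -f /dev/sdXN       # mandatory check before resizing
$ sudo resize2fs /dev/sdXN 50G   # new size; omit the size to fill the partition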
I'm trying to figure out how to access compressed files without uncompressing them beforehand, and also without modifying the application/script I am using. Named pipes do the trick, but they only seem to work once.
In one terminal I do this:
Code:
$ echo "This is a file I'd like to be able to read." >> my_file
$ gzip my_file
$ mkfifo my_named_pipe
$ ls
my_file.gz my_named_pipe
$ gunzip -c my_file.gz >> my_named_pipe
[Code]...
Can anyone recommend a file system similar to SquashFS but writable?
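On the named-pipe part: the pipe works only once because the writer hits EOF and both ends close. One sketch is to re-open the writer in a loop; another is to skip the FIFO entirely with bash process substitution, which hands the reader a fresh stream every time:
Code:
$ while true; do gunzip -c my_file.gz > my_named_pipe; done &   # re-feed the pipe for each reader
$ cat <(gunzip -c my_file.gz)                                   # process substitution alternative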
I want to create a compressed ISO image file, mount it as a virtual drive, and access the content (read-only) without worrying about manual decompression/extraction. This is for both Windows and Linux (Ubuntu) OSes.
I am using the following command to back up an sql file:
tar -zcvf "$BACKUP_DST/$FILE_NAME.tgz" "$BACKUP_DST/$FILE_NAME.sql"
I want to make sure the compressed file won't be larger than 300 MB; if it exceeds 300 MB, it should be split into several files.
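A sketch using split, which chops the stream into 300 MB pieces as tar writes it (the .part suffix is my own naming):
Code:
$ tar -zcvf - "$BACKUP_DST/$FILE_NAME.sql" | split -b 300m - "$BACKUP_DST/$FILE_NAME.tgz.part"
$ cat "$BACKUP_DST/$FILE_NAME.tgz.part"* | tar -zxvf -   # how to restore from the pieces later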
Does anyone know how to compress a file to the .z extension? Not tar.gz, zip, or 7zip.
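For what it's worth, the lowercase .z suffix historically belongs to pack(1), while compress(1) produces uppercase .Z; if only the suffix matters, gzip can be told to use it. A sketch (compress comes from the ncompress package on most distros):
Code:
$ compress myfile      # produces myfile.Z (classic Unix compress)
$ gzip -S .z myfile    # gzip data, but written as myfile.z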
I was using dd if=/dev/zero of=/dev/user name/wipe.conf, however I got a message that my hard drive is full.
Lots of this is scary and dangerous. What is the best way to fill deleted file space with random data or zeros without the hard drive filling up?
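Filling the drive is actually expected: the fill file has to occupy every free block before it is deleted again. A minimal sketch of the usual pattern (the path is a placeholder):
Code:
$ dd if=/dev/zero of=/home/username/wipefile bs=1M   # stops with "no space left", which is the goal
$ sync                                               # make sure the blocks hit the disk
$ rm /home/username/wipefile                         # the free space is now zeroed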
I have a problem: I'm trying to make my own LiveCD, but I can't mount the compressed SquashFS file system. Here is my limited LiveCD version, if somebody would take a look: [URL]
How do I fill a file with <n> bytes of a given value, for example 3?
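A sketch with dd and tr: dd supplies the byte count, and tr rewrites each zero byte into the wanted value (0x03 here):
Code:
$ dd if=/dev/zero bs=1 count=1000 2>/dev/null | tr '\0' '\003' > filled.bin   # 1000 bytes of value 3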
I would like the backup script I am writing to create an sql dump of my database and send it directly into a tar file. Does anyone know how I could do this with one command?
To be more clear I would like to go from
mysqldump -u xxxx -pXXXXX tablename > currentbackup.sql
tar -czvf backup-XXXXXXXX.tgz currentbackup.sql
rm currentbackup.sql
To a single command somehow. Does anyone know how I could accomplish something like this?
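tar wants real files as members, so for a single dump the usual trick is to skip tar and pipe straight into gzip, which yields an equivalent compressed backup in one command (credentials and names as in the example above):
Code:
$ mysqldump -u xxxx -pXXXXX tablename | gzip > backup-$(date +%Y%m%d).sql.gz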
I have been having a recurring problem backing up my filesystem with tar, using bzip2 compression. Once the file reached a size of 4 GB, an error message appeared saying that the file was too large (I closed the terminal so I do not have the exact message; is there a way to retrieve it?). I was under the impression that bzip2 can support pretty much any size of file. It's rather strange: I have backed up files of about 4.5 GB before without trouble.
At the same time, I have had this problem before, and it's definitely not a memory problem: I am backing up onto a 100G external hard drive.
That reminds me, in fact (I hadn't thought of this), that one time I tried to move an archived backup of about 4.5 GB to an external drive (it may have been the same one) and it said that the file was too large. Could it be that there is a maximum size of file I can transfer to the external in one go? Before I forget, I have Ubuntu Karmic and my bzip2 version is 1.0.5 (and tar 1.22, though maybe this is superfluous information?).
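A 4 GB ceiling that shows up only on the external drive usually points to FAT32 (vfat), which caps individual files at 4 GiB no matter what tar or bzip2 supports. Worth checking before blaming the archiver (the mount point is a placeholder):
Code:
$ df -T /media/external   # if the Type column says vfat, the 4 GiB per-file limit applies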
I am attempting to be careful in case my system crashes, and although it is highly unlikely, my first question is whether there is a way to first compress my Linux partitions. After running the diskutil command in OSX's Terminal, I basically end up with this partition scheme:
Quote:
Macintosh HD = 130GB
disk0s3 = 1MB
disk0s4 = 30GB
Linux Swap = 1.3 GB
I am sure there is a way in the Terminal to first compress disk0s3, disk0s4, and Linux Swap, and then output the compressed partitions to my external hard drive. I have already read suggestions that only /home, /etc/fstab, a list of installed packages, /opt, and /var/cache/apt/archives/ (where all installed packages are stored) are what I should back up, but please correct me if I'm wrong. Wouldn't it take quite a while to install all those packages again in case of a system failure? Or would it just be easier to untar all of them into their directories once Linux has been reinstalled? The closest command I have found so far for achieving this is:
Quote:
sudo tar cvf - files | (cd target_directory ; tar xpf -)
The above command is very suitable for what I am looking for because it lets you copy files to another location using tar, creating the copy as it extracts. In my case the new location would be my external hard drive, which already has its own Linux partition that I am able to mount in Linux and that Linux sees as free space.
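For imaging whole partitions in compressed form, one hedged sketch is dd piped through gzip; note that under Linux the devices are named /dev/sdXN rather than the disk0sN names diskutil shows, and the swap partition holds nothing worth saving (recreating it with mkswap after a restore is enough):
Code:
$ sudo dd if=/dev/sdXN bs=1M | gzip > /mnt/external/linux_root.img.gz     # compressed partition image
$ gunzip -c /mnt/external/linux_root.img.gz | sudo dd of=/dev/sdXN bs=1M  # restoring it later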
When trying to create a new compressed/archive file in Gnome Commander (GC), the file is created but the selected files are not added. I can open the new (empty) archive file and then add the files to be compressed. I have tried several different formats (zip, tar.bz2, and others) with the same results. File Roller is shown as a plugin but has no configuration other than the compressed file type.
View 2 Replies View RelatedI heard that the ext file system does not causes file fragmentation. Could some one explain how this is achieved. And how come a file does not divides in to fragments as compared to Windows based Filesystems.
On a Linux CD/DVD there are compressed filesystem images for the live version, for KDE or Gnome for example. They have no extension, but they are clearly image files (compressed filesystem images of the live version before installation).
I was wondering: how do I mount these compressed filesystem images after I copy the ISO content of the CD/DVD onto my system? I want to edit some files or packages and make some changes, for example to customize a live version of Gnome. (I know you might be tempted to tell me to use KIWI etc. to customize.) But I want to be able to mount the compressed filesystem image and edit it for reading and writing while it sits in a subdirectory of its own. I want to open it! Is there a way to do this? These types of files have no extension.
Suppose I can open this compressed filesystem image and edit it read/write before rolling it back again. If and when I succeed, what should I watch out for? Will the same compressed image, slightly modified, work again?
PS: the same question could be rephrased or extended as: how do I use the unionfs/squashfs tools on the command line to mount these extensionless image files in read/write mode?
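SquashFS itself is read-only, so the usual cycle is mount, copy out, edit, rebuild; a minimal sketch, assuming the extensionless image is the live system's squashfs and squashfs-tools is installed:
Code:
$ sudo mount -t squashfs -o loop,ro filesystem.squashfs /mnt/squash   # mounts fine without an extension
$ cp -a /mnt/squash /tmp/edit                                         # writable working copy
$ mksquashfs /tmp/edit new.squashfs                                   # rebuild after editing /tmp/edit
The slightly modified image should still boot, as long as it is rebuilt with a squashfs version the live system's kernel understands.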
Consider the following:
mount | grep home
type reiserfs
rm -Rf /home/user/over_9000_little_and_big_super_secret_files/
# oops, I should have shredded it instead.
How can I properly and securely "initialize free space" to ensure that no additional info can be restored by digging in the free space (preferably without stopping or disturbing the filesystem much)? Is dd if=/dev/frandom of=/home/qqqqq really secure for this (tails, journal, etc.)?
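For this job the secure-delete package has a purpose-built tool, sfill, which fills and wipes only the free space (trying to cover inode and tail areas as well) and then removes its temp files; note that /dev/frandom is a third-party kernel module, while stock systems have /dev/urandom. A sketch:
Code:
$ sudo sfill -l -z /home   # wipe free space under /home; -l fewer passes, -z final zero pass
What the journal has already recycled is a separate question; no userspace wiper can promise much about that.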
Think of this directory as the current structure:
Quote:
|-- test
| `-- test1
`-- test.tar
test.tar is a compressed tar of /test/ (created with cvfz). Now I need to add another file called test2 to test.tar, WITHIN the test directory in the tar. Is this possible?
[Code]...
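It is possible, but not directly: tar -r can append, just not to a compressed archive, so the cycle is decompress, append, recompress. A sketch, assuming test2 sits at test/test2 on disk so that it lands inside the test directory of the archive:
Code:
$ zcat test.tar > plain.tar      # the file is gzip data despite the .tar name
$ tar -rf plain.tar test/test2   # append; the stored path places it inside test/
$ gzip -c plain.tar > test.tar && rm plain.tar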
Is grub able to load a non-compressed kernel and initrd? Is it mandatory to compress them, and why?
I can download a 700 MB .rar and get almost a gig worth of ISO, so I was wondering if anyone knows a site where they compress ISOs to rar or any other format, so that I can save time downloading.
Why? I recently tried downloading the Knoppix DVD, and when I reached 3.2 GB of 3.6 GB the download ended; I mean, I cannot resume it.
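On the resume problem specifically, a sketch: wget can continue a partial download when the server supports range requests (the URL is a placeholder):
Code:
$ wget -c http://example.com/knoppix-dvd.iso   # -c resumes from the existing partial file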
I'm relatively experienced with UNIX and Linux, but this has me thrown for quite a loop, and it seemed like such a simple question. How would I go about finding the newest file in a file system? I thought something like:
Code:
ls -ltr `find /usr -type f`
would work, but I seem to be exceeding the argument maximum for ls:
ksh: 0403-029 There is not enough memory available now
I thought something involving xargs might work, but I really suck with that command.
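With GNU find the argument-limit problem disappears entirely, because the timestamps come from find itself rather than from ls (the 0403- error prefix suggests AIX, where GNU findutils may need to be installed first). A sketch:
Code:
$ find /usr -type f -printf '%T@ %p\n' | sort -n | tail -1   # newest file prints last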
I have a problem mounting a compressed (ISZ) image under Linux, created by e.g. UltraISO. I am aware of the user-space fuseiso, but it fails to mount these images, as I have reported in the Debian bug tracker (correct me if I did something wrong). I am asking the community for help: I need a proven solution to mount these images without decompressing them. I believe the CONFIG_ZISOFS kernel option cannot help, as it refers to a special RockRidge extension (per-file compression with mkisofs -z or mkzftree).
We are working on 2.6.28. We have a requirement to boot using a non-compressed kernel image. How can I build a non-compressed kernel?
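A sketch: the build already produces an uncompressed ELF kernel at the top of the source tree; bzImage and zImage are the compressed wrappers around it. Whether your bootloader accepts the raw image is a separate question:
Code:
$ make vmlinux   # uncompressed ELF kernel at ./vmlinux
On some architectures (ARM, for example) there is also an uncompressed bootable target, built with make Image.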
In Debian/Ubuntu I want to:
a) Create a list of all the files in one directory tree
b) Do the same for a second directory tree
c) Compare the two lists such that only the file NAMES are compared (i.e. just comparing the "file.txt" part, so that "/home/folder/file.txt" == "/home/secondfolder/folder/file.txt")
d) Output a list of all the duplicates
How can I do this using scripting languages, regex, or something similar?
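A sketch with GNU find and comm: print only the basenames (%f), sort both lists, and let comm -12 keep the names present in both trees:
Code:
$ find /home/folder -type f -printf '%f\n' | sort > list1.txt
$ find /home/secondfolder -type f -printf '%f\n' | sort > list2.txt
$ comm -12 list1.txt list2.txt   # file names that occur in both trees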
Yesterday I installed the latest version of Ubuntu on my computer, which was already running Windows 7. I had everything working fine until, in Windows, I deleted a partition that had nothing in it.
After this I restarted, but I can't get into either OS. I get an error that says: error: unknown filesystem, followed by a grub rescue> prompt.
I think I need to fix something in grub. I have been booting off of a USB stick with Linux on it in the meantime.
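Deleting a partition shifts the partition numbering, so grub can no longer find its files. From the live USB, a common repair sketch (device names are placeholders for your disk and Linux root partition):
Code:
$ sudo mount /dev/sdaX /mnt                                # your Linux root partition
$ sudo grub-install --boot-directory=/mnt/boot /dev/sda    # reinstall grub to the MBR
If the boot menu still needs regenerating afterwards, chroot into /mnt and run update-grub.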
I was having trouble with the sound of WMV files on Linux, but I solved it by copying the w32 DLL libraries, and now it works just fine. There is one problem though: whatever player I use to open and play a WMV file (mplayer, vlc), it doesn't fit the full screen, while in Windows the same movie does.
I am trying to mount a file image, like this:
mount -o loop /tmp/apps.img /media/apps
But I get the following:
mount: you must specify the filesystem type
I try ext3:
mount -o loop /tmp/apps.img /media/apps -t ext3
dmesg says:
error: can't find ext3 filesystem on dev loop6.
I've also tried ext2, vfat etc. How can I detect the filesystem type of apps.img?
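Two hedged ways to identify what is actually inside the image before mounting it:
Code:
$ file /tmp/apps.img         # recognizes most filesystem superblocks and partition tables
$ sudo blkid /tmp/apps.img   # prints TYPE=... if a known filesystem is found
If file reports a partition table rather than a bare filesystem, the mount needs an offset to the first partition, e.g. mount -o loop,offset=32256 for the old 63-sector layout.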