General :: Need Writable Compressed File System
Mar 29, 2010

Can anyone recommend a file system similar to SquashFS but writable?
I have a problem: I'm trying to make my own LiveCD, but I can't mount the compressed SquashFS file system. Here is my limited LiveCD version, if somebody would take a look: [URL]
On a Linux CD/DVD there are compressed filesystem images for the live version (of KDE or GNOME, for example). They have no file extension, but they are clearly image files: the compressed filesystems the live version runs from before installation.
I was wondering: how do I mount these compressed filesystem images after I copy the ISO contents of the CD/DVD onto my system? I want to edit some files or packages and make some changes, for example to customize a live version of GNOME. (I know you might be tempted to tell me to use KIWI or similar tools to customize.) But I want to be able to mount the compressed filesystem image itself and edit it read/write while it sits in a subdirectory of its own. I want to open it! Is there a way to do this? These files have no extension.
Suppose I can open this compressed filesystem image and edit it read/write before rolling it back again. If and when I succeed, what should I watch out for? Will the same compressed image, slightly modified, still work?
PS: the same question could be extended as: how do I use the unionfs/squashfs tools on the command line to mount these extensionless image files in read/write mode?
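A minimal sketch with squashfs-tools, assuming the live image turns out to be SquashFS (the path casper/filesystem.squashfs is only an example; run file on your extensionless image first to confirm the format):
Code:
# Confirm the image really is SquashFS before anything else:
file casper/filesystem.squashfs      # should report "Squashfs filesystem"

# Unpack it into an ordinary, fully writable directory tree:
sudo unsquashfs -d edit/ casper/filesystem.squashfs

# ... edit files under edit/ ...

# Repack the tree into a fresh image to drop back into the ISO layout:
sudo mksquashfs edit/ filesystem.new.squashfs -noappend
A rebuilt image generally boots as long as it uses a compression the live system's kernel supports and any surrounding boot files (checksums, file lists) are regenerated where the distro expects them.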
I am going crazy with a gzip file. I can decompress the file on Windows using WinRAR, but it is impossible on any UNIX operating system, even though the file seems to be OK. If I run file the_name_of_the_file.gz
I get: the_name_of_the_file.gz: gzip compressed data, from Unix, last modified: Sun Jan 30 14:10:21 2011
But if I run gunzip -f the_name_of_the_file.gz I always get: gzip: the_name_of_the_file.gz: unexpected end of file. The same problem happens when I try to extract the file using the GUI tools on Ubuntu or Mac OS X.
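"Unexpected end of file" usually means the archive is truncated (WinRAR is more forgiving and extracts what it can). A minimal diagnostic sketch:
Code:
# Test integrity without extracting; a truncated download fails here too:
gzip -t the_name_of_the_file.gz

# Salvage whatever decompresses cleanly before the error; zcat leaves the
# original .gz untouched and writes to stdout:
zcat the_name_of_the_file.gz > recovered_file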
My Slackware display suddenly hung, showing just a black screen and a visible mouse pointer; not even Ctrl+Alt+Backspace worked. So I switched off the system with the power button on the case.
After that, when I tried to boot, the system would not get past going multi-user, with the error messages below:
Code:
Hanging @ starting
I can log in by booting single-user, but when I try to run
Quote:
startX
it gets stuck at the same errors again.
We are having a lot of trouble with this.
We need a command-line version of Linux like ttylinux. Any command-line distro with the latest kernel will do; it should be around 50 MB.
The problem is that all these small Linux versions are LiveCDs or use a compressed file system.
We need a SMALL Linux distro that we can install UNCOMPRESSED (no SquashFS etc.) on a hard disk.
This is so simple I'm sure I'm missing something.
I'm trying to figure out how to access compressed files without uncompressing them beforehand, and also without modifying the application/script I am using. Named pipes do the trick, but only seem to work once.
In one terminal I do this:
Code:
$ echo "This is a file I'd like to be able to read." >> my_file
$ gzip my_file
$ mkfifo my_named_pipe
$ ls
my_file.gz my_named_pipe
$ gunzip -c my_file.gz >> my_named_pipe
[Code]...
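A FIFO is consumed by its reader and then needs a fresh writer, which is why the trick only works once. A sketch of a commonly suggested alternative, bash process substitution, which sets up a brand-new pipe on every invocation:
Code:
# The consuming program just sees a readable path; gunzip feeds it through
# a pipe created for this one command:
wc -l <(gunzip -c my_file.gz)

# Run it again -- a new pipe is created each time, so it keeps working:
wc -l <(gunzip -c my_file.gz)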
I want to create a compressed ISO image file, mount that file on a virtual drive, and access the content (read-only) without worrying about manual decompression/extraction. This is for both Windows and Linux (Ubuntu).
I am using the following command to back up an SQL file:
Code:
tar -zcvf "$BACKUP_DST/$FILE_NAME.tgz" "$BACKUP_DST/$FILE_NAME.sql"
I want to make sure the compressed file won't be larger than 300 MB; if it exceeds 300 MB, split it into several files.
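A minimal sketch using split(1): stream the archive straight into 300 MB chunks instead of writing one big file first (the .part- suffix scheme is just an example):
Code:
# Pieces come out as $FILE_NAME.tgz.part-aa, .part-ab, ...:
tar -zcvf - "$BACKUP_DST/$FILE_NAME.sql" | \
    split -b 300M - "$BACKUP_DST/$FILE_NAME.tgz.part-"

# Restoring is just concatenation in name order:
cat "$BACKUP_DST/$FILE_NAME.tgz.part-"* | tar -zxvf -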
I had read that shred does not work safely on compressed filesystems when shredding a file. How can this be accomplished on a compressed FS?
View 1 Replies View RelatedAnyone know how to compress a file to extension z?not tar.gz , zip, 7zip
How can I copy a read-only file in Linux (Ubuntu 10.04) and make the copy writable, with a single cp command? The --preserve and --no-preserve options seemed like good candidates, except that they appear to AND the mode flags, while what I am looking for is something that ORs them (adds +w).
More details: I have to import a repository from Git to Perforce. I want all Perforce depot files to be read-only (that is how Perforce was designed), while all files derived/copied from depot files are writable. Currently, if a Makefile copies a read-only file, the derived file is also read-only, which leads to build errors when cp tries to overwrite the read-only file a second time. --force is of course a workaround, but then the derived file is still read-only. I also do not want to mess with chmod after each cp command; I will do that only as a last resort.
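A sketch worth testing with GNU coreutils: --no-preserve=mode does not AND the source bits; it drops them entirely and lets the copy take the default mode from your umask, which normally includes u+w:
Code:
# The copy gets your umask's default mode, not the source's read-only mode:
cp --no-preserve=mode depot_file derived_file

# install(1) is an alternative that sets an explicit mode in one step:
install -m 644 depot_file derived_file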
I would like the backup script I am writing to create an SQL dump of my database and send it directly into a tar file. Does anyone know how I could do this with one command?
To be more clear, I would like to go from
Code:
mysqldump -u xxxx -pXXXXX tablename > currentbackup.sql
tar -czvf backup-XXXXXXXX.tgz currentbackup.sql
rm currentbackup.sql
to a single command somehow. Does anyone know how I could accomplish something like this?
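A minimal sketch, assuming a .sql.gz file is an acceptable stand-in for a one-file .tgz (tar wants a file on disk, but gzip happily compresses a pipe):
Code:
# Dump and compress in one pipeline; no intermediate .sql file to delete:
mysqldump -u xxxx -pXXXXX tablename | gzip > "backup-$(date +%Y%m%d).sql.gz"

# Restoring is the same pipeline reversed:
gunzip -c backup-XXXXXXXX.sql.gz | mysql -u xxxx -pXXXXX tablename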
I have been having a recurring problem backing up my filesystem with tar, using bzip2 compression. Once the file reached a size of 4 GB, an error message appeared saying that the file was too large (I closed the terminal, so I do not have the exact message; is there a way to retrieve it?). I was under the impression that bzip2 can support pretty much any size of file. It's rather strange: I have backed up files of about 4.5 GB before without trouble.
I have had this problem before, and it's definitely not a question of running out of room: I am backing up onto a 100 GB external hard drive.
That reminds me, in fact (I hadn't thought of this), that one time I tried to move an archived backup of about 4.5 GB to an external drive (it may have been the same one) and it also said the file was too large. Could there be a maximum size of file I can transfer to the external drive in one go? Before I forget: I am on Ubuntu Karmic, and my bzip2 version is 1.0.5 (and tar 1.22, though maybe that is superfluous information).
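A per-file ceiling of exactly 4 GB is the signature of a FAT32-formatted drive (FAT32 caps individual files at 4 GiB regardless of free space). A quick check, with the mount point below only an example:
Code:
# Column 2 shows the filesystem type; "vfat" means FAT32 and its 4 GiB cap:
df -T /media/external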
I am attempting to be careful in case my system crashes. Although a crash is highly unlikely, my first question is whether there is a way to compress my Linux partitions first. After running the diskutil command in OS X's Terminal, I basically end up with this partition scheme:
Quote:
Macintosh HD = 130GB
disk0s3 = 1MB
disk0s4 = 30GB
Linux Swap = 1.3 GB
I am sure there is a way in the Terminal to compress disk0s3, disk0s4, and Linux Swap first, and then write the compressed partitions to my external hard drive. I have already read suggestions that only /home, /etc/fstab, a list of installed packages, /opt, and /var/cache/apt/archives (where all downloaded packages are stored) need backing up, but please correct me if I'm wrong. Wouldn't it take quite a while to install all those packages again after a system failure? Or would it be easier to untar them back into their directories once Linux has been reinstalled? The closest command I have found so far for this is:
Quote:
sudo tar cvf - files | (cd target_directory ; tar xpf -)
The above command is very suitable for what I am looking for because it copies files into another location using tar; in my case the new location would be my external hard drive. The external drive already has its own Linux partition, which I can mount in Linux and which Linux sees as free space.
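For imaging whole partitions compressed, rather than copying files, a minimal sketch using dd piped through gzip (device and volume names follow the diskutil listing above, but double-check them, since dd overwrites without asking; image a partition only while it is not mounted):
Code:
# Read the raw partition, compress on the fly, write one file to the external:
sudo dd if=/dev/disk0s4 bs=1m | gzip > /Volumes/External/linux_root.img.gz

# Restoring is the reverse pipeline:
gunzip -c /Volumes/External/linux_root.img.gz | sudo dd of=/dev/disk0s4 bs=1m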
When trying to create a new compressed/archive file in Gnome Commander, the file is created but the selected files are not added. I can open the new (empty) archive file and then add files to be compressed. I have tried several different formats (zip, tar.bz2, and others) with the same results. File Roller is shown as a plugin but has no configuration other than the compressed file type.
I encountered a problem when trying to access my phpMyAdmin; this error came up: "Wrong permissions on configuration file, should not be world writable!"
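phpMyAdmin refuses to run while its config is world-writable, so loosening permissions further (777) makes this error worse, not better. A sketch, with the config path only an example since it varies by distro and hosting setup:
Code:
# Remove the world-writable bit that phpMyAdmin is complaining about:
chmod o-w /etc/phpmyadmin/config.inc.php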
Think of this directory as the current structure:
Quote:
|-- test
| `-- test1
`-- test.tar
test.tar is a compressed tar of test/ (created with cvfz). Now I need to add another file, called test2, to test.tar, inside the test directory within the archive. Is this possible?
[Code]...
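tar cannot append to a gzip-compressed archive directly, so the usual route is decompress, append, recompress. A sketch, assuming test2 has first been placed in the test/ directory on disk (note that an archive made with cvfz is gzipped even when it is named plain .tar):
Code:
# Uncover the gzip layer and strip it off:
mv test.tar test.tar.gz
gunzip test.tar.gz                # leaves an uncompressed test.tar

# Append test2 under the test/ path, then recompress if desired:
tar rvf test.tar test/test2
gzip test.tar                     # produces test.tar.gz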
Is GRUB able to load a non-compressed kernel and initrd? Is it mandatory to compress them, and why?
I can download a 700 MB .rar and get almost a gigabyte's worth of ISO, so I was wondering if anyone knows a site that compresses ISOs into RAR or any other format, so that I can save downloading time.
Also, I recently tried downloading the Knoppix DVD, and when I reached 3.2 GB of 3.6 GB the download ended; I mean, I could not resume it.
I have a Debian system in which the OS is mounted on an ext3 filesystem. I have a 60 GB partition formatted as ext2. Even after I mount it, I can't write anything to it. How can I change that? How can I make the disk writable?
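The root directory of a freshly made ext2 filesystem belongs to root, which blocks ordinary users even when the mount succeeded. A sketch, with the device, mount point, and username as assumptions:
Code:
sudo mount /dev/sdb1 /mnt/data
# Hand the filesystem's top directory to your user (a one-time change):
sudo chown youruser: /mnt/data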
View 2 Replies View Related$ whoami
meder
$ cd /var/www
$ sudo mkdir html
$ sudo groupadd web
$ sudo usermod -a -G web meder
$ sudo usermod -a -G web medertest
$ sudo chown meder:web html
$ sudo chmod -R g+rwx html
The problem is that any time I create a new file in /var/www/html, even though the group is set to web, it is only writable by the original user. I was advised to set the umask to 002, because the default umask is what causes the problem. But I would have to do this for every user in that group, and as far as I know it would be tedious to have all of them modify ~/.bashrc to set umask 002. Even if I could do it myself with a shell command for all of those users, it still seems too tedious.
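A sketch of a common alternative that avoids touching anyone's umask: the setgid bit makes new files under the directory inherit the web group, and a default ACL grants that group write access on everything created there (this needs ACL support, which ext3/ext4 usually provide):
Code:
# New files and subdirectories inherit the "web" group automatically:
sudo chmod g+s /var/www/html

# Default ACL: anything created under html/ is group-writable from birth:
sudo setfacl -R -d -m g:web:rwx /var/www/html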
I want (for no reason, of course, purely out of interest) to get into the internals of my router (which happens to run Linux) and mount a drive where I can save some data that will persist across reboots. I don't care if it's only 16 KB or whatever. I am guessing it must have some writable memory for the router configuration. I can telnet into the router and get a shell (BusyBox). I am also aware of options such as Tomato and DD-WRT that replace the router's firmware and give much more access, but I do not want to take the risk, nor the time to configure one, and I would like to keep my ISP's firmware. Some things that might be useful:
[code]...
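Two BusyBox-friendly commands, sketched below, usually reveal where a router keeps its writable storage (typically an MTD flash partition or a tmpfs):
Code:
# List the flash (MTD) partitions the kernel knows about:
cat /proc/mtd

# Show which mounts are read-write right now:
mount | grep -w rw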
I want to map a Windows shared folder to a local directory, but I can't make it writable. I use the following mount command:
Code:
mount -t smbfs -o username=kcynice,password=kcynice,user,rw //192.168.1.100/SharedDocs /mnt/WinShare
This command mounts the network folder successfully, but I can only write to it from a terminal as root. I googled but got no answer.
So, how can I mount it so that a normal user can write to it?
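With smbfs/cifs, local ownership of the mounted files is whatever the uid/gid mount options say; leave them out and everything belongs to root. A sketch (uid=1000 is an assumption: substitute the output of id -u for your user; newer kernels want -t cifs instead of -t smbfs):
Code:
mount -t cifs -o username=kcynice,password=kcynice,uid=1000,gid=1000,rw \
    //192.168.1.100/SharedDocs /mnt/WinShare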
I have a problem mounting a compressed (ISZ) image under Linux, created by e.g. UltraISO. I am aware of the user-space fuseiso, but it fails to mount these images, as I have reported in the Debian bug tracker (correct me if I did something wrong). I am asking the community for help: I need a proven solution for mounting these images without decompressing them. I believe the CONFIG_ZISOFS kernel option cannot help, as it refers to a special Rock Ridge extension (per-file compression with mkisofs -z or mkzftree).
We are working on 2.6.28. We have a requirement to boot using a non-compressed kernel image. How can I build a non-compressed kernel?
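One fact worth noting: every kernel build already produces an uncompressed ELF image, vmlinux, at the top of the source tree; bzImage/zImage are compressed wrappers around it. Whether a bootloader can start a raw vmlinux is architecture- and bootloader-specific, so treat this only as a sketch:
Code:
# Build just the uncompressed ELF kernel image:
make vmlinux
ls -lh vmlinux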
I am getting the following errors while installing TimeTrex payroll: Cache Directory: Warning: Not writable (/hrm/cache); Writable Storage Directory: Warning: Not writable (/hrm/storage); Writable Log Directory: Warning: Not writable (/hrm/log). I applied permission settings (777) through cPanel, but in vain. I found information about .htaccess files, from which write attributes can supposedly be applied to a directory, but I don't understand how to apply this on Linux-based hosting.
View 1 Replies View RelatedIf I have a program running as root, can I have the config files as follow :Code:-rw------- 1 user user 50310 Mar 5 15:16 configfile.confRoot will be able to read the config-files, right ??And only the user 'user' will be able to change the config-files, right ?
View 3 Replies View RelatedBeing a system administrator i came across a statement as "Excluding temporary directories /tmp and /var/tmp, no root owned files should be in world writable directories"While the above statement may look straight forward but how would i check if there are any such directories in the distribution?
View 14 Replies View RelatedI work with a Debian Squeeze on my laptop and I have a 160GB external hard disk. My hard disk was formatted FAT32, but I decided to format it using ext2. I formatted it using fdisk from command line and everything went well. Unfortunately, when I mount my hard drive(which is auto-mounted from Debian) it has got root both as owner and group. Then I can't write to it because I have no permission to do that. Is there a setting to create an ext2 partition which has as owner the logged system user in order to have right permission every time.