General :: Ext4 File System Mounting Compatibility With The Older Ext3 Type?
Sep 7, 2009
How well does the new ext4 file system handle mounting compatibility with older ext3 Linux installations? I am referring to Ubuntu 9.04 and the new Fedora 11, which offer the option to install with the ext4 format. Would it be better to install with the older ext3, so that I can mount all the other Linux installations from each other in a multi-boot system?
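For reference: an ext4-capable kernel (2.6.28 or later for stable ext4) mounts ext3 volumes without trouble, but an older ext3-only install cannot mount an ext4 volume. A rough sketch of a quick check to run on each install:

grep -w ext4 /proc/filesystems                          # if this prints a line, the kernel can mount ext4
sudo modprobe ext4 && grep -w ext4 /proc/filesystems    # otherwise, try loading the driver first

If some of the distros fail both checks, formatting the shared partitions as ext3 is the safe choice for a multi-boot setup.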
I was in the process of installing Ubuntu 11.10, but got stuck choosing which file system to use: ext3 or ext4. Which is better for a personal desktop? If ext4 is better, will it work well on my old PC (bought three years ago), or is ext4 perhaps not compatible with an old hard disk?
I have an image of an ext3 file system made with dd. I know the file system is corrupted, but I want to try to recover some files. Whether I dd it back to the original partition or attach the image to a loop device, the same thing happens:
- dumpe2fs -h gives me a valid ext3 superblock.
- when I try to mount the device read-only, it fails with a bad magic number error.
- running dumpe2fs -h again also gives a bad magic number error.
- trying debugfs or fsck with backup superblocks fails the same way.
To me it seems that, despite the device being mounted read-only, the mount command does something to the superblock: before the mount, the superblock is correct and present.
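One detail worth knowing here: for ext3/ext4, even a read-only mount replays the journal, which means mount really does write to the device; the "noload" option skips the replay. A minimal sketch for inspecting the image without letting anything touch it, assuming the image file is /path/to/image.dd (hypothetical path):

losetup --read-only /dev/loop0 /path/to/image.dd    # the loop device rejects all writes
dumpe2fs -h /dev/loop0                              # the superblock should stay valid now
mount -t ext3 -o ro,noload /dev/loop0 /mnt

With the journal replay skipped and the loop device read-only, nothing can alter the superblock between commands.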
Which file system does CentOS 5.6 use by default, ext3 or ext4? I have it installed on ext3 (it is an upgrade from 5.5), but how do I convert to ext4 without damage or data loss?
I have a dual-boot system, i.e., Windows XP and Ubuntu 9.10, installed side by side. When I try to boot Ubuntu, I get an sh:grub> prompt.
[Code]...
I am getting something like this: "root mount file system failed... ext2 ext3 ext4...", then a kernel panic message, and it hangs at kernel_thread_helper+. What can I do? I can't reinstall Ubuntu, because I have installed many applications there.
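If the machine drops to a GRUB-legacy shell, it is often possible to boot by hand and then repair things from inside the running system. A minimal sketch, assuming Ubuntu 9.10's stock kernel 2.6.31-14-generic and root on /dev/sda2 (both hypothetical; check with GRUB's find command and adjust):

find /boot/grub/stage1                # prints the (hdX,Y) that holds Ubuntu
root (hd0,1)
kernel /boot/vmlinuz-2.6.31-14-generic root=/dev/sda2 ro
initrd /boot/initrd.img-2.6.31-14-generic
boot

Once booted, reinstalling GRUB to the MBR (grub-install /dev/sda) usually makes the bare prompt go away.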
If you have a contiguous partial piece of an ext4 file system (assume it is perfectly clean), starting from the beginning of the partition, is there any way to check it, or to mount it to get at the files whose parent directories, inodes, and data are all completely contained inside?
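One route that does not require a mountable device: debugfs can open a damaged or partial file system in "catastrophic" mode (-c, which skips reading the bitmaps) and copy out anything whose metadata happens to lie inside the piece. A minimal sketch, assuming the fragment is saved as piece.img and /some/dir is a reachable directory (hypothetical names):

debugfs -c piece.img
debugfs:  ls -l /some/dir             # browse what is reachable
debugfs:  rdump /some/dir /tmp/out    # recursively copy it out to the local fs

Reads that land beyond the end of the fragment simply fail, but everything fully contained in the piece can be extracted this way.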
Have (or maybe had) a very large 11TB RAID 6 array, holding a single large ext4 partition. Something strange happened when a single drive failed, and the array ended up marking 3 of the 13 drives as failed. I had trouble getting the array restarted, and got to the point where I had exhausted all of the options I considered completely safe. I considered a few things that might have worked, but mdadm doesn't seem to have a definite "do not change anything" option. So I decided the only way to be absolutely safe would be to clone the disks before proceeding; then I realized how much time that would take, and sent the drives off to a recovery service so they could image them and check things out.
Before doing so, I copied the first 2GB from each disk. I XORed the images from the working drives to reconstruct the data chunks that were on the failed disk, manually assembled the chunks, and am very confident that I have 22GB of "correct" data in a single file. The parity and Q syndromes all matched (with RAID 6 you can still verify with one device missing). I've learned the fine details of ext4 from [URL], and have looked at lots of raw data from the reconstructed partition; it all looks good. The recovery company says they're not finding many inodes, but I found a lot of them, exactly where they're supposed to be. I tried mount and e2fsck, but both seem to be extremely unhappy that the device size doesn't match the size implied by the file system geometry.
I considered hacking the superblock to manually reduce the size, but I figure that wouldn't work, because there would then be more group descriptor blocks than it would expect after the superblocks. I might try doing that anyway and compensate by incrementing the "reserved block count". Alternatively, if there is some way to make the file appear to be the expected size, with nothing but zeroes after the end of the actual data, maybe I could mount it and not get any errors until I cause the kernel to read past the true end of the file.
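The zero-padding idea is actually easy to test, because a sparse file reads back zeroes without using any disk space. A minimal sketch, assuming the reconstructed image is recovered.img and using made-up geometry numbers (take the real ones from dumpe2fs):

dumpe2fs -h recovered.img | grep -i 'block count\|block size'   # e.g. 2684354560 blocks of 4096 bytes
truncate -s $((2684354560 * 4096)) recovered.img                # extend with a sparse hole of zeroes
losetup --read-only /dev/loop0 recovered.img
mount -t ext4 -o ro,noload /dev/loop0 /mnt

The device size then matches what the superblock claims, and reads past the true end of the data return zeroes rather than I/O errors.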
I just rebuilt the kernel for Slackware 13. Everything works, except that the root file system, which is ext3, is mounted as ext2. Normally I build ext3, ext4 and so on as modules, not into the kernel... but if I do this, the kernel mounts the file system as ext2, which is built into the kernel. I also modified rc.modules to make sure that ext3, ext4 and jbd are loaded, but it doesn't work.
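On Slackware the usual fix, short of compiling ext3 into the kernel (CONFIG_EXT3_FS=y), is an initrd that loads the module before the root file system is mounted; rc.modules runs far too late for that. A minimal sketch, assuming the custom kernel is version 2.6.29.6 and root is /dev/sda1 (both hypothetical; adjust to your build):

mkinitrd -c -k 2.6.29.6 -m jbd:ext3 -f ext3 -r /dev/sda1
# then add "initrd = /boot/initrd.gz" to the kernel stanza in /etc/lilo.conf and rerun lilo

Either approach works; the built-in route just avoids maintaining an initrd across kernel rebuilds.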
I grabbed the new Lubuntu 10.10 from [URL], but it turns out I'm having a problem installing it on my netbook (Asus Eee PC 1015PED). While installing, this error pops up:
Quote:
The attempt to mount a file system with type ext4 in SCSI2 (0,0,0), partition #1 (sda) at / failed. You may resume partitioning from the partitioning menu.
I'm installing via USB and have selected the option to erase everything and use the full HDD.
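Before suspecting the disk, it is worth ruling out a bad download or a bad write to the USB stick, which produces exactly this kind of mid-install mount failure. A rough sketch, assuming the ISO file name below (hypothetical):

md5sum lubuntu-10.10-desktop-i386.iso   # compare against the MD5SUMS file published next to the download
# then, at the USB stick's boot menu, run "Check disc for defects" before installing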
Unable to install Ubuntu 9.10 on a new internal hard drive. The hard drive contains no operating system, and it is the only drive present in the system.
Whenever the installation tries to mount the ext4 partition, the following error appears: The attempt to mount a file system with type ext4 in SCSI1 (0,0,0), partition #1 (sda) at / failed
I've tried over and over to get past this error, to no avail.
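One workaround commonly suggested for this installer error is to wipe whatever stale metadata may be sitting at the start of the drive and let the installer build a fresh partition table. A sketch, run from the live session's terminal, assuming the target really is /dev/sda (this destroys the partition table, which is acceptable here since the whole disk is to be used anyway):

sudo dd if=/dev/zero of=/dev/sda bs=1M count=1
sudo sync
# then restart the installer and let it recreate the partitions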
I've recently installed Ubuntu 10.10 on a machine; unfortunately, the hard drive was partitioned as ext3. Is there a way of converting this partition to ext4 without having to reformat, and hence reinstall the entire OS?
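Yes: the conversion is done in place with tune2fs, but since this is the root file system it has to be done from a live CD, and the initramfs must be refreshed afterwards. A minimal sketch, assuming root is /dev/sda1 (hypothetical; check with sudo fdisk -l):

sudo tune2fs -O extents,uninit_bg,dir_index /dev/sda1
sudo e2fsck -fDC0 /dev/sda1     # mandatory after changing features; expect it to fix checksums once
sudo mount /dev/sda1 /mnt
# change ext3 to ext4 for / in /mnt/etc/fstab, then:
sudo mount --bind /dev /mnt/dev && sudo mount --bind /proc /mnt/proc && sudo mount --bind /sys /mnt/sys
sudo chroot /mnt update-initramfs -u

Note that existing files keep their old ext3 block layout; only files written after the conversion use extents.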
I have been able to get most of the way through the process of changing from ext4 back to ext3, but something is not quite right, so my system does not boot properly.
I have a system that was running Karmic Koala 9.10 as a server (no graphical environment). I had two drives using RAID1 with LVM on top, where the logical volumes of oldvg (the old volume group) were mostly ext4. /boot was not part of the RAID: it is on a separate physical drive and uses ext2.
I recently added two more drives, set them up with RAID1 and LVM as well, and made all the LV partitions (/, /usr, /var, /tmp, /opt, /home, /srv) ext3. I used rsync to duplicate the contents onto the logical volumes of newvg (the new volume group). I was careful with rsync's option switches, and this part seems fine.
I also edited the new /etc/fstab and changed the UUIDs of the seven mount points to point to the logical volumes in newvg instead of oldvg, and added new entries to the new /boot/grub/menu.lst referring to newvg, in addition to those I left in place referring to oldvg.
This wasn't sufficient: rebooting at this point failed. So I went in with a rescue disk and first updated /boot/grub/device.map to include the new physical drives. I then mounted all the new logical volumes, mounted /boot at its proper place as well, and entered a chroot of the new system as it should be mounted. Once there (and after making a backup of /boot), I ran "update-initramfs -k all -c" to rebuild the initrd images stored on /boot. Finally, I also edited /etc/mtab so that the two entries that referred to oldvg now refer to newvg instead.
Now, the machine begins to boot from newvg, but the console text includes messages like:
And a bit later,
Now, at this shell if I type mount, I see:
I am confused as to why there are only entries for /root and /var in /etc/mtab, instead of entries for all of the main mount points. I am thinking it must be part of the boot staging process, because there are entries for newvg-usr, newvg-tmp, etc. in /etc/fstab.
When I type any of pvdisplay, vgdisplay, or lvdisplay, I get
In fact, even if I run lvm, I get a similar error:
However, if I go back to the rescue cd, pvdisplay, vgdisplay, and lvdisplay do show that all of the partitions from both the old and new volume groups are available.
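Given those symptoms, a common culprit is that the initramfs never activates newvg, so the root device exists for the rescue CD but not during boot. A sketch of what to try at the initramfs (busybox) shell, assuming the lvm tool is present there:

lvm vgscan
lvm vgchange -ay    # activate all volume groups, newvg included
exit                # let the boot continue

If that gets the system up, also double-check that the kernel line in menu.lst passes root=/dev/mapper/newvg-root (hypothetical LV name) rather than the old volume's UUID.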
I have successfully upgraded my system from Lenny to Squeeze, and have even installed the NVIDIA driver successfully, as well as the other applications that I need. My system is now running smoothly and OK. My applications are also running smoothly, except Skype 2.2 (the Debian forum folks are currently helping me solve that).
However, I want to upgrade my file system to ext4 in order to take advantage of its more advanced features, especially now that my system is in workhorse mode. I am not confident enough to do it, though, because the available guides are limited and do not tackle the case of a system using ext3 with LVM2 on top.
Therefore, my question is: how do I migrate my ext3 to ext4 live, on a system that uses LVM2? A clear and understandable guide would be highly appreciated, especially as I am a newbie at this.
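The LVM2 layer changes less than you might fear: tune2fs operates on a logical volume exactly as it would on a plain partition; only the device path differs (e.g. /dev/mapper/vg0-root, a hypothetical name). What LVM does add is a cheap way to rehearse the whole procedure on a scratch volume first. A sketch, assuming a volume group named vg0 with some free space:

lvcreate -L 1G -n ext4test vg0
mkfs.ext3 /dev/vg0/ext4test
tune2fs -O extents,uninit_bg,dir_index /dev/vg0/ext4test
e2fsck -fDC0 /dev/vg0/ext4test    # the same steps you would later run on the real LV
lvremove vg0/ext4test

Once comfortable, run the same tune2fs/e2fsck pair on the real volume from a rescue environment, update /etc/fstab, and rebuild the initramfs before rebooting.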
I had a 5.4 machine and upgraded to 5.5 today via yum upgrade. All went fine. Rebooted. Then I wanted to convert the root partition to ext4 (I have three partitions: /boot, / and swap), all of them on software RAID 1 (root is /dev/md2). I did the following to convert:
yum install e4fsprogs
tune2fs -O extents,uninit_bg,dir_index /dev/md2
nano /etc/fstab    # I indicated here that my /dev/md2 is ext4
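Two things appear to be missing from that sequence. First, a forced file system check is mandatory after enabling uninit_bg, before the partition is mounted again; second, on CentOS 5 the ext4-aware tools from e4fsprogs carry a "4" in their names (tune4fs, e4fsck), and for a root file system the initrd must also be rebuilt so that it mounts /dev/md2 as ext4. A sketch of the missing steps:

e4fsck -fDC0 /dev/md2                                   # fixes the group checksums the feature change left invalid
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)    # regenerate the initrd for the running kernel

Skipping the fsck typically surfaces as checksum errors at the next boot-time check.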
I used the ext3 format when I formatted my partition prior to installing Ubuntu 10.10. I had accidentally deleted a file and began the process of getting it back. It wasn't critical, but it would have been helpful to recover the file. To make a long story short, I ran into some unexpected roadblocks: I tried to use PhotoRec to get the job done, but with no success.
I'm just looking down the road, in case I might someday have to recover something important. If it would be better to go back to the FAT32 file system, I would rather do it sooner than later. Just as a side note, I am dual-booting between Linux and Windows.
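Before giving up on ext3, note that PhotoRec only carves by content signatures, while extundelete uses the ext3/ext4 journal and can often bring back file names and paths as well. A sketch, assuming the file lived on /dev/sda6 (hypothetical) and that you stop writing to that partition immediately:

sudo umount /dev/sda6
sudo extundelete /dev/sda6 --restore-all    # recovered files land in ./RECOVERED_FILES/

FAT32 would not really improve recoverability, and it would cost you permissions and large-file support; regular backups remain the reliable answer for anything important.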
How do I convert ext3 to ext4? I'd like to convert the partitions I use for virtual machines (vmware-server and VirtualBox). I use Ubuntu 9.10 as the vmware-server host and Gentoo as the VirtualBox host.
I'm trying to mount a second hard drive as ext3 (rw, acl, user_xattr). I type the following:
# mkfs.ext3 -c /dev/sdb1 (it seems to create a file system on this 2nd HD)
then type:
# mount -v -t ext3 /dev/sdb1 / (it seems to mount it)
But when I check the ext3 systems with typing:
# mount -t ext3 (to check the list of mounted ext3 file systems, it gives me this)
/dev/sda1 on / type ext3 (rw,acl,user_xattr)
/dev/sda2 on /home type ext3 (rw,acl,user_xattr)
/dev/sdb1 on / type ext3 (rw)
How can I make /dev/sdb1, type ext3, show (rw,acl,user_xattr) like the others?
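Two notes on the transcript above. Mounting /dev/sdb1 at / shadows the root file system, so a dedicated mountpoint is needed; and the options can either be passed at mount time or baked into the superblock as defaults with tune2fs -o. A sketch, using /mnt/disk2 as a hypothetical mountpoint:

sudo mkdir -p /mnt/disk2
sudo mount -o acl,user_xattr /dev/sdb1 /mnt/disk2
sudo tune2fs -o acl,user_xattr /dev/sdb1    # or: make them the default mount options permanently

After the tune2fs route, a plain mount (or an fstab entry with "defaults") picks the options up automatically.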
I have accidentally removed a VMware virtual disk. My host operating system is RHEL 5.2 with an ext3 file system. I have used photorec, magicrescue, and foremost, but still no luck recovering the vmdk file. I have seen that foremost's configuration file has some predefined file types (e.g., doc, pdf, jpg, avi, zip, etc.).
1. Is there any way to add the vmdk file extension to that configuration file?
2. If yes, how do I do it?
3. By adding vmdk to the configuration file, can I specifically use the recover option for vmdk?
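Yes to all three: foremost's config file takes custom entries of the form "extension case-sensitivity max-size header [footer]". VMware sparse-extent vmdk files start with the magic bytes KDMV (descriptor-only vmdk files are plain text instead, beginning with "# Disk DescriptorFile"). A sketch of an entry to append to /etc/foremost.conf, with a guessed maximum size:

vmdk    y    10000000000    KDMV

# then carve with the custom config (device and output dir are hypothetical):
foremost -c /etc/foremost.conf -i /dev/sdaX -o /recovery

Write the recovery output to a different disk than the one being carved.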
In my system of around 73GB (desktop PC) I have one primary partition (Windows) of 25GB and one extended partition (the remaining GB), with three logical partitions inside the extended partition. One of the logical partitions is the D: drive; on my hard disk the D: drive is /dev/sda5.
Previously it had a FAT file system (D: drive = /dev/sda5). I remember I changed the D: drive's (/dev/sda5) file system to an ext4 file system with the following command in the terminal.
After doing this (changing the file system), I couldn't see the D: drive's data. By doing that:
1q) Did I reformat the partition? I think the new file system (ext4) has no knowledge of the data that was on it when it had a FAT file system.
2q) How do I undo the operation? I tried to change the file system type back to FAT/NTFS in the terminal using the command: sudo mkfs -t FAT /dev/sda5.
Result: it shows the message "mkfs.FAT: No such file or directory".
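The error is only a naming problem: the FAT formatter is called mkfs.vfat (package dosfstools), and mkfs -t expects that suffix. Note, though, that mkfs has no undo; formatting ext4 over the old FAT destroyed that FAT's metadata, and formatting back to FAT will not bring the data back. A sketch, assuming the partition really is /dev/sda5 and should become FAT32:

sudo mkfs -t vfat -F 32 /dev/sda5    # equivalent to: sudo mkfs.vfat -F 32 /dev/sda5

If any of the old data still matters, run a carving tool such as photorec against the partition before formatting it yet again.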
I have installed Ubuntu on my PC. I made three partitions: one for the system, one for data, and one for swap; two of them were ext4. After some time I reinstalled Ubuntu, but this time I chose not to format the second partition, just to mount it as ext4. After that I cannot open my files. Checking with GParted shows 2GB used, but df shows 188MB, and the properties dialog says "ext3/ext4 filesystem". I used chown and chgrp, but they didn't help. Please help; this data is very important and I cannot lose it.
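Before anything else, stop writing to that partition. When GParted and df disagree like this, one low-risk check is to let e2fsck consult a backup superblock instead of the primary one. A sketch, assuming the data partition is /dev/sda2 (hypothetical):

sudo umount /dev/sda2
sudo mke2fs -n /dev/sda2          # -n is a dry run: it prints backup superblock locations and writes NOTHING
sudo e2fsck -b 32768 /dev/sda2    # 32768 is typically the first backup on a 4 KiB-block fs

If the data is truly irreplaceable, image the partition with dd first and run the check against the copy.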
How can I format a USB hard drive to ext3/ext4 (or whatever format) and have full permission to read, write, and execute all files afterwards? Using the command line (as root, of course) with mkfs.ext3 /dev/sdb restricts the rights to root, as does the GParted procedure. The man page for mkfs did not help much. Configuring the fstab file is a bit of a hassle, so it would be nice if there were an option to set the permissions "correctly" right from the beginning. Setting Ubuntu (I'm using Ubuntu 9.10) up so that it mounts USB devices not as root by default, but giving all users all permissions, seems to be really complicated, as a guy from my local LUG told me.
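There is no mkfs option for this, because ownership lives inside the file system, not in the formatter: after creating the file system, mount it once and chown its top-level directory to your user. A minimal sketch, assuming the partition is /dev/sdb1 and your login is "user" (both hypothetical):

sudo mkfs.ext3 /dev/sdb1
sudo mount /dev/sdb1 /mnt
sudo chown user:user /mnt    # hand the root directory of the new fs to your account
sudo chmod 755 /mnt

From then on, the drive mounts with your files writable by you on any machine where your uid matches.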
There were some files residing on my ext3 file system (I use Ubuntu as my Linux distribution). Yesterday I formatted the hard drive using a Windows install CD, rewriting it with a new NTFS partition. I would like to restore my personal files deleted by this format.
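Since the NTFS format overwrote the ext3 metadata, file names and directory structure are gone; content carving with photorec is what remains, and it should be run against an image rather than the original disk. A sketch, assuming the reformatted disk is /dev/sda and a second disk is mounted at /mnt/other (both hypothetical):

sudo dd if=/dev/sda of=/mnt/other/disk.img bs=1M conv=noerror,sync
sudo photorec /mnt/other/disk.img    # carve the image, not the original

The less the disk is used before imaging, the more survives.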
I bought an SSD drive for my laptop, installed it, installed Windows 7, then installed Kubuntu 11.04. Until then everything worked fine, and I had the following partitions on my disk:
It worked fine. While using 11.04 I encountered a serious bug in nvidia 270.41.06 and decided to switch to Kubuntu 10.10. I installed 10.10 on the very same /dev/sda5 (ticking the checkbox to format it). Everything seemed fine: GRUB was installed, pointing to Win7 and Kubuntu 10.10. I disabled ext4 journaling as above, rebooted, and found that GRUB now pointed to Win7 and 11.04, and that that system (which should have been removed during the installation of 10.10) loaded perfectly fine. I checked where 11.04 had been installed: still /dev/sda5. Win7 loads fine as well, so there is no Linux on /dev/sda2. I checked whether there was a 10.10 kernel in /boot: no. The file system on sda5 had no trace of 10.10.
I formatted sda5 with GParted, installed 10.10 again, disabled journaling, and the situation repeated: the whole file system on sda5 changed. Enabling journaling did nothing; 10.10 didn't come back. I deleted sda3, sda5, and sda6, made them again, installed 10.10, disabled journaling, and finally had my 10.10 on ext4 without journaling. So this is kind of solved, but I would still like to know what the hell happened. For a moment it looked like two file systems coexisted on one partition.
Is there a way of sharing an ext3/ext4-formatted partition on an external USB drive between different users (uids) on different Linux machines, without creating a group for this purpose, setting the group ownership of the partition to this group, and adding each respective user to the group on every machine? That would mean I need root privileges on every machine, which I may not have in some cases. I'm using the partition to store the code I'm developing on Linux, and I would like the option to be safe, if possible. I could use a vfat partition, but then I have no control over the rw rights, and I cannot develop directly in the directory: I would always have to tar.gz the directory, extract, work, tar.gz again, and copy back to the external drive.
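One crude but portable option, needing root only once on a single machine, is to make the shared directory world-writable with the sticky bit, the same arrangement /tmp uses. A sketch, assuming the drive mounts at /mnt/usb and the code lives in /mnt/usb/code (hypothetical paths):

sudo chmod 1777 /mnt/usb         # anyone may create files; the sticky bit blocks deleting others' files
chmod -R a+rwX /mnt/usb/code     # run as the current owner: opens up existing files and directories

New files are still created under whatever uid is active, so the a+rwX pass may need repeating; for code specifically, a version-control checkout per machine sidesteps the uid problem entirely.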
I have just purchased a 2TB drive for my server, and I was trying to get an idea of the differences between these file systems and others out there. How much space is left after formatting with ext4, ext3, and NTFS?
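Roughly: ext3/ext4 set aside a few percent for inode tables and other metadata, plus a 5% root-only reserve, which on a 2TB disk is around 100GB; NTFS shows almost the full size, with its MFT growing as needed. On a pure data disk the ext reserve can be reclaimed. A sketch, assuming the new drive's partition is /dev/sdb1 (hypothetical):

sudo tune2fs -m 1 /dev/sdb1    # shrink the root reserve from 5% to 1% (or 0 for none)
df -h                          # compare the Available column before and after

A marketing "2TB" is also decimal (2 x 10^12 bytes), which by itself accounts for about a 9% difference versus binary TiB, whatever the file system.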