CentOS 5 :: How To Backup Files And To Use 'fsck.ext3 Rebuild-tree Using Livecd'
Dec 9, 2009
I have a serious problem booting CentOS 5.4 x86, as shown in the attached picture. I tried to make a backup before using the fsck command, but I could not back up the damaged LVM on the hard drive. First I made a rescue CentOS in VirtualBox by installing CentOS 5.4 x86 on a virtual hard disk, and then I attached the damaged hard drive, so I can see the damaged drive's LVM as in the attached picture. Please let me know how to back up my files and how to run "fsck.ext3 --rebuild-tree" from a live CD.
I am attempting to run fsck on a number of large ext3 filesystems. I am doing this proactively because I want to minimize reboot time and the filesystems are past the 6-month check interval. When I run "fsck -f -y <device>" I get the following error on all of the filesystems:
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
fsck.ext3: Device or resource busy while trying to open /dev/mapper/mpath0p1
I am hosting a few customer servers now, all of which are virtual machines running on a CentOS 5.x host. Each CentOS host has a couple of extra drives. When I formatted them as ext3 they automatically got a schedule of a full forced fsck every 6 months. Do I really need to run that check regularly? It results in a fairly large outage, since my disks are 1TB each and there are up to three extra drives on each server. I try to reboot these servers every 6 months, but this part adds a large amount of time to a routine reboot.
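For reference, that 6-month schedule is just ext3's default maximum mount count and check interval, and both can be inspected and changed with tune2fs. A minimal sketch, rehearsed on a throwaway image file rather than a real disk (the file name and size are arbitrary; on a real server you would point tune2fs at the device, e.g. /dev/sdb1):

```shell
# Create a small scratch ext3 filesystem in a regular file (no root needed)
truncate -s 16M fs.img
mkfs.ext3 -q -F fs.img

# Show the current forced-check schedule
tune2fs -l fs.img | grep -E 'Maximum mount count|Check interval'

# Disable the periodic forced check:
#   -c 0  -> ignore the mount count
#   -i 0  -> no time-based interval
tune2fs -c 0 -i 0 fs.img
tune2fs -l fs.img | grep 'Maximum mount count'
```

Whether disabling it is wise is a judgment call; many admins turn it off on RAID-backed volumes and rely on scheduled manual checks instead.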
I tarred and gzipped most of the data on one 1TB partition and stuck the archives on a second 1TB partition on a separate disk. I then proceeded to format the first partition as NTFS (from Linux). The only problem is that I completely forgot that I had a CD drive and formatted sdc1 instead of sdd1! I began doing a full NTFS format, cancelled it after a minute or two, and decided to do a quick format instead. Then I realized my mistake. I managed to find a copy of the superblock and began trying to recover the disk. fsck -t ext3 recognized the partition as NTFS, but luckily I didn't have fsck.ntfs installed, so it didn't touch it. I managed to get it working with fsck.ext3 (with -b, -B and -y); fsck.ext3 didn't mind that it was an NTFS partition.
Roughly how long will this take? It's running from Knoppix within a virtual machine against a USB hard drive which is 100% full. Days? Given that I attempted a full format for a few minutes, am I going to end up with a bunch of corrupted archives? If I do end up with file corruption, can anyone recommend a way of recovering the data / sorting it out? Is it likely to be just a few old files that are corrupt? (It's my understanding that filesystems like to keep files in the same area of the disk to minimize head travel.) This might just be wishful thinking, but as the filesystem fills up, will ext3 put the newer files towards the end of the disk? If so, then I'm hoping that a full NTFS format starts at the beginning of the disk.
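The backup-superblock trick used above can be rehearsed safely on an image file: mke2fs -n prints where the backups live without writing anything, and e2fsck's -b/-B flags name the backup superblock and block size to use. A sketch (the image name and sizes are arbitrary stand-ins for the real partition):

```shell
# Scratch ext3 filesystem with a 1 KiB block size
truncate -s 16M fs2.img
mkfs.ext3 -q -F -b 1024 fs2.img

# -n: dry run; just report where the backup superblocks would be
mke2fs -F -n -b 1024 fs2.img | grep -A1 'Superblock backups'

# Check the filesystem using backup superblock 8193 and block size 1024
e2fsck -f -y -b 8193 -B 1024 fs2.img
```

With a 1 KiB block size the first backup sits at block 8193; larger filesystems usually use 4 KiB blocks, which puts it at 32768 instead.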
After upgrading from openSUSE 11.1 to 11.2 I get the following error messages while booting, caused by the initial filesystem check routines:
ERROR: Couldn't open /dev/null (Too many open files)
ext2fs_check_if_mount: Too many open files while determining whether ... is mounted.
fsck.ext3: Too many open files while trying to open ...
I found a new version of e2fsprogs in an OBS package claiming to fix this problem, but installing this new version did not solve my problem.
Here is some information about the affected system:
Operating system: openSUSE 11.2 (i586)
Installed e2fsprogs: e2fsprogs-1.41.11-4.1.i586
Number of LVs: 35 (all ext3)
I can only boot if I comment out some of the filesystems in my /etc/fstab. It seems that the number of filesystems must be less than or equal to 32.
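A hedged workaround while waiting for a fixed e2fsprogs: the boot-time fsck checks filesystems in parallel and so holds all 35 LVs open at once, so either raise the per-process file-descriptor limit before the check or force fsck to serialize. A sketch only, from a rescue shell (the 4096 limit is an assumed value, not something tested here):

```shell
# Raise the open-file limit for this shell before checking
ulimit -n 4096

# Or serialize fsck (-s) so it checks the fstab entries (-A) one at a time
fsck -s -A -y
```

The other stopgap already found above, staggering the pass numbers in the last fstab field so fewer filesystems are checked in the same pass, works on the same principle.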
With a 1TB USB drive plugged in (we'll call it "TheDrive"), I boot my machine and "TheDrive" is mounted automatically. The icon is on the desktop. "TheDrive" mounts to /media/TheDrive. Everything is fine. But I would like the drive to mount automatically in my file tree at /mnt/TheDrive, and not be auto-mounted under /media/ or appear on the desktop. I know that this requires the use of fstab, but I do not know what to add to this file.
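As a sketch, the usual approach is an /etc/fstab entry keyed to the drive's UUID (read it with blkid); once an fstab entry exists, most desktops stop auto-mounting the drive under /media. The UUID below is a placeholder, and the mount point must exist first (mkdir -p /mnt/TheDrive):

```
# /etc/fstab -- mount "TheDrive" at /mnt/TheDrive instead of /media
# (replace the placeholder UUID with the real one reported by `blkid`)
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX  /mnt/TheDrive  ext3  defaults  0  2
```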
I have a CentOS 5 box with a 3ware 8-port RAID card. I run fsck.ext3 -y /dev/sdb1 and it shows as clean. But after writing to the filesystem for about 2 minutes, it becomes read-only. When I umount -l /data and run fsck, I get a message that another program is using the filesystem and I should wait. If I reboot the server, the array comes back as clean.
fsck.ext3: Unable to resolve 'UUID=theUUID' where "theUUID" (without the quotes) is the UUID
I believe this is caused by my trying to get LVM to use the external /boot, because when I had unmounted the external /boot, it was creating a /boot in root. So I booted a live CD and mounted the external /boot where /boot in the root volume is supposed to be. Basically, I think the problem is that I need to make my /boot (which is the only ext3 partition in the entire system, and I want it that way) "relate itself" to the LVM root so that it boots into the system. As mentioned earlier, from the live CD I made the external /boot mount itself on the root's /boot, but I don't know how to tell the system to do this on its own while booting, without my assistance. I chrooted from the live CD, which involved a lot of tedious steps, but the important things I did were:
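When fsck.ext3 cannot resolve a UUID= spec, the first thing worth verifying is that the UUID in /etc/fstab (or on the kernel line) matches what the filesystem actually carries; blkid and tune2fs both report it. The mechanics can be sketched on an image file (the file name here is an arbitrary stand-in for the external /boot partition):

```shell
# Scratch ext3 filesystem standing in for the external /boot
truncate -s 16M boot.img
mkfs.ext3 -q -F boot.img

# Two independent ways to read the filesystem's UUID
blkid boot.img
tune2fs -l boot.img | grep UUID
```

On the real system, compare `blkid /dev/sdXN` against the UUID= line; recreating a filesystem gives it a new UUID, and a stale fstab entry then produces exactly the "Unable to resolve" error quoted above.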
After a massive update including grub (not the problem) I cannot mount and boot because of the subject error message. /root (dm-0) and swap (dm-1) are OK; it's just /home (dm-2) that appears broken.
I have a problem partitioning a hard disk drive on a server, and I hope someone can help me with this. Here is the system configuration:
Operating system: Linux localhost.localdomain 2.6.30.8-64.fc11.x86_64 #1 SMP Fri Sep 25 04:43:32 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Hardware: RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
The system mounts one hard disk (120GB) with the OS, and four 1.5TB hard disks in RAID 10, for a total of ~3TB. I need to create several partitions on this RAID drive, but I am having some trouble doing it. I need a total of 10 partitions of different sizes:
I am using CentOS 5.5. I suppose this is an oft-repeated question. I accidentally deleted, using the rm command, 2 wmv files. The files were on a single ext3 1TB drive with just 1 partition, the ext3 one. Each file is 600-800MB. The 1TB drive holds only about 20GB of data. Immediately after deleting the files I unmounted the drive (/dev/sdc1). Then I searched the net and came to know of the recovery tools foremost and photorec. I have installed both of them. I am currently running both as root: foremost is just showing a lot of * signs on the terminal, and photorec has managed to find some txt and png files, but no wmv. For foremost I used: /usr/sbin/foremost -t wmv -i /dev/sdc1. For photorec I followed some instructions available on the web.
In the meantime, based on a post on the net, I ran debugfs as root and cd'ed into the directory where the files were deleted. On typing ls -d I managed to get the inodes of the 2 deleted files, and the names of the deleted files are also correct. The instructions at http://www.theavidcoder.com/?p=3 tell me to run fsstat and dls, neither of which I am able to find in /bin, /usr/bin, /usr/sbin or /sbin. So I am unable to proceed further.
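fsstat and dls are not part of a standard CentOS install; they ship with The Sleuth Kit (dls is called blkls in newer releases). But since debugfs already shows the inode numbers, its own dump command may be enough. A sketch of the mechanism on a scratch image (names are arbitrary; on the real disk you would run debugfs against /dev/sdc1 and dump `<inode>`):

```shell
# Build a scratch ext3 image and copy a file into it with debugfs -w
truncate -s 16M scratch.img
mkfs.ext3 -q -F scratch.img
echo "recover me" > orig.txt
debugfs -w -R "write orig.txt victim.wmv" scratch.img 2>/dev/null

# Extract a file (or "<inode-number>") out of the image without mounting it
debugfs -R "dump victim.wmv recovered.txt" scratch.img 2>/dev/null
cmp orig.txt recovered.txt
```

One caveat: for files deleted through a mounted ext3 filesystem, the journal code zeroes the inode's block pointers, so `dump <inode>` often yields nothing. That is exactly why carvers like photorec exist, and why photorec finding no wmv headers is the more worrying sign.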
After a near miss with my 1.5TB RAID5 file server, I have decided that I need to back up my data to an external hard drive periodically. I have been looking at rsync, but the question I have is: do I format the external hard drive as EXT3 (the same as my file server) or NTFS? All my main machines are Windoze, but the file server is Ubuntu with a samba share. If my server ever went belly up, I would like to be able to access my data from the external hard drive. I guess if it's in EXT3 then Windows would be clueless; I would either need to fix the server pronto or access it with a live CD or something. What would I lose if I used NTFS instead of EXT3? I think I would lose permissions and possibly ownership information. Are there any other issues?
In doing my kernel-kits, which let you create a live CD backup of your installed Slackware or Arch install, I came across the question of how many x86 or x86_64 systems use a PAE kernel.
If that's so, should I also release a PAE kernel kit besides the x86 and x86_64 kernel kits for Slackware/Arch?
Or is it not a big thing?
Can anyone run a PAE kernel, or does it have to be a separate build? The x86/x86_64 kits are not PAE... let me check...
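Whether a given box can run a PAE kernel is visible in its CPU flags, so the check is quick (and any x86_64 CPU reports pae, since long mode requires it, which is why 64-bit kits never need a separate PAE build):

```shell
# "pae" in the flags line means the CPU supports PAE paging
if grep -q '\bpae\b' /proc/cpuinfo; then
    echo "PAE supported"
else
    echo "no PAE"
fi
```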
I want to rebuild my 2.6.18-128.el5 kernel on CentOS 5.3, but I have one problem. When I type make bzImage I see
make[1]: *** No rule to make target `init/main.o', needed by `init/built-in.o'. Stop.
make: *** [init] Error 2
on the screen. This is because the kernel source tree is not complete. I cannot find the full source of the 2.6.18-128.el5 kernel: not in the src.rpm, not in a tar.bz2, etc.
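For what it's worth, Red Hat-style kernels are never shipped as a ready-to-build tree; the src.rpm holds a vanilla tarball plus patches, and the full tree only appears after the prep stage. A sketch of the usual route (run as an unprivileged user; the exact src.rpm file name depends on your repo):

```shell
# Unpack the kernel source RPM (installs tarball + patches + spec)
rpm -ivh kernel-2.6.18-128.el5.src.rpm

# Apply all patches; the complete source tree lands under BUILD/
cd /usr/src/redhat/SPECS    # the default rpmbuild topdir on CentOS 5
rpmbuild -bp --target=$(uname -m) kernel-2.6.spec
```

After that, the patched tree under BUILD/ is what make bzImage expects to run against.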
I've got a server running software RAID for SATA disks on a P5E motherboard.
I had to add a lot of memory to this server, so I had to flash the BIOS. This reset the software RAID on the disks, and now when I boot I get a kernel panic because it doesn't find anything...
How do I rebuild the RAID? I can boot a live CD, or anything else, but I don't know how to do it without losing my data.
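Assuming this was Linux md software RAID (and not the motherboard's BIOS fakeRAID, which dmraid handles instead), the md metadata lives on the disks themselves and usually survives a BIOS reset; from a live CD an assemble can be attempted before anything destructive. A sketch with placeholder device names:

```shell
# From the live CD: see what RAID metadata the disks still carry
mdadm --examine /dev/sd[abcd]

# Try to reassemble the array from that on-disk metadata
mdadm --assemble --scan

# If it comes up, mount read-only first and take a backup
mount -o ro /dev/md0 /mnt
```

The key point is to avoid any --create until --assemble has been tried; recreating an array with the wrong layout overwrites the metadata you are trying to recover.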
I'm working on the development of a custom kernel (actually just a "small" change in the networking part), from the standard 2.6.23.17 source code (downloaded from kernel.org), on CentOS 5.3. I'm using the following procedure to build and install the kernel:
1) cd <ROOT_DIRECTORY_OF_KERNEL_SOURCE>
2) Modify the 4th line of the Makefile as follows: EXTRAVERSION = .17CUSTOM
3) make clean && make mrproper
4) make menuconfig
5) make rpm
I have a strange problem with kill. I start a script, run.sh. On Ubuntu I can kill run.sh and the whole pstree is killed; on CentOS it does not work.
[Code]....
Why does CentOS not kill the whole tree? Is it something with bash? Ubuntu uses bash 4.x and CentOS 3.2.
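One likely culprit is process groups: kill PID signals only run.sh itself, while killing the script's process group (a negative PID) takes the children down too. A self-contained sketch, with sleeps standing in for whatever run.sh spawns:

```shell
# Start a stand-in for run.sh in its own session/process group
setsid bash -c 'sleep 60 & sleep 60 & wait' &
leader=$!
sleep 1

# A negative PID addresses the whole process group, not just the leader
kill -TERM -- "-$leader"
sleep 1

# The leader and both sleeps should now be gone
kill -0 "$leader" 2>/dev/null && echo "still alive" || echo "group killed"
```

An alternative that only reaches direct children is `pkill -TERM -P <pid>`; whether plain `kill <pid>` appears to work often comes down to whether the parent script traps/forwards the signal, which can differ between distributions' scripts and shells.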
I've got a directory structure full of files. I want to convert them all into some other format, but I don't know how to keep the directory structure. Is there a way so I can "set" which command to run on the files instead of cp?
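A sketch of the usual pattern: walk the source tree with find, recreate each subdirectory under the destination, then run the converter per file. Here cp stands in for whatever conversion command you want, and the toy tree is only there to make the example self-contained:

```shell
# Toy input tree
mkdir -p in/a/b
echo "data" > in/a/b/file.txt

src=in
dst=out
find "$src" -type f | while IFS= read -r f; do
    # Path of the file relative to the source root
    rel=${f#"$src"/}
    # Recreate the directory structure on the destination side
    mkdir -p "$dst/$(dirname "$rel")"
    # Swap cp for your converter, e.g.: convert "$f" "$dst/${rel%.txt}.png"
    cp "$f" "$dst/$rel"
done
```

(The `while read` form breaks on file names containing newlines; `find -print0` with a null-delimited read is the robust variant if that matters.)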
I made a script that, by using "rpmrebuild" (which is available on SourceForge and uses rpmbuild as its backend), recreates all the currently installed RPM packages and stores them in a user-defined directory.
Everything works like a charm, except for a few packages.
Those packages fail to build because some of the files they should contain are missing on the filesystem.
This is an example of the errors I get:
Code:
error: File not found: /etc/identd.key
File not found: /etc/identd.key
/usr/lib/rpmrebuild/rpmrebuild.sh: ERROR: package 'pidentd-3.0.14-5' build failed
error: File not found: /usr/share/ssl/certs/stunnel.pem
File not found: /usr/share/ssl/certs/stunnel.pem
/usr/lib/rpmrebuild/rpmrebuild.sh: ERROR: package 'stunnel-3.26-1.7.3' build failed

Is there a way to ask rpmbuild (because yes, rpmrebuild allows passing parameters to rpmbuild) to *ignore* the missing files?
I have a drive with an NTFS partition from which all the files were deleted. What I'm looking for is a way to rebuild the directory structure and recover the files. I really, really want the directory structure, as the partition contains 460 gigs of data. Normally I would use the tools here: [URL], but I've never dealt with this much data before. Everything there that I've used creates a pretty messy dump, however.
I have used ntfsundelete before, but only for a few files at a time. I have no idea what would happen if I tried to run it on a partition of that size. I'm comfortable with data recovery, but this amount of data is beyond me. I've run ntfsundelete with no args, and from skimming the pages of output all the files appear to be fine. The partition has not been written to.
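For the record, the scan pass of ntfsundelete is read-only, so running it over the whole partition costs nothing but time, and recovered files should be written to a different disk. A sketch with placeholder device and paths (untested here; check the flags against your ntfsprogs version):

```shell
# Read-only scan: list recoverable files with their inode numbers
ntfsundelete -s /dev/sdX1 > scan-list.txt

# Undelete everything matching the pattern to a directory on another disk
ntfsundelete -u -m '*' -d /mnt/otherdisk/recovered /dev/sdX1
```

Note that ntfsundelete recovers files flat, not as a tree; tools that parse the intact MFT (e.g. mounting read-only, or Sleuth Kit's fls) are what preserve directory structure when the entries were merely deleted rather than overwritten.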
When I open Nautilus v2.16.2 in CentOS 5.4, the tree view of the whole directory tree is missing from the left-hand side; only a list of all files in the current directory is visible.
I have an Nvidia graphics card... actually, I manage several workstations that run CentOS and have an Nvidia video card. I also have a personal computer with Ubuntu and an Nvidia video card.
I would like to do a regular automatic update of those CentOS workstations (with a pilot group to test, and then a full roll-out). Until October 2009 there was no major difference between automatically updating Ubuntu and CentOS (apart from the differences between apt and yum):
After a kernel upgrade, the systems cannot boot into their Xorg GUI, because the Nvidia driver must be rebuilt (not recompiled, because it is partially object code; the driver is not open source).
But from Ubuntu 9.10 onwards, the kernel update process checks for the presence of proprietary drivers like Nvidia's and does a rebuild on reboot, so that the system can successfully boot into the Xorg GUI (and gdm or kdm). My question is: are there any plans for CentOS to do the same? This would relieve me of some upgrade hassle for the CentOS workstations that I manage. Or does anyone know of a (good) automagic workaround?
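The mechanism Ubuntu 9.10 uses for this is DKMS, and a dkms package was available for CentOS from third-party repos at the time; with the Nvidia module registered, it gives the same rebuild-on-new-kernel behaviour. A sketch, assuming dkms is installed and the module has been registered with it:

```shell
# See which module/kernel combinations DKMS is tracking
dkms status

# Build and install all registered modules for the running kernel
dkms autoinstall
```

DKMS-packaged Nvidia drivers hook this into the kernel's install scripts, so the rebuild happens automatically when yum installs a new kernel.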