Ubuntu :: Use Tftp Instead Of Nfs For PXE Root Filesystem?
Mar 19, 2010
I have a custom Ubuntu distro that runs both from a CD and over PXE boot. The problem is that I need to boot in an environment where traffic has to go through a router that can't forward NFS (the protocol doesn't use a standard port). I found that the Ubuntu-based Clonezilla Live CD has a boot option like "fetch tftp://server/folder/filesystem.squashfs"; I can borrow its kernel and initrd and it works, but how do I add this feature myself? Is there a package I need to install or an initrd option I need to add?
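For reference, the borrowed Clonezilla-style PXE entry looks roughly like the sketch below; the server IP and paths are placeholders, and as far as I can tell the fetch= handling comes from Debian Live's live-boot scripts in the initramfs, so whether a stock Ubuntu casper initrd understands it at all is really what I'm asking.

    # pxelinux.cfg/default -- sketch only; the IP, paths and the boot=live /
    # fetch= parameters are copied from the Clonezilla-style entry, not a
    # verified recipe for a casper-based Ubuntu initrd.
    DEFAULT live-tftp
    LABEL live-tftp
      KERNEL vmlinuz
      APPEND initrd=initrd.img boot=live fetch=tftp://192.168.0.1/live/filesystem.squashfs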
When I try to boot to openSUSE I get the following error during boot-up:

    unknown filesystem type 'reiserfs'
    could not mount root filesystem - exiting to /bin/sh
    $
This only started happening quite recently - before this I could boot to Linux quite happily.
When I try to PXE boot a Sun X4100 (which actually has RHEL installed on it right now) I get the message "TFTP open timeout". All traffic is allowed on port 69 UDP in both directions. I do get a DHCP address, and I can see on both the server and the client that an IP was assigned. After that I get this message in the logs:
My Linux distro is CentOS 5.3. Today I edited /etc/sysconfig/readonly-root and set "READONLY" to yes, so now my /etc/sysconfig/readonly-root file looks like this:
    # Set to 'yes' to mount the system filesystems read-only.
    READONLY=yes
    # Set to 'yes' to mount various temporary state as either tmpfs
Lately, however, my root filesystem is getting filled up every night: I come in in the morning and find notices that I have 0 bytes remaining. There's tons of room on the disk, but the root is full. Here's what it looks like with df -h:
I've recently been experiencing some problems with my Ubuntu Studio 9.10 setup, with the filesystem failing to mount. After deciding to try a new hard drive and cable, as well as clean installs of Ubuntu, Fedora and now Mint, I'm still finding no filesystem. I'm using a live CD created for Mint (I like it). Having clicked "install to hard drive", all is well until the partition manager, where all the boxes are greyed out. Clicking forward produces a box saying "no root filesystem defined". I see there are a few threads on here about this from a few years back, but having read through them I cannot find a fix for myself.
Using Ubuntu 10.10, 64-bit, installing after LiveCD testing. sda3 can't really be erased because of its contents, which are something I can't exactly get back or transfer.
I have a newly installed dual-booting 64-bit Ubuntu 9.10 on my machine. It was all fine until today. Now when I boot into Ubuntu, I see the error "Failed to mount root filesystem". I can't remember any significant changes during the last session. One thing I do remember is that I upgraded the system using the Update Manager, which asked me to choose an option for the GRUB boot loader; I opted to upgrade it. After the upgrade, I was able to work with Ubuntu for a few more sessions. Windows XP works fine. I checked other threads, which suggested running fsck, but it did not help; fsck does not report any errors.
On Launchpad there is the following thread on ureadahead:
[URL]
Is it sensible to remove "ureadahead" until this is fixed, or is there no harm done? (Yes, I still have /var on a separate partition, because I want to be on the safe side with my databases located in /var when reinstalling or upgrading the system. By the way: does this make sense, or is it better to just have /home separate and keep a backup of the /var folders?) As a normal user I feel a bit lost with bugs like this. It would be nice to get some information somewhere, something like:
"Don't worry, just wait for an update with a bugfix!" or "To avoid further problems just remove 'ureadahead' until it's fixed!"
Is there a way to revert the root filesystem to default permissions using chmod? As root I accidentally chmod'd / to 755. Luckily this is a dev server and not production, so it's not critical for me to fix; just wondering though....
I use a mounted NTFS filesystem as my main data storage drive. I then symlink all my Windows folders (Documents, Pictures, etc.) into my Ubuntu home folder. Works great, because it means I can share files between Windows and Ubuntu hassle-free. However, any file created on or saved to the NTFS partition automatically has its owner set as "root". Is it possible to set the default owner to me (aaron)? Or does it have to be root on NTFS?
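What I've been considering is forcing the owner with ntfs-3g mount options in /etc/fstab, along these lines; the device, mount point and uid/gid values here are guesses on my part (id aaron would show the real uid/gid), and I haven't confirmed this is the right approach:

    # /etc/fstab -- sketch; device, mount point and uid/gid are guesses.
    # uid=/gid= make every file on the NTFS volume appear owned by aaron.
    /dev/sda5  /media/windows  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0

My understanding is that NTFS doesn't store Linux owners at all, so ntfs-3g assigns one fixed owner for the whole volume at mount time, which would be why everything defaults to root when the partition is mounted by root.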
I was just copying a large (50GB) file from one mounted partition to another mounted partition (a USB drive), but before the operation completed my root filesystem, which is on a separate partition, filled up. Because it filled up I also couldn't get past the login when I rebooted; I think this is because there is no room for temporary files. I'm expanding the root partition to temporarily fix this. How can I avoid my root filesystem filling up when copying a massive file between mounted partitions? It seems the file is being cached on root during the transfer.
I'm getting an error message along the lines of "volume "filesystem root" has only 25 MB space remaining". How do I increase the volume size so I never have to worry about it again? This is the 3rd time I've tried Ubuntu and it's sticking more and more, but this has me thoroughly perplexed. I've got a 320GB HDD partitioned 3 ways, with the Linux partition being 7GB.
I have recently switched from Ubuntu to Debian and overall I am enjoying it. However, I was just wondering: does Debian, like Ubuntu, check the filesystem at boot periodically or when it is damaged? It seems to be doing neither in my case. How do I get it to do this?
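From what I've read, the periodic check for ext3/ext4 is driven by counters stored in the filesystem superblock, which tune2fs can inspect and change, and fsck at boot then honours them. Is this the right knob? A sketch of what I mean, assuming root is on /dev/sda1 (the device name is a guess):

    # Show the current mount-count and check-interval settings (read-only).
    sudo tune2fs -l /dev/sda1 | grep -i -e 'mount count' -e 'check'

    # Ask for a check every 30 mounts or every 30 days, whichever comes first.
    sudo tune2fs -c 30 -i 30d /dev/sda1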
Two days ago I installed Fedora 9 on an old machine. The installation was from a USB flash drive and went OK; the kernel on that installation was 2.6.25-14.fc9.i686.
After the installation I updated the system and everything looked OK; the system was now set up with kernel 2.6.27.25-78.2.56.fc9.i686.
But when I start the system with the latest kernel, it gets blocked at the "remounting root filesystem in read-write mode" step; the original kernel starts correctly.
I'm new to Fedora 13 and I have been through a few installs already with a 12TB RAID. Fedora is installed on a separate 250GB drive. I've mounted the 12TB array as a single share and I'm capturing large video files (12-90GB each) to the RAID over a Samba share across the network. The system runs great for about three days, and then I start getting warning messages that "the volume filesystem root has only 1.9GB of disk space remaining", then later another at 205MB, and so on until it eventually fills to 100% and locks the machine. If I reboot I get a GNOME error and can't log in. The only solution has been to reinstall Fedora from scratch.
Each time I allocate more space for root. My current root partition is 65GB in size. The RAID shows only 5.1TB of space used and 7.2TB free. The RAID share shows as being mounted under /media. Root shows that it will be full at 5.2TB, and I'm almost there, so I'm probably looking at another install in a short while when it freezes again. I've read that I should reinstall and make a larger root partition, but I'm not sure how big it must be to avoid this problem in the future. Also, is there a limit on how big root can be? My question stems from the fact that I have over 7TB of free space, but somehow root is reporting as 100% full at only 5.1TB.
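To narrow it down, what I was planning to do is compare what df reports with what du finds under the mount point, to see whether the captures are actually landing on the RAID or in the /media directory on the root disk when the array isn't mounted; the /media/raid path below is a guess at my share path:

    # Which filesystem is each path actually on?
    df -h / /media/raid

    # Is the array really mounted there right now?
    mount | grep /media

    # How much data sits in /media on the root filesystem itself?
    # (-x stays on one filesystem, so a mounted RAID is not counted.)
    sudo du -xsh /media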
I am running 11.2 with KDE 4. The day before yesterday the system updated, and I think there was a kernel update within that. I had no problems immediately afterward. Then I did a total shutdown for the night and turned it back on yesterday, only to find this:
    Mount: wrong fs, bad option, bad superblock on /dev/sda2, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
    Could not mount root filesystem - exiting to /bin/sh
    sh: cannot set terminal process group (-1): Inappropriate ioctl for device
    sh: no job control in this shell
    $
Besides the updates from the other day I did nothing out of the ordinary, no downloads or any system/configuration tweaks. Will I have to reinstall openSUSE, or is there a way to reclaim my previous setup, or at least my files and documents? I'm currently running off the 11.2 live CD.
I am using the GRUB bootloader. I can boot into Windows fine, but booting into Linux gives me the error "kernel panic: unable to mount root fs on unknown-block(0,0)". I got LILO to load Linux fine, but GRUB always gives me this error regardless of the Linux OS on this particular computer.
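For reference, this is the shape of the GRUB legacy entry I've been using; the (hd0,1) numbering, kernel path and root= device below are placeholders rather than my exact file, and I suspect a mismatch there (or a missing initrd) is what produces the unknown-block(0,0) panic:

    # /boot/grub/menu.lst -- sketch only: the (hd0,1) numbering, the kernel
    # path and root=/dev/hda2 are placeholders that must match the real
    # disk layout.
    title Linux
        root (hd0,1)
        kernel /boot/vmlinuz root=/dev/hda2 ro
        initrd /boot/initrd.img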
I boot a Linux appliance entirely in RAM, i.e. the image has a Linux kernel and an attached ext2 root filesystem.
Now that it's working, I would like to copy the root filesystem from RAM onto NAND flash.
Can I just mount the NAND, run "cp -a /* /mnt/nand", reboot with the kernel command line "root=/dev/mtdblock2 rw", and expect Linux to be happy... or is it more involved than this?
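Concretely, what I have in mind is roughly the sketch below. The device and mount-point names are placeholders, I'm assuming the NAND already holds a filesystem the kernel can mount on /dev/mtdblock2, and I haven't decided whether plain ext2 on raw NAND is sensible versus a flash-aware filesystem such as JFFS2.

    # Sketch only: device names and mount points are placeholders.
    mkdir -p /mnt/nand
    mount /dev/mtdblock2 /mnt/nand

    # Copy the running root, but don't descend into the pseudo-filesystems
    # or into the target itself; recreate those as empty mount points.
    for d in /*; do
        case "$d" in
            /proc|/sys|/mnt) ;;             # skipped, recreated below
            *) cp -a "$d" /mnt/nand/ ;;
        esac
    done
    mkdir -p /mnt/nand/proc /mnt/nand/sys /mnt/nand/mnt

    umount /mnt/nand
    # Then boot with the kernel command line: root=/dev/mtdblock2 rw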
Suppose I have a good backup of the / root filesystem. How do I recover the / root area? Say I have modified the root filesystem, perhaps updated some of the packages and regretted it, and I want to get back to the system as it was at the time of the backup. How do most Linux people recover the root area of a system from a backup?
1) I wondered if I might put a System Rescue CD in and boot off it?
2) And then NFS-mount the directory containing the backup? (In my case, I have made a good backup using rsync to a directory elsewhere on the network.)
3) And then, still booted off the System Rescue CD, mount the partition that contains the / root area in question?
4) Would I then clear or empty or delete the contents of the / root partition?
5) And then copy all the files across from the backup into the / root partition?
I ask these questions because of the (very nice) way a Linux OS is built entirely from packages... Am I making this too complicated? (By comparison, I can see it is easy to recover user data.) If, instead, I simply restored the backup straight onto the updated root filesystem, I wonder what it would look like if I then tried to verify it with "rpm -Va", for example? Surely all the packages would fail verification, because the RPM database would think it has the later version of each package from the update, while the actual files would have been overwritten by the earlier versions from the backup?
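To make steps 1-5 above concrete, this is roughly what I imagine doing from the rescue CD; the host name, export path and partition are placeholders, and using rsync --delete would replace the separate wipe-then-copy steps:

    # From a System Rescue CD shell -- all names below are placeholders.
    mkdir -p /mnt/backup /mnt/target
    mount -t nfs backupserver:/export/rootbackup /mnt/backup
    mount /dev/sda2 /mnt/target                # the partition holding /

    # Restore the backup; --delete removes files added after the backup
    # (e.g. by the regretted update), and -aHAX preserves hard links,
    # ACLs and extended attributes where the backup has them.
    rsync -aHAX --delete /mnt/backup/ /mnt/target/

    umount /mnt/target /mnt/backup

If that's right, restoring this way would also bring back /var/lib/rpm (assuming /var is on the root filesystem), which I suppose is what would keep rpm -Va consistent with the restored files.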
When I installed Ubuntu, I made a separate partition so that I could copy an ISO image of an up-to-date version of Ubuntu onto it. I wanted to then boot the ISO so I could install the new version that way. I've already tried doing it through the Update Manager, but it downloads, gets almost done with installing, and then freezes on me, so I figured this would be easier. However, I do not know how to gain access to the other partition to copy the ISO image onto it. Please help.
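What I think I need is something like the following, though the device name and ISO filename are guesses on my part; I gather sudo fdisk -l or sudo blkid would tell me the real partition name:

    # Sketch: /dev/sda3 and the ISO filename are guesses.
    sudo mkdir -p /media/spare
    sudo mount /dev/sda3 /media/spare
    cp ~/Downloads/ubuntu-10.04-desktop-i386.iso /media/spare/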
I'm running CentOS 4.3 on a VM which is an application server for Quinstreet. The trouble is that when I come in in the mornings, the root filesystem keeps ending up remounted read-only. There is no pattern to it, and it isn't clear from the messages log why this keeps happening.
I've set up a filesystem on a RAID 0+1 array and am looking at moving the root filesystem from a single disk to the new one. I could not install CentOS on the mirrored filesystem, as the RAID card did not have a pre-built driver for CentOS 5.3, so I had to compile the driver after installing the system. What I'm going to do now is:
1. Mount the new mirrored filesystem under /root1
2. Use find | cpio to copy everything from the existing / to /root1
3. Use grub to create a boot record on /root1
4. Edit /root1/etc/fstab to point / to the new disk
5. Reboot the system and keep my fingers crossed
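For steps 2 and 3 I have roughly the following in mind; /dev/sdb1 and (hd1) are placeholders for the new array, and -xdev keeps find on the current root filesystem, so other mounts get skipped and their mount points have to be recreated on the target:

    # Sketch only: /dev/sdb1 and (hd1) are placeholders for the new array.
    mount /dev/sdb1 /root1

    # Step 2: copy the existing root. -xdev stays on the root filesystem,
    # so other mounts (/proc, /sys, /dev, /root1 itself, a separate /boot)
    # are skipped and their mount points must be recreated afterwards.
    cd /
    find . -xdev -print | cpio -pdmv /root1
    mkdir -p /root1/proc /root1/sys /root1/dev /root1/root1
    mknod /root1/dev/console c 5 1    # minimal static nodes, in case the
    mknod /root1/dev/null c 1 3       # initrd expects them at boot

    # Step 3: write a boot record on the new disk with GRUB legacy.
    printf 'root (hd1,0)\nsetup (hd1)\nquit\n' | grub --batch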
Today I upgraded (yum update) one of my Dell PowerEdge servers from 5.2 to 5.3. After rebooting, the system at first seems to start normally, but then the following error messages appear:
    Apr 5 14:28:26 srv_1 kernel: I/O error in filesystem ("dm-0") meta-data dev dm-0 block 0x668000008 ("xfs_trans_read_buf") error 5 buf count 4096
    Apr 5 14:28:26 srv_1 kernel: attempt to access beyond end of device
    Apr 5 14:28:26 srv_1 kernel: dm-0: rw=0, want=27514634256, limitfs]
Booting with a rescue disk and running xfs_repair fixes the filesystem problems, but it moved a lot of files (at least /usr/bin and /usr/lib completely) to "lost+found"... I tried the update on a spare 5.2 server (different hardware) and ended up with exactly the same effect and error message. Both systems are running XFS as the root filesystem on an LVM disk.
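For reference, the repair I ran from the rescue disk was roughly the following; the VolGroup00/LogVol00 name is just the stock default and a placeholder here (lvm lvscan shows the real logical volumes), and xfs_repair has to run on the unmounted filesystem:

    # From the rescue environment; the LV name below is a placeholder.
    lvm vgchange -ay
    xfs_repair /dev/VolGroup00/LogVol00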
I have successfully tarred an existing CentOS 5.2 partition from Fedora 10. The idea is to move a working CentOS 5.2 that resides on an internal hard drive to a portable hard drive. I know how to edit a stanza in menu.lst to boot the cloned CentOS 5.2. During boot, I encountered:
    Red Hat nash version 5.1.19.6
    mount: could not find filesystem /dev/root
    setuproot: moving /dev failed: No such file or directory
    setuproot: mounting /proc: No such file or directory
    setuproot: mounting /sys: No such file or directory
I'm having trouble installing it on a "new" computer that I found at Goodwill for $60 with no operating system on it. When I go to edit the partitions, it won't let me do anything due to an apparent lack of a root filesystem. (I know this issue has been brought up and resolved in the past, but the usual solution (going into the validation.py file) isn't working for me, as there is no line in this one that says "if not root".)
A friend of mine upgraded his PC to Ubuntu 10.04. Sadly, we ran into issues with his graphics card, which apparently doesn't work well with Lucid, so we decided to downgrade to 9.10. I did this by installing over the old partition and chose to import the settings from the old account. The problems started when the PC booted for the first time: the list of kernels in GRUB 2 was the one from 10.04. Somehow the GRUB 2 from the old installation seems to still be around and messes everything up. Any ideas how I could fix this?
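What I was thinking of trying from the freshly installed 9.10 system is roughly the following, assuming the disk is /dev/sda, so that 9.10's own GRUB 2 rewrites the MBR and regenerates its menu from the kernels actually installed now; I'm not sure it's the right fix, though:

    # From the running 9.10 system; /dev/sda is an assumption.
    sudo grub-install /dev/sda
    sudo update-grub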