I would like to build a SAN with about 4 TB of space on it. What is the best way to accomplish this task?
- What exact hardware would I need?
- What exact software would I need?
- How do I bond two Ethernet cards together to get better network performance out of the box? (See the sketch after this question.)
I am also looking for a cheap rack-mount server cabinet. What is the best file system for a SAN? Is there a particular distro that works well with this? I prefer CentOS, as that's what I have been learning on.
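Since the bonding point is concrete enough to sketch, here is a minimal CentOS-style configuration, assuming two interfaces named eth0 and eth1 and the balance-rr mode (the interface names, mode and addresses are assumptions, not recommendations):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-rr miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

On older CentOS releases you may also need an "alias bond0 bonding" line in /etc/modprobe.conf; after that, service network restart brings the bond up.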
I've had a look at some similar threads, but as I'm very new to Linux they're already a bit technical for me. Sorry, this calls for someone with patience. I gather from other threads that disconnecting an external drive without unmounting is a no-no, and this seems to be the likely cause. Now the disk is read-only and I'm unable to change any settings through the usual control panel on Ubuntu. I'm just not familiar with the terminal instructions. I tried to cut and paste a few command lines from other threads, but I got some warnings that proceeding could damage data, like this one: WARNING! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage.
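For what it's worth, the standard answer to that warning is to unmount the drive first and only then run the check; a minimal sketch, assuming the external drive shows up as /dev/sdb1 (the device name is an assumption, check dmesg or sudo fdisk -l first):

sudo umount /dev/sdb1      # unmount first; e2fsck must never run on a mounted filesystem
sudo e2fsck -f /dev/sdb1   # -f forces a full check even if the filesystem claims to be clean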
I am very new to Linux, and I have a question regarding the filesystem check (fsck). The power recently went out, and when I tried to restart Linux the following error appeared:
"/dev/sda1 contains a file system with errors, check forced"
It then goes on to say:
"An error occurred during the file system check. Dropping you to a shell; the system will reboot when you leave the shell. Give root password for maintenance (or type Control-D to continue)"
I wasn't sure what to do, but I checked some other online forums and they suggested running fsck manually, so I typed in the root password and used the command "fsck -A -V ; echo == $? ==". It then gave the following message:
"WARNING!!! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage. Would you like to continue (y/n)?"
Again, I wasn't sure what to do, so I just answered no. I then manually turned off the computer and was prompted at startup to press Alt-3. I was brought to another screen, which informed me that one of the drives was degraded and suggested rebuilding the array. I tried doing this, but it still brings me back to the original error of "/dev/sda1 contains a file system with errors, check forced," and the process continues.
Also, when I tried to rebuild the array, I didn't back up any of the data in our home directory first (which was probably a big mistake). After being prompted to type the root password, I was able to run the ls command and look at all the directories... the home directory where our data was stored was empty, and I am afraid I may have lost some information. Is there a possibility that data was lost while I was trying to rebuild using the old drives?
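For anyone who lands here with the same message: fsck -A tries to check every filesystem listed in /etc/fstab, including the mounted root, which is what triggers the warning. The safer route from that maintenance shell is roughly this (device name taken from the error above):

mount -o remount,ro /    # make sure the root filesystem is read-only before checking it
e2fsck -f /dev/sda1      # check only the filesystem the error named, not everything (-A)
reboot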
When I try to boot openSUSE I get the following error during boot-up: "unknown filesystem type 'reiserfs'" followed by "could not mount root filesystem - exiting to /bin/sh".
This only started happening quite recently; before this I could boot into Linux quite happily.
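A common cause is an initrd that was regenerated (for example by a kernel update) without the reiserfs module, so the kernel can no longer mount the root filesystem. A rough sketch of the usual repair from a rescue boot, assuming the root partition is /dev/sda2 (the device name is an assumption):

mount /dev/sda2 /mnt          # mount the installed root from the rescue system
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt
mkinitrd                      # openSUSE's initrd builder; regenerates it with the reiserfs module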
I have the following problem: recently my drive with Ubuntu 9.04 has mysteriously stopped working, i.e. when I switch the computer on it informs me that GRUB didn't find the filesystem. Well, I suppose it happens.
First, I thought it was due to the drive dying, but I popped it in an external enclosure and HDTune told me the drive was fine. Wanting to recover the files on the drive before reinstalling, I first tried to mount it in said external enclosure under Windows (I have the Win Ext2 driver installed, which used to work just fine). This time, however, the drive gets assigned a letter, but upon opening it Windows popped up an error saying that the drive was not formatted and asking whether I would like to format it.
Unfazed by this streak of failures, I tried to mount it under Linux but, alas, to no avail. I might have tried every single -t option to the mount command, but it still won't budge and let me mount.
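Before guessing filesystem types with -t, it is usually worth asking the disk what it actually contains; a quick sketch, assuming the enclosure shows up as /dev/sdb (the device name is an assumption):

sudo fdisk -l /dev/sdb    # confirm the partition table is still intact
sudo blkid /dev/sdb1      # report the filesystem type and UUID, if any signature survives
sudo file -s /dev/sdb1    # second opinion: inspect the raw superblock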
The root partition on my server seems to be full. I'd like to find out what files are on that partition and only that partition so I can free up some space on it.
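A minimal sketch using du's one-filesystem switch, which keeps the scan from descending into other mounted filesystems:

sudo du -xk --max-depth=2 / | sort -n | tail -20   # -x stays on the root partition only
sudo find / -xdev -size +100M                      # large individual files, same constraint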
I am getting a SSD and I'd like it to become my new Linux boot drive. However, it is smaller than my current hard drive's root Linux partition, so I'd like to copy over the filesystem and exclude some directories (which I'll leave on another hard drive). So I can't just clone the partition with parted or similar because it is too big.
I want to make sure all the data, metadata, links and such are preserved. That seems to exclude "cp" because it doesn't preserve all the metadata and link information.
The two basic techniques I've been able to identify seem to be something like:
find / -xdev -print0 | cpio -pa0V /mnt/dst
and:
rsync -avP -H -S --numeric-ids / /mnt/dst
Can anyone chime in with what they've used in the past, whether one of these or a different method, and whether they see any flaws in these approaches?
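For comparison, a hedged variant of the rsync approach that also bakes in the "exclude some directories" requirement (the excluded paths are placeholders, and the -A/-X flags assume rsync 3.x):

rsync -aHAXvP -x --numeric-ids --exclude=/var/cache --exclude=/home / /mnt/dst/
# -x stays on one filesystem, -H preserves hard links, -A/-X carry ACLs and xattrs;
# the trailing slashes mean "copy the contents of / into /mnt/dst"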
While coding Python, I tend to save files very often (I have a pretty high code->test->code->test->... frequency). I hate it when Linux syncs my changes to disk every time I do a write.
How do I configure Linux so that it keeps file writes in memory for a certain period of time or number of writes?
To make this at all useful, reads of not-yet-synced files must of course come from memory (so that the Python interpreter always sees the latest contents). Extra credit for background syncing that doesn't block other writes/reads going on at the same time :-)
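The usual knobs are the vm.dirty_* sysctls, which control how long dirty pages may sit in the page cache before background writeback picks them up; a sketch with illustrative (not recommended) values:

sudo sysctl vm.dirty_expire_centisecs=6000     # dirty pages may stay in memory up to 60 s
sudo sysctl vm.dirty_writeback_centisecs=1500  # background writeback wakes every 15 s
sudo sysctl vm.dirty_ratio=40                  # throttle writers only past 40% dirty memory

Note that reads already come from the page cache, so the latest contents are always visible, and writeback runs in the background without blocking other I/O.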
Once I deleted a log file and recreated it. After I did so, the new log file was not being written to. I was told that this is because the file I deleted is still held open and in use in memory. What is the portion of memory called (such as user space) that the filesystem runs in?
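As an aside, a deleted-but-still-open file can usually be spotted like this:

lsof | grep deleted    # processes still holding the old log file open
# the space is freed only when the holder closes the file; many daemons reopen
# their logs on a signal, e.g. kill -HUP <pid>, otherwise a restart does it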
I've added a second drive to a system and I need to extend the LVM and the filesystem onto the second disk. Is there a way to do this online with CentOS 5.5? I specifically need to extend the actual ext3 filesystem, which seems to be the tricky part.
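Online extension does work here; a minimal sketch, assuming the new disk is /dev/sdb, the volume group is VolGroup00 and the logical volume is LogVol00 (all three names are assumptions, check with vgs and lvs):

pvcreate /dev/sdb                                 # initialize the new disk for LVM
vgextend VolGroup00 /dev/sdb                      # add it to the existing volume group
lvextend -l +100%FREE /dev/VolGroup00/LogVol00    # grow the logical volume into the new space
resize2fs /dev/VolGroup00/LogVol00                # grow ext3 online, no unmount needed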
I have just purchased a 1 TB external hard disk to be used for backups. The backups will be performed with rsync and since I don't really care about accessing the data from other operating systems, I think I'll use ext3 on the partition. I'll just be backing up my home directory and probably /etc as well. In my home directory, I have a small number of files that are several GB, but most are tens of MB in size or less.
I'm just wondering if there are any special options I should pass when I create the filesystem with mkfs.ext3.
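A hedged starting point rather than a recommendation; for a single-purpose backup disk the defaults are mostly fine, and these are the options usually worth debating (the device name is an assumption):

mkfs.ext3 -L backup -m 1 /dev/sdb1   # -L sets a label; -m 1 cuts the root-reserved space
                                     # from 5% to 1%, reclaiming roughly 40 GB on 1 TB
tune2fs -c 0 -i 0 /dev/sdb1          # optional: no periodic forced fscks on a drive
                                     # that is only mounted for backups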
I have to create a jffs2 filesystem for ARM. I have already created the directories, the device nodes, and the inittab script for an ext2 filesystem. Now I don't know how to proceed from here: how do I convert ext2 to jffs2? One more thing: is it necessary to create a partition on a hard drive on which one filesystem is already present in order to create another new filesystem?
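There is no in-place conversion; a jffs2 image is normally built from a directory tree with mkfs.jffs2 and then written to the flash partition. A minimal sketch, assuming the prepared tree is in rootfs/ and the flash has 128 KiB erase blocks (both are assumptions; the erase size must match your actual flash):

mkfs.jffs2 -r rootfs/ -o rootfs.jffs2 -e 0x20000 --pad
# -r: source directory tree   -o: output image
# -e: erase block size (0x20000 = 128 KiB)   --pad: pad to a full erase block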
I heard that the ext file system does not cause file fragmentation. Could someone explain how this is achieved, and how it is that a file does not get divided into fragments, compared to Windows-based filesystems?
I have got Arch Linux dual-booting with Win XP on my laptop. I have been getting a filesystem check error since yesterday and am unable to start Arch. Upon googling and searching the Arch fora, I came upon some advice which I tried, but it has not worked yet; hence the new post. Basically, I was attempting to print something off and accidentally chose a printer that was not connected to my laptop. After half a minute or so, it repeatedly started giving me notifications that the printer was not connected... in excess of 200 messages that the printer was not working, which continued to pop up despite me cancelling the print job. The whole system got really sluggish (for the first time in the last year) and I had to restart the laptop, upon which the boot messages appear. It gets to the point where it's loading the various filesystems. It mounts root and says it's fine.
I tried fsck, which tells me that home and boot are still mounted. So I booted up using an Ubuntu Live CD and checked and repaired each file system, which it did successfully. Upon rebooting into Arch, I am getting the same message. I have not installed anything new, and I had upgraded the whole system a few days before the problem started.
fsck from util-linux-ng 2.17.2
e2fsck 1.41.11 (14-Mar-2010)
/dev/sda1 is mounted.
WARNING!!! The filesystem is mounted. If you continue you ***WILL*** cause ***SEVERE*** filesystem damage.
Do you really want to continue (y/n)?
I don't want to cause damage, but I'd rather not go into the BIOS.
I want to know how to use the cdfs filesystem with Red Hat 5.2. I am unable to play a video CD on my system; the CD has .dat files. Where can I download and install the cdfs filesystem on my system?
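For context, cdfs is an out-of-tree kernel module, so it has to be built against your kernel's source or headers and then loaded; a rough outline of the usual pattern (treat every step as an assumption, since details vary by release):

make                                  # build cdfs.ko in the unpacked source directory
insmod cdfs.ko                        # load the module into the running kernel
mount -t cdfs /dev/cdrom /mnt/cdrom   # each track on the CD then appears as a file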
I have been given a headless Linux system running from an SD card. I get into it via PuTTY, directly as root; there is no other user, and not even a /home directory. Whatever I copy or write disappears, because the filesystem is read-only (ro).
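A minimal check-and-remount sketch; note that SD-card appliances often run a deliberately read-only root (sometimes squashfs, which can never be remounted writable), so whether this works depends on the image:

mount                    # see how / is mounted and what filesystem type it is
mount -o remount,rw /    # try to flip the root filesystem to read-write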
I want to back up a lot of files onto an external 2 TB USB drive and sit it in a cupboard for the next 10 years, and I'm looking for the most reliable filesystem for this. I don't care about speed, journalling, UNIX permissions or any of that stuff. All I care about is this: in 10 years' time, when the hard disk platters are rusty and barely readable and the drive hardly functions, which filesystem will be the easiest to recover my data from? I'm not ruling out FAT32 either, for its simplicity, but maybe there's a better filesystem for the job?
I need to combine 6 different filesystems into one filesystem using rsync. I am so confused as to which parameters I need to use. The 6 filesystems are:
I'm a photographer and I need to find a better method of storing my photos than multiple USB2 drives via USB hubs. Currently I use a MacBook Pro and 6 external drives connected via USB2 or FW800; three are copies of the first three, kept up to date manually by running an rsync backup. I'd like to run a FreeNAS or Openfiler NAS box using 2 TB drives mirrored via software RAID. But I would also like the flexibility of plugging into the drive physically for faster throughput when necessary. My question is: is there a file system that both *nix and Mac OS X will play nicely with?
I run Windows Vista and Ubuntu 9.10 dual-boot. Today while booting, Windows informed me that there was something wrong with my hard disk, said it would perform a check, and made some fixes.
Only when I wanted to boot into Ubuntu again did I realise that the disk check had corrupted my Linux partition. Ubuntu's load screen shows up, but just before the login screen it says that the filesystem could not be mounted.
Is there a way I can fix this? And how do I prevent Windows from doing the same in the future?
I have attached an external USB disk to my Debian GNU/Linux system. The disk showed up as device /dev/sdc, and I prepared it like this:
- created a single partition with fdisk /dev/sdc (and some more commands in the interactive session that follows)
- formatted the partition with mkfs.msdos /dev/sdc1
If I then attach the USB disk to a Windows XP or Vista system, no new drive becomes available. The disk and its partition show up fine in the disk management tool under "Computer Management", but apparently the file system in the partition is not recognized. How do I create a FAT32 file system which can actually be used in Windows?
Edit: I've given up on this and gone with an NTFS file system created by Windows. In Debian Lenny this can be mounted read-write, but apparently it requires you to install the "ntfs-3g" package and explicitly pass the -t ntfs-3g option to the mount command.
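For the record, the two usual culprits are that mkfs.msdos may default to FAT16 and that Windows also checks the partition type ID; a minimal sketch that addresses both, assuming the disk is still /dev/sdc:

fdisk /dev/sdc               # in the interactive session: t, then c
                             # (sets the partition type to 0x0c, "W95 FAT32 LBA")
mkfs.vfat -F 32 /dev/sdc1    # force FAT32 explicitly rather than trusting the default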
How do I unmount a filesystem in Linux without investigating why it is busy? I want to do it in one command. It should handle applications using that filesystem, submounts, containers (lxc-execute -n qqq <command>) and everything else.
Just "unmount. No objections!". Special kernel patches or configuration are allowed. The filesystem should be really unmounted, so umount -l is certainly not an option. For example, so that cryptsetup remove works afterwards (BTW, how do you forcibly cryptsetup remove? Update: cryptsetup luksSuspend, but you won't be able to cryptsetup luksResume if it is not LUKS). How do I make all file handles on that filesystem invalid?
The only reliable way I know is mounting the filesystem through FUSE (there is usually no problem unmounting a FUSE thing, because I can just kill its process). P.S. I already know about fuser, lsof | grep, cat /proc/*/mounts | grep, and the obsolete, non-working "badfs" patch.
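The closest stock approximation I can offer as a sketch kills the users rather than invalidating their handles, so it does not fully meet the "no objections" requirement:

fuser -km /mnt/qqq    # SIGKILL every process with an open handle on the filesystem
umount /mnt/qqq       # the plain unmount then usually succeeds
umount -f /mnt/qqq    # forced unmount as a fallback (mainly effective for NFS)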
I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2 TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it?
As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:
- Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2 TB SSD doesn't exist, or it's prohibitively expensive.
- Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?
- Use find /media/myfs on ext2, ext3 or ext4 instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.
- Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.
- Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever (see the sketch after this list). This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?
- Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
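On the lock-inodes-in-memory idea, there is no true pinning knob, but the cache can be biased heavily toward keeping inodes and dentries; a sketch of the closest sysctl approximation (as noted above, this only helps repeat runs):

sysctl vm.vfs_cache_pressure=1     # strongly prefer keeping dentry/inode caches over data pages
ls -laR /media/myfs > /dev/null    # warm the cache once; later runs are served from memory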
Is there a way that I can take my Linux ext4 file system as it is and then use it on some other computer? I have a dual-boot of Windows 7 and Ubuntu 10.04, and my partition table looks like this:
My question might not be clear, so let me explain it with an example. Can I copy my Linux partition onto a flash drive and then use it on a different PC, with or without any need to install Ubuntu on the new PC, by simply booting from the copied ext4 partition? This way, I could easily port my Ubuntu packages and other applications, settings, etc. from one PC to another.
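In principle yes, provided the flash drive also gets a bootloader and the copied system's fstab matches the new partition; a very rough sketch, assuming the flash drive is /dev/sdb with an ext4 partition /dev/sdb1 (device names are assumptions, and driver differences between PCs can still cause trouble):

mkfs.ext4 /dev/sdb1                             # prepare the flash partition
mount /dev/sdb1 /mnt
rsync -aHx / /mnt/                              # copy the installed root filesystem
grub-install --root-directory=/mnt /dev/sdb     # put GRUB on the flash drive itself
# then edit /mnt/etc/fstab so / points at the new partition's UUID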