I don't know if this is possible or not, but here is what I would like to do. I have 6 Linux servers, and each has 100GB of disk space. All six boxes are compute nodes, and their disk space is mostly unused. However, if I could combine the hard disks of all 6 servers, 6*100GB would give quite a lot of space in total. Is there any tool or way to mount these drives as one volume instead of mounting them individually?
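One approach that may fit here is a distributed filesystem such as GlusterFS, which aggregates a directory from each node into a single mountable volume. A rough sketch, assuming the hosts are named node1..node6 and each exports a directory /export/brick (hypothetical names; glusterfs-server must be installed and running on all six first):
Code:
gluster peer probe node2     # repeat for node3..node6
gluster volume create bigvol node1:/export/brick node2:/export/brick node3:/export/brick node4:/export/brick node5:/export/brick node6:/export/brick
gluster volume start bigvol
mount -t glusterfs node1:/bigvol /mnt/bigvol   # on any client
This is a plain distribute volume, so losing one node loses the files stored on its brick; replicated layouts trade capacity for safety.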
I need to install a voice recording application on a server. My problem is that I have four hard disks with 150 GB of storage space each, so I have 150*4 = 600 GB of storage available in total. I need all four hard drives (600 GB) as a single volume to store and retrieve recordings.
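Since all four disks sit in the same server, LVM is the usual way to present them as one volume. A minimal sketch, assuming the disks are /dev/sdb through /dev/sde and can be wiped (adjust device names to your system):
Code:
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde              # initialize each disk as a physical volume
vgcreate recordings /dev/sdb /dev/sdc /dev/sdd /dev/sde   # pool them into one volume group
lvcreate -l 100%FREE -n recdata recordings                # one logical volume spanning all the space
mkfs.ext3 /dev/recordings/recdata
mount /dev/recordings/recdata /recordings
Note that this gives no redundancy: if any one disk fails, the whole 600 GB volume is lost, so RAID underneath may be worth considering for recordings.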
I have a 500 GB disk that is split into /boot and /root, on /dev/sda1 and /dev/sda2. Can you tell me whether it is possible to repartition the disk without losing data? Specifically, I need to end up with /boot and two equally sized volumes on the disk.
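In general this is possible by shrinking the existing filesystem first and then repartitioning, but only offline and only with a backup, since a mistake here is destructive. A rough sketch from a live/rescue CD, assuming /dev/sda2 carries an ext3/ext4 filesystem and the 200G target size is just a placeholder:
Code:
e2fsck -f /dev/sda2        # the filesystem must be checked before resizing
resize2fs /dev/sda2 200G   # shrink the filesystem below the intended partition size
# then use fdisk/parted to shrink the sda2 partition to match (not smaller than 200G)
# and create the new partition in the freed space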
I need to copy data from a single HD which used to be part of a Linux RAID 1. I've googled around, but can't find any clue about how to mount partitions from this single HD.
Background: The HD comes from a Linux-based NAS, a Synology DS207+. The NAS uses ext3 as its filesystem. Both NAS disks are fine, but the other NAS hardware is dead and not worth repairing or replacing.
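Linux md RAID 1 members can usually be assembled as a degraded array from just one disk and mounted read-only. A sketch, assuming the disk appears as /dev/sdb and the data partition is /dev/sdb3 (Synology firmware often layers LVM on top, in which case the extra step applies):
Code:
mdadm --assemble --run /dev/md0 /dev/sdb3   # start the array degraded with its single member
mount -o ro /dev/md0 /mnt/nas               # mount it read-only
# if the volume turns out to be LVM-based:
# vgscan && vgchange -ay && mount -o ro /dev/vg1/lv /mnt/nas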
For portability reasons, I am building a standalone kickstart ISO based on CentOS 5.2. I am at the point where I can load my ks file (linux ks=cdrom:/ks.cfg); it reads it fine and performs the install as I want.
Where I am having a problem is finding a good way to have the install use upgraded RPMs rather than the base ones; specifically, a kernel with a few needed tweaks in it, which is packaged in an RPM.
I attempted to place my kernel RPMs into the CentOS directory and rerun createrepo, but I simply managed to corrupt the base repo on the install media.
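The usual catch with CentOS 5 media is that the installer repo carries package group metadata which a bare createrepo run throws away, and that breaks the install. A hedged sketch, run from the root of the media tree (the comps.xml path may differ on your spin):
Code:
createrepo -g repodata/comps.xml .   # regenerate the repo, preserving the group data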
In my production setup, I have 3 servers using the same mount point. However, I see that the IOPS are low. Does this kind of architecture have any impact on IOPS? If it is neutral, how can I tune my setup for better IOPS?
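Before tuning, it may help to measure what the shared mount actually delivers from each server, so the bottleneck can be pinned down. A sketch using fio (assuming it is installed; the path and sizes are placeholders):
Code:
fio --name=iopstest --directory=/shared/mount --rw=randread --bs=4k --size=1G --numjobs=4 --iodepth=32 --ioengine=libaio --direct=1 --group_reporting
If all three servers together saturate the same backing disks, the architecture itself caps the IOPS and the fix is on the storage side, not on the clients.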
I ran yum update on my CentOS 5.6 box a couple of days ago, and following this the system would not reboot. I don't recall the exact error and can't seem to find it logged anywhere, but it was something to do with LVM not being able to find a disk.
In the end, I booted into linux rescue and edited my /etc/fstab file so the system does not try to mount the offending volume group. This lets the system boot, but I need to find out what is wrong with the system and get this volume group accessible again. Here is my edited fstab showing the commented-out line. code...
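From the rescue environment or the booted system, LVM's own scanning tools are the usual first step to see why the volume group has gone missing. A sketch of a read-only diagnosis followed by an activation attempt:
Code:
pvscan          # are the underlying physical volumes detected at all?
vgscan          # rescan for volume groups
vgdisplay -v    # show the VG state and any missing PVs
vgchange -ay    # try to activate the volume group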
I've added a new LUN to my CentOS 5.2 server using PowerPath and have added it to an ext3 logical volume. I extended the logical volume using lvextend, and the new space shows up correctly in lvdisplay. What I'm having problems with is getting CentOS to see the new disk space (df -h shows 500GB, not 600GB as expected). I've tried running resize2fs on the new volume, but it tells me that "the filesystem is already n blocks long. Nothing to do". Does anyone know where I'm going wrong? If possible, I'd like to sort this out without a reboot.
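One common cause of that "nothing to do" message is pointing resize2fs at the wrong node: it has to be run against the logical volume device that is actually mounted (the /dev/VolGroup or /dev/mapper path), not the new LUN itself. A sketch with a hypothetical VG/LV name; ext3 on CentOS 5 can grow online:
Code:
lvdisplay                            # confirm the LV itself shows the extra 100GB
resize2fs /dev/VolGroup00/LogVol00   # grow the filesystem to fill the extended LV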
I've just started playing with virtualization and I started my first VM. I would like to know if it's possible for the host machine to mount the partitions of the VM while it's shut down. Right now the VM uses /dev/vg0/vm1 and has 3 partitions on it. I tried mount /dev/vg0/vm1 ~/vm1 at first, before I remembered that I'd need a way to mount a specific partition inside the logical volume, not the volume itself!
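One way to reach a partition inside a logical volume is kpartx, which reads the partition table in the LV and creates a device-mapper entry per partition. A sketch, assuming the guest is shut down:
Code:
kpartx -av /dev/vg0/vm1             # creates mappings under /dev/mapper (names vary, e.g. vg0-vm1p1)
mount /dev/mapper/vg0-vm1p1 ~/vm1   # mount the partition you want
# when finished:
umount ~/vm1 && kpartx -dv /dev/vg0/vm1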
I believe the server section is the best place for RAID questions...
I have the following situation: We have a DELL T3400 with embedded fake raid on it. I don't know exactly how the system was set up (I wasn't here at that time), but RAID was enabled in the BIOS, and while booting, the two hard drives would be seen as members of intel raid volume0 (RAID 1 mirror). I am not sure whether the software raid was actually properly configured in Linux (Fedora 9) and whether the OS was reconstructing the whole raid, or whether it was just the BIOS part that was mirroring /boot or just some parts of it. Frankly, I find these hybrid raids very confusing. Some bad disk manipulation on my part caused the server to crash, but I was able to recover and boot with just one hard drive after using fsck.
I decided to get rid of the raid, as it's not the right solution for the application we need it for, and decided to go for a traditional single hard drive system and to use Ghost for Linux to clone to a spare disk when backups are needed. So I installed the latest Fedora 12 distribution onto another hard drive and disabled RAID in the BIOS (changed from RAID ON to autodetect, which is the only other option).
Here is what I have now:
/dev/sda has the newly installed Fedora 12
/dev/sdb is an empty hard drive that I would use as an intermediate
/dev/sdc is the old hard drive, a member of intel raid volume0
sdb was partitioned into sdb1, sdb2, and sdb3, and I created an ext3 filesystem on sdb2. The hard drive belonging to RAID volume0 (sdc) has a lot of work done on it, and I would like to be able to recover the files to the new disk (sda). I cannot mount that old hard drive while in Fedora 12, as it sees some unknown raid member filesystem on it, probably assigned by the intel raid chip.
So I decided to do it from the other side: boot from raid volume0 and, from there, mount a third, intermediate hard drive (sdb), onto which I would copy the documents; then mount the same hard drive from the newly installed Fedora 12 and copy those documents off the intermediate drive. I can mount /dev/sdb2 from Fedora 12 fine and copy stuff to and from it, but not when I boot from the RAID volume0 hard drive (sdc) with Fedora 9 on it. It keeps saying that the partition in question (/dev/sdb2) is an invalid block device. I am stuck here, as my knowledge of this sort of thing is very limited. If somebody can show me how to recover the files from that old raid hard drive onto the new Fedora 12 drive, I would appreciate it a lot.
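Since a RAID 1 member is a plain copy of the data, one hedged route from Fedora 12 is to activate the BIOS-RAID metadata with dmraid, or to bypass it and mount the member read-only so nothing on sdc changes. A sketch (the mapper names are examples; check what actually appears):
Code:
dmraid -ay          # activate the Intel fake-raid set, if dmraid recognizes it
ls /dev/mapper/     # look for something like isw_xxxx_Volume0p1
mount -o ro /dev/mapper/isw_xxxx_Volume0p1 /mnt/old
# alternatively, ignore the raid metadata and mount the member partition directly:
# mount -o ro /dev/sdc1 /mnt/old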
Does anybody have any documentation, or can you assist with steps, on how to install an SSO server on CentOS 5.4? We have just over 150 CentOS servers countrywide, and we would like to implement an SSO server to manage users and their login credentials locally and centrally.
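One common stack for this on CentOS 5 is a central directory server (OpenLDAP, or FreeIPA where available) with every host pointed at it through authconfig. A sketch of the client side, with a hypothetical server name and base DN:
Code:
yum install openldap-clients nss_ldap
authconfig --enableldap --enableldapauth --ldapserver=ldap://sso.example.com --ldapbasedn="dc=example,dc=com" --update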
How can I mount an NTFS disk in CentOS 5.4? I did 'yum list | grep -i ntfs', but that doesn't show any ntfs rpms. Does CentOS support NTFS filesystems?
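The stock CentOS 5 repos don't carry NTFS support; it is usually added from a third-party repo such as EPEL via the ntfs-3g FUSE driver. A sketch, assuming EPEL is enabled:
Code:
yum install ntfs-3g
mount -t ntfs-3g /dev/sdb1 /mnt/windows   # adjust the device to your NTFS partition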
I've been at this for a few hours now. I searched the forums, and while I found many similar topics, none were quite the same. The most obvious difference to me is this line: Jun 12 15:43:05 G1093 kernel: sdc: unknown partition table
I have a Windows 2003 server with fiber attached volumes (NTFS) that I would like to mount readonly on a linux system to back it up to tape. The fiber device will allow me to present the volume R/W to one host and R/O to another, however, the R/O system doesn't see any of the changes made by the R/W server. In other words, how can I make a readonly volume refresh, scan for changes, or update without un/re-mounting it?
Is the "mount -o --bind" option what I want? From the MAN is doesn't seem right... the option "sync" seems slightly more promising but I think I'm just grasping at straws here. The best I have come up with is a cron job to unmount then mount the volume periodically.
I want to set up a Linux file server for a small Windows network (around 50 users). I know that I am going to need the smb service/package for that. I haven't used Samba for a while now, and to the best of my knowledge, the entire communication (including usernames and passwords) between a Samba server and Windows client machines will be plain text. Is there any way to secure all this communication?
Secondly, if I remember correctly, MS Windows won't let me mount more than one Samba share as a network disk when my shares are accessed by different smb users with different passwords. Is there a solution to this problem? Or is there another package available for this purpose so that I won't have to use Samba?
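On the first point: Samba has not sent plaintext passwords by default for a long time; authentication uses NTLM challenge/response, and signing/encryption can be enforced on top. A sketch of the relevant smb.conf lines (the "smb encrypt" parameter needs a reasonably recent Samba):
Code:
[global]
    encrypt passwords = yes     # hashed challenge/response, the modern default
    server signing = mandatory  # require signed sessions
    smb encrypt = mandatory     # encrypt the payload as well, where supported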
My Fedora 11 system is not starting any longer. It stops with the message:
Code:
VFS: Can't find ext4 filesystem on dev dm-0
The system had been telling me for a while that a lot of the sectors on one disk of the (software) RAID array had already failed. So I tried to disconnect each of the disks and start from each one separately. Unfortunately this is not working (one of them does not work at all; the other gets exactly as far as with both disks connected). When I tried to recover the system with the Fedora DVD, it said no distribution was found. I am quite new and do not know much about Linux systems, so I do not know what further information you could need. It may be important that both disks are encrypted (the system gets far enough that I can type in the password).
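For this layout (software RAID with encryption, and dm-0 hinting at LVM on top), the usual rescue sequence from a live CD is: assemble the array degraded from the healthier disk, open the encrypted volume, then activate LVM. A hedged sketch with assumed device names:
Code:
mdadm --assemble --run /dev/md0 /dev/sda2   # start the raid from one member
cryptsetup luksOpen /dev/md0 rescue         # enter your passphrase here
vgscan && vgchange -ay                      # activate any LVM inside
mount -o ro /dev/mapper/VolGroup00-LogVol00 /mnt/rescue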
I have a server with two SATA hard disks (500 GB each). I installed CentOS on one of them, and I would like to know how I can mount the other hard disk, which is empty and unformatted, so that it always appears mounted when I reboot the system.
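The usual recipe is to partition and format the disk once, then add an /etc/fstab line so it is mounted at every boot. A sketch, assuming the empty disk is /dev/sdb and /data is the desired mount point:
Code:
fdisk /dev/sdb       # create a single partition, /dev/sdb1
mkfs.ext3 /dev/sdb1  # format it
mkdir /data
echo '/dev/sdb1  /data  ext3  defaults  1 2' >> /etc/fstab
mount -a             # mounts it now; fstab takes care of every reboot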
I did a new lucid install to a 1 TB RAID 1 array using the alternate CD a few weeks back. I messed up that system trying to get some hardware working that lucid doesn't have drivers for yet, so I gave up on it and reinstalled to a single 80 GB disk that I now want to move over to the RAID array.
I moved all of the existing files on the array to a single folder, then copied all of the folders from the 80 GB disk over to the array with permissions and symlinks (minus the contents of /proc and /sys, which I created empty).
These are the commands I used:
Quote:
cp -a -d -R -v -t /media/raid_array /b*
cp -a -d -R -v -t /media/raid_array /d*
cp -a -d -R -v -t /media/raid_array /e*
cp -a -d -R -v -t /media/raid_array /h*
[Code]....
I tried to change fstab to use the 689a... UUID for root, but when I try to boot, it's still trying to open /dev/disk/by-uuid/412d...
So then I booted from the single disk again and chrooted into the array, then ran update-initramfs -u. I got 3 "grep: /proc/modules: No such file or directory" errors, plus "cat: /proc/cmdline: No such file or directory", so I created the directory /proc/modules, created an empty file /proc/cmdline, and ran the initramfs update again. Then I tried to shut down, which hung (probably because I was doing all of this from a terminal window in Gnome), so I killed the power after a couple of minutes.
It's still trying to use /dev/disk/by-uuid/412d... to boot.
What am I missing? I assume I just have to change the UUID to mount as root, but I don't know how.
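Possibly the missing piece is that /proc, /sys, and /dev have to be bind-mounted into the chroot before regenerating anything; empty stand-ins make update-initramfs record the wrong (or no) root UUID, and grub's config needs updating too. A sketch, booted from the 80 GB disk with the array mounted at /media/raid_array:
Code:
mount --bind /dev  /media/raid_array/dev
mount --bind /proc /media/raid_array/proc
mount --bind /sys  /media/raid_array/sys
chroot /media/raid_array
update-initramfs -u   # now sees the real /proc
update-grub           # rewrite boot entries with the array's root UUID
It is also worth checking, inside the chroot, that /etc/fstab and the grub config no longer reference the old 412d... UUID anywhere.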
I have a requirement that seems to be unique in nature. I have multiple clients who are caged (chrooted) to their home directories. I would like to "share" a directory which exists above these chroots with all these caged users. I know this can be accomplished using mounts, but my problem is: how can I mount a single directory to multiple mount points located in each user's home dir? Can this be done in the fstab file?
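Yes: bind mounts can live in fstab, one line per chroot, all pointing at the same source directory. A sketch with hypothetical paths:
Code:
# /etc/fstab - expose /srv/common inside each caged home
/srv/common  /home/alice/shared  none  bind  0 0
/srv/common  /home/bob/shared    none  bind  0 0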
I've got a bit of a question. My network is laid out like this:
The role assignments are thus:
Firewall - sorts out the passing through to the 3 different networks, and acts as the traffic proxy.
Windows 2003 server - does Active Directory and DNS.
CentOS server - FTP and DHCP.
Now, my problem is that I need the CentOS server to be able to assign IP addresses to both networks; however, the CentOS server can *ONLY* be connected to the firewall via the one interface. It needs to assign the Windows 2003 server and eth0 of the firewall IP addresses via static DHCP, but it also needs to be able to assign the clients addresses dynamically from the 10.23.1.0/24 range. I was thinking that I could create static-only assignments for the servers via their MAC addresses and have just one dynamically assignable entry for the clients, and then get the firewall to allow ports 67 and 68 to flow freely between eth0 and eth1, but I wasn't entirely sure of the best way to do all this.
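In ISC dhcpd terms, that translates into a subnet declaration with fixed-address host entries for the servers and a range for the clients; the second network then needs its own subnet block, with the firewall relaying DHCP (e.g. via dhcrelay) between segments. A sketch of dhcpd.conf with made-up MAC addresses:
Code:
subnet 10.23.1.0 netmask 255.255.255.0 {
    range 10.23.1.100 10.23.1.200;        # dynamic pool for clients
    option routers 10.23.1.1;
}
host win2003 {
    hardware ethernet 00:11:22:33:44:55;  # replace with the real MAC
    fixed-address 10.23.1.10;
}
host firewall-eth0 {
    hardware ethernet 00:11:22:33:44:66;
    fixed-address 10.23.1.1;
}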
I am trying to run two web servers (virtual hosts) on a single Linux CentOS 5.5 box with a single IP address, 192.168.0.182. I did all the pre-installation requirements, such as yum install mysql, yum install mysqladmin, service httpd start, service mysqld start, etc. In the /var/www/html directory, I have two folders called server1 and server2. These two folders contain the necessary web server PHP script files and folders. I opened the browser and managed to install the script on one web server successfully. When I put the IP address 192.168.0.182 in the browser address bar, the page loads without any problem. Now I would like to be able to install the other web server script, and I don't know how. Here is my httpd configuration:
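Name-based virtual hosts are the standard way to serve two sites from one IP on Apache 2.2, with the caveat that each site needs its own hostname, since the bare IP can only ever reach one DocumentRoot. A sketch for httpd.conf (the hostnames are placeholders):
Code:
NameVirtualHost 192.168.0.182:80

<VirtualHost 192.168.0.182:80>
    ServerName server1.example.com
    DocumentRoot /var/www/html/server1
</VirtualHost>

<VirtualHost 192.168.0.182:80>
    ServerName server2.example.com
    DocumentRoot /var/www/html/server2
</VirtualHost>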
How do I configure my Debian installation to mount external USB drives to mount points based on the volume names of the drives? For instance, if I have a thumb drive with the volume name "SWORDFISH," how do I have Linux mount it at /media/SWORDFISH? I'm aware that this can be set up in fstab, but that requires that I know the UUID of the device beforehand and that I take the time to set each external device up in fstab first. That does nothing for me when I have a thumb drive that has never been plugged into my computer before.
This seems to be set up by default in Ubuntu/Kubuntu, but it is not working for me with a fresh installation of Debian Squeeze and KDE4. I've spent the past 2 hours Googling for a solution and have turned up nothing. UPDATE: My results are inconsistent. Sometimes Debian mounts devices to mount points based on the volume names, and other times it gives them generic mount points (e.g. /media/usb1).
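For drives whose labels are known in advance, fstab can match on the label rather than the UUID; for never-before-seen devices, it is the desktop automounter (udisks/HAL on Squeeze) that decides, and it normally uses the label when one exists. A sketch of the fstab form (label and filesystem type are examples):
Code:
# /etc/fstab - mount by volume label instead of UUID
LABEL=SWORDFISH  /media/SWORDFISH  vfat  user,noauto  0 0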
I created an encrypted volume on top of software RAID 1. These are my steps:
1. Create a logical partition on sda
2. Create a logical partition on sdb (same size)
3. Change the partition type to 'fd' for both partitions
4. Check that both partitions are the same size and type: fdisk -l /dev/sda && fdisk -l /dev/sdb
5. partprobe
6. Make sure there are no remains from previous RAID installations on /dev/sda and /dev/sdb by running:
mdadm --zero-superblock /dev/sda6
mdadm --zero-superblock /dev/sdb6
14. Mount the encrypted volume:
mount -O noatime /dev/mapper/ftdata /ftdata
It mounts successfully the first time. When I cd /ftdata, I can see the lost+found dir.
Now, I unmount the volume:
Code:
cd ~
umount /ftdata
cryptsetup remove ftdata
And now, if I try to set up my encrypted volume like this:
Code:
[root@localhost ~]# cryptsetup create ftdata /dev/md4
Enter passphrase:
mount -O noatime /dev/mapper/ftdata /ftdata
I get this error: mount: you must specify the filesystem type
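For context on why this bites: plain "cryptsetup create" writes no on-disk header, so any passphrase (or a changed default cipher/hash) is accepted and simply produces a garbage mapping, which is exactly when mount can't find a filesystem. LUKS keeps a header and rejects wrong passphrases outright, so a hedged alternative for the same volume (luksFormat destroys the current contents, so only on a fresh start):
Code:
cryptsetup luksFormat /dev/md4        # one-time: writes the LUKS header
cryptsetup luksOpen /dev/md4 ftdata   # wrong passphrases now fail loudly
mkfs.ext3 /dev/mapper/ftdata
mount -o noatime /dev/mapper/ftdata /ftdata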
I have a network (192.168.x.x) that I want to keep closed and private for the most part. However, I need access to some files generated on the machines in this private network. So I first tried putting two cards in a machine running CentOS 5.2 and connecting one to the private network and the other to the public network. This worked somewhat, but I was not able to see this bridging machine in the private network, because I could not run 2 Samba instances on this machine (I need one for the public network). So I set up Xen on a machine with the 2 NICs and assigned one card to the host dom and the other to the guest dom, which was connected to the private network.
This worked OK, but the only issue was the shared disk space. I couldn't use NFS because each machine operates in a different subnet, and I don't know how to export an NFS drive across domains. So I created a logical volume on a disk and mounted it in both domains. This works sometimes, but at other times I copy files from the private machine to the shared volume and can't see them from the other domain. Also, sometimes the guest domain, which houses the private network server, hangs during boot, saying that the logical volume has been assigned and cannot be mounted.
1) Is what I'm doing, using logical volumes across domains, legitimate (best practice, etc.)?
2) Is there another way for me to achieve what I want (sharing a disk partition across domains)?
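On 1): mounting the same non-cluster filesystem read-write in two domains at once is what produces both symptoms; ext3 assumes a single writer, so each domain caches its own stale view, and concurrent mounts can corrupt it. A network export from whichever domain owns the disk avoids this, and NFS does work across subnets as long as routing and the export list allow it. A sketch (addresses are placeholders):
Code:
# /etc/exports on the domU that owns the volume
/shared  192.168.0.0/24(ro,sync)
# on the machine in the other subnet:
mount -t nfs 192.168.0.50:/shared /mnt/shared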
I'm sure many of you here have worked with disk quotas and lvm2, and my problem involves both. Basically, what I want is that whenever a logical volume gets below a certain constraint (10 GB free), it is automatically resized to add 20 GB. Obviously this can be done rather easily manually, and with a bit of Python hacking it can be done programmatically, but since this is for production use, I was wondering if there was something a bit more fluid. Since this server is I/O intensive, ZFS implemented via FUSE is not an option, and neither is the still-unstable BtrFS.
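Short of a volume manager that does this natively, a small cron-driven script is the usual workaround: check the free space on the LV's filesystem and extend when it drops below the threshold. A sketch with placeholder names (ext3/ext4 grow online, so no unmount is needed):
Code:
#!/bin/sh
# extend-lv.sh: add 20G to the LV when its filesystem has under 10G free
LV=/dev/vg0/data
FREE_KB=$(df -P "$LV" | awk 'NR==2 {print $4}')
if [ "$FREE_KB" -lt $((10 * 1024 * 1024)) ]; then
    lvextend -L +20G "$LV" && resize2fs "$LV"
fi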