Ubuntu Servers :: LVM Storage - Partitions Will Be Formatted
Mar 22, 2010
I want to install Ubuntu Server on my PowerEdge 1955 with EMC storage, but at the partitioning step it says:
"the following partitions will be formatted:
...<here goes all the lvms in the storage>..."
I have already tried all the options in the partitioning step without success.
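In case it helps anyone with the same symptom: one approach is to make the SAN volumes invisible to the partitioner before it runs, so only the local disk is offered for formatting. A minimal sketch, assuming the EMC LUNs are grouped in a volume group (the VG name and the HBA driver name here are hypothetical; check lsmod for the real driver):
Code:
# from a shell in the installer (Alt+F2), deactivate the SAN volume
# groups so the partitioner cannot touch them
vgchange -an emc_vg
# or, more drastically, unload the HBA driver so the LUNs disappear
modprobe -r lpfc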
I'm interested in buying new hardware for my company. The old server (now 10 years old) should be replaced with a new one. So far I have been looking at different hardware suppliers, boards, and various other places. I found a Tyan board [URL]. The hardware spec is quite interesting and the board would fulfill our requirements.
How well will both storage devices be supported by Ubuntu or Debian?
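One practical way to check is to boot the box (or an identical one) from a live CD and see whether the kernel binds a driver to each storage controller; a quick sketch:
Code:
# list storage controllers and the kernel driver bound to each
lspci -nn | grep -i -e sata -e sas -e raid
lspci -k    # the "Kernel driver in use:" lines show actual support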
I can't create any formatted partitions in my hdd (/dev/sda) with Gparted or KDE Partition Manager.
With either I can only create an unformatted partition. When trying to reformat it, I get this: [URL]
Quote:
WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
So the problem is apparently at kernel level. After that, the partition will of course just be shown as "Unknown", even after a reboot. What kind of app could possibly be keeping my partition table busy?
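Some things worth checking when the kernel refuses to re-read the table, all standard tools (the device name matches the post):
Code:
# see which process, if any, holds /dev/sda open
lsof /dev/sda
# stale device-mapper or software-RAID mappings often pin the disk
dmsetup ls
cat /proc/mdstat
# ask the kernel to re-read the partition table without rebooting
partprobe /dev/sda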
I have a question regarding how data is placed on a medium, for example the everyday HDD: when we talk about storage we often speak of heads, sectors and cylinders. My question is whether heads, sectors and cylinders are the true way data exists on an HDD platter.
Let's take for example disk_x: 1000.2 GB = 1000204886016 bytes, 255 heads, 63 sectors/track, 121601 cylinders.
With fdisk -H 128 -S 32 /dev/disk_x each cylinder shrinks to 2097152 bytes and the number of cylinders grows to 476934, and everything will be much more aligned and readable. Or is there something I don't know, and will I lose almost half the total sector count on the HDD because 63-32=31? I asked the partitioner to use only 32 sectors of each track and only 128 tracks per cylinder.
Or another example: if I have a cluster size of 4k, why not make each track use 56 sectors, i.e. 7 clusters? Say, theoretically, every file in my storage occupies 14 clusters; isn't it wiser to set it up as described? What happens when I invoke fdisk with -H and -S parameters? What changes: the disk physically and the way it is accessed, or only the partition table? You are probably asking yourself what on earth this guy wants: I want to get the maximum I/O, the widest bandwidth, the most readable partition tables, and a better understanding of fdisk -H -S.
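For what it's worth, the -H/-S options only change the logical geometry fdisk uses to fill in the CHS fields of the partition table; modern drives are addressed by LBA, so the disk itself and its total sector count are untouched and no sectors are lost. The arithmetic in the post checks out: 128 heads x 32 sectors x 512 bytes = 2097152 bytes per cylinder, and 1000204886016 / 2097152 is roughly 476934 cylinders. A sketch (/dev/sdx is a placeholder):
Code:
# only the table's CHS bookkeeping changes, not the drive
fdisk -H 128 -S 32 /dev/sdx
# the total sector count is a property of the drive, not the geometry:
blockdev --getsz /dev/sdx    # same number before and after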
I have a desktop with Ubuntu and I've set up Samba to share files with my Windows 7 laptop. I can access my home folder just fine, except for my NTFS storage partitions on the desktop's HDD and my home folder's Downloads folder (which times out whenever I try to open it).
Is there an alternative way to share files between Linux and Windows 7?
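Before reaching for an alternative, it may be enough to mount the NTFS partitions where Samba can see them and define explicit shares; a minimal sketch (device, mount point and user are assumptions):
Code:
# mount the NTFS partition read/write with ntfs-3g
sudo mount -t ntfs-3g /dev/sda3 /media/storage
# then share it in /etc/samba/smb.conf:
#   [storage]
#      path = /media/storage
#      read only = no
#      valid users = youruser
sudo /etc/init.d/samba restart   # or smbd, depending on release
If Samba keeps misbehaving, a plain SFTP client on the laptop (e.g. WinSCP) is a workable alternative that needs only the SSH server.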
Trying to take a UK-formatted date (30/12/2010) and insert it into MySQL (2010-12-30) is just not going to plan. I have a feeling I'm getting close, but it's just not working out.
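One way that works is to let MySQL do the conversion at insert time with STR_TO_DATE; a sketch from the shell (the database, table and column names are made up):
Code:
mysql -e "INSERT INTO orders (order_date)
          VALUES (STR_TO_DATE('30/12/2010', '%d/%m/%Y'));" mydb
# STR_TO_DATE('30/12/2010','%d/%m/%Y') yields DATE '2010-12-30'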
I have a large 12 TB RAID array attached to one of my Ubuntu server machines. The RAID volume is formatted with NTFS. The problem is that I cannot mount this volume in Ubuntu; I can read it normally if I attach it to a Windows machine. This is the output from "sudo fdisk -l":
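Two usual suspects on a 12 TB volume: fdisk of that era does not understand GPT (which any volume over 2 TB needs), and the in-kernel ntfs driver is read-only at best, so ntfs-3g is the driver to use. A sketch (device and mount point are assumptions):
Code:
# fdisk cannot read GPT; parted can
sudo parted /dev/sdb print
# mount through the ntfs-3g FUSE driver
sudo mkdir -p /mnt/raid
sudo mount -t ntfs-3g /dev/sdb1 /mnt/raid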
I'm running Ubuntu 9.10 with KVM. I've used a howto to configure my network and it seems to work fine. I've installed Virtual Machine Manager, and when I go to create my virtual machine I see that the image is automatically created in /var/lib/libvirt/images. I have a totally separate path for my images. How do I configure a different image directory?
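One way is to register the separate path as a libvirt storage pool, so Virtual Machine Manager offers it when creating images; a sketch (the pool name and path are assumptions):
Code:
virsh pool-define-as images2 dir --target /srv/vm-images
virsh pool-build images2       # creates the directory if needed
virsh pool-start images2
virsh pool-autostart images2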
I'm planning on setting up a home file server, and I was wondering what platform would be recommended for something like this. The server would be used mainly for media storage, shared between an HTPC and a couple of desktops and laptops. I was thinking of just getting whatever motherboard has the most SATA headers on it (which currently seems to be something P55-based) and setting up a RAID5 fakeraid with some 1.5 or 2TB drives, with the OS in RAID1 on whatever drives I have lying around. Is there anything flawed with this approach? P55 boards with 10 SATA headers are currently upwards of $200, which is kind of pricey. Is there a more economical route that I should consider? Also, are there any known problems with setting up a fakeraid like this using certain motherboards' SATA controllers?
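For what it's worth, Linux software RAID (mdadm) usually beats motherboard fakeraid here: no driver lock-in, the arrays are portable between boards, and any cheap non-RAID SATA controller works. A minimal sketch with hypothetical device names:
Code:
# 4-disk RAID5 for media, plus a 2-disk RAID1 for the OS
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
sudo mkfs.ext4 /dev/md0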
My 11.04 installation is running beautifully in a VM on ESXi. I'm trying to add storage, so I added the disks, assigned them in VMware to the VM, then tried to mount them, at which point I received the error code...
My setup: HP MicroServer booting VMware from a USB drive. Hardware RAID card (Adaptec RAID 2405), 2 x 250GB HDD in RAID1 (datastore1) holding the VMs, 2 x 2TB HDD in RAID1 (datastore2), the storage I'm trying to add.
A search for the above errors yielded many results, but all of them were different scenarios from mine.
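Since the original error text is elided above, only the generic path for a newly attached virtual disk can be sketched: it shows up as a raw, unpartitioned device that cannot be mounted until it has a partition table and a filesystem. Assuming the new disk appears as /dev/sdb:
Code:
sudo fdisk -l                 # confirm the new disk is visible
sudo fdisk /dev/sdb           # create a partition -> /dev/sdb1
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/storage
sudo mount /dev/sdb1 /mnt/storage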
I am trying to disable USB storage on our servers. What I did is the following:
1. modprobe -vr usb_storage
2. blacklist usb_storage
It is working fine, but root can load the module into the kernel again (modprobe -v usb_storage). I want to restrict this as well: my requirement is that not even root can access USB storage.
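A common trick that also defeats a plain modprobe by root is an "install" override, though root can still insmod the .ko file directly, so for a hard guarantee the module file itself has to be removed (or the kernel built without it). A sketch:
Code:
# /etc/modprobe.d/usb-storage.conf
# "install" replaces the load action, so "modprobe usb_storage"
# silently does nothing, even for root
install usb_storage /bin/true
# for a harder block, delete the module file as well:
#   rm /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko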
I need some advice, tips, or your own experiences about building home data storage or a NAS. Here are some thoughts/requirements I think it should meet:
- It should be expandable. I'll stick in a couple of 1TB HDDs and a little later I'll add some more.
- It should integrate easily with both Ubuntu and Windows 7. Ideally it'll be an integrated part of the filesystem.
- I'm thinking of some sort of RAID as a backup for my data. RAID 1 seems like such a waste, but then again, these days HDDs are cheap.
- When I do add more HDDs, I'd like them to appear as one big storage unit instead of separate drives.
Any suggestions and tips on how to go about this are welcome. Questions are plenty: should I go with server hardware, or is a bigger ATX case and standard hardware enough? I'll need some pointers, so keep 'em coming.
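On the "one big storage unit" point, LVM on top of mdadm RAID is the usual answer: the RAID gives redundancy, and the volume group grows as disks are added. A sketch with hypothetical devices:
Code:
# initial pair of 1TB disks as RAID1, pooled with LVM
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo pvcreate /dev/md0
sudo vgcreate nas /dev/md0
sudo lvcreate -l 100%FREE -n data nas
# later, add another mirrored pair and grow the same volume:
#   mdadm --create /dev/md1 ... && pvcreate /dev/md1 &&
#   vgextend nas /dev/md1 && lvextend -l +100%FREE /dev/nas/data &&
#   resize2fs /dev/nas/data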
I have collected a number of computers over the years, and now I would like to put them to good use. I considered UEC, but many of them do not support hardware virtualization, and all I really need is storage. Across all the machines, I estimate that I have 4-5 terabytes of storage, all going to waste because each one has relatively little storage space. Is there any way I could set up a redundant storage solution that utilizes these machines in a networked system?
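One option that fits this fairly well is a distributed filesystem such as GlusterFS, which aggregates a directory (a "brick") from each machine into one network volume; a sketch with hypothetical hostnames:
Code:
# on one node, after installing glusterfs-server everywhere:
gluster peer probe host2
gluster peer probe host3
gluster volume create pool host1:/export/brick host2:/export/brick \
    host3:/export/brick
gluster volume start pool
# clients then mount the aggregate:
mount -t glusterfs host1:/pool /mnt/pool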
I run Debian on my old computer to use it as a server. Everything is configured properly so that it functions as a web server. Now that summer is coming closer I will not be home most of the time, and I was thinking of using part of my server to upload/download files. Is there some nice package that provides an easy interface for such a task? I am referring to something like the wikimedia package, but for just downloading/uploading files.
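If a full web app is overkill, WebDAV over the existing Apache gives upload and download with just a config block; a sketch (the paths and auth file are assumptions):
Code:
sudo a2enmod dav dav_fs
# then in a vhost:
#   Alias /files /srv/files
#   <Location /files>
#       DAV On
#       AuthType Basic
#       AuthName "files"
#       AuthUserFile /etc/apache2/dav.passwd
#       Require valid-user
#   </Location>
sudo htpasswd -c /etc/apache2/dav.passwd me
sudo /etc/init.d/apache2 reload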
I'm trying to delete directories (long story: Mac temp files there, Windows not cooperating) on a server connected to an HP Modular Smart Array 20 set up as RAID5. The system is currently running Windows. I've booted from a 9.10 LiveCD but can't see the external drives. Is it correct that I need to install mdadm to "see" those drives from the LiveCD? From a different (Linux) machine I can mount the drive using Samba like so:
I have admin privileges on the Windows OS. In Linux (or in Windows beforehand), is it possible to take ownership of the directories so that I can do rm -f -r <dir>?
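A note that may save time: a Smart Array is hardware RAID, so mdadm (Linux software RAID) is not involved at all; the LiveCD just needs the cciss driver loaded, after which the array appears as a single block device, and ntfs-3g (which ignores NTFS ownership by default) lets root delete freely. A sketch, the device names being the usual cciss defaults:
Code:
sudo modprobe cciss
ls /dev/cciss/                 # expect c0d0, c0d0p1, ...
sudo mount -t ntfs-3g /dev/cciss/c0d0p1 /mnt/array
sudo rm -rf /mnt/array/path/to/dir   # path is a placeholder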
I am trying to create a Linux cluster with two servers and one storage array. I have mounted some filesystems on both servers, but when I create a file on server (1), I don't see the file on server (2).
Is that a problem with the Linux configuration or the storage configuration?
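For what it's worth, this is the expected symptom of mounting an ordinary (non-cluster) filesystem like ext3 on two servers at once: each node caches its own metadata, and simultaneous mounts will eventually corrupt the volume. Shared block storage needs a cluster filesystem; a sketch with GFS2 (the cluster and volume names are hypothetical):
Code:
# two journals (-j 2), one per server, DLM locking
mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 2 /dev/mapper/sharedlv
# each node then mounts it through the cluster stack
mount -t gfs2 /dev/mapper/sharedlv /data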
Can ANYONE point me in the right direction on how to use storage on multiple servers as a single cluster? I thought a storage cluster was for that but, after much googling, and even more help from here, I don't think that achieves my goal. My goal is to have multiple servers share a filesystem, acting as somewhat of a network RAID, so that if node A goes down the files are available on the other nodes, and so that when the capacity of the nodes is reached I can add nodes to expand the "cluster".
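A replicated GlusterFS volume matches this description fairly closely: files are mirrored across nodes, a dead node doesn't lose data, and add-brick grows the pool later. A sketch with hypothetical node names:
Code:
gluster volume create vol replica 2 nodeA:/export/brick nodeB:/export/brick
gluster volume start vol
# later, expand capacity with another replicated pair:
gluster volume add-brick vol nodeC:/export/brick nodeD:/export/brick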
I have one SCSI storage array, a JetStor SATA 416S, split into two halves; each is a 12TB RAID5. Is it possible to dedicate each half to a different RHEL5 server? I have an LSI22320-R Ultra320 dual-channel SCSI adapter in each server.
The first server sees the two halves, however the second server doesn't. On the first server:
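Assuming the cabling and any LUN masking on the JetStor check out, a bus rescan on the second server is the first thing to try; a sketch (the host number and the mpt driver name are assumptions):
Code:
# rescan all targets/LUNs on the first SCSI host
echo "- - -" > /sys/class/scsi_host/host0/scan
# or reload the LSI driver entirely
modprobe -r mptspi && modprobe mptspi
dmesg | tail                   # watch for the new LUNs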
I am going to install Oracle RAC on two servers with shared SAN storage (servers and storage are IBM). OS = RHEL 5u5 x64.
We used a multipathing mechanism and created multipath devices, i.e. /dev/mapper/mpath1. Then I created a raw device, /dev/raw/raw1, on top of this /dev/mapper/mpath1 block device, as per the prerequisites for Oracle Clusterware. Everything looks good, but we ran into the performance issue below.
When we run the command #dd if=/dev/zero of=/dev/mapper/mpath1 bs=1024 count=1000 the write rate is approx. 34 MB/s, but if we run #dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=1000 the write rate is very slow, around 253 KB/s.
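The 100x gap is mostly the measurement, not the storage: writes to the block device land in the page cache, while the raw device does synchronous, unbuffered I/O, so bs=1024 means a separate physical write per kilobyte. Comparing like with like, or using a sane block size, tells a different story:
Code:
# bypass the cache on the block device for a fair comparison
dd if=/dev/zero of=/dev/mapper/mpath1 bs=1024 count=1000 oflag=direct
# and give the raw device a realistic block size
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=100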
My goal is to connect a bunch of CentOS 5.6 servers to a storage array which is running as an SRP target. These servers will eventually run as Xen hypervisors. I'm using Mellanox InfiniBand cards and a QLogic InfiniBand switch.
OFED drivers are already installed as standard. I have done the following:
1. Base install of CentOS 5.6 x86_64
2. Installed the following packages...
openib ibutils srptools infiniband-diags opensm
[Code]...
It works perfectly and connects to the target immediately with great results. I'd just like this to happen correctly on startup.
To work around the issue I have added the line srp_daemon -e -o to the init script for opensm (/etc/init.d/opensmd), but it doesn't always start the srp_daemon process. It's rather hit and miss.
I'd really like srp_daemon to run as a service with a proper init script, but I wouldn't know how to write one. I understand that this issue has been addressed in the RHEL6 srptools package, but that doesn't help me because the software I intend to use on these servers specifically requires 5.
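For anyone stuck the same way, a minimal SysV-style wrapper is not much work; this sketch just runs the same command the post already uses (the srp_daemon path is an assumption, and -o makes it exit after one scan, so "stop" only matters if you drop that flag):
Code:
#!/bin/sh
# /etc/init.d/srpd -- minimal init script sketch for srp_daemon
# chkconfig: 2345 26 74
# description: logs SRP targets into the local initiator at boot
# install with: chkconfig --add srpd && chkconfig srpd on
case "$1" in
  start)
    /usr/sbin/srp_daemon -e -o
    ;;
  stop)
    killall srp_daemon 2>/dev/null
    ;;
  restart)
    $0 stop; $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac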
Every time my kernel is upgraded (which happens often), I reboot and forget to reinstall the VirtualBox Guest Additions, so my partitions in fstab that use vboxsf (for shared folders) are hosed.
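One fix is to install dkms before the Guest Additions: the Additions installer registers its modules with dkms, which then rebuilds vboxsf automatically for every new kernel. A sketch (the installer's file name and mount point vary slightly between VirtualBox versions):
Code:
sudo apt-get install dkms
# then (re)run the Guest Additions installer once from the mounted ISO:
sudo /media/cdrom/VBoxLinuxAdditions.run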
I'm using Ubuntu Server 64-bit on an Intel Atom D410. When I use my SATA drive in AHCI mode and move 4 GB files from one partition to another, my server gets so slow that it takes two minutes for me to log in over SSH. I have seen a thing or two about a bug on this, so I changed AHCI mode to IDE mode and now it seems to work better.
I want to install the 10.04 LAMP server on a WinXP Pro machine with several hard drives, all NTFS. Will the LAMP server be able to use the NTFS partitions, or do I need to change them over? There are several hundred gigs of data on the hard drives and I do not relish the job of converting them. Also, can you just mount the NTFS shares that are on the Windows machine from the Ubuntu machine, using an intranet-type setup?
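The short version: Ubuntu itself needs a Linux filesystem to install onto, but the NTFS data drives can stay as they are and be mounted read/write with ntfs-3g, and Windows shares can be mounted over the network with cifs. A sketch (devices, share name and credentials are assumptions):
Code:
# local NTFS data disk
sudo mount -t ntfs-3g /dev/sdb1 /srv/data
# NTFS share on the WinXP machine, over the LAN
sudo mount -t cifs //winxp/share /mnt/share -o username=user,password=pass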
I have an Ubuntu 9.10/64 server configured with LVM on a 1TB SATA disk in my home server. For a simple server, LVM adds complexity if you need to recover your system, so I decided to migrate from LVM to normal partitions. I have another 1TB disk to copy my system disk to. On the backup disk I created same-size partitions and transferred the files from /boot and the root volume with rsync, preserving permissions etc. and excluding /dev, /proc, /sys, etc. I then modified /etc/fstab to mount the new UUIDs and reinstalled GRUB via live CD and chroot.
At this point, when I replace my system disk with the backup one, the boot process starts as usual, but when it starts loading services the boot does not continue. The system is not "frozen": if I press Enter, there is a line feed on screen. If I mount the backup disk on another system, there is nothing written to /var/log/syslog or dmesg during the failed boot. I can change terminals with <Alt>F2 but there is no login prompt. On the console, with the backup disk I get:
Code:
....type 1505 audit.... operation="profile_load"..... Done.
fsck from util-linux-ng 2.16
/dev/sda1: clean ......
fsck from util-linux-ng 2.16
/dev/sda5: clean ......
* Stopping remote control daemon(s): LIRC
* Loading LIRC modules
lircd-0.8.6[1057]: lircd(default) ready, using /var/run/lirc/lircd
* could not access PID file for nmbd
And with the original system disk:
Code:
....type 1505 audit.... operation="profile_load"..... Done.
fsck from util-linux-ng 2.16
/dev/sda5: clean ......
fsck from util-linux-ng 2.16
* Stopping remote control daemon(s): LIRC
* Loading LIRC modules
lircd-0.8.6[1057]: lircd(default) ready, using /var/run/lirc/lircd
/dev/mapper/coll-root: clean ......
* could not access PID file for nmbd
* Setting preliminary keymap...
Ubuntu 9.10 coll tty1
coll login:
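One thing worth ruling out in a migration like this: the initramfs was built for an LVM root and may still carry assumptions from it, so regenerating it (and GRUB) from a chroot on the backup disk is cheap insurance. A sketch, using the partition layout visible in the logs above, though the device names are still assumptions:
Code:
# from a live CD, with the backup root on /dev/sda5 and /boot on /dev/sda1
sudo mount /dev/sda5 /mnt
sudo mount /dev/sda1 /mnt/boot
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt update-initramfs -u -k all
sudo chroot /mnt grub-install /dev/sda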
I'm setting up an Ubuntu server to replace my aged Pentium IV Slackware box. It's a Dell Inspiron 560 with a modest Core 2 Duo processor, 8 gigs of RAM, and a pair of good-sized hard disks. I came upon a good deal on a couple of 40-gig Intel SSDs and I'd like to use one in the server, for the relatively invariant stuff, because SSDs write slowly and are life-limited in the number of writes. So: /bin /usr/bin /boot /etc /lib /usr/lib /usr/local/lib /mnt /opt
The best way IMHO to achieve this would be to make the SSD the root, and mount hard drive partitions/filesystems onto it at places such as: /var /media (here you read and write giant files; hard disks do this just fine, especially if one particular hard drive is dedicated to it) /root /home /tmp
A quick "df" yields a list of filesystems. There are four that are not tied to any device: /dev /dev/shm /var/run /var/lock
(df also discloses that the root filesystem presently stands at 502 megs, so I guess it'll fit in a 40-gig SSD.) These deviceless filesystems worry me. Are they created magically on boot? What's required to make the system magically create them on boot? If I copy the filesystem over to the SSD and redo the GRUB config, will it Just Work? Web searches reveal subtleties WRT mount points.
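To the worry itself: those four entries are RAM-backed virtual filesystems (udev's /dev plus tmpfs mounts) that the boot scripts recreate on every boot, so there is nothing on disk to copy and nothing extra to configure. A sketch of what an fstab for the SSD/HDD split might end up looking like (devices are placeholders):
Code:
# /etc/fstab sketch: SSD holds /, HDD holds the write-heavy trees
/dev/sda1  /      ext4  noatime  0 1   # 40GB SSD
/dev/sdb1  /var   ext4  defaults 0 2   # HDD
/dev/sdb2  /home  ext4  defaults 0 2   # HDD
/dev/sdb3  /media ext4  defaults 0 2   # HDD, giant files
tmpfs      /tmp   tmpfs defaults 0 0   # keep /tmp off the SSD entirely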
I am trying to set up cluster storage with the Rocks Cluster operating system. I have installed Rocks on the main server and I want to connect a client via PXE boot. When I start the client, it boots via PXE into compute-node mode, but it asks where the OS is to be found. I gave it the path, but then the server says it is not able to find the directory path.
Steps: 1) insert-ethers 2) the client is started with PXE boot 3) it detects DHCP 4) at last it asks where to start from: CD-ROM, hard disk, NFS, etc.
I then chose NFS and gave it the LAN IP of the server. The server detects the client, but the client cannot find the filesystem directory in the export partition; it is not accepting the path.
PATH: /export/rock/install. It is not finding this path, so it is not able to start the OS from PXE boot. Is there any solution or manual for Rocks, or any other solution you can suggest?
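Two quick checks on the frontend usually settle this: whether the install tree is really where the node is told to look (on Rocks 5.x it normally lives under /export/rocks/install, with an "s", if I remember right), and whether that tree is actually exported over NFS:
Code:
ls /export/rocks/install       # standard Rocks 5.x install tree
exportfs -v                    # is /export actually exported?
showmount -e localhost         # what the clients can really see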
Setup1: Two rack mounted servers with a common storage device serving as the home directories for users on the servers. The storage device is a gfs partition mounted on the servers as the home directory using SAS cables. These servers have RHEL 5.4 as the installed operating system.
Setup2: A standard tower server with Debian 6 as the operating system used for tape backups. This has a tape drive connected to it.
Question: how do I mount the storage device of setup1, using NFS, on the server in setup2?
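The gfs mount can be re-exported from one of the setup1 servers like any local filesystem; a sketch (paths and subnet are assumptions):
Code:
# on a setup1 (RHEL 5.4) server, in /etc/exports:
#   /home 192.168.1.0/24(ro,sync,no_subtree_check)
exportfs -ra
# on the setup2 (Debian 6) server:
mount -t nfs server1:/home /mnt/backup-src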
I used Ubuntu before without problems, but since the 10.04 version it won't recognize my partitions. I formatted my laptop and partitioned it, installed Windows 7 64-bit, which I need for my work, and now wanted to install Ubuntu 10.04/10. I then used GParted to check my hard disk, and it too has trouble recognizing my partitions, while Windows finds them. GParted is giving me an error message saying my partitions are oversized. I am still at the beginning of my Linux experience, so I don't know what to do. I have two 250GB hard disks (as Windows recognizes them),
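"Oversized" partition complaints from GParted are quite often caused by stale metadata (fakeraid signatures, or a damaged or inconsistent partition table) rather than by the partitions themselves; two low-risk diagnostics before changing anything (the device name is an assumption):
Code:
sudo dmraid -r /dev/sda        # report any leftover fakeraid metadata
sudo parted /dev/sda print     # what libparted itself sees
# only if dmraid reports stale metadata and the disk is NOT fakeraid:
#   sudo dmraid -rE /dev/sda   # erases that metadata; destructive, be sure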