I recently installed Ubuntu. I've got 2 x 500 GB HDDs, mounted and partitioned, but I don't have read/write access to them; only root does. How can I get access to save files, create folders, etc.?
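In case it helps, here's a minimal sketch assuming the partitions are ext4 and mounted at /media/disk1 and /media/disk2 (hypothetical mount points; substitute your own):

Code:
# Take ownership of everything on each drive as your own user:
sudo chown -R $USER:$USER /media/disk1 /media/disk2
# Or keep root as the owner but open up permissions instead:
# sudo chmod -R a+rwX /media/disk1

Note that chown/chmod only work this way on Linux filesystems such as ext4; NTFS or FAT partitions need uid/gid options at mount time instead.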
I made the change as I salvaged an old, old computer and got it back into working order. Windows 7 kills the machine, and the media being served is sluggish and slow.
The computer specs are as follows: Asus A8N32-SLI Deluxe (BIOS 1303), Asus Nvidia EN210.
My reality distortion field went through a polarity shift today, and nothing's been working like it should. After the 10,000th problem, I decided to just make room for a fresh Lucid install and the accompanying stability. It worked well, but then I got some sort of boot error upon reboot, so I decided to just reinstall. The problem is that neither of my two internal HDDs is registering. Currently sda is an 80 GB drive with a Slackware install; sdb is a 120 GB drive with a 60 GB Mint partition and (from the previous install) a 60 GB Lucid partition. I've got no important data on any of my partitions yet, but also no net connection. I have install DVDs for Lucid, Mint KDE 9, and Slackware, plus a GParted Live disc. I'm seriously considering making this machine a Slackware standalone, but then I've got legacy NVIDIA drivers to deal with.
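For what it's worth, a quick way to check whether the kernel sees the disks at all (a sketch, run from the GParted Live disc or any live session):

Code:
# List every block device the kernel detected:
sudo fdisk -l
# Check the kernel log for disk detection messages:
dmesg | grep -i 'sd[ab]'

If nothing shows up here, the problem sits below the installer: cabling, the BIOS drive mode (IDE/AHCI), or the controller itself.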
I'm new to the world of Ubuntu 10.10. My PC had Windows Vista running on an 80 GB HDD, and on Friday 03/12/10 I decided to install Ubuntu 10.10 on a second 160 GB HDD. I wrongly assumed I could simply have 2 HDDs in my PC and it would magically allow me to choose between Windows Vista and Ubuntu 10.10. Well, that was 50 hours ago and I still can't get it to work. As you can see, I have the results of boot_info_script055.sh below.
Code:
Boot Info Script 0.55    dated February 15th, 2010

Boot Info Summary:

 => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive
    in partition #1 for (,msdos1)/boot/grub.
 => Windows is installed in the MBR of /dev/sdb.

sda1: File system:       ext4
      Boot sector type:  -
      Boot sector info:
      Operating System:  Ubuntu 10.10
      Boot files/dirs:   /boot/grub/grub.cfg /etc/fstab /boot/grub
.....
I have Vista installed on a 500 GB drive and recently added a 320 GB hard drive. How do I install Ubuntu onto the 320 GB HDD and be able to dual boot the 2 operating systems? Also, how do I keep myself from getting the "symbol 'grub_puts' not found" error when updating to 10.04?
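As far as I know, the grub_puts error usually means the GRUB boot code in the MBR and the files in /boot/grub are from mismatched versions. Reinstalling GRUB to the boot drive after the upgrade generally clears it (a sketch, assuming sda is the BIOS boot drive):

Code:
# Re-sync the MBR boot code with the installed GRUB files:
sudo grub-install /dev/sda
sudo update-grub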
I'm having an issue with formatting the HDDs during the CentOS 5.5 installation. I've set up a hardware RAID1 with 2 TB HDDs using a FastTrak TX4650 (http:url...) as the RAID controller. Then I tried to install the CentOS 5.5 32-bit version with this configuration. I preloaded the required RAID drivers for CentOS and started the installation. Everything went fine until the part where CentOS formats the HDD: it has taken 12 hours and the process is still going. I cannot determine whether the install process is stuck or the HDD is still being formatted. How long should the formatting take, considering the above-mentioned configuration?
I have a new machine arriving tomorrow and plan on installing Ubuntu 10.04 x64 and Windows 7 Professional. I've only ever had a single HDD before, but now I have 2 x 640 GB drives. Does it matter which OS I install first? Will I have to change anything relating to the HDDs' boot order in the BIOS? I got 2 HDDs specifically so that, in the event of needing to reinstall one of the OSes, they're on completely different drives. Also, in the eventuality that I need to reinstall one of the OSes, is it simply a normal reinstall procedure, or, because they're on two separate drives, will I need to do anything different?
I'm 'trying' to dual boot WinXP and Ubuntu 10.10 from 2 separate HDDs. Currently I'm on attempt number 6, but my patience wore thin about 2 days ago. Here's the current state of play: I have WinXP running fine (it was installed first), and I have Ubuntu running fine if I use a boot loader disc. I don't have any boot options at all; if I let the PC boot naturally, it just loads XP as normal. I followed this guide to the letter: [URL]
(For those that don't want to read the link: it has you create 4 partitions on a drive: 1. NTFS for WinXP (not touched); 2. a Linux partition; 3. a linux-swap partition; 4. a FAT32 "osshare" partition. Then, if you get no boot options, it has you create an ubuntu.bin file, move it to C:, and edit boot.ini to include it in the options.) But all the difference it makes is that I now get boot options; I press Ubuntu, and my computer sits with a blinking cursor for as long as you let it.
Partition 1 is the first partition, so Windows finds it nicely. Before the install, I unplugged my HDDs so that GRUB wouldn't get confused. I told the installer to put the full install on sda3 with no swap space. I checked (via the Advanced button on the summary page) that the bootloader is being installed to the USB on /dev/sda (sda, since no other drives are attached). This should put it in the MBR (I think?).
Seems like I pressed all the right buttons, huh? Is there a way to diagnose GRUB and see what's wrong? Is there a reason GRUB might not initialize properly from a USB drive?
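One thing worth trying (a sketch, assuming the stick shows up as /dev/sda with the install on /dev/sda3 when booted from live media):

Code:
# Reinstall GRUB to the USB stick's MBR from a live session:
sudo mount /dev/sda3 /mnt
sudo grub-install --boot-directory=/mnt/boot /dev/sda
# (older GRUB 2 releases want --root-directory=/mnt instead)
sudo umount /mnt

Also check that the BIOS is actually set to boot USB-HDD first; many boards silently fall back to the internal drive.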
I have two HDDs: one is an 80 GB OS drive (parallel ATA) and the other is a 1 TB storage drive (SATA). After each reboot they swap between /dev/sda and /dev/sdb, over and over: one boot the OS drive is /dev/sda and the next it's /dev/sdb, and the same goes for the second drive. This makes it difficult to set up fstab so it will mount the large storage drive on boot.
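The usual way around this is to mount by filesystem UUID, which stays the same no matter which /dev name the drive lands on. A sketch (the UUID below is made up; get the real one from blkid):

Code:
# Find the storage drive's UUID:
sudo blkid
# Then reference it in /etc/fstab instead of /dev/sdb1:
UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/storage  ext4  defaults  0  2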
I'm having a little trouble with an mdadm RAID array at the moment, in which the four hard drives in the array change their /dev/sdb /dev/sdc /dev/sdd /dev/sde placement on every reboot.
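In principle the shuffling shouldn't matter, since mdadm assembles arrays by their array UUID rather than by device name; pinning the array in mdadm.conf makes that explicit (a sketch, with a made-up UUID):

Code:
# Print the array line mdadm would use:
sudo mdadm --detail --scan
# Example output (UUID made up) to append to /etc/mdadm/mdadm.conf:
# ARRAY /dev/md0 metadata=0.90 UUID=12345678:9abcdef0:12345678:9abcdef0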
I have my current computer set up with 2 HDDs in it: a 500 GB with GRUB and Windows 7 on it, and a 160 GB HDD with Ubuntu on it.
I would like to somehow replace Windows 7 with Ubuntu on the 500 GB drive, but I'm not sure how I would be able to do this and still keep GRUB and such.
EDIT: I'd like to do this without re-installing anything.
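One possible approach without a reinstall (very much a sketch; it assumes the 500 GB drive is /dev/sda, the current Ubuntu install is on /dev/sdb1, and you repartition sda as a single ext4 /dev/sda1, all of which may differ on your box):

Code:
# From a live session: copy the existing install onto the 500 GB drive.
sudo mkfs.ext4 /dev/sda1            # this destroys Windows!
sudo mkdir -p /mnt/old /mnt/new
sudo mount /dev/sdb1 /mnt/old
sudo mount /dev/sda1 /mnt/new
sudo rsync -aAXH /mnt/old/ /mnt/new/
# Fix the root UUID in /mnt/new/etc/fstab (see blkid), then:
sudo grub-install --boot-directory=/mnt/new/boot /dev/sda

Afterwards, chroot into the copy and run update-grub so grub.cfg matches the new UUIDs.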
I have a NAS box that runs Ubuntu Server and Samba. 4 of the HDDs are in RAID5 (/dev/md1), and I've configured them to spin down after 10 minutes of inactivity. This filesystem is mounted on /share/Media. The other 2 HDDs are NOT configured to spin down. My other computer runs Ubuntu Desktop. I'm mounting the entire /share folder (located on the NAS) using this entry in /etc/fstab:
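Purely as an illustration of the shape such a line takes (this is NOT the poster's actual entry; 'nas' is a placeholder hostname):

Code:
# Hypothetical example only:
nas:/share  /share  nfs  defaults,_netdev  0  0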
I have a question regarding how data is placed on media, for example the HDDs we use daily. When we talk about storage, we often speak in heads, sectors, and cylinders. My question is whether heads, sectors, and cylinders are the true way data exists on an HDD platter.
Let's take for example disk_x: 1000.2 GB = 1000204886016 bytes, 255 heads, 63 sectors/track, 121601 cylinders.
With fdisk -H 128 -S 32 /dev/disk_x, each cylinder shrinks to 2097152 bytes and the number of cylinders grows to 476934, but everything becomes much more aligned and readable. Or is there something I don't know, and will I lose almost half of the total sector count on the HDD, since 63 - 32 = 31? I asked the partitioner to use just 32 sectors from each track and only 128 tracks per cylinder.
Or another example: if I have a cluster size of 4K, why not make each track use 56 sectors, i.e. 7 clusters? If, theoretically, every file in my storage occupies 14 clusters, isn't it wiser to set things up as described? What happens when I invoke fdisk with -H and -S parameters: what changes, the disk physically and the way it is accessed, or only the partition table? You're probably asking yourself what the hell this dude wants: I want maximum I/O, the widest bandwidth, the nicest readable partition tables, and a better understanding of fdisk -H -S.
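As a sanity check on the arithmetic above (the geometry numbers come straight from the post):

Code:
# bytes per cylinder = heads * sectors/track * 512 bytes/sector
echo $((255 * 63 * 512))            # 8225280  (default 255/63 geometry)
echo $((128 * 32 * 512))            # 2097152  (with fdisk -H 128 -S 32)
echo $((1000204886016 / 2097152))   # 476934 cylinders at the smaller size

Worth noting: on any drive made in the last couple of decades, CHS geometry is a fiction the drive presents for compatibility. The OS addresses sectors linearly (LBA), so -H/-S only change how fdisk lays out the partition table, not how the platter is physically accessed, and no sectors are lost.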
I have a site that users upload files to. It's on a dedicated server with 2 HDDs, and the first HDD is 97% full. Is it possible to use the other HDD for the files users upload? If so, how?
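One common pattern (a sketch; the device and upload path below are assumptions, so adjust to your setup):

Code:
# Format and mount the second drive (skip mkfs if it already holds data!):
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/disk2
sudo mount /dev/sdb1 /mnt/disk2
# Move the uploads over and leave a symlink behind:
sudo mv /var/www/site/uploads /mnt/disk2/uploads
sudo ln -s /mnt/disk2/uploads /var/www/site/uploads

Alternatively, mount /dev/sdb1 directly on the uploads directory; either way, add the mount to /etc/fstab so it survives a reboot.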
I am having problems seeing SAS drives using a Supermicro AOC-SAS2LP-H8IR adapter. The operating system is CentOS 5.6, 64-bit version, installed on a SATA drive, and the motherboard is an Intel "Classic Series" "Rockfish" G43 board, socket LGA775. From the OS I cannot see the drives. The BIOS does see the PCI card, and it ends there.
I'm running Ubuntu Server 9.10. I have two external USB HDDs. I use them each for different backup reasons. So certain data gets stored on one HDD, and different information gets stored on the other HDD.
I want to make a script that can look at an external HDD and determine which HDD it is, so that it can copy the proper information to it. Is there a way for Linux to determine this? Like, if I see one HDD as /dev/sdc1, then unplug it and plug in the other HDD, will Linux see it as /dev/sdd1 or will it be /dev/sdc1?
I don't quite understand how it determines the /dev/sdXX values that it assigns to drives.
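Device names are simply handed out in detection order, so they're not reliable for this; filesystem UUIDs are. A sketch of the idea (the UUIDs below are made up; get the real ones from 'sudo blkid', and run the script as root):

Code:
#!/bin/bash
# Hypothetical UUIDs for the two backup drives:
BACKUP_A="11111111-aaaa-bbbb-cccc-dddddddddddd"
BACKUP_B="22222222-aaaa-bbbb-cccc-dddddddddddd"

if [ -e "/dev/disk/by-uuid/$BACKUP_A" ]; then
    mount "/dev/disk/by-uuid/$BACKUP_A" /mnt/backup
    # ... copy the data set that belongs on drive A ...
elif [ -e "/dev/disk/by-uuid/$BACKUP_B" ]; then
    mount "/dev/disk/by-uuid/$BACKUP_B" /mnt/backup
    # ... copy the data set that belongs on drive B ...
fi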
After reading Jeff Atwood's recent blog post on solid state drives, I'm somewhat deterred from wanting to own one. I basically want to use solid state drives in my home network for the following purposes (all machines running 64-bit Linux):
My main (pwn3r) desktop computer. This will be my main workstation for work, video encoding, etc. It will be running an Intel 980X 6-core processor, making it a beast. My hard disk configuration will be:
RAID-0: 2 Crucial 128GB Solid State drives for the main operating system(s), essentially providing 256GB of incredibly fast storage.
RAID-1: 2 WD 2TB Hard Disk drives for media and backup storage.
My network firewall computer. This will be running Untangle on my home network for content filtering and firewalling (if that's a word). It will be running an Intel Atom D525 dual core 1.8GHz processor. The hard disk configuration will consist of a single small 16-32GB solid state drive for the operating system and little, if anything, else.
My home HTTP/SFTP/file/backup server. This will be running a dual-core Intel i3 processor; it will be used for some video encoding, as a local DLNA server, an HTTP server for a few largely static files and perhaps some interactive scripts, an SSH server, possibly OpenVPN, and to back up critical files over the network. It will be running RAID-X (where X > 0), meaning RAID-1, RAID-5, or RAID-6, for fast, redundant data storage, as well as a small SSD for the operating system.
I'm not exactly made of money, and I can't really count on buying four new SSDs every year or so. I can understand replacing them in computer number 1 once a year... maybe, but for the other computers, which won't be utilizing the drives very much (i.e., they're not power machines), it seems ridiculous to buy new drives this often.
My question is this: can I actually depend on solid state drives like I would on hard disk drives? Also, is this the best economic option? I'd like to save as much power and heat as I can, and solid state drives seem to be the best option at this point.
Currently, I have CentOS 5.4 installed across 2 HDDs in my PC. I have bought a larger-capacity HDD and would like to clone/image everything over and retain my settings and preferences. How can I do it?
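If the goal is just a block-for-block copy of one old disk onto the new one, dd from a live CD is the blunt instrument (a sketch; it assumes the old disk is /dev/sda and the new one /dev/sdb, and that sdb is at least as large; double-check with fdisk -l first, since getting if/of backwards destroys the source):

Code:
# Boot a live CD so neither disk is in use, then:
sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
# Afterwards, grow the last partition and filesystem (e.g. with
# fdisk/parted plus resize2fs) to use the new disk's extra space.

With two source disks in play (e.g. LVM spanning both), a file-level copy with rsync onto a freshly partitioned disk is usually the saner route.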
120 GB - OCZ Vertex 3 MAX IOPS
300 GB - Western Digital VelociRaptor (10k RPM, about 4 ms avg. seek)
2 x 2 TB - Samsung EcoGreen F4
The system will be running Ubuntu with the main purpose of doing lots of Java development. Occasionally I have to develop Java in a Windows VM; for this I need fast VMs. I've read a lot about SSD wear, and maybe it is a bad idea to put the Eclipse workspace on the SSD because of all the little writes the builds do. Perhaps the workspace (and thus /home) would find a better place on the VelociRaptor, which is really fast. How should I partition the whole thing to get the most out of it? LVM might be an option, too. Maybe a third partition on the SSD for one VirtualBox image. Currently I am thinking:
SSD: 2 GB /boot, remaining space for /
VelociRaptor: LVM spanning the whole drive; 150 GB /home, remaining space for /virtualMachines or something like that
Samsung drives: LVM over both, or one volume group for each? (The latter would be better in terms of data security, because if one drive in a big volume group fails, everything is lost.)
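For the VelociRaptor part of that plan, the LVM setup might look like this (a sketch; the device names are assumptions):

Code:
# Assume the VelociRaptor is /dev/sdb with one full-disk partition:
sudo pvcreate /dev/sdb1
sudo vgcreate vg_raptor /dev/sdb1
sudo lvcreate -L 150G -n home vg_raptor
sudo lvcreate -l 100%FREE -n virtualMachines vg_raptor
sudo mkfs.ext4 /dev/vg_raptor/home
sudo mkfs.ext4 /dev/vg_raptor/virtualMachines

The same pattern applies to the Samsung pair, either as one VG over both PVs or one VG per drive.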
The system never uses more than about 250GB of HDD at any one time, so I would like to remove /dev/sdd1 and /dev/sdc1 from the LV and then from the machine and leave /dev/sda and /dev/sdb alone.
Does anyone know whether, if I use system-config-lvm to reduce the total LVM size to, say, 580 GB, all data is preserved? (I don't have any way at this stage to back up 250 GB of data, unless I buy more HDDs, and that is the whole point of this exercise anyway: removing the two terabyte drives to be used as backup disks.)
Once I am happy that the data is safe, I will use pvmove and vgreduce to remove both terabyte drives.
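For reference, the removal sequence would look roughly like this (a sketch; the VG name VolGroup00 and the device names are placeholders, and the LV/filesystem must already be shrunk so the remaining disks can hold all allocated extents):

Code:
# Migrate all extents off the disk being removed:
sudo pvmove /dev/sdc1
# Drop it from the volume group, then wipe its PV label:
sudo vgreduce VolGroup00 /dev/sdc1
sudo pvremove /dev/sdc1
# Repeat for /dev/sdd1, then the drives can be pulled.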
I am attempting to set up an NFS partition for use by a number of users. I would like to use quotas to ensure equitable use of the HDD's resources. However, I keep getting the following error:
I would like to know the best way to dual boot an already-installed Win7 HDD after adding a second HDD to which I will install Mint 9.
I attempted this in the past with Mint 8, but managed to screw it up somehow with Mint 8's beta GRUB 2! So bear with me if I am skittish about repeating a "conventional" GRUB bootloader selection approach!
This time I would prefer to install Mint 9 to its own HDD with Win7 disconnected if possible, installing Mint's GRUB bootloader directly to the Mint HDD, just to ensure Win7's MBR isn't affected by the Mint 9 installation, keeping each OS and its bootloader completely separate. Of course, then comes the question of how to access my new Mint 9 installation, since reconnecting my Win7 HDD (with its MBR) will make it the default, with no knowledge of any Mint installation.
Would a third-party bootloader such as EasyBCD be the way to go? Or am I overcomplicating what I would like to accomplish here? The main thing is NOT having to upset my twice-installed Win7 installation again!
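The disconnect-and-install plan works fine in practice; the missing piece is just how to choose between drives at boot. Two low-risk options (a sketch; /dev/sdb is an assumed name for the Mint drive):

Code:
# Option 1: no software changes at all. Use the BIOS one-time boot menu
# (often F8/F11/F12) or set the Mint drive first in the BIOS boot order;
# GRUB on the Mint drive can then chainload Windows:
sudo update-grub    # run in Mint with both drives attached (uses os-prober)

# Option 2: keep Windows' MBR in charge and add a Mint entry in EasyBCD
# from inside Windows, pointing it at the Mint drive.

Either way, Win7's own MBR never gets touched.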
I'm running Debian 8.2 and trying to set things up so I can plug in a couple of external hard drives that will be used to sync data between systems using rsync.
I've got the rsync bit working how I want; that's not an issue. But what I can't seem to get working properly is that when I plug the devices in, they don't mount automatically.
I've tried various methods to no avail so far: systemd.automount in fstab doesn't seem to want to work (for some reason it gives an I/O error), and I've tried setting up udev rules and they don't work either, so I'm at a bit of a loss now.
Not sure what info to provide that would be relevant at this time, but I can add logs as required easily enough.
This machine is headless, so command-line-only suggestions would be best. I can access X via the network if I have to, but I'd rather do it via the CLI for ease of access.
My fstab file:
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=9b4e9dae-ea53-439a-a7fe-87c371c03803 /    xfs   defaults    0    1
# /home was on /dev/sda9 during installation
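For comparison, a working systemd automount entry for one of the externals typically looks like this on Debian 8 (the UUID and mount point are made up; noauto plus the x-systemd options keep boot from hanging when the drive isn't plugged in):

Code:
UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111 /mnt/backup1 ext4 noauto,x-systemd.automount,x-systemd.device-timeout=10 0 2

After editing fstab, 'systemctl daemon-reload' makes systemd regenerate the mount units, and the I/O error mentioned above is worth chasing in 'journalctl -u mnt-backup1.automount'.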
Currently running Slackware 13.37 64-bit on a notebook, and I finally have suspend/hibernate working after realizing that USB devices, especially USB HDDs, need to be disconnected before suspend/hibernate can work. The problem is that I have 2 USB HDDs connected to the notebook for extra storage whenever it is stationary, so I'd like to create a script, invoked at suspend time, that stops the suspend/hibernate process if certain partitions are mounted. I know what I would like to accomplish, but I have basic scripting knowledge, so I was hoping to get some assistance.
1. The script would store a user-specified string containing devices that are non-USB, e.g. NONUSB="/dev/sda /dev/sdb"
2. Possibly use /etc/mtab to get a list of what is currently mounted, then remove lines containing whatever is specified in $NONUSB and store the remaining values in $USB
3. Run a for loop that executes 'umount' on each token in $USB
3a. Stop the suspend/hibernate process if 'umount' fails at any point
3b. If 'umount' passes, then suspend/hibernate (a rough sketch of the whole thing follows below)
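A minimal sketch of those steps, with the suspend command left as a placeholder at the end:

Code:
#!/bin/bash
# Step 1: non-USB devices to leave alone (user-specified).
NONUSB="/dev/sda /dev/sdb"

# Step 2: collect mounted /dev/sd* devices that don't match $NONUSB.
USB=""
while read -r dev mnt rest; do
    case "$dev" in /dev/sd*) ;; *) continue ;; esac
    keep=1
    for n in $NONUSB; do
        case "$dev" in "$n"*) keep=0 ;; esac
    done
    [ $keep -eq 1 ] && USB="$USB $dev"
done < /etc/mtab

# Step 3/3a: unmount each; abort the suspend on any failure.
for dev in $USB; do
    umount "$dev" || { echo "umount $dev failed, aborting suspend" >&2; exit 1; }
done

# Step 3b: everything unmounted cleanly; suspend (placeholder command).
echo mem > /sys/power/state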
I would like to build a NAS from a PC (D510MO) running Debian. I have two HDDs (one 3.5" 1 TB and one 2.5" 500 GB). On the 3.5" HDD I already have two partitions, 100 MB + 40 GB, dedicated to Win7-64. Now I want to install Debian (as a second OS) on this PC and have some kind of soft RAID or disk mirror of 500 GB of space. I am planning to create a third partition of 500 GB on the 3.5" HDD (identical to the 2.5" HDD's size) in order to have a mirrored 500 GB space.
Please send me some suggestions on where I should install Debian: on the 500 GB 2.5" HDD or the 500 GB 3.5" HDD? Will Debian boot from both HDDs (3.5" or 2.5") after I create the mirror? What Linux software should I use for the mirroring (mdadm)?
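mdadm is indeed the usual tool for this. A sketch of the mirror itself (the partition names are assumptions: /dev/sda3 for the new 500 GB partition on the 3.5" disk, /dev/sdb1 for the 2.5" disk):

Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb1
sudo mkfs.ext4 /dev/md0
# Persist the array so it assembles at boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

If Debian itself lives on the mirror, install GRUB to both disks' MBRs so the machine can boot with either drive dead.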