General :: Filesystems - Write Files To HFS+ Drive From System?
Aug 22, 2011
Is it possible to write/edit files on an HFS+ drive from Linux? I know I need to disable journaling, but how can I disable journaling from Linux? I don't have access to a Mac.
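From what I've read, the Linux hfsplus driver has a force mount option that will mount a journaled volume read-write (at some risk to the journal). A minimal sketch, assuming the HFS+ partition shows up as /dev/sdb2 (a placeholder):
Code:
# force a read-write mount of a journaled HFS+ volume; /dev/sdb2 and /mnt/mac are examples
mount -t hfsplus -o force,rw /dev/sdb2 /mnt/mac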
My home server runs Debian Lenny, and I'm about to upgrade the system drive to a larger drive. In the process, I want to take the opportunity to reorganize the partitions and resize them. For learning purposes, I'm planning to migrate from an MBR partition table to GPT.
Because of those two changes, I can't just run "dd if=/old/drive of=/new/drive" (well, not without lots more work afterwards). I could use the debootstrap process to get a fresh installation on the new system drive, but I used that technique during the last system upgrade and it's probably overkill for this.
Can I just copy the partitions from the old drive to the new? Will "dd if=/dev/hda1 of=/dev/hdb2" work, assuming /dev/hdb2 is larger than /dev/hda1? (If so, the filesystem can be resized to take advantage of the new larger partition, right?) Would parted (or gparted) be a better tool for copying the contents of the partitions?
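For what it's worth, this is the dd-then-resize approach I have in mind, assuming the source partition holds an ext3/ext4 filesystem (device names as in the question):
Code:
# copy the old partition into the (larger) new one, then grow the filesystem
dd if=/dev/hda1 of=/dev/hdb2 bs=4M
e2fsck -f /dev/hdb2     # a forced fsck is required before resizing
resize2fs /dev/hdb2     # with no size given, grows to fill the new partition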
I have a 500GB external drive I want to use on a couple of Linux systems, and I'm looking for a filesystem for it. External drives are frequently formatted as FAT32, but I don't need to interoperate with Windows and would rather avoid the ugly, limited kludge that is FAT.
Since I only need to use it on Linux, I would use ext4 or XFS, but they store ownership information. Ideally, I'd use a proper Unix filesystem that doesn't track ownership (files are owned by whoever mounts the device, like they are when mounting a FAT32 partition), but I do not know of any filesystem that does that. What would be a good filesystem for this disk?
I've been using *Unix systems for many years now, and I've always been led to believe that it's best to partition certain dirs into separate filesystems, off the main root FS.
For instance, /tmp /var /usr etc
Leaving as little as possible on the main / system.
It's so that you don't fill up the root filesystem by accident, by some user putting in too-big files in /tmp, for example.
I would presume that filling the / system would not be too good for Linux, as it would not be able to write logs and possibly other things that it needs to.
I believe that if root gets full, then there is something like 5% of the space saved just for 'root' to write to, so that it can do its stuff.
However, eventually, / will become full, and writes will fail.
On top of this, certain scripting tools, such as awk, use /tmp to store temporary files, and awk won't be able to write to /tmp if it's full, so awk will fail.
However, I'm being advised that there is no need to put /tmp, /var etc. onto separate FSs, as there is no problem nowadays with / filling up. So, /tmp, /var and /usr are all on the root FS.
I'm talking about large systems, with TBs of data (which is on a separate FS), a user population of around 800-1000 users, and 24/7 system access.
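For reference, the kind of layout being discussed might look roughly like this in /etc/fstab (device names and sizes here are placeholders, not a recommendation):
Code:
# / kept small, with the directories that tend to fill up on their own filesystems
/dev/sda1  /      ext4  defaults               0  1
/dev/sda2  /var   ext4  defaults               0  2
/dev/sda3  /tmp   ext4  defaults,nosuid,nodev  0  2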
/media/A and /media/B should be identical, but I want to confirm before deleting one.
Duplicate file finders don't work, because they'll find two copies of the same file within B, for instance. I only want to confirm that every file in one is identical to the other.
diff -qr /media/A/ /media/B/ seems to work, but the output is cluttered with garbage like
diff: /media/A//etc/alternatives/ControlPanel: No such file or directory
and
File /media/A//dev/tty8 is a character special file while file /media/B//dev/tty8 is a character special file
I can suppress the former with 2> /dev/null, but I don't know about the latter.
rsync -avn /media/A/ /media/B/ also produces a bunch of clutter, like "skipping non-regular file".
How can I compare the two trees and just make sure that all the real files exist in both and are identical?
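One approach I'm considering, assuming GNU coreutils/findutils: checksum only the regular files on one side and verify them against the other.
Code:
# checksum every regular file under /media/A, then verify the list against /media/B
( cd /media/A && find . -type f -print0 | sort -z | xargs -0 md5sum ) > /tmp/A.md5
( cd /media/B && md5sum -c --quiet /tmp/A.md5 )   # prints only files that differ or are missing
This only proves that everything in A exists and matches in B; I'd run it again the other way round to catch files that exist only in B.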
I want to write a shell script which will simultaneously collect OS user information and write it to individual text files. Can anyone tell me the syntax of the script? N.B. The user names will be listed in an array within the shell script.
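A minimal sketch of what I have in mind (the user names and output paths below are just placeholders):
Code:
#!/bin/bash
# collect information for each user in parallel, one output file per user
users=(alice bob charlie)            # placeholder names
for u in "${users[@]}"; do
    {
        id "$u"                      # uid/gid and group membership
        getent passwd "$u"           # passwd entry (shell, home directory, gecos)
        last -n 5 "$u"               # recent logins
    } > "/tmp/${u}_info.txt" 2>&1 &
done
wait                                 # wait for all background jobs to finish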
I have purchased a DLink Sharecenter Pulse NAS as my PC failed. I wanted to put the two SATA drives in and extract the data before formatting to use as JBOD or RAID. However, before I managed to access my data, the setup software started formatting the drives. I switched it off immediately.
Purchased a SATA-to-USB2 lead and connected to my work laptop, but I cannot see the drive(s). Used Partition Magic, and each drive has 3 partitions, one showing about 74GB as used and 2 x 512MB as not. Looks like the drives had been partly set up for Linux but the format was not complete. I have tried to use explore2fs to read them, but I cannot access the 74GB partition, only one of the 512MB partitions.
I'm a bit stuck now - has anyone got any ideas how I can get my files off the 74GB partition before I put the drives back in the NAS to format?
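My rough plan from a Linux machine, assuming the big partition really is a Linux filesystem (the device names below are guesses based on how the drive happens to show up):
Code:
# identify the partitions and filesystem types, then try a read-only mount
fdisk -l /dev/sdb
blkid /dev/sdb*
mkdir -p /mnt/recovery
mount -o ro /dev/sdb2 /mnt/recovery   # read-only, so nothing gets overwritten while I look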
I need to install Windows on my computer, but I don't want to lose my Linux either. So what I wonder is: since my filesystem is ext4, which filesystem can be used by both operating systems? Do I need FAT32 or NTFS, or can I still use ext4 under Windows? I have a large collection of movies on one partition, and if I have to convert it, that will be kind of a pain.
I have a 2TB USB drive which I use as a backup device - I dump two filesystems onto it, totalling around 1TB. However, doing the dump trashes my F11 system, making it basically unusable, not only during the dump but also afterwards. I have 8GB of RAM, all of which is needed and normally in use, but when dumping, the system starts hogging huge amounts of it as buffer space - up to 1GB of RAM is reported to be allocated. And rather than using free memory for buffer space, it seems to aggressively swap processes out to get it. The system tends to melt down as a result, and just switching virtual desktops can take 5 minutes.
But after the dumps finish, the problems continue - the system is currently trying to keep around 700-800MB free, and continually swapping out processes to do so, even after the buffer space in use has gone back to about 100MB. This seems like strange behaviour for a fairly common type of activity. Presumably a lot of the buffer space is used to store what is being read from the filesystem (which will never be needed again), and some is used to cache the writes to the USB drive which is slower than the internal hard drives.
I have spent a lot of time trying changes to some of the kernel parameters, after reading articles about them. Of all the ones I've tried, setting vm.dirty_ratio to 1 instead of 5 helps a bit, and setting vm.dirty_background_ratio to 5 instead of 20 makes some improvement I think. Setting vm.swappiness to 0 doesn't seem to help at all.
So my question (at last!) is - how can I back up my filesystems without my system dying? In particular, can I limit the space used for buffers somehow, or turn off buffering for the dump process? And why does dumping result in the system artificially keeping huge amounts of space free afterwards, so that I have to reboot to make the system usable again?
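For reference, this is roughly what I've been trying (the sysctl values are the ones mentioned above, and the dump command is a placeholder for my real backup job):
Code:
# write out dirty pages sooner so the page cache doesn't balloon
sysctl -w vm.dirty_ratio=1
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.swappiness=0
# run the backup at idle I/O priority so the desktop stays responsive
ionice -c3 dump -0u -f /mnt/usb/root.dump /dev/sda2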
I am having a problem writing to an NTFS pendrive. I have created the NTFS pen drive in the following way:
Code: fdisk /dev/sda - created the label with 'o', then wrote the table with 'w'.
I then went into fdisk again: Code: fdisk /dev/sda - started partition creation with 'n', chose 1 partition ('1'), then wrote that with 'w'.
I then used mkntfs to format: Code: mkntfs /dev/sda1 The blkid command gives me this output: /dev/sda1: UUID="58CEA9511D6BCEFA" TYPE="ntfs"
I can mount the pendrive (as root) with: Code: mount -t ntfs /dev/sda1 /mnt/pendrive and the mount command output: /dev/sda1 on /mnt/pendrive type ntfs (rw)
I have changed the permissions on /mnt/pendrive (while mounted) to 777, owner/group=root. However, when I try to copy something to the drive I get this error: cp: cannot create regular file `/mnt/pendrive/file.txt': Permission denied
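I'm wondering whether the old in-kernel ntfs driver is the issue, since it has very limited write support (it cannot create new files), and whether I should be mounting with ntfs-3g instead. A sketch, assuming the ntfs-3g package is installed and uid 1000 is my normal user:
Code:
umount /mnt/pendrive
# ntfs-3g gives full read-write support; uid/gid/umask map the files to a local user
mount -t ntfs-3g -o uid=1000,gid=1000,umask=022 /dev/sda1 /mnt/pendrive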
I'm using Ubuntu 10.04. I downloaded an old script for starting/shutting down a service I have, and evidently "initlog" doesn't exist anymore. What is the correct way to write to the boot (system?) log?
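Is the logger command the right replacement? Something like this is what I've been trying (the tag name is just an example):
Code:
logger -t myservice "myservice starting"   # writes to syslog (/var/log/syslog on Ubuntu)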
When I mount an external USB drive on Linux (CentOS 4), the permissions are by default set to read-only. Since there are multiple users on the computer who need to use the external drive, I want everybody to have rw permission for the entire drive. I also want them to be able to mount the drive if the computer has accidentally been shut down. They can use sudo mount to mount the drive, but this will only give them read permission, and I obviously don't want to allow sudo chmod.
Is there a default setting that I can change so that every new external usb disk automatically gets rw permissions?
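One thing I've considered is a fixed /etc/fstab entry, assuming the drive is FAT-formatted and always appears at the same device node (both of which are assumptions on my part):
Code:
# /etc/fstab: any user may mount/unmount, everyone gets read-write (vfat umask=000)
/dev/sdb1  /media/usbdrive  vfat  noauto,users,umask=000  0  0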
I'm trying to backup netbook files to an external optical drive. I can read discs but not write. A while back I tried using K3b but it did not see the external drive. Now it does, but tells me write access is needed and quits. I am in the cdrom group.
Does memtest86 write any data to the hard drive, or leave any data in memory once the test is complete? If so, does turning off the machine or rebooting clear the memory?
I installed Linux on a laptop once. I'd like to do a dual boot, but I've never done it before. Is there anything special I need to do? I don't want it to write over my current operating system.
I have a very, very insane problem with my SSD SATA hard disk. I filled the hard disk, and Thunderbird complained about "no space left on device". But even if I delete some files from the disk, df will still say 0 blocks free. It will, however, decrease the number of used blocks. So it looks like it is deleting the files and freeing the blocks, but it doesn't put the blocks back into the free pool.
But here is where things get insane: if I log in with my normal user, I get "no free space" when I try to write to the disk. But if I log in as root, I can write to the filesystem, despite the fact that df is saying 0 blocks free. I tried running fsck -f, but it just ran its tests and then said everything is fine. However, it ran for less than 10 seconds - is this expected on a 40GB SSD partition?
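Could this be the reserved-blocks feature of ext2/3/4 (a percentage of blocks that only root may allocate)? A way to check and adjust it, assuming the partition is /dev/sda1 (a placeholder):
Code:
tune2fs -l /dev/sda1 | grep -i 'reserved block'   # show the reserved block count
tune2fs -m 1 /dev/sda1                            # lower the reservation to 1% of the filesystem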
I dual boot Ubuntu 10.10 and Windows XP, and accidentally some files on the XP system drive got deleted; now I cannot boot into Win XP. These are the files left, and nothing happened to the folders....
I have installed a cable that connects from the CPU's SATA motherboard connection to a removable drive's eSATA connection. I would like to be able to swap drives on the eSATA connection and have all users be able to read and write to these drives. I have created the directory /archive/ where I would like the drive(s) to mount. The drives are all formatted FAT32 - but in the future I may use HFS for formatting.
When I used the command (as root): mount /dev/sdc1 /archive the drive was mounted (but read only).
What can I use in my /etc/fstab file that will allow drives to be mounted and unmounted by all users on the system (both reading and writing)? Also, will I be able to mount and unmount these drives without shutting down, or will I need to reboot every time I want to change drives?
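What I've sketched so far for /etc/fstab, assuming the drive always appears as /dev/sdc1 (which may not hold when swapping drives):
Code:
# /etc/fstab: FAT32 drive in the eSATA bay, mountable/unmountable by any user, read-write for everyone
/dev/sdc1  /archive  vfat  noauto,users,rw,umask=000  0  0
Users would then run "mount /archive" and "umount /archive" themselves.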
I'm trying to write a script that searches my files and lists them by date. Can someone point me in the right direction? I've been looking through the books that I have, but I'm just not finding the right commands to search by date.
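The closest I've found so far is GNU find's -printf (the path below is a placeholder):
Code:
# list files with their modification date and time, newest first
find /path/to/search -type f -printf '%TY-%Tm-%Td %TT  %p\n' | sort -r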
Can you please do me a favor and let me know how I can write *.iso image files onto USB memory sticks, the way we burn them to CD to make a bootable CD to boot from? Is there any command under Linux for this purpose?
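The only thing I've found so far is writing the image straight to the raw device with dd (the device name is an example, and the whole stick gets overwritten):
Code:
# write the ISO to the USB device itself (not a partition), then flush buffers
dd if=image.iso of=/dev/sdX bs=4M
sync
From what I've read, this only produces a bootable stick if the ISO is a hybrid image.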
I have to write a shell script that will delete all the .dat files in /var/oracle/etl/incoming whose creation date is 7 days before the current date.
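A sketch of what I have so far, using the modification time since Linux filesystems generally don't record a creation date:
Code:
#!/bin/bash
# delete .dat files last modified more than 7 days ago
find /var/oracle/etl/incoming -name '*.dat' -type f -mtime +7 -exec rm -f {} +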
I have a program that is very heavily hitting the file system, reading and writing randomly to a set of working files. The files total several gigabytes in size, but I can spare the RAM to keep them all mostly in memory. The machines this program runs on are typically Ubuntu Linux boxes.
Is there a way to configure the filesystem to have a very, very large cache, and even to cache writes so they hit the disk later? I understand the issues with power loss and such, and am prepared to accept that. Crashing aside, in normal operation the writes should eventually reach the disk! Or is there a way to create a RAM disk that writes through to a real disk?
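What I've experimented with so far is letting far more dirty data accumulate before writeback (the values below are just experiments, not recommendations):
Code:
# let dirty (unwritten) pages grow larger and stay in RAM longer
sysctl -w vm.dirty_ratio=80                 # % of RAM that may be dirty before writers block
sysctl -w vm.dirty_background_ratio=50      # % of RAM before background writeback starts
sysctl -w vm.dirty_expire_centisecs=60000   # dirty pages may stay unwritten up to 10 minutes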