Ubuntu / Apple :: Data Size Too Large For Disk?
Apr 28, 2011
I am trying to burn a Mac OS X 10.5 install disc from a 6.7GB DMG disk image. I thought I would use two 4.7GB DVD-R discs for this burn, hoping that when the first was full it would ask for another to finish. Instead I get a message that the DVD will not hold the chosen DMG file.
Can I do anything besides buy a dual layer DVD that would hold the whole file?
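The numbers themselves rule out a single-layer disc; a quick sanity check (capacities here are the usual nominal marketing figures, so treat them as approximate):

```shell
image_mb=6700        # the DMG
dvd_sl_mb=4700       # single-layer DVD-R
dvd_dl_mb=8500       # dual-layer DVD+R DL

[ "$image_mb" -gt "$dvd_sl_mb" ] && echo "too big for one single-layer disc"
[ "$image_mb" -le "$dvd_dl_mb" ] && echo "fits on one dual-layer disc"
```

Spanning two discs only works for plain data; an OS install disc has to boot from one contiguous image, so the realistic options are a dual-layer blank, or restoring the DMG to a USB stick or external drive and booting from that.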
View 1 Replies
Mar 31, 2010
I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are:
I/O size: The database engine's native I/O size should either match the disk's native size, or be a multiple of it.
DMA: Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
Write-caching: When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.
I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all. I want to be able to check these things and change them if needed. The actual hardware involved is very modest. The point is to get the most out of what hardware we do have, even though it's "not very serious hardware" from a broader perspective.
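A sketch of how one might check these on a Linux box; the device name /dev/sda is an assumption, the commands need root, and the sysfs sector-size files only exist on newer kernels (2.6.32+), so CentOS 5 may lack them:

```shell
DISK=/dev/sda   # hypothetical device name: substitute your own
if [ -b "$DISK" ] && command -v hdparm >/dev/null 2>&1; then
    hdparm -d "$DISK" || true   # DMA on/off (IDE/PATA drives)
    hdparm -W "$DISK" || true   # current write-cache state
    # hdparm -W0 "$DISK"        # disable write caching so "written" means on-platter
fi
# Native sector sizes, if the kernel exposes them in sysfs:
cat /sys/block/sda/queue/logical_block_size \
    /sys/block/sda/queue/physical_block_size 2>/dev/null || true
```

Disabling the write cache with `-W0` costs write throughput, which is exactly the trade a database that trusts fsync() usually wants.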
View 1 Replies
View Related
May 3, 2010
The application that is giving me the most grief, as far as its dialog box being oversized for the screen, is KDE BasKet. How do I adjust it?
I have looked at the config file in the KDE home directory, but I am not seeing anything that looks like it controls the properties dialog box.
View 1 Replies
View Related
Mar 17, 2011
A client brought in a 160GB external HDD and wanted to get the files off it. There appeared to be no partitions on the disk, but I thought it may have been formatted to use the whole disk. I tried to mount it as the various FS types the client thought it might have been, to no avail.
I ran testdisk on it, which told me that it previously had a Mac partition table and a 210GB partition on it (which is larger than the disk). Could anyone enlighten me as to whether this is even possible, and if so, how I could retrieve the data?
View 2 Replies
View Related
Jun 17, 2011
CanoScan LiDE 210 running under 10.10 on a Tosh Tecra M11-130 laptop. Currently trying out xsane to archive some paperwork in monochrome, as the bundled Simple Scan utility can only save in either colour or greyscale. The problem is that the same A4 page saved as monochrome has a file size about three times larger in Ubuntu than in Windoze.
The scan mode options are either 'Colour', 'Greyscale' or 'Lineart'. There is no 'halftone' setting available, as shown in some of the xsane manuals. Don't know whether this is significant to this issue. Xsane's main option window shows 3508 x 2480 x 1 bit for a 300 dpi A4 monochrome scan, but the intermediate file size is 8.3MB instead of just over 1MB before packing for the PDF. This is consistent with each pixel being recorded not as a single 1 or 0, but as a greyscale 11111111 or 00000000, i.e. stored in an eight-bit field. How do I tweak xsane to produce true monochrome intermediate .pnm files and saved PDFs?
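The arithmetic supports that diagnosis; using the pixel counts from xsane's own info window, a packed 1-bit file should be roughly a megabyte while a byte-per-pixel file lands right at the observed 8.3MB:

```shell
w=2480; h=3508                  # pixels for a 300 dpi A4 scan (from xsane)
packed=$(( w * h / 8 ))         # true 1-bit lineart: 8 pixels per byte
unpacked=$(( w * h ))           # one byte per pixel, as xsane appears to write
echo "$packed bytes packed vs $unpacked bytes unpacked"
```

As a workaround, if netpbm is installed, `pgmtopbm` can convert the 8-bit intermediate .pnm into a packed 1-bit .pbm before building the PDF.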
View 5 Replies
View Related
Oct 6, 2009
I think I am having a problem due to an NFS server file size limit. Is it possible I am missing a parameter on the RHEL NFS setup to handle large files? I am running an NFS server on a RHEL 5.1 machine and the HP-UX 11.0 machine does an NFS mount to that file system. The HP-UX executes a program that resides on the HP-UX machine to process a large 35 GB data file that resides on the NFS server machine. The program on the HP-UX can only read/process the first portion of the file until an "RPC: Authentication error" is returned multiple times until the program prematurely decides that it has reached the end of file.
I tried recompiling the same program to run on the RHEL 5.1 NFS server to access the 35 GB file locally (on the NFS server instead of on HP-UX) and the program completed successfully, processing the whole file (about 7 hours of processing) with no "RPC: Authentication error." In addition, I have been running the NFS mount with the same machines for quite some time, but not with such large file sizes.
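Two things worth ruling out before blaming file size alone are the NFS version and transport, since mid-file RPC auth errors sometimes trace to credential re-verification. A hedged sketch of the mount options; hostnames and paths are placeholders, and the HP-UX option syntax should be checked against its own man page:

```shell
# On the HP-UX client, pin NFSv3 over TCP with larger transfer sizes
# (hostnames and paths are placeholders):
#   mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
#         rhelserver:/export/data /mnt/data
# On the RHEL server, confirm the export options and auth flavor:
#   cat /etc/exports
#   exportfs -v
```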
View 3 Replies
View Related
Jan 2, 2011
A friend has sent me an .iso by FTP; the only trouble is it is 7GB and all my blanks are 4.7GB.
Any simple suggestions? I am totally inexperienced working with DVDs, so not sure how to resize without ruining the content!
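If the discs only need to carry the data (not boot from it), the ISO can be split into disc-sized chunks, each chunk burned as an ordinary data file, and the original rejoined later with cat. A demo on a 10MB stand-in file; at full size the split would be `-b 4480M` per 4.7GB disc, and the filename is made up:

```shell
cd /tmp
dd if=/dev/zero of=big.iso bs=1M count=10 2>/dev/null   # stand-in for the 7GB image
split -b 4M big.iso big.iso.part_                       # use -b 4480M for real discs
cat big.iso.part_* > rejoined.iso                       # later: rejoin, then burn/mount
cmp big.iso rejoined.iso && echo "pieces rejoin losslessly"
rm -f big.iso big.iso.part_* rejoined.iso
```

The rejoined file is bit-identical, so nothing about the ISO's content is at risk; the pieces just aren't mountable individually.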
View 6 Replies
View Related
Jan 28, 2010
When I installed opensuse 11.2 64-bit (KDE) the installer set the root partition to 20GB by default. That seemed unnecessarily large, so I reduced it to 16GB. I then completed the install (basically a default KDE install minus games & educational stuff) and still had more than 8GB free. I'm aware that these days hard drive storage space is quite cheap, but it's not so cheap for me as I have an SSD. Would it not be reasonable to reduce the default root partition size to 12GB, or perhaps vary it according to the software package load selected?
View 3 Replies
View Related
Apr 3, 2010
I have 24" dual monitors with 1920x1080 resolution on both of them. Consequently the text appears so small. I use the following text-intensive applications frequently:
Web browser (Google Chrome)
IDE (Komodo)
Terminal (Gnome Terminal)
Email (Thunderbird)
I can configure text size on IDE, Terminal and Email. But for Chrome, it is not a good idea to set proportional font size because often one wants to see the entire (not just proportional fonts) site to be zoomed. So I am asking: Is it possible to increase DPI in Ubuntu (much like on Windows) so as to increase the text size across all apps? OR Is it possible to set permanent 'zoom' in Google Chrome, using a third-party extension maybe?
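On Ubuntu of that era two knobs cover the "DPI like Windows" case; a sketch, where 120 is an arbitrary example value (the stock setting is 96) and the GNOME 2 gconf key is shown commented:

```shell
# Raise the X font DPI for all Xft-aware apps (stock value is 96):
echo "Xft.dpi: 120" >> "$HOME/.Xresources"
xrdb -merge "$HOME/.Xresources" 2>/dev/null || true   # no-op outside an X session
# GNOME 2 equivalent:
# gconftool-2 --set /desktop/gnome/font_rendering/dpi --type float 120
```

Whether Chrome picks up Xft.dpi varies by version, so a third-party zoom extension may still be needed for it specifically.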
View 1 Replies
View Related
May 20, 2010
Well, when I copy a large amount of data, applications other than Nautilus freeze until the copy is done...
So, what can I do? This is really annoying when backing up data =/
View 6 Replies
View Related
Jun 16, 2011
I want to copy about 40GB to a partition. There are two hard drives in my box; one won't boot, but I can access it and mount its partitions, and I aim to move data from it to a new bootable hard drive. A simple cp command may not be the best way to copy and paste such a large chunk. I also want to back up the data I plan to copy using a USB hard drive, so another option is to copy from the backup to the new drive instead of from the old internal HD to the new one.
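For a copy that size, rsync is a safer bet than bare cp: it preserves ownership, times and hard links, can be resumed if interrupted, and can re-verify by checksum. A small demo on stand-in directories; the real invocation would be something like `rsync -aH /mnt/olddisk/data/ /mnt/newdisk/data/`:

```shell
if command -v rsync >/dev/null 2>&1; then
    mkdir -p /tmp/rs_src /tmp/rs_dst
    echo "forty gigabytes, in spirit" > /tmp/rs_src/data.txt
    rsync -aH /tmp/rs_src/ /tmp/rs_dst/     # -a owners/perms/times, -H hard links
    rsync -aHc /tmp/rs_src/ /tmp/rs_dst/    # second pass: -c re-checks by checksum
    cmp /tmp/rs_src/data.txt /tmp/rs_dst/data.txt && echo "copy verified"
    rm -rf /tmp/rs_src /tmp/rs_dst
fi
```

The trailing slash on the source matters: it copies the directory's contents rather than the directory itself.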
View 1 Replies
View Related
Dec 28, 2009
I am using OS 11.0. Every time I boot my laptop (Dell Inspiron 9300, ATI M300 video), the desktop comes up at 1920x1200, which is too large for my default. I use KRandRTray to resize back to 1024x768. How can I set 1024x768 as the default but still have the option to go to 1920x1200?
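The usual fix at the time was an xorg.conf Display subsection: X treats the first listed mode as the boot default, and KRandRTray can still switch to the others. Shown as a sketch; the Identifier and exact section contents vary by driver and existing config:

```shell
# /etc/X11/xorg.conf fragment (sketch):
#   Section "Screen"
#       Identifier "Default Screen"
#       SubSection "Display"
#           Modes "1024x768" "1920x1200"   # first listed mode = boot default
#       EndSubSection
#   EndSection
```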
View 3 Replies
View Related
Jun 24, 2011
I am running CentOS 5.5 with a 14T ext4 volume. We are sharing out a few sub-directories via NFS. Our customer was doing some penetration testing with our web app that writes to one of those NFS shares. We are not sure if they did something to cause the metadata to grow so large or if it is corrupt. Here is the listing:
drwxrwxr-x 1 owner owner 470M Jun 24 18:15 temp.bad
I guess the metadata could actually be that large; however, we have been unable to perform any operations on that directory to determine whether it is just loaded with files or corrupted. We have not run an fsck on the volume because we would need to schedule downtime for our customers to do so. Has anyone come across this before?
View 2 Replies
View Related
Jul 20, 2010
This problem is not exclusive to Ubuntu, I've experienced it in Windows and OSX as well, but it seems that almost every time I transfer a large number of files (i.e. my music collection) between my desktop computer and laptop via my external hard drive, I end up losing files for no reason. I usually don't notice the files are missing until later on, because I am never informed of any data loss. Now, every time I make a large transfer of files, I just do it two or three times to ensure that I don't lose any files.
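Instead of transferring everything two or three times, it is cheaper to checksum the source once and verify the destination; any silently dropped or corrupted file then shows up as missing or FAILED. Demo with stand-in paths:

```shell
mkdir -p /tmp/xfer_src /tmp/xfer_dst
echo "track 01" > /tmp/xfer_src/a.mp3
( cd /tmp/xfer_src && find . -type f -exec md5sum {} + ) > /tmp/manifest.md5
cp /tmp/xfer_src/a.mp3 /tmp/xfer_dst/          # the transfer being checked
( cd /tmp/xfer_dst && md5sum -c /tmp/manifest.md5 )
rm -rf /tmp/xfer_src /tmp/xfer_dst /tmp/manifest.md5
```

`md5sum -c` exits non-zero if anything is missing or altered, so it can gate a retry in a script.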
View 2 Replies
View Related
May 28, 2011
I installed Fedora 15/Gnome 3 because I liked the Universal Access Settings widget for controlling the appearance of my living room computer attached to my TV. It should (when it becomes more stable) make it easy to zoom in on the screen when I'm on the couch. There is also a Large Text setting that allows me to toggle between normal text size and perhaps 125% text size.
I'd like to set that value to about 200%, but don't see how to do it. dconf-editor didn't seem to have a way. gnome-tweak-tool has a way to make all fonts bigger or smaller but I want to easily switch between normal text size when I'm sitting close and large text from the UAS. Screwing around with gnome-tweak-tool would require me to be up-close. It looks like UAS is controlled by /usr/share/gnome-control-center/ui/uap.ui, but it is a wickedly complex xml file & I don't know what to edit. Is there a per user way to change the behavior?
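In GNOME 3 the Large Text switch just flips the `text-scaling-factor` key in `org.gnome.desktop.interface` between 1.0 and roughly 1.25, and the key accepts any factor, per user. A guarded sketch (gsettings may not exist on the machine running this, and the exact default factor is from memory):

```shell
if command -v gsettings >/dev/null 2>&1; then
    gsettings set org.gnome.desktop.interface text-scaling-factor 2.0 || true
    gsettings get org.gnome.desktop.interface text-scaling-factor || true
fi
```

Since it is a per-user dconf setting, it could be toggled from a couch-side keybinding or small script rather than editing uap.ui.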
View 1 Replies
View Related
Nov 26, 2010
I am trying to convert a .mdf file to .iso with mdf2iso and this message pops up: "Value too large for defined data type". The file is about 4.6 GB. Any other information
View 2 Replies
View Related
Apr 13, 2009
I downloaded a big file from another server via wget to my /backup/ directory, and after the download completed I ran "ls" and got this error: ls: full_plesk_backup: Value too large for defined data type.
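"Value too large for defined data type" is the kernel's EOVERFLOW: a 32-bit stat() meeting a file over 2GB in a binary built without large-file support (the mdf2iso case two posts up is the same failure). On an LFS-aware system the same file is unremarkable, which this sketch shows with a sparse file:

```shell
cd /tmp
truncate -s 3G lfs_demo      # sparse: no real 3GB is written to disk
stat -c %s lfs_demo          # prints 3221225472 when stat is LFS-aware
getconf LFS_CFLAGS || true   # flags 32-bit builds need (-D_FILE_OFFSET_BITS=64)
rm -f lfs_demo
```

The fix is a newer coreutils, or rebuilding the failing tool with the flags `getconf LFS_CFLAGS` reports (empty output on a 64-bit system means none are needed).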
View 13 Replies
View Related
Sep 1, 2009
Are there any tools out there that let me select a bunch of data and burn it to multiple CDs or DVDs? I'm using k3b but have to manually select CD- and DVD-sized amounts.
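`dirsplit` (shipped with genisoimage on Debian-family systems) does this packing automatically; the underlying idea is plain first-fit, sketched here on tiny stand-in files so the numbers are easy to follow (a real DVD budget would be roughly 4400 MB to leave headroom):

```shell
mkdir -p /tmp/span_demo && cd /tmp/span_demo
for i in 1 2 3 4 5 6; do dd if=/dev/zero of="file$i" bs=1M count=2 2>/dev/null; done
limit=$((5 * 1024 * 1024))        # per-disc budget in bytes (tiny, for the demo)
disc=1; used=0
for f in file*; do
    size=$(stat -c %s "$f")
    if [ $((used + size)) -gt "$limit" ]; then disc=$((disc + 1)); used=0; fi
    used=$((used + size))
    echo "$f -> disc $disc"
done
cd / && rm -rf /tmp/span_demo
```

Each "disc" list could then be fed to k3b (or genisoimage) as a separate project.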
View 1 Replies
View Related
Feb 15, 2011
We've been trying to become a bit more serious about backup. It seems the better way to do MySQL backup is to use the binlog. However, that binlog is huge! We seem to produce something like 10GB per month. I'd like to copy the backup somewhere off the server, as I don't feel there is much to be gained by just copying it somewhere else on the same server. I recently made a full backup which after compression amounted to 2.5GB and took me 6.5 hours to copy to my own computer... so that solution doesn't seem practical for the binlog backup. Should we rent another server somewhere? Is it possible to find a server like that really cheap? Or is there some other solution? What are other people's MySQL backup practices?
View 8 Replies
View Related
Aug 10, 2010
I need to clone a laptop drive to a desktop drive. The laptop drive disk is 150 gb, however, only about 8 gb is used. Is it possible to clone this disk to a smaller drive?
View 3 Replies
View Related
Aug 24, 2009
There is a 500GB disk with /boot and / on /dev/sda1 and /dev/sda2. Is it possible to repartition the disk without loss of data, namely to keep /boot and split the remainder into two volumes of equal size?
View 3 Replies
View Related
Oct 15, 2010
This has happened several times now, with 9.10 and 10.04. I back up my photos periodically to external drives using Nautilus. At the next attempted login, GNOME won't start and sometimes gives a "power manager incorrect installation" error.
First time this happened I was stumped and eventually did a clean install. Second time, I found advice elsewhere in this forum to solve this by emptying root's trash, which did the trick. This time, however, root's trash has nothing in it and the two users' trash was insignificant (I emptied it all anyway with rm -r). Tried looking for enormous directories but couldn't find a smoking gun. I would rather not end up doing another clean install - a painful and extreme solution. I'm continuing to look for solutions to the immediate problem, but my question really is: what causes this and how do I prevent it in the future? I've run Computer Janitor regularly and ran apt-get clean, but no help. Should I do all my large-scale copying from the terminal? I'm not a total noob, but close.
View 9 Replies
View Related
Jan 27, 2010
1. An external hard disk with a VFAT32 filesystem has a contiguous 23GB file (an old HD disk image). It is too large to 'move to wastebasket', and unlike MS Windows, the wastebasket does not detect the oversized file and simply delete it outright.
How do I remove a large file in SUSE 11.2?
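The wastebasket works by moving the file into a .Trash directory, which is what trips over a 23GB file; plain rm from a terminal frees the blocks directly and never touches the trash. Demo on a small stand-in (the real path would be wherever the image sits on the external disk):

```shell
touch /tmp/old_hd_image.img    # stand-in for the 23GB disk image
rm /tmp/old_hd_image.img       # bypasses the wastebasket entirely
[ ! -e /tmp/old_hd_image.img ] && echo "file is gone, space is freed"
```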
View 9 Replies
View Related
Jan 5, 2016
What is the recommended method these days for command line partitioning and formatting for the Terabyte size hard disk.?
It was easy to keep up when I was working or had access to hardware for re-purposing, but that has all dried up and my knowledge has been left behind. The problem(s) are with new, recent hardware.
Following a crash from a now-detected faulty stick of RAM, I've lost one of my data hard disks, and my fiddling with the replacement leaves various errors/warnings, mainly about GPT not being supported; this message is still present despite trying fdisk, cfdisk, gpart, gparted, and(?).
System is an ASUS mobo using SATA drives (root 500Gb: MBR+3 partitions;/, swap, /home), and two 2.4TB with single partitions.
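Classic fdisk of that vintage only speaks MBR, hence the GPT warnings; parted handles GPT and works the same on terabyte disks. Sketched here against an image file so no real disk is at risk; /tmp/disk.img would be replaced by e.g. /dev/sdb after double-checking the device name:

```shell
truncate -s 100M /tmp/disk.img                 # stand-in "disk"
if command -v parted >/dev/null 2>&1; then
    parted -s /tmp/disk.img mklabel gpt
    parted -s /tmp/disk.img mkpart primary ext4 1MiB 99MiB
    parted -s /tmp/disk.img print              # shows "Partition Table: gpt"
    # on a real device, follow with: mkfs.ext4 /dev/sdb1
fi
rm -f /tmp/disk.img
```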
View 5 Replies
View Related
Sep 23, 2015
So, my issues since upgrading to Jessie seem to compound. When I fix one issue, two more arise. Right now I have a full system disk, and I don't know how it got so full. So I started poking around. I ran
Code: Select all
find / -type f -size +50M -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'
Found a few files I could delete, and did, but I also found
Code: Select all
/var/log/syslog.1: 33G
/var/log/messages: 33G
/var/log/user.log: 33G
What I find strange is that they're all exactly 33G each, so that accounts for the missing 99GB. I deleted them, but only recovered 27GB. What's weird is that when I type df -h I get
Code: Select all
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 106G 74G 27G 74% /
udev 10M 0 10M 0% /dev
tmpfs 3.2G 9.7M 3.2G 1% /run
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda1 228M 27M 189M 13% /boot
/dev/sdb1 1.9T 62G 1.8T 4% /media/ntfs
tmpfs 1.6G 0 1.6G 0% /run/user/0
What are the tmpfs entries and how can I reclaim that space? And what is /dev/dm-0, and why is it taking up so much space?
I have 2 LVGs vgdisplay -v
Code: Select all
root@SETV-007-WOWZA:~# vgdisplay -v
DEGRADED MODE. Incomplete RAID LVs will be processed.
Finding all volume groups
Finding volume group "WOWZASERVER"
[Code] ....
After deleting the log files, I was able to regain access to my GDM session. But I still can't find out what /dev/dm-0 is, and where all the 75GB is being taken up.
I just noticed, however, that even though I can access the drive OK via browser, terminal, and web services (our Wowza), when I enter GParted I get this error for sda, my primary OS drive!
Code: Select all
Libparted Bug Found!
Error informing the kernel about modifications to partition /dev/sda2 -- Invalid argument. This means Linux won't know about any changes you made to /dev/sda2 until you reboot -- so you shouldn't mount it or use it in any way before rebooting
Now that I'm in gParted I see 3 partitions: [URL] ....
It reports now, that I have used ALL of my disk space.
Post log delete, and a fresh reboot, this is what df -h outputs:
Code: Select all
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 106G 8.7G 92G 9% /
udev 10M 0 10M 0% /dev
tmpfs 3.2G 9.8M 3.2G 1% /run
tmpfs 7.9G 80K 7.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
[Code] ....
What the heck is going on?
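A few of the puzzles above have standard answers: /dev/dm-0 is just the device-mapper node behind the LVM root volume (the WOWZASERVER VG), so its numbers are the root filesystem itself; and the tmpfs mounts (/run, /dev/shm) live in RAM, not on disk, so there is nothing there to reclaim. As for "deleted 99GB but recovered 27GB": removing a file a daemon still holds open frees nothing until the process closes it, which is why truncating in place is safer for live logs. Sketch on a stand-in log:

```shell
echo "many gigabytes of log lines" > /tmp/demo.log   # stand-in for /var/log/syslog.1
: > /tmp/demo.log            # truncate in place; safe even while a daemon holds it open
stat -c %s /tmp/demo.log     # prints 0
# To hunt remaining space hogs one filesystem at a time:
#   du -xm --max-depth=1 / | sort -n | tail
# And to list deleted-but-still-open files that are pinning space:
#   lsof +L1
rm -f /tmp/demo.log
```

Longer term, whatever is spamming syslog at that rate needs fixing, plus a logrotate size cap so it cannot fill the disk again.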
View 0 Replies
View Related
Jan 21, 2011
I'm new to setting up Linux Servers. I've setup a Ubuntu 10.10 Server along with CUPS and I'm using Webmin to talk to the server. I have a HP PSC 1315 Multifunction printer connected via usb to the server. Using the CUPS web interface I am able to get the server to detect the connected printer and it identified the HP PSC 1310 Series drivers.
When I print a test page from the server's screen, the print job goes through OK and the size is about 5k.
I then set up a Samba share to allow my Windows 7 machine to share the printer. Windows 7 is able to pick up the shared printer correctly and I used the default HP 1310 Series drivers. When I tried to send a test page to the printer, that single page ended up being 3887KB, and I also tried printing a single-page Word document which ended up being over 7MB.
View 4 Replies
View Related
Nov 13, 2010
I've been tinkering for over a week to try to get a functioning triple boot + shared data working, and I've hit a roadblock. I'm using rEFIt with Snow Leopard/W7/10.10. I can't get into Ubuntu.
*** Report for internal hard disk ***
Current GPT partition table:
# Start LBA End LBA Type
1 40 409639 EFI System (FAT)
2 409640 390772495 Mac OS X HFS+
3 391034640 781659639 Basic Data
[Code].....
OS X and Windows are fine, but whenever I select Linux, it boots directly into Windows. I'm hoping I don't need to install GRUB2 to the Windows partition, as that would defeat the purpose of rEFIt (would it not?)
View 4 Replies
View Related
Jun 22, 2010
I'm migrating data (music and videos mainly) from a NAS server that was being used as my iTunes Media Folder onto a fresh install of Ubuntu 10.04, and I'd like to get rid of all the .DS_Store files as well as other files that OS X 10.4 has been creating, such as duplicates of mp3 files prefixed with "._".
The reason I'm keen to remove all of these files is that I stumbled across a corrupted .DS_Store file that caused me all sorts of headaches, and I don't want them causing any problems in Linux.
I've used the search function in Nautilus to search for ._ but it returns no results, even when I'm searching in a directory that I know for certain has those files in it. I have 'View hidden files' selected.
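Nautilus's search is unreliable with leading-dot patterns; find matches them directly. Demo in a scratch directory, with a dry-run print before the delete so nothing unexpected goes:

```shell
mkdir -p /tmp/media_demo && cd /tmp/media_demo
touch song.mp3 ._song.mp3 .DS_Store                    # stand-in files
find . \( -name '._*' -o -name '.DS_Store' \) -print   # dry run: review the list first
find . \( -name '._*' -o -name '.DS_Store' \) -delete
ls -A                                                  # only song.mp3 remains
cd / && rm -rf /tmp/media_demo
```

On the real tree the same two find commands would be run from the top of the media folder.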
View 2 Replies
View Related
Feb 21, 2011
I have a PPC G4 with two internal hard drives. I just (well, a few hours ago) installed Ubuntu 8.04 LTS (I think) on one of them. I find it wonderful. I never thought I would like another OS besides Mac OS.
Anyway, I still have Mac OS 10.3.9 on the other (and bigger) hard drive. I would like to restart with Mac OS from that drive (something like choosing the startup disk in Mac OS System Preferences) and I don't know how!
View 2 Replies
View Related
Jun 15, 2010
I'm currently setting up a server with 2x 1TB disks in RAID1 (CentOS 5.5). In the future, if the storage is insufficient and I decide to upgrade the disks to 2x 2TB, could I just:
- dump ghost image of the array on usb drive
- replace hdd's and build new array
- ghost array with image created previously
Would the above work, will the new partition automatically resize to 2TB or do I need to partition right now with LVM? would it work with LVM?
View 1 Replies
View Related