I have been having a recurring problem backing up my filesystem with tar, using bzip2 compression. Once the file reached a size of 4 GB, an error message appeared saying that the file was too large (I closed the terminal, so I do not have the exact message; is there a way to retrieve it?). I was under the impression that bzip2 can support pretty much any size of file. It's rather strange: I have backed up files of about 4.5 GB before without trouble.
At the same time, I have had this problem before, and it's definitely not a space problem: I am backing up onto a 100 GB external hard drive.
That reminds me, in fact (I hadn't thought of this), that one time I tried to move an archived backup of about 4.5 GB to an external drive (it may have been the same one) and it said that the file was too large. Could it be that there is a maximum size of file I can transfer to the external drive in one go? Before I forget: I have Ubuntu Karmic, my bzip2 version is 1.0.5, and tar is 1.22 (though maybe this is superfluous information?).
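For reference, the backup command I am running is roughly the following (the source and destination paths here are just examples, not my exact ones):
Code:
$ sudo tar cjvf /media/external/backup.tar.bz2 /home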
I am going crazy with a gzip file. I can decompress the file in Windows using WinRAR, but it is impossible on any UNIX operating system. The file seems to be OK. If I do file the_name_of_the_file.gz
I get: the_name_of_the_file.gz: gzip compressed data, from Unix, last modified: Sun Jan 30 14:10:21 2011
But if I do gunzip -f the_name_of_the_file.gz I always get: gzip: the_name_of_the_file.gz: unexpected end of file. The same problem happens when I try to extract the file using the GUI tools in Ubuntu or Mac OS X.
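If it helps anyone diagnose this, gzip's test mode should show whether the stream really is truncated (same placeholder filename as above):
Code:
$ gzip -t the_name_of_the_file.gz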
I am attempting to be careful in case my system crashes, and although that is highly unlikely, my first question is whether there is a way to first compress my Linux partitions. After running the diskutil command in OS X's Terminal, I basically end up with this partition scheme:
Quote:
Macintosh HD = 130 GB
disk0s3 = 1 MB
disk0s4 = 30 GB
Linux Swap = 1.3 GB
I am sure there is a way in the Terminal to first compress disk0s3, disk0s4, and Linux Swap, and then output the compressed partitions onto my external hard drive. I have already read suggestions that only /home, /etc/fstab, a list of installed packages, /opt, and /var/cache/apt/archives/ (where all downloaded packages are stored) are what I should back up, but please correct me if I'm wrong. Wouldn't it take quite a while to install all those packages again in case of a system failure? Or would it just be easier to untar all of them back into their directories once Linux has been reinstalled? The closest command I have found so far for achieving this is:
Code:
sudo tar cvf - files | (cd target_directory ; tar xpf -)
The above command is very suitable for what I am looking for because it lets you copy files to another location using tar; in my case the new location would be my external hard drive. My external hard drive already has its own Linux partition, which I am able to mount in Linux and which Linux sees as free space.
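Adapted to my situation, I imagine it would look something like this (the directory list comes from the suggestions above, and the mount point is just an example):
Code:
sudo tar cvf - /home /etc/fstab /opt /var/cache/apt/archives | (cd /media/external-backup ; tar xpf -)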
I'm trying to figure out how to access compressed files without uncompressing them beforehand, and also without modifying the application/script I am using. Named pipes do the trick, but only seem to work once
In one terminal I do this:
Code:
$ echo "This is a file I'd like to be able to read." >> my_file
$ gzip my_file
$ mkfifo my_named_pipe
$ ls
my_file.gz  my_named_pipe
$ gunzip -c my_file.gz >> my_named_pipe
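Then in a second terminal I read from the pipe, which works fine the first time:
Code:
$ cat my_named_pipe
This is a file I'd like to be able to read.
It is when I try to repeat this that things stop working.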
When trying to create a new compressed/archive file in Gnome Commander (GC), the file is created but the selected files are not added. I can open the new (empty) archive file and then add files to be compressed. I have tried using several different formats (zip, tar.bz2 and others) with the same results. The "file roller" is shown as a plugin but has no configuration other than the compressed file type.
I want to create a compressed ISO image file, mount that file on one of the virtual drives, and access the content (read-only) without worrying about manual decompression/extraction. This is for both Windows and Linux (Ubuntu) OSes.
On a Linux CD/DVD there are compressed filesystem images for the live version (KDE or GNOME, for example). They have no file extension, but they are clearly image files: the compressed filesystem images used by the live session before installation.
I was wondering how I mount these compressed filesystem images after I copy the ISO contents of the CD/DVD onto my system. I want to edit some files or packages and make some changes, for example to customize a live GNOME version. (I know you might be tempted to tell me to use KIWI etc. to do the customizing.) But I want to be able to mount the compressed filesystem image, then edit it for reading and writing while it sits in a subdirectory of its own. I want to open it up. Is there a way to do this? These files have no extension.
If I can open this compressed filesystem image and edit it read/write before rolling it back up again, and if and when I succeed, what should I watch out for? Will the same compressed file image, slightly modified, work again?
PS: the same question could be translated or extended as: how do I use the unionfs/squashfs tools on the command line to mount these extensionless image files in read/write mode?
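From what I have pieced together so far, the usual approach seems to be something along these lines, though I have not confirmed it; the paths and image name here are guesses on my part (on many live CDs the image is a squashfs file under a directory like casper/ or LiveOS/):
Code:
# mount the image read-only just to look inside it
$ sudo mkdir -p /mnt/squash
$ sudo mount -o loop -t squashfs casper/filesystem.squashfs /mnt/squash

# or unpack it into a normal directory so it can be edited read/write
$ sudo unsquashfs -d edited-root casper/filesystem.squashfs
# ... edit files under edited-root/ ...

# repack the edited tree into a new compressed image
$ sudo mksquashfs edited-root filesystem.squashfs -noappend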
I am using RHEL 5. I have a very large text file which cannot be opened in vi. The file has some 8000 lines, and I need to view the lines from 5680 to 5690. How can I view these particular lines in a large file? What command and options do I need to use?
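I suspect it is something along these lines with sed, or a head/tail pair, but I am not sure of the exact syntax (bigfile.txt is just a placeholder name):
Code:
# print only lines 5680 through 5690
$ sed -n '5680,5690p' bigfile.txt

# equivalent using head and tail (5690 - 5680 + 1 = 11 lines)
$ head -n 5690 bigfile.txt | tail -n 11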
I've got a large .tar.gz file that I am trying to extract. I have had a look around at the problem and it seems other people have had it too, but I've tried their solutions and they haven't worked. The command I am using is:
I am trying to copy a file of about 4.5 GB from a network resource on my Local Area Network. I copy this file with the GNOME copying utilities by first going to Places --> Network and then selecting the Windows share on another computer on my network. I open it and start copying the file to my FAT32 drive with Ctrl+C and Ctrl+V. It copies well up to 4 GB and then it hangs.
After trying it almost half a dozen times I got really annoyed, left it hung, and went to bed. The next morning when I checked, a message box saying "file too large to write" had appeared.
I am very annoyed. I desperately need that file. It's an ISO image and it is not damaged at all; it copies fine to any Windows system. Also, I have sufficient space on the drive to which I am trying to copy it.
I am attempting to burn the ISO for Lucid Lynx final onto a 700 MB CD. The ISO file is 699 MB, but Windows reports that the size on disk is 733 MB, and so CD Burner XP refuses to burn the file, stating that it is too large for a CD.
Why this discrepancy in file sizes? I've noticed this with other files as well; suddenly it's a bit of a problem, as you can see!
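One thing I notice, though I am not sure it is the explanation, is that 699 MiB (mebibytes) works out to roughly 733 MB (decimal megabytes), so the two figures may just be the same size reported in different units:
Code:
$ echo "699 * 1048576 / 1000000" | bc -l    # about 733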
I recently built a home media server and decided on Ubuntu 10.04. Everything is running well except when I try to transfer my media collection from other PCs where it's backed up to the new machine. Here's my build and various situations:
Intel D945GSEJT w/ Atom N270 CPU
2 GB DDR2 SO-DIMM (this board uses a laptop chipset)
External 60 W AC adapter in lieu of internal PSU
133x CompactFlash -> IDE adapter for OS installation
2x Samsung EcoGreen 5400 rpm 1.5 TB HDDs formatted as ext4
Situation 1: Transferring 200+GB of files from an old P4-based system over gigabit LAN. Files transferred at 20MBps (megabytes, so there's no confusion). Took all night but the files got there with no problem. I thought the speed was a little slow, but didn't know what to expect from this new, low-power machine.
Situation 2: Transferring ~500GB of videos from a modern gaming rig (i7, 6GB of RAM, running Windows7, etc etc). These files transfer at 70MBps. I was quite impressed with the speed, but after about 30-45 minutes I came back to find that Ubuntu had hung completely.
I try again. Same thing. Ubuntu hangs after a few minutes of transferring at this speed. It seems completely random. I've taken to transferring a few folders at a time (10 GB or so), and so far it has hung once and been fine the other three times. Now, I have my network MTU set from automatic to 9000. Could this cause Ubuntu to hang like this? When I say hang I mean it freezes completely, requiring a reboot: the cursor stops blinking in a text field, the mouse is no longer responsive, and so on.
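For what it's worth, this is how I would double-check the MTU actually in effect and temporarily drop it back to 1500 to see whether the hangs stop (eth0 is just an example interface name):
Code:
$ ip link show eth0
$ sudo ip link set eth0 mtu 1500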
I'm trying to create an Ubuntu Server file server that will handle large file transfers (up to 50 GB) from the LAN with Windows clients. We've been using a Windows server on our LAN, but the file transfers will occasionally fail; that server is used for other services as well, though.
The files will be up to 50 GB. My thought is to create a VLAN (or use a separate physical switch) to ensure maximum bandwidth. The Ubuntu server will be 64-bit with 4 TB of storage in a RAID 5 configuration.
I am trying to copy a 7.3 GB .iso file to an 8 GB USB stick and I get the following error when it hits 4.0 GB:
Quote: Error while copying "xxxxxx.iso". There was an error copying the file into /media/6262-FDBB. Error splicing file: File too large
The file is to be used by a Windows user, and I'm just trying to do a simple copy, not a burn to USB or anything fancy. Using 10.04.1 LTS, AMD dual core, all latest patches.
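In case it matters, this is how I can check what filesystem the stick is actually using (the mount point is taken from the error above); if it turns out to be FAT32, that filesystem caps individual files at 4 GB:
Code:
$ df -T /media/6262-FDBB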
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1 GB the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFSv3 and NFSv4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share (see the example mount command after this list).
5. Copying the files via a different protocol, SSH in this case. The file transfers are always successful when I use SSH.
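The mounts I experimented with were along these lines (the server path, mount point, and block sizes here are only illustrative; I tried several combinations):
Code:
$ sudo mount -t nfs -o rsize=32768,wsize=32768 server:/export/media /mnt/media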
Regardless of what I do, the result is the same. The file transfers always fail after approximately 1 GB.
Some other notes.
1. Both the client and the server are running the same Fedora 11 x86_64 kernel.
I am out of ideas. Has anyone else experienced something similar?
I have some large image files that are 30 GB and more. I am running Ubuntu 9.10, and whenever I try to copy one of these files to another drive I get an error saying the file is too large. Copying from an external hard drive or from a slave drive does the same thing. I have a friend who has expressed the same issue. This must be a widespread bug.
Is there a file system that both Mac OS X 10.5 and Linux can read and write for large files (like 4 GB files)? My desktop is Ubuntu and I run most things from there, but I want to back up my MacBook and Linux box onto the same external hard drive. It seems there are some (paid) apps for Mac that will mount NTFS, but I'm wondering if there is just a shared file system that will work for both.
CanoScan LiDE 210 running under 10.10 on a Toshiba Tecra M11-130 laptop. I am currently trying out XSane to archive some paperwork in monochrome, as the bundled Simple Scan utility can only save in either colour or greyscale. The problem is that the same A4 page saved as monochrome has a file size about three times larger in Ubuntu than in Windows.
The scan mode options are 'Colour', 'Greyscale' or 'Lineart'. There is no 'halftone' setting available, as shown in some of the XSane manuals; I don't know whether that is significant to this issue. XSane's main options window shows 3508 x 2480 x 1 bit for a 300 dpi A4 monochrome scan when 'Lineart' is selected, but the intermediate file size is 8.3 MB instead of just over 1 MB before packing into the PDF. This is consistent with each pixel being recorded not as a single 1 or 0 but as a greyscale 11111111 or 00000000, i.e. monochrome data stored in an eight-bit field. How do I tweak XSane to get true monochrome intermediate .pnm files and saved PDFs?
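The arithmetic seems to back this up; a quick check with bc using the dimensions above:
Code:
$ echo "3508 * 2480 / 8" | bc    # 1087480 bytes, about 1 MB at 1 bit per pixel
$ echo "3508 * 2480" | bc        # 8699840 bytes, about 8.3 MB at 8 bits per pixel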
When I try to transfer a large file, let's say 700 MB or so, my wireless shuts down and I have to restart my Ubuntu computer. The other computer is Vista. Ubuntu is on a WUSB54G ver.4 and Vista is going through a WRT54G-TM running DD-WRT mega.