Ubuntu :: How To Compress Huge File With JPEGs
Feb 23, 2011

I have a huge folder of .jpeg files. How can I compress these files to save space? Is there any software for this?
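JPEGs are already compressed internally, so zipping them rarely saves much; the practical option is lossy recompression at a lower quality. A minimal sketch, assuming the jpegoptim package from the Ubuntu repositories (the quality value is only an example):

Code:
sudo apt-get install jpegoptim
jpegoptim --max=80 *.jpg   # recompress in place any image stored above quality 80 (lossy)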
I have a file with a size of 6GB. I would like to compress this file and split it into smaller files. I was thinking of using bzip2 to compress it, because it offers a good compression ratio. How can I split this file into smaller ones to compress it?
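One common pattern, as a sketch (assuming GNU split; the filenames and the 650MB chunk size are placeholders), is to compress first and then split the result into fixed-size chunks:

Code:
bzip2 -9 bigfile                              # produces bigfile.bz2, removes the original
split -b 650M bigfile.bz2 bigfile.bz2.part_
# to restore: cat bigfile.bz2.part_* | bunzip2 > bigfile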
I am trying to compress a folder by right-clicking and selecting Compress... I get the following error:
An error occurred while adding files to the archive.
No such file or directory
I want the folder and its contents to be compressed to a .ZIP file which is natively accessible by Windows.
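If the graphical archiver keeps failing, the same archive can usually be produced from a terminal; a sketch assuming the zip package is installed (the folder name is a placeholder):

Code:
zip -r myfolder.zip myfolder/   # recursive; the result opens natively in Windows Explorer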
One of the most common questions I see in forums is: how to convert multiple .jpg files into one .pdf file with one click. I have two solutions that I consider to handle this situation best.
*1.* Install SCRIBUS from the Synaptic package manager. It can convert JPEGs to PDF without any issues. F-Spot Photo Manager can also be used.
*2.* This second option I find much better. I have been using it myself for some time now, and it's flawless.
A) Install WINE using the terminal. For beginners: Wine is software that can run selected Windows applications on a Linux distro like Ubuntu. Installing is very easy: open Synaptic and type "wine" in the quick search box; once the Wine packages show up in the search, mark them for installation. For an easy manual on Wine installation, check here: [URL]
B) Now go to the page below. It's a free program.
[URL]
Download and save the program from the link given. Open the file using Wine (right-click on the file and choose "Open with Wine"). The installer will run and the program will be installed. And voilà, it's ready: import your .jpg files and merge them all into one PDF file. Very useful for merging comics together.
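For completeness, a purely command-line alternative that avoids Wine (a sketch assuming the imagemagick package; filenames are placeholders):

Code:
sudo apt-get install imagemagick
convert page01.jpg page02.jpg page03.jpg comic.pdf   # merges the images, in order, into one PDF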
I have a compressed text file; the compression method is unknown. I can view the file's contents with Midnight Commander without a problem, but I would like to view the file with plain cat, so I am trying to uncompress it with unzip or gunzip, and neither works. How do I check which method the file was compressed with? Is there any way to find out with Midnight Commander?
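The usual tool for this is the file command, which identifies formats from magic bytes rather than the filename; a minimal sketch (the filename is a placeholder):

Code:
file mystery.txt
# e.g. "mystery.txt: bzip2 compressed data, block size = 900k"
# then decompress with the matching tool: gunzip, bunzip2, unxz, uncompress, ...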
How do I compress, or reduce the resolution of, a TIFF image file? I must use TIFF; I cannot convert to JPEG etc.
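ImageMagick can recompress or downscale while keeping TIFF as the format; a sketch assuming the imagemagick package (LZW is lossless, the resize obviously is not):

Code:
convert input.tiff -compress LZW output.tiff              # lossless recompression
convert input.tiff -resize 50% -compress LZW small.tiff   # also halve the resolution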
I have 6680 WAV files of about 500KB each in a folder and I want to merge all of them; the files altogether are 1.5GB. How can I merge them and compress the result into an MP3 or Ogg file?
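One approach, sketched under the assumption that all the WAVs share the same sample rate and channel count (packages: sox and vorbis-tools):

Code:
sox *.wav merged.wav                  # SoX concatenates multiple inputs in glob order
oggenc -q 5 merged.wav -o merged.ogg

With 6680 files the glob may exceed the shell's argument limit, in which case the file list has to be fed in batches.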
I'm looking for a way to compress a large file (~10GB) into several files that won't exceed 150MB each.
Any thoughts?
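A common pattern (a sketch assuming GNU split; the names and the 140MB chunk size are placeholders, chosen to stay under the 150MB cap) pipes the compressor straight into split, so no intermediate 10GB archive is ever written:

Code:
gzip -c bigfile | split -b 140M - bigfile.gz.part_
# reassemble and decompress:
cat bigfile.gz.part_* | gunzip -c > bigfile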
I am using classpath-0.98 and jamvm-1.5.4 on an ARM9 Cortex processor. After installing classpath and jamvm on the ARM9, I end up with a FAT file of around 30MB. Any tips on how to reduce the size of this FAT file as much as possible?
How do I write a script to compress any file of 10 MB in size to tar.gz?
Say the /var/log/tes directory has a file of 10 MB; I want it compressed to tar.gz.
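A minimal sketch (assuming GNU find and tar; only the /var/log/tes path comes from the question):

Code:
#!/bin/sh
# archive every file larger than 10 MB under /var/log/tes into its own tar.gz
find /var/log/tes -type f -size +10M | while read -r f; do
    tar czf "$f.tar.gz" -C "$(dirname "$f")" "$(basename "$f")"
done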
I download a lot of files that have been compressed into separate pieces by WinRAR, so that a 4GB file shows up as a dozen or so 93MB pieces. In Windows, extracting them with WinRAR was rather easy, but in Kubuntu whatever default program handles them throws a fit. In a perfect world I would get an open-source alternative from Synaptic; alternatively, anything I can download as an RPM I can eventually figure out. I want to avoid trying to get WinRAR working under Wine.
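The usual fix is to install the rar backend that Ark and file-roller call out to; a sketch (unrar lives in Ubuntu's multiverse repository, and the archive name is a placeholder):

Code:
sudo apt-get install unrar
unrar x archive.part1.rar   # point it at the first piece; it walks the whole set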
I seldom use Windows, but I need to transfer a huge music file from Linux to XP.
I have a UUID file that has grown to 8.4GB and I don't know what to do. It's sucking up all the free space in the partition. I guess I will have to delete the file, but I don't know how to do it properly. I suspect this is a backup of all the data I have been moving around between drives recently.
I am trying to take a list of patterns from one file, grep them against another file, and print out only the unique patterns. Unfortunately these files are so large that the job has yet to run to completion. Here's the command that I used:
Code:
grep -L -f file_one.txt file_two.txt > output.output

Here's some example data:
Code:
>FQ4HLCS01BMR4N
>FQ4HLCS01BZNV6
>FQ4HLCS01B40PB
>FQ4HLCS01BT43K
>FQ4HLCS01CB736
>FQ4HLCS01BU3UM
>FQ4HLCS01BBIFQ
How can I increase efficiency, or should I use another command?
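Note that grep -L is the wrong flag here: it prints the names of *files* containing no matches, not unmatched patterns. For exact-line comparisons at this scale, sorting both files and comparing them is far faster than grep's pattern engine; a sketch, assuming "unique" means lines of file_one.txt that do not appear in file_two.txt:

Code:
sort file_one.txt > one.sorted
sort file_two.txt > two.sorted
comm -23 one.sorted two.sorted > output.output   # lines present only in file_one.txt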
I hadn't used WinFF before yesterday. It looks simple, so I tried to convert a 600MB AVI file to DV and ended up with an 8GB conversion. For the device preset I used Raw DV for PAL Fullscreen, as I'm in Australia. Does WinFF always produce such large files?
I am trying to transfer a file from my live Linux machine to a remote Linux machine. It is a mail server, and a single .tar.gz file holds all the data, but during the transfer it stops working. How can I troubleshoot the matter, and is there a better way to transfer a huge 14GB file over the network/VPN/WAN? The link speed is 1Mbps; on retries it copies the rest of the file.
rsync -avz --stats bkup_1.tar.gz root@10.1.1.22:/var/opt/bkup
[root@sa1 logs_os_backup]# less remote.log
Wed Mar 10 09:12:01 AST 2010
building file list ... done
bkup_1.tar.gz
deflate on token returned 0 (87164 bytes left)
rsync error: error in rsync protocol data stream (code 12) at token.c(274)
building file list ... done
code....
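For a single 14GB file on a slow link, one common mitigation (a sketch; --partial and --append-verify exist in rsync 3.x) is to keep partial transfers and resume after each failure instead of restarting from zero:

Code:
rsync -avz --partial --append-verify --timeout=60 --stats \
      bkup_1.tar.gz root@10.1.1.22:/var/opt/bkup
# re-running the same command after a drop resumes from the partial file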
I want to extract a huge .tar.gz file, but when I do, the extraction stalls the server. The server is write-heavy, and extracting seems to choke the disk. Is there a nice way to extract without stopping the world? I've tried the nice and cpulimit commands but they don't seem to do the trick.
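nice and cpulimit only throttle CPU, but the bottleneck here is disk I/O. The matching tool is ionice, sketched below (the idle class requires the CFQ I/O scheduler):

Code:
ionice -c3 tar xzf huge.tar.gz        # class 3 (idle): touches the disk only when nobody else wants it
ionice -c2 -n7 tar xzf huge.tar.gz    # or best-effort class at the lowest priority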
I have a huge log file of around 3.5GB and would like to sample random sections from the middle, say 10MB worth, for the purpose of debugging what my application is doing.
I could use the head or tail commands to get the beginning or end of the file, but how can I grab an arbitrary portion from the middle? I guess I could do something like head -n 1.75GB | tail -n 10MB, but that seems clumsy, and I'd need to determine line numbers for the midpoint of the file to turn 1.75GB and 10MB into line counts.
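dd can seek straight to a byte offset without reading lines at all; a sketch (filenames are placeholders, and the first and last lines of the sample will be cut mid-line):

Code:
# 10 MB starting at the 1.75 GB mark: 1792 x 1 MiB = 1.75 GiB
dd if=app.log of=sample.log bs=1M skip=1792 count=10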
I am trying to read the /proc/net/tcp6 file of a huge server (chat server) for monitoring the tcp6 connection states.
This tcp6 file has more than 26,000 lines. To monitor the server connections, my monitoring tool has to read /proc/net/tcp6 quickly at a regular interval, but presently it takes at least 6-7 seconds to read the whole file.
My tool can read a normal file of the same size (26,000 lines) in less than 1 second, but it cannot do the same with the proc file.
I have 2 questions:
1) Why does a proc file take longer to read than a normal file?
2) Is there any way to read the /proc/net/tcp6 file more quickly?
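On question 2, one workaround worth sketching: the ss utility from iproute2 queries the kernel over netlink instead of having it format the whole /proc table as text, which is usually much faster on busy machines (assuming a reasonably recent iproute2):

Code:
ss -6 -t -a                  # all TCP-over-IPv6 sockets, via netlink
ss -6 -t state established   # or let the kernel filter by state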
I use Nautilus 2.32.0. In Ubuntu 9.10 (I don't remember which version of Nautilus I had), I could select JPEG files, right-click, and resize them; that option is not present now.
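That context-menu entry comes from a Nautilus extension rather than Nautilus itself, so it may simply be missing after the upgrade; a sketch (package name as in the Ubuntu repositories):

Code:
sudo apt-get install nautilus-image-converter
nautilus -q   # quit Nautilus so it reloads its extensions on the next start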
I needed to use an SD card for something unusual today, but wanted to preserve all the photos on it. So I unmounted it and ran...
$ sudo dd if=/dev/mmcblk0 of=/home/kip/Desktop/Camera.bin bs=1MB
...to make a backup of the complete card, partitions included. I used the card for other purposes, unmounted it again, and then wrote back the image with...
$ sudo dd if=/home/kip/Desktop/Camera.bin of=/dev/mmcblk0 bs=1MB
I unmounted, removed the card, and then re-inserted it to find most of the JPEGs corrupted. I can see some pixels, but they are all mangled.
I fear the block size (bs=1MB) may have been the culprit and should have been 512 (bytes) for a card containing a FAT16-formatted partition.
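For what it's worth, bs only sets the chunk size dd works in; for a full-device copy it does not change the bytes written, so bs=1MB by itself should not corrupt data. One way to check whether the write-back was faithful is to compare checksums (a sketch):

Code:
sudo md5sum /dev/mmcblk0
md5sum /home/kip/Desktop/Camera.bin   # the sums should match if the restore completed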
Generally on my system (Karmic x64), if I generate a PDF, any included image comes out distinctly blurry. I have googled and come across this [URL], which says that during PDF generation everything is recompressed, hence the blurriness. Is there a way around this? What do I tweak to ensure pictures are left alone (I already ensure they are as small as possible) when I convert anything to PDF?
I'm a new member of this forum.
I have a problem with my Nautilus, and maybe this is a silly-sounding question: how do I display the "Compress..." and "Extract Here" options when right-clicking a file in Nautilus? I'm using Ubuntu 11.04 and Nautilus Elementary 2.32.2. I installed file-roller, but the Compress and Extract Here options still don't show up.
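Those entries are provided by file-roller's Nautilus integration, and Nautilus typically has to be restarted before new menu items appear; a sketch:

Code:
nautilus -q   # quits all Nautilus instances; Compress/Extract Here should appear on relaunch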
My first and only grandson turned 1 year old, and we have taken about a million pics of him. I wanted to make a video from pictures taken over the past year. The first one is an ultrasound picture; the last one was taken this past Saturday. I have 153 12-megapixel pictures in a folder. They are named C001.jpg - C153.jpg (I can rename them if needed).
I had noticed that mencoder and ffmpeg can create movies from JPEGs, but I didn't think about the frames per second. Would I need to make something like 20 copies of each picture and then render at 10 FPS, or something like that? Optimally, I would like a video where each picture lasts for 8-10 seconds, and I would be able to add a song as audio. (I think I know how to use mencoder to add the audio.)
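There is no need to duplicate frames: ffmpeg's image-sequence input accepts a fractional input framerate, so each photo can simply be held on screen. A sketch (assumes an ffmpeg build with libx264; song.mp3, the scaling, and the 8-second hold are placeholders):

Code:
# one input frame every 8 seconds, normal 25 fps output, stop when the shorter stream ends
ffmpeg -framerate 1/8 -start_number 1 -i C%03d.jpg -i song.mp3 \
       -vf "scale=1280:-2" -c:v libx264 -r 25 -pix_fmt yuv420p -shortest slideshow.mp4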
Is there a way to take a whole directory of pictures of various dimensions and scale them all down to fit within, say, an 800x600 (or 600x800) boundary?
Better yet, is it possible to also ignore, and not size up, files that are already smaller, say 400x300?
I am command-line savvy, so if it can be done via some kind of script, I'm cool with that.
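ImageMagick's geometry syntax covers both requirements at once: a trailing '>' means "only shrink images larger than this, never enlarge". A sketch (assuming the imagemagick package; writes copies instead of overwriting):

Code:
mkdir -p resized
mogrify -path resized -resize '800x600>' *.jpg

Note that this fits every image inside an 800-wide by 600-tall box; treating portraits as 600x800 instead would need a small wrapper script that checks orientation first.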
In the Windows world I came from, the Irfanview freeware easily renamed a large folder of JPEG photos by their EXIF date and time stamps, appending a unique number if the timestamps were identical. Is there an equivalent rename-by-EXIF batch command in Ubuntu 10.04 Lucid? For example, change (based solely on EXIF information): FROM: DSC_0001.JPG TO: 20110224_09:34:56am.JPG
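The closest equivalents in the repositories are jhead and exiftool; jhead's -n option renames from the EXIF timestamp using a strftime pattern and, as far as I recall, appends a letter when two shots share a timestamp. A sketch (the pattern is an example, not an exact match for the format above):

Code:
sudo apt-get install jhead
jhead -n%Y%m%d_%H%M%S *.JPG   # e.g. DSC_0001.JPG -> 20110224_093456.JPG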
So I have a bunch of directories:
dir1
dir2
dir3
etc.
which themselves all contain subdirectories:
dir1/subdir1
dir1/subdir2
etc.

and at the lowest level they contain all of these JPEGs that I need. The problem is that I only need some of them. They're named like this:
pic1.jpg
pic1_med.jpg
pic1_small.jpg
pic2.jpg
pic2_med.jpg
etc.
I want to grab just the ones without the size suffix and copy them all to another set of folders, while preserving the directory structure. The numbering restarts at 1 in each lowest-level subdirectory, so I think the directory structure is the only way to keep them from getting mixed up.
I know that cp has a recursive option, -r, but how do I pick out just the ones without the underscore? And then how do I preserve the directory structure when I copy them over?
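GNU cp's --parents flag recreates the source-relative path under the destination, and find can exclude the suffixed variants; a sketch (assumes, per the listing above, that only the resized copies contain an underscore):

Code:
cd /path/to/source
mkdir -p /path/to/dest
find . -name '*.jpg' ! -name '*_*' -exec cp --parents {} /path/to/dest \;
# ./dir1/subdir1/pic1.jpg -> /path/to/dest/dir1/subdir1/pic1.jpg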
I have multiple JPEGs uploaded from an IP cam via FTP. I use mencoder to periodically pack them into a single AVI file; the problem is that sometimes one or two JPEGs submitted by the cam are broken, and this makes mencoder exit without producing the movie.
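One option is to validate the images before mencoder runs; a sketch assuming the jpeginfo package from the repositories:

Code:
jpeginfo -c *.jpg | grep -v '\[OK\]'   # list the files that fail the integrity check
jpeginfo -c -d *.jpg                   # or delete broken files outright (-d)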
How can I compress a file in Ubuntu 10.04?
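The stock tools cover the basic cases; a sketch (names are placeholders):

Code:
gzip file.txt                      # single file -> file.txt.gz
tar czf archive.tar.gz myfolder/   # whole folder -> one compressed archive
zip report.zip file.txt            # if Windows users need to open it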
View 5 Replies View RelatedI just read in my Linux+ resources that it is not a wise idea to compress tar files with gzip if my tape drive supports compression. In addition, the resources mention that using gzip for tar files runs the risk of data lost if an error occurs with the compression.
Does anyone compress tar files for major backups/restore policies?