I have a large text file with three columns. I'm trying to write a Perl script that splits the file up based on the value of the third column. Every time the third column reads 0, a new file should be created, and all the data up until the next 0 should be written to that new file. This should happen over and over until the initial file has been entirely split up.
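A minimal sketch of the same idea in awk rather than Perl (the part_ file names are placeholders):
Code:
awk '$3 == 0 { close(out); out = sprintf("part_%04d.txt", ++n) }   # start a new file at each 0
     out != "" { print > out }' input.txt
The close() keeps awk from running out of open file descriptors when the input produces many output files.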
I am using RHEL 5. I have a very large test file which cannot be opened in vi. The file has some 8000 lines. I need to view the lines from 5680 to 5690. How can I view these particular lines in a large file? What command and options do I need to use?
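One common approach is sed, which prints only the requested range and then quits, so it never reads the rest of the file:
Code:
sed -n '5680,5690p; 5690q' bigfile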
I have a 7 GB VOB file which I created from a DVD using ffmpeg dump to remove CSS protection (it is legal where I live to do so). Now, I want to create a DVD/.iso that will be understood by regular DVD players/appliances. How do I do it?
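A common route is dvdauthor to rebuild the DVD structure and genisoimage to produce the ISO; a sketch, where movie.vob and the dvd/ directory are placeholder names:
Code:
dvdauthor -o dvd/ -t movie.vob        # add the VOB as a title
dvdauthor -o dvd/ -T                  # write the table of contents
genisoimage -dvd-video -o movie.iso dvd/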
I have a file which contains one line with a lot of floating-point numbers. In the very first place, and sometimes downstream, there are a few integers surrounded by blank spaces: 1 1.02-4 1.03-5 544 1.04-1 65 2.98-1 5.78-10 3.45-2 etc. I aim to split the file into more files, each of them containing an integer and the following floats until the next integer.
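A minimal awk sketch, assuming an integer can be recognised as a field containing no decimal point (the chunk_ names are placeholders):
Code:
awk '{
    for (i = 1; i <= NF; i++) {
        if ($i !~ /\./) { close(out); out = sprintf("chunk_%03d.txt", ++n) }   # integer: no dot
        if (out != "") print $i > out
    }
}' input.txt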
I have a file with 5 columns. Column 4 contains numbers. Is it possible to split the file into multiple files using a condition on the contents of column 4, i.e. if column 4 contains a value between 0 and 10, then print the lines to a new file called less_than_10.txt?
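awk handles this kind of column test directly; a sketch for the 0-10 case (the other.txt catch-all for remaining lines is an assumption):
Code:
awk '$4 >= 0 && $4 <= 10 { print > "less_than_10.txt"; next }
     { print > "other.txt" }' input.txt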
I have a huge split RAR archive on my webspace but not enough space to decompress the file.
Is it possible to decompress the split archive, deleting the already-decompressed parts on the fly? Currently I'm using WinRAR 3.70 beta 2, but it seems it doesn't have such an option.
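One workaround, assuming the extracted data can be streamed off the box entirely so it needs no local space: unrar can print the archive contents to stdout (the host and file names here are hypothetical):
Code:
unrar p -inul archive.part01.rar | ssh user@otherhost 'cat > extracted_file'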
I wonder if there's a way to improve a script that I have created, since I'm not an expert! It works, and it did what I wanted, but it took a while to do it... maybe it can be improved. Here's the background: I have one file with 244000 lines, let's call it X. I needed to split it into 1000 files, each one of 244 lines. I also needed the files to have the .arp extension. So here is what I did:
Code:
for i in {1..1000}
do
    sed -n '1,244p;244q' X > $i.arp
    sed -i '1,244d' X    # in this way I deleted the copied lines each time
done
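The repeated sed -i rewrites X from scratch on every pass, which is what makes it slow. A much faster sketch using split (assuming GNU coreutils; chunk_ is a placeholder prefix):
Code:
split -l 244 -d -a 4 X chunk_      # chunk_0000 ... chunk_0999, 244 lines each
i=1
for f in chunk_*; do mv "$f" "$i.arp"; i=$((i+1)); done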
I know that one can use ffmpeg to extract a smallfile.avi from a largefile.avi. But what I am looking for is a tool/command to split a large file into several files of a given size.
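If the pieces only need to be re-joined later rather than played individually, split does this directly (the 700m piece size is an arbitrary example):
Code:
split -b 700m largefile.avi part_
# rejoin later with: cat part_* > largefile.avi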
I have a log file on Ubuntu 10.04 that has 500 lines of log data in it. What command could I use in a terminal to split the single 500-line file into ten files of 50 lines each?
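split with a line count does exactly this (logpart_ is a placeholder prefix):
Code:
split -l 50 -d logfile.log logpart_    # produces logpart_00 ... logpart_09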
I'm trying to split an MPEG file using ffmpeg. The splitting itself works OK, but the quality is lower. What should I do in order to keep the same video quality?
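The quality drop usually means ffmpeg is re-encoding the streams; copying them instead avoids any generation loss. A sketch (the timestamps are placeholders):
Code:
ffmpeg -i input.mpg -ss 00:00:00 -t 00:10:00 -acodec copy -vcodec copy part1.mpg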
I've a file with a size of 6GB. I would like to compress this file and split it into smaller files. I was also thinking of using bzip2 to compress it, because it offers a good compression ratio. How can I split this file into small ones to compress it?
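You can compress and split in one pass by piping bzip2 into split, so the full compressed file never has to exist on disk (the 650m piece size is an arbitrary example):
Code:
bzip2 -c bigfile | split -b 650m - bigfile.bz2.
# reassemble and decompress later with: cat bigfile.bz2.* | bunzip2 -c > bigfile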
Why do standard Linux installation utilities split the root file-system and the home file-system onto two separate but relatively equal-sized partitions? For example, when I put Fedora on an 80GB disk, it automatically gave the root file-system 32GB, home 30GB, and the swap 8GB of space. However, since my home file-system has a directory with 28GB of files in it, why is my root file-system reading 100% usage? Is the home FS overlaid on top of the root FS? Is there an advantage to doing this? I just made a boot partition (50MB or so), a root partition (90% of the disk space), and a swap (4%-5% of the disk space).
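One possibility worth checking: if the home partition was never actually mounted on /home, everything written there lands on the root filesystem, which would explain the 100% usage. A quick diagnostic:
Code:
df -h /home      # shows which filesystem /home actually lives on
du -sh /home/*   # shows where the space is going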
Mandriva 2010, KDE 4.3.5, Dolphin 1.3. I like to use split screen with the file manager. I would like duplicate up and back buttons that always correspond to each pane of the window, so I need not worry about which pane has the focus when I go to use the buttons. I am hoping to find this functionality anywhere, anyhow. I have been using the default Dolphin file manager but don't care if I need to change to a different one.
I have gzip files ABC_000023232.gzip and BCD_023232032.gzip. I want to split these files into smaller files but keep the extension the same, because I am using it as a variable in a script.
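A sketch that splits each file and renames the pieces so they keep the .gzip extension (the 100m piece size is an arbitrary example; note the pieces will only decompress after being re-joined with cat):
Code:
for f in ABC_000023232.gzip BCD_023232032.gzip; do
    split -b 100m -d "$f" "${f%.gzip}_part_"
    for p in "${f%.gzip}_part_"*; do mv "$p" "$p.gzip"; done
done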
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1gb of the file has transferred, the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFSv3 and NFSv4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share (see the example mount command below).
5. Copying the files via a different protocol, SSH in this case. The file transfers are always successful when I use SSH.
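For reference, the kind of mount invocation item 4 refers to (the export path, mount point, and sizes here are placeholders):
Code:
mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt/media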
Regardless of what I do, the result is the same. The file transfers always fail after approximately 1gb.
Some other notes.
1. Both the client and the server are running Fedora 11 kernel 2.6.29.5-191.fc11.x86_64
I am out of ideas. Has anyone else experienced something similar?
I've got a large .tar.gz file that I am trying to extract. I have had a look around at the problem and it seems other people have had it, but I've tried their solutions and they haven't worked. The command I am using is:
I'm trying to copy a 6GB file across from my laptop to an external USB drive, but it quits at about 4.2GB every time with a "file size limit exceeded" error. I have checked the output of ulimit -a and there is no limit there on the file size. I'm using the Slax live CD for this, as it always gets the job done.
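That failure point suggests the 4 GiB per-file limit of a FAT32-formatted drive rather than a ulimit. One workaround, assuming the file only needs to be re-joined on the destination, is to split it below the limit:
Code:
split -b 4000m bigfile part_
# rejoin on the destination with: cat part_* > bigfile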