Each line of the file I am sorting is in the following format:
<url> <month> <day>
For example:
[URL]
I wrote the following to sort:
Code:
#!/usr/bin/perl
$in = shift;
chomp($in);
[code]....
The script worked fine for my small test files, but failed on my real input file. The input file is 18MB and contains more than 300,000 lines. The output ends up containing some broken lines.
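If all the script has to do is the sort itself, GNU sort may hold up better on a 300,000-line file than hand-rolled Perl. A minimal sketch, assuming the month field is an English month name, the day is numeric, and input.txt stands in for the real file:

Code:
# -k2,2M sorts the 2nd field as a month name, -k3,3n sorts the 3rd numerically
sort -k2,2M -k3,3n input.txt > sorted.txt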
I need to create an output file with the last two columns as a row of data (using the first column as headings) and the file name as the first row. For example, the output file should look like this:

system_name,0900_in,1000_in,1100_in,1200_in,1300_in,1400_in,0900_out,1000_out,1100_out,1200_out,1300_out,1400_out
system1,15,17,19,21,23,25,24,26,28,30,32,34

I need to run a script to do the above conversion. Need your help.
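A hedged awk sketch, assuming the input file (system1.txt, a hypothetical name) has one line per time slot in the form "0900 15 24" (time, in value, out value) and the system name comes from the file name:

Code:
#!/bin/bash
# build the header from column 1 (first the _in labels, then the _out labels),
# then emit the in values followed by the out values as one CSV data row
awk -v name="system1" '
    { t[NR] = $1; iv[NR] = $2; ov[NR] = $3 }
    END {
        h = "system_name"; d = name
        for (i = 1; i <= NR; i++) h = h "," t[i] "_in"
        for (i = 1; i <= NR; i++) h = h "," t[i] "_out"
        for (i = 1; i <= NR; i++) d = d "," iv[i]
        for (i = 1; i <= NR; i++) d = d "," ov[i]
        print h
        print d
    }' system1.txt > system1.csv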
For the Linux sort command, how do I force sort to load all input into memory, assuming I have enough memory? Or is it best to use a RAM disk to store the input before feeding it to sort?
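GNU sort exposes its memory buffer directly, so a RAM disk shouldn't be needed; sort only spills to temporary files once the buffer is exceeded. For example:

Code:
# give sort a 2 GB in-memory buffer; -T sets where any spill files would go
sort -S 2G -T /tmp input.txt > sorted.txt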
How can I generate a list of files in a directory [e.g.: "/mnt/hdd/PUB/"] ordered by the files' modification time, in descending order (the oldest modified file at the end of the list)? ls -A -lRt would be great: [URL] But if a file is changed inside a directory, it lists the full directory, so the pastebinned link isn't good (I don't want a list ordered by "directories", I need a "per file" ordered list). OS: openwrt (no perl, not enough space for it, and no "stat" or "file" command).
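One possible sketch using find; the catch is that -printf is a GNU find feature, and busybox find on openwrt may or may not have it compiled in:

Code:
# print "epoch-mtime path" for every file, newest first (oldest ends up last)
find /mnt/hdd/PUB/ -type f -printf '%T@ %p\n' | sort -rn | cut -d' ' -f2-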
For example, I want to rename the files below like this: test1.png, test2.png, and so on.
-rw-rw-r--. 1 test test  20448 2010-12-08 20:11 2010-12-08-212440_1440x900_scrot.png
-rw-rw-r--. 1 test test  29799 2010-12-08 21:25 2010-12-08-212526_369x331_scrot.png
-rw-rw-r--. 1 test test  34167 2010-12-08 23:54 2010-12-08-235424_580x328_scrot.png
-rw-rw-r--. 1 test test 155202 2010-12-08 23:55 2010-12-08-235511_1440x900_scrot.png
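A minimal bash sketch, assuming the alphabetical glob order (which here matches the timestamps) is the order you want:

Code:
#!/bin/bash
n=1
for f in *_scrot.png; do
    mv -- "$f" "test$n.png"
    n=$((n + 1))
done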
I have very little Linux experience and need some help with a bash script. I need a script that I can set cron to run, to sort files out of a holding folder into final folders. It doesn't necessarily have to be bash, but I think bash would be sufficient for this. File names are formatted as such when created: Dest-Date-Time-CID-Destination# I want the files to be moved from an all-in-one holding folder to a folder structure like this.
So the script will need to make directories based on information in the file name, which is delimited by single dashes, then move the files from the holding folder into the newly created "sorted" folders.
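A hedged sketch, assuming the holding folder is /holding and the target tree is /sorted/<Dest>/<Date> (both hypothetical; adjust to the real layout), and that no field contains a dash of its own:

Code:
#!/bin/bash
for f in /holding/*; do
    name=$(basename "$f")
    # split Dest-Date-Time-CID-Destination# on the dashes
    IFS=- read -r dest date time cid destnum <<< "$name"
    mkdir -p "/sorted/$dest/$date"
    mv -- "$f" "/sorted/$dest/$date/"
done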
In a moment of madness I decided to reorganise my mp3 player. I am putting everything into one folder (called 'all'), but first I am renaming the files to the following format:
artist ## songtitle
where ## is a number like 01, 02. This has been pretty straightforward with the songs that already started with a number, because then I could just
rename 's/^(\d\d)/artist $1/' *.mp3
but I have a problem with those tracks that don't already have the track-numbers at the beginning. I suppose what I am asking is: can I turn
All Along the Watchtower.mp3
Purple Haze.mp3
Hey Joe.mp3
into
01 All Along the Watchtower.mp3
02 Purple Haze.mp3
03 Hey Joe.mp3
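A bash sketch that numbers the files in glob (alphabetical) order; if the track order differs, feed the names in the order you actually want instead:

Code:
#!/bin/bash
n=1
for f in *.mp3; do
    mv -- "$f" "$(printf '%02d' "$n") $f"
    n=$((n + 1))
done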
PU12829,24869;PD15733,24869;PD15733,19785;PD12829,19785;PD12829,24869;
PU4599,20915;PD9924,20915;PD9924,18898;PD4599,18898;PD4599,20915;
PU12829,24869;PD15733,24869;PD15733,19785;PD12829,19785;PD12829,24869;
PU4599,20915;PD9924,20915;PD9924,18898;PD4599,18898;PD4599,20915;
PU1723,3423; # this line is ignored, too short
[Code]...
What I'm trying to do is, while true, cut each line from the file that begins with PU and that's longer than 12 characters, and write each one to an increasingly numbered file, starting with object1 and so on.
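A hedged awk sketch, assuming one record per line and that "longer than 12 characters" refers to the whole line:

Code:
# each qualifying line is written to its own file: object1, object2, ...
awk '/^PU/ && length($0) > 12 { print > ("object" ++n) }' input.txt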
I am trying to write a script which compares a log file with a reference file. The log file has a table; the LHS of the table is constant strings, and the RHS values change if there are any changes in configuration. code...
Here I am looking for a script which compares the test.log file (whose RHS data types are known in advance, whether digit or string) with test.Ref, which is the reference file for test.log. It would be really helpful if any of you could give some idea about writing this script.
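Without seeing the actual table layout this is only a guess; a hedged awk sketch, assuming each line is "key value" and any value differing from the reference should be reported:

Code:
# load the reference key/value pairs, then flag differing values in the log
awk 'NR == FNR { ref[$1] = $2; next }
     $1 in ref && $2 != ref[$1] { print $1 ": got " $2 ", expected " ref[$1] }' test.Ref test.log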
I have a text file with 2 columns. Column A has 69,000 rows; column B has 49,000 rows. Column A has our complete product list, and column B has the product list from manufacturer 1. Only some rows are common between the two columns, and column B is not a subset of column A: column A has extra entries and so does column B. I need to know which rows from column B are common with column A, and which rows from column B are not in column A. Essentially I want to know, from this list, how many of our products come from manufacturer 1, and how many products the manufacturer has that we don't carry.
How would I achieve this? My natural approach to solving this kind of obstacle is to reach for MS Excel and use its lookup function, but it isn't working: it takes forever and hangs up, since the file is so huge (and probably my Excel skills are really bad).
How can I do this from the command line? I am looking for an awk command if possible, rather than sed, since I am trying to pick up awk's syntax and usage. My thought process is: sort columns A and B, then for every row in A, look up and output based on a condition. I don't know if I am on the right track.
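You're on the right track; a hedged sketch, assuming the file is tab-delimited (products.txt is a hypothetical name). Split the columns apart first, then let awk do the set lookup:

Code:
#!/bin/bash
# pull each column out, skipping the empty cells where column B is shorter
awk -F'\t' '$1 != "" { print $1 }' products.txt | sort -u > colA.txt
awk -F'\t' '$2 != "" { print $2 }' products.txt | sort -u > colB.txt
# rows of B that are also in A
awk 'NR == FNR { a[$0]; next } $0 in a' colA.txt colB.txt > common.txt
# rows of B that are not in A
awk 'NR == FNR { a[$0]; next } !($0 in a)' colA.txt colB.txt > b_only.txt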
I'd like to extract a single column from 5 different files and put them together in an output file. I saw a similar question for 2 input files, and the line of code worked very well:

Code:
awk 'NR==FNR{a[NR]=$2; next} {print a[FNR], $2}' file1 file2

I added file3, file4 and file5 at the end, but it doesn't work. Does anyone know what I have to do?
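The NR==FNR trick only distinguishes the first file from all the rest, which is why it breaks past two files. A sketch that generalizes to any number of files, appending field 2 of each file to a per-line buffer:

Code:
awk '{ a[FNR] = (FNR in a) ? a[FNR] OFS $2 : $2
       if (FNR > max) max = FNR }
     END { for (i = 1; i <= max; i++) print a[i] }' file1 file2 file3 file4 file5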
I want to merge columns (selectively) from several files and create a new file with the merged output. I saw some suggestions to use pr/paste to join the columns and then awk to pick out the columns.
Code:
pr -m -t -s file1 file2 | gawk '{print $4,$5,$6,$1}'

But I have hundreds of files, and I cannot manually pick up columns using awk as given in the suggestion above.
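One way to avoid hand-listing field numbers is to pull the wanted column out of each file separately and paste the results together; a sketch, assuming field 4 is the one you want from every file:

Code:
#!/bin/bash
for f in file*; do
    awk '{ print $4 }' "$f" > "$f.col"
done
paste file*.col > merged.txt
rm -f file*.col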
I'm pretty sure this is super simple, but I've searched and searched and for the life of me I can't find any info on how to limit the columns that display in the interactive top program with arguments passed from the command line. I recall seeing something formatted like this ...
~/top -f (kdx)
Which would only show the %CPU, UID, program name fields/columns. I'd like to display the results in a really narrow terminal window on the right side of my desktop.
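I can't vouch for a top flag like that; two approaches that do work (the ps line assumes Linux procps):

Code:
# inside top: press 'f' to toggle individual columns, then 'W' to save the layout
# or skip top entirely and build the exact columns with ps:
ps -eo pcpu,uid,comm --sort=-pcpu | head -n 15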
I'm working my way through Trainsignal's CompTIA Linux+ Training course and I have a question about IRQs. According to the lesson, using the command "cat /proc/interrupts", I need to memorize the system IRQ numbers 0-15. But when I use this command, I get a somewhat unordered list; see below.
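The numbered IRQ lines can be put back in order by sorting numerically on the field before the colon; a small sketch:

Code:
# drop the CPU header line, then sort on the IRQ number (named rows like LOC:
# and NMI: sort as 0, but the numbered IRQs 0-15 come out in order)
sed 1d /proc/interrupts | sort -t: -k1,1n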
I am running the command on a Mac, but since it is a generic Unix command and a command-line query, I thought I could ask on this forum. I am running the command
Code:
df -h | grep '/dev/'

I get:

Code:
/dev/disk0s2                      389Gi    62Gi   327Gi    16%    /
/dev/disk0s3                       76Gi    24Gi    52Gi    32%    /Volumes/Backup
/dev/disk3s2                      500Gi    47Gi   453Gi    10%    /Volumes/Misc

Note the huge space between the 1st and 2nd column.
This is because I currently have some NAS drives mounted which are not showing due to the grep. When they are not mounted, the output is fine, with equal spacing between each column (like between columns 2 and 3, or 3 and 4). I want to do a (dare I say) sed or awk or something to reduce the space between the 1st and 2nd columns, so that it is spaced like the other columns. This is because I am showing this output somewhere, and because of the space it isn't displaying correctly. I also hope the command will still work when the NAS (afp) drives are not mounted; basically, consistency.
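Two hedged one-liners: tr squeezes every run of spaces down to one, while column re-aligns the whole table evenly; both should behave the same whether or not the NAS drives are mounted:

Code:
df -h | grep '/dev/' | tr -s ' '
df -h | grep '/dev/' | column -t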
If there's a tab-delimited file under /usr/desktop, how can I determine the number of rows and columns of the file in the shell? And if I'm told that the 3rd column of the file contains only numerical values and all values in the 5th column are unique, how can I verify these in the shell?
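A hedged sketch (file.txt is a placeholder, and the numeric test assumes plain integers or decimals):

Code:
#!/bin/bash
# rows and columns (column count taken from the first line)
awk -F'\t' 'NR == 1 { print NF " columns" } END { print NR " rows" }' file.txt
# is every value in column 3 numeric?
awk -F'\t' '$3 !~ /^-?[0-9]+([.][0-9]+)?$/ { exit 1 }' file.txt && echo "col 3 all numeric"
# is every value in column 5 unique?
awk -F'\t' 'seen[$5]++ { exit 1 }' file.txt && echo "col 5 all unique"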
I've searched everywhere and I can't come up with a good solution. For each line I need to find the average, min, and max. I've seen plenty of solutions where the number of columns is fixed; unfortunately for me, these lines can get pretty large. My thought was to read each line individually into an array, loop through the array, and find the avg, min, and max that way, but I haven't had much luck. I can read each line using a while loop, but I'm having trouble with the array part. Or perhaps that's not the best solution?
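awk handles a variable column count naturally, so the bash array may not be needed at all; a sketch:

Code:
# NF is the per-line field count; skip blank lines so NF is never zero
awk 'NF { min = max = sum = $1
          for (i = 2; i <= NF; i++) {
              sum += $i
              if ($i < min) min = $i
              if ($i > max) max = $i
          }
          print "avg=" sum / NF, "min=" min, "max=" max }' file.txt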
How would I list the 4 users with IDs 10, 11, 12 and 13 from my users list, and output them to a file busers with their names sorted in ascending order? How would I accomplish that with a one-line command?
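A hedged one-liner, assuming the users list is /etc/passwd with the UID in field 3:

Code:
awk -F: '$3 >= 10 && $3 <= 13 { print $1 }' /etc/passwd | sort > busers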
A few months ago I set up a server with three hard disks. The partition mapping of the disks is as follows:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x7ca36fee
[code]....
Now I have the following problem: the LVM file system doesn't mount properly. If I open the mount point, I see only a few files of the LVM disk. If I try to unmount the disk, I get the following error:
umount /data/
umount: /data/: not mounted
If I try to mount the volume, I get the following error:
mount -a
mount: /dev/mapper/gegevens-Data already mounted or /data busy
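Some hedged diagnostics that may narrow this down (the volume names below are taken from the error message):

Code:
lvs                                  # do the logical volumes look healthy?
mount | grep -e data -e gegevens     # is the mapper device mounted elsewhere?
lsof +D /data                        # what is holding /data busy?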
What options should I use with the sort command to sort out the top 5 CPU processes (ps -eo user,pid,ppid,%cpu,%mem,fname | sort ??? | head -5), showing max to min usage?
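%cpu is the 4th field in that ps format, so a numeric reverse sort on field 4 should do it; with GNU sort the non-numeric header line drops to the bottom:

Code:
ps -eo user,pid,ppid,%cpu,%mem,fname | sort -k4,4nr | head -5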