I'm currently trying to design a small, simple shell program for area codes. I have a list of area codes in a database file, and I am trying to write a program that has the user input an area code and then prints out the information that immediately follows that area code in my database. I assume I need to use a find or locate command, but I'm not sure if I should be searching for a string or for the number itself. The number could possibly occur at some other point in the file, though the way I have the file set up it only occurs once, at the start of a line.
Can anyone suggest what function I should use and how I should go about it? As is, I only have the absolute bare-bones beginning: an echo for the prompt to input an area code, and the read once it's input. Without the find I'm not sure how much farther I can get. Also, would it make it easier if I added some character, such as a !, to the end of the number to make it easier to search for? With a macro that would be easy enough to do.
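Something like this is what I picture (a sketch; areacodes.txt is a placeholder name, and I'm assuming each record line starts with the area code):

Code:
#!/bin/sh
echo "Enter an area code:"
read code
# the ^ anchors the match to the start of the line, so the same
# digits appearing elsewhere in the file cause no false hits
grep "^$code" areacodes.txt

With the match anchored like that, a trailing marker such as ! shouldn't be needed.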
I'm using RHEL 6. Using the File Browser (Nautilus 2.28.4) I can easily locate any file I'm interested in by its name. I'd like to use this File Browser to locate a file based on its content, e.g. based on some word in a text file. It doesn't work for me that way... My question: does Nautilus support searching for files based on their content, or only based on the name of the file itself?
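For what it's worth, content search is straightforward from a terminal, which may do as a stopgap (the path and word are placeholders):

Code:
# list files under ~/Documents whose contents mention "invoice"
grep -rl "invoice" ~/Documents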
I am using the grep command to search in a particular file whose size is 11 GB, and I am getting a Segmentation fault error as output. My command and output are as follows:
Code:
[sdpuser@gnnsdp40 test]$ uname -a
Linux gnnsdp40 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
How can I search the complete file for the string? I have also used the Linux split command, which splits the 11 GB file into 11 files of 1 GB each. But I still get the same "Segmentation fault" error when using grep.
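As a workaround while the grep crash is investigated, awk or sed can stream the file line by line (SEARCH_STRING and bigfile.txt are placeholders):

Code:
# print matching lines without grep
awk '/SEARCH_STRING/' bigfile.txt
# or
sed -n '/SEARCH_STRING/p' bigfile.txt

One common cause of grief with huge files is a single enormous line; wc -L bigfile.txt would show the longest line length and help confirm or rule that out.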
I need to fgrep a list of things which are in a file. The file in which I will do the SEARCHING is a large text file, and I need fgrep to output the matches for each item from the list to a separate file, with the item from the list as the file name.
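A minimal sketch, assuming list.txt holds one item per line and big.txt is the file being searched (both names are my assumptions):

Code:
#!/bin/sh
while IFS= read -r item; do
  # fgrep (grep -F) treats the item as a fixed string, not a regex
  fgrep -- "$item" big.txt > "$item"
done < list.txt

This also assumes the items are safe to use as file names (no slashes).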
I'm looking for a way to insert the number of lines in a file at the start of the aforementioned file. This should be simple, but as I am not used to scripts in Linux, I am finding it tough going. I can find the number of lines in a file easily enough via
filesize=$(awk 'END {print NR}' "$1")
but as for inserting this into the first line, I'm failing to do so. I've tried some of the other approaches on these forums, but none so far have worked.
I've tried:
sed '1i$filesize' $1
but the single quotes stop the shell from expanding the variable, so sed inserts the literal text $filesize rather than the number. I've also tried:
but again with no luck, as cat seems to need an input stream. Just to recap: I want to insert a line at the start of a given file that holds the number of lines the original file has.
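For reference, a sketch that should work, using double quotes so the shell expands the variable (GNU sed's -i edits the file in place):

Code:
#!/bin/sh
filesize=$(awk 'END {print NR}' "$1")
sed -i "1i$filesize" "$1"

An alternative that avoids sed entirely (and also handles an empty file, where sed has no line 1 to insert before):

Code:
{ echo "$filesize"; cat "$1"; } > "$1.tmp" && mv "$1.tmp" "$1"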
I need to get the max number from the names of a set of files.
Format of the file names: [a-zA-Z]*_[0-9][0-9]_[A-Z][A-Z].log, with underscore '_' as the delimiter.
[code]....
The known part of the above file name will be GA.log. A given directory may or may not contain files in the above format, and may contain files other than the above format; if so, ignore them.
cmd=> ls *[0-9][0-9]_GA.log 2> /dev/null | awk -F_ '{ print $2}' | sort -nr | head -n1 | awk 'BEGIN { if ($1 >0 ) x=0; else x=1 } END {printf "%02.0f ", $1+1}'
output=> 01
If there are no files, the output should be 01; if file(s) are found, the output should be the next highest number + 1, which in the above example is 06. My cmd is a bit lengthy; how can I reduce it so it fits on one line?
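A shorter sketch along the same lines, assuming bash (the 10# prefix stops values like 08 from being read as octal, and the extra leading 0 covers the no-files case):

Code:
printf "%02d\n" $(( 10#0$(ls *_[0-9][0-9]_GA.log 2>/dev/null | awk -F_ '{print $2}' | sort -nr | head -n1) + 1 ))

With no matching files the substitution is empty, the arithmetic sees 10#0, and the output is 01; with 05 as the current highest, the output is 06.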
Can we find the inode of a particular file using its inode number?
The reason is I want to know how many blocks are occupied by a specific file.
Say we choose a block size of 1K and the file size is 100 bytes. In such a case, when the file is stored on disk, will the file occupy 100 bytes or 1K (since we have chosen the block size to be 1K)?
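A sketch of how to inspect this with GNU stat and find (the path and inode number are placeholders):

Code:
# inode number, apparent size, and allocated blocks of a file
stat -c 'inode=%i size=%s blocks=%b blocksize=%B' somefile
# locate a file by inode number under a directory
find /some/dir -inum 1234567

Note that stat's %b counts 512-byte units, so a 100-byte file on a filesystem with 1K blocks typically shows blocks=2, i.e. one full 1K block allocated.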
I'm trying to write a script that takes two arguments: the first argument is a number, and the second argument is a filename. The shell script should indicate whether the file's size is BIGGER or SMALLER than the number provided. This is what I have so far; am I on the right track? I'm hoping it's just a problem with my if command:
if [ $1 -h $2 ]
then echo "$1 is bigger than $2"
else
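For what it's worth, -h inside [ ] is a symbolic-link test, not a comparison, and [ compares two numbers, so the file's size needs to be fetched first. A sketch (using wc -c for the size in bytes; the equal case is lumped in with SMALLER):

Code:
#!/bin/sh
size=$(wc -c < "$2")
if [ "$size" -gt "$1" ]; then
  echo "$2 is BIGGER than $1"
else
  echo "$2 is SMALLER than $1"
fi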
I'm trying to isolate a number from a text file using sed. The text file looks like this:
-GARBAGE-GARBAGE-GARBAGE- Number of frames: 183933 frames Codec -GARBAGE-GARBAGE-GARBAGE-
I tried the following:
Code: sed "s/^.*Number of frames: //g; s/ frames Codec.*$//g" "info.txt" > "frames.txt" Strangely, it only seems to be stripping off the end, but not the beginning, like so: -GARBAGE-GARBAGE-GARBAGE- Number of frames: 183933
I'm obviously not using the command correctly, so what am I doing wrong?
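Hard to say without the exact file (carriage returns from a DOS-format file are one common culprit), but a more robust approach is to print only the captured number and suppress everything else with sed -n:

Code:
sed -n 's/.*Number of frames: \([0-9][0-9]*\) frames Codec.*/\1/p' "info.txt" > "frames.txt"

Only lines that actually match produce output, so any surrounding garbage lines are dropped instead of copied through.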
My folder contains some files, and I want a script to show the number of files in the folder as "Total file on folder : N". E.g. the Monday folder has six files, so it should show "Total file on folder : 6" when I run the script. This is my code:
Code:
#!/bin/sh
if [ -d /home/kenzo/Monday/ ] && {
echo "Monday"
ls -l /home/kenzo/Monday/
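A sketch of a working version (ls -1 lists one entry per line and wc -l counts them; hidden files are not included):

Code:
#!/bin/sh
dir=/home/kenzo/Monday
if [ -d "$dir" ]; then
  count=$(ls -1 "$dir" | wc -l)
  echo "Total file on folder : $count"
fi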
I want to search and replace strings in a file with strings from other files. I need to do it with big strings (string1 is big) and I want to use a txt file for this. But this code is not working:
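A sketch of one way to do a literal (non-regex) replacement, assuming the strings each fit on one line and live in string1.txt and string2.txt (both names are my assumptions):

Code:
#!/bin/sh
search=$(cat string1.txt)
replace=$(cat string2.txt)
# index() does fixed-string matching, so big strings full of
# regex metacharacters need no escaping
awk -v s="$search" -v r="$replace" '
{
  out = ""
  while ((i = index($0, s)) > 0) {
    out = out substr($0, 1, i - 1) r
    $0 = substr($0, i + length(s))
  }
  print out $0
}' target.txt > target.new

One caveat: awk -v interprets backslash escapes, so strings containing backslashes need extra care.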
If you create a file on UNIX/Linux with special chars in the name, like touch "la*, you can't remove it with rm "la*. You have to use the inode number (you can if you add a backslash before the name, I know, but as a user you'd have to guess that one was used in the file creation).
I checked the manpage for rm, but there's no mention of the inode number. Doing rm inodenumber doesn't work either.
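rm itself has no inode option; the usual trick is to let find match on the inode number and do the deleting. A sketch (123456 is a placeholder):

Code:
# show inode numbers alongside the names
ls -i
# remove the entry with that inode from the current directory, with confirmation
find . -maxdepth 1 -inum 123456 -exec rm -i {} \;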
I have a large number of log files on a Linux box which I need to cleanse of sensitive data before sending to a third party. I have used the below script on previous occasions to perform this task, and it has worked brilliantly (the script was built with some help from here :-)
However, now one of our departments has sent me a CLIENT_FILE.txt with 425000+ variables! I think I may have hit some internal limit. I have tried splitting the client file into 4 with around 100000 variables in each, but this still doesn't work. I'm loath to keep splitting, though, as I have 20 directories with up to 190 files in each directory to run through; the more client files I make, the more passes I have to do.
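Without seeing which limit was hit, one approach that sidesteps shell argument-length limits entirely is to generate a sed script from the client file and apply it in a single pass per log (a sketch, assuming CLIENT_FILE.txt holds one sensitive string per line and XXXX is the mask):

Code:
# escape regex metacharacters in each string, then emit an s/string/XXXX/g command
awk '{ gsub(/[][\\.^$*\/]/, "\\\\&"); print "s/" $0 "/XXXX/g" }' CLIENT_FILE.txt > scrub.sed
# apply every substitution in one pass
sed -f scrub.sed input.log > output.log

With 425000+ expressions this will be slow, but it needs only one pass per file no matter how big the client list grows.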
I need help creating a new Perl script which replaces IP addresses in a text file. E.g. if we find in the file any word like 11.222.333.44, then it has to be replaced with XX.XXX.333.44.
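A sketch of a Perl one-liner that masks the first two octets of anything shaped like a dotted quad, keeping each octet's length (file.txt is a placeholder):

Code:
perl -pe 's/\b(\d{1,3})\.(\d{1,3})\.(\d{1,3}\.\d{1,3})\b/("X" x length($1)) . "." . ("X" x length($2)) . ".$3"/ge' file.txt > masked.txt

The /e flag evaluates the replacement as Perl code, so 11.222.333.44 comes out as XX.XXX.333.44.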
If there's a tab-delimited file under /usr/desktop, how can I determine the number of rows and columns of the file in the shell? And, if told that the 3rd column of the file contains only numerical values and that all values in the 5th column are unique, how can I verify these claims in the shell?
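A sketch with awk (assuming every row has the same column count; the path is a placeholder):

Code:
f=/usr/desktop/data.tsv
# rows and columns
awk -F'\t' 'NR == 1 { cols = NF } END { print NR " rows, " cols " columns" }' "$f"
# verify the 3rd column is numeric on every row
awk -F'\t' '$3 !~ /^-?[0-9]+([.][0-9]+)?$/ { print "non-numeric value on line " NR }' "$f"
# verify the 5th column values are unique
awk -F'\t' 'seen[$5]++ { print "duplicate value on line " NR }' "$f"

If the last two commands print nothing, the claims hold.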
Code:
$ echo 2 * 3 > 5 is a valid inequality.

This will create a file in the current directory named '5' containing the number '2', the names of all the files in the current directory, followed by the number '3' and 'is a valid inequality.'
What I do not understand is why 'is a valid inequality' gets written to this file. I thought it would write '2', all the file names in the current directory, then '3' into the file called '5'. Why does the 'is a valid inequality.' get written to the file also?
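A quick demonstration of the contrast: quoting makes the whole thing one argument, so nothing is globbed or redirected:

Code:
$ echo '2 * 3 > 5 is a valid inequality.'
2 * 3 > 5 is a valid inequality.

Unquoted, the shell strips the redirection > 5 out of the command wherever it appears, expands * to the file names, and passes every remaining word, including 'is a valid inequality.', to echo as arguments; that is why the trailing words land in the file too.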
I need to change every number 10 in a text file to word form, or in short from 10 -> ten. The thing is, a 10 appearing in dates such as 10/22/1997 or 03-10-2011 should not be changed. I'm having some trouble because the file also contains numbers like "price range from 10-50k".
This is just a sample:
name: john smith
birthday: 10-11-1995
date hired: 05/10/2010
expected salary: 10-50k
typing speed: 10 wpm
[Code].....
Using the sed command, is it possible to make a change like this?
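A sketch using GNU sed's extended regexes (-r): replace 10 only when it is not flanked by a digit, '/', or '-', which leaves dates and ranges like 10-50k untouched (whether 10-50k should change is my assumption, and runs like "10 10" may need a second pass, since the first match consumes the separating space):

Code:
sed -r 's/(^|[^0-9/-])10($|[^0-9/-])/\1ten\2/g' file.txt

On the sample above, "typing speed: 10 wpm" becomes "typing speed: ten wpm" while the birthday, hire date, and salary range stay as they are.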
I need to count the number of lines in a file and put the output into a variable. I used wc -l filename, but I couldn't find an option to put the output into a variable. For example, if the number of lines is 5, I need echo $x to print 5.
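A minimal sketch using command substitution; redirecting stdin makes wc print just the number, without the file name:

Code:
x=$(wc -l < filename)
echo "$x"    # prints 5 for a five-line file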
I have on my Windows machine several hundred files in .nc/.ncs format for a CNC machine. I need to convert them to .txt, which is something as easy as opening one in Notepad and then saving it as .txt, but there are so many that doing this by hand would take way too long.
The reason I am writing to LinuxQuestions is that I would feel more comfortable loading a live CD and using some sort of terminal command to do this than downloading one of the many "freeware" type programs I have found for Windows (even more so since I have had a rootkit before and had to start all the way over to get rid of it).
I need to know:
1. Is it possible to do this from the terminal without super advanced knowledge?
2. Can someone point me in the right direction: something to read, or an example like the sketch below?
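On point 2, a sketch of what this could look like from a live CD (assuming the Windows partition is mounted at /mnt/windows/cnc; cp keeps the originals intact):

Code:
#!/bin/sh
cd /mnt/windows/cnc || exit 1
for f in *.nc *.ncs; do
  [ -e "$f" ] || continue       # skip the literal pattern if nothing matches
  cp -- "$f" "${f%.*}.txt"      # same content, new .txt extension
done

Since the files are already plain text, no real conversion is involved, just a copy under a new name.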
I'm looking for a way to have two hosts talk to the same volume without the need for backend host communication, i.e. clustered file systems. Does anyone know of a clustered file system that will allow this? The basic idea I have in my head would be to turn off caching and utilize stubs or pointers to let each host know when to reread new data. The hosts will not be writing to the same files or directories at the same time, and they will be targeting certain information in certain locations on the volume.
I'm trying to join two enclaves to pass data without networking. One host will drop a file and write a stub saying the file is there. The second host will then pick up the file, analyze it, and drop a report so that the first host can pick the report up.
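A sketch of the stub handshake itself, independent of which file system is chosen (all paths and the process step are placeholders):

Code:
# host A: write the data file completely, then drop the stub marking it ready
cp data.bin /shared/incoming/data.bin
touch /shared/incoming/data.bin.ready

# host B: poll for the stub before touching the data
while [ ! -e /shared/incoming/data.bin.ready ]; do
  sleep 5
done
process /shared/incoming/data.bin

Writing the stub only after the data file is complete is the usual ordering trick, though without cache coherency between hosts the reader may still need the volume mounted with caching disabled (e.g. noac on NFS) to see fresh data promptly.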