Trying to remove lines from a syslog text file that have duplicate strings
Mar 10 06:51:11[http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360]
then a few lines down
Mar 10 06:52:03 [http-8080-1] INFO com.MYCOMPANY.webservices.userservice.web.UserServiceController [u:2533274802474744|360] Authorize [platformI$tformIdAndOs=2533274802474744|360, userRegion=America|360
Both lines have the same u: number. The issue is that I need to remove the duplicates and leave just one, and the file has multiple duplicates with different u: numbers and is 14,000 lines long. Can anyone tell me if I can use awk, sed, or sort for something like this, i.e. removing lines that contain a certain string that's a duplicate?
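A minimal awk sketch, assuming the u:<number> token is the key that defines a duplicate (file names are placeholders):

    # Keep the first line seen for each u:<number> key; lines without
    # such a token pass through untouched.
    awk 'match($0, /u:[0-9]+/) {
             key = substr($0, RSTART, RLENGTH)
             if (seen[key]++) next    # a line with this key was already printed
         }
         { print }' syslog.txt > syslog.deduped.txt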
I am working on a Linux security auditing project on my servers. I want to find out all the commands executed by individual users. I think I can find the login details using the last command, but how can I find the commands executed by each user across all logins, other than via "history"?
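Shell history is easy for a user to bypass, so real auditing is normally done at the kernel level. A minimal sketch, assuming the audit daemon (auditd) is installed and running on a 64-bit system:

    # Log every execve() syscall, i.e. every command any user runs:
    auditctl -a exit,always -F arch=b64 -S execve
    # Later, list the commands executed by the user with UID 1000:
    ausearch -ua 1000 -sc execve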
I'm trying to come up with ideas for a simple way to strip a specific "entry" from a text file. I know tools like sed and perl can remove specific lines from a file, but I haven't been able to come up with an elegant way to remove my group of lines. In my file, the first "Location" line and the "SVNPath" line should be unique every time, but are they enough to strip out the whole group plus the single trailing line of whitespace separating each group? Add to this, my file will grow as new entries are added (always appended to the end), but new entries will have the same formatting.
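One hedged approach: treat each blank-line-separated group as a single awk record and drop the record whose Location matches. A minimal sketch, where /svn/myrepo is a placeholder for the Location you want removed:

    # RS="" puts awk into paragraph mode: one record per blank-line-separated
    # group. Print every group except the one matching the target Location.
    awk -v loc="/svn/myrepo" 'BEGIN { RS=""; ORS="\n\n" } $0 !~ loc' config.txt > config.new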
Each of these 67 text files contains about 1 million URLs. Yes, I have 67 text files with 1 million lines of URLs each, and I am sure I am swimming in duplicates. I tried opening one text file and clicking Sort -> Remove Duplicates. Now Gedit is not responding, my processor is maxed out at 100%, and I think I am finally ready to delve into some command-line code. Can anyone give me idiot-proof instructions on how to sort the duplicates out of each one of these 67 text files? How about no duplicates across all 67?
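sort -u handles both cases and copes with million-line files far better than Gedit. A minimal sketch, assuming the files match urls_*.txt (a placeholder pattern):

    # Deduplicate each file individually (sort -o can safely write
    # back to its own input file):
    for f in urls_*.txt; do
        sort -u "$f" -o "$f"
    done
    # Produce one combined file with no duplicates across all 67:
    sort -u urls_*.txt -o all_urls.txt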
I have a CSV file (A.csv) with a total of 4,600,000 lines. That's too many, and only a few are necessary. I also have a txt file with 150 lines (X.txt); each line is a dataset name from a mainframe and looks like abc.def.123.456. How do I remove lines from A.csv where none of the dataset names from X.txt are present?
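grep can do this in one pass: -f reads the 150 patterns from X.txt, and -F treats them as fixed strings so the dots in abc.def.123.456 are not interpreted as regex wildcards. A sketch:

    # Keep only the A.csv lines that contain at least one dataset name
    # listed (one per line) in X.txt:
    grep -F -f X.txt A.csv > A.filtered.csv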
I have a big file of random numbers I generated at some point in time, after working on it with different things (how fun that was). I want to remove duplicate lines, and I'm not sure I'm doing this right.
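Two common one-liners, depending on whether line order matters (file names are placeholders):

    # If the output may be sorted:
    sort -u numbers.txt > numbers.deduped.txt
    # If the original order must be kept (prints only the first
    # occurrence of each line):
    awk '!seen[$0]++' numbers.txt > numbers.deduped.txt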
I have this massive table file with some data in it, and I want to replace some lines that are wrong with the correct ones from another table file of the same format. The wrong lines are not all together in a block but randomly distributed, so I need a loop that checks whether the line is in the other file and, if it is, replaces it. I want to try to do it with sed or awk, but I don't really know how.
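awk can do this without an explicit loop by reading the corrections first. A minimal sketch, assuming column 1 is a unique key present in both files (file names are placeholders):

    # Pass 1 (NR==FNR): remember each corrected line by its key.
    # Pass 2: print the corrected line when the key is known,
    # otherwise print the original line unchanged.
    awk 'NR == FNR { fix[$1] = $0; next }
         $1 in fix { print fix[$1]; next }
         { print }' corrections.txt table.txt > table.fixed.txt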
I need to do some text file manipulation which I think should be done with standard commands in BASH. I'm looking at comma-separated text files (stock market data). It comes in the form of date, stock code, open, high, low, close, volume. What I need to do first is move all data with the same stock code sequentially into individual files.
While doing this, since the stock code will now be the file name, I need to remove the stock code. Next I need to filter out overlapping data from different files with the same date, i.e. where two files contain the same date on a line, only one of those lines should be added to the combined file. I think there must be a tutorial out there for basic text manipulation like this; I just haven't found it yet.
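A minimal awk sketch of both steps, assuming the column order date,code,open,high,low,close,volume given above (file names are placeholders):

    # Step 1: split into one file per stock code, dropping the code column.
    # Note: ">>" appends, so start with a fresh output directory.
    awk -F, '{
        out = $2 ".csv"
        print $1 "," $3 "," $4 "," $5 "," $6 "," $7 >> out
        close(out)    # avoid running out of open file descriptors
    }' quotes.csv

    # Step 2: merge two per-code files, keeping only the first line per date.
    awk -F, '!seen[$1]++' fileA.csv fileB.csv > combined.csv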
Having recovered from busting my installation, I feel an urgent need to know what I did to set it up. So... I would like to see all the commands I ran in the terminal window and store them (to execute as a script in the future?). I can see prior commands using the up arrow; is there a way of storing all of those commands from history? Also, any pointers on setting up some sort of backup of the package installation setup?
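A sketch of both parts; the dpkg lines assume a Debian/Ubuntu-style system:

    # Dump the current shell's history to a file you can edit into a script:
    history > setup-commands.txt
    # Record the installed-package selection, and restore it later:
    dpkg --get-selections > packages.txt
    sudo dpkg --set-selections < packages.txt
    sudo apt-get dselect-upgrade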
I have a txt file with a couple of comment lines:

    Number of title = !num!
    #line1
    #line2
    #line3
I wrote a script with sed to replace !num! in this file, which is very straightforward. However, I also want to remove a number of the "#" lines based on the !num! value. Is there an easy way to do that with sed, or will I have to write a script that loops through the file?
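A minimal sketch of one way to do it, assuming a value of 2 means only the first 2 "#" lines should survive (that interpretation of the value is a guess; file names are placeholders):

    num=2
    # sed fills in the placeholder; awk then drops every "#" line
    # after the first $num of them.
    sed 's/!num!/'"$num"'/' input.txt |
        awk -v n="$num" '/^#/ { if (++c > n) next } { print }' > output.txt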
If I interactively ssh to a remote host and enter commands, I can up-arrow through the command history. If a script ssh's to a remote host and calls a command, the command does not get appended to the history. How can I configure ssh or sshd so that it does? I'd like to have those scripted commands available in the history file when I log back in interactively.
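As far as I know, sshd itself never writes shell history, and bash only saves history for interactive shells, so non-interactive ssh commands never reach ~/.bash_history. One hedged workaround is to have the script append its own commands to the remote history file (the command name and host are placeholders; the path assumes a default bash setup):

    # Run the command, then record it in the remote user's bash history:
    ssh user@host 'backup.sh --full; echo "backup.sh --full" >> ~/.bash_history'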