General :: Replace Sequential Numbers In A File With A Different Sequence Using Sed?
Apr 11, 2010
I am trying to find a way to replace a set of sequential numbers in a file with a different sequence using sed. This might be done more easily with awk or some sort of bash script, but it seems to me there must be a way to do this easily with sed. Basically, what I am editing is a Cisco switch config: I want to change the sequence of ports to a different numbered sequence. Here is an example of what I am trying to do. I want to change, for example, the file:
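The example file didn't come through above, so this is only a rough sketch; it assumes Cisco-style "interface FastEthernet0/N" lines and that every port number just needs a fixed offset (both assumptions). Pure sed would need one s command per port, so awk is easier for the arithmetic:

Code:
# shift ports 0/1, 0/2, ... up by 24 (the offset of 24 is an assumption)
awk '/^interface FastEthernet0\// {
         split($2, p, "/")
         $2 = "FastEthernet0/" (p[2] + 24)
     }
     { print }' switch.conf > switch.new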
Is there a command to return a recursive listing of sub-directories and the number of files in them? I have found plenty of ways to get the total number of files in a directory structure, but none that gives a list of the sub-directories with the number of files in each. "du" gives me a listing of directories with their sizes, but I couldn't find an option (or any other way) to give me the number of files as well. Ideally, I'd like to get a list with "Size" "Files" "Dir name" - the order of the columns doesn't matter. Is there a "simple" command line solution or do I need a shell script for that?
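One way to get that kind of listing is to loop over the directories and count the regular files directly inside each one; a sketch (assumes GNU find for -maxdepth):

Code:
find . -type d | while read -r dir; do
    files=$(find "$dir" -maxdepth 1 -type f | wc -l)   # files directly inside $dir
    size=$(du -sh "$dir" | cut -f1)                    # human-readable size
    printf '%s\t%s\t%s\n' "$size" "$files" "$dir"
done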
I'm writing a bash script where I read a text file (containing a column of numbers) and store each line in an array. There seem to be some problems with the whole thing however, but only for some files and not others. Here's what I do:
Code:
#!/bin/bash
file=time_notOk.txt   ### The file with a column of numbers
i=0                   ### Array counter
### Read the file
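The read loop itself seems to have been cut off above; a minimal sketch of the usual pattern follows (note that a file saved with Windows CRLF line endings, or one missing its final newline, is a common reason a loop like this works for some files and not others):

Code:
while read -r line; do
    num[i]=$line          # store each line in the array
    i=$((i + 1))
done < "$file"

echo "Read ${#num[*]} values"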
While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same.
Thus, we have this demonstration.
Quad core (AMD Athlon X4, 2.6 GHz), working entirely on a tmpfs (16 GB RAM total; no swap), no other major loads.
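The actual test commands weren't included above, so this is only a rough reconstruction of the comparison (file names and convert options are placeholders):

Code:
mkdir -p out
time for f in *.jpg; do convert "$f" -resize 50% out/"$f"; done          # plain for loop
time printf '%s\n' *.jpg | xargs -I{} -P1 convert {} -resize 50% out/{}  # xargs, one process
time printf '%s\n' *.jpg | xargs -I{} -P4 convert {} -resize 50% out/{}  # xargs, four processes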
I have to read a couple of numbers from a random.txt file. In this .txt file there are random numbers, separated by a space. For example, if you opened test.txt:
test.txt: 1 6 1 3 6 8 10 2 4
I would like to read those numbers using cat and store them into an array:
numlen=${#num[*]} - (must be like this because it is a part of a larger program)
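A minimal sketch (using test.txt as in the example); cat plus the shell's default word splitting is enough to fill the array:

Code:
num=( $(cat test.txt) )     # whitespace-separated numbers become array elements
numlen=${#num[*]}           # (as required by the larger program)
echo "Read $numlen numbers: ${num[*]}"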
I have a China phone which has an mp3 player, and unfortunately it reads the file names on its memory card in sequential order according to where each file is saved. The file system is NTFS.
Note: the DDD song was last because I saved songs AAA to EEE first and added DDD later.
Then I deleted the BBB song and replaced it with an FFF song.
This is kind of lame, but the phone's OS has no capability to sort the files by filename in its built-in mp3 player.
My question is: how can I reorder the files (sector by sector - is my term right?) so the lame mp3 player will finally read the files in alphabetically sorted order? I will plug my phone into my PC.
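One low-tech approach (a sketch; the mount point and folder name are guesses) is to copy everything off the card, delete it, and copy it back in alphabetical order, so the directory entries get rewritten in the order the player expects:

Code:
mkdir -p /tmp/songs
cp /media/phone/MUSIC/*.mp3 /tmp/songs/
rm /media/phone/MUSIC/*.mp3
for f in /tmp/songs/*.mp3; do      # the glob expands in sorted order
    cp "$f" /media/phone/MUSIC/
done
sync                               # flush writes before unplugging the phone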
I am trying to change a file with hundreds of entries, swapping the "IP Address Number" line and the "Host Name" line, one for the other.
This is the original:

[IP Address Configuration : "172_17_27_161.SUBNET_U"]
IP Address Number = 172.17.27.161
Assignment Type = 8
Host Name = CAST124
Last Used = 1290499294000
MAC Address = 1 00 16 35 74 4C 59
Client Identifier = 01 00 16 35 74 4C 59

and the desired result is:

[IP Address Configuration : "172_17_27_161.SUBNET_U"]
Host Name = CAST124
Assignment Type = 8
IP Address Number = 172.17.27.161
Last Used = 1290499294000
MAC Address = 1 00 16 35 74 4C 59
Client Identifier = 01 00 16 35 74 4C 59

I know how to change one character for another with sed, but not how to swap one line with another, because I don't know which line number it is on.
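Since the three lines always appear together, one approach (a sketch, assuming GNU sed and that every block really does go IP Address Number / Assignment Type / Host Name in that order) is to pull the three lines into the pattern space and swap the first and last:

Code:
sed '/^IP Address Number/{
    N;N
    s/^\(IP Address Number[^\n]*\)\n\(Assignment Type[^\n]*\)\n\(Host Name[^\n]*\)$/\3\n\2\n\1/
}' config.txt > config.new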
I have a txt file with a list of IDs and I need to add a comma to every line and then remove the newline character so it becomes one long string. So to clarify, I have txt file content that looks like this:
234
5466
2356
... and so on.
but I would like this to change to 234,5466,2356,... I looked at sed and tried to wrap my head around the commands, but I guess my brain isn't smart enough; it's really confusing for me. I've managed to add commas to the end of each line (sed "s/$/,/g" filename), but somehow I can't seem to remove the newline character from each line.
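sed works one line at a time, which is why removing the newline is awkward there; paste or tr does the joining in one shot (ids.txt is a placeholder name):

Code:
paste -sd, ids.txt                    # join every line with a comma

tr '\n' ',' < ids.txt | sed 's/,$//'  # same idea, then trim the trailing comma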
I have recently switched to using LXDE on my PC and I am on the whole pretty pleased with it. However, PCMan is giving me a really odd problem. Some of the files/folders that contain numbers are being displayed in the wrong order: they are being sorted by their first digit, not by the whole number.
I've got the slackware folder with my builds of my favorite packages in /home/user/slackware; in /tmp I create the new versions of those packages. Is there a way in bash to replace the old ones in /home/user/slackware with the new ones from /tmp?
Something like... in /tmp: mv -i package1.tgz /home/user/slackware, plus something to replace the old version of the app with the new one.
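A sketch, assuming Slackware-style file names like name-version-arch-build.tgz and that the package name itself contains no hyphen (both assumptions):

Code:
cd /tmp
for new in *.tgz; do
    base=${new%%-*}                              # package name up to the first hyphen
    rm -f /home/user/slackware/"$base"-*.tgz     # drop the old build
    mv -i "$new" /home/user/slackware/           # move the new one in
done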
I have a jar, and I need to replace a class in it. At the moment I can only open it with "archive manager" and drag and drop the newly compiled class into the jar, but I think this is really tedious. Can I do it with just a command?
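Either of these updates an entry in place from the command line (the jar name and class path are placeholders; the .class file has to sit under its package directory, e.g. com/example/):

Code:
jar uf myapp.jar com/example/MyClass.class   # "u" updates an existing archive

zip myapp.jar com/example/MyClass.class      # plain zip can do the same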
I am having difficulty getting sed to replace a string of text in an XML file, despite the fact that I have no trouble using grep to find that same string. Since the new string and old string to be replaced contain a lot of special characters, I thought it best to store them in variables as opposed to using a slew of backslashes:
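With the strings in variables, the usual trick is to pick an s-command delimiter that appears in neither string; a sketch using GNU sed's -i (the values and file name are placeholders, and any &, backslash, or delimiter characters inside the variables would still need escaping):

Code:
old='<tag attr="x">old value</tag>'
new='<tag attr="x">new value</tag>'
sed -i "s|$old|$new|g" file.xml      # | as the delimiter, double quotes so the variables expand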
I have a large number of log files on a Linux box that I need to cleanse of sensitive data before sending to a third party. I have used the below script on previous occasions to perform this task, and it has worked brilliantly (the script was built with some help from here).
However, now one of our departments has sent me a CLIENT_FILE.txt with 425000+ variables! I think I may have hit some internal limit. I have tried splitting the client file into 4 with around 100000 variables in each, but this still doesn't work. I'm loath to keep splitting, though, as I have 20 directories with up to 190 files in each directory to run through. The more client files I make, the more passes I have to do.
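The original script isn't shown above, but if it passes every variable on the command line, the argument-length limit is the usual culprit; one way around it is to turn CLIENT_FILE.txt into a sed script on disk and apply it with -f (this assumes one sensitive value per line, which is a guess at the file's format):

Code:
# build one s command per value, escaping regex metacharacters first
sed -e 's/[]\/$*.^[]/\\&/g' -e 's|.*|s/&/REDACTED/g|' CLIENT_FILE.txt > redact.sed

for f in */*.log; do                 # adjust the glob to the real directory layout
    sed -f redact.sed "$f" > "$f.clean"
done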
I have an SQL dump, file.sql that has many references to a particular domain, d1.com. I would like to run a command that can replace every occurrence of d1.com with d2.com. I've tried looking into sed before but the man pages are quite daunting.
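A sketch with GNU sed's in-place editing; the -i.bak suffix keeps the original until the result has been checked:

Code:
sed -i.bak 's/d1\.com/d2.com/g' file.sql   # every occurrence; original kept as file.sql.bak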
I have large text files with space-delimited strings (2-5). The strings can contain "'" or "-". I'd like to replace, say, the second space with a pipe. What's the best way to go? Using sed, I was thinking of this:
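The attempted command didn't come through above, but sed's s command accepts an occurrence number as a flag, which does exactly this (the file name is a placeholder):

Code:
sed 's/ /|/2' input.txt     # replace only the second space on each line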
on creating a new perl script which replaces IP addresses in a text file, e.g. if we find any word like 11.222.333.44 in the file, it has to be replaced with XX.XXX.333.44.
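A sketch of a perl one-liner that masks the first two octets with X's of matching length (the file name is a placeholder):

Code:
perl -pe 's/\b(\d{1,3})\.(\d{1,3})(\.\d{1,3}\.\d{1,3})\b/("X" x length($1)) . "." . ("X" x length($2)) . $3/ge' file.txt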
I need to replace a string in a file (startup.sh) using a script (parser.sh). After running parser.sh, startup.sh should be filled with an NFS path like /home/vimal, but I'm getting errors since the path contains '/'. How do I get around this?
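sed accepts almost any character as the s-command delimiter, so the slashes in the path stop being a problem if a different one is used; a sketch (the placeholder token and variable name are assumptions):

Code:
nfs_path=/home/vimal
sed -i "s|__NFS_PATH__|$nfs_path|g" startup.sh    # | instead of /, double quotes so $nfs_path expands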
I am trying to write a script to access sqlplus and use the output to replace a value in another file, but I am having some issues with it. (This script is just a test script and I am just trying to print the updated value.)
#!/bin/bash
# (I am not able to post the sqlplus connection, but it works.)
bb=$a
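A sketch of the overall shape, with the connection details and query left as placeholders since they weren't posted:

Code:
#!/bin/bash
# capture the query result into a variable (connection string and SQL are placeholders)
a=$(sqlplus -s user/password@db <<'EOF'
set heading off feedback off pagesize 0
select some_value from some_table;
EOF
)

bb=$a
echo "Updated value: $bb"
# later, substitute it into the target file, e.g.:
# sed -i "s/OLD_VALUE/$bb/" other_file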
I can't get sed to actually change the file; clearly there's something basic not working. Can anyone point me in the right direction? I know nothing about scripting. Oh yeah, all the directories have spaces, which is why it's so elaborate.
find . -name "*epub" | while read file; do
    unzip -o "$file" content.opf &&
    mv content.opf content.opf.bak &&
    sed 's/<dc:language>UND</dc:language>/<dc:language xsi:type="dcterms:RFC4646">EN</dc:language>/' < content.opf.bak > content.opf &&
    zip "$file" content.opf &&
    rm -f content.*
done
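The likely culprit is the s expression itself: the pattern and replacement both contain unescaped / characters (in </dc:language>), which sed reads as extra delimiters, so the command never parses and the file is never changed. Swapping the delimiter (or escaping those slashes) fixes that one piece; a sketch with everything else left as in the original loop:

Code:
sed 's|<dc:language>UND</dc:language>|<dc:language xsi:type="dcterms:RFC4646">EN</dc:language>|' < content.opf.bak > content.opf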