OpenSUSE :: Partial Loading Of Data Followed By Broken Up Multiple Lines Due To Firefox Or Not
Apr 20, 2011
Viewing any source on the Web results in partial loading of data followed by broken-up, repeated lines. It is occurring now with this post: paging down I get three lines of "submit new thread" until at the bottom there are multiple empty lines (no characters) after "forum rules". I put it to my network provider, who cannot come up with any idea of what may cause it. It cannot be the hardware, as the same condition exists on two PCs. Both are on 11.2.
I have been experiencing a problem where the screen loads and, after the first few lines, breaks up into multiple repetitions of lines. Reloading helps but has to be repeated when paging down. Mail is no problem; it is supplied by my network provider. The OS is openSUSE 11.2, which I update when advised. Below is a sample from the error console:
I need only one of the lines having the same ending. It doesn't matter which lines are discarded, as long as one with each ending is retained. Of course I have to retain any unique lines too.
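A one-liner along these lines might do it, assuming "same ending" means the same last whitespace-separated field and that keeping the first line seen for each ending is acceptable:
Code:
awk '!seen[$NF]++' inputfile
awk counts each last field ($NF) and prints a line only the first time its ending appears, so unique lines are kept automatically.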
I'm familiar with load balancing, but is it possible to actually bond multiple DSL lines together? I hear of ways to bond using MLPPP, but that requires support from an ISP. Is there a way to bond without support from my ISP, or to use, say, a cable modem and a DSL line together for faster speed / diversity?
I want to be able to count (from the command line) the lines in a bunch of files of a specific type in a folder and all its sub-folders. How would I do this?
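One way, assuming the file type can be matched by name (*.txt below is just a stand-in):
Code:
find . -name '*.txt' -exec cat {} + | wc -l
That prints one grand total; swapping cat for wc -l ( find . -name '*.txt' -exec wc -l {} + ) gives per-file counts with subtotals instead.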
I often use the rpl command to make changes to multiple html files at once. For example:
Code:
rpl -R '<br />' '<br /><br />' mydirectory
However, I haven't been able to figure out how to change multiple lines. For example, let's say I want to change all occurrences of:
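As far as I know rpl only matches within a single line, so a pattern that spans lines never fires. A slurp-mode perl one-liner is one workaround; OLD1/OLD2 and NEW1/NEW2 are placeholders for whatever the old and new lines actually are:
Code:
find mydirectory -name '*.html' -exec perl -0777 -pi -e 's/OLD1\nOLD2/NEW1\nNEW2/g' {} \;
With -0777 perl reads each file as one string, so \n inside the pattern can cross line boundaries.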
How do I search for multiple words across multiple lines, inside a directory including sub-directories? Please give an easy example. I want to find the files (in the /xx folder and all subfolders) that include header.h and use the x() function. I tried
Code:
$ grep -r "header.h" | grep -r "x(" /Folder/subfolder/ > search.log
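The first grep in that pipeline is given no files, so it reads stdin rather than searching the tree. Searching once per word and intersecting the file lists should work (paths as in the post):
Code:
grep -rl "header.h" /Folder/subfolder/ | xargs grep -l "x(" > search.log
-l prints only file names, so the second grep is run against just the files that already contain header.h.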
Firefox in openSUSE has a customized openSUSE toolbar menu. The hyperlink to openSUSE documentation is broken (HTTP 404). Please remove it; it is sufficient to visit the home page and to access the documentation from there.
Firefox is having trouble displaying characters. At times a web page will display only part of a certain character, and it will affect every instance of that character on the screen. In other words, sometimes when I load a page the bottom half of all the 'i's or 'r's is missing. The problem persists when I refresh the page, then resolves on its own. It happens on most web pages. Also, at times a white or black stripe will run through some characters, usually a heading title on the page. I'm running Firefox 3.6.10 on Ubuntu 10.10. I did a complete reinstall of Ubuntu 10.10 when it came out (i.e. I reformatted and installed 10.10) and have been having this problem since then. I never had the problem with 10.04, which I had been running since May. Firefox and Ubuntu are both currently updated.
Please note the attachments, where in one there is a problem with the letter 'i', and in the other, on the Heading "Search New Posts" there are white lines running through the words.
On my computer, I'm using 11.3, KDE 4.4.4 and Firefox 3.6.8. Firefox is so s...l...o...w..... It's slow loading and it's slow browsing. Konqueror takes 1, maybe 2 seconds to load, and another second to load the home page. Firefox takes a good 30 seconds to load. The only message I get when loading from a terminal is about not using a shared database. I don't hear the disk churning like it would with a fragmented disk.
I can't believe everyone is having this response; the hue and cry would be enormous! Anything to try? Should I try loading another browser? I tried Opera on 11.2 and didn't really like it; I had problems with a lot of content. What about SeaMonkey? (Gad, I HATE words with monkey in them!) Konqueror won't show a lot of videos and I have problems using the back and forward buttons. I'll wait for fixes, but in the meantime.
Bouncing icon and spinning icon in the toolbar for 10-20 seconds, then nothing. Is there a special log file to be found for Mozilla errors? (I couldn't find anything useful in or near /var/log/messages or warnings.) All dependencies of the associated programmes seem to be fulfilled, but obviously something isn't right. Does anyone else have the same problem?
I use jpilot on openSUSE 11.3 64-bit to sync PIM data with my Palm Treo 680 via Bluetooth. This worked fine until today. Now I get the following error message when I try to sync:
Syncing on device bt:
Press the HotSync button now
dlp_ReadSysInfo error
Exiting with status SYNC_ERROR_PI_CONNECT
Finished.
The last successful sync was on the 20th of October and today is the 24th of October. I did not change any settings in jpilot or on my Palm device, so I guess there must have been an update of openSUSE which causes this error. But I do not know how to look up the updates applied during this period, or how to undo them. Was there an update between the 20th and the 24th of October which might affect either jpilot or Bluetooth functionality?
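On openSUSE, package operations are logged to /var/log/zypp/history, so the updates applied in that window can be listed with something like this (assuming the year in question is 2010):
Code:
grep '^2010-10-2[0-4]' /var/log/zypp/history
Individual packages can then be rolled back via YaST's version tab or zypper, provided the older versions are still in the repositories.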
As awk programmers may know, we can print a range of lines with awk, from an initial pattern until a final pattern, as follows:
Code:
awk '/Initial_String/,/Final_String/' inputfile
Well, I have this inputfile:
Code:
[code]...
Once I have those elements in that way within an array (a[]), I want to be able to manipulate the array (a[]) and copy its elements to another array (b[]) in a different order (all lines joined in a single line, separated with commas), as follows:
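A hedged sketch of the join step, with Initial_String/Final_String standing in for the real patterns: collect the range into a[], then glue the elements together in END:
Code:
awk '/Initial_String/,/Final_String/ {a[++n]=$0} END {s=a[1]; for (i=2; i<=n; i++) s = s "," a[i]; print s}' inputfile
Reordering into b[] is then just a matter of which indices of a[] get copied, and in what sequence, before the join.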
In GUI-style editors you can generally select multiple lines and press tab a few times to move all the lines across (or shift-tab to go back). I have no idea how to do this in vim. I googled around and couldn't find any straight answer, so I came here.
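In vim this is linewise visual mode: press V, extend the selection with j or k, then > to shift right or < to shift back; each press shifts by 'shiftwidth', and . repeats the last shift. An ex range does the same thing, e.g.:
Code:
:10,20>
which shifts lines 10-20 right one level (and :10,20< shifts them back).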
I have a txt file with a couple of comment lines:
Number of title = !num!
#line1
#line2
#line3
I wrote a script with sed to replace !num! in this file, which is very straightforward. However, based on the !num! value, I want to remove that number of "#" lines. Is there an easy way to do that with sed? Otherwise I will have to write a script to loop through the file.
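sed can do the substitution, but counting matches is awkward in sed, so awk may be simpler for both steps. A sketch, assuming the goal is to drop the first !num! comment lines (n=3 is a placeholder for the real value):
Code:
awk -v n=3 '/^#/ && c<n {c++; next} {gsub(/!num!/, n)} 1' file.txt
The first clause swallows the first n lines starting with #; gsub replaces !num! everywhere else, and the trailing 1 prints every surviving line.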
I've seen a few tutorials that have commands and parameters on multiple lines, like the one below:
Code:
chkconfig --levels 235 mysqld on
/etc/init.d/mysqld start
I can copy and paste this in Putty, but what if I want to manually type it? If I press return, the first line gets processed, so how do I insert a new line?
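Those are simply two separate commands, so pressing return after each one is exactly right: the first enables the service at runlevels 2, 3 and 5, the second starts it. To continue a single long command across lines instead, end each line with a backslash:
Code:
chkconfig --levels 235 \
mysqld on
The shell then shows a continuation prompt (>) and only executes once a line ends without the backslash.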
I have tried to boot Ubuntu 10.04 several times and every time the screen is unreadable; the desktop is broken up into about 5 vertical sections and that's as far as I can get. Help. I have an HP Pavilion dv6000 laptop. Other versions of Ubuntu work okay.
I have a list of words that I want to grep for in many files to see which ones have them and which ones don't. In the text file I have all the words listed line by line, e.g. list.txt:
check
try this
word1
word2
open space
list
..
I want to grep each line one by one, like:
Code:
grep "check" *.log
grep "try this" *.log
grep "word1" *.log
.. etc
How can I do this?
I need to make a script to append a line to the bottom of multiple files (only certain files, but hundreds spread over directories). Doing a find-and-replace inside multiple files is easy; I use the following:
Code:
find /base/dir -name "*.txt" -exec perl -pi -w -e 's/FIND/REPLACE/g;' {} \;
So I tried the following:
Code:
find /base/dir -name "*.txt" -exec echo "Append this" >> {} \;
However this just appends all the text into a file called "{}", whereas {} should be replaced with each file that's found.
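The >> is interpreted by the current shell before find ever runs, so the redirection happens once, into a literal file named {}. Running the echo in a per-file sub-shell avoids that:
Code:
find /base/dir -name "*.txt" -exec sh -c 'echo "Append this" >> "$1"' _ {} \;
find substitutes each path for {}, sh -c receives it as $1, and the redirection is then evaluated freshly for every file.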
I've been trying to sort this out for several hours and I'm totally lost. I've been searching around but haven't found the solution to my problem. I have a directory with 100 files. I need to copy 10 lines of each file (let's say from line 45 to 55) into one single file. So I guess I could use sed's w command, but I didn't manage to write the right script. I also tried using a loop to create 100 different files (each one with the 10 lines) to concatenate them later on, but I only got 1 file, not 100.
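A loop that prints just the wanted range from each file, with a single redirection outside the loop, should give the one combined file (directory and output names are placeholders):
Code:
for f in /path/to/dir/*; do sed -n '45,55p' "$f"; done > combined.txt
Redirecting with > inside the loop would truncate the output on every iteration, which is one way to end up with only one file's worth of lines.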
I have a very, very large log file (360MB) that I'm trying to thin out. As it turns out the majority of this file has entries that aren't necessary so I'm attempting to build a command that will strip these out. The following command works to display only the data that I do not want:
This displays exactly the data I want to delete from the file by displaying the expression and six lines above it and five lines below it. However I'm at a loss as to how to remove this data from the output and display everything else. I looked into the -v option with grep redirecting the output to a new file:
However it doesn't work, the new file is the same size as the old one. What am I doing wrong? Is there a better method of doing this? I'm a bit out of my element since the method I'd normally use can't handle files of this size.
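grep -v inverts only the matching lines themselves, not their -A/-B context, so the padding lines survive. A two-pass awk can mark the whole window around each match and drop it; PATTERN stands in for the real expression:
Code:
awk 'NR==FNR {if (/PATTERN/) for (i=FNR-6; i<=FNR+5; i++) skip[i]; next} !(FNR in skip)' big.log big.log > thinned.log
The first pass over the file records every line number from six above to five below each match; the second pass prints only unmarked lines. Two passes over 360MB is slow but memory-safe.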
The order of these lines is random, so I cannot just delete line #19, for example. And you can see that the top four lines I want to delete are pairs, so there might be some clever way to detect them: if a line has both "1.9" and "1.11", then delete the line. I am new to the perl language. The following is the code I have now. I think I just need to write some code inside the while loop checking whether I want to delete the line $dotline before I write it to a NEW file.
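Inside that while loop a next for lines containing both version strings should do it; a minimal sketch, assuming the filehandles are already open as in the existing code and that "1.9"/"1.11" are the literal markers:
Code:
while (my $dotline = <IN>) {
    # skip the paired lines that mention both versions
    next if $dotline =~ /1\.9/ && $dotline =~ /1\.11/;
    print OUT $dotline;
}
The dots are escaped so 1.9 doesn't also match digit runs like 109.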
I'm trying to split a text file into various parts. Everything in between "123" and "break" (including line breaks) goes into the split file.
e.g. using this text file:
This should split into 4 files. However I'm only getting 2 files: one for the line "123break" and one for "123 blah break". The two occurrences that contain line breaks are being ignored. The .* part of my match should capture line breaks, seeing that I'm using the /s modifier, shouldn't it? Even when I use the match /(123 break)/gs it still doesn't capture the first occurrence. I'm using Perl v5.12.3 (from ActiveState) on Windows XP. The text file is also in Windows format.
Code listed below.
The above code generates two files Output_1.txt and Output_2.txt which contain "123break" and "123 blah break" respectively. I want it to generate four files.
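If the file is read line by line, each match attempt only ever sees one line, so .* can never span a newline no matter what /s says; /s only changes what . may match within the string at hand. Slurping the whole file first should yield all four matches. A sketch (input.txt is a placeholder name):
Code:
open my $fh, '<', 'input.txt' or die $!;
my $text = do { local $/; <$fh> };   # slurp the whole file, newlines included
my $n = 0;
while ($text =~ /(123.*?break)/gs) {
    open my $out, '>', 'Output_' . ++$n . '.txt' or die $!;
    print $out $1;                   # one 123...break block per file
    close $out;
}
The non-greedy .*? keeps each match from swallowing the next 123...break block.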
I have a dataset of around 1000 lines. Out of these 1000 lines I need to pick 160 lines at random and write them to a file. This is needed to eliminate data bias when I run the data through a reanalysis program. I am thinking I need to use rand or srand, but I am having difficulty writing this in perl. I have to write it in perl because the rest of my scripts for this project are in perl, so consistency is important. The data consists of only one column (YYYYMMDDHHHH).
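List::Util ships with perl, and its shuffle makes this short: taking the first 160 elements of a shuffled copy gives an unbiased sample without replacement (file names below are made up):
Code:
use List::Util 'shuffle';
open my $in, '<', 'dates.txt' or die $!;    # one YYYYMMDDHHHH value per line
my @lines = <$in>;
my @pick = (shuffle @lines)[0 .. 159];      # 160 random lines, no repeats
open my $out, '>', 'sample.txt' or die $!;
print $out @pick;
close $out;
An explicit srand call is only needed on very old perls; modern perl seeds the generator automatically.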