I am working on a school assignment in LaTeX. The assignment requires the page to be split into three columns: the first column is for a quote, the second for a reaction, and the third for questions. The columns must be able to break across pages in the middle of a row.
First, I tried the longtable environment, but that would not let me break the page in the middle of a row. Then I tried parcolumns, but for some reason the second "row" had a huge space between its first two words. Does anyone know of an environment suitable for this kind of work?
I have a folder with only 24 files named <number>.dat (e.g. 4.dat, 6.dat and so on), where <number> is between 0 and 256. Each file has just two columns of data and nothing else.
I'm trying to combine all the second columns ($2) together. I've been fiddling around with getline and so far have
which takes file 4.dat and adds $2 from 6.dat, but I want a single command that takes $2 from every file and adds them to (for example) 4.dat (having $1 from 4.dat is no problem). A command that takes every file in the folder, grabs $2, and places them all in a common file would be ideal. Frankly, I can work around it even if the command combines both columns from every file.
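One possible approach, assuming every <number>.dat has the same number of rows and the shell glob picks the files up in an acceptable order, is to paste them side by side and then keep the first file's $1 plus every file's $2. A hedged sketch (combined.txt is a placeholder output name):
Code:
# paste joins all files side by side, tab-separated; each file contributes
# two fields, so $2 of the k-th file lands in field 2k of the pasted line
paste *.dat | awk '{
    line = $1                      # $1 from the first file in the glob
    for (i = 2; i <= NF; i += 2)   # every even-numbered field is some file's $2
        line = line OFS $i
    print line
}' > combined.txt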
I'd like to extract a single column from 5 different files and put them together in an output file. I saw a similar question for 2 input files, and the line of code worked very well:
Code: awk 'NR==FNR{a[NR]=$2; next} {print a[FNR], $2}' file1 file2
I added file3, file4 and file5 at the end, but it doesn't work. Does anyone know what I have to do?
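The NR==FNR trick only distinguishes the first file from "everything else", so it doesn't extend to a third file. One way that works for any number of input files is to accumulate column 2 per line number across all files. A hedged sketch (assumes the files have roughly equal line counts):
Code:
awk '{
    a[FNR] = (FNR in a) ? a[FNR] OFS $2 : $2   # append this file's column 2
    if (FNR > max) max = FNR                   # remember the longest file
} END {
    for (i = 1; i <= max; i++) print a[i]
}' file1 file2 file3 file4 file5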
I want to merge columns (selectively) from several files and create a new file with the merged output. I saw some suggestions to use pr/paste to join the columns and then awk to pick out the columns.
Code: pr -m -t -s file1 file2 | gawk '{print $4,$5,$6,$1}'
But I have hundreds of files, and I cannot manually pick out the columns with awk as in the example above.
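If the goal is the same column (say column 2) out of every file, one workaround is to extract the wanted column from each file first and then paste the per-file results together, instead of computing field positions in one huge pr output. A hedged sketch; the glob file* and the column number are placeholders for the real names:
Code:
#!/bin/sh
# pull column 2 out of every input file into a per-file list,
# then join the lists side by side (paste uses glob/lexical order)
mkdir -p cols
for f in file*; do
    awk '{ print $2 }' "$f" > "cols/$f.c2"
done
paste cols/*.c2 > merged.txt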
I've been hitting my head against a wall for a while with this one: as the last part of some data analysis I am performing, I would like to construct a matrix from a series of different files. These files have the format:
I've searched everywhere and I can't come up with a good solution. For each line I need to find the average, minimum, and maximum. I've seen plenty of solutions where the number of columns is fixed; unfortunately for me, these lines can get pretty large. My thought was to read each line individually into an array, loop through the array, and find the average, min, and max that way, but I haven't had much luck. I can read each line using a while loop, but I'm having trouble with the array part, or perhaps that's not the best solution?
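If the shell array isn't working out, awk handles a variable number of columns per line without knowing the count in advance. A minimal sketch, assuming whitespace-separated numeric fields and a placeholder file name data.txt:
Code:
awk 'NF > 0 {
    min = max = $1; sum = 0
    for (i = 1; i <= NF; i++) {        # NF is the column count of this line
        sum += $i
        if ($i < min) min = $i
        if ($i > max) max = $i
    }
    printf "line %d: avg=%g min=%g max=%g\n", NR, sum / NF, min, max
}' data.txt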
Each line of the file I am sorting is in the following format:
<url> <month> <day>
For example:
[URL]
I wrote the following to sort:
Code:
#!/usr/bin/perl
$in = shift;
chomp($in);
[code]....
The script worked fine for my small test files, but failed on my real input file. The input file is 18 MB and contains more than 300,000 lines. The output contains some lines like this:
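For a file that is just <url> <month> <day> per line, one alternative that sidesteps holding 300,000 lines in a Perl data structure is GNU sort's month ordering. A hedged sketch; it assumes the month field is a three-letter English name such as Jan or Feb, and input.txt/sorted.txt are placeholder names:
Code:
# -k2,2M sorts field 2 as a month name, -k3,3n sorts field 3 numerically
sort -k2,2M -k3,3n input.txt > sorted.txt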
I am a semi-noob at this and I have problems getting my Emacs to recognize .tex files as LaTeX, and even with running latex-mode by hand. Usually when you run latex-mode (M-x latex-mode) Emacs should switch to latex-mode, but nothing happens in my case. The menu bar still shows the TeX options, the highlighting remains the same, etc.
I am running emacs 23.1.1 (x86_64-unknown-linux-gnu, GTK+ Version 2.10.4), this is on a university system so I don't know much about it.
> uname -a
Linux karakum 2.6.18-164.11.1.el5 #1 SMP Wed Jan 20 00:57:09 EST 2010 x86_64 x86_64 x86_64 GNU/Linux
I've been looking through different editors for one that has good printing support. Ideally it should be able to print C++ code with line numbers, syntax highlighting, multiple columns per page, customizable fonts and sizes, and a print preview so that I can make sure it looks right before sending it to the printer. It appears that Notepad++ has at least some of these features, but it is not available on Linux. The best I could do so far is to copy/paste the output of 'cat -n foo.cpp' into oowriter and format it into two columns. I don't get syntax highlighting that way, though, and I have to manually replace tabs with a few spaces, as well as strip some excessive leading spaces introduced by the line numbering.
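One command-line option worth trying before switching editors is enscript, which can do most of that from a single command. A hedged sketch; the option spelling should be checked against your enscript version, and foo.cpp/foo.ps are placeholders:
Code:
# pretty-print C++ with line numbers, two columns per page, landscape,
# writing PostScript that can be previewed (e.g. in a PS viewer) before printing
enscript --highlight=cpp --color --line-numbers --columns=2 --landscape \
         --output=foo.ps foo.cpp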
I'm just TeXing a little report and I get the following warning:
Code: LaTeX Warning: Citation 'tzvp' on page 4 undefined on input line 74
I have made a bibliography in the classic way, i.e.
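A common cause of "Citation ... undefined" is that BibTeX has not been (re)run since the .aux file changed. The usual compile sequence looks like this (report is a placeholder for the actual file name):
Code:
pdflatex report.tex   # writes the \citation entries into report.aux
bibtex report         # builds report.bbl from the .aux and the .bib database
pdflatex report.tex   # pulls the bibliography into the document
pdflatex report.tex   # resolves the remaining cross-references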
I had been playing around with various Python scripts in order to add certain columns to Nautilus, in particular bitrate, length, artist, and genre for mp3 files. Well, I got greedy and installed something I shouldn't have, and now I have massive numbers of duplicate columns available, and marking them visible has no effect at all. I have lost all the information that would populate the aforementioned columns.
I went to the Configuration Editor, went to apps > nautilus > list_view > default_visible_columns, and deleted the extraneous columns. I also disabled all the Python scripts I had put in ~/.nautilus/python-extensions. The extra columns never go away, and, if checked, never make the named information visible. Here is a screencap.
I'm trying to install a .sty, a .cls, and a .bst (BibTeX) file for LaTeX. I'm currently using TeX Live on Ubuntu 10.10. I have a general idea of where to install these, but whenever I try to compile the .tex I get an error saying permission is denied for the .cls file, so I'm not sure what's going on.
I've run mktexlsr and everything else, but I still get this problem. If I run sudo pdflatex <filename>, I wind up with a compiled PDF document that I can't access. I'm not sure whether I have to add permissions to the .sty and .cls files after they've been copied.
Additionally, the .sty and .cls files I'm using aren't in the official TeX Live distribution, so I would definitely need to install them myself.
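One way to avoid the permission problem entirely is to install the files into the per-user tree (TEXMFHOME, ~/texmf by default on TeX Live) instead of the system tree, so neither the copy nor the compile needs sudo. A hedged sketch; mystyle.sty, myclass.cls, and mybib.bst are placeholder names:
Code:
# per-user TeX tree; TeX Live searches it without any root access
mkdir -p ~/texmf/tex/latex/mystyle ~/texmf/bibtex/bst/mystyle
cp mystyle.sty myclass.cls ~/texmf/tex/latex/mystyle/
cp mybib.bst ~/texmf/bibtex/bst/mystyle/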
I have been messing with a2ps in order to print a text file that has 130 columns by 80 rows per page. It appears that a2ps automatically scales the number of rows based on the number of columns I try to print. The file prints properly as far as the columns go, but a2ps scales the page to 97 rows, so the 80 rows print and the next 17 rows are blank. It is not scaling the font for 80 rows.
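a2ps normally derives the rows per page from the font it picked for the requested width, so explicitly pinning both dimensions may help. A hedged sketch (option names per a2ps's --chars-per-line / --lines-per-page, worth checking against your version; out.ps and report.txt are placeholders):
Code:
# one virtual page per sheet, exactly 130 characters across and 80 lines down
a2ps -1 --chars-per-line=130 --lines-per-page=80 --output=out.ps report.txt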
How can I add columns to the right side of a GtkTreeView? How can I add a menu to the right side of the window? How can I move the icon in a GnomeMessageBox to the right side of the dialog? And how can I change the arrangement of the buttons in a GnomeMessageBox from right to left, and the position of the icon on those buttons?
I have a file that contains a couple of email addresses and I want to extract the usernames (the letters before the @ symbol). How can I do that using sed or awk?
I know cut would work, but the current environment doesn't allow me to use the cut command. I can use either awk or sed.
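Either tool can do it; two equivalent one-liners, assuming one address per line (emails.txt is a placeholder for the real file):
Code:
# awk: split each line on '@' and keep the first piece
awk -F'@' '{ print $1 }' emails.txt

# sed: delete everything from the '@' to the end of the line
sed 's/@.*//' emails.txt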
I like to view file listings in Nautilus in the list view. By default the "Location" column is not visible and I have to enable it through "View -> Visible Columns" each time. Is there any way to make the change permanent?
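In GNOME 2 Nautilus the default lives in the same GConf key visible in the Configuration Editor (apps > nautilus > list_view > default_visible_columns), so it can also be set from a terminal. A hedged sketch: the column identifiers, and in particular whether Location is internally called "where", are assumptions you should verify against the key's current value first:
Code:
# look at what is there now
gconftool-2 --get /apps/nautilus/list_view/default_visible_columns

# then write the list back with the extra column appended
# ("where" is an assumed identifier for the Location column)
gconftool-2 --type list --list-type string \
    --set /apps/nautilus/list_view/default_visible_columns \
    '[name,size,type,date_modified,where]'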
I need to write a document in Japanese using LaTeX, but I'd like to know the steps to do it from scratch. I'm not so familiar with LaTeX and I really need some advice, especially regarding the packages for the language and so on. What are the necessary programs to get? Packages? Libraries?
I am using the Gedit LaTeX Plugin 0.2 rc3 on ubuntu 10.04 with gedit 2.30.3. The problem is that it will not make pdf files. I do have rubber installed.
Question: how can I update LaTeX packages? Is there an easy way to do it in Linux, as there is in Windows (MiKTeX, for example)? I have installed all the TeX Live packages from the Ubuntu Software Center, but I seem to have outdated, older packages.
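How to update depends on where TeX Live came from; a hedged sketch of both cases, since the Ubuntu-packaged TeX Live is only updated through the distribution, while a vanilla install from tug.org ships its own manager:
Code:
# TeX Live installed from the Ubuntu repositories / Software Center:
# package versions are frozen for the release and only apt updates them
sudo apt-get update && sudo apt-get upgrade

# vanilla TeX Live installed from tug.org: tlmgr works like MiKTeX's updater
tlmgr update --self --all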
Say I have a text file with 10 columns. I need to reorder them based on a list of column numbers.
My problem is this:
If I want to cut out 5 columns (columns 1, 2, 3, 9, 10) in the order 1, 10, 2, 9, 3, I have tried using:
Code: cut -f1,10,2,9,3 my_file.txt > reordered_file.txt
But this just extracts the columns in their original order, as if I had used:
Code: cut -f1,2,3,9,10 my_file.txt > reordered_file.txt
How can I cut these columns and place them into the new file in the order I specify?
While this might seem quite trivial, I will actually need to do this for a file containing ~14,000 columns, of which ~12,000 need to be extracted in a particular order.
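cut always emits fields in ascending order, so the reordering has to happen in something like awk, which copes with thousands of fields. A hedged sketch, assuming tab-separated input (matching cut -f) and that the desired order is supplied as a comma-separated list:
Code:
awk -v cols='1,10,2,9,3' '
BEGIN { FS = OFS = "\t"; n = split(cols, order, ",") }
{
    line = $(order[1])                 # first requested column
    for (i = 2; i <= n; i++)           # then the rest, in the given order
        line = line OFS $(order[i])
    print line
}' my_file.txt > reordered_file.txt
For the real ~12,000-column case, the cols list could be generated by a script or read from a file rather than typed on the command line.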
I am trying to write a script which compares a log file with a reference file. The log file has a table; the LHS of the table contains constant strings, and the RHS values change if there are any changes in configuration. code...
Here I am looking for a script which compares the test.log file (whose RHS data types, digit or string, are known beforehand) with test.Ref, which is the reference file for test.log. It would be really helpful if any of you could give me some idea about writing this script.
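Without seeing the exact table it is hard to be specific, but if each line boils down to a constant key on the left and a value on the right, a starting point could be an awk comparison like the following. A rough sketch only: the "KEY VALUE" layout and the mismatch report are assumptions, not the actual file format:
Code:
# load the reference values keyed by the LHS string, then flag any line in
# the log whose RHS differs from the reference
awk 'NR == FNR { ref[$1] = $2; next }
     ($1 in ref) && $2 != ref[$1] {
         printf "mismatch on %s: expected %s, got %s\n", $1, ref[$1], $2
     }' test.Ref test.log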
I bet this is a Perl one-liner (or a very simple Python script). I have a tab-separated file in which each row looks like:
Unique_Eight_Character_Sequence [3 tabs] data1~moredata1~moredata1 [3 tabs] data2~moredata2~moredata2 ... dataN~
The output file should have each column converted into a row (with the unique character sequence copied in for the first column), and then each "~" replaced by a comma.
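If "each column converted into a row" means one output line per data field, each prefixed with that line's unique key, an awk one-liner could look like this. A hedged sketch: the multi-tab separator is taken from the description, and in.tsv/out.txt are placeholder names:
Code:
awk -F'\t+' -v OFS='\t' '{
    for (i = 2; i <= NF; i++) {      # every field after the key
        f = $i
        gsub(/~/, ",", f)            # turn the ~ separators into commas
        print $1, f                  # key, then the comma-joined data
    }
}' in.tsv > out.txt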
Is there any way to filter the output of a command based on the values in the output columns? For example, I execute du -h on a directory with many files. Now I want to filter the output based on the size unit (i.e. M, G, or K). The filtered output should contain only the M (megabyte) or G (gigabyte) entries, while keeping all columns.
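awk can do the filtering on the size column's suffix; a minimal sketch, assuming du -h puts the size in column 1 and that only M and G entries are wanted:
Code:
# keep lines whose first field ends in M or G, printing all columns unchanged
du -h | awk '$1 ~ /[MG]$/'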