I'm trying to figure this error message out. This little script is supposed to tweet my laptop's IP address as a cron job; I'm hoping it would keep doing so even if the laptop were stolen. It's a variant of a script that works, but this one doesn't, and I can't see any difference between the curl lines of the two.
Code:
#!/bin/bash
user="xxxxxx@xxxxxxxxx"
pass="xxxxxxxxxxx"
wget [URL]
TWEET=`sed -n 1p index.html`
curl --basic --user "$user:$pass" --data-ascii "status=$TWEET" "[URL]"
rm -f index.html
exit
This is the error message.
Code:
curl: (6) Could not resolve host: status=66.183.103.67; Cannot allocate memory
{"request":"/statuses/update.json","error":"Client must provide a 'status' parameter with a value."}
Why does curl think the status is the URL?
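A hedged debugging sketch, not the original script: strip any stray CR/LF from the fetched line, inspect it for hidden characters, and let curl do the URL-encoding itself (the [URL] placeholder is kept from the post above).
Code:
TWEET=$(sed -n 1p index.html | tr -d '\r\n')    # drop any carriage returns the page may carry
printf '%s\n' "status=$TWEET" | cat -A          # reveal non-printing characters, if any
curl --basic --user "$user:$pass" --data-urlencode "status=$TWEET" "[URL]"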
I'm trying to output a list of running processes via a shell script. At the moment I have this, which writes the processes to a text file called out.
echo $(ps aux) >>out
The problem, though, is that the processes all come out as one big block of text, which makes it hard to read. Does anyone know how to write the output to the text file so that it prints one process per line? I know it's probably simple, but I'm very new to Linux.
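A minimal sketch of the likely fix: the unquoted command substitution is what collapses the newlines, so either drop echo entirely or quote the substitution.
Code:
ps aux >> out            # simplest: let ps write straight to the file
echo "$(ps aux)" >> out  # or keep the substitution, but quote it to preserve line breaks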
I have to delete a certain line of text from a text file via Ubuntu shell scripting. I have done some research, and it seems most people advocate using sed with the /d option. However, sed does not edit the text file by default, so most of the solutions I found involve a temporary variable or file and then overwriting the old file with the new one. Is there any way I can bypass the use of temporary storage? I am hoping there is some combination of commands that edits the file directly.
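A hedged sketch, assuming GNU sed (the version shipped with Ubuntu), whose -i option edits the file in place; note it still writes a hidden temporary file internally, it just manages it for you (file name and pattern are placeholders).
Code:
sed -i '/pattern to delete/d' myfile.txt   # delete every line matching the pattern
sed -i '42d' myfile.txt                    # or delete a specific line number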
I have two txt files containing x and y coordinates: xcoord.txt & ycoord.txt. I need to open them, read them line by line to get each pair of coordinates, and then each time update the Xs and Ys parameters inside another file called "dc.in" with the grabbed values.
Finally, each time I need to run two executables (dc_2002 and st_vac) and produce the corresponding output for each Xs and Ys (dc.in is an input file for these executables).
I have written the following code but it does not work:
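(The poster's script is not reproduced here.) A rough sketch of one way such a loop might look, assuming one value per line in each coordinate file and that dc.in contains lines beginning with "Xs =" and "Ys =" (the parameter layout and the output file name are assumptions).
Code:
#!/bin/bash
# pair the two coordinate files up line by line
paste xcoord.txt ycoord.txt | while read -r x y; do
    sed -i "s/^Xs *=.*/Xs = $x/" dc.in     # assumes GNU sed and this parameter layout
    sed -i "s/^Ys *=.*/Ys = $y/" dc.in
    ./dc_2002
    ./st_vac
    cp output.dat "output_${x}_${y}.dat"   # output.dat is a placeholder for whatever the programs write
done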
Was wondering if any perl gurus could help me with a quick log file adjustment. I have a text file that looks like so (tabs and newlines are revealed so you can see what separates the data):
There are maybe 100 lines of text in this file at any given time. I need to delete all duplicate lines, looking only at the first bit of text prior to the first tab. It doesn't matter which one gets deleted, as long as no two lines begin with that same text before the first tab. So in this example, either the first line "1234" or the last line "1234" would need to be deleted. I already have code in my script that opens the files - I just need the code to read the text into an array, the part that would find matches based on the above criteria, and make the deletions.
If it would be easier, I can even do a system call and use SED (v4.1.5) and/or AWK (3.1.5) instead.
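Since a system call to awk is acceptable, a hedged one-liner sketch: it keeps the first line seen for each key, where the key is everything before the first tab (file names are placeholders).
Code:
awk -F'\t' '!seen[$1]++' logfile.txt > logfile.dedup.txt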
Is there a sed command to add text before a line number in a text file? I have a text file with 500 lines, and I want to add 3 more lines of text after line 300, or before line 302; either is fine.
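A hedged sketch, assuming GNU sed for the in-place -i flag; a\ appends after the addressed line, and r splices in the contents of another file.
Code:
sed -i '300a\
first new line\
second new line\
third new line' file.txt
# or keep the three new lines in their own file and read it in after line 300:
sed -i '300r newlines.txt' file.txt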
bash 3.1.17(2). I'm trying to write a shell script which must operate on each line of an ASCII text file, so all the code must be inside a loop, and the first thing inside the loop should be to read the next line from the file. I know about the bash read command, but it reads from stdin. Is there any way to make read take its input from a file?
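A common pattern, sketched here with placeholder names: redirect the file into the loop so that read consumes it line by line.
Code:
while IFS= read -r line; do
    printf 'got: %s\n' "$line"    # replace with the real per-line work
done < input.txt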
I need to be able to convert HTML email messages saved as text files (.eml or .msg) to PDF documents, one PDF per email, retaining formatting and images.
Are there any Linux tools that will allow me to do this from the command line (so it can be scripted)?
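One hedged possibility, assuming the wkhtmltopdf package is available and that the HTML body has already been pulled out of each .eml/.msg with a MIME extractor (munpack or ripmime, for example).
Code:
# convert each extracted HTML body into its own PDF
for f in *.html; do
    wkhtmltopdf "$f" "${f%.html}.pdf"
done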
Say I have a text file like: Code: 1 3 4. How would I use ksh to put the number '2' into the second line of that file? Okay, it's not bash, it's ksh, because this computer runs OpenBSD.
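A hedged sketch that works from ksh on OpenBSD by driving ed, which is in the base system (numbers.txt is a placeholder name); the script appends a line containing 2 after line 1, writes the file, and quits.
Code:
printf '1a\n2\n.\nw\nq\n' | ed -s numbers.txt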
I have a text file called namelist.wps. In this file there is a line that reads:
Code: start_date = '2010-12-26_12:00:00', '2010-12-26_12:00:00', I have to automatically update the year, month, and day of month for this line without changing the rest of the file. Here is the script that I have:
I just modified the grub file in 10.10 in order to see what the text-line boot is like. Now I want to go back, but when I try to gedit /etc/default/grub it gives an error that it couldn't display. How can I edit the file to go back to GNOME? I am on a MacBook Pro 6,2 triple-booting Mac OS 10.6, Win7 and Ubuntu 10.10.
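A hedged sketch of the usual Ubuntu approach: edit the file with root privileges and then regenerate the grub configuration.
Code:
sudo nano /etc/default/grub     # or: gksudo gedit /etc/default/grub
sudo update-grub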
Using the latest version of Ubuntu desktop on an eMachines T5062, if it matters. I have a text file of keywords, one to three words per line, for about 5000 lines. How would I go about adding a word to each line, aside from typing it in or copying and pasting? If it can't be done with gedit, I am all for using another program.
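A hedged sketch with GNU sed, editing the file in place (newword and keywords.txt are placeholders).
Code:
sed -i 's/$/ newword/' keywords.txt   # append a word to the end of every line
sed -i 's/^/newword /' keywords.txt   # or prepend it instead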
I have a text file called namelist.wps. In this file there is a line that reads: Code: start_date = '2010-12-26_12:00:00', '2010-12-26_12:00:00', I have to automatically update the year, month, and day of month. I set values for the year, month, and day of month using the following code in a c-shell script:
Code:
set y1 = `date +%Y`
set m1 = `date +%m`
set d1 = `date +%d`
After I do this, how do I update the year, month, and day of month without changing any of the other lines in the namelist.wps file?
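A hedged sketch of one way to finish the job from the same c-shell script: let sed rewrite just the start_date line using the freshly set variables (this assumes GNU sed for -i, and that the line always begins with start_date and both dates on it should get the new day).
Code:
sed -i "s/^ *start_date.*/ start_date = '${y1}-${m1}-${d1}_12:00:00', '${y1}-${m1}-${d1}_12:00:00',/" namelist.wps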
I am thinking of appending something to each line in a text file with Java. I would prefer not to write a new file with the appended content taken from the old one. That 'something' would probably be the time stamp of when the file was created (which is the same for each line). I am not sure whether Java provides an easy way to do this or not.
I have large text files with space-delimited strings (2-5 per line). The strings can contain "'" or "-". I'd like to replace, say, the second space with a pipe. What's the best way to go? Using sed I was thinking of this:
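(The attempted command is not reproduced here.) A hedged sketch using sed's numbered-occurrence flag, where the trailing 2 means "replace only the second match on each line" (file names are placeholders).
Code:
sed 's/ /|/2' input.txt > output.txt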
The problem that I am having now is that I cannot see the text when the menus come up or when opening a file from the CD or the curriculum online. I have cairo-dock running and a conky running. Could this be the cause of my problem? I was going to uninstall and reinstall Packet Tracer, but I am unsure how to uninstall it.
I have a utility that works with files. The utility crashes after about 120 files. The input to the utility is a file containing a file list. I want to cut the file with the file names in it into separate files containing about one hundred names each. My thought was to determine the number of lines divided by 100 and then use head and delete to create temporary files, so I can run the utility multiple times and prevent the crash. When I tried to create a variable using the wc -l command, the output gives me the total number of lines, but it also includes the filename of the input file (873 Filename.txt). I cannot figure out how to remove the Filename.txt from the variable.
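Two hedged options: feed wc from stdin so it prints only the count, or skip the arithmetic entirely and let split do the chunking (Filename.txt stands in for the real file list).
Code:
lines=$(wc -l < Filename.txt)        # reading from stdin leaves the filename out of the output
split -l 100 Filename.txt chunk_     # writes chunk_aa, chunk_ab, ... each holding 100 lines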
I would like to know if it would be easy to get a program which, given a plain text file as input, discards the line separators and writes the rest as its output. And let's consider the case of a file that has been mangled to the point of having CR,LF (carriage return, line feed) in some places, only CR in others, and only LF in still other places. That is, the three possible combinations used by different systems as a newline (the first is, or was, used by MS-DOS, the third by Unix, and I know of systems where CR alone is the line terminator).
After all, all the program has to do is, every time it finds a char belonging to the set {CR, LF}, cast it away.
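A hedged one-liner sketch that does exactly that: tr deletes every CR and LF it sees, whatever the mix (file names are placeholders).
Code:
tr -d '\r\n' < mangled.txt > joined.txt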
I don't understand why this is so difficult. In the old days there was lpforms, which allowed some formatting; CUPS did not see fit to implement this in its lp package. In the old days lpr allowed you to select a font on the command line with -1=fontname; CUPS did not see fit to implement this in its lpr package either. In the old days printers had fonts installed on them that you could access; modern printers don't seem to have this. So now I still need to be able to select a font when I print certain text files from the command line, but it seems this is impossible. I've been working with instances and lpoptions, which lets me do a lot of other things I need, like orientation and margins, and even set the font size, but I still cannot choose a font other than the default.
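A hedged workaround sketch, assuming the enscript package is installed: render the text to PostScript yourself with the font you want, then hand the result to CUPS, so the printer's own font support no longer matters (the font name and file name are placeholders).
Code:
enscript -B -f Courier10 -p - myfile.txt | lp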
I need to extract the 7th line below C-FM and every third line after that. How can I do it? I've tried using grep but I get all the lines in between. An example of the text I am working with is shown below.
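(The sample text is not reproduced here.) A hedged awk sketch, assuming "the 7th line below C-FM" means seven lines after the line that matches C-FM (the file name is a placeholder).
Code:
awk '/C-FM/ { start = NR + 7 }
     start && NR >= start && (NR - start) % 3 == 0' datafile.txt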
I put a text file on my desktop and added a couple lines of text with gedit. File type shows text/plain. Double-click opens the file in gedit which is what I want. I'm using the file to temporarily hold some snips of code that I copy from file to file, but when I copy some html into the file and save it, now file properties show it's text/html and a double-click opens the file in firefox, which isn't what I want. Is there some way to keep the file type from changing itself?
I have a file, cpq_cciss-2.6.20-34.rhel4.i686.dd, which is designed to build a floppy disk; the floppy is used to hold a disk driver that is not on the RedHat CD-ROM. But this .dd is not complete: some files, like /drivers/pci.ids, are missing. My idea is to extract all the files from the .dd file, put in the missing files, and then re-create a new .dd file. But how can I extract all the files from the initial .dd file and then recreate a new one?
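A hedged sketch: since a .dd image is usually just a raw filesystem, you may not need to extract and rebuild at all; copy the image, loop-mount the copy read-write, and drop the missing files straight in (the mount point and source paths are placeholders).
Code:
cp cpq_cciss-2.6.20-34.rhel4.i686.dd new.dd
mkdir -p /mnt/dd
mount -o loop new.dd /mnt/dd        # may need root; assumes the image holds a mountable filesystem
cp pci.ids /mnt/dd/drivers/         # add whatever files were missing
umount /mnt/dd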