General :: Curl -K Command Outputs Weird Symbols Instead Of Downloading URL From File?
May 16, 2011
I'm on Ubuntu 11.04. I have read around about how to use curl to download a list of URLs from a text file, and everyone says to use
Code:
curl -K URLlist.txt
This is what the curl man page says as well. However, for even a simple file with one URL, this command outputs a bunch of weird symbols for me instead of downloading the file. For example, I have a text file "test.txt" with one line in the following format:
Code:
url = "http://www.example.com/image.jpg"
I then run curl -K test.txt, expecting it to download this file.
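For what it's worth, -K reads a curl config file, and the "weird symbols" are usually the downloaded bytes themselves: with no output option in the config, curl writes the body to stdout, which looks like garbage for a binary image. A minimal sketch of a config file that saves the file instead (the output name here is just an example):
Code:
# test.txt -- read with: curl -K test.txt
url = "http://www.example.com/image.jpg"
# without this line the body goes to stdout; "remote-name" (the -O
# equivalent) also works if the server-side name should be kept
output = "image.jpg"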
I need to download a file from a website which has a URL formatted like:
[URL]
This redirects to a .zip file which has to be saved. There is also a need to authenticate based on username and password.
I tried to use wget, curl and lynx with no luck.
UPDATE:
wget doesn't work with the redirection; it simply downloads the webpage instead of the zip file. curl gives the error "Maximum redirection exceeded > 50", and lynx gives the same error.
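A redirect loop that deep usually means the server keeps bouncing the client back because a cookie or the authentication never sticks. A sketch, assuming HTTP basic auth and placeholder names (a form-based login would need a POST first):
Code:
# -L follows redirects, -u supplies the credentials, and the cookie jar
# keeps any session cookie alive across the redirect chain
curl -L -c cookies.txt -b cookies.txt -u username:password \
     -o file.zip "http://example.com/download"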
Is there any curl API to configure only the required protocols? If I have a proper OpenSSL installed, the installed curl will have all the protocols (HTTP, HTTPS, FTP, FILE, etc.) supported by default. Is there any way to allow or disallow only some of the protocols at runtime? Say I need to support only HTTPS and FILE, and I don't want to allow HTTP. Is there any way to do this?
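There is: libcurl has exposed exactly this since 7.19.4 through CURLOPT_PROTOCOLS (plus CURLOPT_REDIR_PROTOCOLS for what redirects may switch to), and the command-line tool exposes the same switch as --proto. A sketch:
Code:
# start from nothing, then allow only https and file; an http:// URL
# now fails instead of being fetched
curl --proto '-all,https,file' https://example.com/
# the libcurl equivalent in C:
#   curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
#                    CURLPROTO_HTTPS | CURLPROTO_FILE);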
How can I take the values from multiple grep command outputs and append them as a single row of a CSV file? I work in a Linux environment, and the values from the grep output will be numeric.
Output should look like:
1,3,4,5,7,0,5
Each of these values will be obtained from a grep command piped into wc -l. Is it possible to update a single row of a CSV file? If so, please help me with the command to redirect the output into the CSV file.
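A sketch with hypothetical patterns and file names: capture each count in a variable, then write them all as one comma-joined row:
Code:
#!/bin/sh
# each field is a line count from one grep (patterns are placeholders)
c1=$(grep 'ERROR' app.log | wc -l)
c2=$(grep 'WARN'  app.log | wc -l)
c3=$(grep 'retry' app.log | wc -l)
# one redirect appends the whole row to the CSV
echo "$c1,$c2,$c3" >> results.csv
grep -c PATTERN FILE counts matching lines directly, which saves the pipe into wc -l.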
I am trying to delete the symbols "[" and "]" from a file, but it says the pattern was not found. I tried :%s/[//g while editing the file, which did not work; I also tried sed -e '/[/d' and sed '/]/d', still no joy.
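Both attempts trip over the same thing: [ and ] are regex metacharacters, and the sed /d commands would delete whole lines rather than the characters anyway. Escaping fixes it in both tools:
Code:
# in vim:  :%s/\[//g  and  :%s/\]//g
# with sed, substitute instead of deleting lines:
sed -i 's/\[//g; s/\]//g' file
# or drop both characters in one pass:
tr -d '[]' < file > file.cleaned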
By issuing the 'nm' command on a shared library (which internally uses one static library), the functions exposed by the static library are also listed, which allows callers to use internal functions; that is of course not intended. I have one static library containing the functions A(), B() and C(). I am creating one shared library with a function XYS() that uses A() and B() from the static library. When I run 'nm' on the shared library, all the static library functions are listed.
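A common remedy with GNU ld, sketched here: a linker version script keeps only the public entry point global, so the archive's symbols stop appearing in the dynamic table (file and library names below are hypothetical):
Code:
# exports.map contains:   { global: XYS; local: *; };
gcc -shared -o libshared.so shared.o -L. -lstatic \
    -Wl,--version-script=exports.map
# or hide everything pulled in from static archives:
gcc -shared -o libshared.so shared.o -L. -lstatic -Wl,--exclude-libs,ALL
# nm -D then shows only XYS; plain nm still reads .symtab, so run
# "strip --strip-unneeded libshared.so" if that output matters too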
I have two SUSE Linux servers. Recently we got the license key. On one server I was able to register successfully; the other server throws an error. I am using YaST -> Software -> Novell Customer Center Configuration. After selecting "Configure Now", I get an error as follows: Execute curl command failed with 7: curl: (7) couldn't connect to host.
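Exit code 7 is curl's "failed to connect to host", which usually points at a firewall or a missing proxy setting on that one machine rather than at the license key. A quick way to compare the two servers (the host below is a stand-in for whatever registration URL the error names):
Code:
curl -v https://secure-www.novell.com/
# if the working server reaches it through a proxy, mirror its setting:
export https_proxy=http://proxy.example.com:3128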
Is there a way to set up a custom keyboard layout? (ex: set q to the f key, etc.) I have looked around, but have been unable to find one.
If there isn't one, then a (basically) equivalent solution for me would be to map some of the symbols I need (ex: Δx, Σ, etc.) to ctrl-/, ctrl-., etc. through keyboard shortcuts. The problem I run into here is that I do not know of any commands that paste a specific symbol into the focused text input area. Does anyone know of one?
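One traditional route for both halves of this is xmodmap. A sketch; keycodes can be confirmed with xev, and keysym names come from X11's keysymdef.h:
Code:
# 24 and 41 are the usual keycodes for the Q and F keys
xmodmap -e 'keycode 41 = q Q'    # physical F key now types q
xmodmap -e 'keycode 24 = f F'    # physical Q key now types f
# third/fourth entries are the AltGr and AltGr+Shift levels:
# Greek sigma on AltGr+s, capital sigma on AltGr+Shift+s
xmodmap -e 'keysym s = s S Greek_sigma Greek_SIGMA'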
I am, as the forum title suggests, new to Linux and to programming, and I am having trouble figuring out how to do this. I have a very large XML file with a lot of information in it. I'm trying to get a single tag out of the file; each of these tags contains a single web link, and I want to download the file at every one of those links. I really don't know how to do this. My thought, though it's probably not the most efficient or correct way, was to use VIM to search the document, somehow extract every instance of this one particular tag, and then use wget on the links.
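No editor needed; a sketch, assuming the tag looks like <link>http://...</link> (swap in the real tag name): grep -o isolates each tag, sed strips the markup, and wget -i fetches every remaining line:
Code:
grep -o '<link>[^<]*</link>' big.xml | sed 's/<[^>]*>//g' > urls.txt
wget -i urls.txt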
I am trying to connect to the web interface found at [URL] using curl. This first requires login information to be entered at [URL], but I am having an issue with the login process. I am trying to submit the following form via POST:
Code:
<form action="j_security_check" method="post" id="login_form" name="login_form">
  <center>
    <table style="background: #cac1cf;FONT-SIZE: 12px;">
      <tr>
        <td align="center" colspan="2">Please enter your username and password:</td>
      </tr>
      <tr>
        <td align="right">Username</td>
        <td><input name="j_username" style="width: 250px" id="j_username" type="text"/></td>
      </tr>
      <tr>
        <td align="right">Password</td>
        <td><input style="width: 250px" name="j_password" id="j_password" type="password"/></td>
      </tr>
      <tr>
        <td colspan="2" align="center">
          <input value="Enter" name="enter" type="submit"/>
          <input value="Clear" name="Clear" type="reset"/>
        </td>
      </tr>
    </table>
  </center>
</form>
The command that I am using for this is the following:
Code: curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" [URL] The command is properly formatted as far as I can tell. I tested it with another website using a similar authentication scheme using different POST variables specific to the form and it worked fine.
When I run the above command with the -v flag, it reveals this:
Code:
* Connected to lcl.uniroma1.it (151.100.4.74) port 80 (#0)
> POST /sso/j_security_check HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: lcl.uniroma1.it
> Accept: */*
> Content-Length: 44
> Content-Type: application/x-www-form-urlencoded
> } [data not shown]
< HTTP/1.1 408 The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser
< Date: Sat, 29 Jan 2011 15:26:41 GMT
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 1554
< Connection: close
< { [data not shown]
103  1554  100  1554    0    52   5081    170 --:--:-- --:--:-- --:--:-- 10223
* Closing connection #0
I cannot tell why the login timeout is expired when I try to do this, and my investigation toward this end has been fruitless. I saw a brief snippet on Google that vaguely suggested that the underscores in the domain name were at fault, but replacing these with their encoded counterparts did nothing to resolve the issue (that, and underscores should be fine when sent unencoded according to the standards). I have extensively perused the man pages and have come up with nothing to adequately explain this behavior. I also talked to a friend who has worked with curl in his line of work, but he mostly has experience in the context of PHP and has not dealt with this issue before. I am running GNU/Linux 2.6.35-22-generic-pae.
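One detail matches that 408 exactly: with container-managed security, a POST sent straight to j_security_check before any login session exists produces precisely the "time allowed for the login process" error, because the server has no pending login to complete. The usual fix is to request a protected page first so the server issues the session cookie, then submit the form. A sketch (the first URL stands in for whatever protected page normally redirects to the login form):
Code:
# step 1: establish the session cookie
curl -c cookies -b cookies -L "http://lcl.uniroma1.it/sso/protected_page"
# step 2: the POST now lands inside a live login session
curl -c cookies -b cookies -L \
     -d "j_username=user%40domain.com&j_password=pass" \
     "http://lcl.uniroma1.it/sso/j_security_check"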
This is what I tried: wget -A rar [-r [-l 1]] <mediafireurl>
That is to say, I tried with and without the recursive option. It ends up downloading an HTML page of a few KB in size, while what I want is in the range 90-100 MB and RAR.
What happens with MediaFire for those who may not be aware, is that it first says
Processing Download Request...
This text after a second or so turns into the download link and reads
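That page swap is the problem: the real link is assembled by JavaScript after the "Processing Download Request..." step, so wget only ever sees the interstitial HTML. One workaround is to scrape the direct URL out of that HTML and fetch it separately; a sketch, where the grep pattern is a guess at what the page contains:
Code:
curl -s "<mediafireurl>" |
    grep -o 'http://download[^"]*\.rar' | head -1 > direct.url
wget -i direct.url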
I have a requirement like this: cut the characters from each line of a file at the following positions: 21-24, 25-34, 111-120. These fields then need to be placed in a tab-delimited output file. Currently this is how I am achieving it:
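One common approach for fixed-position cuts with a tab-delimited result is awk's substr(), sketched here with placeholder file names:
Code:
# columns 21-24, 25-34 and 111-120, joined by tabs
awk '{ print substr($0,21,4) "\t" substr($0,25,10) "\t" substr($0,111,10) }' \
    input.txt > output.tsv
cut -c21-24,25-34,111-120 --output-delimiter=$'\t' looks tempting, but GNU cut merges the adjacent ranges 21-24 and 25-34 into one span, so the tab between them never appears; awk sidesteps that.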
I am trying to download a non-existent file from a repository by giving the URL, using curl APIs. It throws the error message "The requested URL returned error: 404", but the specified file is still created with 0 bytes. My code snippet is as follows:
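Whatever the snippet looks like, the behavior is typical: the output file is opened by the write path before the HTTP status is known. With CURLOPT_FAILONERROR set, curl_easy_perform() returns CURLE_HTTP_RETURNED_ERROR on the 404, after which the caller can delete the zero-byte file. The command-line analogue of the same idea:
Code:
# -f makes curl fail (exit 22) on HTTP errors >= 400 instead of saving
# the error body; clean up any empty file the failure left behind
curl -f -o pkg.bin "http://repo.example.com/missing" || rm -f pkg.bin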
I am looking for a way to configure rTorrent to stop downloading all torrents after they have downloaded x amount. For example, specify 15mb and as soon as the torrent reaches that size have it finish downloading the pieces it has requested and then start seeding partially completed. The reason for this is I'm trying to come up with a way to build ratio on a site where torrents are added very fast and at a very high frequency.
I download and add the torrents to rTorrent automatically via RSS, but I only want to download a small amount and seed that small piece while there are still a lot of people in the swarm (the swarm drops off very quickly), coming out with a positive ratio from that small piece and beating the ratio clock, so to speak. I thought it would be an interesting, albeit somewhat impractical, exercise in shell scripting, if rTorrent can be hooked into like that; documentation is sparse in some areas.
Is it possible to make a script that builds a tree of all folders and subfolders and outputs it to either a .txt file or a .pdf? All folders except one should be listed two levels deep; the excepted folder should be listed all the way down.
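The tree utility covers this with no scripting; a sketch with placeholder paths:
Code:
# two levels for everything, written to a text file
tree -d -L 2 -o tree.txt /path/to/root
# the one exceptional folder again, uncapped, appended to the same file
tree -d /path/to/root/special >> tree.txt
# for PDF, print tree.txt through enscript and ps2pdf or similar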
I have a site's URL, and I have its FTP admin username and password. How can I upload a file named app.log there (into the / directory) using curl? I read the manual but understood nothing.
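A minimal sketch with placeholder credentials: -T uploads the named local file, -u supplies the FTP login, and the trailing slash on the URL keeps app.log as the remote name:
Code:
curl -T app.log -u adminuser:password "ftp://site.example.com/"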
I am having some issues with downloading images to my website from my suppliers!
I have a text file (extracted from their product lists) which has all of the image URLs!
I have tried to use PHP via the script below, started from a cron job; however exec is blocked, and my hoster has told me to use curl. Is there something that can be written in or with curl to do the same thing?
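If the cron job can run a shell script directly, PHP drops out of the picture entirely; a sketch with placeholder paths:
Code:
#!/bin/sh
# fetch every image URL in the suppliers' list into the web directory
cd /path/to/htdocs/images || exit 1
while read -r url; do
    curl -sS -O "$url"
done < /path/to/image_urls.txt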
I have an HP LaserJet 2200 with 64MB of memory, connected via USB. When I print page 2 of this document (for example) [URL], the delta symbol and minus signs do not show up, although I see them in Adobe Reader version 9.3.
I have tried several of the available drivers for the printer (including the "[recommended]" one) and none of them produce these symbols.
I'm guessing this is a font issue. I don't know how to show you what font stuff I have installed.
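The installed-font question has a direct answer; fontconfig ships two commands worth posting output from:
Code:
fc-list | less      # every font fontconfig knows about
fc-match Symbol     # which font actually stands in for "Symbol"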
I am using "curl" command line tool to upload file to ftp server through ftps.I have also tried with the "Secure FTP" software from windows using Implicit mode, which works fine while transfering files.Command as follows:
curl -vk --ftp-ssl -u [username]:[password] ftps://ftp.hostname/directory/test.txt -T /tmp/text.txt --ftp-pasv --disable-expsv Login to server successfully but geting error while start to transfer data. The verbose
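One thing worth untangling, since the two options pull in opposite directions: an ftps:// URL requests implicit SSL (port 990), while --ftp-ssl requests explicit SSL on a plain ftp:// connection. Mixing them can log in fine and then die on the data connection. A sketch of the two modes kept separate:
Code:
# explicit SSL (AUTH TLS on port 21):
curl -vk --ftp-ssl -u user:pass -T /tmp/text.txt \
     "ftp://ftp.hostname/directory/" --ftp-pasv --disable-epsv
# implicit SSL (port 990), no --ftp-ssl needed:
curl -vk -u user:pass -T /tmp/text.txt "ftps://ftp.hostname/directory/"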
I have a quirky situation whereby I'm using SED to selectively comment out a line in a crontab job (on Solaris, I know but it's connected to the Linux function I'm working with).
What's happening is this.
Remove hash symbol:
Code:
/opt/csw/bin/gsed -i '/^.*/usr/local/scripts/mirror-fix.sh.*/ s/^#//' $TEMPFILE
Restore hash symbol:
The problem I'm running into is that the script can sometimes prepend an extra hash # symbol if run more than once (I have a lockfile that I poll for to discourage this, but that's not perfect).
I wonder how I can modify that sed statement to remove any and all leading hash marks up to the first other character, which in this case is a 0 (zero), since this is a crontab file.
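Two things help here, sketched below: the unescaped slashes in the path break the address (a \%...% delimiter avoids that), and s/^#*// collapses however many leading hashes have piled up:
Code:
# uncomment: strip any run of leading hashes from the matching line
/opt/csw/bin/gsed -i '\%/usr/local/scripts/mirror-fix.sh% s/^#*//' "$TEMPFILE"
# re-comment: add a hash only when the line does not already start with one
/opt/csw/bin/gsed -i '\%/usr/local/scripts/mirror-fix.sh% s/^\([^#]\)/#\1/' "$TEMPFILE"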
When I open a gtk app from the command line this error pops up:
Code:
Gtk: Unable to locate theme engine in module_path: "clearlooks"
I cannot find the Clearlooks theme engine anywhere; isn't it supposed to be part of GTK by default?
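A hedged guess, since the distro isn't stated: the engine usually ships in a separate package rather than in GTK itself. On Debian/Ubuntu:
Code:
sudo apt-get install gtk2-engines
# confirm the engine file landed where GTK looks for it:
ls /usr/lib/gtk-2.0/*/engines/libclearlooks.so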
I have a Cygwin script that connects to a web site with the bash command "curl". Phasing out Windows, I need to port everything to Linux, but Ubuntu bash doesn't recognize "curl". There has got to be an equivalent. What would it be? Beneath the dashed line is the top of the Cygwin man page. And while on the topic of the bash commands available on Ubuntu, where could I find an exhaustive list?
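curl exists on Ubuntu too; it simply isn't installed by default:
Code:
sudo apt-get install curl
# for a broad inventory of what's runnable or installable:
compgen -c | sort -u | less    # every command the current shell can find
apt-cache search . | less      # every package in the enabled repositories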
It used to work perfectly. I attempted to install a VPN client, and the result is that I can't connect to anything. Networks are seen by Network Manager, but nothing happens when I click connect. Results below:
Wireless LAN present in Hardware information. Atheros AR242x 802.11abg Wireless PCI Express Adapter (wlan0) Kernel Driver: ath5k 168c:001c /var/log/boot.msg
[code]....
The advice is to use the output of the dmesg command to find the problem, but as above I don't get any results for this in the terminal.
Attempt to ping an external site:
linux-cfi6:~ # ping -c 66.70.73.150
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
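Incidentally, that usage message is the whole story for the ping test: -c expects a count, so it swallowed the IP address as its argument. Give it a count and the test becomes meaningful:
Code:
ping -c 4 66.70.73.150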
So I just turned my computer on and it was beeping rapidly, and it would stop if I hit "Enter". Also, something like "Cannot set Fray" came up on my screen. It boots fine; just what is that?