Ubuntu :: Download A File Via HTTP From A Shell?
Mar 28, 2011
How can I download a file via HTTP from a shell?
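A couple of standard command-line downloaders cover this; a minimal sketch (the URL is a placeholder):
Code:
# fetch a file over HTTP and save it under its remote name
wget http://example.com/files/archive.tar.gz
# the same with curl; -O keeps the server-side file name
curl -O http://example.com/files/archive.tar.gz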
I have a Debian Lenny box running Apache 2 and PHP 5.2.6.
When a request is made via HTTPS, PHP displays the content fine. If the request is made over HTTP, the file is offered for download rather than being displayed.
I know it's probably something trivial, but I've never seen this issue.
The plot thickens: I can display PHP over HTTP in some directories, but others offer the file for download instead.
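A quick diagnostic sketch for narrowing this down (the host name and paths below are placeholders, not taken from the post):
Code:
# compare what each scheme returns for the same script
curl -I https://server.example/test.php
curl -I http://server.example/test.php
# if the HTTP response carries a PHP-source Content-Type instead of text/html,
# the PHP handler is probably missing from that vhost or directory configuration
grep -ri "SetHandler\|AddType\|php" /etc/apache2/sites-enabled/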
I have two files, uploads.txt and downloads.txt. I would like to combine the columns of these files based on the ip address. How can I best do this?
uploads.txt
Code:
192.168.0.147 1565369
192.168.0.13 1664855
192.168.0.6 1332868
downloads.txt
Code:
192.168.0.147 9838820
[code]...
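A minimal sketch of one way to do this with join, assuming each file holds one IP address and one byte count per line as in the snippets above:
Code:
# join needs its inputs sorted on the key column (the IP address)
sort -k1,1 uploads.txt > uploads.sorted
sort -k1,1 downloads.txt > downloads.sorted
# output: IP address, uploaded bytes, downloaded bytes
join -j 1 uploads.sorted downloads.sorted > combined.txt
# e.g. 192.168.0.147 1565369 9838820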
Using netcat, nc(1), craft a valid HTTP/1.1 request for getting the HTTP headers (not the HTML file itself!) for the main index page of www dot aalto dot fi. What request method did you use? Which headers did you need to send to the server? What was the status code for the request? Which headers did the server return? Explain the purpose of each header.
nc -v www dot aalto dot fi 8080
HEAD / HTML/1.1
host: www dot aalto dot fi
And it returns:
200 OK
Content-Length: 858
Content-Type: text/html
Last-Modified: Thu, 02 Sep 2010 12:46:01 GMT
[Code]....
I really don't know what it means. Question 2: Using netcat, nc(1), start a bogus web server listening on the loopback interface, port 8080. Verify with netstat that the server really is listening where it should be. Direct your browser to the bogus server and capture the User-Agent: header. The part I don't understand is "Direct your browser to the bogus server and capture the User-Agent: header".
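A sketch of both exercises, assuming the site serves plain HTTP on port 80 (the request line has to say HTTP/1.1, not HTML/1.1, and HTTP/1.1 requires a Host: header; the blank line ends the request):
Code:
# Question 1: ask for the headers of the index page only
printf 'HEAD / HTTP/1.1\r\nHost: www.aalto.fi\r\nConnection: close\r\n\r\n' | nc -v www.aalto.fi 80

# Question 2: bogus server on the loopback interface, port 8080
nc -l 127.0.0.1 8080          # some netcat variants want: nc -l -p 8080
netstat -tln | grep 8080      # verify it is really listening there
# then point a browser at http://127.0.0.1:8080/ and read the User-Agent: line nc prints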
I have a question about using ubuntu to download files from an HTTP server remotely and didn't know where to put it, so hopefully it falls under general support. Anyway, I am about to move into a place with an incredibly slow internet connection and a tiny data allowance and my brother has said that, if possible, I can use his internet connection to download any large files to a box I can just leave at his place, then I can simply come over to his place every few weeks and copy said files to a hard drive and all will be well. The problem is that I am not sure how to do this.
Today I went out and bought a few parts and built a cheap computer with an HDD big enough to hold whatever I need, but when I got home I realised I had no idea how I was going to handle the software side of this. Is there any way that I can access that computer remotely over the internet and schedule fairly large downloads from an HTTP server? Also, after talking to a friend I was told that I need to install the server version of Ubuntu for this to work; is this correct? If it's relevant, the computer I have for this uses an "Intel Desktop Board D510M0 + Intel Atom Processor D510", which is 64-bit.
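For this kind of setup a desktop install is fine; the server edition is not required as long as an SSH server is running on the box. A hedged sketch (the host name, user, and URL are placeholders):
Code:
# on the download box, once: install the SSH server so it can be reached remotely
sudo apt-get install openssh-server

# from anywhere else: log in and queue a big download that survives logging out
ssh user@download-box
nohup wget -c "http://example.com/big-file.iso" > big-file.log 2>&1 &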
I'm wondering how to download a whole directory from a web site (for example http://alien.slackbook.org/ktown/4.5.1/x86/kde/) to my hard drive.
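A sketch using wget's recursive mode (all standard wget flags; adjust --cut-dirs to taste):
Code:
# -r recurse, -np never ascend to the parent directory, -nH drop the host name
# from the local path, --cut-dirs=3 drop ktown/4.5.1/x86, -R skip the index pages
wget -r -np -nH --cut-dirs=3 -R "index.html*" http://alien.slackbook.org/ktown/4.5.1/x86/kde/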
Yahoo! is shutting down GeoCities and I need to download all the files in my web folder there. Is there a program that will download all the files automatically?
I'd like downloads to start when someone clicks certain links. How can I force files to be downloaded (rather than displayed) over HTTP with Apache 2?
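One common approach is to mark the files as attachments; a hedged sketch assuming mod_headers is enabled and the files live in a downloads/ directory (the path is an assumption):
Code:
# anything served from this directory is offered for download instead of displayed;
# requires mod_headers (a2enmod headers on Debian/Ubuntu)
cat > /var/www/downloads/.htaccess <<'EOF'
Header set Content-Disposition attachment
EOF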
I am trying to connect to the web interface found at [URL] using curl. This first requires login information to be entered at [URL], but I am having an issue with the login process. I am trying to submit the following form via POST:
Code:
<form action="j_security_check" method="post" id="login_form" name="login_form">
  <center>
    <table style="background: #cac1cf; FONT-SIZE: 12px;">
      <tr>
        <td align="center" colspan="2">Please enter your username and password:</td>
      </tr>
      <tr>
        <td align="right">Username</td>
        <td><input name="j_username" style="width: 250px" id="j_username" type="text"/></td>
      </tr>
      <tr>
        <td align="right">Password</td>
        <td><input style="width: 250px" name="j_password" id="j_password" type="password"/></td>
      </tr>
      <tr>
        <td colspan="2" align="center">
          <input value="Enter" name="enter" type="submit"/>
          <input value="Clear" name="Clear" type="reset"/>
        </td>
      </tr>
    </table>
  </center>
</form>
The command that I am using for this is the following:
Code:
curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" [URL]
The command is properly formatted as far as I can tell. I tested it with another website using a similar authentication scheme using different POST variables specific to the form and it worked fine.
When I run the above command with the -v tag, it reveals this:
Code:
* Connected to lcl.uniroma1.it (151.100.4.74) port 80 (#0)
> POST /sso/j_security_check HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: lcl.uniroma1.it
> Accept: */*
> Content-Length: 44
> Content-Type: application/x-www-form-urlencoded
>
} [data not shown]
< HTTP/1.1 408 The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser
< Date: Sat, 29 Jan 2011 15:26:41 GMT
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 1554
< Connection: close
<
{ [data not shown]
103 1554 100 1554 0 52 5081 170 --:--:-- --:--:-- --:--:-- 10223*
Closing connection #0
I cannot tell why the login timeout is expired when I try to do this, and my investigation toward this end has been fruitless. I saw a brief snippet on Google that vaguely suggested that the underscores in the domain name were at fault, but replacing these with their encoded counterparts did nothing to resolve the issue (that, and underscores should be fine when sent unencoded according to the standards). I have extensively perused the man pages and have come up with nothing to adequately explain this behavior. I also talked to a friend who has worked with curl in his line of work, but he mostly has experience in the context of PHP and has not dealt with this issue before. I am running GNU/Linux 2.6.35-22-generic-pae.
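One thing worth trying (a sketch, not a confirmed fix): Tomcat's j_security_check expects the login POST to belong to a session that was created when the protected page was first requested, and a POST arriving without that session is a common cause of this 408. Fetching the protected page first and reusing its cookie may help; the first path below is a placeholder.
Code:
# 1) request the protected page so the server creates a session and sets JSESSIONID
curl -c cookies -L "http://lcl.uniroma1.it/sso/some-protected-page"   # placeholder path
# 2) then post the credentials inside that same session
curl -b cookies -c cookies -L \
     -d "j_username=user%40domain.com&j_password=pass" \
     "http://lcl.uniroma1.it/sso/j_security_check"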
I installed Nagios on my Ubuntu 10.04 server using apt-get, and when I accessed the web console everything was OK. I made some changes to Apache (creating some new virtual sites) and since then Nagios gives me a warning for HTTP with the message "HTTP WARNING: HTTP/1.1 404 Not Found". The sites that I created are working perfectly. I noticed that the attempts are 4/4. Does this need to be reset, or does Nagios automatically reset it once it detects the issue is resolved?
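Nagios keeps re-checking on its own schedule and the state clears by itself once the check returns OK; the 404 usually means the default check now hits a URL that no longer exists on the changed default vhost. A hedged sketch of running the same check by hand (the plugin path and URL are typical Ubuntu values, not confirmed from the post):
Code:
# run the plugin exactly as Nagios would, against a page that does exist
/usr/lib/nagios/plugins/check_http -H localhost -u /index.html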
I have a slave node uploading all kinds of backups to my server on the internet. Now I would like to display the actual upload and download rate to this server (not the entire NIC traffic, any protocol) in a small PHP page for easy monitoring. I had a look at quite a few monitoring tools, and the one that comes closest to what I am looking for is iftop with a filter on the IP of my server. As I would like to periodically update a file with the actual rates, an interactive program won't do. A possibility would be to filter the packets myself, but this seems to be quite a long shot. The optimal solution would be a program or script printing the actual upload to a host specified in the options to STDOUT.
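iftop does have a non-interactive text mode that fits this; a sketch, with the interface and server address as placeholders:
Code:
# -t: text output instead of the ncurses UI, -s 5: print one report after 5 seconds
# and exit, -f: count only traffic to/from the backup server
iftop -i eth0 -t -s 5 -f 'host 203.0.113.10' > /var/www/html/rate.txt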
I recently set up an automated shell script (bash on Ubuntu 8.10) to download new files from a server using ftp. Unfortunately the other end of the link is not terribly stable (and there is nothing I can do about this) which has resulted in the script hanging sometimes and then being kicked off again at the time set in the crontab.
This has resulted in multiple hung sessions taking up all the system resources.
The offending section of code is given below.
Code:
I'd like to know if there is a way I can force an exit if the connection hangs or alternatively if something like the Perl Net::FTP can handle these sorts of errors internally.
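One hedged option is to do the transfer step with curl, which can enforce its own timeouts, instead of the plain ftp client (the host, credentials, and path below are placeholders):
Code:
# give up if no connection is made within 30 seconds, and kill the whole transfer
# if it has not finished within 15 minutes, so a hung session cannot linger
curl --connect-timeout 30 --max-time 900 \
     -u backupuser:secret -O "ftp://ftp.example.com/outgoing/newfile.dat"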
I have a web page with links to a PHP script that sends PDF files to the browser when clicked.
The links are like this:
Code:
<a href="getfile.php?id=1201234">
The files sent are shown embedded in the browser, which is what I want.
The problem is that the title of the browser window or tab in which the PDF file is opened is "getfile.php?id=1201234", and not the actual file name of the PDF.
Is it possible to send the file from a PHP script in a way that the window/tab title becomes the filename rather than the link by which it was accessed?
I need a small shell script that downloads HDF data from ftp://e4ftl01u.ecs.nasa.gov/MOLT/MOD13A2.005/ (each subfolder contains files named like MOD13A2.A2000049.h26v03.005.2006270052117.hdf), and then copies all files containing h26v03 to the local machine.
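A hedged sketch with wget's recursive FTP mode, keeping only the h26v03 tiles (depth and pattern are inferred from the example file name above):
Code:
# -r: recurse into the date subfolders, -l2: two levels is enough here,
# -np: stay below MOD13A2.005, -nH --cut-dirs=2: flatten the local path,
# -A: accept only files whose names contain the h26v03 tile
wget -r -l2 -np -nH --cut-dirs=2 -A "*h26v03*.hdf" \
     "ftp://e4ftl01u.ecs.nasa.gov/MOLT/MOD13A2.005/"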
View 1 Replies View RelatedI'm running a webserver and i've uploaded serveral .txt files. I want them to be downloadable... For example if someone opens: [URL], to start downloading, not just to open in the browser.
Code:
cat ${SOURCE}/{start,universal,index,end}.txt > ${SERVER}/index.html
cat ${SOURCE}/{start,universal,02042010,end}.txt > ${SERVER}/02042010.html
[code]....
What's the point of checking whether my file was tampered with (as the Debian page [URL] says to do) if the signatures are downloaded over HTTP? Didn't the recent incident with Linux Mint clear our minds about these things?
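For what it's worth, the check still means something as long as the signature is verified against a key obtained out of band (the Debian signing keys), because an attacker who tampers with the HTTP download cannot forge the signature. A hedged sketch of the usual flow; the file names and URLs follow the Debian CD conventions and may need adjusting:
Code:
# fetch the checksum list and its detached signature alongside the image
wget http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS
wget http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/SHA512SUMS.sign
# verify the signature with the Debian CD signing key already in your keyring
gpg --verify SHA512SUMS.sign SHA512SUMS
# confirm the downloaded image matches one of the signed checksums
grep "$(sha512sum debian-*.iso | awk '{print $1}')" SHA512SUMS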
I am using an FC9 server on which I installed the Apache web server, and I put a data file in my html folder. When I try to download it remotely through a web browser, I can get the file; when I try to fetch it remotely with the wget command, I cannot: it fails with "Connection timed out" and keeps retrying. Below are the steps I tried:
my target file is http://X.X.X.X/test.zip
wget -T 0 http://X.X.X.X/test.zip
wget http://X.X.X.X/test.zip
[code]...
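Since a browser can fetch the file but wget cannot, the difference is often environmental (a proxy that one of them uses and the other does not, or a firewall rule); a diagnostic sketch:
Code:
# see whether wget is being pushed through a proxy the browser is not using
env | grep -i proxy
wget --no-proxy http://X.X.X.X/test.zip
# confirm port 80 is reachable from this machine at all
curl -I --connect-timeout 10 http://X.X.X.X/test.zip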
We're having an issue with HTTP POST file uploads on our two Ubuntu PCs. For some reason, whenever one of our users attempts to submit a file in an HTML form, the request times out, usually with a 500 Internal Server Error message. This problem is not limited to one site, but occurs on all sites that use file uploads. Also, the problem does not appear to be with our network, as a Windows 7 PC on the same network can upload files to the same sites without any difficulties. The problem is not browser-specific; we have tested with Firefox, Epiphany, and Google Chrome and all produce the same results. The issue is relatively new, and was first observed within the last month; before this time, both machines had no problems uploading files.
Does anyone have ANY idea what could be causing this? I've tried a number of things, including rebooting the PCs, rebooting the network, disabling IPv6, etc. I'm not very experienced in Linux system administration, but I can use the terminal and am familiar with some terminal-based diagnostic tools, so if you need any additional info or want me to try something, please let me know! I've exhausted my own computer knowledge with regards to finding a solution to this problem.
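One hedged way to take the browsers out of the picture and see the server's raw response is curl's multipart upload; the URL and field name below are placeholders for whatever the failing form uses:
Code:
# submit a small file the same way an HTML form would (multipart/form-data)
# and print the full request/response exchange
curl -v -F "file=@/etc/hostname" "http://example.com/upload" -o /dev/null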
I configured a website on a dedicated server using WHM/cPanel. The site was uploaded using the website's master account.
The security issue is that public users are able to upload files onto my server via the website. They could even reach the document root and execute whatever they want on the server.
I have consulted with 2-3 Linux experts. According to them, the PHP user has rights to execute anything on the server or upload & store files in whichever folder they want.
Can I protect my folders to prevent file uploads via the website? The application has security vulnerabilities, and I want to keep attackers out of my site until the vulnerabilities are fixed.
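A stopgap sketch, not a substitute for fixing the application: stop anything dropped into the upload directory from being executed as PHP, and take the web server user's write access away from the code directories. The paths are assumptions, and php_flag only works under mod_php; suPHP/FastCGI setups need a different mechanism.
Code:
# no PHP execution in the upload directory
cat > /home/site/public_html/uploads/.htaccess <<'EOF'
php_flag engine off
RemoveHandler .php .phtml .php5
EOF

# code directories become read-only for the web server user
chown -R root:root /home/site/public_html/includes
chmod -R a-w /home/site/public_html/includes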
I've never figured out a good program to use for file globbing from HTTP sites. Wget works with FTP sites and file globbing, and for mirroring I use lftp, but I would really like to download just the files that start with "xf" from Robby's site.
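Globbing as such does not work over HTTP, but wget's recursive accept list gets close; a sketch, with the URL standing in for Robby's directory:
Code:
# read the directory index, keep only files whose names begin with "xf",
# and do not create a local directory tree
wget -r -l1 -np -nd -A "xf*" http://example.com/robby/packages/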
I have NDISWRAPPER installed on my laptop, but when I try to install the downloaded file, an 8 MB Windows XP DOS executable, I have no success, although I have tried everything. I can see my Iomega 250 Zip drive when I go into System > Administration > Disk Utility and access its properties, but I cannot make it run.
Dell Inspiron 6400
OS: Ubuntu 10.04 LTS
RAM: 2 GB
HDD: 250 GB
Tony044
How can I read a .gz file directly in the shell/terminal without decompressing the file?
satimis
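Several standard tools read gzip-compressed files in place; a quick sketch (the file name is a placeholder):
Code:
zcat logfile.gz             # stream the uncompressed contents to stdout
zless logfile.gz            # page through the file
zgrep "error" logfile.gz    # search inside it without unpacking
zcat logfile.gz | head -20  # peek at the first lines only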
I need a shell script that will add the user's name and the date to a file when a user has modified it. These files belong to a group and are only accessible to that group, but we need a way for people in the group to know who last modified each file and when.
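A sketch using inotifywait from the inotify-tools package: it logs a line every time a file in the shared directory is written. Recording who actually wrote it is the hard part, since inotify does not report the writing user and stat only shows the file's owner; reliable attribution needs auditd or a wrapper script the group members call. The directory and log paths are assumptions.
Code:
#!/bin/bash
# log every write to files in the shared directory
WATCH_DIR=/srv/shared
LOG=/srv/shared/modifications.log

inotifywait -m -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r file; do
    # stat reports the file's owner, not necessarily the last writer
    printf '%s modified (owner %s) on %s\n' "$file" "$(stat -c %U "$file")" "$(date -R)" >> "$LOG"
done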
I am trying to convert a binary file into ASCII using a shell script. The file contains multiple types of data: strings, numbers, BCD, etc.
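Without knowing the record layout, the generic tools can only dump the bytes in a readable form; a sketch (the file name is a placeholder):
Code:
xxd datafile.bin | head               # hex dump with printable ASCII alongside
od -A x -t x1z datafile.bin | head    # the classic od view, hex bytes plus characters
strings datafile.bin                  # pull out any embedded text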
I am parsing an XML file with a shell script and want to generate a report as a PDF file.
XML file input:
<report>
<student name="x" father name="x1" class="first" Address="xyz">
<property name="sports" value="yes"/>
<property name="drawing" value="no"/>
[code]....
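A hedged sketch of one possible pipeline, assuming xmlstarlet, enscript, and ps2pdf are installed and that the attribute names match the snippet above:
Code:
# turn each <student> element into one text line, then typeset that text as a PDF
xmlstarlet sel -t -m '//student' \
    -v '@name' -o '  class: ' -v '@class' -o '  address: ' -v '@Address' -n \
    report.xml > report.txt

enscript -B -o report.ps report.txt   # plain text -> PostScript
ps2pdf report.ps report.pdf           # PostScript -> PDF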
I am supposed to take some small files, and print them to a specific printer, such that the small files are concatenated into one file. The file name has to be included in the file that gets printed.
Should I be looking to concatenate the files into one file with the file names included, and then print them?
something like: -printfunction -printername < file*
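A sketch along those lines, writing a header with each file name before its contents and sending the lot to one printer (the printer name is a placeholder):
Code:
# concatenate the files, labelling each with its name, then print as one job
for f in file*; do
    printf '===== %s =====\n' "$f"
    cat "$f"
    printf '\n'
done | lp -d myprinter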
Quote:
/usr/local/bin/mencoder /root/video1.avi -of lavf -ovc lavc -lavcopts vcodec=flv:vbitrate=300:acodec=libfaac:abitrate=64 -srate 22050 -oac lavc -vf scale=360:240 -o /root/output_temp_video1.flv
[code]....
I am trying to install LTIB for the P2020DS. I got the following error:
[hwtesting@HWLSRV1 ~]$ cd /home/hwtesting/ltib-p2020ds-20091119
[hwtesting@HWLSRV1 ltib-p2020ds-20091119]$ ./ltib
Don't have HTTP::Request::Common
Don't have LWP::UserAgent
Cannot test proxies, or remote file availability without both
HTTP::Request::Common and LWP::UserAgent
add the following line to the User Privilege section:
hwtesting ALL = NOPASSWD: /bin/rpm, /opt/freescale/ltib/usr/bin/rpm
I edited the sudoers file with the visudo command and inserted that line just under the following line:
root ALL=(ALL) ALL
But I am still getting the following:
Don't have HTTP::Request::Common
Don't have LWP::UserAgent
Cannot test proxies, or remote file availability without both
HTTP::Request::Common and LWP::UserAgent
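Those are just Perl modules missing on the build host; a hedged sketch of installing them (the yum package name is the usual one on Fedora/CentOS hosts and may differ elsewhere):
Code:
# Fedora/CentOS build host
sudo yum install perl-libwww-perl
# or, from CPAN on any distribution
sudo cpan LWP::UserAgent HTTP::Request::Common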
I need to change some configuration in the httpd.conf file without affecting the current status of the HTTP service.
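The usual approach is to validate the edited file and then ask Apache for a graceful reload, which re-reads httpd.conf while letting in-progress requests finish; a short sketch:
Code:
apachectl configtest   # check the edited file for syntax errors first
apachectl graceful     # re-read the configuration without dropping active connections
# on some distributions the equivalent is: service httpd reload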