Ubuntu :: Cannot Apt-get, Or Wget Anything?
Sep 10, 2010
I'm typing this on my Linux laptop at work. My Firefox works fine, but I cannot apt-get or wget anything. To get Firefox to work, I just went into the Firefox preferences, checked "Automatic proxy configuration URL" and entered the URL that I have. Now Firefox works fine, but the rest of my system does not. There seems to be a similar setting in System > Preferences > Network Proxy: a check box for "Automatic proxy configuration" and a field for an "Autoconfiguration URL". I put the same URL that I put into Firefox there and told it to apply system-wide, but my apt still does not work. This is a big deal because I need to install software and I really don't want to start manually downloading packages, plus I need ssh.
I have googled extensively on how to get apt to work from behind a proxy, but nothing seems to be working. I don't have a specific proxy server and port; rather I have some kind of autoconfiguration URL. Plus, my system has no /etc/apt.conf file at all. Any ideas on how I can get my system to be able to access the internet? It's very strange to me that Firefox can, but apt, ping, wget, etc cannot.
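For reference, apt cannot read a proxy autoconfiguration (PAC) URL; it needs an explicit proxy host and port, which can usually be found by opening the autoconfiguration URL itself and reading the PROXY entry inside it. A minimal sketch, with proxy.example.com:8080 standing in as a hypothetical host and port taken from that PAC file:
Code:
// /etc/apt/apt.conf.d/95proxy -- hypothetical proxy host and port read out of the PAC file
Acquire::http::Proxy "http://proxy.example.com:8080/";
Code:
# for wget and other command-line tools, e.g. in ~/.bashrc or /etc/environment
export http_proxy="http://proxy.example.com:8080/"
export ftp_proxy="http://proxy.example.com:8080/"
This is only a sketch of the usual setup behind a corporate proxy; if the PAC file picks different proxies per destination, a single host/port is an approximation.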
View 10 Replies
Jun 19, 2011
If a wget download is interrupted (for example if I have to shut down prematurely), I get a wget.log recording the partial download. How can I later resume the download using the data in wget.log? I have searched high and low (including the wget manual) and cannot find how to do this. Is it so obvious that I did not see it? Passing wget.log as an argument to the -c option does not work. What I do now is open wget.log, copy the URL, paste it into the command line and run another wget. This works, but the download starts from the beginning, which means nothing in wget.log is used.
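For what it's worth, -c takes no argument of its own; the usual approach is to re-run wget with -c and the original URL (which can be copied out of wget.log) while the partially downloaded file is still in the same directory under its original name. A minimal sketch with a placeholder URL:
Code:
# -c resumes from the existing partial file; -o writes a fresh log (use -a to append to the old one)
wget -c -o wget.log http://example.com/big-file.iso
The data needed for resuming lives in the partial file itself, not in wget.log; the log is mainly useful for recovering the URL.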
View 2 Replies
Dec 17, 2010
I am trying to set up a cool effect where gnome scheduler uses wget to download this image every three hours. However, even when I do it manually in the terminal it doesn't seem to download correctly. When I go to open the .jpg, a big red bar on the top says "Could not load image '1600.jpg'. Error interpreting JPEG image file (Not a JPEG file: starts with 0x47 0x49)".
However, when I go to the picture in the link above and right click "Save Image As" it downloads it fine.
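A hedged observation: 0x47 0x49 is ASCII "GI", so whatever was saved is most likely a GIF (or some other non-JPEG response) rather than the JPEG the browser gets; the server may be handing wget a different resource than the one saved via "Save Image As". A small sketch for checking, with the URL and filename as placeholders:
Code:
# See what actually arrived
file 1600.jpg
# Fetch the final image URL directly, quoted, with an explicit output name
wget -O 1600.jpg "http://example.com/path/1600.jpg"
If file(1) reports GIF image data, simply saving it as .gif (or converting it) may be all that is needed.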
View 4 Replies
Feb 23, 2011
I'm currently using wget to keep a running mirror of another site but I don't have much space locally. I was wondering if there was a way to turn on -N (timestamping) so that only the "updates" were retrieved (i.e. new/modified files) without hosting a local mirror.
Does -N take a timestamp parameter that will pull any new/modified files after "x"?
It seems like a waste to compare remote file headers against a timestamp without presenting the option of supplying that timestamp. Supplying a timestamp would allow me to not keep a local mirror and still pull updates that occurred after the desired timestamp.
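As far as I can tell, wget's -N takes no date parameter; it only compares against files that already exist locally. One workaround, offered as a sketch rather than a wget feature: curl's --time-cond (-z) option does accept a date expression, so individual files can be pulled only if they changed after a given point, without keeping a mirror:
Code:
# curl, not wget: fetch only if the remote copy is newer than the given date (URL is a placeholder)
curl -z "Feb 23 2011" -O "http://example.com/path/file.tar.gz"
This works per-file rather than per-site, so it fits a known list of URLs better than a recursive crawl.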
View 3 Replies
Apr 28, 2011
Like the subject says, I'm looking for a wget'able 11.04 Live CD URL. This URL works great with a point and click but doesn't tell me what the direct URL is to use with wget. [URL]
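If it helps, the release ISOs are normally published under releases.ubuntu.com, so something like the following should be wget'able; the exact filename is an assumption and is worth confirming in the directory listing first:
Code:
# Browse http://releases.ubuntu.com/11.04/ to confirm the filename, then:
wget -c http://releases.ubuntu.com/11.04/ubuntu-11.04-desktop-i386.iso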
View 1 Replies
Feb 21, 2010
I'm trying to download a set of files with wget, and I only want the files and paths "downwards" from a URL, that is, no other files or paths. Here is the command I have been using:
Code:
wget -r -np --directory-prefix=Publisher http://xuups.googlecode.com/svn/trunk/modules/publisher
There is a local path called 'Publisher'. The wget works okay and downloads all the files I need into the /Publisher path, but then it starts loading files from other paths. If you see [URL]..svn/trunk/modules/publisher, I only want those files, plus the paths and files beneath that URL.
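A hedged sketch of one way to pin the recursion down: --include-directories (-I) limits wget to the listed subtree, and --no-parent behaves better when the start URL ends in a slash so wget treats it as a directory rather than a file:
Code:
# Only follow links inside /svn/trunk/modules/publisher (note the trailing slash on the URL)
wget -r -np -I /svn/trunk/modules/publisher --directory-prefix=Publisher http://xuups.googlecode.com/svn/trunk/modules/publisher/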
View 2 Replies
Aug 4, 2010
I'm trying to use wget on an ubuntu 10.04 server 64bit, with 16GB RAM and 1.1 TB free disk space. It exits with the message "wget: memory exhausted". I'm trying to download 1MB of some sites. After different tries this is the command I'm using:
Code:
wget -r -x -Q1m -R "jpg,gif,jpeg,png" -U Mozilla http://www.onesite.com
(I only need the HTML documents, but when I run with the -A option only the first page is downloaded, so I changed to -R.)
This happens with wget 1.12 version. I've tried the same command in other computers with less RAM and disk space (ubuntu 8.04 - wget 1.10.2) and it works just fine.
View 1 Replies
Aug 17, 2010
I am using wget to grep some data from a .htaccess-protected website. I don't want to use the --http-user= and --http-password= variables in the script, so I tried to create a ~/.wgetrc file. Whenever I run my wget script, it never uses the http_user and http_password entries to log in to the website.
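For comparison, a minimal ~/.wgetrc sketch (user name and password are placeholders). wget only reads the .wgetrc in the invoking user's home directory, which can trip up scripts run through sudo or cron, and older releases spell the second command http_passwd, so checking man wget for the exact spelling is worthwhile:
Code:
# ~/.wgetrc  (chmod 600 is a good idea)
http_user = myname
http_password = mypassword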
View 2 Replies
Dec 11, 2010
Would it be possible to use wget to order something on, e.g., [URL]?
View 4 Replies
Apr 25, 2011
I'm trying to parse some redfin pages and it seems like I'm having a problem with the # symbol. Running the following:
Code:
echo 'http://www.redfin.com/homes-for-sale#!search_location=issaquah,wa&max_price=275000 ' > /tmp/issaquah_main.txt
wget --level=1 -convert-links --page-requisites -o issaquah/main -i /tmp/issaquah_main.txt
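A hedged guess at what is going wrong: everything after # is a URL fragment, which wget never sends to the server (the #!search_location=... part is interpreted by the site's JavaScript in the browser), so plain wget effectively fetches just /homes-for-sale; also, --convert-links needs the double dash. A sketch with the option spelled out and the URL kept in the list file as before:
Code:
# The fragment (#!...) is client-side only; wget will fetch the base page regardless
wget --level=1 --convert-links --page-requisites -o issaquah/main -i /tmp/issaquah_main.txt
Pages whose content is built from the #! parameters by JavaScript generally cannot be captured with wget alone.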
View 3 Replies
Jun 24, 2010
I was trying to copy some files on my HDD using wget; this was the format of the command. The catch is that there is a local website installed into a directory hierarchy, and I would like to use wget to make the HTML files link to each other in one directory level. The command didn't work in spite of trying different forms, so what's the mistake in this command, or is there another way?
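The command itself isn't shown above, but here is a sketch of the combination usually suggested for flattening a copy into one directory with working local links, assuming the local site is reachable at http://localhost/site/ (an assumption):
Code:
# -nd puts every file into one directory, -k rewrites the links so pages reference each other locally
wget -r -nd -k -p http://localhost/site/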
View 3 Replies
Jul 26, 2010
I have executed the command
Code:
sudo wget -r -Nc -mk [URL]
(referring to the site: [URL]) to download an entire website with wget for offline viewing on Linux.
In the middle of the download I shut down my laptop (the download was not finished), and when I started the laptop again I executed the same command to continue downloading, but I got this error:
Code:
test@test-laptop:/data/Applications/sites/googlejam$ sudo wget -r -Nc -mk [URL]
--2010-07-25 19:41:32-- [URL]
Resolving code.google.com... 209.85.227.100, 209.85.227.101, 209.85.227.102, ...
Connecting to code.google.com|209.85.227.100|:80... connected.
HTTP request sent, awaiting response... 405 Method Not Allowed
2010-07-25 19:41:33 ERROR 405: Method Not Allowed.
Converted 0 files in 0 seconds.
test@test-laptop:/data/Applications/sites/googlejam$
View 8 Replies
Sep 13, 2010
I used crontab to start wget and download the file with the following:
Quote:
14 02 * * * wget -c --directory-prefix=/home/Downloads/wget --input-file=/home/Downloads/wget/download.txt
But it doesn't show a terminal, so I am not able to see the current status or stop wget. How can I start wget with a terminal using crontab?
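Cron jobs have no terminal attached; a common sketch is to log to a file instead and watch that log from any terminal, stopping the job by process name if needed (paths reuse the ones in the crontab entry above):
Code:
# crontab entry: write progress to a log rather than expecting a terminal
14 02 * * * wget -c -o /home/Downloads/wget/wget.log --directory-prefix=/home/Downloads/wget --input-file=/home/Downloads/wget/download.txt
# later, from any terminal:
tail -f /home/Downloads/wget/wget.log
pkill -f "wget -c"
Running the job inside screen (and re-attaching later with screen -r) is another route if an interactive view is really needed.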
View 1 Replies
Nov 4, 2010
I am trying to wget a site so that I can read stuff offline. I have tried:
Code:
wget -m sitename
wget -r -np -l1 sitename
[code]....
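For comparison, a sketch of the option set usually suggested for readable offline copies (the site name is a placeholder); --convert-links and --page-requisites are what make the saved pages render properly offline:
Code:
# Mirror one site for offline reading; -E saves pages with an .html extension
wget --mirror --convert-links --page-requisites --no-parent -E http://example.com/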
View 7 Replies
Jan 18, 2011
I don't know if there is an apt-get for it or if I need to use wget.
View 1 Replies
May 6, 2011
Is there a mirror I could use to download a recent version of Ubuntu (e.g. Natty)? I'd like to use wget but can't find an address for a mirror.
View 3 Replies
Jul 28, 2011
I want to try to download an image of the earth with wget, located at [URL], which is refreshed every 3 hours, and set it as a wallpaper (for whoever is interested, details here). When I fetch the file with
Code:
wget -r -N [URL]
the jpeg is only 37 bytes, which of course is too small and not readable.
View 5 Replies
Aug 8, 2011
I am running Ubuntu Linux. I use the FlashGot add-on with Firefox. When I download a file, I choose wget with the help of FlashGot. When FlashGot starts the download with wget, xterm opens and the download starts, so I can't copy the download link. With lxterminal, gnome-terminal or rox-term it is possible to copy the download link, but xterm has poor quality and it's not possible to copy anything from it. So please tell me how I can use lxterminal or gnome-terminal with FlashGot to download a file, so that I can copy the download link.
View 5 Replies
Mar 3, 2010
I'm trying to figure out how to use wget to save a copy of a page that is frequently updated. Ideally, what I'd like it to do is save a copy of the page every minute or so. I don't need multiple copies; I just need to know what the most recent version was. Also, if the page disappears for whatever reason, I don't want it to save the error page, just wait until the page is up again.
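A small sketch of one way to do this from cron: download to a temporary file and only replace the saved copy when wget exits successfully, so a failed fetch or error page never clobbers the last good version (URL and paths are placeholders):
Code:
# crontab entry, runs every minute; the saved page is only replaced on a successful download
* * * * * wget -q -O /tmp/page.new "http://example.com/page.html" && mv /tmp/page.new /home/user/latest-page.html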
View 2 Replies
Mar 7, 2010
I download files from Megaupload and Hotfile. Is there any possibility of making wget download more than one file at a time? Or do you suggest any other download programme? I have Ubuntu 9.10.
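wget itself fetches URLs one after another, but several copies can be run in parallel. A sketch using xargs with a plain text list of links (urls.txt is a placeholder):
Code:
# Run up to 4 wget processes at once, one URL each; -c lets interrupted files resume
xargs -n 1 -P 4 wget -c < urls.txt
aria2c is another option, with segmented and parallel downloads built in.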
View 3 Replies
Apr 25, 2010
I've looked around the other threads as well as the wget man page. I also Googled for some examples. I still cannot work it out. From the page [URL] I want to download the 48 linked files and their corresponding information pages. To do this by hand (for the first file) I click on the line that says "Applications (5)", go to the first option, "Dell - Application", open and copy the linked page ("Applies to: Driver Reset Tool"), then back on the first page click on the Download button. In the window that opens up I choose to save the file.
Then I move on to the next option (which is "Sonic Solutions - Applications") and repeat this until I have all my files. I do not want to download the many other links on this page, just the ones mentioned above, so I can take it back to my internet-less place and refer to it as if I was on the net. I am using the 9.10 LiveCD at my friend's place.
View 2 Replies
May 21, 2010
I am trying to make wget start a download automatically at startup in Ubuntu.
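One sketch, using cron's @reboot hook so the download begins at boot and logs its progress somewhere it can be checked later (URL and paths are placeholders):
Code:
# added with crontab -e
@reboot wget -c -o /home/user/startup-wget.log -P /home/user/Downloads "http://example.com/file.iso"
Adding the same wget line to /etc/rc.local (before the exit 0) is an alternative route.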
View 8 Replies
Jul 27, 2010
I am trying to download a site using wget:
Code:
sudo wget -r -Nc -mk [URL]
but it is downloading the contents of all directories and subdirectories under the domain [URL] (ignoring the 'codejam' directory), so it is downloading from links like [URL]... I want to restrict the download so that wget only downloads the things under the 'codejam' directory.
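A hedged note: --no-parent only helps if the start URL names a directory (i.e. ends with a slash), and --include-directories can pin the recursion to one subtree. A sketch, with example.com standing in for the real domain and the path assumed to be /codejam/:
Code:
# Restrict recursion to the codejam subtree; note the trailing slash on the start URL
wget -r -N -k -np -I /codejam http://example.com/codejam/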
View 9 Replies
Jan 21, 2011
I tried to download some images from Google using wget:
Code:
wget cbk0.google.com/cbk?output=tile&panoid=2dAJGQJisD1hxp_U0xlokA&zoom=5&x=0&y=0
However, I get the following errors:
--2011-01-21 04:39:05-- http://cbk0.google.com/cbk?output=tile
Resolving cbk0.google.com... 209.85.143.100, 209.85.143.101
Connecting to cbk0.google.com|209.85.143.100|:80... connected.
[code]....
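A likely explanation, offered as a guess: the unquoted & characters make the shell cut the command at output=tile and push the rest into background jobs, which matches the truncated request shown in the output above. A sketch with the whole URL quoted:
Code:
# Quote the URL so ? and & reach wget instead of the shell; -O gives the tile a sensible name
wget -O tile_0_0.jpg "http://cbk0.google.com/cbk?output=tile&panoid=2dAJGQJisD1hxp_U0xlokA&zoom=5&x=0&y=0"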
View 3 Replies
Apr 8, 2011
I am interested in making wget do a slightly different function for me. I have downloaded it, built it (1.12) and it works perfectly right out of the box. I would like to have it log in to my creditcards.citi.com https website, supply my user id and my password, "select NEXT-SCREEN label=Account Activity", then capture the account activity that comes back.
I got these three values from my Firefox Selenium script, which runs perfectly time after time. My big-picture goal is to be able, from a crontab, to dump my account activity every night at midnight. I am not married to this idea if anyone has a better or different route.
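As a rough, hedged sketch only: the general wget pattern for this is to POST the credentials once, keep the session cookies, and then request the account-activity page with those cookies. The form URL and field names below are pure placeholders; the real ones would have to be read from the login page's source, and a login flow that depends on JavaScript will not work with wget at all.
Code:
# Step 1: log in and save the session cookies (URL and field names are hypothetical)
wget --save-cookies cookies.txt --keep-session-cookies --post-data "userid=MYUSER&password=MYPASS" "https://creditcards.citi.com/login" -O /dev/null
# Step 2: fetch the account activity page with the saved cookies
wget --load-cookies cookies.txt "https://creditcards.citi.com/accountactivity" -O activity.html
Since the Selenium script already works, another route is simply running that script headlessly from cron and saving the page it captures.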
View 1 Replies
Sep 6, 2011
I need to mirror a website. However, each of the links on the site's webpage is actually a 'submit' to a CGI script that brings up the resulting page. AFAIK wget should fail on this since it needs static links.
View 1 Replies
Jul 1, 2010
I did a yum remove openldap and apparently it trashed yum and wget. How can I get them back now?
View 5 Replies
Jun 28, 2011
If I have an address, say [URL], and I want to run n wgets on it, how can I do this? I'm curious because I want to check how wget caches DNS. The wget manual says (for --no-dns-cache): "Turn off caching of DNS lookups. Normally, Wget remembers the IP addresses it looked up from DNS so it doesn't have to repeatedly contact the DNS server for the same (typically small) set of hosts it retrieves from. This cache exists in memory only; a new Wget run will contact DNS again."
The last part confuses me: "a new Wget run will contact DNS again." Does this mean that if I run a for-loop calling wget on an address, it will make a new DNS call every time? How do I avoid this?
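One sketch: the cache only lives for the duration of a single wget process, so giving one process all n requests keeps it down to one lookup (the URL is a placeholder):
Code:
# One wget run, ten fetches, one DNS lookup; URLs are fed on standard input via -i -
for i in $(seq 1 10); do echo "http://example.com/"; done | wget -i - -O /dev/null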
View 8 Replies
Sep 9, 2010
I am writing a bash script in which I need to download a few files from a server, but the glitch is that authentication is performed by an SSO/SiteMinder server.
Is anyone aware of an option or trick with wget or curl to authenticate against SSO and then download the file from the server?
The standard http-user and http-password options definitely do not suffice.
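A hedged sketch of the usual pattern: SiteMinder typically does a form-based login and then issues an SMSESSION cookie, so the script has to POST the credentials once and replay the cookie on later requests. The login URL and field names below are placeholders that would need to be read from the real login page.
Code:
# Log in against the SSO form and keep the session cookie (URL and field names are hypothetical)
curl -c cookies.txt -d "USER=myuser&PASSWORD=mypass&target=https://files.example.com/" "https://sso.example.com/login.fcc"
# Reuse the cookie to download the protected file
curl -b cookies.txt -O "https://files.example.com/report.csv"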
View 1 Replies
Jan 4, 2010
I'm trying to download a file and extract it in one line, but the extracted file is owned by me instead of root even though I'm using sudo:
Code:
sudo sh -c 'wget [URL]'
If I don't try to extract the file, it is owned by root as I expected:
Code:
sudo sh -c 'wget [URL]'
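The actual pipeline is elided above, but assuming it is something like sudo sh -c 'wget -O - URL | tar xz', two hedged explanations come to mind: if the pipe sits outside the quotes, tar runs as the normal user rather than root; and even when it does run as root, GNU tar restores the owner recorded inside the archive by default, so files come out owned by whoever built the tarball. A sketch that forces ownership to the extracting user (root under sudo); the URL is a placeholder:
Code:
# --no-same-owner tells tar to ignore the owner stored in the archive
sudo sh -c 'wget -O - "http://example.com/pkg.tar.gz" | tar xz --no-same-owner'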
View 1 Replies