General :: How Much HTTP Or FTP Data Is Going Through The Network?
Oct 19, 2010
Is there a tool or way to find out how much HTTP or FTP data is going through the network?
I have a network of 20 machines, all running Ubuntu 10.04.
Each machine has about 200 GB of data that I'd like to share with all other 19 machines for READ-ONLY purposes. The reading should be done in the FASTEST POSSIBLE way.
A friend told me to look into setting up HTTP / FTP. Is it indeed the optimal way to share data between the machines (better than NFS)? If so, how do I go about it?
UPDATE: Just to clarify, all I want is to be able (from within machine X) to access one of machine Y's files and LOAD IT INTO MEMORY. All of the files are of uniform size (500 KB). Which method is fastest (Samba / NFS / HTTP / FTP)?
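Measuring usually beats guessing for a question like this. A rough benchmarking sketch, not an answer: the hostnames, paths, and file names below are made up for illustration. Serve the directory over HTTP on machine Y, time pulling one 500 KB file into memory from machine X, then repeat over an NFS mount of the same directory and compare.
Code:
# On machine Y (Python 2 ships with Ubuntu 10.04):
cd /data/shared && python -m SimpleHTTPServer 8000
# On machine X: time an HTTP fetch straight into memory (discarded here):
time curl -s -o /dev/null http://machine-y:8000/somefile.bin
# For comparison, time a read from an NFS mount of the same directory:
time dd if=/mnt/machine-y/somefile.bin of=/dev/null bs=512k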
I'm trying to connect to a wifi network that hijacks all requests and redirects you to a page where you have to agree to the terms of use before it lets you connect to the actual outside world. This is a pretty common practice and usually doesn't pose much of a problem. However, I've got a computer running Ubuntu 9.10 server with no windowing system. How can I use the command line to agree to the terms of use? I don't have internet access on the computer to download packages via apt-get or anything like that. Sure, I can think of any number of workarounds, but I suspect there's an easy way to use wget or curl or something.
Basically, I need a command-line solution for sending an HTTP POST request, essentially clicking a button. For future reference, it'd also be helpful to know how to send a POST request with, say, a username and password, if I ever find myself in that situation in another hotel or airport.
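Something along these lines usually works. The portal URL and field names below are pure assumptions: you would first fetch the landing page and read the form's action and input names out of its HTML.
Code:
# Fetch the page the hijack redirects you to and inspect the login form:
curl -s -c /tmp/cj.txt -L http://example.com/ | less
# Then POST the fields the form expects (names here are hypothetical):
curl -b /tmp/cj.txt -c /tmp/cj.txt \
     -d "username=me&password=secret&accept=Agree" \
     http://portal.example.net/login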
I've got a Netbox NT330i [URL]
Ubuntu 10.04 (server) installed with a few minor issues, but it is generally in fine shape.
I installed apache2 and the whole LAMP stack, ready to do some Drupal work.
I noticed a logo.png wasn't loading. The Apache logs thought it was being served fine (200 responses), but it wasn't displaying because it was being corrupted.
I hunted around for a reason, stripped back apache2, removed it, and reinstalled it bare.
Same problem.
I tried nginx. Same problem.
I've compared PDFs transferred from the web servers and I can see occasional clumps of corrupt data.
Also, wget on the same box gets the files without corruption.
SSH works fine. No corruption. But medium-sized SFTP transfers can take a while; I presume because it's having to resend corrupted data until the checksums add up.
I then tried using wlan0, which seems to work fine, so it's just the ethernet port corrupting data.
From lspci, eth0 is an Atheros AR8131 rev c0.
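One avenue worth trying, offered as an assumption rather than a confirmed fix: NICs in this family have a history of hardware checksum/segmentation offload bugs that produce exactly this kind of silent corruption on the wire. Offloads can be switched off per feature and the transfers re-tested:
Code:
# Disable checksum and segmentation offloads on eth0, then re-test:
sudo ethtool -K eth0 tx off rx off tso off gso off
# Watch the interface error counters during a transfer (if the driver exposes them):
sudo ethtool -S eth0 | grep -i err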
I am trying to connect to the web interface found at [URL] using curl. This first requires login information to be entered at [URL], but I am having an issue with the login process. I am trying to submit the following form via POST:
Code:
<form action="j_security_check" method="post" id="login_form" name="login_form">
  <center>
    <table style="background: #cac1cf; FONT-SIZE: 12px;">
      <tr>
        <td align="center" colspan="2">Please enter your username and password:</td>
      </tr>
      <tr>
        <td align="right">Username</td>
        <td><input name="j_username" style="width: 250px" id="j_username" type="text"/></td>
      </tr>
      <tr>
        <td align="right">Password</td>
        <td><input style="width: 250px" name="j_password" id="j_password" type="password"/></td>
      </tr>
      <tr>
        <td colspan="2" align="center">
          <input value="Enter" name="enter" type="submit"/>
          <input value="Clear" name="Clear" type="reset"/>
        </td>
      </tr>
    </table>
  </center>
</form>
The command that I am using for this is the following:
Code:
curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" [URL]
The command is properly formatted as far as I can tell. I tested it against another website with a similar authentication scheme, using different POST variables specific to that form, and it worked fine.
When I run the above command with the -v flag, it reveals this:
Code:
* Connected to lcl.uniroma1.it (151.100.4.74) port 80 (#0)
> POST /sso/j_security_check HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: lcl.uniroma1.it
> Accept: */*
> Content-Length: 44
> Content-Type: application/x-www-form-urlencoded
>
} [data not shown]
< HTTP/1.1 408 The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser
< Date: Sat, 29 Jan 2011 15:26:41 GMT
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 1554
< Connection: close
<
{ [data not shown]
103  1554  100  1554    0    52   5081    170 --:--:-- --:--:-- --:--:-- 10223
* Closing connection #0
I cannot tell why the login timeout has expired when I try this, and my investigation has been fruitless. I saw a brief snippet on Google vaguely suggesting that the underscores in the domain name were at fault, but replacing them with their encoded counterparts did nothing to resolve the issue (and underscores should be fine unencoded, according to the standards). I have extensively perused the man pages and come up with nothing that adequately explains this behavior. I also talked to a friend who has worked with curl in his line of work, but he mostly has experience with it in the context of PHP and has not dealt with this issue before. I am running GNU/Linux 2.6.35-22-generic-pae.
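One likely explanation, offered as an educated guess rather than a confirmed diagnosis: Apache-Coyote is Tomcat's connector, and Tomcat returns exactly that 408 when a POST arrives at j_security_check without a live session, because form login is only valid as the response to a challenge for a protected page. The sketch below assumes some protected page exists under /sso/; that path is hypothetical.
Code:
# Step 1: request a protected page so Tomcat creates a session and
# remembers which resource you were after (the path here is a guess):
curl -c cookies -b cookies -L http://lcl.uniroma1.it/sso/index.jsp
# Step 2: POST the credentials against that same session:
curl -c cookies -b cookies -L \
     -d "j_username=user%40domain.com&j_password=pass" \
     http://lcl.uniroma1.it/sso/j_security_check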
I'm trying to connect to my employer's VPN; after fixing various minor issues I reach a point where the DNS entries and the default gateway of the VPN are overwritten with the values of the default eth0 device. Therefore the VPN is not usable.
We know that, in a network, data transfer time depends on the capacity of the receiving node. I have two Linux systems, A (sender) and B (receiver), connected. I want to simulate congestion in the network, meaning I want to increase the time taken to transfer some data. I believe I can do this if I reduce the capacity of the receiver node B, which should be possible by decreasing the receive buffer size on B. How can I change the receive buffer size of this Linux system B?
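A minimal sketch of one way to do that, assuming the transfer runs over TCP: the kernel's receive buffer limits are runtime-tunable via sysctl, and the numbers below are only illustrative.
Code:
# On B, shrink the TCP receive buffer (min, default, max in bytes):
sudo sysctl -w net.ipv4.tcp_rmem="4096 8192 16384"
# Cap what applications may request via SO_RCVBUF:
sudo sysctl -w net.core.rmem_max=16384
# The receiving program itself can also call
# setsockopt(fd, SOL_SOCKET, SO_RCVBUF, ...) with a small value.
# For simulating congestion more directly, 'tc qdisc ... netem' with
# delay/loss on the sender's interface is another option.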
I need to have openSUSE 11.2 use my proxy server here in the office, and it is reached by hostname/IP:8080 only, not as an HTTP URL. The problem is that in YaST2 I don't have the option of using the proxy that way; it wants http. I've been using openSUSE on and off since version 9 (a great flavor, BTW, my favorite): as easy as you need it to be and just as complicated as you want it to be, a perfect mix.
I'm trying to set up a PHP script on my HTTP server. The script is "Gen2". I followed all of the script's instructions and set it up on the server, but when I open it in the web browser this error happens:
(error screenshot posted on Free Image Hosting)
I tried other scripts and they work correctly.
I don't think it is a script error, because it was tested on another web server.
*** I didn't write the script; I downloaded it from the internet.
Here is the script: gen2.rar
I've been attempting to set up PXE/HTTP network installs so we can better handle deployments for new systems. I have a test CentOS 5.4 VM running, and another test VM that I want to deploy 5.4 to. TFTP and DHCP are working correctly. The Apache 2.2.3 config "seems" OK. When I kick off the VM which I will install to, DHCP discovery and IP allocation work, the TFTP server is found, and I am presented with a menu of OS selections.
I choose #1 for my 5.4, but then it immediately tells me:
"Invalid or corrupt kernel image"
/var/log/messages doesn't show anything other than the DHCP OFFER/ACK process and that the TFTP client doesn't accept options.
/var/log/httpd/error_log doesn't show anything either.
Not sure where else to look for diagnosis.
My Apache content directory: /var/www/html/CentOS
Content listing:
[root@CentOS-test CentOS]# ls -la
total 4515700
drwxr-xr-x 4 root root 4096 Jul 9 10:38 .
drwxr-xr-x 3 root root 4096 Jul 8 16:46 ..
-rwxrwxrwx 1 root root 4619468800 Jul 6 15:54 CentOS5.4.iso
-rwxrwxrwx 1 root root 932 Jul 6 17:37 initrd.img
-rwxrwxrwx 1 root root 70 Jul 6 17:37 ks.cfg
drwxr-xr-x 2 root root 4096 Jul 9 10:38 msgs
-rw-r--r-- 1 root root 13100 Jul 9 10:37 pxelinux.0
drwxr-xr-x 2 root root 4096 Jul 9 10:38 pxelinux.cfg
-rw-r--r-- 1 root root 932 Jul 9 10:33 vmlinuz
pxelinux.cfg contains:
0A000000
default
pxeos.xml
My Apache DocumentRoot: /var/www/html/CentOS
Directives:
<Directory "/var/www/html/CentOS">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>
Forgot to add these lines from my pxelinux.cfg/default file:
label 1
kernel 5.4/vmlinuz
append initrd=5.4/initrd.img ramdisk_size=16000 method=http://10.37.129.3/CentOS ip=dhcp
(I think I have found my problem: 5.4 was relative to the TFTPBOOT directory, but now that I'm using HTTP, I changed this to
kernel CentOS/vmlinuz and append initrd=CentOS/initrd.img.) The question is, will just changing this work?
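Worth noting before anything else: in the listing above, both vmlinuz and initrd.img are 932 bytes, which is far too small to be a real kernel or ramdisk, and that alone would produce "Invalid or corrupt kernel image". Assuming good copies are put under the TFTP root, a sketch of how the entry might look, with the caveat that pxelinux fetches the kernel and initrd over TFTP (paths relative to the TFTP root) while the method= tree is what the installer fetches over HTTP:
Code:
label 1
  # fetched over TFTP, relative to the tftpboot directory:
  kernel CentOS/vmlinuz
  # the installer fetches the method= tree over HTTP from Apache:
  append initrd=CentOS/initrd.img ramdisk_size=16000 method=http://10.37.129.3/CentOS ip=dhcp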
Does somebody know a way to install CentOS with an HTTP install into a VLAN network? I need to tag the interface during IP configuration.
View 1 Replies View RelatedIn Network Settings in OpenSuse 11.1 while using "Traditional Method with ifup" I am able to set up a box as a server and connect via http over the net BUT Firefox cannot browse. If I switch to "User Controlled with NetworkManager" I can run Firefox but my server is not contactable. How do I do both?
My box has to connect to the internet using a specified HTTP proxy. I have set the proxy in both the KDE control center and the YaST2 control center. They both tell me the proxy works fine. But when I really try to use YaST2 to update my system, it reports an error:
Code:
Failed to download ./repo/repoindex.xml from [URL]
History: - [AbstractCommand.cc:195] URI = [URL]
Even if I try
Code:
export http_proxy=http://XXXX
and then run yast on the command line, the error still exists.
In Debian's apt-get and Slackware's slackpkg my proxy works fine, so I am sure it is not my fault; maybe it is a bug in YaST2.
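One hedged guess: YaST runs as root, so a proxy exported in a normal user's shell may never reach it. openSUSE also honors the system-wide /etc/sysconfig/proxy file, so something like this might be worth a try (the proxy URL is a placeholder):
Code:
# As root, append system-wide proxy settings:
cat >> /etc/sysconfig/proxy <<'EOF'
PROXY_ENABLED="yes"
HTTP_PROXY="http://proxy.example.com:8080"
EOF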
My setup:
* a router/gateway: the external interface has the public IP, another interface the DMZ, a third the internal room
* a DMZ with the web server
* an internal network (the internet public room)
I redirect HTTP port 80 to the web server, and from outside you can see it there. But I can't see this web site from the internal room. From the public IP/URL I get some sort of "non existent" message (sorry, forgot to copy it). If I call the private IP, I get the home page (but not the CSS files). The gateway NATs the networks. What is the trick to see the web site from the internal network?
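What is usually missing in this layout is "hairpin NAT" (NAT reflection): LAN clients hitting the public IP get DNATed to the DMZ server, but the server answers the client directly over the internal path, so the connection breaks. A hedged iptables sketch; every address below is a made-up placeholder:
Code:
# DNAT traffic aimed at the public IP, as you already do:
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.10.80
# Additionally SNAT internal clients that reach the DMZ server this way,
# forcing the reply back through the gateway:
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -d 192.168.10.80 \
         -p tcp --dport 80 -j SNAT --to-source 192.168.10.1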
My computer shares an internet connection using an ADSL router; there are three other machines. I have set up an Apache server for learning purposes, and I want it to be inaccessible from anywhere else, including the other PCs on the network. When I enter my assigned IP address (192.168.1.1xx) from another computer, I get my pages, and I don't want that.
How can I block HTTP requests from other computers?
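Two hedged options, sketched below: bind Apache to the loopback interface only, or drop non-local port-80 traffic with a firewall rule. The file path is the usual Ubuntu default, not verified for this system.
Code:
# Option A: in /etc/apache2/ports.conf change "Listen 80" to
#   Listen 127.0.0.1:80
# and restart Apache.
# Option B: drop port-80 traffic that doesn't arrive on loopback:
sudo iptables -A INPUT -p tcp --dport 80 ! -i lo -j DROP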
Using netcat, nc(1), craft a valid HTTP/1.1 request for getting the HTTP headers (not the HTML file itself!) for the main index page of www.aalto.fi. What request method did you use? Which headers did you need to send to the server? What was the status code for the request? Which headers did the server return? Explain the purpose of each header.
nc -v www.aalto.fi 8080
HEAD / HTTP/1.1
Host: www.aalto.fi
And it returns:
200 OK
Content-Length: 858
Content-Type: text/html
Last-Modified: Thu, 02 Sep 2010 12:46:01 GMT
[Code]....
I really don't know what this means. Question 2: Using netcat, nc(1), start a bogus web server listening on the loopback interface, port 8080. Verify with netstat that the server really is listening where it should be. Direct your browser to the bogus server and capture the User-Agent: header. I don't understand the part "direct your browser to the bogus server and capture the User-Agent: header".
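It just means: let nc play the web server, visit it with a browser, and read the raw request the browser sends, since the User-Agent: line is part of that request. A sketch, noting that nc's listen syntax varies between the BSD and traditional variants:
Code:
# Listen on loopback port 8080 (BSD netcat; traditional nc uses: nc -l -p 8080):
nc -l 127.0.0.1 8080
# In a second terminal, verify it is listening:
netstat -tln | grep 8080
# Now browse to http://127.0.0.1:8080/ and nc prints the browser's request,
# including its User-Agent: header, to the terminal.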
I installed Nagios on my Ubuntu 10.04 server using apt-get, and when I accessed the web console everything was OK. I made some changes to Apache (creating some new virtual sites) and since then Nagios gives me a warning for HTTP: "HTTP WARNING: HTTP/1.1 404 Not Found". The sites that I created are working perfectly. I noticed that the attempts are at 4/4. Does this need to be reset, or does Nagios automatically reset it once it detects the issue is resolved?
Problems with launching data files from the NAS, and saving to them, appear to be a KDE problem. The .desktop files have to contain access rights for smb/http etc., and even when given these it still will not work. I have mainly concentrated on getting the VLC video player to work, as it is capable of playing from just about any source and comes with codecs etc. An amazing package, really.
Pure KDE apps such as KWrite at least work fine. I tried setting up Samba, but to no avail.
As dropping a file into VLC's window didn't do anything, I created a VLC desktop icon and dragged the NAS file onto that. It plays, and a KDE error message pops up from the plasma shell: can't find file!
I enabled KDE automount. Its content when it starts is disturbing: it shows my system disks as detachable and not attached! No need to worry, though. I selected mount on login and on attachment where the server was shown. VLC still wouldn't work.
Next I enabled NFS file transfers on the NAS. This has allowed me to use "open with" directly on an .avi file on the NAS; I can also click-launch them. The remaining problem is opening files on the NAS from within VLC. Up pops the KDE message "you can only select local files". The file manager here seems to be an instance of Dolphin, which suggests there is going to be a problem saving files to the NAS as well. That looks to be the case: VLC can convert formats and all sorts of things, but if I select a file locally, try to convert it, and save to the NAS, up pops "you can only select local files" as soon as I click OK after setting the path and file name.
The strange thing is that the working transfers seem to be using CIFS, even though it took enabling NFS to get it partly working via KDE's automount. Dolphin only allows a CIFS setup, which has a distinct advantage in that a direct IP address can be entered. The automount has introduced a very, very long delay before KDE is up and running after login. Samba is even worse in this respect, and both seem to lack a method of direct IP input, which means they have to discover the server.
One other aspect: as far as NFS is concerned, from a very recent post elsewhere, Nautilus works; I can't speak for CIFS. And of course it's all instantaneous and OK on Windows, even on Vista. Enabling the TV protocol on the NAS has confused Vista, as it now only wants to connect like that and needs drivers. That might also be down to having NFS enabled, though; MS might not like that.
I have filed all of this on Bugzilla if anyone would like to vote: bug number 695648. It seems to me that the CIFS route should be the default for ease of use on home networks. I'm also sure that the problem is basically KDE preventing apps from accessing the NAS.
I have trouble with the internet on different Linux x64 systems on my laptop (Lenovo ThinkPad SL510), but if I load WindowsPE all is OK. What could it be? Where should I search? There is a hardware firewall/NAT/gateway in my local network; it allows only connections to destination ports TCP 80 (HTTP) and UDP 53 (DNS), with no fragments, no ICMP, deny in, etc. The Windows internet (the same Firefox) works fine, while under Linux sites don't load fully or give "connection timed out". Yet if I can start downloading a file, it downloads in full (I have downloaded a DVD ISO of SuSE). DNS through nslookup doesn't respond every time. Decreasing the MTU to 1372 didn't help, and neither did deactivating IPv6. What could it be? What is different between the Windows and Linux DNS clients? Is there an alternative DNS client in SuSE? Is the trouble only in DNS?
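One hedged hypothesis that fits "Windows works, Linux half-works behind a strict firewall": glibc's resolver sends its A and AAAA DNS queries in parallel from the same source port, which some firewall/NAT devices mishandle, while Windows queries sequentially. There is a resolver option to force sequential behavior:
Code:
# Force the glibc resolver to send DNS queries one at a time:
echo "options single-request" >> /etc/resolv.conf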
I have a normal user (sites:users) and the usual HTTP user (wwwrun:www). I'm hosting several sites and I want to be able to upload stuff via FTP, so I'm using the "sites" home (/home/sites) to keep the sites I'm hosting. Giving read permissions to all inside /home/sites makes it accessible and readable to the wwwrun user. Problems come when I need to upload something. The easy way is to give 777 permissions to the folder that's going to receive the file, but I don't feel comfortable at all with that.
What do you recommend? Is there any group configuration that could help me (like adding "sites" to the "www" group)? Or any other configuration at all that might be in line with best practices?
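A hedged sketch of the common group-based arrangement; the user and group names come from the post, the exact layout is an assumption. Keep the tree owned by "sites" but group-owned by "www", so the web server can read it without world permissions while uploads stay writable only by the owner:
Code:
# As root: hand the group to the web server, drop world access:
chown -R sites:www /home/sites
chmod -R 750 /home/sites
# setgid on directories so newly uploaded files inherit the www group:
find /home/sites -type d -exec chmod g+s {} \;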
I've been running a SuseStudio-built VM for a few weeks with no issues. I built a new one recently, and now I can't configure a new vhost in Apache using the http-server module. It gives me this error (screenshot: Screen shot 2010-11-06 at 11.42.24 AM.png). Why has YaST suddenly decided that my hostnames aren't valid?
I have tftpd-hpa and dhcp3-server up and running. I just want to install the server edition via network from the host machine (my laptop, running Ubuntu 9.10) using an ISO file (Ubuntu 8.04 32-bit server edition). I managed to boot the client machine with the PXE netboot technique, but instead of downloading all the files from the internet, I need to do this process directly from the ISO. To transfer the ISO contents from host to client I also installed Apache. I unpacked the ISO file into /var/lib/tftpboot/server/ and created a link to the Apache root, /var/www:
Code:
ubuntu@ubuntu:/var/www$ ls
returns => index.html server
The server folder is where I unpacked the ISO.
My dhcp3-server has this setup and it works well with netboot, but I don't know how to add Apache to the formula to transfer the ISO contents from host to client. The firewall is disabled. This is my edited /etc/dhcp3/dhcpd.conf file:
Code:
host pxeinstall {
    hardware ethernet 00:06:29:DE:E3:CD;
    fixed-address 192.168.2.4;   # client IP
    next-server 192.168.2.2;     # host IP
    filename "/server/install/netboot/pxelinux.0";   # relative to tftpboot
}
subnet 192.168.2.0 netmask 255.255.255.0 {
    range 192.168.2.2 192.168.2.5;
    option routers 192.168.2.1;
}
When I PXE-boot the client, the process comes to a halt when the TFTP server is trying to access the pxelinux.0 file. I get this error:
PXE-T00: Permission denied
PXE-E36: Error received from TFTP server
I have no experience with Apache... so I think there is a problem with my IP addresses. Do I need to use 127.0.1.1 instead of 192.168.2.1 (my router's IP)?
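Two hedged observations. First, PXE-T00 "Permission denied" happens on the TFTP side, before Apache is ever involved, so it is worth checking that the filename path resolves under tftpd-hpa's root and that the files are world-readable. Second, once pxelinux loads, the Ubuntu installer can be pointed at the Apache copy of the unpacked ISO with debian-installer mirror parameters on the kernel command line; the kernel/initrd locations below assume a standard 8.04 netboot tree.
Code:
# Check TFTP-side permissions (paths are relative to /var/lib/tftpboot):
ls -l /var/lib/tftpboot/server/install/netboot/pxelinux.0
# Sketch of a pxelinux entry that pulls packages from Apache:
cat > /var/lib/tftpboot/pxelinux.cfg/default <<'EOF'
label install
  kernel server/install/netboot/ubuntu-installer/i386/linux
  append initrd=server/install/netboot/ubuntu-installer/i386/initrd.gz mirror/protocol=http mirror/http/hostname=192.168.2.2 mirror/http/directory=/server
EOF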
How do I connect my network data card to run the internet on openSUSE?
I installed WordPress 3.x on my localhost Apache server, but I can neither install plugins nor update anything. This happens with both the stable WP 3.0 version and the 3.1 beta. When I try to search the Plugin Directory from the WP dashboard, I get this message: "An Unexpected HTTP Error occurred during the API request." When I run an update, I get a page asking for the login credentials of the FTP user ("To perform the requested action, WordPress needs to access your web server. Please enter your FTP credentials to proceed. If you do not remember your credentials, you should contact your web host."). Since I'm part of the 'ftp' group on the system, I enter my system login information, click Proceed, and get a blank page that does nothing.
I've gone to YaST, and I see that the system FTP user has a 6-character password (which may or may not be mine). I'm afraid to change it and risk screwing up other FTP-related functions. I'm running openSUSE 11.3, and am obsessive about updating. I will note that I have an old 2WIRE router that often requires me (including for Zypper repos) to enter IP addresses instead of DNS-based URLs to successfully download stuff. Not sure if this is related, but just in case...
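A commonly suggested workaround, offered as an assumption for this setup rather than a verified fix: WordPress only asks for FTP credentials when it cannot write to its own tree as the web server user. Giving that user ownership and telling WordPress to write directly to disk usually makes the prompt go away. The install path below is assumed.
Code:
# openSUSE's Apache runs as wwwrun:www; let it own the WordPress tree:
chown -R wwwrun:www /srv/www/htdocs/wordpress
# Tell WordPress to skip FTP and use direct filesystem writes:
echo "define('FS_METHOD', 'direct');" >> /srv/www/htdocs/wordpress/wp-config.php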
How do I create a virtual web site (name-based) accessible over HTTP and HTTPS simultaneously?
Example
The server has IP address 192.168.251.22 and virtual IP address 192.168.151.22.
Target: create a virtual web site (name-based) accessible over HTTP and HTTPS simultaneously.
I can create a virtual site (name-based), but it will be accessible ONLY over HTTP or ONLY over HTTPS.
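A minimal sketch of the usual Apache answer: define the same name-based vhost once per port. The ServerName, paths, and certificate files below are placeholders.
Code:
# As root, append two vhost definitions (adjust names and paths):
cat >> /etc/apache2/vhosts.d/example.conf <<'EOF'
<VirtualHost 192.168.251.22:80>
    ServerName www.example.com
    DocumentRoot /srv/www/example
</VirtualHost>

<VirtualHost 192.168.251.22:443>
    ServerName www.example.com
    DocumentRoot /srv/www/example
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl.crt/example.crt
    SSLCertificateKeyFile /etc/apache2/ssl.key/example.key
</VirtualHost>
EOF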
A few months ago I set up a server with three hard disks. The partition mapping of the disks is as follows:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x7ca36fee
[code]....
Now I have the following problem: the LVM file system doesn't mount properly. If I open the mount point I see only a few of the files on the LVM disk. If I try to unmount the disk I get the following error:
umount /data/
umount: /data/: not mounted
If I try to mount the volume I get the following error:
mount -a
mount: /dev/mapper/gegevens-Data already mounted or /data busy
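A hedged diagnostic sketch; the volume group name "gegevens" is taken from the error above, the rest is the standard checklist for "already mounted or busy":
Code:
# What does the kernel think is mounted at /data?
grep data /proc/mounts
# Is any process holding the mount point open?
lsof +D /data
# Is the volume group active and the logical volume visible?
vgchange -ay gegevens
lvs
# With the LV *unmounted*, check the filesystem, then retry the mount:
fsck /dev/mapper/gegevens-Data
mount /dev/mapper/gegevens-Data /data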
I am trying to do PXE boots for some servers without DVD/CD drives. I can do the PXE boot and load the installer boot image, but from there I would like the installation media to come from the internet rather than a locally mounted disc. The boot installer will ask for the location (HTTP or FTP), so is there one out there somewhere?
I had set up an SSL secure server a while back, such that [url] works but [url] does not (note the difference: in the first I use HTTPS, whereas in the second I use HTTP). How can I get both to coexist?
View 7 Replies View Relatedhow can i disable http when they browsing the URL instead of http should be https
i have apache server installed in centos
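The usual approach, sketched with placeholder names, is not to switch port 80 off but to redirect it to HTTPS so plain-HTTP requests land on the secure site:
Code:
# As root, add a catch-all redirect vhost on port 80, then restart:
cat > /etc/httpd/conf.d/redirect-ssl.conf <<'EOF'
<VirtualHost *:80>
    ServerName www.example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>
EOF
service httpd restart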
Usually we require VNC to take remote sessions. There was another one; I think it was called xrdp. I am asking this out of curiosity: is there any way to take remote sessions using HTTP? Like in web conferencing, where we invite users to join the conference and can then share the desktop. Is there any way to do this on a one-to-one basis? Does such a technology exist for Linux (for any distro)?