General :: Analyse The Output Of Tcpdump ?
Jul 14, 2010 - I am trying to analyze the output of tcpdump, but I am unable to figure out what the output means. I am worried that the security of my computer may have been compromised, judging by this output.
How can I convert a tcpdump output file to pcap format? Is there a way to do that?
This is what I mean:
tcpdump -i eth0 >> test.out
Now I want to convert test.out to pcap so it's readable in Wireshark.
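One point worth noting: the redirected text above only contains tcpdump's decoded summaries, not the raw packets, so it cannot be turned back into a pcap file. A minimal sketch of capturing straight to a Wireshark-readable file with -w instead:
Code:
# -w writes the raw packets in pcap format; open test.pcap in Wireshark afterwards
tcpdump -i eth0 -w test.pcap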
I'm trying to get webalizer to analyse some log files. The server uses virtual hosts, has log rotation on, and also uses turbopanel (now known as Simple Control Panel). Because of this, the documentation is limited and webalizer works in a weird way. I found this perl script under turbopanel called webalizerrun.pl; the code is as follows:
Code:
#!/usr/bin/perl
$WEBALIZER = "/usr/bin/webalizer";
chomp($var = shift);
$wdir = "$var/conf/webalizer";
opendir(DIR, $wdir) or die "Unable to read $wdir: $!";
[Code]...
Here's what I want to do, and I believe I can do it using this code with slight modifications. As of now, the log files for each site are in the folder specified above, named "domain-name_access_log", and the log rotation just adds a number to the end of that. I want to use this perl script to run webalizer for a particular site and have its output placed in a directory.
1.) Line 4: chomp($var = shift): I know chomp is used to remove trailing characters, but what character in this case? How may I find that out? Also what does $var = shift do inside chomp?
2.) Line 8: What exactly does the readdir function do? What does it return to $domain?
The rest seems similar to csh: it checks whether each entry is a directory or a file, then changes into the directory and runs webalizer on it.
How do I get the manual for tcpdump?
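The manual ships with the package and is read with man; on newer libpcap installs the capture filter syntax may also be in its own page:
Code:
man tcpdump
man pcap-filter    # may not exist on older systems; the filter syntax is then inside man tcpdump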
Since yesterday Firestarter has been prompting me that it is blocking external connection attempts, as shown in the picture below. I'm not even going to bother covering the IP addresses because I personally don't see why I should care, but as you can see, there have been loads of them attempting to connect to ports 3674 - 3675. I ran nmap 127.0.0.1 and it came back with 631 being the only open port. So then I thought maybe lsof -i would show much more, but all it showed was:
@boris:~$ cat meh
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cupsd 1644 root 5u IPv6 14329 0t0 TCP localhost:ipp (LISTEN)
[code]...
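One way to see what those connection attempts actually contain (a sketch; it assumes the external interface is eth0): capture just that port range with tcpdump and inspect the packets.
Code:
sudo tcpdump -n -vv -i eth0 portrange 3674-3675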
I'm trying to capture traffic between two machines, A and B. I would like to make sure that the traffic I capture with tcpdump is between eth1 on the local machine and eth0 on the remote machine. As I understand it, the -i flag specifies the local machine interface - but how do I specify the remote one?
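As far as I know, tcpdump has no concept of a remote interface; the closest you can get is filtering on the remote side's address, either its IP or the MAC of its eth0. A sketch, with placeholder addresses:
Code:
# filter by the remote host's IP address (placeholder)
tcpdump -i eth1 host 192.168.1.20
# or by the MAC address of the remote eth0 (placeholder)
tcpdump -i eth1 ether host 00:11:22:33:44:55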
When I send a packet to any destination and want to see the MAC addresses of the source and destination, I use the command tcpdump -qec1, but rather than getting the source and destination MAC addresses, each time I get the MAC address of whichever system is broadcasting. Can anybody tell me how to get the source and destination MAC addresses even when other packets are being broadcast on my network?
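With -c1 the capture stops at the first frame that arrives, which is often a broadcast. One way around it (a sketch) is to keep -e for the link-level header but filter out broadcast and multicast frames, or restrict the capture to the host you are actually talking to:
Code:
# -e prints the Ethernet header (source and destination MACs)
tcpdump -e -i eth0 not broadcast and not multicast
# or only frames to/from a specific host (placeholder address)
tcpdump -e -i eth0 -c 5 host 192.168.1.50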
Does gzip have the capability to decode gzipped traffic? I have been beating my head against the wall with this issue. What I'm trying to do is capture traffic between a web server and clients, and I've got it set up so that it's redirected to a file for ease of grepping; however it's seemingly incapable of decoding gzipped encoding. I know I can do this with tshark; I'm curious as to whether tcpdump has this capability (i.e. only using tcpdump, and not some additional tool like tcpshow or what-not).
I can't find much on this issue in the man page for tcpdump, but it is fairly lengthy, so it's possible that I missed something, but I don't see that as especially likely.
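From what I can tell, tcpdump only prints raw packet contents (-A / -X) and knows nothing about HTTP content encodings, so it cannot inflate gzipped bodies, and piping the capture through gzip -d won't work either, since the compressed data is interleaved with TCP/IP and HTTP headers rather than being a clean gzip stream. A hedged sketch of the usual two-step alternative, capturing with tcpdump and decoding with tshark (assumes a Wireshark build recent enough to support object export):
Code:
tcpdump -i eth0 -s 0 -w web.pcap port 80
tshark -r web.pcap --export-objects http,./http_objects   # bodies are written out decompressed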
What is the syntax to capture packets from multiple hosts with tcpdump? I tried: tcpdump ip host host1|host2|host3
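For what it's worth, pcap filters join alternatives with "or" rather than "|" (and the pipe character would be interpreted by the shell anyway). A sketch:
Code:
tcpdump ip host host1 or host2 or host3
# equivalent, with explicit grouping:
tcpdump 'ip and (host host1 or host2 or host3)'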
I have a linux box with two interfaces: eth0 is a builtin and eth1 is a USB-LAN adapter.
There is an IP configured on eth1.
eth0 is up but no IP is configured. This interface is used for sniffing with tcpdump.
The problem is that eth0 frequently stops receiving packets -- my tcpdump captures are empty, and if I look at the interface stats with ifconfig, I can see that no packets are received.
If I bounce the interface (ifconfig eth0 down; ifconfig eth0 up), it starts receiving packets again.
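One thing that may be worth ruling out (just a guess, since the underlying cause could well be the driver): whether the interface is being left down or dropping out of promiscuous mode. A quick check/reset sketch:
Code:
ip link set dev eth0 up
ip link set dev eth0 promisc on
ip -s link show eth0        # watch the RX counters to confirm packets are arriving again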
I'm using Fedora 9. I cannot start wireshark or tcpdump because of a library dependency error:
Code:
[root@localhost ~]# wireshark
wireshark: error while loading shared libraries: libpcap.so.0.9: cannot open shared object file: No such file or directory
I updated libpcap earlier and the latest version is libpcap.so.1.1. I changed the version because of another application, but I cannot remember exactly when I did it; perhaps on Sep. 11?
Code:
[root@localhost lib]# ll |grep libpcap
-rw-r--r-- 1 root root 309670 2010-09-11 08:10 libpcap.a
lrwxrwxrwx 1 root root 12 2010-09-11 08:10 libpcap.so -> libpcap.so.1
lrwxrwxrwx 1 root root 14 2010-09-11 08:10 libpcap.so.1 -> libpcap.so.1.1
-rwxr-xr-x 1 root root 243207 2010-09-11 08:10 libpcap.so.1.1
So I tried
Code:
ln -s libpcap.so.1.1 libpcap.so.0.9
but it doesn't work.
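A couple of hedged suggestions: the symlink only helps if it sits in a directory the dynamic loader searches and the cache is refreshed afterwards, and even then there is no guarantee the 1.x library is ABI-compatible with what this wireshark build expects (reinstalling the distribution's matching libpcap and wireshark packages is the safer fix):
Code:
cd /usr/lib
ln -sf libpcap.so.1.1 libpcap.so.0.9
ldconfig                              # refresh the loader cache
ldd $(which wireshark) | grep pcap    # confirm which libpcap it now resolves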
Trying to use tcpdump -r TEST, I get permission denied even though I am logged in as root or superuser. I tried "chmod a+rw TEST" (or any other file, for that matter; yes, the file came from another source) and still get permission denied.
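One thing that might explain root being denied (purely a guess, and distribution-dependent): a mandatory access control system such as SELinux, rather than the classic file permissions. A quick way to check:
Code:
getenforce                      # Enforcing / Permissive / Disabled
ls -lZ TEST                     # show the file's SELinux context
ausearch -m avc -ts recent      # recent denials, if auditd is running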
View 4 Replies View RelatedHas anyone noticed ABRT not being able to connect to Bugzilla or the remote Core Dump analysis server? Since my upgrade during the beta phase of F15 I have been unable to file bug reports from ABRT's traces, nor has it offered me the option to file bugs to bugzilla. What it does, however, is offer me to analyse the Core Dump and backtrace of the crash using either the remote server or locally with GDB. Needless to say I'm clueless in GDB, and it requires a LOT of debug symbols found in the corresponding debug packages. But trying to use the remote analysis server always results in a message of the server being busy and to try again later. IIRC the last time I was able to file a bug through ABRT was in 13, has anyone been able to do so in 15?
This is probably the wrong forum to start in and I am clutching at straws here, hoping someone can point me in the right direction. I own a cheap IP network webcam, bought from eBay (like this one: ); it doesn't list its manufacturer anywhere and just says it is the F-Series IP Camera. It works well, both in Firefox and IE. However, you can only get a live video stream with audio out of it using IE and an ActiveX plugin, or the supplied Windoze-only IP Camera Super-Client software, which doesn't like Wine.
As I am mainly an Ubuntu user, I was wondering if anyone knew of any tricks to get hold of the two streams, maybe using VLC to view them? I've tried IE4Linux and Play-On-Linux to install IE in Ubuntu, but the ActiveX control unsurprisingly doesn't work. I was hoping that:
-someone can point me to a forum that might have someone who knows
-someone can tell me how to analyse the network traffic that IE is getting, to find an address for the video and audio feeds
-someone has some experience with these cameras and knows the answer to all my questions
I've tried using wireshark, but the output makes no sense to me. I've figured out various addresses to the video stream: [URL] But I can't get the audio stream. I was hoping to use it as a baby monitor; it has excellent night viewing capabilities with its IR LEDs, but having no audio in Ubuntu with it is a pain.
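On the traffic-analysis part, one rough approach (a sketch, with a placeholder camera address, and assuming the capture runs somewhere that sees the Windows machine's traffic): watch which URLs IE requests from the camera, since the audio feed is usually just another HTTP or RTSP path.
Code:
# -A prints packet payloads as ASCII, so GET lines and RTSP DESCRIBE/SETUP
# requests show up readably; 192.168.0.99 is a placeholder for the camera.
tcpdump -A -s 0 -i eth0 'host 192.168.0.99 and (port 80 or port 554)'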
I would like to know the command lines for:
-detecting the wifi networks in my house without being connected to them
-getting the IPs/MAC addresses of the people connected to the wifi
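A hedged sketch for both, assuming a wireless interface named wlan0 and, for the second part, that the aircrack-ng suite is installed (the monitor-interface name varies between versions):
Code:
# list nearby access points without associating
sudo iwlist wlan0 scan | grep -E 'ESSID|Address|Channel'
# seeing client MAC addresses generally needs monitor mode
sudo airmon-ng start wlan0
sudo airodump-ng wlan0mon    # or mon0 on older releases; shows APs and their associated client MACs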
I am running slackware-current and I have tcpdump-4.1.1-i486-1.txz installed. If I remember right, libpcap used to be part of tcpdump, but recently I cannot find it on my system anymore! Tools like nmap give me the error message:
"error while loading shared libraries: libpcap.so.1: cannot open shared object file: No such file or directory"
I am trying to create a dump log using tcpdump. I want to display the top 10 IP addresses, sorted numerically, showing how many times each IP is hitting the server. I'm getting frustrated because it's not working how I'd like it to.
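One rough way to do it (a sketch; the field positions assume IPv4 lines in tcpdump's default text output, and dump.pcap is a placeholder file name): pull out the source address, count occurrences, and keep the ten busiest.
Code:
# -t drops timestamps, -nn keeps raw addresses/ports; field 2 is then src.port
tcpdump -tnn -r dump.pcap 2>/dev/null \
  | awk '{print $2}' | cut -d. -f1-4 \
  | sort | uniq -c | sort -rn | head -10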
I have configured an NFS server on CentOS 5.2 with an IBM web server (AIX) as the client. The IBM web server can upload all data onto the NFS server. Today I was seeing slow response on the IBM web server, and while measuring NFS I found the errors below when running "tcpdump" on the NFS server.
tcpdump -n -i eth1 | grep 2049
18:36:37.237451 IP 10.100.19.241.2049 > 10.100.19.88.1758143293: reply ok 1448 read [|nfs]
18:36:37.237476 IP 10.100.19.241.2049 > 10.100.19.88.539981409: reply ERR 1448
18:36:37.237481 IP 10.100.19.241.2049 > 10.100.19.88.796287348: reply ERR 1448
18:36:37.237488 IP 10.100.19.241.2049 > 10.100.19.88.1986098295: reply ERR 1448
18:36:37.237566 IP 10.100.19.241.2049 > 10.100.19.88.539762736: reply ERR 1448 .....
18:36:37.238263 IP 10.100.19.241.2049 > 10.100.19.88.1869440302: reply ERR 1448
16133 packets captured
23339 packets received by filter
7100 packets dropped by kernel
10.100.19.241 is the IP of the NFS server and 10.100.19.88 belongs to the IBM web server.
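A small aside on the capture itself: grepping the text output also matches "2049" wherever it appears in a line; a capture filter restricts things to the NFS port and is cheaper, and saving full packets lets wireshark decode the RPC replies properly later:
Code:
tcpdump -n -i eth1 port 2049
tcpdump -n -i eth1 -s 0 -w nfs.pcap port 2049    # for offline analysis in wireshark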
Is there a way to do multiple interfaces in tcpdump? I have found that when using "-i any", not all packets are captured (compared to "-i eth0" on a machine with only one interface). I need to monitor traffic on some machines with as many as 6 interfaces, and get these packets that "-i any" misses. When I give the "-i" option multiple times, it seems to only use the last one.
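One workaround (a sketch, not a fix for the "-i any" behaviour itself): run one capture per interface and merge the files afterwards with mergecap from the wireshark package.
Code:
for i in eth0 eth1 eth2; do
    tcpdump -i "$i" -s 0 -w "cap-$i.pcap" &
done
wait                                   # or kill the captures when done
mergecap -w all.pcap cap-*.pcap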
I'm trying to capture packets to a file with the -w option, but the file is empty; yet if I use '-w -' to put the data on stdout, I see plenty of captured packets. I'm using CentOS 5.5 x86.
Code:
[root@server ~]# tcpdump -v -i eth0 -w dump -s0
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
[code]....
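One likely explanation (a guess): tcpdump buffers writes to a save file, so the file can look empty until the buffer fills or the capture is stopped cleanly with Ctrl-C. If the build supports it, -U flushes each packet to the file as it is captured:
Code:
tcpdump -v -i eth0 -U -w dump -s0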
When attempting to run a tcpdump in the background (IPSO) with the following command:
I get the message:
However, the command runs fine without the '&' at the end of the line. Are there syntax restrictions on using '&'?
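Without the actual command and error message it is hard to say, but one common culprit on restricted shells is job control. A hedged alternative that usually detaches cleanly:
Code:
nohup tcpdump -i eth0 -w capture.pcap > /dev/null 2>&1 &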
I have tcpdump installed on Ubuntu 10.04 LTS; I can do a tcpdump --help and it lists the options, but I get "no device found" when I run tcpdump from the terminal window. My Ubuntu is also having trouble looking up domains; it just sits there and hangs looking up google.com. I'm on an AT&T 3mb DSL dry line, running an Asus netbook and a Biostar VIA mobo desktop, and both have trouble looking up domains right out of the DSL modem. I would try to set the DNS in Ubuntu, but I don't know how to do that without knowing the gateway and such. I have to get the IP of the computer, the netmask, the gateway, and the DNS for the manual setup.
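For the manual-setup part, the pieces can be read off the current DHCP-assigned configuration; a sketch, assuming the wired interface is eth0:
Code:
ifconfig eth0            # IP address and netmask
route -n                 # the 0.0.0.0 line shows the default gateway
cat /etc/resolv.conf     # DNS servers currently in use
sudo tcpdump -D          # tcpdump usually needs root to list/capture on interfaces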
I was wondering how one could set up tcpdump to run in the background, dumping all output to a file until I terminate the process. Here is the dilemma... I SSH into the box that will be listening (using tcpdump)...
ssh> sudo tcpdump -i eth0 > dump_file
yadda yadda...
then if I exit my ssh session, tcpdump closes.
If I do a...
ssh> sudo tcpdump -i eth0 > dump_file &
[1] 12938
yadda yadda.
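Two common ways to keep the capture alive after the SSH session ends (sketches): detach it with nohup, or run it inside screen and detach.
Code:
sudo nohup tcpdump -i eth0 > dump_file 2>/dev/null &
# or:
screen -S capture
sudo tcpdump -i eth0 > dump_file
# detach with Ctrl-a d; logging out of the SSH session no longer kills the capture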
I am trying to install libpcap and tcpdump, but the configure step still fails even though I have already installed Flex, as the terminal tells me to do. What else could I do?
Code:
configure: error: Your operating system's lex is insufficient to compile libpcap. Flex is a lex replacement that has many advantages, including being able to compile libpcap. For more information, see [URL].
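A couple of things that may be worth checking (guesses): whether configure can actually find flex on the PATH, and forcing it explicitly:
Code:
which flex && flex --version
LEX=flex ./configure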
I need to start a tcpdump and then download a file by FTP. I can't see any way of achieving this in the tcpdump man page.
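One way to sequence it from a shell (a sketch; the FTP host is a placeholder, and passive-mode data connections may use ports other than 20):
Code:
tcpdump -i eth0 -s 0 -w ftp-session.pcap 'port 21 or port 20' &
TCPDUMP_PID=$!
ftp ftp.example.com          # do the download here
kill "$TCPDUMP_PID"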
I would like to set up tcpdump to rotate its log file every hour and retain files for the last 14 days, but I don't think any combination of -C and -W allows that (at least I haven't been able to figure it out), so I am trying to rotate the files every X MB and retain the last 20 files. That seems fairly simple with '-C X -W 20', but I am having trouble customizing the names of the log files. I tried '-w capture-$(date +%Y-%M-%d-%H:%M-)', thinking that each file would start with the current date and time, but all the files use the date and time when the capture was started, so the only difference is the number at the end (which is added by -W). Can I customize the file names so that each one carries the date and time when that capture file is started? In fact, if I can do that, I don't need the numbers that -W appends at the end, but I don't know how to get rid of them.
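If the tcpdump build is new enough, -G rotates on time and expands strftime patterns in the -w name, which gives each hourly file its own start time; note that combining -W with -G typically makes tcpdump stop after that many files rather than wrap, so a separate cleanup handles the 14-day retention. A sketch (paths are placeholders):
Code:
tcpdump -i eth0 -s 0 -G 3600 -w '/var/captures/capture-%Y-%m-%d_%H%M.pcap'
# daily cron job to enforce the 14-day retention:
find /var/captures -name 'capture-*.pcap' -mtime +14 -delete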
I'm running NetWare SLES 10 SP3 with OES2 SP2. I was working with the folks at Novell to resolve an iPrint Print Manager problem.
During the process they wanted to perform a packet capture using tcpdump. While logged in as the root user, I got the error "no suitable device found" and received no data at all. This server is running on a VMware Center. On other SLES 10 SP3 systems (residing on that same VMware Center), tcpdump captures packets just fine. I inherited all of these servers, so I wasn't here during the initial build, but my guess is that they were configured similarly. On a server that I built recently, tcpdump works fine. On two of my servers it does not, and gives the mentioned error.
It's not that big a deal, otherwise the Servers are communicating and working just fine. But, I'd like to get it working just because it's supposed to work. Students are off for the summer, so I have time to play.
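A couple of low-risk checks on the affected servers (just guesses): whether libpcap can enumerate any interfaces at all, and what the kernel itself reports:
Code:
tcpdump -D          # list interfaces tcpdump believes it can capture on
ip link show        # interfaces the kernel knows about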
The only window that's open is the terminal running this command; no pidgin, skype, samba, torrent or anything else I can think of is using the network, yet there is a ***** load of output from tcpdump. I was hoping to use this to check where certain applications connect to and what messages they send, but when I'm doing nothing there is already more output than I can go through. Running tcpdump for less than 10 seconds gives me the following output:
Code:
16:13:22.015683 IP ns.hihkptt.net.cn.domain > desk.local.56598: 46887 1/2/2 (166)
16:13:22.016251 IP ns.hihkptt.net.cn.domain > desk.local.60099: 21168 1/2/2 (166)
16:13:22.016743 IP ns.hihkptt.net.cn.domain > desk.local.42325: 50346 1/2/2 (166)
16:13:22.034733 IP ns.hihkptt.net.cn.domain > desk.local.41441: 63658 1/2/0 (134)
16:13:22.035215 IP ns.hihkptt.net.cn.domain > desk.local.42865: 37537 1/2/0 (134)
16:13:22.036124 IP ns.hihkptt.net.cn.domain > desk.local.35006: 7520 1/2/0 (134)
16:13:22.036569 IP ns.hihkptt.net.cn.domain > desk.local.38480: 51322 1/2/0 (134)
16:13:22.066006 ARP, Reply 192.168.0.1 is-at 00:b0:0c:02:60:9c (oui Unknown), length 46 .....
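Most of that background noise is DNS replies and ARP; filtering it out (or restricting the capture to the traffic you care about) makes the output manageable. A sketch, with a placeholder local address:
Code:
tcpdump -n -i eth0 not port 53 and not arp
# or watch only outbound connections from this host to a given port:
tcpdump -n -i eth0 'src host 192.168.0.5 and dst port 80'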
We have a Linux box with tcpdump continuously running on it to monitor a bunch of sources. A separate tcpdump process runs in the background for each host being monitored. I use the -w option to save the capture in pcap format so I can analyze it later. Now what I need is this: if the Linux machine gets rebooted in the middle of its packet-capturing activity, I want tcpdump to automatically start again for every host without overwriting the previous captures.
Remember: Without overwriting previous captures . . .
Basically, I will be keeping all the tcpdump commands in a shell script and will run the script at startup during the Linux boot. Is there any way to achieve this, so that on reboot tcpdump does not overwrite the previous captures?
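One simple approach (a sketch; the hosts, interface and path are placeholders): put a boot timestamp into each file name, so a capture started after a reboot can never collide with an existing file.
Code:
#!/bin/sh
# started once at boot; each run gets a unique timestamp in its file names
STAMP=$(date +%Y%m%d-%H%M%S)
for host in 10.0.0.1 10.0.0.2; do
    tcpdump -i eth0 -s 0 -w "/var/captures/${host}-${STAMP}.pcap" host "$host" &
done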
I am trying to dump some packets using tcpdump and it does not seem to be working.
System is Fedora 12
tcpdump v4.1
libpcap v1.0
I even rolled my own:
tcpdump v4.1.1
libpcap v1.1.1