Networking :: How To Create Dump Log Using TCPdump
Dec 7, 2010
I am trying to create a dump log using tcpdump. I want to display the top 10 IP addresses, sorted numerically, showing how many times each IP is hitting the server. I'm getting frustrated because it's not working the way I'd like it to.
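One common way to get that top-10 list (a minimal sketch, assuming the server's public interface is eth0 and that counting the source address of each captured packet is what's wanted) is to let tcpdump print the packets and post-process them with awk, sort and uniq:
Code:
# Capture 1000 IPv4 packets, pull out the source IP of each one
# (tcpdump prints "src.ip.port > dst.ip.port", so the port is stripped off),
# then count occurrences and show the ten busiest addresses.
tcpdump -n -i eth0 -c 1000 ip 2>/dev/null \
  | awk '{ split($3, a, "."); print a[1]"."a[2]"."a[3]"."a[4] }' \
  | sort | uniq -c | sort -rn | head -10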
My application team is asking me to generate a kernel dump.
Here are the details of my server. OS: RHEL 4.7, 32-bit. Kernel version: 2.6.9-89.0.23.ELsmp. Processor: Intel(R) Xeon(R) CPU E5520 @ 2.27GHz. Hardware: HP ProLiant 380 G6 series server.
I am using Electric Cloud applications. Sometimes they cause a kernel panic and the server immediately reboots. The kernel-debuginfo RPM is not installed. In some thread I read that the kernel-debuginfo RPM's version should match the kernel version; in my case I couldn't even find a kernel-debuginfo package of the exact matching version.
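A quick way to see what is actually installed versus what the matching kernel-debuginfo package would have to be (a small sketch; the exact package set depends on which RHEL 4 channels you have access to):
Code:
# The kernel-debuginfo package must match this release string exactly.
uname -r
# Show any debuginfo packages already installed (prints nothing if there are none).
rpm -qa | grep -i kernel-debuginfo
# Show the installed kernel packages and versions for comparison.
rpm -qa | grep '^kernel'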
I am using RHEL 4.7 (32-bit) on an HP ProLiant 380 G6 series server. We are running Electric Cloud Agents on these servers. Lately we have been facing some memory issues; they cause a kernel panic and then the server restarts. When I reported the issue to my application team, they asked me to come back with the core dump. I googled around and then set the ulimit value to unlimited (previously it was 0; I added an entry to /etc/profile as follows: ulimit -c unlimited). But still, whenever the server restarts because of that kernel panic, no core dump is generated. My application is installed in /opt.
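One thing worth noting: ulimit -c only controls core files for crashing user-space processes; a kernel panic is not written out through that mechanism at all, so on RHEL 4 a crash-dump facility such as netdump or diskdump (or kdump, where the running update supports it) would likely be needed to capture the panic itself. For the per-process side, a minimal sketch of settings to verify (values shown are common defaults, not read from this system):
Code:
# Should print "unlimited" in the shell (and environment) that starts the application.
ulimit -c
# Where the kernel writes core files; a plain "core" means the process's working directory.
cat /proc/sys/kernel/core_pattern
# Optionally append the PID so repeated crashes don't overwrite each other's cores.
sysctl -w kernel.core_uses_pid=1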
I'm working on a script. After it has installed and removed packages, I need to configure a ton of settings. In GNOME, I understand that those settings are kept in /home/user/.gconf. Can I create a virtual machine, configure the system through the GUI to my liking, and then dump all of the settings so that I can load them on another machine? Is it as simple as copying the directories?
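Copying ~/.gconf between machines can work, but GNOME also provides an export/import path via gconftool-2, which avoids carrying over machine-specific cruft (a sketch, assuming gconftool-2 exists on both systems and the same user applies the settings):
Code:
# On the configured VM: dump the whole GConf tree (or a subtree such as /apps) to XML.
gconftool-2 --dump / > gnome-settings.xml
# On the target machine, as the target user: load the saved settings back in.
gconftool-2 --load gnome-settings.xml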
I have used the dump command to back up the application files. For a full backup, level 0 works fine. For an incremental backup I used level 1 or 2, and I get the following error:
DUMP: Only level 0 dumps are allowed on a subdirectory
DUMP: The ENTIRE dump is aborted.
The code I used:
Code:
#!/bin/bash
# Full Day Backup Script
# application folders backup
# test is the username
now=$(date +"%d-%m-%Y")
[Code]...
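The error comes from dump itself: incremental levels (1, 2, ...) are only supported when the target is a whole filesystem, so a dump of a subdirectory is restricted to level 0. A sketch of the usual workaround, assuming the application data sits on its own filesystem (here /opt; the backup paths are placeholders):
Code:
# Full (level 0) dump of the /opt filesystem; -u records it in /etc/dumpdates.
dump -0u -f /backup/opt.level0.dump /opt
# A later level 1 run against the same filesystem then captures only the changes.
dump -1u -f /backup/opt.level1.dump /opt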
I want to generate core dump files from my program when it crashes. It's a pretty big process with about 10-11 threads in it. I have followed the documentation to enable core dumps by setting ulimit to unlimited, etc. I quickly tried "A demo program creating a core dump" from the following webpage, which succeeds in segfaulting and dumping a core file in the directory that I configured. However, when I ran my original program and caused it to crash, either by calling kill() or raise() or by the same null-pointer access shown on the webpage above, the program crashed in each case but did not generate a core dump file. Am I missing something? My program is in C++ and my environment is Red Hat 9.0 (kernel 2.4.20).
Going through the "Why do I NOT get a core dump?" section on the same webpage, I can see two potential problems. One: there are issues with suid/sgid (bullet #6), and I am not able to change any suid settings because my system has neither /proc/sys/fs/suid_dumpable nor /proc/sys/kernel/suid_dumpable. Two: my program has threads in it, and bullet #8 is the problem.
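Before digging into the threading bullet, two quick checks are worth repeating in the exact shell that launches the program, since a ulimit set elsewhere (for example in a login shell) is not inherited by daemons started from init scripts. A minimal sketch (./myprogram is a placeholder for the real binary, and the core_pattern file may not exist on every 2.4 kernel):
Code:
# Raise and confirm the core limit in this very shell.
ulimit -c unlimited
ulimit -c
# Where cores go; a plain "core" means the crashing process's working directory.
cat /proc/sys/kernel/core_pattern 2>/dev/null
# Start the program from this shell so it inherits the unlimited core limit.
./myprogram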
Is there a way to do multiple interfaces in tcpdump? I have found that when using "-i any", not all packets are captured (compared to "-i eth0" on a machine with only one interface). I need to monitor traffic on some machines with as many as 6 interfaces, and get these packets that "-i any" misses. When I give the "-i" option multiple times, it seems to only use the last one.
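tcpdump binds to a single device per invocation, apart from the any pseudo-device, which does not put interfaces into promiscuous mode and may be why it misses packets. One workaround is simply to run one capture per interface in parallel, each writing its own file (a sketch; interface names and paths are assumptions):
Code:
# One background tcpdump per interface, each with its own pcap file.
for ifc in eth0 eth1 eth2 eth3 eth4 eth5; do
    tcpdump -i "$ifc" -w "/var/tmp/capture-$ifc.pcap" &
done
# When finished, stop all the captures started by this shell.
kill $(jobs -p)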
I'm trying to capture packets to a file with the -w option, but the file ends up empty; yet if I use '-w -' to put the data on stdout, I see plenty of captured packets. I'm using CentOS 5.5 x86.
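tcpdump buffers writes to a savefile, so a short capture can look empty until the process exits cleanly; ending it with Ctrl-C (rather than killing it hard), or asking for packet-buffered output with -U where the installed version supports it, usually fixes this. A minimal sketch:
Code:
# -U flushes each packet to the file as it arrives; -c 100 stops after 100 packets.
tcpdump -i eth0 -U -c 100 -w /tmp/test.pcap
# Read the file back to confirm the packets really landed in it.
tcpdump -nn -r /tmp/test.pcap | head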
I would like to set up tcpdump to rotate its log file every hour and retain the files for the last 14 days, but I don't think any combination of -C and -W allows that (at least I haven't been able to figure it out). So instead I am trying to rotate the files every X MB and retain the last 20 files. That part seems fairly simple with '-C X -W 20', but I am having trouble customizing the names of the log files. I tried '-w capture-$(date +%Y-%M-%d-%H:%M-)', thinking that each file would start with the current date and time, but all of the files use the date and time at which the capture was started, so the only difference is the number at the end (appended by -W). Is there a way to customize the file names so that each one carries the date and time at which that particular file is started? In fact, if I could do that, I wouldn't need the numbers that '-W' appends at the end, but I don't know how to get rid of them.
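Newer tcpdump releases can do exactly this with time-based rotation: -G rotates the savefile every N seconds, and when -G is used the -w argument is passed through strftime(3), so each file is stamped with the time that file was started (as an aside, in a date format %M is minutes; the month is %m). Whether -G is available depends on the tcpdump version installed. A sketch:
Code:
# New capture file every 3600 seconds; the %-escapes expand per file.
tcpdump -i eth0 -G 3600 -w '/var/tmp/capture-%Y-%m-%d_%H:%M:%S.pcap' &
# Separately (e.g. from cron), delete capture files older than 14 days.
find /var/tmp -name 'capture-*.pcap' -mtime +14 -delete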
I'm running NetWare SLES 10 sp3 with OES2 sp2. I was working with the folks at Novell to resolve an iPrint Print Manager problem.
During the process they wanted to perform a packet capture using tcpdump. While logged in as the root user I got the error "no suitable device found", and I received no data at all. This server is running in a VMware vCenter environment. On other SLES 10 SP3 systems (residing in that same VMware vCenter), tcpdump captures packets just fine. I inherited all of these servers, so I wasn't here for the initial build, but my guess is that they were configured similarly. On a server that I built recently, tcpdump works fine; on two of my servers it does not, and gives the error mentioned above.
It's not that big a deal; otherwise the servers are communicating and working just fine. But I'd like to get it working, just because it's supposed to work. Students are off for the summer, so I have time to play.
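A reasonable first step on the servers that fail is to ask libpcap and the kernel which capture devices they can actually see, since "no suitable device found" generally means libpcap found no usable interface at all (a sketch; device names will differ):
Code:
# List every interface tcpdump/libpcap believes it can capture on.
tcpdump -D
# Compare with what the kernel itself reports.
ip link show
# Then try naming an interface explicitly instead of letting tcpdump pick one.
tcpdump -i eth0 -c 10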
The only window that's open is the terminal running this command; no Pidgin, Skype, Samba, torrent client or anything else I can think of is using the network, yet there is a huge amount of output from tcpdump. I was hoping to use this to check where certain applications connect to and what messages they send, but even when I'm doing nothing there is already more output than I can go through. Running tcpdump for less than 10 seconds gives me the following output:
Code:
16:13:22.015683 IP ns.hihkptt.net.cn.domain > desk.local.56598: 46887 1/2/2 (166)
16:13:22.016251 IP ns.hihkptt.net.cn.domain > desk.local.60099: 21168 1/2/2 (166)
16:13:22.016743 IP ns.hihkptt.net.cn.domain > desk.local.42325: 50346 1/2/2 (166)
16:13:22.034733 IP ns.hihkptt.net.cn.domain > desk.local.41441: 63658 1/2/0 (134)
16:13:22.035215 IP ns.hihkptt.net.cn.domain > desk.local.42865: 37537 1/2/0 (134)
16:13:22.036124 IP ns.hihkptt.net.cn.domain > desk.local.35006: 7520 1/2/0 (134)
16:13:22.036569 IP ns.hihkptt.net.cn.domain > desk.local.38480: 51322 1/2/0 (134)
16:13:22.066006 ARP, Reply 192.168.0.1 is-at 00:b0:0c:02:60:9c (oui Unknown), length 46
.....
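Almost all of that is DNS replies (the "domain" port) plus ARP, i.e. normal background chatter rather than application traffic, so filtering it out makes the remaining output far easier to read. A minimal sketch, assuming eth0 is the interface in use:
Code:
# Hide DNS and ARP; -n also stops tcpdump's own reverse lookups from
# generating yet more DNS traffic while you watch.
tcpdump -n -i eth0 'not port 53 and not arp'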
I have configured an NFS server on CentOS 5.2, serving an IBM Web Server that runs AIX 5.3. The IBM Web Server can upload all of its data onto the NFS server. Today I was seeing slow responses on the IBM Web Server, and while investigating NFS I found the errors below when running the "tcpdump" command on the CentOS server.
tcpdump -n -i eth1 | grep 2049
18:36:37.237451 IP 10.100.19.241.2049 > 10.100.19.88.1758143293: reply ok 1448 read [|nfs]
18:36:37.237476 IP 10.100.19.241.2049 > 10.100.19.88.539981409: reply ERR 1448
18:36:37.237481 IP 10.100.19.241.2049 > 10.100.19.88.796287348: reply ERR 1448
[code]....
I have changed the network card in the CentOS server. The whole LAN is on a gigabit network. I have also changed the network cable (patch cord), but there is still no improvement.
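As an aside, grepping the decoded text for "2049" will also match any timestamp or port that merely contains those digits; a capture filter on the NFS port is more precise and shows both directions of the conversation (a sketch; eth1 and the addresses are taken from the output above):
Code:
# Only NFS (port 2049) traffic between the NFS server and the web server;
# -n avoids name lookups and -v adds a little protocol detail.
tcpdump -n -v -i eth1 'port 2049 and host 10.100.19.88'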
I have a Linux box with two interfaces: eth0 is a built-in NIC and eth1 is a USB LAN adapter.
There is an IP configured on eth1.
eth0 is up but no IP is configured. This interface is used for sniffing with tcpdump.
The problem is that eth0 frequently stops receiving packets -- my tcpdump captures are empty, and if I look at the interface stats with ifconfig, I can see that no packets are received.
If I bounce the interface (ifconfig eth0 down; ifconfig eth0 up), it starts receiving packets again.
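While chasing the root cause (driver, power management, promiscuous-mode handling, etc.), a small watchdog can at least detect the stall and bounce the interface automatically. A rough sketch that treats the symptom only; the interface name and polling interval are assumptions, and it relies on the /sys statistics files present on reasonably recent kernels:
Code:
#!/bin/bash
# Bounce eth0 whenever its receive counter has not moved for one polling interval.
IFACE=eth0
INTERVAL=60
prev=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
while sleep "$INTERVAL"; do
    cur=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
    if [ "$cur" -eq "$prev" ]; then
        echo "$(date): $IFACE stopped receiving, bouncing it" >> /var/log/eth0-watchdog.log
        ifconfig "$IFACE" down
        ifconfig "$IFACE" up
    fi
    prev=$cur
done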
I am running a test to determine when packet drops occur. I'm using a Spirent TestCenter through a switch (necessary to aggregate Ethernet traffic from 5 ports onto one optical link) to a server using a Myricom card. While running my test, if the input rate is below a certain value, ethtool does not report any drops (except dropped_multicast_filtered, which increments at a very slow rate). However, tcpdump reports X packets "dropped by kernel". Then, if I increase the input rate, ethtool reports drops but "ifconfig eth2" does not; in fact, ifconfig doesn't seem to report any packet drops at all. Do they all measure packet drops at different "levels", i.e. ethtool at the NIC level, tcpdump at the kernel level, etc.? And am I right to say that in the journey of an incoming packet, the NIC is the first level, then the kernel, then the user application? So any packet drop is likely to happen first at the NIC, then the kernel, then the user application? And if there is no packet drop at the NIC but there is a drop in the kernel, then the bottleneck is not the NIC?
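Those tools do look at different layers, so putting their counters side by side usually shows where the loss happens (a sketch; eth2 is taken from the question above, and the exact statistic names vary by driver):
Code:
# NIC/driver level: counters exposed by the Myricom driver.
ethtool -S eth2 | grep -i drop
# Kernel interface level: per-interface drop and error columns.
cat /proc/net/dev
# Capture level: the "packets dropped by kernel" figure in tcpdump's summary
# counts packets the capture socket could not hand to tcpdump in time.
tcpdump -n -i eth2 -c 10000 -w /dev/null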
I need to make a rather odd filter in tcpdump: I would like to capture, on interface eth0, only those packets that are outgoing (in other words, from IP 192.168.1.1, which is the IP of eth0 on this computer) and that do not have the source MAC address 11:22:33:44:55:66. However, the following command says that the syntax is wrong:
Code:
tcpdump -n -p -i eth0 src host 192.168.1.1 ether src not 11:22:33:44:55:66
Is this possible? If yes, then what is the correct command?
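The filter primitives most likely just need to be joined with an explicit "and", and quoting the whole expression keeps the shell from touching it. A sketch of the corrected command, using the addresses from the question:
Code:
# Outgoing packets from 192.168.1.1 whose Ethernet source is NOT 11:22:33:44:55:66.
tcpdump -n -p -i eth0 'src host 192.168.1.1 and not ether src 11:22:33:44:55:66'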
I have a WAN that I need to do some traffic-flow analysis on. I did a lot of googling to figure out which tools are useful for collecting packet information, and I found this site http://scrutin.wordpress.com/2007/04...-tcpdump/ which I made great use of to get to know the tcpdump tool. I also have a network simulator on the Windows platform, OPNET Guru (by the way, is there a Linux version of this simulator?).
MY QUESTION IS: How can I feed OPNET Guru with the flow data collected with tcpdump and its various options?
NOTE: In the OPNET Guru environment there is an object called the profile that is used to customize and generate data flows with the desired characteristics to simulate the real flows. So I need to feed OPNET with the fresh data collected with the tcpdump tool (command) instead of using the built-in profile. I hope I was clear enough.
I have set the iptables INPUT policy to DROP. As I expected, tcpdump wasn't showing any packets... for a while. Then it suddenly began to show UDP syslog packets being sent by a remote host. That is consistent with the syslog configuration, but since the INPUT policy is DROP with no exceptions, it does not seem consistent with the iptables configuration. Why, after setting the INPUT policy to DROP with no exceptions, are most of the packets that used to arrive now being dropped while some still show up in tcpdump?
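This is expected rather than a firewall leak: tcpdump taps the interface via libpcap below netfilter, so incoming packets are copied to the capture socket before the INPUT chain ever gets to drop them. TCP peers eventually give up once they get no replies, while a UDP syslog sender keeps transmitting blindly, which is likely why only those packets keep appearing. One way to confirm the packets really are being dropped (a sketch):
Code:
# Zero the INPUT counters, wait for some syslog traffic, then check that the
# chain's packet/byte counters are climbing even though nothing is delivered.
iptables -Z INPUT
sleep 30
iptables -L INPUT -v -n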
I am running Slackware -current and I have tcpdump-4.1.1-i486-1.txz installed. If I remember correctly, libpcap used to be part of the tcpdump package, but recently I cannot find it on my system anymore. Tools like nmap give me the error message:
"error while loading shared libraries: libpcap.so.1: cannot open shared object file: No such file or directory"
I have configured an NFS server on CentOS 5.2 with an IBM Web Server (AIX) as the client. The IBM Web Server can upload all of its data onto the NFS server. Today I was seeing slow responses on the IBM Web Server, and while investigating NFS I found the errors below. I ran the "tcpdump" command on the NFS server.
tcpdump -n -i eth1 | grep 2049
18:36:37.237451 IP 10.100.19.241.2049 > 10.100.19.88.1758143293: reply ok 1448 read [|nfs]
18:36:37.237476 IP 10.100.19.241.2049 > 10.100.19.88.539981409: reply ERR 1448
18:36:37.237481 IP 10.100.19.241.2049 > 10.100.19.88.796287348: reply ERR 1448
18:36:37.237488 IP 10.100.19.241.2049 > 10.100.19.88.1986098295: reply ERR 1448
18:36:37.237566 IP 10.100.19.241.2049 > 10.100.19.88.539762736: reply ERR 1448
.....
18:36:37.238263 IP 10.100.19.241.2049 > 10.100.19.88.1869440302: reply ERR 1448
16133 packets captured
23339 packets received by filter
7100 packets dropped by kernel
10.100.18.241 is the IP of the NFS server and 10.100.19.88 is the IP of the IBM Web Server.
I am trying to analyze the output of tcpdump, but I am unable to figure out what the output means, and I am worried that the security of my computer may have been compromised, judging by what I see.
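It may help to capture into a file first and then decode it at leisure with more verbosity, so the same packets can be examined repeatedly and compared against known hosts (a sketch; the interface and file names are assumptions):
Code:
# Grab a few hundred packets without name lookups.
tcpdump -n -i eth0 -c 500 -w /tmp/suspect.pcap
# Replay the capture with verbose decoding and ASCII payload to see what is inside.
tcpdump -nn -vv -A -r /tmp/suspect.pcap | less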
I checked, and I have tcpdump installed on Ubuntu 10.04 LTS; 'tcpdump --help' lists the options, but when I run tcpdump from the terminal window I get "no device found". My Ubuntu machine is also having trouble looking up domains: it just sits there and hangs looking up google.com. I'm on AT&T 3 Mb DSL (dry line), running an Asus netbook and a Biostar VIA motherboard desktop, and both have trouble looking up domains straight out of the DSL modem. I would try to set the DNS in Ubuntu manually, but I don't know how to do that without knowing the gateway and so on; for the manual setup I need the IP of the computer, the netmask, the gateway, and the DNS.
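These are likely two separate problems: tcpdump saying "no device found" is usually a permissions issue (it needs root to open a capture device), and the address, gateway and DNS details needed for a manual setup can be read off the running system. A sketch of the relevant commands:
Code:
# tcpdump generally needs root to open a capture device.
sudo tcpdump -i eth0 -c 10
# Current IP address and netmask of each interface.
ifconfig
# Default gateway (the line whose destination is 0.0.0.0).
route -n
# DNS servers currently in use.
cat /etc/resolv.conf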
I was wondering how one could set up tcpdump to run in the background, dumping all of its output to a file, until I terminate the process. Here is the dilemma... I SSH into the box that will be listening (using tcpdump)...
ssh> sudo tcpdump -i eth0 > dump_file
yadda yadda...
then if I exit my ssh session, tcpdump closes.
If I do a...
ssh> sudo tcpdump -i eth0 > dump_file &
[1] 12938
yadda yadda.
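The background job still belongs to the SSH session, so it gets killed (or stopped waiting on the terminal) when the session ends. Detaching it with nohup, writing packets with -w instead of shell redirection, and sending the console chatter to a file usually does the trick; screen or tmux would work just as well. A sketch:
Code:
# Start the capture detached from the terminal; raw packets go to the pcap file,
# tcpdump's own console output goes to a log instead of the tty.
sudo nohup tcpdump -i eth0 -w /var/tmp/dump_file.pcap > /var/tmp/tcpdump.out 2>&1 &
# Log out, come back later, then stop it cleanly:
sudo pkill -f 'tcpdump -i eth0'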
I am trying to install libpcap and tcpdump, but the libpcap build fails even though I have already installed Flex, as the terminal told me to do. What else could I do?
Code:
configure: error: Your operating system's lex is insufficient to compile libpcap. Flex is a lex replacement that has many advantages, including being able to compile libpcap. For more information, see [URL].
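libpcap's configure needs a working flex and a yacc-compatible generator (bison) on the PATH, and a failed run can leave stale results behind, so installing flex alone after the fact is not always enough. A sketch of the usual recovery steps (the package-manager line assumes a Debian/Ubuntu-style system; adjust for your distribution):
Code:
# Make sure both tools are installed and visible on the PATH.
sudo apt-get install flex bison
which flex bison
# From a clean source tree, remove any cached results and reconfigure.
cd libpcap-*
rm -f config.cache
./configure
make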