I'm trying to capture packets to a file with the -w option, but the file is empty, yet if I use '-w -' to put the data on stdout I see plenty of captured packets. I'm using CentOS 5.5 x86.
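One thing worth checking (a sketch, assuming interface eth0, a writable path, and a tcpdump recent enough to support -U): by default tcpdump buffers the file output, so a capture can look empty until the buffer is flushed on exit.
Code:
# -U flushes each packet to the file as it is captured instead of
# waiting for the output buffer to fill
tcpdump -i eth0 -U -w /tmp/capture.pcap
# read the file back to confirm packets were actually written
tcpdump -r /tmp/capture.pcap | head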
I have a WAN network that I need to do some traffic-flow analysis on. I did a lot of googling to figure out which tool would be useful for collecting packet information, and I found this site [URL], which I made great use of to get to know the tcpdump tool. I also have a network simulator on the Windows platform, Opnet Guru (by the way, is there a Linux version of this simulator?).
MY QUESTION IS: How can I feed Opnet Guru with the flow data collected with tcpdump and its various options?
NOTE: In the Opnet Guru environment there is an object called the profile that is used to customize and generate data flows with the desired characteristics to simulate the real flows. So I need to feed Opnet with the fresh data collected with the tcpdump tool (command) instead of using the built-in profile. I hope I was clear enough.
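For what it's worth, a capture written with tcpdump's -w option is a standard pcap file; whether and how Opnet Guru can import such a trace is something the Opnet documentation would have to confirm. A minimal capture sketch, assuming the WAN-facing interface is eth0:
Code:
# -s 0 captures full packets, -w writes them in pcap format
tcpdump -i eth0 -s 0 -w wan-traffic.pcap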
Does k3b require free hard drive space at the finalize-media stage? I'm trying to write a 17.6 GB file to 25 GB media with 18 GB of free space on the hard drive. My disc seems to write to 100% and then displays "Error". This is the debugging output:
I am using an ARM9 (S3C2440) board to which Linux kernel 2.6.30.4 has been ported. I need to write some 8-bit data (say 0xAA) into memory location 0x08000000 and then verify that the data written to that location is correct. How do I do this in Linux?
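A minimal sketch from user space, assuming 0x08000000 is a physical address that is safe to poke on this board and that the kernel allows access through /dev/mem (no strict devmem restriction):
Code:
# busybox's devmem applet, if present: devmem ADDRESS [WIDTH [VALUE]]
devmem 0x08000000 8 0xAA     # write the byte
devmem 0x08000000 8          # read it back to verify
# alternatively, with dd against /dev/mem
printf '\xAA' | dd of=/dev/mem bs=1 seek=$((0x08000000)) conv=notrunc count=1
dd if=/dev/mem bs=1 skip=$((0x08000000)) count=1 2>/dev/null | od -An -tx1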
Are there any tools out there that let me select a bunch of data and burn it to multiple CDs or DVDs? I'm using k3b but have to manually select CD- and DVD-sized amounts.
I have been trying to write a simple snippet of bash shell code to import from 1 to 100 records into Bash arrays.
I have a CSV file that is structured like:
record1,item1,item2,item3,item4
record2,item1,item2,item3,item4
record3,item1,item2,item3,item4
record4,item1,item2,item3,item4
And I would like to get this data into corresponding arrays, as in: $record1[item1-4] $record2[item1-4] $record3[item1-4] $record4[item1-4]
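A minimal sketch, assuming the file is called records.csv, each line has exactly a record name plus four items as above, and the input is trusted (declare evaluates the compound assignment):
Code:
#!/bin/bash
while IFS=, read -r name i1 i2 i3 i4; do
    # creates e.g. record1=(item1 item2 item3 item4)
    declare -a "$name=(\"$i1\" \"$i2\" \"$i3\" \"$i4\")"
done < records.csv

echo "${record1[2]}"   # prints item3 (bash arrays are zero-indexed)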
I synthesized a seismogram using Fortran code, and I need to plot the synthesized seismogram and the data together so I can verify the accuracy of the code. Now I have a question: how do I read SAC data with Fortran code? I have searched for some code on the Internet; the details follow. The file velr12a.sac is my data file.
Code:
c read sac file
      PROGRAM RSAC
      PARAMETER (MAX=1000)
      DIMENSION YARRAY(MAX)
      CHARACTER*10 KNAME
I am trying to create a dump log using tcpdump. I want to display the top 10 IP addresses, sorted numerically, showing how many times each IP is hitting the server. I'm getting frustrated because it's not working how I'd like it to.
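One possible pipeline, as a sketch: it assumes an existing capture file named capture.pcap and IPv4 traffic, and relies on field 3 of tcpdump's default output being the source address with the port appended (hence the cut to drop the trailing .port):
Code:
tcpdump -nn -r capture.pcap ip 2>/dev/null \
  | awk '{print $3}' \
  | cut -d. -f1-4 \
  | sort | uniq -c | sort -rn | head -10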
Is there a way to capture on multiple interfaces with tcpdump? I have found that when using "-i any", not all packets are captured (compared to "-i eth0" on a machine with only one interface). I need to monitor traffic on some machines with as many as 6 interfaces and catch the packets that "-i any" misses. When I give the "-i" option multiple times, it seems to use only the last one.
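One workaround sketch: run a separate tcpdump per interface and merge the resulting files afterwards with mergecap (shipped with Wireshark). The interface names and packet count below are just examples:
Code:
for i in eth0 eth1 eth2; do
    tcpdump -i "$i" -c 100000 -w "capture-$i.pcap" &
done
wait    # wait until every capture has collected its packets
mergecap -w combined.pcap capture-eth*.pcap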
I would like to set up tcpdump to rotate the log file every hour and retain files for the last 14 days, but I don't think any combination of -C and -W allows me to do that (at least I haven't been able to figure it out), so I am trying to rotate the files every X MB and retain the last 20 files. That seems to be fairly simple with '-C X -W 20', but I am having some trouble customizing the names of the log files. I tried '-w capture-$(date +%Y-%M-%d-%H:%M-)' thinking that each file would start with the current date and time, but all files use the date and time when the capture was started, so the only difference is the number at the end (which is added by -W). I would like to customize the file names so that each one carries the date and time when that file was started. In fact, if I can do that, I don't need the numbers that '-W' appends at the end, but I don't know how to get rid of them.
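A sketch of the time-based approach, assuming a tcpdump new enough to support -G (rotation every N seconds) and strftime escapes in the -w filename; old files can then be pruned separately, for example from cron:
Code:
# start a new file every hour, named with the time that file was started
tcpdump -i eth0 -G 3600 -w 'capture-%Y-%m-%d_%H%M.pcap'
# delete captures older than 14 days (adjust the path)
find /path/to/captures -name 'capture-*.pcap' -mtime +14 -delete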
I'm running NetWare SLES 10 sp3 with OES2 sp2. I was working with the folks at Novell to resolve an iPrint Print Manager problem.
During the process they wanted to perform a packet capture using tcpdump. While logged in as the root user I got the error "no suitable device found" and received no data at all. This server is running on a VMware Center. On other SLES 10 sp3 systems (residing on that same VMware Center), tcpdump captures packets just fine. I inherited all of these servers, so I wasn't here during the initial build, but I'd guess that they were configured similarly. On a server that I built recently, tcpdump works fine. On two of my servers it does not, and gives the mentioned error.
It's not that big a deal, otherwise the Servers are communicating and working just fine. But, I'd like to get it working just because it's supposed to work. Students are off for the summer, so I have time to play.
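A diagnostic sketch for comparing an affected server with a working one (interface names will differ):
Code:
tcpdump -D        # list the interfaces tcpdump/libpcap can capture on
ip link show      # list the interfaces the kernel knows about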
The only window that's open is the terminal running this command; no pidgin, skype, samba, torrent or anything else I can think of is using the network, yet there is a huge load of output from tcpdump. I was hoping to use this to check where certain applications connect to and what messages they send, but when I'm doing nothing there is already more output than I can go through. Running tcpdump for less than 10 seconds gives me the following output:
Code:
16:13:22.015683 IP ns.hihkptt.net.cn.domain > desk.local.56598: 46887 1/2/2 (166)
16:13:22.016251 IP ns.hihkptt.net.cn.domain > desk.local.60099: 21168 1/2/2 (166)
16:13:22.016743 IP ns.hihkptt.net.cn.domain > desk.local.42325: 50346 1/2/2 (166)
16:13:22.034733 IP ns.hihkptt.net.cn.domain > desk.local.41441: 63658 1/2/0 (134)
16:13:22.035215 IP ns.hihkptt.net.cn.domain > desk.local.42865: 37537 1/2/0 (134)
16:13:22.036124 IP ns.hihkptt.net.cn.domain > desk.local.35006: 7520 1/2/0 (134)
16:13:22.036569 IP ns.hihkptt.net.cn.domain > desk.local.38480: 51322 1/2/0 (134)
16:13:22.066006 ARP, Reply 192.168.0.1 is-at 00:b0:0c:02:60:9c (oui Unknown), length 46
.....
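For cutting down the noise, a sketch (eth0 is just an example interface): the output above is mostly DNS replies and ARP, which can be excluded with a capture filter, or the capture can be limited to a single host of interest:
Code:
tcpdump -n -i eth0 'not port 53 and not arp'
tcpdump -n -i eth0 host example.com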
I have configured an NFS server on CentOS 5.2 for an IBM Web Server running AIX 5.3. The IBM Web Server can upload all data onto the NFS server. Today I was getting slow responses from the IBM Web Server, and while investigating NFS I found the errors below when running the "tcpdump" command on the CentOS server.
tcpdump -n -i eth1 | grep 2049
18:36:37.237451 IP 10.100.19.241.2049 > 10.100.19.88.1758143293: reply ok 1448 read [|nfs]
18:36:37.237476 IP 10.100.19.241.2049 > 10.100.19.88.539981409: reply ERR 1448
18:36:37.237481 IP 10.100.19.241.2049 > 10.100.19.88.796287348: reply ERR 1448
.....
I have changed the network card in the CentOS box. The whole LAN is on a gigabit network. I have also changed the network cable (patch cord), but there is still no improvement.
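A capture sketch that may make the trace easier to interpret: filtering on the NFS port in tcpdump itself is more reliable than grepping the text output for "2049", and a full snap length avoids truncated decodes (the "reply ERR"/"[|nfs]" lines may just be tcpdump failing to decode mid-stream TCP segments rather than real NFS errors):
Code:
tcpdump -n -i eth1 -s 0 port 2049 -w nfs-traffic.pcap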
I have a Linux box with two interfaces: eth0 is a built-in NIC and eth1 is a USB-LAN adapter.
There is an IP configured on eth1.
eth0 is up but no IP is configured. This interface is used for sniffing with tcpdump.
The problem is that eth0 frequently stops receiving packets -- my tcpdump captures are empty, and if I look at the interface stats with ifconfig, I can see that no packets are received.
If I bounce the interface (ifconfig eth0 down; ifconfig eth0 up), it starts receiving packets again.
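One workaround sketch while looking for the root cause: keep eth0 up and in promiscuous mode so it keeps accepting frames even though it has no IP address configured (whether this actually helps depends on the driver):
Code:
ip link set eth0 up
ip link set eth0 promisc on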
I need to make a rather odd filter in tcpdump: I would like to capture on interface eth0 only those packets that are outgoing (in other words, from IP 192.168.1.1, which is the IP of eth0 on this computer) and that do not have source MAC address 11:22:33:44:55:66. However, the following command says the syntax is wrong:
Code:
tcpdump -n -p -i eth0 src host 192.168.1.1 ether src not 11:22:33:44:55:66
Is this possible? If yes, then what is the correct command?
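A possible corrected form, as a sketch: BPF needs an explicit 'and' between the two primitives, and quoting the whole filter keeps the shell from touching it:
Code:
tcpdump -n -p -i eth0 'src host 192.168.1.1 and not ether src 11:22:33:44:55:66'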
I have a question regarding packet drops. I am running a test to determine when packet drops occur. I'm using a Spirent TestCenter through a switch (necessary to aggregate Ethernet traffic from 5 ports onto one optical link) to a server using a Myricom card. While running my test, if the input rate is below a certain value, ethtool does not report any drops (except dropped_multicast_filtered, which increments at a very slow rate); however, tcpdump reports X number of packets "dropped by kernel". Then if I increase the input rate, ethtool reports drops but "ifconfig eth2" does not.
In fact, ifconfig doesn't seem to report any packet drops at all. Do they all measure packet drops at different "levels", i.e. ethtool at the NIC level, tcpdump at the kernel level, etc.? And am I right to say that in the journey of an incoming packet, the NIC is the first level, then the kernel, then the user application? So any packet drop is likely to happen first at the NIC, then the kernel, then the user application? And if there is no packet drop at the NIC but there is a packet drop at the kernel, then the bottleneck is not at the NIC?
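The counters do live at different layers; a sketch of where to read each one (counter names vary by driver):
Code:
ethtool -S eth2        # NIC/driver statistics, e.g. ring-buffer drops
cat /proc/net/dev      # per-interface RX packets/errors/drops as seen by the kernel
netstat -s             # protocol-level (IP/TCP/UDP) statistics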
I have set the iptables INPUT policy to DROP. As I expected, tcpdump wasn't showing any packets... for a while. Then it suddenly began to show UDP syslog packets being sent by a remote host. That is consistent with the configuration of syslog, but since the INPUT policy was set to DROP with no exceptions, it is not consistent with the configuration of iptables. Why, after setting the INPUT policy to DROP with no exceptions, are most of the packets received before now being dropped but some not, as tcpdump shows?
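Worth noting: tcpdump captures at the packet-socket layer, before packets reach iptables INPUT filtering, so it will still show traffic that the firewall subsequently drops. The firewall's own counters show what was actually dropped, e.g.:
Code:
iptables -L INPUT -v -n    # per-rule and policy packet/byte counters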
I have an application where I am sending data via a serial port from PC1 (Java app) and reading that data on PC2 (C++ app). The problem I am facing is that PC2 (the C++ app) is not able to read the complete data sent by PC1, i.e., from PC1 I am sending 190 bytes but PC2 reads only about 140 bytes, even though I am reading in a loop. Below is the code snippet of my C++ app that opens the connection to the serial port:
I am trying to understand how the network-admin tool retrieves and sets information about network connections, and after reviewing the code, I still don't understand how it does it. I thought it might use Linux commands such as iwconfig or ifconfig, or ioctl calls to get/set the wireless parameters, but I did not find such things in the code. It also does not use AT commands for modem device parameters.
I see that it uses the liboobs files, but after viewing the code, I still don't understand how it works.
These questions come up because I am working on an application which connects to wireless devices (possibly several WiFi and modem devices can be connected), and I am trying to work out how to set/get the network parameters; I thought a good start would be to look at how network-admin works.
I've got a few Linux boxes (Fedora 13 and 11) with a common disk mounted. I'm trying to get them all to write files to that disk; however, it seems that only the first to connect actually has permission to write.
I'm very new to this networking stuff and this is a bit of a hack. Is there any way to give all computers write access to a disk (it's actually a managed backup disk, primarily for Windows users, but it is the only shared disk in the building), or is this likely to be very, very complicated?
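If the disk is mounted over CIFS/SMB (a guess, given it is primarily a Windows backup share), write access is usually governed by the mount options and the share credentials rather than by which machine connected first. A hypothetical example (share name, mount point and user are made up):
Code:
mount -t cifs //backup/share /mnt/backup \
    -o username=backupuser,file_mode=0664,dir_mode=0775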
Samba seems to crash and come back after some seconds if I copy a lot of small files in a short period of time over the network. How do I fix it?
I have Ubuntu 9.10 Server 64-bit running on a D945GCLF2 board, sharing two 1TB ext4-formatted HDDs to my Windows PCs using Samba. I've been having an issue with reading or writing files through Samba: it happens during copy operations or checksumming, anything that reads or writes MANY small files in a small amount of time. I am pretty sure the problem has to do with my server, because the server has run on two different LANs in different homes and will crash from activity with any of several other PCs. There is no crashing if I access the files through SSH, although when I do that the maximum transfer speed is less than 1MB/s.
When I induce the crashing, there is absolutely no output to the server terminal.
As an easy access example of something that will crash samba, extracting Cinebench R11.5 to the server will do the job. It always fails.
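A sketch for gathering more information, assuming the stock Ubuntu Samba paths: raise the Samba log level temporarily (e.g. 'log level = 3' in the [global] section of smb.conf), then validate the config and watch the log while reproducing the crash:
Code:
testparm                            # validate smb.conf
tail -f /var/log/samba/log.smbd     # watch while extracting the archive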
I am running slackware-current and I have tcpdump-4.1.1-i486-1.txz installed. If I remember right, libpcap used to be part of the tcpdump package, but recently I cannot find it on my system anymore. Tools like nmap give me the error message:
"error while loading shared libraries: libpcap.so.1: cannot open shared object file: No such file or directory"
I have configured an NFS server on CentOS 5.2 for an IBM Web Server (AIX). The IBM Web Server can upload all data onto the NFS server. Today I was getting slow responses from the IBM Web Server, and while investigating NFS I found the errors below when running the "tcpdump" command. I ran "tcpdump" on the NFS server.
tcpdump -n -i eth1 | grep 2049
18:36:37.237451 IP 10.100.19.241.2049 > 10.100.19.88.1758143293: reply ok 1448 read [|nfs]
18:36:37.237476 IP 10.100.19.241.2049 > 10.100.19.88.539981409: reply ERR 1448
18:36:37.237481 IP 10.100.19.241.2049 > 10.100.19.88.796287348: reply ERR 1448
18:36:37.237488 IP 10.100.19.241.2049 > 10.100.19.88.1986098295: reply ERR 1448
18:36:37.237566 IP 10.100.19.241.2049 > 10.100.19.88.539762736: reply ERR 1448
.....
18:36:37.238263 IP 10.100.19.241.2049 > 10.100.19.88.1869440302: reply ERR 1448
16133 packets captured
23339 packets received by filter
7100 packets dropped by kernel
10.100.18.241 is the IP of the NFS server and 10.100.19.88 belongs to the IBM Web Server.
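RPC/NFS statistics can help separate network-level loss from NFS-level problems; a sketch (run on the client and the server respectively):
Code:
nfsstat -rc    # client-side RPC stats: a high retrans count points at packet loss
nfsstat -s     # server-side NFS statistics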