From list2009 at lunch.za.net  Tue Aug  4 01:48:06 2009
From: list2009 at lunch.za.net (Andrew McGill)
Date: Mon, 3 Aug 2009 17:48:06 +0200
Subject: [netflow-tools] softflowd -m 512000 ... flow-capture ... 90 % less traffic??
Message-ID: <200908031748.07004.list2009@lunch.za.net>

Greetings netflow-tools,

I have softflowd sending information to flow-capture for a network with a few
hundred hosts (don't ask, the answer is probably "yes").  Softflowd was
configured with the defaults, without a -m parameter, so it tracked a maximum
of 8192 flows.  The primary reason flows were being rolled over was running
out of connection slots -- and CPU load was obnoxiously high.  So I fixed it
(in the sense of thereifixedit.com, perhaps): I told softflowd to track a
maximum of 512000 flows, and it duly did.

The before and after log files for 10 minutes of traffic look like this:

  -rw-r--r-- 1 root root 12678211 Jul 26 17:02 ft-v05.2009-07-26.165257+0200
  -rw-r--r-- 1 root root   673952 Jul 26 17:32 ft-v05.2009-07-26.172247+0200

... which is great, BUT it seems that most of the traffic is getting lost.
It's not that this traffic is getting deferred into later stats -- it simply
never gets reported -- the reported totals dropped to 10% of their previous
values!

  before: Average Kbits / second (real) : 49598.9333
  after:  Average Kbits / second (real) : 3872.6817

The next day it was still roughly 10% of the real amount:

          Average Kbits / second (real) : 4617.1089

Is this correct behaviour?  Am I doing one or more things wrong?

&:-)


Notes:

Startup parameters:

  flow-capture -p /var/run/flow-capture.pid -n 144 -N -1 \
      -w /var/log/netflows -S 10 0/0/8828

  softflowd -i eth2 -n 127.0.0.1:8828              # BEFORE
  softflowd -i eth2 -n 127.0.0.1:8828 -m 512000    # AFTER

In case it's relevant, this is what flow-stat said about the files:

# --- ---- ---- Report Information --- --- --- (BEFORE)
#
# Fields:   Total
# Symbols:  Disabled
# Sorting:  None
# Name:     Overall Summary
#
# Args:     flow-stat
#
Total Flows                     : 723704
Total Octets                    : 2975935893
Total Packets                   : 6138299
Total Time (1/1000 secs) (flows): 5790296389
Duration of data (realtime)     : 480
Duration of data (1/1000 secs)  : 2363291
Average flow time (1/1000 secs) : 8000.9183
Average packet size (octets)    : 484.8144
Average flow size (octets)      : 4112.0900
Average packets per flow        : 8.4818
Average flows / second (flow)   : 306.2649
Average flows / second (real)   : 1507.7167
Average Kbits / second (flow)   : 10075.1113
Average Kbits / second (real)   : 49598.9333


IP packet size distribution:
   1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
   .000 .379 .286 .090 .082 .044 .024 .021 .013 .007 .005 .003 .003 .001 .002

    512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
   .003 .005 .001 .014 .018 .000 .000 .000 .000 .000 .000

Packets per flow distribution:
      1    2    4    8   12   16   20   24   28   32   36   40   44   48   52
   .643 .086 .075 .097 .030 .018 .011 .007 .005 .004 .003 .002 .002 .002 .001

     60  100  200  300  400  500  600  700  800  900 >900
   .002 .005 .003 .001 .001 .000 .000 .000 .000 .000 .001

Octets per flow distribution:
     32   64  128  256  512 1280 2048 2816 3584 4352 5120 5888 6656 7424 8192
   .000 .241 .298 .191 .104 .082 .022 .011 .007 .004 .004 .002 .002 .002 .002

   8960 9728 10496 11264 12032 12800 13568 14336 15104 15872 >15872
   .001 .001  .001  .001  .001  .001  .001  .001  .001  .001   .019

Flow time distribution:
     10   50  100  200  500 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000
   .733 .014 .013 .024 .035 .032 .025 .017 .010 .008 .016 .007 .005 .005  .004

  12000 14000 16000 18000 20000 22000 24000 26000 28000 30000 >30000
   .006  .004  .003  .003  .002  .004  .002  .001  .001  .001   .023

# --- ---- ---- Report Information --- --- --- (AFTER)
#
# Fields:   Total
# Symbols:  Disabled
# Sorting:  None
# Name:     Overall Summary
#
# Args:     flow-stat
#
Total Flows                     : 50516
Total Octets                    : 261406012
Total Packets                   : 551158
Total Time (1/1000 secs) (flows): 329152148
Duration of data (realtime)     : 540
Duration of data (1/1000 secs)  : 1366814
Average flow time (1/1000 secs) : 6515.8001
Average packet size (octets)    : 474.2851
Average flow size (octets)      : 5174.7172
Average packets per flow        : 10.9106
Average flows / second (flow)   : 36.9810
Average flows / second (real)   : 93.5481
Average Kbits / second (flow)   : 1530.9283
Average Kbits / second (real)   : 3872.6817


IP packet size distribution:
   1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
   .000 .205 .364 .125 .116 .053 .028 .020 .019 .009 .005 .004 .003 .002 .002

    512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
   .003 .008 .001 .015 .019 .000 .000 .000 .000 .000 .000

Packets per flow distribution:
      1    2    4    8   12   16   20   24   28   32   36   40   44   48   52
   .439 .174 .073 .119 .067 .034 .018 .015 .010 .008 .006 .004 .003 .004 .002

     60  100  200  300  400  500  600  700  800  900 >900
   .004 .010 .005 .001 .001 .000 .000 .000 .000 .000 .001

Octets per flow distribution:
     32   64  128  256  512 1280 2048 2816 3584 4352 5120 5888 6656 7424 8192
   .000 .059 .297 .200 .153 .154 .042 .022 .011 .006 .004 .003 .002 .004 .003

   8960 9728 10496 11264 12032 12800 13568 14336 15104 15872 >15872
   .002 .002  .002  .001  .001  .001  .001  .001  .001  .000   .027

Flow time distribution:
     10   50  100  200  500 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000
   .298 .024 .012 .073 .319 .071 .035 .024 .012 .010 .020 .009 .007 .006  .005

  12000 14000 16000 18000 20000 22000 24000 26000 28000 30000 >30000
   .009  .005  .003  .004  .004  .003  .003  .003  .003  .004   .035


From Sameka.S.Prather at noaa.gov  Tue Aug  4 01:51:14 2009
From: Sameka.S.Prather at noaa.gov (sameka.s.prather)
Date: Mon, 03 Aug 2009 11:51:14 -0400
Subject: [netflow-tools] Please remove me from this list
Message-ID: <4A770772.2010803@noaa.gov>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

- --
Thank You,

Sameka S. Prather
Cell   202-360-9428
Office 301-713-3333 x 141
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFKdwdyZafATzZjRVgRAjhcAJ0eIiBGX+kT1ndUqwTurHwLS8urfgCcDkvU
giaDwbffw0KqngFPAm+stVY=
=3IS6
-----END PGP SIGNATURE-----
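
[Editorial aside: a back-of-the-envelope sketch of what the 8192-flow default
implies, using only the figures from the BEFORE report above.  Assumptions:
the ~1507 flows/s "real" rate is representative, the flow table is the
binding limit, and softflowd's other timeouts are at their defaults -- this
is a plausibility check, not a diagnosis.]

    # The BEFORE summary reports ~1507 new flows per second (real).  With the
    # default 8192-entry flow table, softflowd has to force-expire entries
    # within seconds to make room, so nearly every flow is flushed -- and
    # counted -- almost immediately:
    echo $(( 8192 / 1507 ))       # ~5 seconds of flow arrivals fill the table
    # With -m 512000 the table can absorb several minutes of arrivals, so many
    # flows may still be open (their octets not yet exported) when a 10-minute
    # capture file is rotated:
    echo $(( 512000 / 1507 ))     # ~339 seconds of arrivals fit in the table

On that reading the "missing" octets should surface later, once the
long-lived flows finally expire; Andrew's observation that the totals were
still low a day later suggests this is at most part of the story.
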
From smajko at wp.pl  Fri Aug 14 18:09:35 2009
From: smajko at wp.pl (Sebastian Majkowski)
Date: Fri, 14 Aug 2009 10:09:35 +0200
Subject: [netflow-tools] softflowd -m 512000 ... flow-capture ... 90 % less traffic??
In-Reply-To: <200908031748.07004.list2009@lunch.za.net>
References: <200908031748.07004.list2009@lunch.za.net>
Message-ID: <4A851BBF.9020200@wp.pl>

Andrew McGill wrote:
> Greetings netflow-tools,
>
> I have softflowd sending information to flow-capture for a network with a
> few hundred hosts (don't ask, the answer is probably "yes").  Softflowd was
> configured with the defaults, without a -m parameter, so it tracked a
> maximum of 8192 flows.  [...]  I told softflowd to track a maximum of
> 512000 flows, and it duly did.
>
> ... which is great, BUT it seems that most of the traffic is getting lost.
> It's not that this traffic is getting deferred into later stats -- it simply
> never gets reported -- the reported totals dropped to 10% of their previous
> values!
>
>   before: Average Kbits / second (real) : 49598.9333
>   after:  Average Kbits / second (real) : 3872.6817
>
> Is this correct behaviour?  Am I doing one or more things wrong?
> [...]

Hi Andrew,

I am not an expert in NetFlow, but I had a similar situation.  I guess this
is the result of increasing the maximum number of tracked flows -- NetFlow
records are created less frequently, so the file is smaller.  When the
maximum of 8192 flows was reached, softflowd probably had to end some flows
(writing records for them) just to make room for new ones; that's why the
earlier file is bigger.  The same applies when manipulating the timers --
they let you decide when (or for how long) a flow is tracked before a NetFlow
record is created...

Maybe it would be good to trace some connections from one of your users to
see how they appear in the NetFlow records -- that would prove whether the
data is being tracked or not (as you suspect).

Regards,
Sebastian


From djm at mindrot.org  Sun Aug 16 02:59:25 2009
From: djm at mindrot.org (Damien Miller)
Date: Sun, 16 Aug 2009 02:59:25 +1000 (EST)
Subject: [netflow-tools] Softflowd & flow-tools on multiple interfaces.
In-Reply-To: <1a6f1ce60907090811i793efc05jbd0f6a3078e3d754@mail.gmail.com>
References: <1a6f1ce60907090811i793efc05jbd0f6a3078e3d754@mail.gmail.com>
Message-ID:

On Mon, 13 Jul 2009, Sean Cody wrote:

> I've deployed both softflowd and flow-tools to devices that I can't easily
> add a mirror port to.
> So I've got around 5 sensors per site (softflowd on 3 mirror interfaces and
> on 2 devices directly) and 1 collector, and am saving them in completely
> different flow-tools log sets.  A bit of reading lends me to the idea of
> using the interface field in the flow records to record which device the
> flow came from (and have only 1 set of flow logs).
>
> Is this possible, or should I continue using the 1 softflowd per
> flow-capture setup?

Some platforms support listening to all IP traffic that passes through a
host, but softflowd doesn't support this yet.

> As well, is there an easy way to tell if softflowd is missing flows (a la
> tcpdump discards)?

You can compare the total of the NetFlow packet or byte counts with those of
the interfaces over the same time period.

-d
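
[Editorial aside: Damien's suggestion above can be scripted.  A minimal
sketch, assuming a Linux sensor, the eth2 interface and /var/log/netflows
directory from Andrew's setup earlier in the thread, and the stock flow-tools
utilities flow-cat and flow-stat; the exact file layout under the capture
directory depends on flow-capture's -N setting.]

    # Snapshot the interface byte counter, wait one capture interval, snapshot
    # again, and compare the delta with the octet total flow-stat reports for
    # the file(s) written in the same window.
    IF=eth2                                # sensor interface (assumption)
    DIR=/var/log/netflows                  # flow-capture -w directory (assumption)
    B0=$(cat /sys/class/net/$IF/statistics/rx_bytes)
    sleep 600                              # one 10-minute rotation, as above
    B1=$(cat /sys/class/net/$IF/statistics/rx_bytes)
    echo "interface bytes: $((B1 - B0))"
    # Octets seen by the collector over (roughly) the same window -- pick the
    # file(s) rotated during the sleep and sum them:
    flow-cat $DIR/ft-v05.* | flow-stat | grep 'Total Octets'

The two numbers will never match exactly (flows still open at rotation time
are not yet in the files), but a 10x discrepancy that persists across files
would point at packets being missed before softflowd sees them.
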
From koti.kelam at gmail.com  Wed Aug 19 21:27:56 2009
From: koti.kelam at gmail.com (Koteswar)
Date: Wed, 19 Aug 2009 16:57:56 +0530
Subject: [netflow-tools] Reg softflowd v9
Message-ID:

Hi,

I am running softflowd on my PC with the following command:

  #softflowd -D -v 9 -i eth1 -n192.168.100.2:5555 -t maxlife=10 -t expint=10

It is not sending the v9 template flow set immediately.  If I create an ICMP
session with

  #ping 192.168.100.254 -c 4

then it sends the template flow set, but not the data flow set with the ICMP
session data.  If I ping again, it sends the data flow set with the ICMP
session data.  So we are missing the first data flow set.  I am attaching the
debug info with this mail.

Regards,
Koteswar

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: debugLog
Type: application/octet-stream
Size: 6925 bytes
Desc: not available
URL:


From koti.kelam at gmail.com  Mon Aug 24 14:56:57 2009
From: koti.kelam at gmail.com (Koteswar)
Date: Mon, 24 Aug 2009 10:26:57 +0530
Subject: [netflow-tools] Simple netflow probe for linux
In-Reply-To:
References:
Message-ID:

Hi,

In softflowd, if I select the track level "ip" (softflowd -T ip), it fills
the other fields -- protocol, src port, dst port, TCP flags -- with 0 and
sends them in the data flow set.  But this does not look like correct
behaviour: it should not add these fields to the data flow set or the
template flow set, so that we can reduce the exported flow data volume and
network load (RFC 3954).

Please clarify if I am wrong?

Regards,
Koteswar

On Thu, Jul 9, 2009 at 1:20 PM, Damien Miller wrote:

> On Wed, 8 Jul 2009, Koteswar - Pandu wrote:
>
> > Hi all
> >
> > I want a simple netflow probe in linux which will export v5 and v9
> > flows to the collector.  Is any daemon available for this, or can the
> > kernel be patched to do this?
>
> Well, this list partially exists to support softflowd:
>
>     http://www.mindrot.org/projects/softflowd/
>
> Softflowd is a software netflow probe that supports v5, v9 and IPv6.
>
> -d

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From djm at mindrot.org  Mon Aug 24 17:19:32 2009
From: djm at mindrot.org (Damien Miller)
Date: Mon, 24 Aug 2009 17:19:32 +1000 (EST)
Subject: [netflow-tools] Simple netflow probe for linux
In-Reply-To:
References:
Message-ID:

On Mon, 24 Aug 2009, Koteswar wrote:

> Hi
> In softflowd, if I select the track level "ip" (softflowd -T ip), it fills
> the other fields -- protocol, src port, dst port, TCP flags -- with 0 and
> sends them in the data flow set.  But this does not look like correct
> behaviour: it should not add these fields to the data flow set or the
> template flow set, so that we can reduce the exported flow data volume and
> network load (RFC 3954).
> Please clarify if I am wrong?

The tracking level (-T flag) defines how much of each packet is inspected.
Your setting of "ip" is the bare minimum, and does not include information
like the protocol and ports.  Normally you would only select this option if
you were uninterested in that information.

If you do want to see source/destination ports and the protocol in use, then
I suggest that you specify "-T full", or just leave the -T flag off, since
"full" is the default anyway.

-d
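
[Editorial aside: a quick way to see for yourself what each tracking level
records is to inspect softflowd's flow table through its control socket.  A
sketch only, reusing the interface and collector address from Koteswar's
command above and assuming softflowd's default control-socket path.]

    # Run with the default tracking level, generate some traffic, then dump
    # the tracked flows -- addresses, protocol and ports should all be set:
    softflowd -i eth1 -v 9 -n 192.168.100.2:5555 -T full
    softflowctl -c /var/run/softflowd.ctl dump-flows
    softflowctl -c /var/run/softflowd.ctl statistics   # counters, incl. expired flows

    # Stop that instance before repeating with "-T ip"; the same dump should
    # then show only addresses, with protocol and port fields left at zero,
    # which is the behaviour discussed above.
    softflowctl -c /var/run/softflowd.ctl shutdown
    softflowd -i eth1 -v 9 -n 192.168.100.2:5555 -T ip
    softflowctl -c /var/run/softflowd.ctl dump-flows

Whether the v9 template should omit the unused fields at the lower tracking
levels is the design question raised in the follow-up below.
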
From koti.kelam at gmail.com  Mon Aug 24 17:59:23 2009
From: koti.kelam at gmail.com (Koteswar)
Date: Mon, 24 Aug 2009 13:29:23 +0530
Subject: [netflow-tools] Simple netflow probe for linux
In-Reply-To:
References:
Message-ID:

But when sending the template record it would be better not to include the
unwanted fields like protocol and ports; and when sending data records, also
do not include the protocol and port fields if track level "ip" is selected.
In softflowd we send all the fields regardless of the track level, just
setting the unwanted fields to 0.

Regards,
Koteswar

On Mon, Aug 24, 2009 at 12:49 PM, Damien Miller wrote:

> On Mon, 24 Aug 2009, Koteswar wrote:
>
> > Hi
> > In softflowd, if I select the track level "ip" (softflowd -T ip), it
> > fills the other fields -- protocol, src port, dst port, TCP flags -- with
> > 0 and sends them in the data flow set.  But this does not look like
> > correct behaviour: [...]
>
> The tracking level (-T flag) defines how much of each packet is inspected.
> Your setting of "ip" is the bare minimum, and does not include information
> like the protocol and ports.  Normally you would only select this option if
> you were uninterested in that information.
>
> If you do want to see source/destination ports and the protocol in use,
> then I suggest that you specify "-T full", or just leave the -T flag off,
> since "full" is the default anyway.
>
> -d

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


From list2009 at lunch.za.net  Tue Aug 25 20:35:14 2009
From: list2009 at lunch.za.net (Andrew McGill)
Date: Tue, 25 Aug 2009 12:35:14 +0200
Subject: [netflow-tools] softflowd -m 512000 ... flow-capture ... 90 % less traffic??
In-Reply-To: <4A851BBF.9020200@wp.pl>
References: <200908031748.07004.list2009@lunch.za.net> <4A851BBF.9020200@wp.pl>
Message-ID: <200908251235.15079.list2009@lunch.za.net>

On Friday 14 August 2009 10:09:35 Sebastian Majkowski wrote:
> Andrew McGill wrote:
> > Greetings netflow-tools,
> >
> > I have softflowd sending information to flow-capture for a network with
> > a few hundred hosts (don't ask, the answer is probably "yes").  [...]
> >
> >   before: Average Kbits / second (real) : 49598.9333
> >   after:  Average Kbits / second (real) : 3872.6817
> >
> > Is this correct behaviour?  Am I doing one or more things wrong?
> > [...]
>
> Hi Andrew,
>
> I am not an expert in NetFlow, but I had a similar situation.  I guess this
> is the result of increasing the maximum number of tracked flows -- NetFlow
> records are created less frequently, so the file is smaller.  When the
> maximum of 8192 flows was reached, softflowd probably had to end some flows
> (writing records for them) just to make room for new ones; that's why the
> earlier file is bigger.  The same applies when manipulating the timers --
> they let you decide when (or for how long) a flow is tracked before a
> NetFlow record is created...

Well, I have worked around the problem by using a smaller connection buffer,
and setting a timeout of 59 seconds for everything, to make sure that
something gets logged:

  /usr/local/sbin/softflowd -i eth0 -n 127.0.0.1:8818 -m 65536 \
      -t maxlife=59 -t general=59

Flows over 1 minute -- well, that's just tough.  We'll count them as multiple
flows.

> Maybe it would be good to trace some connections from one of your users to
> see how they appear in the NetFlow records -- that would prove whether the
> data is being tracked or not (as you suspect).

Like a regression test ... hmm ... yes, a good idea.  I wonder what the
correct test is ... something like this?

  dd if=/dev/zero bs=1024 count=10 | netcat somewhere.com 80

Expect a flow of ... um ... 10 kB plus a bit?  (It's a touchy-feely test ...)

&:-)