From sschwerdhoefer at multamedio.de Fri Jan 20 04:50:01 2006
From: sschwerdhoefer at multamedio.de (Sebastian Schwerdhoefer)
Date: Thu, 19 Jan 2006 18:50:01 +0100
Subject: [netflow-tools] Problem with pfflowd on freebsd 6.0
Message-ID: <20060119175001.GF14058@localdomain>

Hi,

> When I start pfflowd with the -D switch I only get this message:
> pfflowd -d
> no export target defined
> zzzz -1
> pfflowd[37565]: pfflowd listening on pfsync0
>
> It doesn't matter which collector I tell it to send the data to;
> nothing happens. This is the router for my network, so there are a
> lot of changes in pf states.

The same situation here. I commented out the lines

"""
if (ph->action != PFSYNC_ACT_DEL)
        return;
"""

in pfflowd.c and it seems to work now. But this is a very dirty hack,
because the condition to handle only PFSYNC_ACT_DEL packets should
save a lot of unnecessary NetFlow datagrams - as far as my boss and I
understood.

From djm at mindrot.org Fri Jan 20 23:26:25 2006
From: djm at mindrot.org (Damien Miller)
Date: Fri, 20 Jan 2006 23:26:25 +1100
Subject: [netflow-tools] Problem with pfflowd on freebsd 6.0
In-Reply-To: <20060119175001.GF14058@localdomain>
References: <20060119175001.GF14058@localdomain>
Message-ID: <43D0D6F1.6090203@mindrot.org>

Sebastian Schwerdhoefer wrote:
> The same situation here. I commented out the lines
>
> """
> if (ph->action != PFSYNC_ACT_DEL)
>         return;
> """
>
> in pfflowd.c and it seems to work now. But this is a very dirty hack,
> because the condition to handle only PFSYNC_ACT_DEL packets should
> save a lot of unnecessary NetFlow datagrams - as far as my boss and I
> understood.

I have no idea why you aren't seeing PFSYNC_ACT_DEL messages...

Could you modify your pfflowd.c to printf() the ph->action value?
Maybe that will give us a clue.
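A minimal sketch of the suggested printf() instrumentation. The
numeric values below follow the classic if_pfsync.h numbering (CLR=0,
INS=1, UPD=2, DEL=3); that numbering is an assumption here and may
differ between pfsync versions, and `pfsync_action_name()` is a
hypothetical helper, not pfflowd code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper: turn a pfsync header action code into a
 * readable name.  The values assume the classic if_pfsync.h
 * numbering (CLR=0, INS=1, UPD=2, DEL=3) and may differ per OS.
 */
static const char *
pfsync_action_name(int action)
{
	switch (action) {
	case 0:  return "PFSYNC_ACT_CLR";
	case 1:  return "PFSYNC_ACT_INS";
	case 2:  return "PFSYNC_ACT_UPD";
	case 3:  return "PFSYNC_ACT_DEL";
	default: return "PFSYNC_ACT_?";
	}
}

/* In pfflowd's packet handler, one could then log: */
static void
debug_action(int action)
{
	fprintf(stderr, "DEBUG: ph->action: %d (%s)\n",
	    action, pfsync_action_name(action));
}
```

With this numbering, the "1" and "2" reported later in the thread
would correspond to PFSYNC_ACT_INS and PFSYNC_ACT_UPD.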
-d

From sschwerdhoefer at multamedio.de Fri Jan 20 23:48:31 2006
From: sschwerdhoefer at multamedio.de (Sebastian Schwerdhoefer)
Date: Fri, 20 Jan 2006 13:48:31 +0100
Subject: [netflow-tools] Problem with pfflowd on freebsd 6.0
In-Reply-To: <43D0D6F1.6090203@mindrot.org>
References: <20060119175001.GF14058@localdomain> <43D0D6F1.6090203@mindrot.org>
Message-ID: <20060120124831.GL14058@localdomain>

Damien Miller schrieb am 2006-01-20 um 13:26 Uhr:
> I have no idea why you aren't seeing PFSYNC_ACT_DEL messages...
>
> Could you modify your pfflowd.c to printf() the ph->action value?
> Maybe that will give us a clue.

pfsync works fine, and state deletions are synchronized too, but
pfflowd (with a debug message added) just prints out:

...
DEBUG: ph->action: 1
DEBUG: ph->action: 2
...

No PFSYNC_ACT_DEL message arrives, only PFSYNC_ACT_INS and
PFSYNC_ACT_UPD. It seems as if pfsync (on FreeBSD 6) uses
PFSYNC_ACT_UPD to notify about state deletions... Sadly I'm not a
programmer, so I couldn't figure out how to detect whether a
PFSYNC_ACT_UPD message is a masked delete message.

regards,
Sebastian Schwerdhoefer

From djm at mindrot.org Sat Jan 21 10:25:36 2006
From: djm at mindrot.org (Damien Miller)
Date: Sat, 21 Jan 2006 10:25:36 +1100
Subject: [netflow-tools] Problem with pfflowd on freebsd 6.0
In-Reply-To: <20060120124831.GL14058@localdomain>
References: <20060119175001.GF14058@localdomain> <43D0D6F1.6090203@mindrot.org> <20060120124831.GL14058@localdomain>
Message-ID: <43D17170.1000708@mindrot.org>

Sebastian Schwerdhoefer wrote:
> Damien Miller schrieb am 2006-01-20 um 13:26 Uhr:
>
>> I have no idea why you aren't seeing PFSYNC_ACT_DEL messages...
>>
>> Could you modify your pfflowd.c to printf() the ph->action value?
>> Maybe that will give us a clue.
>
> pfsync works fine, and state deletions are synchronized too, but
> pfflowd (with a debug message added) just prints out:
> ...
> DEBUG: ph->action: 1
> DEBUG: ph->action: 2
> ...
> No PFSYNC_ACT_DEL message arrives, only PFSYNC_ACT_INS and
> PFSYNC_ACT_UPD. It seems as if pfsync (on FreeBSD 6) uses
> PFSYNC_ACT_UPD to notify about state deletions...

Does tcpdump on the pfsync interface see delete events?

-d

From andreas.brillisauer at hetzner.de Tue Jan 24 04:10:32 2006
From: andreas.brillisauer at hetzner.de (Andreas Brillisauer -- Hetzner Online AG)
Date: Mon, 23 Jan 2006 18:10:32 +0100
Subject: [netflow-tools] Number of active flows raises and raises...
Message-ID: <1138036232.8573.21.camel@neptune>

Hello,

I want to use softflowd for traffic accounting of a mirrored port.
The amount of traffic is currently about 10,000 packets per second.
I use the following parameters:

---8<-----------------------------------------------------------------
/usr/local/sbin/softflowd -i eth2 -t maxlife=300 -m 1048576 -n 127.0.0.1:9800
---8<-----------------------------------------------------------------

I expected that the number of active flows would not rise noticeably
beyond the five-minute limit, because softflowd has to expire the
flows (see the option "-t maxlife=300"). But the number of active
flows keeps rising until the limit of 1048576 is reached. I have no
explanation for that. Once the maximum-flows limit is reached,
softflowd takes 99% of the CPU.

Any suggestions?

Regards,
Andreas

From djm at mindrot.org Tue Jan 24 11:51:10 2006
From: djm at mindrot.org (Damien Miller)
Date: Tue, 24 Jan 2006 11:51:10 +1100 (EST)
Subject: [netflow-tools] Number of active flows raises and raises...
In-Reply-To: <1138036232.8573.21.camel@neptune>
References: <1138036232.8573.21.camel@neptune>
Message-ID: 

On Mon, 23 Jan 2006, Andreas Brillisauer -- Hetzner Online AG wrote:

> I expected that the number of active flows would not rise noticeably
> beyond the five-minute limit, because softflowd has to expire the
> flows (see the option "-t maxlife=300"). But the number of active
> flows keeps rising until the limit of 1048576 is reached. I have no
> explanation for that.
> Once the maximum-flows limit is reached, softflowd takes 99% of the
> CPU.

That is a bug: the maxlife timeout was only checked when traffic was
received on a flow. Please try this patch:

Index: softflowd.c
===================================================================
RCS file: /var/cvs/softflowd/softflowd.c,v
retrieving revision 1.86
diff -u -p -r1.86 softflowd.c
--- softflowd.c	18 Nov 2005 05:19:12 -0000	1.86
+++ softflowd.c	24 Jan 2006 00:45:01 -0000
@@ -473,7 +473,7 @@ flow_update_expiry(struct FLOWTRACK *ft,
 	if (ft->icmp_timeout != 0 &&
 	    ((flow->af == AF_INET && flow->protocol == IPPROTO_ICMP) ||
 	    ((flow->af == AF_INET6 && flow->protocol == IPPROTO_ICMPV6)))) {
-		/* UDP flows */
+		/* ICMP flows */
 		flow->expiry->expires_at = flow->flow_last.tv_sec +
 		    ft->icmp_timeout;
 		flow->expiry->reason = R_ICMP;
@@ -486,6 +486,11 @@ flow_update_expiry(struct FLOWTRACK *ft,
 	flow->expiry->reason = R_GENERAL;
 
  out:
+	if (ft->maximum_lifetime != 0 && flow->expiry->expires_at != 0) {
+		flow->expiry->expires_at = MIN(flow->expiry->expires_at,
+		    flow->flow_start.tv_sec + ft->maximum_lifetime);
+	}
+
 	EXPIRY_INSERT(EXPIRIES, &ft->expiries, flow->expiry);
 }
 
@@ -745,9 +750,18 @@ check_expired(struct FLOWTRACK *ft, stru
 	    (ex != CE_EXPIRE_FORCED &&
 	    (expiry->expires_at < now.tv_sec))) {
 		/* Flow has expired */
+
+		if (ft->maximum_lifetime != 0 &&
+		    expiry->flow->flow_last.tv_sec -
+		    expiry->flow->flow_start.tv_sec >=
+		    ft->maximum_lifetime)
+			expiry->reason = R_MAXLIFE;
+
 		if (verbose_flag)
-			logit(LOG_DEBUG, "Queuing flow seq:%llu (%p) for expiry",
-			    expiry->flow->flow_seq, expiry->flow);
+			logit(LOG_DEBUG,
+			    "Queuing flow seq:%llu (%p) for expiry "
+			    "reason %d", expiry->flow->flow_seq,
+			    expiry->flow, expiry->reason);
 
 		/* Add to array of expired flows */
 		oldexp = expired_flows;

From sschwerdhoefer at multamedio.de Wed Jan 25 02:05:48 2006
From: sschwerdhoefer at multamedio.de (Sebastian Schwerdhoefer)
Date: Tue, 24 Jan 2006 16:05:48 +0100
Subject: [netflow-tools] Problem with pfflowd
 on freebsd 6.0
Message-ID: <20060124150548.GA29840@localdomain>

Damien Miller schrieb am 2006-01-21 um 00:25 Uhr:
> Does tcpdump on the pfsync interface see delete events?

Hm: listening directly on pfsync0 does not work (tcpdump: unsupported
data link type 121), and if I listen on the "syncdev", neither tcpdump
nor ethereal decodes the pfsync packets. Anyway, I'll attach a
commented tcpdump output. Maybe you can decode it.

regards,
Sebastian Schwerdhoefer

11:34:10.466564 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:11.125427 IP 172.16.17.241 > 224.0.0.240: pfsync 92
11:34:11.126415 IP 172.16.17.241 > 224.0.0.240: pfsync 228
11:34:11.126422 IP 172.16.17.241 > 224.0.0.240: pfsync 92
11:34:11.127423 IP 172.16.17.241 > 224.0.0.240: pfsync 228
11:34:12.127270 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:13.128119 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:14.464950 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:15.465793 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:15.974669 IP 172.16.17.241 > 224.0.0.240: pfsync 180
11:34:15.975669 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:16.159600 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:16.342616 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:16.654566 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:16.752550 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:16.788545 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:17.788405 IP 172.16.17.241 > 224.0.0.240: pfsync 444
11:34:19.127201 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:20.325028 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:21.470885 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:22.471730 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:23.472587 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:24.473424 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:25.474272 IP 172.16.17.241 > 224.0.0.240: pfsync 444
11:34:26.475116 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:27.475965 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:28.476825 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:29.477657 IP 172.16.17.241 > 224.0.0.240: pfsync 444
### Here I started my browser and pfctl -ss reported the new states
11:34:30.478538 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:30.937397 IP 172.16.17.241 > 224.0.0.240: pfsync 444
11:34:30.948405 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:31.515314 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:31.522305 IP 172.16.17.241 > 224.0.0.240: pfsync 180
11:34:31.531302 IP 172.16.17.241 > 224.0.0.240: pfsync 1348
11:34:31.538301 IP 172.16.17.241 > 224.0.0.240: pfsync 180
11:34:31.539301 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:31.551299 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:31.565296 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:31.585294 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:31.587294 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:31.591292 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:31.611290 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:31.726274 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:32.481171 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:32.493161 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:32.679128 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:32.925090 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:32.948087 IP 172.16.17.241 > 224.0.0.240: pfsync 900
11:34:32.958134 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:32.969083 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:32.980082 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:32.987080 IP 172.16.17.241 > 224.0.0.240: pfsync 180
11:34:32.989080 IP 172.16.17.241 > 224.0.0.240: pfsync 900
11:34:32.990080 IP 172.16.17.241 > 224.0.0.240: pfsync 180
11:34:32.995078 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:33.011077 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.013079 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.013086 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.015076 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.019076 IP 172.16.17.241 > 224.0.0.240: pfsync 180
11:34:33.020075 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:33.047072 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.049070 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.061069 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.067068 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.070072 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:33.078084 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:33.090068 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.092068 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.095063 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.096064 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:33.097063 IP 172.16.17.241 > 224.0.0.240: pfsync 452
11:34:33.105062 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.120071 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.126059 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.149069 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.163058 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:33.487007 IP 172.16.17.241 > 224.0.0.240: pfsync 532
11:34:35.139763 IP 172.16.17.241 > 224.0.0.240: pfsync 444
11:34:36.483596 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:34:37.484453 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:38.485306 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:39.486153 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:40.487004 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:41.487842 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:42.488700 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:43.489549 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:44.490399 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:45.491243 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:46.492091 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:47.492935 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:48.493785 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:49.494633 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:50.495475 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:51.496320 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:52.497174 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:53.498025 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:54.498862 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:55.499717 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:56.500564 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:57.501409 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:58.502263 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:34:59.503112 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:00.503956 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:01.504799 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:02.505670 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:03.505466 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:04.507362 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:05.508193 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:06.509046 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:07.509889 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:08.510839 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:09.511582 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:10.512434 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:11.513289 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:12.514120 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:13.514971 IP 172.16.17.241 > 224.0.0.240: pfsync 356
### here the states disappeared.
11:35:14.515815 IP 172.16.17.241 > 224.0.0.240: pfsync 356
11:35:15.516661 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:16.517510 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:17.518346 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:18.519201 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:19.520041 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:20.520889 IP 172.16.17.241 > 224.0.0.240: pfsync 268
11:35:21.521741 IP 172.16.17.241 > 224.0.0.240: pfsync 268

From andreas.brillisauer at hetzner.de Thu Jan 26 02:56:14 2006
From: andreas.brillisauer at hetzner.de (Andreas Brillisauer -- Hetzner Online AG)
Date: Wed, 25 Jan 2006 16:56:14 +0100
Subject: [netflow-tools] Does softflowd open a new flow for same IPs but different ports?
Message-ID: <1138204574.3442.44.camel@neptune>

Hello,

I'm not quite sure about the following: assume that softflowd captures
two IP packets. Both packets have the same source and destination IP
but different ports. Does softflowd open two flows or only one? If it
opens two flows, is there a way to tell softflowd to ignore the ports?

Greetings,
Andreas
-- 
Hetzner Online AG
Industriestr. 6
D-91710 Gunzenhausen
Tel: +49 9831 610061
Fax: +49 9831 610062
E-Mail: info at hetzner.de
http://www.hetzner.de

From djm at mindrot.org Thu Jan 26 11:01:22 2006
From: djm at mindrot.org (Damien Miller)
Date: Thu, 26 Jan 2006 11:01:22 +1100
Subject: [netflow-tools] Does softflowd open a new flow for same IPs but different ports?
In-Reply-To: <1138204574.3442.44.camel@neptune>
References: <1138204574.3442.44.camel@neptune>
Message-ID: <43D81152.30609@mindrot.org>

Andreas Brillisauer -- Hetzner Online AG wrote:
> Hello,
>
> I'm not quite sure about the following: assume that softflowd
> captures two IP packets. Both packets have the same source and
> destination IP but different ports. Does softflowd open two flows or
> only one? If it opens two flows, is there a way to tell softflowd to
> ignore the ports?
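Since the ports are part of softflowd's flow key, the two packets
described would start two separate flows. A rough sketch of what a
port-agnostic key comparison could look like (the struct and function
names here are hypothetical illustrations, not softflowd's actual
struct FLOW):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical flow key, for illustration only; softflowd's real key
 * lives in struct FLOW in softflowd.h and carries more fields.
 */
struct flow_key {
	uint32_t src_ip, dst_ip;	/* addresses */
	uint16_t src_port, dst_port;	/* TCP/UDP ports */
	uint8_t  protocol;		/* IPPROTO_* */
};

/*
 * Compare two keys.  With ignore_ports set, packets that differ only
 * in their port numbers fall into the same flow, which is what
 * "ignoring the ports" would mean here.
 */
static int
flow_key_equal(const struct flow_key *a, const struct flow_key *b,
    int ignore_ports)
{
	if (a->src_ip != b->src_ip || a->dst_ip != b->dst_ip ||
	    a->protocol != b->protocol)
		return 0;
	if (ignore_ports)
		return 1;
	return a->src_port == b->src_port && a->dst_port == b->dst_port;
}
```

Two packets with equal addresses but different source ports compare
unequal with ports considered, and equal with ports ignored.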
Not at present, but it could be added pretty easily. Please try the
attached patch. It may not apply cleanly against a released version of
softflowd - if that is the case, please try a snapshot from
http://www2.mindrot.org/softflowd_snap/

-d

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: softflowd-track.diff
Url: http://lists.mindrot.org/pipermail/netflow-tools/attachments/20060126/1a09f7c2/attachment.ksh

From andreas.brillisauer at hetzner.de Tue Jan 31 04:09:14 2006
From: andreas.brillisauer at hetzner.de (Andreas Brillisauer -- Hetzner Online AG)
Date: Mon, 30 Jan 2006 18:09:14 +0100
Subject: [netflow-tools] Up to which amount of traffic does softflowd work properly?
Message-ID: <1138640954.3439.85.camel@neptune>

Hello,

I would like to use softflowd for traffic accounting on a Gbit
interface, and I have already done some testing. I have an AMD Opteron
machine with 2 GB RAM that runs Debian Sarge. The machine gets the
traffic via a mirrored port. At 270 Mbit/s (50,000 packets/sec)
softflowd needs 80% of the CPU.

Does anyone use softflowd at higher rates? Are there any possibilities
for higher performance? I already have a patch for ignoring the ports,
so softflowd will open fewer flows; this is one way to improve
performance. Are there any other ways to increase performance?

If softflowd isn't fast enough to handle a Gbit interface on the
above-mentioned hardware, are there other tools that are designed for
such an amount of traffic? What are your experiences?

Greetings,
Andreas

From djm at mindrot.org Tue Jan 31 12:47:36 2006
From: djm at mindrot.org (Damien Miller)
Date: Tue, 31 Jan 2006 12:47:36 +1100 (EST)
Subject: [netflow-tools] Up to which amount of traffic does softflowd work properly?
In-Reply-To: <1138640954.3439.85.camel@neptune>
References: <1138640954.3439.85.camel@neptune>
Message-ID: 

On Mon, 30 Jan 2006, Andreas Brillisauer -- Hetzner Online AG wrote:

> Hello,
>
> I would like to use softflowd for traffic accounting on a Gbit
> interface, and I have already done some testing. I have an AMD
> Opteron machine with 2 GB RAM that runs Debian Sarge. The machine
> gets the traffic via a mirrored port. At 270 Mbit/s (50,000
> packets/sec) softflowd needs 80% of the CPU.
>
> Does anyone use softflowd at higher rates? Are there any
> possibilities for higher performance? I already have a patch for
> ignoring the ports, so softflowd will open fewer flows; this is one
> way to improve performance. Are there any other ways to increase
> performance?

A simple way is to decreate the flow timeouts, so softflowd expires
flows more aggressively. The default ones are quite conservative.

If there is certain traffic that you are not interested in, you can
pre-filter it using a bpf(4) program on softflowd's command line.
There is no point in letting softflowd see traffic that you don't
care to report on.

Beyond this, there are several internal optimisations that I have
been planning to make that will speed up softflowd considerably. The
top of the list is switching to a pool allocator for flows and
associated expiry events, rather than calling malloc(3) for each one.

If you are familiar with the GNU toolchain, building a profiled
version of softflowd and collecting a good profile would help direct
these efforts to the areas that are most likely to be productive.

-d

From djm at mindrot.org Tue Jan 31 16:45:59 2006
From: djm at mindrot.org (Damien Miller)
Date: Tue, 31 Jan 2006 16:45:59 +1100 (EST)
Subject: [netflow-tools] Up to which amount of traffic does softflowd work properly?
In-Reply-To: 
References: <1138640954.3439.85.camel@neptune>
Message-ID: 

On Tue, 31 Jan 2006, Damien Miller wrote:

> A simple way is to decreate the flow timeouts, so softflowd expires

"decrease", not "decreate" (a perfectly cromulent word, but not what
I meant).

-d

From greg at propdata.co.za Tue Jan 31 20:07:07 2006
From: greg at propdata.co.za (Greg Armer)
Date: Tue, 31 Jan 2006 11:07:07 +0200
Subject: [netflow-tools] FreeBSD pfflowd + flowd
Message-ID: <20060131090714.5D89117E608@mail.mindrot.org>

Greetings list,

I seem to be having an issue using FreeBSD pf/pfflowd with flowd. I
have a working firewall ruleset running on a FreeBSD 5.4-STABLE server
using the FreeBSD port of pf from OpenBSD. I compiled my own kernel
with the pfsync option to get the pfsync0 interface, which is up and
working. I then installed pfflowd and flowd from the FreeBSD ports
tree.

If I run pfflowd and then run

# tcpdump -n -i lo0 -s1500 -vvvTcnfp

I see the netflows coming from pfflowd across the pfsync0 interface:

root at fyrewall:~ #> tcpdump -n -i lo0 -s1500 -vvvTcnfp
tcpdump: listening on lo0, link-type NULL (BSD loopback), capture size 1500 bytes
11:06:54.515048 IP (tos 0x0, ttl 64, id 15359, offset 0, flags [DF], length: 71) 127.0.0.1.63464 > 127.0.0.1.65270: P [tcp sum ok] 3176441976:3176441995(19) ack 759031372 win 35840
11:06:54.516505 IP (tos 0x0, ttl 64, id 15360, offset 0, flags [none], length: 64) 127.0.0.1.62934 > 127.0.0.1.53: NetFlow v5810, 65.536 uptime, 0.023397729, 256 recs
11:06:54.558983 IP (tos 0x0, ttl 64, id 15362, offset 0, flags [none], length: 346) 127.0.0.1.53 > 127.0.0.1.62934: NetFlow v5810, 65.540 uptime, 655360.023397729, 33152 recs
started 65.537, last 78250.013 115.45.101.117:1377 > 6.102.97.108:29485 >> 107.97.103.3 6 FRAU tos 102, 65537 (3222011909 octets)
started 842596.711, last 107047.777 103.101.115.117:28001 > 105.116.101.192:27072 >> 27.192.48.0 89 tos 0, 487424 (268722489 octets)
started 25486.848, last 268597.864 200.192.89.0:2657 > 1.0.1.0:27489 >> 0.0.9.0 105 tos 116, 3418382336 (33554688 octets)
started 1824561.344, last 1610613.248 1.132.230.0:256 > 5.2.122.107:388 >> 192.152.192.96 5 tos 2, 99558 (328314 octets)
started 3231236.192, last 131.073 132.230.0.5:1 > 2.122.98.192:34022 >> 152.192.96.0 2 tos 122, 25486848 (84048483 octets)

pfflowd is running as follows:

nobody 89103 0.0 0.4 1488 1000 ?? Ss Mon08AM 0:02.51 /usr/local/sbin/pfflowd -n 127.0.0.1:2055

If I use netcat to listen on 127.0.0.1 UDP port 2055 while the flowd
daemon is not running, I receive nothing:

root at fyrewall:~ #> nc -4 -l -u 127.0.0.1 2055
^C

However, connecting with netcat to port 2055 on 127.0.0.1 with flowd
running, I receive the connection, indicating that flowd is running
correctly:

root at fyrewall:~ #> nc -uv -s 127.0.0.1 127.0.0.1 2055
Connection to 127.0.0.1 2055 port [udp/*] succeeded!
^C

So it seems my problem lies with getting traffic out of pfflowd and
into flowd. Here is my pfflowd start script:

root at fyrewall:~ #> cat /usr/local/etc/rc.d/pfflowd.sh
#!/bin/sh

# Enter the host to send the netflow datagrams to; the format
# is IP:PORT (e.g. 127.0.0.1:2055)
host="127.0.0.1:2055"

case "$1" in
start)
	echo -n " pfflowd"
	/usr/local/sbin/pfflowd -n ${host}
	;;
stop)
	if [ ! -f /var/run/pfflowd.pid ]; then
		echo "pfflowd not running"
		exit 64
	fi
	kill `cat /var/run/pfflowd.pid`
	;;
esac

Perhaps someone could offer some assistance? I also have a pf rule:

pass quick on pfsync0

And watching the pflog0 interface does not show any blocking going on
for the pfsync0 interface.

Many thanks for any assistance.

Greg (wiqd)
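When a collector appears to receive nothing on localhost, one basic
sanity check is to confirm that plain UDP delivery over the loopback
interface works at all, independently of pfflowd and flowd. A small
self-contained sketch of such a check (the function name and port
below are illustrative, assuming a POSIX sockets environment):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Editor's sketch: bind a UDP socket on 127.0.0.1:port, send one
 * 4-byte datagram to it from a second socket, and return the number
 * of bytes received.  This only verifies loopback UDP delivery; it
 * says nothing about pfflowd's or flowd's own behaviour.
 */
static ssize_t
udp_loopback_selftest(unsigned short port)
{
	int rx, tx;
	struct sockaddr_in sin;
	char buf[64];
	ssize_t n = -1;

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(port);
	sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	if ((rx = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
		return -1;
	if ((tx = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
		close(rx);
		return -1;
	}
	if (bind(rx, (struct sockaddr *)&sin, sizeof(sin)) == 0 &&
	    sendto(tx, "flow", 4, 0, (struct sockaddr *)&sin,
	    sizeof(sin)) == 4)
		n = recv(rx, buf, sizeof(buf), 0);
	close(tx);
	close(rx);
	return n;
}
```

If this succeeds but the collector still logs nothing, the problem
lies between the exporter and the socket (addressing, firewall rules,
or the daemon itself) rather than in basic loopback delivery.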