Informal notes taken by Geoff Huston - any inaccuracies are because I didn't hear it correctly! Apologies in advance if that's the case.
Looking into the perceived problems between parent and child zones at delegation points in the DNS.
Methodology is to undertake a zone transfer and then analyze the delegations in the zone for errors and warnings. What is 'correct' is unclear in the DNS specifications in some cases, so the checker does express an interpretation of the specifications.
The Swedish (.se) graph shows a step function at the opening of registrations to first-come-first-served. Around 17% of delegations have some kind of error.
In the DNS root zone, 53% of delegations have errors of some form!
The check concentrates on coherency between parent and child.
The checks include: more than one nameserver for the zone; all responses should be authoritative; serial numbers should match; NS records should match in parent and child; A records should exist for the NS names; PTR records should exist for the IP addresses when looking up the A record for a server; A records should exist for all domain names (including those found when looking up a PTR); nameservers should respond over both TCP and UDP; the SOA email contact should be functional; and correct glue records should exist where the parent needs glue.
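The parent/child coherency part of these checks can be sketched in a few lines. This is a minimal illustration, not the checker's actual code: the RRsets here are hypothetical examples, and a real tool would fetch the parent set from the delegating zone and the child set by querying the child's own servers.

```python
# Minimal sketch of the parent/child NS coherency check described above.
# A real checker would obtain the parent set from the delegating zone's
# NS records and the child set by querying one of the child's servers.

def check_delegation(parent_ns, child_ns):
    """Compare the NS RRset published by the parent against the child's own.

    Returns a list of human-readable warnings; an empty list means the
    two views of the delegation are coherent.
    """
    parent = {ns.lower().rstrip(".") for ns in parent_ns}
    child = {ns.lower().rstrip(".") for ns in child_ns}
    warnings = []
    if len(child) < 2:
        warnings.append("fewer than 2 nameservers in child zone")
    for ns in parent - child:
        warnings.append(f"{ns}: listed at parent but not in child zone")
    for ns in child - parent:
        warnings.append(f"{ns}: listed in child zone but not at parent")
    return warnings

# Hypothetical example: the parent still lists a server the child has dropped.
print(check_delegation(
    parent_ns=["ns1.example.se.", "old-ns.example.se."],
    child_ns=["ns1.example.se.", "ns2.example.se."],
))
```

A real checker layers the other tests (authoritative bit, serial match, glue, TCP reachability) on top of this set comparison.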
For the .se zone, the outcomes in May 2003 were dominated by lame delegations (41%), followed by missing A records (14%), non-TCP responders (14%), and no email response.
There is a definite problem here with lame delegations
In the root zone (268 entries), 93 zones have at least one NS record that does not respond to a TCP query, 68 zones have at least one lame name server delegation, 35 zones have an MNAME that does not respond to mail, and various smaller numbers of zones have other errors.
Of course DNS will work as long as at least one server responds. If one server does not respond is this an error? Is this check applying too strict a ruleset of errors?
Conversation did note that there should be some cleanup of terminology in the DNS specifications as to what is an error in DNS configurations (such as 'slightly lame'). There was also a question as to the specification of a 'timeout' as distinct from 'no response'; it was noted that these were the default timers used in the DNS resolver code. The notion of 'error' is complicated by the fact that the DNS will still work in the presence of many such misconfigured records.
There are a large number of poor measurements out there; better measurements are needed from multiple points using real DNS traffic.
This exercise is to measure DNS server quality. The system measures real queries from multiple sources; response times are measured, together with the server instance ID (exposing anycast load balancing), the SOA and the server software version. Later answers are queried less frequently.
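One common way to expose the instance ID behind an anycast service is a CHAOS-class TXT query for "hostname.bind" (or "id.server"); the sketch below builds such a query in wire format with the standard library only. This is an illustration of the general technique, not the measurement system's own code, and the actual UDP send-and-time step is omitted.

```python
# Sketch: build a DNS CHAOS-class TXT query for "hostname.bind", the
# conventional way to ask a server which anycast instance answered.
# Illustrative only; sending it over UDP and timing the reply is omitted.
import struct

def build_chaos_query(name="hostname.bind", txid=0x1234):
    # Header: ID, flags (standard query), QDCOUNT=1, AN/NS/AR counts 0
    header = struct.pack("!HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    qname += b"\x00"
    # QTYPE=TXT (16), QCLASS=CH (3)
    question = qname + struct.pack("!HH", 16, 3)
    return header + question

packet = build_chaos_query()
```

Timing the round trip of this query from multiple vantage points gives both a response-time sample and the identity of the serving instance.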
What is not measured is the DNS service itself, nor the effects of very short (< 60secs) incidents.
Send good ideas about this to Daniel
see presentation
5 countries (UK, NL, DE, US, JP) have 50% of the total RIR IPv6 address allocations
Followup from Barry Greene's presentation at the March IEPG. Generic term to capture blackholes, sinks, tarpits, etc. The focus is on network abuse in terms of intrusion attempts and hack activity. They advertise otherwise "unused" IP addresses to capture scanning-based attacks. There are limited false positives in such an approach. This exercise captures around 100,000 IP addresses across four /16s; a /8 sink was also advertised. Interesting work on getting high performance from the iSINK units.
The premise is that the traffic is randomly scattered and that some will be directed to these sinks.
The campus iSINK receives traffic from local net mgmt query attempts, traffic from mis-configured hosts talking to ghosts, and probes and worms with an affinity to probe close to the source. The traffic profile appears more fractal than regular.
The /8 sink gets around 5K pps and 5 Mbps. Unsolicited traffic is dominated by UDP. A significant amount of traffic was backscatter.
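Under the uniform-scatter premise noted above, the sink's rates can be scaled to a rough Internet-wide estimate: a /8 covers 1/256 of the IPv4 address space. This is a back-of-envelope illustration using the reported figures, not a calculation from the presentation.

```python
# Rough scaling under the uniform-scatter premise: a /8 covers 1/256 of
# the IPv4 address space, so traffic seen at the /8 sink can be scaled
# up to an Internet-wide estimate. Illustrative arithmetic only.
def internetwide_estimate(sink_rate, sink_prefix_len=8):
    # Fraction of IPv4 space covered by the sink prefix
    fraction = 2 ** -sink_prefix_len
    return sink_rate / fraction

pps = internetwide_estimate(5_000)   # 5 kpps at the sink -> ~1.28M pps overall
mbps = internetwide_estimate(5)      # 5 Mbps at the sink -> ~1.28 Gbps overall
```

The estimate only holds for truly randomly targeted traffic; backscatter and locality-biased scanning (noted above for the campus sink) break the uniform assumption.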