March 1996 IEPG Meeting
Location: Los Angeles, CA, USA
Date: Sunday, 3 March 1996
Meeting Chair: Elise Gerich
Minutes: Havard Eidnes
1. Introduction
2. IXes and regional updates
2.1 The Amsterdam IX, Erik-Jan Bos
The AMS IX is currently constructed using switched ethernet and
FDDI. The physical location is SARA at WCW in Amsterdam, NL. A
second location at NIKHEF (also WCW) will soon be added, and
extensions to a metro-area exchange will also probably happen
soon. More information is available from
http://www.ams-ix.net/.
2.2 The Packet Clearing House, Bill Woodcock
The Packet Clearing House appears not to be a "traditional IX", but
rather a cooperative of its members with a CIX-like setup. It is
constructed with a 10Mbit/s connection to MAE-WEST and a local
shared ethernet where colocation space is available, with one
cisco-4000 providing the connection to MAE-WEST. The Packet
Clearing House started in January 1995 and permits non-transit
peering between its members. The current members are located in
California LATA 1, and the membership count is 15 at present. More
info is available at http://hardhat.zocalo.net/pch.html, or from
[email protected].
2.3 The LINX, Bill Cessna
The London INternet eXchange is located at Telehouse in the
Docklands in London, GB. It is currently organized as a "not-for-
profit" company. The IX is currently constructed with switched
ethernet, an upgrade to a combination of 100Mbit/s Ethernet and
FDDI is on the way. The LINX currently has 19 members, a doubling
over the 4 last months. The basic model is pure bilateral
arrangements with "bring your own box" policy. See
http://www.linx.net/linx/ for information
on rules, regulations,
procedures, MOU, and prices for the LINX.
2.4 SingNet/STIX, Lorna Leong
The Singapore Telecom Internet eXchange (STIX) is a "layer-three"
interconnect, providing transit services for several networks in
the Asia-Pacific region, some of them being in CN, JP, MY, SG, AU,
PH, NP, IN (this is an incomplete list). The financial model is
similar to a half-circuit model, and they have a T1 to ANS and an
E1 to UUnet (as part of the Microsoft Network).
Peter Lothberg asked if he could put his own box on the STIX; it
turned out that would not be possible unless PL's network was
licensed to operate in Singapore (there are only 3 licensed
service providers there).
2.5 CERN Internet exchange
The CERN interconnect is built with FDDI and Ethernet. It is open
to providers selling services to CERN and to providers in CH and
FR. The fees are 1000 CHF/month. There is no AUP for the IX other
than a requirement to use RIPE-181 to register announced routes in
the RIPE database. More info in
http://wwwcs.cern.ch/wwwcs/public/ip/cernixp.home.html.
2.6 Eugene, Oregon IX, Randy Bush
This IX is built with a shared FDDI ring. It currently connects
the state education network and several commercial service
providers. There is no name for the IX, no web page, and a basic
"bring your box" policy. Contact is Dave Meyer, University of
Oregon, or contact Randy Bush at [email protected].
2.7 Hong Kong IX, Randy Conrad
2.8 Stockholm D-GIX, Peter Lothberg
2.9 Tokyo IX, Jun Murai
The Tokyo IX connects 16 commercial service providers, is called
NSPIXP, and is being run by the WIDE project. Currently the IX is
being moved to another physical location with more space. In the
current facility at most a T1 connection is allowed in; in the new
facility there will be no such limitation. The IX is currently
built with ethernet switches. There are some international service
providers connected, but most are JP ISPs. There is a web page
available, but so far it is only available in Japanese (it is being
translated). It is somewhere under http://www.wide.ad.jp/ (sorry,
I've not been able to locate it myself...).
Guy Almes asked a general question whether the various IXes are
only for ISPs or whether they permit connection of other servers
(web servers etc.). It appeared that most IXes only permit ISP
connections; some exceptions were mentioned: AMS-IX and CERN have
MBONE routers on their IXes, NZ has a Web cache and a news server
at the IX, and the NAP in Pennsauken has a separate FDDI (in "ICM-
land") for an MBONE router.
Lastly, more pointers to other IXes are available in Bill Manning's
collection.
3. DNS use & abuse
In short, the quality of the DNS leaves a lot to be desired.
(Stronger wording was used at the meeting...). The quality seems
to be inversely proportional to the amount of clue or the usage
level of the DNS.
Randy Bush urged everyone to start auditing their data; he had
done a scan of the .COM, .EDU +++ zones and found 236 zones which
had been delegated to one of his name servers but which he had
never heard of (much less configured on his name servers).
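As an illustration of such an audit, each delegation can be checked
by asking the listed name server for the zone's SOA record and
testing the AA (authoritative answer) flag in the reply. A minimal
sketch, assuming the dnspython library and a hypothetical list of
(zone, name server) pairs:

    # Sketch of a lame-delegation audit; assumes the dnspython library.
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver

    def is_lame(zone, ns_name):
        """True if ns_name does not answer authoritatively for zone."""
        try:
            # Look up the name server's address first.
            ns_addr = dns.resolver.resolve(ns_name, "A")[0].address
            query = dns.message.make_query(zone, dns.rdatatype.SOA)
            reply = dns.query.udp(query, ns_addr, timeout=5)
            # A correct delegation answers with the AA bit set.
            return not (reply.flags & dns.flags.AA)
        except Exception:
            # Timeouts, refusals etc. also count as broken here.
            return True

    # Hypothetical input: delegations to audit.
    for zone, ns in [("example.com", "ns1.example.net")]:
        if is_lame(zone, ns):
            print("lame delegation: %s -> %s" % (zone, ns))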
A point was made that two separate name servers for each zone may
not be enough, and a painful experience was cited when a primary
was down for a long time and the (single) secondary was down for a
short one.
This is possibly fodder for a new I-D.
It was mentioned that InterNIC is currently tightening their
requirements for technical correctness before delegating, and that
the RIPE NCC and APNIC are already checking the data before
delegating (although RIPE NCC and APNIC only handle in-addr.arpa
domains).
4. Draining 192/8 ("Draining the swamp")
This is an effort undertaken to get people to voluntarily return
address space in 192/8 and either renumber into private address
space or into more aggregatable address space.
The InterNIC data was used to try to contact people; of the
6000 requests sent, 1500 failed and only 1800 responded. David
Conrad from APNIC commented that this wasn't too surprising, and
that they typically have approximately 85% failure rate due to
stale data.
An overview of the currently active routes in this address space
can be found in http://www.isi.edu/div7/pier/whose-routes.
Peter Lothberg commented that giving back IP addresses within 192/8
might be a problem since some registries appear to insist on having
a customer relationship with the party giving back the addresses
(the RIPE NCC was mentioned).
Mike O'Dell commented that the process of keeping the registry data
up-to-date is fundamentally broken, since we are using humans to
keep track of what is essentially "computer data", and that some
improvement to this process should be possible.
Other address blocks, e.g. 198/8 and 204/8, were suggested as
targets for similar activities.
5. IP address registry issues
David Conrad from APNIC made a presentation about the formal
organization of the IP address registries.
The "historic" model is a hierarchy US DoD - FNC - IANA - InterNIC.
RFC 1466 introduced RIPE NCC and APNIC as clients of the InterNIC.
This model is, as its name indicates, "historical", and will
probably not continue.
The new so-called "politically correct" model is a hierarchy ISOC -
IAB - IANA, with InterNIC, RIPE NCC and APNIC all answering to
IANA, and still with no end-users directly as clients of the
regional registries. This is the likely future.
A more "pragmatic" model would change the top of the hierarchy in
the PC model to have the IANA answer to the ISPs. This binding
currently does not exist. This model is slightly "unstable", in
that it lacks end-user representation.
A couple of comments:
o IPv4 address space is becoming more scarce (and more valuable).
However, it's not as scarce as "route slots"!
o Registries can't modify their policies at the whim of ISPs
o The end of unallocated "free" address space doesn't necessarily
mean the end of the Internet or that it will stop growing.
o Fiscal issues: all registries except the InterNIC are
self-funding
There is a need to nurture the IANA to work independently of
political disturbances.
6. Routing Arbiter route flap report
Craig Labovitz made a presentation of the work the RA is doing to
present Internet routing flap reports. Collection of raw data is
presently done on 3 of the "priority NAPs" and 1 other NAP. What
has been made available so far can be seen on
http://www.ra.net/statistics/
and other interfaces to the data will be forthcoming soon. The RA
team is soliciting suggestions for alternate reports and ways to
present the data.
Some observations:
o The Internet is unstable and is getting worse!
o There is a minority of ASes who are responsible for the majority
of route flaps
o The instabilities are increasing despite the deployment of BGP
dampening code (a sketch of the dampening mechanism follows this
list)
o The general level of "routing experience" in the ISP community
is decreasing (and thus the ability to tackle these problems)
o The route flaps are (not unexpectedly) worst during prime time
EST.
o In general, both filtering of routing updates and dampening are
good things.
o There are a number of /32 prefixes, RFC 1597 (private) addresses
and even default routes being announced. Should there be a bogon
report per source AS (see the sketch at the end of this section)?
Should this be extended to check consistency of routing?
o The IRR can be useful as a source of filtering data.
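For reference, the dampening mechanism mentioned above works
roughly as follows: every flap adds a fixed penalty to a route, the
penalty decays exponentially with a configured half-life, and the
route is suppressed while the penalty is above a suppress threshold
and re-used once it has decayed below a reuse threshold. A minimal
sketch (the parameter values are illustrative, not deployed
defaults):

    # Minimal sketch of BGP route flap dampening; parameters are
    # illustrative only.
    import math

    PENALTY_PER_FLAP = 1000.0
    SUPPRESS_LIMIT = 2000.0    # suppress a route above this penalty
    REUSE_LIMIT = 750.0        # re-advertise below this penalty
    HALF_LIFE = 900.0          # seconds for the penalty to halve

    class DampenedRoute:
        def __init__(self):
            self.penalty = 0.0
            self.last_update = 0.0
            self.suppressed = False

        def decay(self, now):
            # The penalty decays exponentially with the half-life.
            elapsed = now - self.last_update
            self.penalty *= math.exp(-elapsed * math.log(2) / HALF_LIFE)
            self.last_update = now

        def flap(self, now):
            # Each withdraw/re-announce cycle adds a fixed penalty.
            self.decay(now)
            self.penalty += PENALTY_PER_FLAP
            if self.penalty > SUPPRESS_LIMIT:
                self.suppressed = True

        def usable(self, now):
            self.decay(now)
            if self.suppressed and self.penalty < REUSE_LIMIT:
                self.suppressed = False
            return not self.suppressed

A route that flaps several times in quick succession is suppressed
and only becomes usable again after its penalty has decayed below
the reuse limit.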
New reports are still to come. Suggestions are welcome, please
contact [email protected].
7. Registry and directory services
Jerry Scharf from Vixie Enterprises gave a presentation about the
project sponsored by the CIX to produce a registry tool suite.
This is seen as needed as more registries become established. The
focus is both on tools and documentation. Among the tools
envisioned are:
o The RIPE database software
o selecting a ticketing system, including a system for
inter-registry or inter-NOC transfer of responsibility
o various other automation tools
Questions were raised regarding the scalability of this model --
would it be appropriate to have thousands of registries? Mike
O'Dell commented that more automation should be applied, and that
the current model needs more (incremental) improvement. Turning
this problem into an MPP problem isn't necessarily a good idea.
8. Network statistics and measurements + architecture for information
provisioning.
This was partly a summary of an NSF workshop held approx. 1 week
before the IEPG meeting. NN presented the summary.
The availability of statistical data in the post-NSFnet network has
radically decreased. Combined with the rapid growth of the network
and the increasing expectations of the users, this may quickly turn
into an unstable situation. Most of the commercial service
providers seem to have a (too) short-term focus in their planning.
A clear need was foreseen for Quality of Service metrics. Among
the issues brought up were:
o trace-driven experiments
o specs for router vendors
o aggregate flows
o specs to give to ISPs
o routing stability
A basic problem is that a good crystal ball is needed to predict
the next killer application. Statistics are of course useful, but
they are not able to predict drastic changes.
Some pro-active steps can be taken:
o More widespread deployment (and use, when appropriate) of IP
multicast
o Improve protocol efficiency (HTTP was pointed out as a
particularly bad example, since it opens a separate TCP
connection for each object fetched)
On the last front the NSF is sponsoring a project on deployment of
HTTP cache servers, done as an experimental prototype. They are
currently using Harvest, more information is available from the web
at http://www.nlanr.net/Cache/Statistics. Others were encouraged
to deploy HTTP caches and to interconnect them with the other
caches. Optimum placement of caches was pointed out as an
outstanding issue.
9. Internet traffic profile reports
A pair of short presentations were made regarding efforts to
inspect traffic profiles more closely.
Dorian Kim from CICnet showed some graphs displaying strong traffic
growth. He pointed out that over the last 270-day period the
average packet size has increased from approx. 200 to approx. 270
bytes, and the average is continuing to increase. Part of the
increase can probably be explained by wider deployment of path MTU
discovery; another explanation could be the ever-increasing
percentage of HTTP traffic.
Olivier Martin from CERN made a short presentation on the traffic
pattern measurements they are doing at CERN. They use a special
version of tcpdump which logs 1 of every 20 packets. The purpose
is to check the distribution of the load between the users of their
shared connection to MCI.
10. Real-time performance analysis
Steve Corbato made a presentation of the tools created and
measurements done to get a grip on real-time network performance at
the University of Washington. The slides from his presentation are
available from http://weber.u.washington.edu/~corbato/iepgtalk/.
Earlier data from Bellcore was collected in 1989 / 1992, and the
effects of the changes seen since then (TCP Reno, HTTP, DS-3
networks and the increase in the bandwidth-delay product) would be
interesting to investigate.
The method used was to do frequent SNMP polls (1-2 Hz) in
combination with special cisco software which updates its SNMP
counters more frequently than the standard version, and using the
value of sysUpTime.0 as the reference clock.
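In outline, each poll reads an interface octet counter together
with sysUpTime.0, and the rate over the interval is the octet delta
divided by the tick delta. A minimal sketch of the computation (the
snmp_get helper is a placeholder, not a real library call):

    # Sketch of rate estimation from frequent SNMP polls, using the
    # router's own sysUpTime.0 as the reference clock.
    import time

    SYS_UPTIME = "1.3.6.1.2.1.1.3.0"         # sysUpTime.0, 1/100 s ticks
    IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"  # ifInOctets, interface 1

    def snmp_get(oid):
        """Placeholder: fetch one integer-valued OID from the router,
        e.g. via an SNMP library or an external snmpget invocation."""
        raise NotImplementedError

    def poll_rates(interval=0.5):
        """Yield bit rates computed between successive polls (1-2 Hz)."""
        prev_ticks = snmp_get(SYS_UPTIME)
        prev_octets = snmp_get(IF_IN_OCTETS)
        while True:
            time.sleep(interval)
            ticks = snmp_get(SYS_UPTIME)
            octets = snmp_get(IF_IN_OCTETS)
            # ifInOctets is a Counter32 and may wrap between polls.
            delta_octets = (octets - prev_octets) % (1 << 32)
            delta_secs = ((ticks - prev_ticks) % (1 << 32)) / 100.0
            if delta_secs > 0:
                yield 8 * delta_octets / delta_secs  # bits per second
            prev_ticks, prev_octets = ticks, octets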
Plots showing the peak/average ratio (approx 1.9 +/- 15% in this
measurement), rate distribution and rate spectrum were shown, as
well as a couple of other plots characterizing the data set and
measurements.
The interesting region of the load from a real-time perspective is
the 99th to 99.7th percentile.
This technique is primarily a "quick and dirty" technique useful
for ISP/campus nets in looking at the real-time performance of
their networks.
11. Closing
The meeting was closed with a short discussion of the usefulness
of these meetings and the frequency with which they should be held.
After some discussion it was agreed that there should be an IEPG
meeting in conjunction with every IETF meeting.