IEPG Meeting
14 March 1999
Minneapolis
Authors:
- Paul Wilson, APNIC
- Mirjam Kuehne, RIPE NCC
Cable TV Standards
Ran Atkinson, @Home
Overview
- Cable 101
- Data over Cable
- Internet 101
- etc
terminology:
- Homes/Households Passed
- the number of Homes/Households physically passed by the cable
- Subscriber
- Penetration Rate
- Radio Frequency
- Head-end
- location on CATV network where signals are input/output
- Hybrid Fibre Coax
- critical development for data (fibre from headend to neighbourhood
OTN, then coax to house)
- Optical Transfer Node (OTN)
- unit serving 300-1200 homes (mounted on power poles)
- Amplifier
evolution of cable networks
- in the beginning
- satellite farm at Head End goes on to the coax
- up to 20+ amps in path
- lots of noise
- noise accumulates ("explodes") the closer you get to the head-end
- today
- different topology (satellite also upstream)
- master headend, feeds others (via SONET in the US or other network
transport)
- each headend feeds a number of OTNs
- better noise characteristics
- more bandwidth
- consolidation in North America, leads to Multiple System Operators
(MSOs)
- example: originally 3 cable operators in 6 cities,
now all combined, things scale better
- Franchise swaps to regionalise markets
- headends interconnected via fibre-rings
- single video distribution point for all headends
- plant upgrades needed for cable modems:
- HFC
- two way activation
- capacity increase (550-750MHz)
Note: standard channel size is 6MHz for NTSC, 8MHz for PAL
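As a rough worked example of what the 550-750MHz capacity upgrade buys (illustrative arithmetic, not figures from the talk):

# Rough worked example: how many extra 6 MHz NTSC channels a
# 550 -> 750 MHz plant upgrade makes available (illustrative only).

CHANNEL_WIDTH_MHZ = 6              # NTSC channel width (8 MHz for PAL)
OLD_TOP_MHZ, NEW_TOP_MHZ = 550, 750

extra_spectrum = NEW_TOP_MHZ - OLD_TOP_MHZ            # 200 MHz of new spectrum
extra_channels = extra_spectrum // CHANNEL_WIDTH_MHZ  # ~33 additional channels

print(f"Upgrade adds {extra_spectrum} MHz, i.e. roughly {extra_channels} "
      f"extra {CHANNEL_WIDTH_MHZ} MHz channels for video or data")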
data over cable
- downstream:
- from the headend to the home
- frequency range: 50 - 650 MHz
- data rates: 1.2 - 40 Mbps (typical)
- upstream:
- frequency range: 5 - 42 MHz
(Note: much more interference in these bands)
- data rates: 500 Kbps - 1.5 Mbps
issues:
- HFC bandwidth shared among sets of users (a rough per-subscriber sketch follows below)
- upstream bandwidth is scarce
(no guarantee of minimum bandwidth or fair distribution)
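A rough sketch of the sharing arithmetic: the per-subscriber share of one downstream and one upstream channel on a node, using the 750 homes-passed figure from the talk plus assumed penetration, concurrency and channel-rate numbers.

# Hedged sketch of why shared HFC bandwidth matters. 750 homes passed per
# node is from the talk; penetration, concurrency and channel rates are
# assumptions for illustration only.

homes_passed_per_node = 750      # from the talk
penetration_rate      = 0.10     # assumption: 10% of homes passed subscribe
concurrency           = 0.30     # assumption: 30% of subscribers active at once

downstream_mbps = 27.0           # assumption: one downstream data channel
upstream_mbps   = 1.5            # upper end of the upstream figures given

active = homes_passed_per_node * penetration_rate * concurrency
print(f"~{active:.0f} active users sharing the node")
print(f"downstream share: ~{downstream_mbps / active:.2f} Mbps each")
print(f"upstream share:   ~{upstream_mbps * 1000 / active:.0f} Kbps each")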
many vendors now in the Multimedia Cable Network System (MCNS) effort
- 3com, cisco, nortel, thomson, samsung, sony, many more
cable router in headend:
- interface between data network (digital) and RF plant (analog)
- serves multiple nodes
- sometimes a bridge/switch, not a router
- also local servers at headend
data over cable: plant
- downstream channel shared by multiple nodes
- upstream channel dedicated to a node
- ingress and rf noise problems for upstream
- 750 homes passed per node
data over cable: home
- catv line to house
- splitter divides the signal between TV and cable modem.
- 10BASE-T link from cable modem to PC.
- F-connector for analog cable connection
- 10BASE-T interface for connection to subscriber computer
- simultaneous data and video (some assume only one at a time - wrong)
- support for home lans
- varies with vendor and model of cable modem
data over cable: standards
- different standards in US and Europe
US:
- MSOs created the MCNS partnership to develop standards
- relies on IETF standards: IP addressing, DHCP; obtains configuration
files and new SW via TFTP; management via SNMP
- ITU formally standardised MCNS 1.0 as ITU-T J.83 (~15 vendors)
Europe:
- ECCA/EuroCableModem developing a DVB/DAVIC-based standard
- oriented to PAL, 8MHz channel width
- specification in progress, not finalised yet
- results:
- multi-vendor interoperability
- much lower costs!
- retail distribution easier
- faster deployment
Internet trends:
- demand has soared
- overall entropy is rising fast
- no central backbone
- reliability dropping
- everyone is an ISP
- transfer rates are dropping
- national comms infra is hindered in expansion
- new apps rising
- reaction times shrinking
@Home's perspective:
- fat pipes don't solve the problem
- web/ftp proxy server better
- network intelligence built in
- push data closer to user
- push computation closer to user (e.g. java)
- almost all customers use Windows
- multicast is a perfect fit
- simplicity wins
Diagram:
global Internet
|
NAPS
|
@Home private ATM backbone
|
22 regional data centres
|
many headends connected to these
|
users connected to headend (via OTNs)
Notes:
- ATM currently being replaced
- 300,000 subscribers end of 1998
- monitoring of 300,000 modems with SNMP
- big task
- aggregation of monitoring within system
- "interesting" events sent to ops centre
conclusions:
cable data networks:
- can provide better service, more bandwidth than normal modem
- much more complex to manage
- scaling is a big issue
- standards based product
- start to appear
- prices going down (50% drop in past 18-24 mths)
- will increase deployment
Questions
Why low freq range for upstream?
- Because the noise properties of that band are not suitable for video.
- Bandwidth is indirectly limited by the use of "conservative" modulation
schemes.
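To make the "conservative modulation" point concrete: QPSK carries 2 bits per symbol, 64-QAM carries 6. The channel widths and symbol-rate factor below are assumptions for illustration, not figures from the talk.

# Rough illustration: fewer bits per symbol in the noisy upstream band means
# much less throughput from the same spectrum. Numbers are assumed.

def raw_rate_mbps(channel_mhz, bits_per_symbol, symbol_rate_factor=0.8):
    """Very rough raw bit rate: symbol rate ~ 0.8 * channel width (assumed)."""
    return channel_mhz * symbol_rate_factor * bits_per_symbol

downstream = raw_rate_mbps(6.0, 6)   # 6 MHz channel, 64-QAM (6 bits/symbol)
upstream   = raw_rate_mbps(1.6, 2)   # assumed 1.6 MHz channel, QPSK (2 bits/symbol)

print(f"downstream (64-QAM): ~{downstream:.0f} Mbps raw")
print(f"upstream (QPSK):     ~{upstream:.1f} Mbps raw")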
Oversubscription strategies?
- This is not meaningful.
- Result of sharing circuits is graceful degradation of quality.
Some companies may still oversubscribe more than with modems?
- Narrowest pipe for headend to upstream is 4xT1 (even with 0 cust).
- Normally 100Mbps from headend to RDC.
- 50% util would be unusually high.
- Caching strategy
- customers use web and email.
(Web is cached at headend.)
Net reliability is decreasing??
- Compare with mid-late 80s
- simple networks with few contact persons.
- Now
- many problems between tier 1 and tier 2 networks. Not clear who to
call.
- Generally, more "moving parts" means less reliability.
RTFM (Realtime Traffic Flow Measurement)
Nevil Brownlee, University of Auckland
Outline:
- research project in progress, feedback welcome
- TCP stream measurement implemented in NeTraMet Oct 98
- try to find out about problems before the user does
- TCP performance indicators presented at ISMA
(Internet Statistics and Metrics Analysis) workshop, Jan 99
- New material added Jan-mar 99 on time and transfer rate distribution
attributes for TCP streams
- a bidirectional RTFM flow may be the sum of many TCP streams
- for each such stream the meter can maintain some TCP state
- flags seen (SYN, FIN, RST)
- last ACK and Seq Number for last packet
- direction, arrival time, bytes etc.
- TCP info for the flow can be aggregated
- max active, total streams
- total bytes sent and acked
To-From Sends and Acks:
- packet loss causes resends for both directions
- meter can only see one point in the stream
- sent bytes can only be an estimate
(lost packets, wrongly retransmitted packets)
- acked bytes can be measured accurately from seq field
(only need packets in one direction) or ack field
a few new TCP attributes added in NeTraMet 4.3
- tcpdata attribute is a list of the flow's tcp info
- sample SRL program to measure this
- data was collected at the UA Internet gateway during 15-19/13/98
- highest observed maxstreams active was 28
- sample tcp data
- 5 min interval
- table of source ip, dest ip, n/mx streams, to (send/ack) from (send/ack)
Byte Loss Percentage (BLP), a 'TCP Performance' indicator:
- For each direction of a flow tcpdata gives us
S = number of bytes sent
A = number of bytes acked
BLP = (S-A)*100/A
- Larger of toBLP and fromBLP should be useful as a single metric
showed interesting plots
- Slides comparing measured BLP with "Surveyor" plots
(delay times for packets from Harvard to NZ) taken at the same time.
- byte loss for various ASNs
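A minimal sketch of the BLP calculation defined above, using the tcpdata quantities S (bytes sent) and A (bytes acked) per direction; the sample values are invented.

def blp(bytes_sent: int, bytes_acked: int) -> float:
    """BLP = (S - A) * 100 / A, per the definition in the talk."""
    return (bytes_sent - bytes_acked) * 100.0 / bytes_acked

to_blp   = blp(bytes_sent=1_050_000, bytes_acked=1_000_000)   # ~5.0
from_blp = blp(bytes_sent=52_000,    bytes_acked=51_500)      # ~1.0

# The larger of the two directions is suggested as a single per-flow metric.
print(f"flow BLP = {max(to_blp, from_blp):.2f}%")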
Sequence Decrease Percentage (SDP), another TCP indicator:
- For each direction of a flow TCPdata was extended to measure
D = number of packets where seq < previous
- So Sequence Decrease Percentage
SDP = D*100/P
where P is the total number of packets in the stream.
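A companion sketch for SDP: count packets whose sequence number is lower than the previous packet's (a retransmission indicator as seen at the meter). The packet list is invented sample data.

def sdp(seq_numbers) -> float:
    """SDP = D * 100 / P, with D = seq decreases and P = total packets."""
    decreases = sum(1 for prev, cur in zip(seq_numbers, seq_numbers[1:])
                    if cur < prev)
    return decreases * 100.0 / len(seq_numbers)

sample_seqs = [1000, 2460, 3920, 2460, 5380, 6840, 5380, 8300]  # two decreases
print(f"SDP = {sdp(sample_seqs):.1f}%")   # 2 * 100 / 8 = 25.0%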
Turnaround times for tcp data
- For non-TCP flows, NeTraMet can build "turnaround" distributions
using the time between successive packets in opposite directions
- Can't do this for TCP, due to windowing
- Proposal: only use packet pairs where the 1st packet sends data
(non-zero TCP length), and the 2nd packet comes back in response.
- Eg web cache queries?
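A sketch of the proposed pairing rule, restricted to one query direction for simplicity. The packet records below are invented (time in seconds, direction, TCP payload length in bytes).

packets = [
    (0.000, "to",   512),   # query carrying data
    (0.042, "from", 1460),  # first packet back -> ~42 ms turnaround
    (0.050, "from", 1460),  # rest of the same response (ignored)
    (0.300, "to",   300),   # next query
    (0.335, "from", 800),   # first packet back -> ~35 ms turnaround
]

def turnarounds(pkts, query_dir="to"):
    """Time from a data-carrying packet in query_dir to the first
    packet seen coming back in the other direction."""
    pending_t = None
    for t, direction, payload in pkts:
        if direction == query_dir and payload > 0 and pending_t is None:
            pending_t = t                     # query sent, wait for a reply
        elif direction != query_dir and pending_t is not None:
            yield t - pending_t               # first reply packet seen
            pending_t = None

for dt in turnarounds(packets):
    print(f"turnaround {dt * 1000:.0f} ms")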
Graphs
- histogram plots showing good day and bad day.
- Ping resp time is always less than tcp resp time
(which depends on speed of cache server in question)
- Contour plots on histograms
- shows turnaround times and sudden changes in the delay times
- not much progress yet in using these measurements to actually
analyse network delay times
- not clear yet how useful this is
TCP Transfer Performance
- What factors influence user perception of Internet perf?
- transfer rate for large files, time to display web pages etc
- For each tcp stream netramet can measure
- tcpsize = diff from 1st and last seq fields
- tcptime = ? (missed this)
- tcprate = ? (missed this)
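The tcptime and tcprate definitions were missed in the notes above; the sketch below assumes the obvious reading (tcptime = stream duration, tcprate = tcpsize/tcptime) purely for illustration, with invented sample values. tcpsize follows the stated definition.

def tcp_transfer_stats(first_seq, last_seq, first_time, last_time):
    tcpsize = last_seq - first_seq                 # bytes, per the talk
    tcptime = last_time - first_time               # assumed: stream duration
    tcprate = tcpsize / tcptime if tcptime else 0  # assumed: bytes per second
    return tcpsize, tcptime, tcprate

size, dur, rate = tcp_transfer_stats(first_seq=1_000,  last_seq=301_000,
                                     first_time=10.0,  last_time=16.0)
print(f"tcpsize={size} B  tcptime={dur:.1f} s  tcprate={rate / 1000:.0f} kB/s")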
conclusions:
- BLP and SDP seem to be useful in monitoring performance of TCP flows
- TCP turnaround distributions indicate both net and host response times
(will need to look at other applications than web)
- TCP transfer rate and time distribution look interesting
- More work needed to...
- improve understanding of BLP, SDP and TCP-turnaround
- develop useful monitoring tools using these attributes
- work together with users to find out what their perception is
Questions, Comments:
Paul Vixie: ASNs might not be the best way to look at this.
We need smaller bucket size - ie customer nets within ASNs.
Eg 701 spans too many continents.
A: Agreed.
Ran Atkinson: can you extract similar NetFlow data from a cisco router?
Would that be a useful alternative to NeTraMet for collecting data?
Nevil: cisco NetFlow has not primarily been developed as a data collection
tool; it only gives summaries, you can't see any detailed information.
The NeTraMet tool also doesn't try to be a router at the same time.
Y2K and Root Servers
Mark Kosters, NSI
Y2K program: What is InterNIC doing wrt Y2K
- history & schedule
- which tests done
- root server testing
- contingency plans
- next steps
- other roots
Summary of programme:
- high visibility and oversight
- wide awareness and participation within the organisation
- cross-departmental teams
- structured methodology and independent validation
Schedule:
- Inventory - all systems
- Assessment - risk mgmt
- Remediation plans
- Test and validation
- Implementation
- Contingency planning
Testing:
- Independent test environments
- Multiple configurations
- Automated scripts
- full date range coverage
- current dates (before Y2K)
- future dates (after Y2K)
- key dates (12 special cases)
root NSes:
- idea to make sure that there would be NO impact at 2000 and other dates
(all components of systems, SW and HW)
- wide range of tests
- scripts cover common query types
- wide range of names
- no problems so far
- key dates tests went well
- any date in dec 98
- 31 dec 98
- 8 sep 99
- 9 sep 99
- etc (missed the rest)
contingency planning:
- risk assessment and mitigation
- make sure nothing will break as we go on
- if it would, what would we do
- identify type, impact and duration of problems, before it breaks
- identify alternatives, and triggers for alternatives
- full staffing on 1 Jan 2000 (a Saturday)
what about the other root servers:
- a number of meetings currently organised among the root name server
operators (Y2K is also discussed)
- each root is independent
- responsible for its own testing and remediation
- need to examine communication scripts operating between the servers
Phil: NSI has a lot of resources and has put a lot of effort into this.
Has NSI been sharing the information with the other root name servers?
Mark: exchange with other roots so far on an informal basis.
General feel is that sharing their results will be ok, and will be
happy to divulge what they have done.
Bill: a lot of root name servers use GPS timing, and the GPS rollover issue
is coming up on 28 August 1999. Many people relying on GPS are issuing
"compliance statements" on this. Has NSI addressed it?
Mark: no
Brian: has this been discussed in the root name server advisory committee?
Mark: yes, and we will continue to do so.
Phil: the last stats slide showed the A root NS sustaining 2 Mbps of load.
Are you provisioned to cope with more (6 or 7 times as much) just in case
the other servers break?
Mark: Would like to ensure that they are better provisioned even now, and
trying to fight that battle internally inside NSI.
Paul Vixie: Y2K issues
only a few additions:
- all root NS are different, run different operating systems, routers,
SW etc. This increases robustness against being hit by any single
problem.
- the only common element is BIND; only if this breaks would all root NSs
break
- ISC has gone through the code; there is no formal posting about Y2K
compliance on their web site. You have to be a customer, so that ISC
can pass customers' money on to their insurance company.
- F root server: some other RS ops have F as a 2nd point for zone transfer
requests after A. Currently have a 100 Meg connection, operating now at
about 5Mbps (sustained load).
Bill: probably almost all root NSs would be able to answer all queries
they receive on 1 Jan 2000; the question is whether they receive the
queries. The problem is not in the root NS system, it is in the telephone
system. Spoke with the US FCC. There may be problems in this area, so the
FCC should prioritise circuit restoral, to ensure that the root server
system can get reconnected to the infrastructure.
Paul assures that the NS operators have plenty of experience operating the
root NS system, even with half the servers down. The Internet doesn't stop
working, it just gets a little slower.
ARIN status report
Michael O'Neill
Overview:
- registration stats and services
- news from arin
- membership details
- meeting coming up
stats for Jan, Feb 1999 (Jan / Feb):
/24s issued to ISPs:           3496 / 3128
/24s issued to end-users:        80 / 48
ASNs issued:                     74 / 110
  approved:                     110 / 118
ASNs pending:                    58 / 159
IP requests rejected:             4 / 2
IP address transfers:            13 / 8
new ISP requests:                87 / 96
new IP requests:                 45 / 27
first-time allocs to ISPs:       22 / 39
  of those, multihomed policy:    9 / 17
rejected ISP requests:          n/a / 3
ISPs receiving allocation:       46 / 57
number of allocations to ISP members in 1998:
small (/24 - /19): 227
medium (/18 - /16): 183
large (/15 - /14): 32
XLarge (/14): 7
news, new initiatives, new policies:
- ARIN now issues /20s and shorter prefixes
- everyone else has to go to their upstream ISP
- multi-homed policy remains unchanged:
- demonstrate efficient use of /21
- registration of /20 can be made
- must have prefixes announced by at least 2 of their upstreams
- agree to renumber
ARIN routing registry
- launched on 8 Feb 99
- documentation on http://www.arin.net/rr
- stats since launch:
- 6 maintainer objects
- 3 aut-num objects
- 10 objects in total
IPv6
- first IPv6 templates/forms being tested
- testing whois with IPv6 support
- registration services tests
- allocations
- reassignments
- modifications of subTLA registrations
ARIN staff introductions:
ARIN membership meeting:
- Atlanta, Georgia, 12-13 April
- first day: working group meetings (for the first time):
- routing registry
- grandfathered IP registration
- Rwhois/Swip
- ARIN database
- second day: plenary meeting
URLs:
Questions:
Paula: is it a requirement to register the ASN in the routing registry
now?
Michael: no, it is only encouraged
Phil: what is the scope of the grandfathered WG?
Kim: issues including
- DB cleanup
- issue of current members paying for services given to those old cases
Randy Bush: How is the co-ordination with the other IRRs, e.g. the RADB?
Thought that ARIN would take over the RR from Merit's RADB.
Kim: this is not the idea at the moment; more co-ordination is needed
with Merit. Discussion postponed until later, when Abha is in the room.
Initially the main idea was to link IP and RR to be able to verify
announcements.
RIPE NCC Status report
Mirjam Kuehne
see http://www.ripe.net/meetings/pres/iepg-9903/
APNIC Status report
Paul Wilson
see http://www.apnic.net/presentations/IEPGmar99
BIND version 9
David Randy Conrad
Initial comment from Paul Vixie: recent developments re root server.
It is no longer possible to ftp the .com zone file from NSI.
This has been the case for some time, but a private arrangement existed to
allow some people to access the files. This arrangement has recently
ended.
ISC received advice from NSI asking that zone transfers be turned off,
then sent email to other roots and others asking their advice - no
decision made yet as to whether they should turn it off.
BIND version 9
BIND version 8.2 is in the final stages of beta, due for release tomorrow.
One remaining issue: some changes in the architecture were required, which
resulted in a 30% increase in memory consumption. Therefore BIND 8.2
couldn't be run on some root name servers. This has been resolved.
Other issues remain to be resolved relating to the partial implementation
of DNSSEC in BIND 8.2, and export restrictions.
Code (executable?) is publicly available on www.isc.org, but there
is some question whether ISC is allowed to self-classify this code.
The government might ask them to stop in future (even though crypto is used
for authentication/signing, not for encryption).
Question: John Gilmore, in conjunction with others (Network Associates),
wrote a DNSSEC implementation around 6-9 months back; its release was
officially approved. It was then referenced in a book and the government
realised; now DNSSEC can't be shipped any more.
A: DRC - details, including stripping of DES and other code to ensure
that only authentication is done, and not any encryption.
TIS DSA (Digital Signature Algorithm) can be exported.
Major issue is that code is publicly available, that means everyone can
go ahead and implement strong cryptography.
BIND 9:
In august 98, unix vendors agreed with ISC to fund new version.
Feature requirements:
- scalability
- thread-safe
- deal with .com zone scaling
(note: zone file 600M right now, mem image more like 850Meg)
- security (to support dnssec)
- auditability
- support firewalls (split dns)
- portability
- maintainability
- new features (ixfr, dynamic dns, notify...)
- better conformance with standards.
- IPv6 support
Operational issues:
- reliability
- concept of abstract namespace
- partition front end protocol support from back end database
(maybe SQL backend some time in the future)
- IPv6 support
- needs to be highly parallel
- eg can answer queries while updating a zone.
Note DNS protocol has become vastly more complex.
ISC not sure that IPv6, DNS sec, A6 records, and other stuff can
scale globally on the "live" Internet. Code will work, but
successful deployment yet to be proven.
A6 records give a major potential problem:
single query - multiple secondary queries - multiple answers
(big potential for denial of service attacks)
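A toy model (not real DNS code; record data invented) of the A6 fan-out concern: each A6 record may reference one or more "prefix" names, which may in turn reference further names, so one question can expand into a tree of follow-up queries.

# name -> list of prefix names still to be resolved ([] = chain ends here)
a6_chain = {
    "host.example.":        ["site1.example.", "site2.example."],
    "site1.example.":       ["provider-a.example.", "provider-b.example."],
    "site2.example.":       ["provider-c.example."],
    "provider-a.example.":  [],
    "provider-b.example.":  [],
    "provider-c.example.":  [],
}

def queries_needed(name: str) -> int:
    """Count the queries a resolver would issue to complete the chain."""
    return 1 + sum(queries_needed(nxt) for nxt in a6_chain.get(name, []))

print(f"1 question -> {queries_needed('host.example.')} queries")  # prints 6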
Implementation and porting:
Due to limitations of BIND 8 (single threaded, hard to maintain etc),
ISC decided to redesign and reimplement BIND 9.
(bind 9 was called "deep space bind")
First code drop has been made to funding organisations, but little
feedback so far.
Ported to Solaris 2.5.1, HP-UX, AIX, IRIX, Digital UNIX, Linux, FreeBSD,
NetBSD, OpenBSD, BSDI. Also to AS/400 and others.
NT support not deliverable at this stage (no one funded it!), but
ISC is taking care to ensure it can be done if funding arrives.
Brian: can you explain the issues around IPv6?
A: Have you done an analysis? It might be a chicken-and-egg problem,
since IPv6 is not deployed yet. But what are you concerned about?
Paul Vixie: where one query can result in more work being required to
answer than to generate the query, there is a problem, especially
if the increase in work is exponential with recursive queries.
A denial of service attack could be very easy.
If there were no multi-homing and renumbering requirements, it would
be a lot easier.
DNS root-server queries (A few scary notes)
Maurizio, LINX
RIPE NCC operates the K root server, hosted at LINX (London Internet
Exchange)
Has heard it said that 18% Of total packets on the net are DNS traffic???
(but not sure of this)
Diagram:
transit ISPs
| | | |
transit router -
collector Router
|
k root server
traffic statistics:
- average
traffic out : 600 Kbps
traffic in : 2.2 Kbps
26M packets per day
Query type distribution:
A 77%
NS 15%
MX 5%
Etc
.2% IPv6 queries?
Scary statistics:
26% valid root domain queries
6% valid queries, but server can't answer in an authoritative way
(.com, .net etc.)
.5% valid queries for TLD domains
66% queries for non-existent TLDs
(probably to do with Windows)
Hit parade of weird domain queries:
desktop, loopback, notes, mail, http, hotmail, yahoo, www,
proxy, server, Internet, news, Internet, exchange...
(request -pls put this on the IEPG list)
Paul Vixie: this is consistent with what he sees at F.
Agrees that it seems to be NT workgroup and similar names,
and because clients have no "negative name caches", errors are not
remembered and the queries get repeated - up to 10 times/sec.
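A minimal sketch of the "negative name cache" idea: remember NXDOMAIN answers for a short TTL so the same bad name is not sent to the roots over and over. Names and TTL below are illustrative.

import time

class NegativeCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.bad_names = {}            # name -> expiry time

    def remember_nxdomain(self, name):
        self.bad_names[name] = time.time() + self.ttl

    def known_bad(self, name):
        expiry = self.bad_names.get(name)
        return expiry is not None and expiry > time.time()

cache = NegativeCache()
cache.remember_nxdomain("desktop.")          # first lookup failed
if cache.known_bad("desktop."):
    print("suppressed repeat query for a known-bad name")
else:
    print("would query the root servers again")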
Question: what if we sent an answer in response to these bad names
eg 127.0.0.1?
no objections - people seem to like the idea.
Paul V will take it up with the IANA people.
Lars-Johan Liman: supports this. Havard Eidnes did statistics and
found out that much of this traffic originates from one place; there is a
region or an ISP that has deployed certain SW which generates these
queries. A new version of this SW no longer does this, but the old version
is still around.
Observations on Internet Routing
Abha Ahuja, Merit.
Measurement Infrastructure
- Probe machines at MAE-East, MAE-West, PAIX etc
- Default-free collector at UM
- Michnet instrumentation
- 28 Gig of accum data
Route tracker:
- combination of SW
- peers with ISP routers
- log all routing traffic to disk
- Maintain statistics
Review:
- pathological routing observed
- Duplicate withdrawals and announcements
- Private nets
- Found 30 second frequency component
- small ISPs have big impact on routing table load
Problems:
- misconfigurations
- router SW bugs etc
- DSU/CSU oscillation
- etc.
these have been fixed over time;
in 1996 there was a huge number of withdrawals, now reduced with updated router SW
Internet failures analysis
- default free feeds from various ISPs
- case study of Michnet
- mean time to failure, mean time to repair, and availability
default-free routes:
- 10% of the routes had under 95% availability
- route fail over: 80% of all Internet routes failed over 2 days,
due to path changes
- 75% of routes had failed in 30 days
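A sketch of how per-route availability, MTTF and MTTR can be derived from a log of up/down transitions for a single route; the event log below is invented (times in hours).

events = [(0, "up"), (100, "down"), (102, "up"), (400, "down"), (401, "up"),
          (720, "end")]   # ~30-day observation window

up_time = down_time = 0.0
failures = 0
for (t0, state), (t1, next_state) in zip(events, events[1:]):
    duration = t1 - t0
    if state == "up":
        up_time += duration
        if next_state == "down":
            failures += 1           # an up interval ended in a failure
    else:
        down_time += duration

availability = up_time / (up_time + down_time)
mttf = up_time / failures           # mean time to failure
mttr = down_time / failures         # mean time to repair
print(f"availability {availability:.3%}, MTTF {mttf:.0f} h, MTTR {mttr:.1f} h")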
how long does it take to get failures fixed:
- failures either got fixed very quickly
(the majority - 80% in under 2 hours)
or took much longer - up to 2 days
reasons of failure:
- maintenance and power outages
- fibre cut/circuit/carrier problem
- unreachable
want to find out the reasons for each outage.
looking for people who provide them with their default-free routing
tables.
See details on the Web:
http://www.merit.edu/ipma
- follow links to papers and graphs.
Visualisation of Internet routing tables
Kim Claffy, CAIDA
Summary:
- 2D visualisation was a mess
- 3D visualisation is much better
Demo and discussion of Skitter software for visualising BGP routing table.
URL: c/o CAIDA
Other discussion:
Question re Merit vs ARIN routing registry
- which is authoritative and in what order should they be consulted.
Abha:
Transition was discussed, but not under active discussion any more.
Discussion should resume.
RADB will continue, for as long as people keep putting data there.
Next meeting:
there is still a general desire to have one, so it will be announced
on the mailing list in advance of the next IETF.