[Dxspider-support] Old routing info. What to do ?
Rene Olsen
rene_olsen at post3.tele.dk
Fri Jan 17 19:51:06 GMT 2003
Hi.
There has been a lot of talk about routing problems lately. I have done a bit of
investigation, and I think the result is not very uplifting.
Here in OZ we don't have a lot of nodes, and we keep to ourselves because we still
have quite a few 1200 bps (1k2) links that cannot handle a lot of login/logout traffic. That is why all
nodes in OZ with links to the outside world have them either isolated or filtered very
heavily.
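For anyone not familiar with the practice: isolating or filtering a link is done from the DXSpider sysop console. The sketch below is from memory of the DXSpider administration manual, so check the exact syntax against your version; on0dxk-5 is just the partner node from this story, and "oz" is the prefix being kept local:

```
set/isolate on0dxk-5
reject/route on0dxk-5 call oz
```

The first command stops routing information from that partner being propagated onward; the second (an outgoing route filter, if you prefer filtering to full isolation) drops routing broadcasts for OZ calls on that link.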
Nevertheless, I found that all of OZ is floating around the world. What has happened is
the following.
Between one and two months ago, oz2dxc made a link with on0dxk-5. The first try was
neither isolated nor filtered, because the sysop forgot, and therefore all the OZ nodes and their
users were sent to on0dxk-5. Both are spider nodes. The link was broken and then isolated at
both ends.
But the mistake had been made, and now, one to two months later, that old info is still floating
around the net. It is really amazing that this is possible.
What I have found is that the info comes from clx systems still having the old info.
It is a well-known fact that clx cannot handle more than one active link. If you start
running more than one active link, you are asking for problems: sooner or later your node list will
be hopelessly corrupted and out of date.
A clx system running with just one active link and several passive links, for spot/announce
backup, will work just fine.
Now, what do we do about this? It can't be that we have to live with it. We need to
figure out a way to fix it.
The hard way would be to hard-code spider so that whenever it links to a clx node, it
accepts nothing but the local users of that node. That would surely cause a lot
of bad feelings, I think, but it would do the job, and maybe more sysops would go for
spider or dxnet instead of clx.
Another way would be for spider sysops with links to clx systems to make sure that the
clx node they link to does not have other active links. This may work, but it depends
heavily on the spider sysops checking their clx links, if they have any.
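As a possible stopgap on the spider side of such a link, an incoming route filter could restrict what the clx partner is allowed to announce. This is only a sketch: the filter keywords and their exact behaviour should be verified against the filtering chapter of the DXSpider administration manual, and on0dxk-5 / the "on0dxk" prefix are just example values:

```
accept/route on0dxk-5 origin on0dxk
```

The intent is to accept only routing information that originates at the linked clx node itself, so stale third-party routes it is re-broadcasting get dropped at the spider end.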
So, my question really is: what do we do about this? Do we just accept that we have
300 nodes and 3000 users, knowing that a lot of it is not correct anyway, or do we
actively do something to get to the bottom of it?
Let's get a good debate going about this, and let's hear what Dirk thinks about it.
My opinion is that we have no reason to put up with it any longer. The clx developers
have had more than two years to fix this, and they have not yet done so. They probably
won't do it either, since the last clx release was in November 2000, I think.
Vy 73 de Rene / OZ1LQH