[Dxspider-support] List of nodes that should be unlocked

Dirk Koopman djk at tobit.co.uk
Wed Feb 22 11:06:15 GMT 2023


I have been looking at the code and doing some thinking, and I am afraid 
that we may have started to go down the wrong path by using set/lockout. 
As currently implemented, when a new node appears on the cluster, a 
number of things happen:

1. A new user record is created.
2. That record is marked as a basic node.
3. It is locked out.
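
In a hypothetical Python sketch (the real code is Perl, and the field 
names here are my invention rather than the actual user-file schema), 
that sequence amounts to:

def on_new_node_seen(users, call):
    """Hypothetical handler for a node not seen before."""
    rec = users.setdefault(call, {})     # 1. a new user record is created
    rec["sort"] = "basic_node"           # 2. marked as a basic node
    rec["lockout"] = True                # 3. locked out
    return rec

users = {}
on_new_node_seen(users, "N0CALL-2")
print(users["N0CALL-2"])                 # {'sort': 'basic_node', 'lockout': True}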

The reason it is locked out is to prevent any node connecting to any other 
node without asking the sysop of that other node first. What it isn't 
is a good method of distinguishing a 'good' node from a 'bad' one 
(whatever that is decided to mean - this week).

There are a few other things that happen to this user record as various 
other PC messages appear from that node:

A PC41 will update any other information for that node (qth, name, qra etc).
A PC92/3 record of any sort will set that node as a Spider node (at the 
moment I don't differentiate between CC-Cluster and DXSpider nodes, as they 
speak (roughly) the same PC92/3 and PC61 protocol). It does not unlock it.

If a sysop grants another node's request to connect, they will then 
issue a set/spider (or set/node, set/arcluster etc), which will 
_*automatically*_ unlock that node.
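
Putting the last two paragraphs together in a hypothetical Python sketch 
(invented field and function names, not the actual Perl): the PC handlers 
only enrich the record, while set/spider clears the lockout as a side effect:

def on_pc41(rec, field, value):
    # PC41: fill in ancillary details (qth, name, qra, ...)
    rec[field] = value                   # lockout is left untouched

def on_pc92(rec):
    # PC92/3: remember that the sender speaks the Spider protocol
    rec["sort"] = "spider_node"          # still does not unlock it

def cmd_set_spider(rec):
    # sysop command: set the node type...
    rec["sort"] = "spider_node"
    rec["lockout"] = False               # ...and, as a side effect, unlock it

rec = {"sort": "basic_node", "lockout": True}
on_pc41(rec, "qth", "Somewhere")
on_pc92(rec)
print(rec["lockout"])                    # True - still locked out
cmd_set_spider(rec)
print(rec["lockout"])                    # False - unlocked automatically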

By doing all these set/unlocks globally, we are unconditionally allowing 
access by those unlocked nodes to your node without your _*explicit*_ 
consent.  Which probably _*isn't*_ what is wanted.

But it gets worse: I cannot see a method of distinguishing whether an 
incoming spot (from afar, or from a non-DXSpider node) is forged or not. I 
remain unconvinced that registration by itself, whether flagged by some 
field (for instance, in a PC92 K), helps anything much. Registration - to 
get closer to a guarantee that the user/IP address combo in the PC61/93 
is truly genuine - will likely require something more than simply a 
password. An "ideal" solution might be available through SSL client 
certificates, but that is a whole lump of infrastructure that would be 
even more difficult to administer (and to get logging programs to use) than 
passwords.

I observe that last weekend there were just 20-odd spots that could be 
regarded as 'rude'. I suspect that many more spots were sent that were 
wrong in some way, 'busted' or even downright forged (for whatever 
reason). I wonder whether now would be a good time to put this all to 
sleep for a while and draw breath, so that when we do something extra 
it will make a tangible difference.  As it stands at the moment, given 
that there is a large set of non-DXSpider sources of spots, we seem to 
be rushing headlong into cutting off our noses to spite our faces.

Kin is doing an excellent job in motivating nodes running the master 
branch of the code to upgrade to mojo (and keep it up to date in the 
future). I am very grateful that he has taken the time (and quite a lot 
of trouble) to do this. You could all help by encouraging your DXSpider 
neighbour nodes to do so as well. I am also available to help you to 
upgrade, via ssh, if it is all too difficult or something goes wrong. 
Just email me privately.

This is my current work plan:

1. Add the missing load/bad... commands to allow the creation of global 
(as well as local) baddx, badword and badnode lists (roughly sketched below).
2. Establish a formal way of submitting a 'bad...' entry to the central 
repository in such a way that it does not become a burden for that 
repository's owner to keep it all up to date.
3. Establish a way for all sysops to see the lists of all of these 
global entries.
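
Purely as an illustration of item 1 - hypothetical Python with invented 
file names, not how DXSpider does or will do it - the idea is simply that 
a centrally distributed list gets merged with the sysop's own local one:

def load_bad_list(path):
    """One entry per line; '#' starts a comment; a missing file means an empty list."""
    entries = set()
    try:
        with open(path) as f:
            for line in f:
                word = line.split("#", 1)[0].strip().upper()
                if word:
                    entries.add(word)
    except FileNotFoundError:
        pass
    return entries

# the global list arrives from the central repository, the local one is the sysop's own
badwords = load_bad_list("badword.global") | load_bad_list("badword.local")

def comment_is_bad(comment):
    return any(word in badwords for word in comment.upper().split())

print(comment_is_bad("perfectly ordinary comment"))   # False, unless listed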

Question: given that the work plan above will be done and tested, is now 
not the time to check whether what has been done is enough? The new badword 
and badip systems seem to have taken a very large chunk out of the overtly 
abusive behaviour, such that I believe we are now (and especially after 
the work plan is done) well beyond the point of diminishing returns.

Another question: I would like some suggestions as to how to check 
(preferably automatically) that a new user requesting registration is who 
they say they are and, should such a method involve the use of email 
addresses and/or database lookups (such as qrz.com), how that can be done 
without having to register nodes as 'data controllers' under GDPR et al.

Dirk G1TLH
