[Dxspider-support] Fwd: Max Spot per Minute (how to avoid flooding)

Kin ea3cv at cronux.net
Sun Jan 26 13:05:17 GMT 2025


Hi,
 
I believe there are third-party developments that have chosen the easiest path or have not considered the impact of their software on clusters.
Sending every configuration command in every session is very straightforward and costs the sender nothing when only a single client is considered. It would be advisable for these developers to consult or ask about the best approach. I believe the answer is quite simple: issuing a query for the current state of the filters (for example) would avoid the repetition.
 
Another possibility would be to create a new command in Spider that provides a summary of significant data that other applications might need. For instance: QRA, QTH, RBN, filters, etc. This would simplify the need to repeatedly send the same commands.
 
Another recurring issue is when a user is connected to three nodes and attempts to connect to a fourth node. Since their software—and the user themselves—does not register the information that they already have three connections, a loop begins with hundreds of unsuccessful connection attempts, sometimes at a rate of more than one attempt per second. This should be addressed and prevented.
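The client-side fix is simple in principle: the program should know how many node sessions it already holds and back off between failed attempts. A minimal sketch of such a guard (Python, purely illustrative; the class and method names are hypothetical and not part of any real cluster client):

```python
import time

class ConnectionGuard:
    """Client-side guard: refuse a fourth node session and apply an
    exponential backoff between retries to a node that keeps failing."""

    def __init__(self, max_nodes=3, base_delay=5.0, max_delay=300.0):
        self.active = set()          # nodes we are currently connected to
        self.max_nodes = max_nodes
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.failures = {}           # node -> (consecutive fails, last attempt)

    def may_connect(self, node, now=None):
        now = time.monotonic() if now is None else now
        if len(self.active) >= self.max_nodes:
            return False             # already at the limit: do not even try
        fails, last = self.failures.get(node, (0, 0.0))
        if fails == 0:
            return True
        delay = min(self.base_delay * (2 ** (fails - 1)), self.max_delay)
        return (now - last) >= delay

    def connected(self, node):
        self.active.add(node)
        self.failures.pop(node, None)   # success clears the backoff

    def failed(self, node, now=None):
        now = time.monotonic() if now is None else now
        fails, _ = self.failures.get(node, (0, 0.0))
        self.failures[node] = (fails + 1, now)
```

With this in place the client simply never issues the fourth connect, and a failing node is retried at 5, 10, 20... seconds instead of more than once per second.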
 
Regarding limiting the number of commands per unit of time with a penalty mechanism like the one you proposed, I think it’s an excellent idea. Even if the received command is invalid, it’s common to see attempts to access the system using all sorts of commands more suitable for an OS.
The 10-second delay is more than sufficient for this semi-blocking mechanism.
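The mechanism described above can be sketched as a sliding-window limiter. This is illustrative Python only (the limit, window, and penalty values are placeholders, not DXspider defaults):

```python
import time
from collections import deque

class CommandThrottle:
    """Sliding-window limiter: more than `limit` commands inside `window`
    seconds triggers a short 'semi-blocking' penalty (e.g. 10 s)."""

    def __init__(self, limit=16, window=60.0, penalty=10.0):
        self.limit = limit
        self.window = window
        self.penalty = penalty
        self.stamps = deque()        # timestamps of recent commands
        self.blocked_until = 0.0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if now < self.blocked_until:
            return False             # still serving the penalty
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()    # drop commands that left the window
        if len(self.stamps) >= self.limit:
            self.stamps.clear()      # penalty resets the count
            self.blocked_until = now + self.penalty
            return False
        self.stamps.append(now)
        return True
```

Invalid commands would be counted exactly like valid ones, so the "trying OS commands" probing mentioned above also trips the limiter.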
 
For those who like having their skimmer connected to the feed of spots, whether due to ignorance, testing, or simply boredom, this should be restricted since the RBN network exists precisely for this purpose.
 
For human spots, the worst-case scenario might involve sending four spots in one minute (CW), but I don’t think many operators can sustain such a pace. Only in FT4 could a similar rate be maintained.
 
Regarding Andrea’s comment about using the subterfuge of including HH:MM in the comments field to evade duplicate detection by a sysop, I find this behaviour inappropriate for someone using a network that does not depend solely on them.
A few days ago, I asked who could provide me with information about the node N9SIN-3. This node does not appear to be directly connected to any Spider, which means I have not yet been able to determine who they are or who their partners are. Personally, I believe they should be blocked for their behaviour. If it is not possible to block them directly, and their partners fail to act, I would also block those partners. The end does not justify the means. However, if it has been an error, it would suffice for them to resolve it.
 
I would also like to highlight the large number of nodes that remain poorly configured and do nothing to reduce the generated noise.
Perhaps it would be necessary to introduce a code change in new builds to prevent the sending of PC92D as a result of a failed connection attempt. If the TCP/IP session is not established, there should be no application-level notification. The same applies to cases of PC92A or C.
 
DXVars.pm could be modified to include a table of partner nodes, as well as the type of node. This may seem unnecessary, but it would allow for a centralised location to store this data, avoiding the current situation where some sysops add other nodes as partners but do not include them in their crontab or maintain a record of these partners. As a result, no connection file is created, and there is no way to determine who is authorised to access a node.
By including partners in DXVars.pm, it would be possible to enforce, with every restart, the execution of set/<node_type> <partner>, set/register <partner>, set/password, etc. This would resolve some current issues and maintain a database of our links.
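To make the idea concrete, a partner table of that kind could drive the generation of the set/ commands on every restart. The sketch below is Python for illustration only (in practice this would live in DXVars.pm as Perl); the callsigns, node types, and passwords are invented:

```python
# Hypothetical partner table, mirroring what a sysop might keep in DXVars.pm.
PARTNERS = {
    "EA4URE-2": {"type": "spider", "password": "s3cret"},
    "GB7DJK":   {"type": "spider", "password": None},
}

def startup_commands(partners):
    """Emit the set/... commands to run on every restart, as suggested above."""
    cmds = []
    for call, info in sorted(partners.items()):
        cmds.append(f"set/{info['type']} {call}")
        cmds.append(f"set/register {call}")
        if info["password"]:
            cmds.append(f"set/password {call} {info['password']}")
    return cmds
```

Because the table is the single source of truth, a partner that is missing from it simply never gets registered, which is exactly the record-keeping the current ad-hoc approach lacks.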
 
There is also the issue of callsigns that were once nodes but are no longer so. The variable ‘sort’ is no longer ‘U’, and when that callsign is used, it incorrectly appears as a node. We sysops often forget to redefine them as users, which causes this incorrect information to be propagated across the network. However, I am beginning to suspect that some deliberately retain this attribute, even though they know it is incorrect. I am unsure whether this is done to make it easier to establish more than three sessions on the network.
 
Kin EA3CV
 
 
From: Dxspider-support <dxspider-support-bounces at tobit.co.uk> On behalf of Dirk Koopman via Dxspider-support
Sent: Friday, 24 January 2025 14:38
To: dxspider-support at tobit.co.uk
CC: Dirk Koopman <djk at tobit.co.uk>
Subject: Re: [Dxspider-support] Fwd: Max Spot per Minute (how to avoid flooding)
 
We seem to be starting to lose the "battle" between nodes <-> users on client programs issuing data (for whatever reason).

The piece of code shown below was introduced in March 2023, together with the following comment underneath:

   These default values are set generously deliberately to allow certain user
   programs to get with the program and reduce the number of cmds that they
   issue on connection down to something reasonable. For instance, I cannot
   see why things like name, qth, lat/long/QRA (amongst several other sticky
   user attributes that only need to be entered once) are sent on every login.

It is clear to me that the situation has got worse and the time to tighten the defaults has arrived. In addition, I will add a general re-login delay so that programs cannot instantly reconnect and just carry on. Maybe with a second safeguard of recording the IP address rather than the callsign, with a backoff timer after Z "fast" attempts to re-login. Or something like that (maybe a timed local IP address ban?).

I will happily accept suggestions for "better" values for X = 16 and Y = 9 below. As well as other ways of discouraging this sort of behaviour. 

I fail to understand the point of spotting an entire FTx channel's decoded callsigns. You haven't worked them, your program just heard them, but you're probably drinking tea and working someone else OR you've simply left the computer on whilst going out for the day. This, incidentally, is why I won't, ever, gate out raw skimmer spots to users. Speaking of which: the FTx skimmer network will likely do a better job than your random user "skimmer" so why not just connect to that instead!

This person appears to have taken it upon himself (gender deliberately chosen) to become an FTx skimmer that gates his data out into the general spot pool. But he could not do this unless the CLIENT SOFTWARE he is using provides that facility. So the obvious solution to this is to try to identify the author(s) of the client software and persuade them not to allow this sort of thing to occur. Experience shows that authors are reluctant to change the behaviour of their creations (I can understand that) and simply ignore requests for changes from "outside" their user communities. It probably takes at least 15 years of full-time professional programming before one truly believes that all software has bugs, or undesirable behaviours discovered by users that require changes. Unfortunately many authors are hobby programmers and resistant to external pressure for change, probably because their software is written in a way that makes it too difficult to change. I remember that :-)

As I have been writing this, I am starting to get a bit annoyed by the thoughtlessness of some authors and users. So I will implement a linearly increasing IP address ban time, together with a message on login (with a fixed delay of, say, 10 secs before forced disconnect) saying something like "You are sending too many commands too quickly, you are banned from reconnecting until <date/time>". Obviously if they reconnect and do it again (within some interval) they will have more time added, and "good behaviour" over a period of time will reduce their penalty ban time.
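The ban-with-decay idea above could look something like the following sketch (Python, illustrative only; the one-minute step and one-hour decay are placeholder values, not proposed defaults):

```python
import time

class LoginBanList:
    """Per-IP ban that grows linearly with each offence and is forgiven
    by good behaviour. All constants are placeholders for discussion."""

    STEP = 60.0      # each offence adds one minute of ban time
    DECAY = 3600.0   # each hour of good behaviour forgives one offence

    def __init__(self):
        self.offences = {}    # ip -> (offence count, time of last offence)

    def punish(self, ip, now=None):
        """Record an offence; return seconds until reconnect is allowed."""
        now = time.monotonic() if now is None else now
        count, last = self.offences.get(ip, (0, now))
        # Forgive one offence per DECAY seconds of quiet, then add this one.
        count = max(0, count - int((now - last) // self.DECAY)) + 1
        self.offences[ip] = (count, now)
        return count * self.STEP

    def banned_until(self, ip):
        count, last = self.offences.get(ip, (0, 0.0))
        return last + count * self.STEP
</n>```

A repeat offender's ban grows 60 s, 120 s, 180 s..., while an hour of quiet knocks one step back off, which matches the "good behaviour reduces the penalty" requirement.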

Your thoughts and suggestions for default values for these times / intervals will be gratefully received.

73 Dirk G1TLH
 
 