[Dxspider-support] Queues and classes against flood attack
Kin
ea3cv at cronux.net
Tue Feb 18 16:13:38 GMT 2025
Hi,
Dirk is doing his best to curb these abuses without altering the protocol. This is to be welcomed, but it is costly work that, unfortunately, will not have the desired impact until the network evolves naturally, as you have said, Keith.
Andrea is showing us clear evidence of what a real attack on the network could look like, not what we had this weekend, which was nothing.
I propose to Dirk, if possible, that along with the information currently sent for each node, the variables $main::reqreg and $main::passwdrq also be sent, so that the rest of the sysops know whether a node should be considered trusted or not.
This would allow us, together with the authentication of the links, to decide what relationship we want to have with the other nodes. What do you think?
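For illustration only, this is the kind of local trust policy such a change would enable. The field names and the policy itself are hypothetical sketches, not DXSpider code; DXSpider does not currently broadcast these settings.

```python
# Hypothetical sketch: if nodes advertised their registration and
# password settings (akin to $main::reqreg and $main::passwdrq),
# a sysop could apply a local policy like this. The dict keys and
# the three-level classification are illustrative assumptions.

def trust_level(node: dict) -> str:
    reqreg = node.get("reqreg", 0)        # node requires user registration
    passwdreq = node.get("passwdreq", 0)  # node requires user passwords
    if reqreg and passwdreq:
        return "trusted"
    if reqreg or passwdreq:
        return "partial"
    return "untrusted"

print(trust_level({"reqreg": 1, "passwdreq": 1}))  # trusted
print(trust_level({"reqreg": 1, "passwdreq": 0}))  # partial
```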
Kin EA3CV
From: Dxspider-support <dxspider-support-bounces at tobit.co.uk> On behalf of Keith, G6NHU via Dxspider-support
Sent: Tuesday, 18 February 2025 10:40
To: The DXSpider Support list <dxspider-support at tobit.co.uk>
CC: Keith, G6NHU <g6nhu at me.com>
Subject: Re: [Dxspider-support] Queues and classes against flood attack
I know it may sound harsh but actually, I think we should discard spots that fail the sender verify checks.
The network needs to move forward and sender verification is a very good start. If people are using software which is no longer supported and therefore a security hole then they have two options: Stay as they are and lose their users because their spots are being rejected or change the software they’re using to something that does support sender verification.
If software is still actively being maintained then the developers should be adding features to enhance the security of the network.
Backward compatibility can’t last forever.
73 Keith G6NHU
On 18 Feb 2025 at 08:23 +0000, IZ2LSC via Dxspider-support <dxspider-support at tobit.co.uk>, wrote:
Hi,
I think we cannot simply discard spots that fail the senderverify check, because of the very different flavours of cluster software on the net.
And it's not just a matter of good or fake spots. The other important topic we have to deal with is the flooding itself. Consider that a single IP PC61 packet (with a standard MTU of 1500 bytes) can contain at least 20 spots (depending on which information is in the spot itself), or more.
So:
a rate of 10 KByte/s (80 Kbit/s) generates 133 spots per second (10 / 1.5 × 20)
a rate of 125 KByte/s (1 Mbit/s) generates 1666 spots per second
Considering the internet speeds available nowadays, 1 Mbit/s is nothing in terms of bandwidth, but the effect on the cluster is huge.
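As a sanity check on the arithmetic above, here is a small script using the same assumptions (1500-byte MTU, roughly 20 spots per full packet):

```python
# Estimate how many spots per second a given bandwidth can inject,
# assuming (as in the example above) a 1500-byte MTU and ~20 PC61
# spots per full packet.

MTU_BYTES = 1500
SPOTS_PER_PACKET = 20

def spots_per_second(bytes_per_second: float) -> float:
    """Packets per second times spots per packet."""
    packets_per_second = bytes_per_second / MTU_BYTES
    return packets_per_second * SPOTS_PER_PACKET

print(int(spots_per_second(10_000)))   # 10 KByte/s  -> 133 spots/s
print(int(spots_per_second(125_000)))  # 125 KByte/s -> 1666 spots/s
```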
At this point I would suggest a different approach.
We need to classify spots in different classes. Let's say gold and silver class.
Into the gold class go all the spots that are verified; the rest go into silver.
In case of a flooding attack we have to drop spots from the silver class first.
Usually queues are used to achieve this: queues with different lengths and different serving speeds.
Example:
Gold queue is 100 spots long and we serve 5 spots per second.
Silver queue is 50 spots long and we serve 2 spots per second.
When queues are full, spots are discarded.
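The scheme above can be sketched in a few lines. This is an illustrative toy, not DXSpider code; the class names are mine, and the lengths and serving rates are simply the ones from the example.

```python
# Illustrative sketch of the two-class queue idea: verified spots go
# into a bounded "gold" queue, the rest into a smaller "silver" queue.
# Each queue is drained at its own rate; spots arriving at a full
# queue are discarded. Not DXSpider code.

from collections import deque

class SpotQueue:
    def __init__(self, maxlen: int, rate_per_sec: float):
        self.q = deque()
        self.maxlen = maxlen
        self.rate = rate_per_sec  # spots served per second
        self.dropped = 0

    def enqueue(self, spot) -> bool:
        if len(self.q) >= self.maxlen:
            self.dropped += 1     # queue full: discard the spot
            return False
        self.q.append(spot)
        return True

    def drain(self, seconds: float) -> list:
        """Serve up to rate * seconds spots in FIFO order."""
        budget = int(self.rate * seconds)
        served = []
        while budget > 0 and self.q:
            served.append(self.q.popleft())
            budget -= 1
        return served

# Parameters from the example above:
gold = SpotQueue(maxlen=100, rate_per_sec=5)
silver = SpotQueue(maxlen=50, rate_per_sec=2)

def route(spot, verified: bool):
    (gold if verified else silver).enqueue(spot)
```

Under a flood of unverified spots, the silver queue fills and starts dropping while gold traffic keeps flowing at its full serving rate, which is exactly the behaviour the two-class scheme is after.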
I know that this is the theory, and implementation is not easy, but considering what happened last weekend, I cannot imagine another solution.
73s
Andrea iz2lsc
_______________________________________________
Dxspider-support mailing list
Dxspider-support at tobit.co.uk
https://mailman.tobit.co.uk/mailman/listinfo/dxspider-support