[Dxspider-support] Three months limit

Dirk Koopman djk at tobit.co.uk
Wed Mar 22 19:45:43 GMT 2006


On Wed, 2006-03-22 at 11:18 -0800, Lee Sawkins wrote:
> With performance like this with Dx Spider software, it is very
> surprising to me that Dirk is looking into using a database for dx
> spots.

"looking into it" is about as far as it has gone so far (although them's
that can read the code can try what is there for themselves). In all the
tests I have done (so far), there is no significant difference in speed
between the native perl code and the database version.
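
For those who want to experiment, the lookup side amounts to something
like the sketch below, using DBI with DBD::SQLite. The table and column
names (and the test call) are purely illustrative, not the actual
DXSpider schema:

  #!/usr/bin/perl
  # Sketch only: fetch the most recent spots for a call from SQLite.
  # Table/column names are made up for illustration.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect("dbi:SQLite:dbname=spots.db", "", "",
                         { RaiseError => 1, AutoCommit => 1 });

  my $sth = $dbh->prepare(q{
      SELECT time, spotter, freq, comment
        FROM spot
       WHERE spotcall = ?
       ORDER BY time DESC
       LIMIT 25
  });
  $sth->execute('G1TLH');
  while (my ($t, $by, $freq, $cmt) = $sth->fetchrow_array) {
      $cmt = '' unless defined $cmt;
      printf "%s  %-10s %10.1f  %s\n", scalar gmtime($t), $by, $freq, $cmt;
  }
  $dbh->disconnect;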

>  
> AR Cluster uses a database for dx spots.  This database must be
> trimmed on a monthly basis to 30 days worth of data or else the
> cluster takes a huge performance hit.  After the trimming, the cluster
> must be shutdown, then the trimmed database must be compacted, the old
> one moved out of the way and then the cluster restarted.

I have tried SQLite and MySQL. I don't believe that these databases, at
least on the reasonably modern hardware I have, need this sort of thing
doing to them. When I last looked I had 5,000,000 odd spots in the
database. I was not intending to provide a "Jet Engine" (an oxymoron if
ever I heard one) interface!
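
Provided the columns being searched are indexed, a "latest spots" lookup
never has to scan the whole table, which is why a few million rows need
no monthly trim-and-compact ritual. Again a purely illustrative sketch,
on the same made-up schema as above:

  #!/usr/bin/perl
  # Illustration only: indexes on the searched columns are what keep
  # lookups fast as the spot table grows.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect("dbi:SQLite:dbname=spots.db", "", "",
                         { RaiseError => 1 });
  $dbh->do(q{CREATE INDEX IF NOT EXISTS spot_time_idx ON spot (time)});
  $dbh->do(q{CREATE INDEX IF NOT EXISTS spot_call_idx ON spot (spotcall, time)});
  $dbh->disconnect;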

>  
> Some ARC sysops do not do this.  You can easily tell which ones.   You
> connect to their cluster, do a sh/dx and wait 60 seconds for a reply.
> Most users grow impatient and figure the cluster has not received
> their request, so they send it a couple of times more.  Then the
> cluster is completely locked up for 3 minutes, and no one else can do
> anything.

Which is what will happen on DXSpider as well if you set the limit too
high. The 100 days, I found by experiment, was a reasonable compromise.

Can I also say that 'help sh/dx' will give users a steer on how to look
further back if they are so minded (although they will need to look in
100 day chunks, or whatever you have set the maximum to).

Dirk




