[Dxspider-support] User database/info seems to vanish
Dirk Koopman
djk at tobit.co.uk
Fri Nov 23 01:53:34 CET 2018
I'm struggling to understand why (one presumes) DXSpider is consuming so
much disk space.
On GB7DJK I have 20 years of data
spots 3.4G
logs 5.5G
debug 1.3G
debug is only that big because I have several users and node links; each
day's file runs 120-150MB and up to 11 files are kept.
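A quick way to see which directory is the culprit on any node is du. This is only a sketch run against a throwaway temp directory, since data paths differ between installations; point it at your real data directory instead:

```shell
# Sketch only: the directory names below are assumptions about a typical
# DXSpider data layout, not taken from this thread.
demo=$(mktemp -d)
mkdir -p "$demo/spots" "$demo/log" "$demo/debug"
# Fake a large-ish file so one directory dominates, as spots/logs do here.
head -c 65536 /dev/zero > "$demo/spots/2018"

# Per-directory totals in KB, largest first; run the same command against
# your real data directory to see where the space actually goes.
du -sk "$demo"/* | sort -rn
```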
Is there any evidence in the debug files of restarts? (grep orft debug/*)
This is what a "normal" restart or stoppage from any signal except KILL
should look like:
If you find that string in one of your debug files, what happens before
that? Specifically before the actual start string - something like:
1542243714^DXSpider V1.57, build 135 (git: c60ec0c[r]) started
1542243714^Copyright (c) 1998-2018 Dirk Koopman G1TLH
"orft we jolly well go" is the end of initialisation and "bye bye
everyone - bye bye" indicates the end of a controlled shutdown (either
by command or a TERM or other catchable signal). [ed: (cough) older
people in the UK may recognise these as catch phrases from the TV and radio]
They are there as erm.. "unusual" strings that one can grep for.
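To make that concrete, here is a minimal grep workflow on a throwaway fake debug file. The file name, path, and contents below are invented for illustration; point the same commands at wherever your installation keeps its debug files:

```shell
# Sketch: create a fake debug file containing the markers described above.
mkdir -p /tmp/debug_demo
cat > /tmp/debug_demo/327.dat <<'EOF'
1542243714^DXSpider V1.57, build 135 (git: c60ec0c[r]) started
1542243714^Copyright (c) 1998-2018 Dirk Koopman G1TLH
1542243715^orft we jolly well go
1542250000^bye bye everyone - bye bye
EOF

# Which files record a (re)start completing initialisation?
grep -l 'orft we jolly well go' /tmp/debug_demo/*

# And what happened just before it - a controlled shutdown, or a crash?
grep -B2 'orft we jolly well go' /tmp/debug_demo/*
```

An uncontrolled restart would show the start string without a preceding "bye bye everyone - bye bye".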
73 Dirk G1TLH
On 23/11/2018 00:14, Michael Carper, Ph.D. wrote:
> I just doubled the disk space. It's now got (55% free):
>
> Filesystem      Size  Used  Avail  Use%  Mounted on
> /dev/sda1        40G   17G    21G   45%  /
> tmpfs           1.8G     0   1.8G    0%  /dev/shm
>
> Mike, WA9PIE
>
> On Thu, Nov 22, 2018 at 2:57 PM Michael Carper, Ph.D. <mike at wa9pie.net
> <mailto:mike at wa9pie.net>> wrote:
>
> Dirk,
>
> The specs on this machine are as follows:
>
> Google Cloud Platform
> Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
> ********
> Disk Usage (GB)
> Filesystem      Size  Used  Avail  Use%  Mounted on
> /dev/sda1        20G   18G   1.5G   93%  /
> tmpfs           1.8G     0   1.8G    0%  /dev/shm
> ********
> Memory Usage (MB)
>                     total   used   free  shared  buffers  cached
> Mem:                 3707   3587    120       0      159    3045
> -/+ buffers/cache:            382   3324
> Swap:                   0      0      0
> ********
>
> ...maybe it's low on disk space?
>
> Mike, WA9PIE
>
> On Fri, Nov 16, 2018 at 6:38 AM Dirk Koopman via Dxspider-support
> <dxspider-support at tobit.co.uk
> <mailto:dxspider-support at tobit.co.uk>> wrote:
>
> What machine is this running on and what are you using for
> disk space?
>
> Dirk
>
> On 15/11/2018 07:36, Michael Carper, Ph.D. via
> Dxspider-support wrote:
>> Greetings all.
>>
>> Recently, it seems my node (WA9PIE-2) has lost its user
>> database. The node has apparently restarted for no obvious reason.
>>
>> I say the node has lost its user database because my own
>> login is greeted with the following:
>>
>> Hello WA9PIE, this is WA9PIE-2 in Prosper, TX
>> running DXSpider V1.55 build 0.181
>> ===
>> Questions about the WA9PIE-2 Global DX Spotting Network:
>> email: mike at wa9pie.net <mailto:mike at wa9pie.net>
>> ===
>> Cluster: 396 nodes, 344 local / 3207 total users Max users
>> 6269 Uptime 3 22:26
>> Please enter your name, set/name <your name>
>> Please enter your QTH, set/qth <your qth>
>> Please enter your location with set/location or set/qra
>> Please enter your Home Node, set/homenode <your home DX Cluster>
>> WA9PIE de WA9PIE-2 15-Nov-2018 0727Z dxspider >
>>
>> I've logged into my own node many times and have set all
>> these values. Why would it lose them? Why would it restart?
>>
>> Has anyone else seen this happen? Does anyone know what I
>> can do to avoid it and get the node back to "normal?"
>>
>> Mike, WA9PIE
>>
>> _______________________________________________
>> Dxspider-support mailing list
>> Dxspider-support at tobit.co.uk <mailto:Dxspider-support at tobit.co.uk>
>> https://mailman.tobit.co.uk/mailman/listinfo/dxspider-support
>