[Dxspider-support] Out of Memory - Linux

Dirk Koopman djk at tobit.co.uk
Sun Sep 16 14:39:25 CEST 2018


Andy

I am a bit concerned about this, because sh/dx (or any of the 
show/<something that is stored in files, like spots> cmds) only reads 
in one file at a time, filters it one line at a time, and only keeps 
at most the number of spots asked for (default: 10 spots / 25 lines). 
It was deliberately designed to run on hardware that was secondhand 
and/or old in 1997, so it should work (and has worked) on machines 
with only 10s or 100s of MB of RAM.

Also, by default, it will only search the last 3 months' worth of spots.
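
In other words, the search goes something like this (a much simplified 
sketch only, not the actual DXSpider code - newest_first_spot_files() 
is a made-up stand-in for however the daily spot files are actually 
located):

# Walk the daily spot files newest first, filter them line by line,
# and stop as soon as enough matching spots have been collected.
# Only one line is held at a time, and only $wanted matching lines
# are ever kept, so memory use stays small however big the spot
# directory gets.
my $wanted = 10;              # default number of spots asked for
my @matches;

FILE: for my $file (newest_first_spot_files(90)) {   # last ~3 months
    open my $fh, '<', $file or next FILE;
    while (my $line = <$fh>) {
        push @matches, $line if $line =~ /\bG1TLH\b/;  # call searched for
        last FILE if @matches >= $wanted;
    }
    close $fh;
}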

My spot directory is 3.3GB and my RAM usage after 3-odd days of 
running is ~146KB (virtual) - even after having searched for 
non-existent callsigns like G1TLH :-).

Finally, I would expect perl programs to exit fairly gracefully 
(actually... just exit) on running out of user memory. In any case, it 
isn't something the program(mer) has any control over: there are no 
"out of ram" hooks that I could use to do something more elegant.

This suggestion may sound off the wall and irrelevant to spots, but 
would you try blowing away your user file and then recreating it? Stop 
the node, then:

  cd /spider/data
  rm data.v3
  perl user_asc

then restart the node. Also, if you are using an SD card for your 
filesystem, consider replacing it - it may be worn out. Going into 
swap on an old SD card is probably erm... sub-optimal.

Dirk


On 16/09/18 10:47, Andy Cook, G4PIQ via Dxspider-support wrote:
>
> I’ve been running a node for contests (GB7DXM) where it receives a 
> feed from the RBN network – so lots of spots over a period. It’s been 
> running on small hardware, a Raspberry Pi 3 and more recently an AWS 
> t2.micro instance – both with only 1 GB of RAM. Currently running 
> v1.55 Build 0.197.
>
> Once there are plenty of spots in the database (the spots directory is 
> currently 578 MB – but the crash still happened when it was smaller), 
> if you run a sh/dx on a call which is nowhere in the database, or a 
> long way back, the cluster.pl process quickly consumes all free memory 
> and after a few seconds the whole host freezes and needs a hard reboot.
>
> Now – I know I could (and will) just try this with more memory – but 
> the full crash of the host is a bit nasty. Have I missed some memory 
> management controls somewhere? Forgive my Linux skills – they are basic!
>
> 73,
>
> Andy, G4PIQ
>
>
>
> _______________________________________________
> Dxspider-support mailing list
> Dxspider-support at tobit.co.uk
> https://mailman.tobit.co.uk/mailman/listinfo/dxspider-support
