PowerPool: A flexible mining server

Features

  • Lightweight, asynchronous, gevent-based internals.
  • Built-in HTTP statistics/monitoring server.
  • Flexible statistics collection engine.
  • Multiple coinserver (RPC server) support for redundancy, with coinserver prioritization.
  • Redis-driven share logging allows multiple servers to log shares and statistics to a central source for easy scaling out.
  • SHA256, X11, scrypt, and scrypt-n support.
  • Support for merge mining multiple auxiliary (merge-mined) blockchains.
  • Modular architecture makes customization simple(r).

Uses Redis to log shares and statistics for miners. Work generation and (bit|lite|alt)coin data structure serialization are performed by Cryptokit; PowerPool connects to bitcoind using getblocktemplate (GBT) for work generation (or getauxblock for merged work). Currently only Python 2.7 is supported.

Built to power the SimpleMulti mining pool.

Getting Setup

The only external service PowerPool relies on is Redis.

sudo apt-get install redis-server

Set up a virtualenv and install:

mkvirtualenv pp  # if you've got virtualenvwrapper...
# Install all of PowerPool's dependencies
pip install -r requirements.txt
# Install powerpool
pip install -e .
# Install the hashing algorithm modules
pip install vtc_scrypt  # for scryptn support
pip install drk_hash  # for x11 support
pip install ltc_scrypt  # for scrypt support
pip install git+https://github.com/BlueDragon747/Blakecoin_Python_POW_Module.git@e3fb2a5d4ea5486f52f9568ffda132bb69ed8772#egg=blake_hash

Now copy config.yml.example to config.yml. Fill out all required fields and you should be good to go for testing.

pp config.yml

And now your stratum server is running. Point a miner at it on localhost:3333 (or, more specifically, stratum+tcp://localhost:3333) and do some mining. View server health on the monitor port at http://localhost:3855. Various events will be logged into RabbitMQ to be picked up by a Celery worker. See SimpleCoin for a reference implementation of a Celery task handler.

Setting up push block notification

To check for new blocks, PowerPool defaults to polling each of the coinservers you configure. It simply runs the RPC call getblockcount five times per second (configurable) to see if the block height has changed. If it has, it runs getblocktemplate to grab the new info.
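The default poll loop can be sketched roughly like this. This is a simplified illustration, not PowerPool's actual implementation; `get_height` stands in for the coinserver's getblockcount RPC call, and `max_polls` exists only to make the sketch easy to exercise:

```python
import itertools
import time


def poll_blocks(get_height, on_new_block, interval=0.2, max_polls=None):
    """Simplified sketch of the default poll loop: call get_height (the
    coinserver's getblockcount) every `interval` seconds (0.2s = 5x/sec)
    and fire on_new_block -- which would trigger getblocktemplate --
    whenever the reported height changes."""
    last = get_height()
    polls = itertools.count() if max_polls is None else range(max_polls)
    for _ in polls:
        time.sleep(interval)
        height = get_height()
        if height != last:
            last = height
            on_new_block(height)
```

The push block setup described below replaces this loop's ~100 ms average detection delay with an event-driven notification.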

Since polling creates a ~100 ms delay (on average) for detecting new blocks, one optimization is to configure the coinservers to push PowerPool a notification when they accept a new block. Since this reduces the delay to under 1 ms, you'll end up with fewer orphans. The impact is more pronounced with currencies that have shorter block times.

Although this is an improvement, it's worth mentioning that it is pretty minor. We're talking about shaving off ~100 ms, which should reduce orphan percentages by roughly 0.01%-0.1%, depending on block times. Miners often connect with far more latency than this.

How push block works

Standard Bitcoin/Litecoin based coinservers have a built in config option to allow executing a script right after a new block is discovered. We want to run a script that notifies our PowerPool process to check for a new block.

To accomplish this PowerPool has built in support for receiving a UDP datagram on its monitor port. The basic system flow looks like this:

Coinserver -> Learns of new block
Coinserver -> Executes blocknotify script (Alertblock)
Alertblock -> Parses the passed-in .push file
Alertblock -> Sends a UDP datagram based on that .push file
PowerPool -> Receives UDP datagram
PowerPool -> Runs getblocktemplate on the Coinserver

Note: Using a push block script to deliver a UDP datagram to PowerPool can be accomplished in many different ways. We're going to walk through how we've set it up on our own servers; if your server configuration/architecture differs much from ours, you may have to adapt this guide.

Modify the coinserver’s config

This is the part that tells the coinserver what script to run when it learns of a new block.

blocknotify=/usr/bin/alertblock /home/USER/coinserver_push/vertcoin.push

You’ll want something similar to this in each coinserver’s config. Make sure to restart it after.

Alertblock script

Now that the coin server is trying to run /usr/bin/alertblock, you’ll need to make that Alertblock script.

Open your text editor of choice and save this to /usr/bin/alertblock

#!/bin/bash
# For each line of the .push file: the first two fields are the
# destination host and port, and the remaining fields are sent to that
# destination as a single UDP datagram via netcat.
cat "$1" | xargs -P 0 -d '\n' -I ARGS bash -c 'a="ARGS"; args=($a); echo "${args[@]:2}" | nc -4u -w0 -q1 ${args[@]:0:2}'
# For testing the command
#cat "$1" | xargs -P 0 -td '\n' -I ARGS bash -xc 'a="ARGS"; args=($a); echo "${args[@]:2}" | nc -4u -w0 -q1 ${args[@]:0:2}'

Block .push script

Now your Alertblock script will be looking for a /home/USER/coinserver_push/vertcoin.push file. The data in this file is interpreted by the Alertblock script. It looks at each line and tries to send a UDP packet based on the info. The .push file might contain something like this:

127.0.0.1 6855 VTC getblocktemplate signal=1 __spawn=1

Basically, this tells the Alertblock script to send a UDP datagram to 127.0.0.1 on port 6855. PowerPool will parse the datagram and run getblocktemplate for the currency VTC.

The port (6855) should be the monitor port for the stratum process you want to send the notification to. The currency code (VTC) should match one of the configured currencies in that stratum’s config.
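For reference, the Alertblock script's netcat invocation is equivalent to sending everything after the host/port fields of the .push line as a single raw UDP payload. A small Python sketch of the same thing, handy for hand-delivering a test datagram (host, port, and currency here are the example values above; match your own config):

```python
import socket


def send_push(host, port, payload):
    """Send a push block notification the same way the Alertblock
    script's netcat call does: the remainder of the .push line goes out
    as one raw UDP datagram to the monitor port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode(), (host, port))
    finally:
        sock.close()


# Equivalent of the example .push line above:
send_push("127.0.0.1", 6855, "VTC getblocktemplate signal=1 __spawn=1")
```

Because UDP is fire-and-forget, this succeeds even if nothing is listening, so check the monitoring endpoint (see below) to confirm delivery.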

If you need to push to multiple monitor ports just do something like:

127.0.0.1 6855 VTC getblocktemplate signal=1 __spawn=1
127.0.0.1 6856 VTC getblocktemplate signal=1 __spawn=1

For merge mined coins you’ll want something slightly different:

127.0.0.1 6855 DOGE _check_new_jobs signal=1 _single_exec=True __spawn=1

Powerpool config

Now we need to update PowerPool's config to disable polling; it is no longer needed, and it makes the coinserver's logs a lot harder to use. All that needs to be done is to set the poll key to False for each currency you have push block set up for.

VTC:
    poll: False
    type: powerpool.jobmanagers.MonitorNetwork
    algo: scryptn
    currency: VTC
    etc...

Confirm it is working

You'll want to double check that push block notifications are actually working as planned. The easiest way is to visit PowerPool's monitoring endpoint and look for the last_signal key. It should update each time PowerPool is notified of a block via push block.
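A scripted version of that check might look like this. It is a hypothetical helper, assuming the monitor endpoint serves JSON containing a last_signal key; adjust the URL and path to your setup (it runs under any modern Python, independent of the PowerPool process itself):

```python
import json
import urllib.request


def last_signal(monitor_url):
    """Fetch a JSON status page from PowerPool's monitor port and pull
    out the last_signal value (None if the key is absent)."""
    with urllib.request.urlopen(monitor_url) as resp:
        data = json.load(resp)
    return data.get("last_signal")
```

Run it once, trigger a push block notification, and run it again: the value should change.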

Components

Component Base

class powerpool.lib.Component[source]

Abstract base class documenting the component architecture expectations

_configure(config)[source]

Applies defaults and checks requirements of component configuration

_incr(counter, amount=1)[source]
_lookup(key)[source]
defaults = {}
dependencies = {}
gl_methods = []
key = None
name
one_min_stats = []
one_sec_stats = []
start()[source]

Called when the application is starting.

status

Should return a JSON-convertible data structure to be shown in the web interface.

stop()[source]

Called when the application is trying to exit. Should not block.

update_config(updated_config)[source]

A call performed when the configuration file gets reloaded at runtime. self.raw_config will have been pre-populated by the manager before the call is made.

Since configuration values of certain components can’t be reloaded at runtime it’s good practice to log a warning when a change is detected but can’t be implemented.

PowerPool (manager)

class powerpool.main.PowerPool(config)[source]

This is a singleton class that manages starting/stopping of the server, along with rotation schedules for all statistical counters. It takes the raw config and distributes it to each module, as well as loading dynamic modules.

It also handles logging facilities by being the central logging registry. Each module can “register” a logger with the main object, which attaches it to configured handlers.

_tick_stats()[source]

A greenlet that handles rotation of statistics

defaults = {'loggers': [{'type': 'StreamHandler', 'level': 'NOTSET'}], 'server_number': 0, 'datagram': {'host': '127.0.0.1', 'enabled': False, 'port': 6855}, 'default_component_log_level': 'INFO', 'term_timeout': 10, 'procname': 'powerpool', 'algorithms': {'scrypt': {'hashes_per_share': 65536, 'module': 'ltc_scrypt.getPoWHash'}, 'x11': {'hashes_per_share': 4294967296, 'module': 'drk_hash.getPoWHash'}, 'scryptn': {'hashes_per_share': 65536, 'module': 'vtc_scrypt.getPoWHash'}, 'sha256': {'hashes_per_share': 4294967296, 'module': 'cryptokit.sha256d'}, 'lyra2re': {'hashes_per_share': 33554432, 'module': 'lyra2re_hash.getPoWHash'}, 'blake256': {'hashes_per_share': 65536, 'module': 'blake_hash.getPoWHash'}}, 'extranonce_size': 4, 'events': {'host': '127.0.0.1', 'enabled': False, 'port': 8125}, 'extranonce_serv_size': 4}
dump_objgraph()[source]
exit(signal=None)[source]

Handle an exit request

classmethod from_raw_config(raw_config, args)[source]
gl_methods = ['_tick_stats']
handle(data, address)[source]
log_event(event)[source]
manager = None
register_logger(name)[source]
register_stat_counters(comp, min_counters, sec_counters=None)[source]

Creates and adds the stat counters to internal tracking dictionaries. These dictionaries are iterated to perform stat rotation, as well as accessed to perform stat logging

start()[source]
status

For display in the http monitor

Stratum Server

Handles spawning one or many stratum servers (each of which binds to a single port), as well as spawning corresponding agent servers. It holds data structures that allow lookup of all StratumClient objects.

class powerpool.stratum_server.StratumServer(config)[source]

A single port binding of our stratum server.

_spawn = None
add_client(client)[source]
defaults = {'minimum_manual_diff': 64, 'reporter': None, 'server_seed': 0, 'vardiff': {'spm_target': 20, 'tiers': [8, 16, 32, 64, 96, 128, 192, 256, 512], 'interval': 30, 'enabled': False}, 'agent': {'port_diff': 1111, 'enabled': False, 'timeout': 120, 'accepted_types': ['temp', 'status', 'hashrate', 'thresholds']}, 'address': '0.0.0.0', 'start_difficulty': 128, 'port': 3333, 'aliases': {}, 'idle_worker_disconnect_threshold': 3600, 'donate_key': 'donate', 'valid_address_versions': [], 'jobmanager': None, 'algo': 2345987234589723495872345L, 'idle_worker_threshold': 300, 'push_job_interval': 30}
handle(sock, address)[source]

A new connection appears on the server, so setup a new StratumClient object to manage it.

new_job(event)[source]

Gets called whenever there’s a new job generated by our jobmanager.

one_min_stats = ['stratum_connects', 'stratum_disconnects', 'agent_connects', 'agent_disconnects', 'reject_low_share_n1', 'reject_dup_share_n1', 'reject_stale_share_n1', 'acc_share_n1', 'reject_low_share_count', 'reject_dup_share_count', 'reject_stale_share_count', 'acc_share_count', 'unk_err', 'not_authed_err', 'not_subbed_err']
remove_client(client)[source]

Manages removing the StratumClient from the luts

set_user(client)[source]

Add the client (or create) appropriate worker and address trackers

start(*args, **kwargs)[source]
status

For display in the http monitor

stop(*args, **kwargs)[source]
class powerpool.stratum_server.StratumClient(sock, address, logger, manager, server, reporter, algo, config)[source]

Object representation of a single stratum connection to the server.

DUP_SHARE = 1
DUP_SHARE_ERR = 22
LOW_DIFF_ERR = 23
LOW_DIFF_SHARE = 2
STALE_SHARE = 3
STALE_SHARE_ERR = 21
VALID_SHARE = 0
_incr(*args)[source]
_push(job, flush=False, block=True)[source]

Abbreviated push update that will occur when pushing new block notifications. Micro-optimized to try and cut stale share rates as much as possible.

details

Displayed on the single client view in the http status monitor

error_counter = {24: 'not_authed_err', 25: 'not_subbed_err', 20: 'unk_err'}
errors = {20: 'Other/Unknown', 21: 'Job not found (=stale)', 22: 'Duplicate share', 23: 'Low difficulty share', 24: 'Unauthorized worker', 25: 'Not subscribed'}
last_share_submit_delta
push_difficulty()[source]

Pushes the current difficulty to the client. Currently this only happens upon initial connect, but it would be used for vardiff.

push_job(flush=False, timeout=False)[source]

Pushes the latest job down to the client. Flush indicates whether the client should discard its previous jobs. A flush occurs when a new block is found, since work on the old block is invalid.

read(*args, **kwargs)[source]
recalc_vardiff()[source]
send_error(num=20, id_val=1)[source]

Utility for transmitting an error to the client

send_success(id_val=1)[source]

Utility for transmitting success to the client

share_type_strings = {0: 'acc', 1: 'dup', 2: 'low', 3: 'stale'}
submit_job(data, t)[source]

Handles receiving a work submission and checking that it is valid, whether it meets network difficulty, etc. Sends a reply to the stratum client.

summary

Displayed on the all client view in the http status monitor

Jobmanager

This module generates mining jobs and sends them to workers. It must provide current jobs for the stratum server to be able to push. The reference implementation monitors an RPC daemon server.

class powerpool.jobmanagers.monitor_aux_network.MonitorAuxNetwork(config)[source]
_check_new_jobs(*args, **kwargs)[source]
defaults = {'signal': None, 'enabled': False, 'send': True, 'currency': 2345987234589723495872345L, 'rpc_ping_int': 2, 'algo': 2345987234589723495872345L, 'flush': False, 'coinservs': 2345987234589723495872345L, 'work_interval': 1}
found_block(address, worker, header, coinbase_raw, job, start)[source]
gl_methods = ['_monitor_nodes', '_check_new_jobs']
one_min_stats = ['work_restarts', 'new_jobs']
start()[source]
status
class powerpool.jobmanagers.monitor_network.MonitorNetwork(config)[source]
_check_new_jobs(*args, **kwargs)[source]
_poll_height(*args, **kwargs)[source]
config = {'diff1': 1766820104831717178943502833727831496196810259731196417549125097682370560L, 'pool_address': '', 'pow_block_hash': False, 'max_blockheight': None, 'block_poll': 0.2, 'hashes_per_share': 65535, 'currency': 2345987234589723495872345L, 'rpc_ping_int': 2, 'algo': 2345987234589723495872345L, 'poll': None, 'payout_drk_mn': True, 'coinservs': 2345987234589723495872345L, 'signal': None, 'merged': (), 'job_refresh': 15}
defaults = {'diff1': 1766820104831717178943502833727831496196810259731196417549125097682370560L, 'pool_address': '', 'pow_block_hash': False, 'max_blockheight': None, 'block_poll': 0.2, 'hashes_per_share': 65535, 'currency': 2345987234589723495872345L, 'rpc_ping_int': 2, 'algo': 2345987234589723495872345L, 'poll': None, 'payout_drk_mn': True, 'coinservs': 2345987234589723495872345L, 'signal': None, 'merged': (), 'job_refresh': 15}
found_block(raw_coinbase, address, worker, hash_hex, header, job, start)[source]

Submit a valid block (hopefully!) to the RPC servers

generate_job(push=False, flush=False, new_block=False, network='main')[source]

Creates a new job for miners to work on. Push will trigger an event that sends new work but doesn’t force a restart. If flush is true a job restart will be triggered.

getblocktemplate(new_block=False, signal=False)[source]
new_merged_work(event)[source]
one_min_stats = ['work_restarts', 'new_jobs', 'work_pushes']
start()[source]
status

For display in the http monitor

Reporters

The reporter is responsible for transmitting shares, mining statistics, and new blocks to some external storage. The reference implementation is the CeleryReporter which aggregates shares into batches and logs them in a way designed to interface with SimpleCoin. The reporter is also responsible for tracking share rates for vardiff. This makes sense if you want vardiff to be based off the shares per second of an entire address, instead of a single connection.

class powerpool.reporters.base.Reporter[source]

An abstract base class to document the Reporter interface.

add_block(address, height, total_subsidy, fees, hex_bits, hash, merged, worker, algo)[source]

Called when a share is submitted with a hash that is valid for the network.

agent_send(address, worker, typ, data, time)[source]

Called when valid data is received from a PPAgent connection.

log_share(client, diff, typ, params, job=None, header_hash=None, header=None, start=None, **kwargs)[source]

Logs a share to external sources for payout calculation and statistics

class powerpool.reporters.redis_reporter.RedisReporter(config)[source]
_queue_add_block(address, height, total_subsidy, fees, hex_bits, hex_hash, currency, algo, merged=False, worker=None, **kwargs)[source]
_queue_agent_send(address, worker, typ, data, stamp)[source]
_queue_log_one_minute(address, worker, algo, stamp, typ, amount)[source]
_queue_log_share(address, shares, algo, currency, merged=False)[source]
agent_send(*args, **kwargs)[source]
defaults = {'pool_report_configs': {}, 'redis': {}, 'attrs': {}, 'chain': 1}
gl_methods = ['_queue_proc', '_report_one_min']
log_share(client, diff, typ, params, job=None, header_hash=None, header=None, **kwargs)[source]
one_sec_stats = ['queued']
status

Monitor

class powerpool.monitor.ServerMonitor(config)[source]

Provides a few useful json endpoints for viewing server health and performance.

client(comp_key, username)[source]
clients_0_5()[source]

Legacy client view emulating version 0.5 support

clients_comp(comp_key)[source]
comp(comp_key)[source]
comp_config(comp_key)[source]
counters()[source]
debug()[source]
defaults = {'DEBUG': False, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'port': 3855, 'JSON_SORT_KEYS': False, 'address': '127.0.0.1'}
general()[source]
general_0_5()[source]

Legacy 0.5 emulating view

handler_class

alias of CustomWSGIHandler

start(*args, **kwargs)[source]
stop(*args, **kwargs)[source]
