See here for a step-by-step explanation of how to set up a CLI node on your PC manually.
The same CLI node also runs behind the GUI of KDX, so you can simply download and start KDX; this is probably the easiest way to run a node. It is still worth reading the link above (and specifically this chapter of it) to understand what is going on, what to expect, and how long to wait until the node is fully synced.
If you need a containerized Kaspa node, you can get it here: https://hub.docker.com/r/supertypo/kaspad .
If you don't want to mine to a pool, you can either use a kaspad-compatible miner, or install and run a kaspad-to-stratum adapter and use your own kaspad node with a stratum-enabled miner, e.g. lolMiner or SRBMiner.
There are three kaspad-to-stratum adapters (thanks to Lolliedieb, the lolMiner author):
https://github.com/onemorebsmith/kaspa-stratum-bridge/blob/main/hive-setup.md
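Once an adapter is running in front of your node, any stratum-enabled miner can simply connect to it. A minimal sketch for lolMiner, assuming the bridge listens locally on port 5555 (the port and the wallet address below are placeholders, not values from this guide; adjust them to your setup):

```shell
# Hypothetical example: point lolMiner at a locally running stratum bridge.
# 127.0.0.1:5555 and the wallet address are assumptions - use your own values.
lolMiner --algo KASPA \
         --pool stratum+tcp://127.0.0.1:5555 \
         --user kaspa:your_wallet_address_here
```

The miner then speaks stratum to the adapter, and the adapter talks gRPC to your kaspad node.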
Thanks to our great community, there are scripts to run the adapters on your local rig in HiveOS, using either your local node or an external node; they are shown in the next several sections.
cd /home/user/ && wget https://www.dropbox.com/s/x7xlminjq3f0y73/kaspad-stratum?dl=0 && mv kaspad-stratum?dl=0 kaspad-stratum && chmod +x kaspad-stratum && echo "$(echo 'cat $MINER_DIR/$MINER_VER/lolminer.conf | grep KASPADUAL ; if [ $? -eq 0 ]; then /home/user/kaspad-stratum --rpc-url kas.pow.eco:16110 --mining-addr kaspa:qrmmcazulntw8c5zgztn548797efkn490u9lk8kvzjgfcjwyvx3qzl4jyndqa & fi;' | cat - /hive/miners/lolminer/h-run.sh)" > /hive/miners/lolminer/h-run.sh
curl -s https://deb.nodesource.com/setup_16.x | sudo bash && sudo apt install nodejs -y && sudo apt install git -y && npm install -g [email protected] -y && echo "$(echo 'cat $MINER_DIR/$MINER_VER/lolminer.conf | grep KASPADUAL ; if [ $? -eq 0 ]; then npx kstratum@latest --node kas.pow.eco:16110 --address kaspa:qrmmcazulntw8c5zgztn548797efkn490u9lk8kvzjgfcjwyvx3qzl4jyndqa --port 6968 --listen-address 0.0.0.0 -y & fi;' | cat - /hive/miners/lolminer/h-run.sh)" > /hive/miners/lolminer/h-run.sh
Remember to replace kas.pow.eco:16110 with your own node (kas.pow.eco doesn't guarantee any uptime) and to replace the kaspa: address with your own.
This prepends a line to lolMiner's h-run.sh in HiveOS (/hive/miners/lolminer). The command only needs to be run once; afterwards every start of the miner will automatically launch Kaspad-Stratum or KStratum. If you want to change the node or the kaspa wallet later, edit the file /hive/miners/lolminer/h-run.sh with nano.
In the Flight Sheet for Kaspad-Stratum use
--dualpool 127.0.0.1:6969
In the Flight Sheet for KStratum use
--dualpool 127.0.0.1:6968
If you add both lines (KStratum and Kaspad-Stratum) with a different node in each, you get a primary node plus a failover node. You then only need to add to the Flight Sheet:
--dualpool 127.0.0.1:6968 --dualuser kaspa:xxxx --dualpool 127.0.0.1:6969 --dualuser kaspa:xxxx
Both options work great. KStratum auto-updates each time it is launched; Kaspad-Stratum needs a recompile when updates are released.
Script source at Dropbox: https://www.dropbox.com/s/x7xlminjq3f0y73/kaspad-stratum?dl=0
Here's a Docker repo by @mater#0296 on Discord that makes it easy for anyone to run a Kaspa node on a Raspberry Pi: https://hub.docker.com/r/nwbower/pi-kaspad
A Kaspa archival node stores a huge amount of data, so setting up a completely new archival node requires some special steps. The following chapters describe the idea and the setup steps.
At the time of writing this documentation, the amount of data is about 710 GB, so it is recommended to have at least 1 TB of storage available. Node startup also generates a lot of disk I/O, which will fail on a traditional mechanical hard disk, so an SSD is required instead of an HDD. The machine itself should have at least 32 GB of memory. CPU requirements are not that strict; any CPU from the last five years will be sufficient.
Regarding the operating system, nothing special is required apart from being able to run the command-line tool rsync. On any Linux system this is just a package to install. On Windows, the recommended way is to use WSL (Windows Subsystem for Linux); within WSL, rsync can be installed the same way as on Linux.
To run the node, at least a kaspad installation is required. To set it up, follow the corresponding setup instructions here on the wiki:
Important: In addition to the normal node setup, you need to add the parameter --archival to the start command of the service! See the detailed description below.
Summarized requirements:
High level view of setup procedure
The whole setup procedure will work the following way:
With this approach the downtime of the existing archival node is minimized, as the bulk of the data transfer happens before the final sync step.
First, an SSH key pair needs to be created using the command ssh-keygen. Just issue the command on the Linux command line and answer the questions appropriately. Here's an example of how this might look:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hlxeasy/.ssh/id_rsa): /home/hlxeasy/.ssh/arch-node-sync-key
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hlxeasy/.ssh/arch-node-sync-key.
Your public key has been saved in /home/hlxeasy/.ssh/arch-node-sync-key.pub.
...
Note that the full path /home/hlxeasy/.ssh/arch-node-sync-key was given at the first question! Without a path, the key pair would be stored relative to the directory where the command was executed.
To grant you access to a running archival node, its operator needs to know your public key. Just print the public key on the command line, copy the output, and give it to the archival node operator. Here's an example of how this might look:
$ cat ~/.ssh/arch-node-sync-key.pub
ssh-rsa AAAAB3NzaC1yc2EAAAAD...this.might.be.a.long.line...1VYAP79rkuRmpw+iwu/KnGQev8b5jRZc83dhk+4OMtQI1sqH hlxeasy@archivalnode1
The line to copy starts with ssh-rsa and is very long! At the end of the line you see your local username and the hostname, separated by @. Copy the whole line and forward it to the operator of the archival node you'd like to sync from.
As noted in the requirements, an archival node needs a huge amount of disk space. To let kaspad use this disk space, it must be configured according to your setup.
1. Data location
In the following it is assumed that the big SSD is mounted at /data and that kaspad should store the archival data in /data/data1. If your setup differs, modify the paths to your needs.
To tell kaspad where to store the data, its configuration file needs to be modified. Open ~/.kaspad/kaspad.conf with the editor of your choice and look at the very first configuration section:
[Application Options]
; ------------------------------------------------------------------------------
; Data settings
; ------------------------------------------------------------------------------
; The directory to store data such as the block DAG and peer addresses. The
; block DAG takes several GB, so this location must have a lot of free space.
; The default is ~/.kaspad/data on POSIX OSes, $LOCALAPPDATA/Kaspad/data on Windows,
; ~/Library/Application Support/Kaspad/data on Mac OS, and $home/kaspad/data on
; Plan9. Environment variables are expanded so they may be used. NOTE: Windows
; environment variables are typically %VARIABLE%, but they must be accessed with
; $VARIABLE here. Also, ~ is expanded to $LOCALAPPDATA on Windows.
; datadir=~/.kaspad/data
Insert the following right after this block:
appdir=/data/data1
Save and close the file. From now on kaspad will store its data below the configured path.
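Before the first start, it can be worth verifying that the configured location actually exists and has enough free space. A small sketch, assuming the example path /data/data1 from above (override it via DATADIR for your setup):

```shell
#!/bin/sh
# Sanity-check the configured kaspad data location before the first start.
DATADIR="${DATADIR:-/data/data1}"     # path configured via appdir= above
if [ -d "$DATADIR" ]; then
  echo "OK: $DATADIR exists"
  df -h "$DATADIR"                    # show free space on that mount
else
  echo "MISSING: $DATADIR - create it first, e.g. mkdir -p $DATADIR"
fi
```

Remember the requirement of roughly 1 TB of free space on that mount.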
2. Activate archival mode
To let kaspad run in archival mode, it must be started with the option --archival. A command line might look like this:
kaspad --utxoindex --archival
If the system is configured to use systemd, this option needs to be added to the ExecStart line in /etc/systemd/system/kaspad.service:
ExecStart=/usr/local/bin/kaspad --utxoindex --archival
Don't forget to
sudo systemctl daemon-reload
after updating the service!
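The full sequence after editing the unit file might look like this (assuming the service is named kaspad.service, as in the path above):

```shell
sudo systemctl daemon-reload        # pick up the edited unit file
sudo systemctl restart kaspad       # restart kaspad with --archival
systemctl status kaspad --no-pager  # verify it is active (running)
```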
In general, the same adjustments as on Linux need to be made: add --archival to the kaspad command.
This step needs to be done by the operator of the existing archival node!
Feel free to get in touch with Helix (helixeasy) on Kaspa Discord to sync your node.
To grant access to the archival node data, the following line needs to be added to the file ~/.ssh/authorized_keys of an account on the archival host that has read access to the node data:
command="rsync --stats --progress --numeric-ids -axAhHSP --server --sender --delete /data/data1/" ssh-rsa AAAAB3NzaC1yc2EAAAAD...public.key.from.above
It might be necessary to change /data/data1 to the real location of the archival data. With this line added, nothing other than exactly this command can be executed, so the data can be synchronized over SSH, but the archival node itself is not really accessible this way.
After the node operator has added the public key as described in the previous step, the data sync can be started. This is done by executing rsync multiple times until only a small number of files is synchronized.
The very first rsync run will synchronize the majority of the data and will take hours, depending on the internet connection. The first step rsync performs is determining which files to sync; as there are nearly 400k files, this alone may take some minutes. After that, rsync starts transferring the files, which will take some hours.
First, some settings need to be defined:
$ HOST=<url-of-remote-archival-node> # \
$ PORT=<port-to-use-for-ssh> # > The operator of the existing archival node will tell you
$ ACCOUNT=<account-name-to-login> # /
$ KEY=<private-key-with-path> # The private key you created before with full path
$ DESTINATION=<destination-folder-at-your-side> # Where the data should be stored
The command to sync is as follows:
$ rsync --stats --progress --delete --numeric-ids -axAhHSP -e "ssh -l ${ACCOUNT} -p ${PORT} -i ${KEY}" ${HOST}:/data/data1/ /data/data1
The parts of this command are:
- the options: see rsync --help
- -e "ssh -l ${ACCOUNT} -p ${PORT} -i ${KEY}": account, port and private key to authenticate
- ${HOST}:/data/data1/: URL and path to sync from
- /data/data1: where to sync to
The first lines of the initial sync might look like this:
$ HOST=some.url.to.connect
$ PORT=1234
$ ACCOUNT=kaspad
$ KEY=~/.ssh/arch-node-sync-key
$ DESTINATION=/data/data1
$ rsync --stats --progress --delete --numeric-ids -axAhHSP -e "ssh -l ${ACCOUNT} -p ${PORT} -i ${KEY}" ${HOST}:/data/data1/ ${DESTINATION}
The authenticity of host '[some.url.to.connect]:1234 ([123.123.123.123]:1234)' can't be established.
ECDSA key fingerprint is SHA256:aDqHwB1KiSSVJLDHD9PuWQ4oqOax8/J3oFSaaZt3IxQ.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[some.url.to.connect]:1234,[123.123.123.123]:1234' (ECDSA) to the list of known hosts.
receiving file list ...
380230 files to consider
kaspa-mainnet/
kaspa-mainnet/datadir2/
kaspa-mainnet/datadir2/042382.ldb
2.13M 100% 61.48MB/s 0:00:00 (xfr#1, to-chk=380226/380230)
kaspa-mainnet/datadir2/042383.ldb
2.13M 100% 31.70MB/s 0:00:00 (xfr#2, to-chk=380225/380230)
kaspa-mainnet/datadir2/042384.ldb
2.13M 100% 21.13MB/s 0:00:00 (xfr#3, to-chk=380224/380230)
...
In this example, rsync determined 380230 files to sync and started the synchronization right after that. At the end of every second line you can see how many files are left to transfer.
Right after the initial sync run has finished, the same command should be executed again. The number of files to sync is much smaller now, as the majority has already been transferred. Repeat the rsync command until only a very small number of files is synchronized.
For an automated restart of rsync, a one-liner like the following might be handy:
$ while true ; do rsync ... ; echo "Sleeping 60s" ; sleep 60 ; done
Insert the rsync parameters from above at the three dots. This chain of commands triggers rsync first; when it finishes, "Sleeping 60s" is written to the console, and after 60 seconds rsync is triggered again. The loop can be stopped with Ctrl-C right after the "Sleeping" output.
The final sync needs to be coordinated with the operator of the source node: the source node has to be stopped so it writes a consistent state to disk. As soon as the node is stopped, the final sync can be triggered.
A final sync might look like this:
$ rsync --stats --progress --delete --numeric-ids -axAhHSP -e "ssh -l ${ACCOUNT} -p ${PORT} -i ${KEY}" ${HOST}:/data/data1/ ${DESTINATION}
receiving file list ...
380230 files to consider
kaspa-mainnet/datadir2/8969076.log
117.73M 100% 149.70MB/s 0:00:00 (xfr#1, to-chk=33/380230)
kaspa-mainnet/logs/kaspad.log
29.25M 100% 21.72MB/s 0:00:01 (xfr#2, to-chk=2/380230)
Number of files: 380,230 (reg: 380,225, dir: 5)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 2
Total file size: 747.60G bytes
Total transferred file size: 146.98M bytes
Literal data: 550.25K bytes
Matched data: 146.66M bytes
File list size: 10.40M
File list generation time: 3.297 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 108.45K
Total bytes received: 11.01M
sent 108.45K bytes received 11.01M bytes 1.17M bytes/sec
total size is 747.60G speedup is 67,233.77
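To decide programmatically whether another pass is needed, the "Number of regular files transferred" line of the --stats output can be parsed. A small sketch using an inlined sample of the output above (save real rsync output to a file and point awk at it instead):

```shell
# Parse the transferred-file count out of rsync --stats output.
stats='Number of files: 380,230 (reg: 380,225, dir: 5)
Number of regular files transferred: 2
Total file size: 747.60G bytes'
transferred=$(printf '%s\n' "$stats" \
  | awk -F': ' '/Number of regular files transferred/ {gsub(",", "", $2); print $2}')
echo "files transferred in this pass: $transferred"   # prints 2 for the sample
```

When that number stays small across passes, it is time to coordinate the final sync.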
As soon as the final sync has finished, the source node can be restarted. With this approach the real downtime of the source node is as short as possible.
If you're on Linux, double-check the file permissions and ownership before starting the node! The files should belong to the account that runs the kaspad service.
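A quick ownership check might look like the following sketch; the service account name "kaspad" and the /data/data1 path are assumptions from the examples above, and stat -c is the GNU/Linux variant:

```shell
#!/bin/sh
# Compare the owner of the synced data with the user that runs the kaspad service.
DATADIR="${DATADIR:-/data/data1}"
SERVICE_USER="${SERVICE_USER:-kaspad}"   # assumed service account name
owner=$(stat -c '%U' "$DATADIR" 2>/dev/null)
if [ "$owner" = "$SERVICE_USER" ]; then
  echo "ownership OK ($owner)"
else
  echo "ownership mismatch: fix with: sudo chown -R $SERVICE_USER: $DATADIR"
fi
```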
Now kaspad can be started and should log something like this:
2023-09-29 17:47:20.755 [INF] KASD: Version 0.12.14
2023-09-29 17:47:20.756 [INF] KASD: Loading database from '/data/data1/kaspa-mainnet/datadir2'
2023-09-29 17:47:28.312 [INF] ADXR: Loaded 4096 addresses and 0 banned addresses
2023-09-29 17:47:28.647 [INF] KASD: UTXO index started
2023-09-29 17:47:28.648 [INF] TXMP: P2P Server listening on [::]:16111
2023-09-29 17:47:28.648 [INF] TXMP: RPC Server listening on [::]:16110
...
As soon as output like
...
2023-09-29 17:47:48.620 [INF] PROT: Accepted block c43a15813393d6cee0750227122ac3a89db8aac3b255b215c40ca308e4dc2360 via relay
2023-09-29 17:47:48.721 [INF] PROT: Accepted block b1c9705e9a462e4725f9d84b102806f2353f52d6c8a6b916fb689e9a262a340c via relay
2023-09-29 17:47:49.687 [INF] PROT: Accepted block b82cd4a0823b79cce8acbc52e1b843132a436236ed0d14204c5a3c1fed4c162a via relay
...
is in the log, the node is in sync with the Kaspa DAG.
Congratulations, your archival node is up and running!
As seen in the previous sections, the kaspad log contains different abbreviations, which are explained in the following:
The first and second columns contain the date and time of the log entry.
Example:
2023-10-05 22:57:25.779
The third column categorizes the entry type: informational entries (INF), warnings (WRN), and errors (to the developers: are there more?). These tags are enclosed in square brackets.
Example:
2023-10-05 23:00:04.081 [WRN]
2023-10-05 23:00:04.289 [INF]
The fourth column contains the subsystem abbreviation (e.g. KASD, PROT, TXMP, ADXR as seen above). To be explained in more detail, help appreciated!
The fifth column up to the end of the line contains the log message content.
Example:
2023-10-05 23:03:14.007 [WRN] TXMP: Rejected spam tx c56320cd1cfe20869d0a20a95e5...
2023-10-05 23:03:14.230 [INF] PROT: Accepted block 732a37799c353457d55b212d94b95...
2023-10-05 23:03:14.230 [INF] PROT: Ignoring duplicate block 732a37799c353457d55...
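This column layout makes the log easy to process with standard tools. For example, a sketch that counts entries per subsystem tag, using the three sample lines above (point it at your real kaspad log instead):

```shell
# Count log lines per subsystem tag (4th field, e.g. KASD, PROT, TXMP).
log='2023-10-05 23:03:14.007 [WRN] TXMP: Rejected spam tx c56320cd1cfe20869d0a20a95e5...
2023-10-05 23:03:14.230 [INF] PROT: Accepted block 732a37799c353457d55b212d94b95...
2023-10-05 23:03:14.230 [INF] PROT: Ignoring duplicate block 732a37799c353457d55...'
printf '%s\n' "$log" \
  | awk '{gsub(":$", "", $4); print $4}' \
  | sort | uniq -c | sort -rn
# For the sample this prints:
#   2 PROT
#   1 TXMP
```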