This section sets up the Docker container `insert-gossip`, which reads gossip messages in raw byte format from a specified file.
Such a file is, for example, the `gossip_store` file that a Core Lightning node creates automatically.
If you want a deeper understanding of this file, feel free to check out the official documentation as well as the source code of the `insert-gossip` service.
Create a `.env` file and fill in your credentials.
The easiest way is to copy the `gossip_store` file from your Core Lightning node into this directory.
```
FILE_PATH=gossip_store                                                # File path to gossip_messages
BLOCKCHAIN_RPC_URL=YOUR_BLOCKCHAIN_RPC_URL                            # See the introduction for details
EXPLORER_RPC_PASSWORD=YOUR_BLOCKCHAIN_RPC_PASSWORD                    # Password protection of the web interface with BASIC AUTH
LN_HISTORY_DATABASE_CONNECTION_STRING=YOUR_DATABASE_CONNECTION_STRING # The connection string to your ln-history-database
```
🔐 Important: Never commit .env files containing credentials to version control.
Ultimately, the folder structure should look like this:
```
database/
├── .env               # .env file with credentials
├── docker-compose.yml # Docker setup for this service
└── gossip_store       # The file that contains the gossip messages in raw bytes
```
Please make sure that the permissions are set correctly, so that the `insert-gossip` service is able to read the file at `FILE_PATH`.
Please note that the current implementation makes an HTTP request to your Bitcoin RPC explorer for every `channel_announcement`.
The time it takes to finish the insertion therefore depends heavily on your bandwidth as well as the performance of your Bitcoin RPC explorer.
The `insert-gossip` service will first iterate through the whole file and log statistics about the parsable gossip messages it finds.
During that run it sorts the gossip messages into three separate files, `node_announcements.bin`, `channel_announcements.bin`, and `channel_updates.bin`, as sketched below.
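The classification itself boils down to dispatching on the BOLT #7 message type, which is the big-endian `u16` at the start of every gossip message. The following Python sketch illustrates the idea; it assumes the messages are already available as raw bytes (the framing of the `gossip_store` file itself is out of scope here, and the length-prefix convention for the output files is an assumption, not the actual implementation):

```python
import struct
from collections import Counter

# BOLT #7 gossip message types (every message starts with a big-endian u16 type)
CHANNEL_ANNOUNCEMENT = 256
NODE_ANNOUNCEMENT = 257
CHANNEL_UPDATE = 258

OUTPUT_FILES = {
    NODE_ANNOUNCEMENT: "node_announcements.bin",
    CHANNEL_ANNOUNCEMENT: "channel_announcements.bin",
    CHANNEL_UPDATE: "channel_updates.bin",
}

def split_gossip(messages):
    """Sort raw gossip messages into three files and count the distribution.

    `messages` can be any iterable of raw message bytes; how the messages
    are framed inside the gossip_store file is not shown here.
    """
    stats = Counter()
    handles = {t: open(path, "ab") for t, path in OUTPUT_FILES.items()}
    try:
        for msg in messages:
            (msg_type,) = struct.unpack(">H", msg[:2])
            if msg_type in handles:
                # Length-prefix each record so the .bin files can later be
                # re-read message by message (an assumed convention).
                handles[msg_type].write(struct.pack(">I", len(msg)) + msg)
                stats[OUTPUT_FILES[msg_type]] += 1
            else:
                stats["unable to parse"] += 1
    finally:
        for fh in handles.values():
            fh.close()
    return stats
```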
You can see the statistics in the logs:
```
The `insert-gossip` service started. Press CTRL+C or `docker compose down` to stop it.
Starting phase 1
Iterating through the file: `gossip_store`.
File has a size of `234123` bytes
Creating node_announcements.bin
...
Creating channel_announcements.bin
...
Creating channel_updates.bin
...
Splitting the file `gossip_store` into the following three files:
- node_announcements.bin
- channel_announcements.bin
- channel_updates.bin
Finished iterating through `gossip_store` file
Distribution of messages:
- node_announcements: 2314
- channel_announcement: 1832
- channel_updates: 43187
- unable to parse: 0
Sucessfully finished phase 1
```
The `insert-gossip` service will set up DuckDB, an embedded in-process database, which temporarily stores the gossip messages in the file `temp-db.duckdb`.
The DuckDB database has the same schema as the ln-history-database.
After that it will go through the phases 2a -> 2b -> 2c, described below.
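A minimal sketch of that setup in Python, using the `duckdb` package. The table and column names below are inferred from the descriptions on this page, not taken from the actual ln-history-database schema, and are used throughout the following sketches:

```python
import duckdb

# Open (or create) the temporary staging database file.
con = duckdb.connect("temp-db.duckdb")

# Illustrative schema only; the real ln-history-database schema is the
# source of truth. node_ids are assumed hex-encoded here for simplicity.
ddl = [
    """CREATE TABLE IF NOT EXISTS nodes (
           node_id        TEXT PRIMARY KEY,
           from_timestamp TIMESTAMP,
           last_seen      TIMESTAMP
       )""",
    """CREATE TABLE IF NOT EXISTS nodes_raw_gossip (
           node_id    TEXT,
           timestamp  TIMESTAMP,
           raw_gossip BLOB
       )""",
    """CREATE TABLE IF NOT EXISTS channels (
           scid       BIGINT PRIMARY KEY,
           node_id_1  TEXT,
           node_id_2  TEXT,
           amount_sat BIGINT,
           timestamp  TIMESTAMP
       )""",
    """CREATE TABLE IF NOT EXISTS channel_updates (
           scid       BIGINT,
           timestamp  TIMESTAMP,
           raw_gossip BLOB
       )""",
]
for statement in ddl:
    con.execute(statement)
```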
In phase 2a, the service iterates through `node_announcements.bin`. For every `node_announcement` it checks whether the `node_id` has been seen before; if not, it inserts the `node_id` into the `nodes` table of the DuckDB database. In any case it adds the gossip message to the `nodes_raw_gossip` table.
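Sketched in Python against the illustrative schema above; `parse_node_announcement` is a hypothetical helper that extracts the `node_id` and timestamp from the raw message:

```python
def insert_node_announcement(con, msg, parse_node_announcement):
    """Phase 2a logic for a single node_announcement (illustrative only)."""
    node_id, ts = parse_node_announcement(msg)  # hypothetical parser

    # Insert the node only if it has not been seen before.
    seen = con.execute(
        "SELECT 1 FROM nodes WHERE node_id = ?", [node_id]
    ).fetchone()
    if seen is None:
        con.execute(
            "INSERT INTO nodes (node_id, from_timestamp, last_seen) VALUES (?, ?, ?)",
            [node_id, ts, ts],
        )

    # The raw message is stored in every case.
    con.execute(
        "INSERT INTO nodes_raw_gossip (node_id, timestamp, raw_gossip) VALUES (?, ?, ?)",
        [node_id, ts, msg],
    )
```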
In phase 2b, the service iterates through `channel_announcements.bin`. For every `channel_announcement` it queries the `BLOCKCHAIN_RPC_URL` service to get the `amount_sat` of the channel and the timestamp of the Bitcoin block in which its funding transaction was mined.
It also checks whether the `node_id`s of the two participating nodes, `node_id_1` and `node_id_2`, exist in the `nodes` table. In case they don't, the missing `node_id` gets inserted with `from_timestamp` (and `last_seen`) initially set to the timestamp of the block.
For every `channel_announcement` a new row gets created in the `channels` table.
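The on-chain lookup is possible because the `short_channel_id` encodes the funding output's position in the chain. The decoding below follows the BOLT #7 encoding; `fetch_funding_output` stands in for whatever call your Bitcoin RPC explorer exposes, since its exact API is not specified here:

```python
def decode_scid(scid: int):
    """Split a short_channel_id into (block_height, tx_index, output_index),
    following the BOLT #7 encoding (3 + 3 + 2 bytes)."""
    return scid >> 40, (scid >> 16) & 0xFFFFFF, scid & 0xFFFF

def resolve_channel(scid: int, fetch_funding_output):
    """Phase 2b lookup for one channel_announcement (illustrative only).

    `fetch_funding_output` is a hypothetical wrapper around your
    BLOCKCHAIN_RPC_URL endpoint; it should return the funding output's
    value in satoshis and the timestamp of the block it was mined in.
    """
    block_height, tx_index, output_index = decode_scid(scid)
    amount_sat, block_timestamp = fetch_funding_output(
        block_height, tx_index, output_index
    )
    return amount_sat, block_timestamp
```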
In phase 2c, the service iterates through `channel_updates.bin`. For every `channel_update` it checks whether the channel has been announced before; if not, it creates a new entry in the `channels` table.
Every `channel_update` gets inserted into the `channel_updates` table.
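Again as an illustrative Python sketch, where `parse_channel_update` is a hypothetical helper extracting the `short_channel_id` and timestamp:

```python
def insert_channel_update(con, msg, parse_channel_update):
    """Phase 2c logic for a single channel_update (illustrative only)."""
    scid, ts = parse_channel_update(msg)  # hypothetical parser

    # A channel_update may refer to a channel whose announcement was never
    # seen; create a placeholder row in the channels table in that case.
    known = con.execute(
        "SELECT 1 FROM channels WHERE scid = ?", [scid]
    ).fetchone()
    if known is None:
        con.execute("INSERT INTO channels (scid) VALUES (?)", [scid])

    con.execute(
        "INSERT INTO channel_updates (scid, timestamp, raw_gossip) VALUES (?, ?, ?)",
        [scid, ts, msg],
    )
```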
After the tables in `temp-db.duckdb` have been filled completely, the `insert-gossip` service exports each table as a Parquet file.
As the last step of the insertion, the service imports the created Parquet files into the PostgreSQL database.
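DuckDB can do both steps with plain SQL. The sketch below uses DuckDB's `COPY ... (FORMAT PARQUET)` for the export and its `postgres` extension for the import; whether `insert-gossip` uses exactly this path is an assumption, and the table and file names are illustrative:

```python
import duckdb

con = duckdb.connect("temp-db.duckdb")
tables = ["nodes", "nodes_raw_gossip", "channels", "channel_updates"]

# Export each staging table as a Parquet file.
for table in tables:
    con.execute(f"COPY {table} TO '{table}.parquet' (FORMAT PARQUET)")

# One possible import path (an assumption): attach the target PostgreSQL
# database via DuckDB's postgres extension and insert the Parquet contents.
con.execute("INSTALL postgres")
con.execute("LOAD postgres")
con.execute("ATTACH 'YOUR_DATABASE_CONNECTION_STRING' AS pg (TYPE postgres)")
for table in tables:
    con.execute(
        f"INSERT INTO pg.{table} SELECT * FROM read_parquet('{table}.parquet')"
    )
```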
The `insert-gossip` service is designed to run once over a given file and does not persist information about which files have already been read.
To help the user keep track, it appends a `.done` suffix to each file that has been imported into the ln-history-database. The cleanup steps are also visible in the logs:
```
Renaming the initial file `gossip_store` to `gossip_store.done` to indicate that the data has been inserted.
Removing parquet temp files:
- nodes.parquet
- nodes_raw_gossip.parquet
- channel_announcements.parquet
- channel_updates.parquet
Removing temp-db.duckdb
Sucessfully cleaned up resources.
```