Introduction
đź“– Contributing
You can contribute to this book on GitHub.
For instructions regarding Erigon 2 please refer to https://erigon.gitbook.io.
Erigon is an efficient Ethereum implementation designed for speed, modularity, and optimization. By default, it functions as an archive node, utilizing technologies like staged sync, efficient state storage, and database compression.
With Erigon 3 the default configuration shifts from archive node to full node, enhancing efficiency, accessibility, and versatility for a wider range of users. Archive nodes remain available for developers and researchers needing full historical data, while the full node offers faster sync times and lower resource usage for everyday operations. More info here.
Information
If you want to test Erigon without reading all the documentation, go straight to the quick nodes section.
DISCLAIMER
Erigon 3 is in alpha phase and is not recommended for use in production environments. The Erigon team does not take any responsibility for losses or damages incurred through the use of Erigon. While we employ a seasoned in-house security team, we have not subjected our systems to independent third-party security assessments, leaving potential vulnerabilities to bugs or malicious activity unaddressed. It is essential for validators to be aware that unforeseen events, including software glitches, may result in lost rewards.
Features
Erigon offers several features that make it a good option for running a node, such as efficient state storage through the use of a key-value database, and faster initial synchronisation.
Built with modularity in mind, it also offers separate components such as the JSON RPC daemon, that can connect to both local and remote databases. For read-only calls, this RPC daemon does not need to run on the same system as the main Erigon binary, and can even run from a database snapshot.
Erigon 3 is a major update that introduces several significant changes, improvements, and optimizations. Some of the key features and differences include:
The main changes from Erigon 2 are listed here.
Release Process
Erigon 3 also introduces changes to the release process, including:
- New Docker Image Repository: Erigon images are now available on Dockerhub repository "erigontech/erigon".
- Multi-Platform Support: The docker image is built for linux/amd64/v2 and linux/arm64 platforms using Alpine 3.20.2.
- Release Workflow Changes: Build flags are now passed to the release workflow, allowing users to view previously missed build information in released binaries.
Known Issues
See https://github.com/erigontech/erigon?tab=readme-ov-file#known-issues.
Getting Started
In order to use Erigon, the software has to be installed first. There are several ways to install Erigon, depending on the operating system and the user's choice of installation method, e.g. using a package manager, docker container or building from source.
Verify carefully that your hardware satisfies the requirements and that your machine is running the required software.
Hardware Requirements
Disk type
A locally mounted SSD (Solid-State Drive) or NVMe (Non-Volatile Memory Express) disk is recommended for storage. Avoid Hard Disk Drives (HDD): they can cause Erigon to constantly lag behind the blockchain tip, even if it will not fall out of sync entirely.
Additionally, SSDs may experience performance degradation when nearing full capacity.
See here how you can optimize storage.
RAID Configuration
When using multiple disks, consider implementing a RAID 0 configuration to maximize performance and utilize space efficiently. RAID ZFS is not recommended.
Disk size
Please refer to disk space required for details. To ensure smooth operation, it is recommended to maintain at least 25% of free disk space.
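As a quick way to confirm the free-space margin, you can check the filesystem that will hold the datadir. The `DATADIR` variable and the fallback to `$HOME` below are illustrative, not an Erigon convention:

```shell
# Check free space on the filesystem that will hold the datadir; keep >= 25% free.
# DATADIR is a hypothetical variable — substitute your actual --datadir path.
df -h "${DATADIR:-$HOME/.local/share/erigon}" 2>/dev/null || df -h "$HOME"
```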
CPU Requirements
- Architecture: 64-bit architecture.
- Number of cores and threads: While a powerful CPU can be beneficial, it's not essential for running Erigon. A moderate number of cores and threads should be sufficient. However, we recommend at least 4 cores, or 8 cores for high performance.
RAM Requirements
- Minimum: 64GB
Kernel Requirements
- Linux: kernel version > v4
Bandwidth
A stable and reliable internet connection is crucial for running a node, especially if you're running a validator node, as downtime can lead to missed rewards or penalties. We recommend a minimum inbound and outbound bandwidth of 20 Mbps, with a stable connection and low latency. For optimal performance, it's best to use an ISP with an uncapped data allowance.
Tips for faster syncing
Optimize for Low Latency
Use a machine with low-latency storage (latency matters more than throughput) and ample RAM to speed up the initial sync process.
Memory Optimized Nodes
Consider using memory-optimized instances, such as the AWS EC2 r5 or r6 series, for faster syncing.
Additional Recommendations
- Only expose ports that are necessary for specific use cases (e.g., JSON-RPC or WebSocket).
- Regularly review and audit your firewall rules to ensure they align with your infrastructure needs.
- Utilize monitoring tools like Prometheus or Grafana to track P2P communication metrics.
This minimal configuration ensures proper P2P functionality for both the Execution and Consensus layers, without exposing unnecessary services.
Software Requirements
Before we start, please note that building software from source can be complex. If you're not comfortable with technical tasks, you might want to check the Docker installation.
Erigon runs only from the command line interface (CLI), so it is advisable to be comfortable with basic terminal commands.
Please ensure that the following prerequisites are met.
Build essential (only for Linux)
Install Build-essential and Cmake:
sudo apt install build-essential cmake -y
Git
Git is a tool that helps download and manage the Erigon source code. To install Git, visit:
https://git-scm.com/downloads.
Go Programming Language
Erigon utilizes Go (also known as Golang) version 1.22 or newer for part of its development. It is recommended to have a fresh Go installation. If you have an older version, consider deleting the /usr/local/go folder (you may need to use sudo) and re-extract the new version in its place.
To install the latest Go version, visit the official documentation at https://golang.org/doc/install.
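To confirm the toolchain before building, a quick check along these lines can help; the archive name in the commented reinstall sequence is only an example of a recent release, not a pinned requirement:

```shell
# Verify the Go toolchain meets the minimum (Erigon needs Go 1.22 or newer):
go version

# Typical fresh reinstall on Linux (example archive name — use the latest release):
# sudo rm -rf /usr/local/go
# sudo tar -C /usr/local -xzf go1.22.5.linux-amd64.tar.gz
# export PATH=$PATH:/usr/local/go/bin
```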
C++ Compiler
This turns the C++ part of Erigon's code into a program your computer can run. You can use either Clang or GCC:
- For Clang follow the instructions at https://clang.llvm.org/get_started.html;
- For GCC (version 10 or newer): https://gcc.gnu.org/install/index.html.
You can now proceed with Erigon installation.
Installation
In order to use Erigon, the software has to be installed first. There are several ways to install Erigon, depending on the operating system and the user's choice of installation method, e.g. using a package manager, container or building from source.
Always check the list of releases for release notes.
Linux and MacOS
How to install Erigon on Linux and MacOS
The basic Erigon configuration is suitable for most users just wanting to run a node. For building the latest stable release use the following command:
git clone --branch v3.0.0-beta1 --single-branch https://github.com/erigontech/erigon.git
cd erigon
make erigon
This should create the binary at ./build/bin/erigon.
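A quick sanity check after the build, assuming the default output path shown above (`--version` is the standard version switch in Erigon releases):

```shell
# Confirm the binary exists and runs:
ls -lh ./build/bin/erigon
./build/bin/erigon --version
```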
Windows
How to install and run Erigon 3 on Windows 10 and Windows 11
There are 3 options for running Erigon 3 on Windows, listed from easiest to most difficult installation:

1. Build executable binaries natively for Windows: Compile Windows executables (or use pre-built ones) that run natively on Windows, with no emulation or containers required.
2. Use Docker: Run Erigon in a Docker container for isolation from the host Windows system. This avoids dependencies on Windows but requires installing Docker.
3. Use Windows Subsystem for Linux (WSL): Install the Windows Subsystem for Linux (WSL) to create a Linux environment within Windows. Erigon can then be installed in WSL by following the Linux installation instructions. This provides compatibility with Linux builds but involves more setup overhead.
Build executable binaries natively for Windows
Before proceeding, ensure that the hardware and software requirements are met.
Installing Chocolatey
Install Chocolatey package manager by following these instructions.
Once Chocolatey is installed, open the Command Prompt by typing "cmd" in the search bar and check that Chocolatey was installed correctly:
choco -v

Now install the following components: cmake, make and mingw:
choco install cmake make mingw
Important note about Anti-Virus:
During the compiler detection phase of MinGW, some temporary executable files are generated to test the compiler capabilities. It has been reported that some anti-virus programs detect these files as possibly infected with the Win64/Kryptic.CIS Trojan horse (or a variant of it). Although these are false positives, we have no control over the 100+ vendors of security products for Windows and their respective detection algorithms, and we understand that this may make your experience with Windows builds uncomfortable. To work around this, you can either set exclusions in your antivirus software specifically for the build\bin\mdbx\CMakeFiles subfolder of the cloned repo, or you can run Erigon using one of the other two options.
Make sure that the Windows System Path variable is set correctly. Use the search bar on your computer to search for “Edit the system environment variable”.

Click the “Environment Variables...” button.

Look down at the "System variables" box and double click on "Path" to add a new path.

Then click on the "New" button and paste the following path:
C:\ProgramData\chocolatey\lib\mingw\tools\install\mingw64\bin
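After updating the Path variable, reopening the Command Prompt and probing each tool is a simple way to confirm the setup:

```shell
# Run in a NEW Command Prompt so the updated PATH takes effect:
choco -v
cmake --version
make --version
gcc --version
```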

Clone the Erigon repository
Open the Command Prompt and type the following:
git clone --branch v3.0.0-beta1 --single-branch https://github.com/erigontech/erigon.git
You might need to change the ExecutionPolicy to allow scripts created locally or signed by a trusted publisher to run:

Set-ExecutionPolicy RemoteSigned
Compiling Erigon
To compile Erigon there are two alternative methods:
1. Compiling from the wmake.ps1 file in the File Explorer
This is the fastest way and normally works for everyone. Open the File Explorer, go to the Erigon folder, then right-click the wmake file and choose "Run with PowerShell".

PowerShell will compile Erigon and all of its modules. All binaries will be placed in the .\build\bin\ subfolder.

2. Using the PowerShell CLI
In the search bar on your computer, search for “Windows PowerShell” and open it.

Change the working directory to "erigon"
cd erigon

Allow PowerShell script execution for your Windows user account using the following command:
Set-ExecutionPolicy Bypass -Scope CurrentUser -Force
This change allows script execution, but use caution to avoid security risks. Remember to only make these adjustments if you trust the scripts you intend to run. Unauthorized changes can impact system security. For more info read Set-Execution Policy documentation.
Now you can compile Erigon and/or any of its components:
.\wmake.ps1 [-target] <targetname>
For example, to build the Erigon executable write:
.\wmake.ps1 erigon

The executable binary erigon.exe should have been created in the .\build\bin\ subfolder.
You can use the same command to build other binaries such as RPCDaemon, TxPool, Sentry and Downloader.
Running Erigon
To start Erigon, open your command prompt in the .\build\bin\ subfolder and use:
start erigon.exe
or, from any location, use the full path of the executable:
start C:\Users\username\AppData\Local\erigon.exe
See basic usage documentation on available options and flags to customize your Erigon experience.
Windows Subsystem for Linux (WSL)
WSL enables running a complete GNU/Linux environment natively within Windows 10, providing Linux compatibility without the performance overhead of traditional virtualization.
To install WSL, follow Microsoft official instructions: https://learn.microsoft.com/en-us/windows/wsl/install.
Information
WSL Version 2 is the only version supported.
Under this option you can build Erigon as you would on a regular Linux distribution (see detailed instructions here).
You can also point your data to any of the mounted Windows partitions (e.g. /mnt/c/[...], /mnt/d/[...], etc.), but be aware that performance will be affected: these mount points use DrvFS, which is a network file system, and additionally MDBX locks the database for exclusive access, meaning that only one process at a time can access the data.
Warning
The remote-DB RPCdaemon is an experimental feature and is not recommended: it is extremely slow. It is highly preferable to use the embedded RPCdaemon.
This has implications for running rpcdaemon, which must be configured as a remote DB even if it is running on the same machine. If your data is hosted on the native Linux filesystem instead, there are no restrictions. Also note that the default WSL2 environment has its own IP address, which does not match the network interface of the Windows host: take this into account when configuring NAT for port 30303 on your router.
Docker
How to run an Erigon node with Docker
Using Docker allows starting Erigon packaged as a Docker image without installing the program directly on your system.
General info
- The released archive comprises 10 key binaries: erigon, downloader, devnet, EVM, caplin, diag, integration, RPCDaemon, Sentry, and txpool;
- The Docker images feature seven of those binaries: erigon, integration, diag, Sentry, txpool, downloader, and RPCDaemon;
- The Docker image is multi-platform, covering linux/amd64/v2 and linux/arm64, and is based on Alpine 3.20.2, so there is no need to pull a different image per platform;
- All build flags are now passed to the release workflow, so users can see previously missing build information in the released binaries and Docker images, and better build optimization can be expected;
- Images are stored at https://hub.docker.com/r/erigontech/erigon
Download and start Erigon in Docker
Here are the steps to download and start Erigon 3 in Docker:

1. Install the latest version of Docker Engine, see instructions here.
2. Visit the Erigon Docker Hub page to view the available releases. For Erigon 3, search for the latest available release.
3. Download the latest version:
docker pull erigontech/erigon:v3.0.0-beta1
4. List the downloaded images to get the IMAGE ID:
docker images
5. Check which Erigon version has been downloaded:
docker run -it <image_id> --version
6. To start Erigon, add options according to the basic usage page or the advanced customization page. For example:
docker run -it 50bef1b5d0f9 --chain=holesky --prune.mode=minimal
7. When done, exit the container or press Ctrl+C; the container will stop.
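For day-to-day use you will usually also want persistent storage and published ports. A hypothetical fuller invocation (the host path, tag and flags are examples to adapt):

```shell
# Persist the datadir on the host and publish the P2P and HTTP-RPC ports:
docker run -it \
  -v "$HOME/.local/share/erigon:/home/erigon/.local/share/erigon" \
  -p 30303:30303 -p 8545:8545 \
  erigontech/erigon:v3.0.0-beta1 \
  --chain=holesky --prune.mode=minimal \
  --http.addr="0.0.0.0" --http.api=eth,web3,net
```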
Optional: Setup dedicated user
The user UID/GID need to be synchronized between the host OS and the container so files are written with the correct permissions.
You may wish to set up a dedicated user/group on the host OS, in which case the following make targets are available.
# create "erigon" user
make user_linux
# or
make user_macos
Environment Variables
There is a .env.example file in the root of the repo.

- DOCKER_UID: the UID of the docker user
- DOCKER_GID: the GID of the docker user
- XDG_DATA_HOME: the data directory which will be mounted to the docker containers

If not specified, the UID/GID will default to those of the current user.
A good choice for XDG_DATA_HOME is the ~erigon/.ethereum directory created by the helper targets make user_linux or make user_macos.
Check: Permissions
In all cases, XDG_DATA_HOME (specified or default) must be writeable by the user UID/GID in docker, which is determined by DOCKER_UID and DOCKER_GID at build time.
If a build or service startup fails due to permissions, check that all the directories, UID, and GID controlled by these environment variables are correct.
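A minimal sketch of such a permissions check, using the default XDG_DATA_HOME location described above (the erigon sub-path is an example):

```shell
# Resolve the data directory the containers will mount:
DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/erigon"
mkdir -p "$DATA_DIR"
# Show the numeric owner UID/GID — compare with DOCKER_UID / DOCKER_GID:
ls -ldn "$DATA_DIR"
# Confirm the current user can write there:
if [ -w "$DATA_DIR" ]; then echo "writeable"; else echo "NOT writeable"; fi
```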
Run
The following command starts Erigon on port 30303, rpcdaemon on port 8545, prometheus on port 9090, and grafana on port 3000:
#
# Will mount ~/.local/share/erigon to /home/erigon/.local/share/erigon inside container
#
make docker-compose
#
# or
#
# if you want to use a custom data directory
# or, if you want to use different uid/gid for a dedicated user
#
# To solve this, pass in the uid/gid parameters into the container.
#
# DOCKER_UID: the user id
# DOCKER_GID: the group id
# XDG_DATA_HOME: the data directory (default: ~/.local/share)
#
# Note: /preferred/data/folder must be read/writeable on host OS by user with UID/GID given
# if you followed above instructions
#
# Note: uid/gid syntax below will automatically use uid/gid of running user so this syntax
# is intended to be run via the dedicated user setup earlier
#
DOCKER_UID=$(id -u) DOCKER_GID=$(id -g) XDG_DATA_HOME=/preferred/data/folder DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose
#
# if you want to run the docker, but you are not logged in as the $ERIGON_USER
# then you'll need to adjust the syntax above to grab the correct uid/gid
#
# To run the command via another user, use
#
ERIGON_USER=erigon
sudo -u ${ERIGON_USER} DOCKER_UID=$(id -u ${ERIGON_USER}) DOCKER_GID=$(id -g ${ERIGON_USER}) XDG_DATA_HOME=~${ERIGON_USER}/.ethereum DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 make docker-compose
The Makefile creates the initial directories for Erigon, Prometheus and Grafana. The PID namespace is shared between erigon and rpcdaemon, which is required to open Erigon's DB from another process (RPCDaemon local-mode). See: https://github.com/ledgerwatch/erigon/pull/2392/files
If your docker installation requires the docker daemon to run as root (which is the default), you will need to prefix the command above with sudo. However, it is often recommended to run docker (and therefore its containers) as a non-root user for security reasons. For more information about how to do this, refer to this article.
Upgrading from a previous version
To upgrade Erigon to a newer version when you originally installed it via Git and manual compilation, follow these steps without needing to delete the entire folder:

1. Terminate your Erigon session by pressing CTRL+C
2. Navigate to your Erigon directory
3. Fetch the latest changes from the repository, to make sure your local repository is up-to-date with the main GitHub repository:
git fetch --tags
4. Check out the new version and switch to it:
git checkout <new_version_tag>
Replace <new_version_tag> with the version tag of the new release, for example: git checkout v3.0.0-beta1
5. Rebuild Erigon: since the codebase has changed, you need to compile the new version:
make erigon

This process updates your installation to the version you specify, while maintaining your existing data and configuration settings in the Erigon folder. You're essentially just replacing the executable with a newer version.
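The steps above can be condensed into one sequence (the tag is an example):

```shell
cd erigon                    # your existing clone
git fetch --tags             # sync tags with the main repository
git checkout v3.0.0-beta1    # example tag — use the release you want
make erigon                  # rebuild the binary in ./build/bin/
```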
Docker
If you're using Docker to run Erigon, upgrading to a newer version is straightforward: pull the latest Docker image and run it. Here's how:

1. Pull the latest Docker image: Find the tag of the new release on the Erigon Docker Hub page, then pull the new image:
docker pull erigontech/erigon:<new_version_tag>
Replace <new_version_tag> with the actual version tag you wish to use, for example: docker pull erigontech/erigon:v3.0.0-beta1
2. List your Docker images: Check your downloaded images to confirm the new image is there and get the new image ID:
docker images
3. Stop the running Erigon container: If you have a currently running Erigon container, stop it before starting the new version. First find the container ID by listing the running containers:
docker ps
Then stop the container:
docker stop <container_id>
Replace <container_id> with the actual ID of the container running Erigon.
4. Remove the old container (optional): If you want to clean up, remove the old container after stopping it:
docker rm <container_id>
5. Run the new image: Start a new container with the new Erigon version using the new image ID:
docker run -it <new_image_id>
6. Verify operation: Ensure that Erigon starts correctly and connects to the desired network, checking the logs for any initial errors.

By following these steps, you'll keep your Docker setup clean and up-to-date with the latest Erigon version without needing to manually clean up or reconfigure your environment. Docker's ability to encapsulate software in containers simplifies upgrades and reduces conflicts with existing software on your machine.
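The flow above as one sequence; the tag and the <container_id> placeholders must be substituted with your actual values:

```shell
docker pull erigontech/erigon:v3.0.0-beta1   # example tag
docker ps                                    # note the old container's ID
docker stop <container_id>
docker rm <container_id>                     # optional cleanup
docker run -it erigontech/erigon:v3.0.0-beta1
```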
Basic Usage
All-in-One Client
The all-in-one client is the preferred option for most users:
./build/bin/erigon
This CLI command allows you to run an Ethereum full node where every process is integrated and no special configuration is needed.
The default Consensus Layer utilized is Caplin, the Erigon flagship embedded CL.
Basic Configuration
- The default data directory is /home/usr/.local/share/erigon. If you want to store Erigon files in a non-default location, add the flag --datadir=<your_data_dir>
- Based on the type of node you want to run, add --prune.mode=archive for an archive node or --prune.mode=minimal for a minimal node. The default is a full node.
- The default chain is --chain=mainnet; add the flag --chain=sepolia for the Sepolia testnet or --chain=holesky for the Holesky testnet.
- Add --http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool to use RPC and, for example, be able to connect your wallet.
- To increase download speed, add --torrent.download.rate=512mb (the default is 16mb).
To stop the Erigon node, press CTRL+C.
Additional flags can be added to configure the node with several options.
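Putting the flags above together, a hypothetical invocation for a minimal node on Holesky with RPC enabled might look like this (the datadir path is an example):

```shell
./build/bin/erigon \
  --datadir=/data/erigon \
  --chain=holesky \
  --prune.mode=minimal \
  --http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool \
  --torrent.download.rate=512mb
```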
Testnets
If you would like to give Erigon a try but do not have a spare 2TB on your drive, a good option is to sync one of the public testnets, such as Holesky, by adding the option --chain=holesky and using the default Consensus Layer, Caplin. You can also add the flag --prune.mode=minimal to have a node that syncs fast while taking up little disk space:

./build/bin/erigon --chain=holesky --prune.mode=minimal
Help
To learn about the available commands, open your terminal in your Erigon 3 installation directory and run:
make help
This command will display a list of convenience commands available in the Makefile, along with their descriptions.
go-version: print and verify go version
validate_docker_build_args: ensure docker build args are valid
docker: validate, update submodules and build with docker
setup_xdg_data_home: TODO
docker-compose: validate build args, setup xdg data home, and run docker-compose up
dbg: debug build, allows seeing C stack traces; run it with GOTRACEBACK=crash. You don't need a debug build for C pit profiling. To profile C code use SETCGOTRCKEBACK=1
erigon: build erigon
all: run erigon with all commands
db-tools: build db tools
test: run unit tests with a 100s timeout
test-integration: run integration tests with a 30m timeout
lint-deps: install lint dependencies
lintci: run golangci-lint linters
lint: run all linters
clean: cleans the go cache, build dir, libmdbx db dir
devtools: installs dev tools (and checks for npm installation etc.)
mocks: generate test mocks
mocks-clean: cleans all generated test mocks
solc: generate all solidity contracts
abigen: generate abis using abigen
gencodec: generate marshalling code using gencodec
graphql: generate graphql code
gen: generate all auto-generated code in the codebase
bindings: generate test contracts and core contracts
prometheus: run prometheus and grafana with docker-compose
escape: run escape path={path} to check for memory leaks e.g. run escape path=cmd/erigon
git-submodules: update git submodules
install: copies binaries and libraries to DIST
user_linux: create "erigon" user (Linux)
user_macos: create "erigon" user (MacOS)
hive: run hive test suite locally using docker e.g. OUTPUT_DIR=~/results/hive SIM=ethereum/engine make hive
automated-tests: run automated tests (BUILD_ERIGON=0 to prevent erigon build with local image tag)
help: print commands help
For example, from your Erigon 3 installation directory, run:
make clean
This will execute the clean target in the Makefile, which cleans the go cache, build directory, and libmdbx db directory.
Type of Node
Erigon 3 introduces a flexible approach to node configuration, offering three distinct types to suit various user needs. Depending on your need, you can choose from three different node types.
| Usage | Minimal Node | Full Node | Archive Node |
|---|---|---|---|
| Privacy, RPC | Yes | Yes | Yes |
| Contribute to network | No | Yes | Yes |
| Research | No | No | Yes |
| Staking | Yes | Yes | Yes |
Minimal node
Erigon 3 implements support for EIP-4444 through its Minimal Node configuration, enabled by the flag --prune.mode=minimal. For example:

./build/bin/erigon --prune.mode=minimal

A minimal node is suitable for users with constrained hardware who want more privacy in their interaction with the EVM, for example when sending transactions with their node. A minimal node is also suitable for staking.
Full node
Erigon 3 is a full node by default (--prune.mode=full). This configuration delivers faster sync times and reduced resource consumption for everyday operation, maintaining essential data while reducing storage requirements. We recommend running a full node whenever possible, as it supports the network's decentralization, resilience, and robustness, aligning with Ethereum's trustless and distributed ethos. Given the reduced disk space requirements of Erigon 3, the full node configuration is suitable for the majority of users.
Archive node
Ethereum's state refers to account balances, contracts, and consensus data. Archive nodes store every historical state, making it easier to access past data, but requiring more disk space. They provide comprehensive historical data, making them optimal for conducting extensive research on the chain, ranging from searching for old states of the EVM to implementing advanced block explorers, such as Otterscan, and undertaking development activities.
Erigon 3 has consistently reduced the disk space required to run an archive node, making it more affordable and accessible to a broader range of users. To run an archive node, use the flag --prune.mode=archive.
Information
In order to switch the type of node, you must first delete the /chaindata folder in the chosen --datadir directory.
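For example, switching an existing node to archive mode might look like this (the datadir path is illustrative):

```shell
# Remove chaindata first, then restart with the new prune mode:
rm -rf /data/erigon/chaindata
./build/bin/erigon --datadir=/data/erigon --prune.mode=archive
```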
Disk Space Required
How much space your Erigon node will take
Mainnets
Erigon with Caplin
| Network | Archive Node | Full Node | Minimal Node |
|---|---|---|---|
| Ethereum | 1.7 TB | 900 GB | 310 GB |
| Gnosis | 535 GB | 365 GB | 210 GB |
| Polygon | 4.3 TB | 2 TB | 873 GB |
See also sync times.
Erigon with an external Consensus Layer client
(Values obtained with Lighthouse)
| Network | Archive Node | Full Node | Minimal Node |
|---|---|---|---|
| Ethereum | --- TB | --- GB | --- GB |
| Gnosis | --- GB | --- GB | --- GB |
| Polygon | --- TB | --- TB | --- GB |
| Chiado | ... GB | ... GB | ... GB |
| Amoy | ... GB | ... GB | ... GB |
Testnets
Erigon with Caplin
| Network | Archive Node | Full Node | Minimal Node |
|---|---|---|---|
| Holesky | 170 GB | 110 GB | 53 GB |
| Sepolia | 186 GB | 116 GB | 63 GB |
| Chiado | 25 GB | 17 GB | 12 GB |
See also hints on optimizing storage.
Optimizing Storage
Using fast disks and cheap disks
For optimal performance, it's recommended to store the datadir on a fast NVMe-RAID disk. However, if this is not feasible, you can store the history on a cheaper disk and still achieve good performance.
Step 1: Store datadir on the slow disk
Place the datadir on the slower disk, then create symbolic links (using ln -s) to the fast disk for the following sub-folders:

- chaindata
- snapshots/domain

This will speed up the execution of E3.

On the slow disk, the datadir folder will have the following structure:

- chaindata (linked to fast disk)
- snapshots
  - domain (linked to fast disk)
  - history
  - idx
  - accessor
- temp
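The layout above can be sketched with plain coreutils; FAST and SLOW stand in for your real mount points, and default here to temp directories so the commands are safe to try anywhere:

```shell
FAST="${FAST:-$(mktemp -d)}"   # stand-in for the fast NVMe disk
SLOW="${SLOW:-$(mktemp -d)}"   # stand-in for the slow, cheap disk
# Hot folders live on the fast disk:
mkdir -p "$FAST/chaindata" "$FAST/snapshots/domain"
# The datadir itself lives on the slow disk:
mkdir -p "$SLOW/datadir/snapshots"
# Link the hot folders into the datadir:
ln -s "$FAST/chaindata"        "$SLOW/datadir/chaindata"
ln -s "$FAST/snapshots/domain" "$SLOW/datadir/snapshots/domain"
ls -l "$SLOW/datadir"
```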
Step 2: Speed Up History Access (Optional)
If you need to further improve performance, try the following improvements step by step:

1. Store the snapshots/accessor folder on the fast disk. This should provide a noticeable speed boost.
2. If the speed is still not satisfactory, move the snapshots/idx folder to the fast disk.
3. If performance is still an issue, consider moving the entire snapshots/history folder to the fast disk.
By following these steps, you can optimize your Erigon 3 storage setup to achieve a good balance between performance and cost.
Supported Networks
The default flag is --chain=mainnet, which enables Erigon 3 to operate on the Ethereum mainnet.

Utilize the flag --chain=<tag> to synchronize with one of the supported networks. For example, to synchronize Holesky, one of the Ethereum testnets, use:

./build/bin/erigon --chain=holesky
Mainnets
| Chain | Tag | ChainId |
|---|---|---|
| Ethereum | mainnet | 1 |
| Polygon | bor-mainnet | 137 |
| Gnosis | gnosis | 100 |
Testnets
Ethereum testnets
| Chain | Tag | ChainId |
|---|---|---|
| Holesky | holesky | 17000 |
| Sepolia | sepolia | 11155111 |
Polygon testnets
| Chain | Tag | ChainId |
|---|---|---|
| Amoy | amoy | 80002 |
Gnosis Chain Testnets
| Chain | Tag | ChainId |
|---|---|---|
| Chiado | chiado | 10200 |
Default Ports and Firewalls
To see the ports used by Erigon and its components, please refer to https://github.com/erigontech/erigon?tab=readme-ov-file#default-ports-and-firewalls.
To ensure proper P2P functionality for both the Execution and Consensus layers use a minimal configuration without exposing unnecessary services:
- Avoid exposing other ports unless necessary for specific use cases (e.g., JSON-RPC or WebSocket);
- Regularly audit your firewall rules to ensure they are aligned with your infrastructure needs;
- Use monitoring tools like Prometheus or Grafana to track P2P communication metrics.
Command-Line Switches for Network and Port Configuration
Here is an extensive list of port-related options from the options list:
Engine
- --private.api.addr [value]: Erigon's internal gRPC API; an empty string means the listener is not started (default: 127.0.0.1:9090)
- --txpool.api.addr [value]: TxPool api network address (default: use value of --private.api.addr)
- --torrent.port [value]: Port to listen and serve BitTorrent protocol (default: 42069)
- --authrpc.port [value]: HTTP-RPC server listening port for the Engine API (default: 8551)
Sentry
- --port [value]: Network listening port (default: 30303)
- --p2p.allowed-ports [value]: Allowed ports to pick for different eth p2p protocol versions (default: 30303, 30304, 30305, 30306, 30307)
- --sentry.api.addr [value]: Comma separated sentry addresses <host>:<port>,<host>:<port> (default: 127.0.0.1:9091)
RPCdaemon
- --ws.port [value]: WS-RPC server listening port (default: 8546)
- --http.port [value]: HTTP-RPC server listening port (default: 8545)
)
Caplin
- --caplin.discovery.port [value]: Port for Caplin DISCV5 protocol (default: 4000)
- --caplin.discovery.tcpport [value]: TCP Port for Caplin DISCV5 protocol (default: 4001)
)
BeaconAPI
- --beacon.api.port [value]: Sets the port to listen for beacon api requests (default: 5555)
)
Diagnostics
- `--diagnostics.endpoint.port [value]`: Diagnostics HTTP server listening port (default: `6062`)
Shared ports
- `--pprof.port [value]`: pprof HTTP server listening port (default: `6060`)
- `--metrics.port [value]`: Metrics HTTP server listening port (default: `6061`)
- `--downloader.api.addr [value]`: Downloader address `<host>:<port>`
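Combining the flags above, a node with its listening ports moved off their defaults might be launched like this (a sketch; the port values are arbitrary examples, not recommendations):

```bash
./build/bin/erigon \
  --port=30403 \
  --torrent.port=42169 \
  --http.port=8645 \
  --ws --ws.port=8646 \
  --authrpc.port=8651 \
  --metrics --metrics.port=6161
```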
Web3 Wallet
How to configure your web3 wallet to use your Erigon node RPC
Whatever network you are running, it's easy to connect your Erigon node to your local web3 wallet.
For Erigon to provide access to wallet functionalities, you need to enable the RPC server by adding the flags:
--http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool
For example:
/build/bin/erigon --http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool
Metamask
To configure your local Metamask wallet (browser extension):
- Click on the network selector button. This will display a list of networks to which you're already connected.
- Click Add network.
- A new browser tab will open, displaying various fields to fill out. Complete the fields with the proper information, in this example for the Ethereum network:
  - Network Name: `Ethereum on E3` (or any name of your choice)
  - Chain ID: `1` (for the chain ID parameter see Supported Networks)
  - New RPC URL: `http://127.0.0.1:8545`
  - Currency Symbol: `ETH`
  - Block Explorer URL: `https://www.etherscan.io` (or any explorer of your choice)
After performing the above steps, you will be able to see the custom network the next time you access the network selector.
Quick nodes
These guides are recommended if you want to test Erigon and have your node up and running without reading all the documentation.
How to run an Ethereum node
Follow the hardware and software prerequisites.
Check which type of node you might want to run and the disk space required.
Information
Do not use HDD: Hard Disk Drives (HDD) are not recommended for running Erigon, as it may cause the node to stay N blocks behind the chain tip and lead to performance issues.
Use SSD or NVMe: Solid State Drives (SSD) or Non-Volatile Memory Express (NVMe) drives are recommended for optimal performance. These storage devices provide faster read/write speeds and can handle the demanding requirements of an Erigon node.
Install Erigon
For MacOS and Linux, run the following commands to build the latest Erigon version from source:
git clone --branch v3.0.0-beta1 --single-branch https://github.com/erigontech/erigon.git
cd erigon
make erigon
This should create the binary at ./build/bin/erigon
Start Erigon
If you want to be able to send transactions with your wallet, access the Ethereum network directly, and contribute to network decentralization, it is advised to run Erigon with Caplin, the internal Consensus Layer (CL).
Alternatively, you can run Prysm, Lighthouse, or any other Consensus Layer client alongside Erigon by adding the --externalcl flag. This will also allow you to access the Ethereum blockchain directly and give you the possibility to stake your ETH and do block production.
Erigon with Caplin
The basic command to run Erigon with Caplin on Ethereum mainnet is:
./build/bin/erigon
Erigon with Prysm as the external consensus layer
1. Start Erigon with the `--externalcl` flag:
./build/bin/erigon --externalcl
2. Install and run Prysm by following the official guide: https://docs.prylabs.network/docs/install/install-with-script. Prysm must fully synchronize before Erigon can start syncing, since Erigon requires an existing target head to sync to. The quickest way to get Prysm synced is to use a public checkpoint synchronization endpoint from the list at https://eth-clients.github.io/checkpoint-sync-endpoints.
3. To communicate with Erigon, the execution endpoint must be specified as `<erigon address>:8551`, where `<erigon address>` is either `localhost` or the IP address of the device running Erigon.
4. Prysm must point to the JWT secret automatically created by Erigon in the datadir directory. In the following example the default data directory is used:
./prysm.sh beacon-chain --execution-endpoint=http://localhost:8551 --mainnet --jwt-secret=/home/usr/.local/share/erigon/jwt.hex --checkpoint-sync-url=https://beaconstate.info --genesis-beacon-api-url=https://beaconstate.info
If your Prysm is on a different device, add `--authrpc.addr 0.0.0.0` (the Engine API listens on localhost by default) as well as `--authrpc.vhosts <CL host>` to your Erigon configuration.
Erigon with Lighthouse as the external consensus layer
1. Start Erigon:
./build/bin/erigon --externalcl
2. Install and run Lighthouse by following the official guide: https://lighthouse-book.sigmaprime.io/installation.html
3. Because Erigon needs a target head in order to sync, Lighthouse must be synced before Erigon may synchronize. The fastest way to synchronize Lighthouse is to use one of the many public checkpoint synchronization endpoints at https://eth-clients.github.io/checkpoint-sync-endpoints.
4. To communicate with Erigon, the execution endpoint must be specified as `<erigon address>:8551`, where `<erigon address>` is either `localhost` or the IP address of the device running Erigon.
5. Lighthouse must point to the JWT secret automatically created by Erigon in the datadir directory. In the following example the default data directory is used:

```bash
lighthouse bn \
  --network mainnet \
  --execution-endpoint http://localhost:8551 \
  --execution-jwt /home/admin/.local/share/erigon/jwt.hex \
  --checkpoint-sync-url https://mainnet.checkpoint.sigp.io
```
Basic Configuration
- If you want to store Erigon files in a non-default location, add the flag `--datadir=<your_data_dir>`. The default data directory is `/home/usr/.local/share/erigon`.
- Erigon is a full node by default; use `--prune.mode=archive` to run an archive node or `--prune.mode=minimal` (EIP-4444). If you want to change the type of node, delete the content of the `--datadir` folder and restart Erigon with the appropriate flags.
- The default chain is `--chain=mainnet` for Ethereum mainnet:
  - add the flag `--chain=holesky` for the Holesky testnet, or `--chain=sepolia` for the Sepolia testnet.
- Add the flags `--http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool` to use RPC and, for example, be able to connect your wallet.
- To increase download speed add `--torrent.download.rate=512mb` (default is `16mb`).
- To stop the Erigon node, press `CTRL+C`.
Additional flags can be added to configure Erigon with several options.
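Putting a few of these options together, a Sepolia archive node with a wallet-ready RPC endpoint and faster snapshot download could be started as follows (a sketch; the datadir path is illustrative):

```bash
./build/bin/erigon \
  --chain=sepolia \
  --prune.mode=archive \
  --datadir=/data/erigon-sepolia \
  --http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool \
  --torrent.download.rate=512mb
```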
How to run a Gnosis Chain node
Follow the hardware and software prerequisites.
Check which type of node you might want to run and the disk space required.
Information
Do not use HDD: Hard Disk Drives (HDD) are not recommended for running Erigon, as it may cause the node to stay N blocks behind the chain tip and lead to performance issues.
Use SSD or NVMe: Solid State Drives (SSD) or Non-Volatile Memory Express (NVMe) drives are recommended for optimal performance. These storage devices provide faster read/write speeds and can handle the demanding requirements of an Erigon node.
Install Erigon
For MacOS and Linux, run the following commands to build the latest Erigon version from source:
git clone --branch v3.0.0-beta1 --single-branch https://github.com/erigontech/erigon.git
cd erigon
make erigon
This should create the binary at ./build/bin/erigon
Start Erigon
If you want to be able to send transactions with your wallet, access the Gnosis Chain network directly, and contribute to network decentralization, it is advised to run Erigon with Caplin, the internal Consensus Layer (CL).
Alternatively, you can run Prysm, Lighthouse, or any other Consensus Layer client alongside Erigon by adding the --externalcl flag. This will also allow you to access the Gnosis Chain blockchain directly and give you the possibility to stake and do block production.
Erigon with Caplin
The basic command to run Erigon with Caplin on Gnosis Chain is:
./build/bin/erigon --chain=gnosis
Erigon with Lighthouse
1. Start Erigon:
./build/bin/erigon --chain=gnosis --externalcl
2. Install Lighthouse, another popular client that can be used with Erigon for block building. Follow the official instructions up to the chapter Build Lighthouse, skipping the `make` step: https://lighthouse-book.sigmaprime.io/installation.html
3. Now compile Lighthouse for Gnosis Chain using the feature flags:
cd lighthouse
env FEATURES=gnosis make
4. Because Erigon needs a target head in order to sync, Lighthouse must be synced before Erigon may synchronize. The fastest way to synchronize Lighthouse is to use one of the public checkpoint synchronization endpoints:
  - `https://checkpoint.gnosischain.com` for Gnosis Chain
  - `https://checkpoint.chiadochain.net` for the Chiado testnet
5. To communicate with Erigon, the execution endpoint must be specified as `<erigon address>:8551`, where `<erigon address>` is either `localhost` or the IP address of the device running Erigon.
6. Lighthouse must point to the JWT secret automatically created by Erigon in the datadir directory. In the following example the default data directory is used.
Below is an example of Lighthouse running Gnosis Chain:
```bash
lighthouse \
--network gnosis beacon_node \
--datadir=data \
--http \
--execution-endpoint http://localhost:8551 \
--execution-jwt /home/usr/.local/share/erigon/jwt.hex \
--checkpoint-sync-url "https://checkpoint.gnosischain.com"
```
And an example of Lighthouse running Chiado testnet:
```bash
lighthouse \
--network chiado beacon_node \
--datadir=data \
--http \
--execution-endpoint http://localhost:8551 \
--execution-jwt /home/usr/.local/share/erigon/jwt.hex \
--checkpoint-sync-url "https://checkpoint.chiadochain.net"
```
Basic Configuration
- If you want to store Erigon files in a non-default location, add the flag `--datadir=<your_data_dir>`. The default data directory is `/home/usr/.local/share/erigon`.
- Erigon is a full node by default; use `--prune.mode=archive` to run an archive node or `--prune.mode=minimal` (EIP-4444). If you want to change the type of node, delete the content of the `--datadir` folder and restart Erigon with the appropriate flags.
- Add the flag `--chain=chiado` for the Chiado testnet.
- Add the flags `--http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool` to use RPC and, for example, be able to connect your wallet.
- To increase download speed add `--torrent.download.rate=512mb` (default is `16mb`).
- To stop the Erigon node, press `CTRL+C`.
Additional flags can be added to configure Erigon with several options.
How to run a Polygon node
Follow the hardware and software prerequisites.
Check which type of node you might want to run and the disk space required.
Information
Do not use HDD: Hard Disk Drives (HDD) are not recommended for running Erigon, as it may cause the node to stay N blocks behind the chain tip and lead to performance issues.
Use SSD or NVMe: Solid State Drives (SSD) or Non-Volatile Memory Express (NVMe) drives are recommended for optimal performance. These storage devices provide faster read/write speeds and can handle the demanding requirements of an Erigon node.
Install Erigon
For MacOS and Linux, run the following commands to build the latest Erigon version from source:
git clone --branch v3.0.0-beta1 --single-branch https://github.com/erigontech/erigon.git
cd erigon
make erigon
This should create the binary at ./build/bin/erigon
Start Erigon
To start an Erigon full node for Polygon mainnet with a remote Heimdall:
./build/bin/erigon --chain=bor-mainnet --bor.heimdall=https://heimdall-api.polygon.technology
For an Amoy testnet archive node with a remote Heimdall:
./build/bin/erigon --chain=amoy --bor.heimdall=https://heimdall-api-amoy.polygon.technology
Basic Configuration
- If you want to store Erigon files in a non-default location, add the flag `--datadir=<your_data_dir>`. The default data directory is `/home/usr/.local/share/erigon`.
- Erigon is a full node by default; use `--prune.mode=archive` to run an archive node or `--prune.mode=minimal` (EIP-4444). If you want to change the type of node, delete the content of the `--datadir` folder and restart Erigon with the appropriate flags.
- Add the flags `--http.addr="0.0.0.0" --http.api=eth,web3,net,debug,trace,txpool` to use RPC and, for example, be able to connect your wallet.
- To increase download speed add `--torrent.download.rate=512mb` (default is `16mb`).
- To stop the Erigon node, press `CTRL+C`.
Additional flags can be added to configure Erigon with several options.
Advanced Usage
Erigon is by default an "all-in-one" binary solution, but it is possible to start any internal component as a separate process:
- RPCDaemon, the JSON RPC layer
- TxPool, the transaction pool
- Sentry, the p2p layer
- Downloader, the history download layer
- Caplin, the novel Consensus Layer
This may be for security, scalability, decentralisation, resource limitation, custom implementation, or any other reason you or your team deems appropriate. See the appropriate section to understand how to start each service separately.
Don't start services as separate processes unless you have a clear reason to.
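For instance, the RPC daemon can attach to a running Erigon instance over its internal gRPC API instead of being served from the main process (a sketch; flag values are illustrative, see the RPCDaemon section for the authoritative options):

```bash
# Remote mode: the daemon connects to Erigon's internal gRPC API
# (--private.api.addr on the Erigon side) instead of opening the database itself.
./build/bin/rpcdaemon \
  --private.api.addr=localhost:9090 \
  --http.addr=0.0.0.0 --http.port=8545 \
  --http.api=eth,erigon,web3,net,debug,trace,txpool
```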
Configuring Erigon
The Erigon 3 CLI has a wide range of flags that can be used to customize its behavior. Here's a breakdown of some of the flags, see Options for the full list:
Data Storage
- `--datadir`: Set the data directory for the databases (default: `/home/usr/.local/share/erigon`)
- `--ethash.dagdir`: Set the directory to store the ethash mining DAGs (default: `/home/usr/.local/share/erigon-ethash`)
- `--database.verbosity`: Enable internal db logs. Very high verbosity levels may require recompiling the db (default: `2`, meaning warning)
Logging
- `--log.json`: Format console logs with JSON (default: `false`)
- `--log.console.json`: Format console logs with JSON (default: `false`)
- `--log.dir.json`: Format file logs with JSON (default: `false`)
- `--verbosity`: Set the log level for console logs (default: `info`)
- `--log.console.verbosity`: Set the log level for console logs (default: `info`)
- `--log.dir.disable`: Disable disk logging (default: `false`)
- `--log.dir.path`: Set the path to store user and error logs to disk
- `--log.dir.prefix`: Set the file name prefix for logs stored to disk
- `--log.dir.verbosity`: Set the log verbosity for logs stored to disk (default: `info`)
- `--log.delays`: Enable block delay logging (default: `false`)
Pruning Presets
- `--prune.mode`: Choose a pruning preset: `archive`, `full`, or `minimal` (default: `full`); see also Type of node
- `--prune.distance`: Keep state history for the latest N blocks (default: `0`, keep everything)
- `--prune.distance.blocks`: Keep block history for the latest N blocks (default: `0`, keep everything)
Performance Optimization
- `--batchSize`: Set the batch size for the execution stage (default: `512M`)
- `--bodies.cache`: Limit the cache for block bodies (default: `268435456`)
- `--private.api.addr`: Set the internal gRPC API address (default: `127.0.0.1:9090`)
- `--private.api.ratelimit`: Set the number of requests the server can handle simultaneously (default: `31872`)
Txpool
- `--txpool.api.addr`: Set the TxPool API network address (default: use value of `--private.api.addr`)
- `--txpool.disable`: Experimental external pool and block producer, see `./cmd/txpool/readme.md` for more info. Disables the internal txpool and block producer (default: `false`)
- `--txpool.pricebump`: Price bump percentage to replace an already existing transaction (default: `10`)
- `--txpool.pricelimit`: Minimum gas price (fee cap) limit to enforce for acceptance into the pool (default: `1`)
- `--txpool.locals`: Comma separated accounts to treat as locals (no flush, priority inclusion)
- `--txpool.nolocals`: Disable price exemptions for locally submitted transactions (default: `false`)
- `--txpool.accountslots`: Set the minimum number of executable transaction slots guaranteed per account (default: `16`)
- `--txpool.blobslots`: Set the max allowed total number of blobs (within type-3 txs) per account (default: `48`)
- `--txpool.blobpricebump`: Price bump percentage to replace an existing (type-3) blob transaction (default: `100`)
- `--txpool.totalblobpoollimit`: Set the total limit on the number of all blobs in txs within the txpool (default: `480`)
- `--txpool.globalslots`: Set the maximum number of executable transaction slots for all accounts (default: `10000`)
- `--txpool.globalbasefeeslots`: Set the maximum number of non-executable transactions whose only problem is an insufficient baseFee (default: `30000`)
- `--txpool.accountqueue`: Set the maximum number of non-executable transaction slots permitted per account (default: `64`)
- `--txpool.globalqueue`: Set the maximum number of non-executable transaction slots for all accounts (default: `30000`)
- `--txpool.lifetime`: Set the maximum amount of time non-executable transactions are queued (default: `3h0m0s`)
- `--txpool.trace.senders`: Set the comma-separated list of addresses whose transactions will be traced in the transaction pool with debug printing
- `--txpool.commit.every`: Set how often transactions should be committed to storage (default: `15s`)
Remote Procedure Call (RPC)
- `--rpc.accessList`: Specify a granular (method-by-method) API allowlist
- `--rpc.allow-unprotected-txs`: Allow unprotected (non-EIP155 signed) transactions to be submitted via RPC (default: `false`)
- `--rpc.batch.concurrency`: Limit the number of goroutines used to process one batch request (default: `2`)
- `--rpc.streaming.disable`: Disable JSON streaming for some heavy endpoints
- `--rpc.gascap`: Set a cap on the gas that can be used in eth_call/estimateGas (default: `50000000`)
- `--rpc.batch.limit`: Set the maximum number of requests in a batch (default: `100`)
- `--rpc.returndata.limit`: Set the maximum number of bytes returned from eth_call or similar invocations (default: `100000`)
- `--rpc.maxgetproofrewindblockcount.limit`: Set the max GetProof rewind block count (default: `100000`)
- `--rpc.txfeecap`: Set a cap on the transaction fee (in ether) that can be sent via the RPC APIs (`0` = no cap) (default: `1`)
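To see what `--rpc.batch.limit` governs, here is a two-request batch; with the default limit of 100 it is accepted, while larger batches are rejected (this assumes a local node with the HTTP server enabled):

```bash
# A JSON-RPC batch is just an array of request objects.
BATCH='[{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]},
        {"jsonrpc":"2.0","id":2,"method":"net_version","params":[]}]'
curl -s -X POST -H 'Content-Type: application/json' \
     --data "$BATCH" http://127.0.0.1:8545 \
  || echo "node not reachable"
```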
Network and Peers
- `--chain`: Set the name of the network to join (default: `mainnet`)
- `--dev.period`: Set the block period to use in developer mode (0 = mine only if a transaction is pending) (default: `0`)
- `--maxpeers`: Set the maximum number of network peers (network disabled if set to 0) (default: `100`)
- `--nodiscover`: Disable the peer discovery mechanism (manual peer addition) (default: `false`)
- `--netrestrict`: Restrict network communication to the given IP networks (CIDR masks)
- `--trustedpeers`: Set the comma-separated enode URLs which are always allowed to connect, even above the peer limit
Miscellaneous
- `--externalcl`: Enable an external consensus layer (default: `false`)
- `--override.prague`: Manually specify the Prague fork time, overriding the bundled setting (default: `0`)
- `--pprof`: Enable the pprof HTTP server (default: `false`)
- `--metrics`: Enable metrics collection and reporting (default: `false`)
- `--diagnostics`: Disable diagnostics (default: `false`)
- `--config`: Set Erigon flags from a YAML/TOML file
- `--help`: Show help
- `--version`: Print the version
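The `--config` flag accepts the same options as a YAML (or TOML) file, with the flag names as keys. A sketch (values illustrative):

```bash
# Write a config file whose keys mirror the CLI flag names,
# then point Erigon at it with --config.
cat > erigon.yaml <<'EOF'
chain: "sepolia"
prune.mode: "minimal"
http.addr: "0.0.0.0"
http.api: "eth,web3,net,debug,trace,txpool"
torrent.download.rate: "512mb"
EOF
# ./build/bin/erigon --config ./erigon.yaml
```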
Consensus Layer
The Consensus Layer is a critical component of a decentralized network, responsible for reaching agreement on the state of the network. In the context of blockchain technology, the Consensus Layer is the layer that ensures the security and integrity of the blockchain by validating transactions and blocks.
Historically, an Execution Layer (EL) client alone was enough to run a full Ethereum node. However, as Ethereum has moved from proof-of-work (PoW) to proof-of-stake (PoS) based consensus with "The Merge", a Consensus Layer (CL) client needs to run alongside the EL to run a full Ethereum node, a Gnosis Chain node or a Polygon node.
The execution client listens to new transactions, executes them in the Ethereum Virtual Machine (EVM), and holds the latest state and database of all current Ethereum data.
The consensus client, also known as the Beacon Node or CL client, implements the Proof-of-Stake consensus algorithm, which enables the network to achieve agreement based on validated data from the execution client. Both clients work together to keep track of the head of the Ethereum chain and allow users to interact with the Ethereum network.
Information
By default, Erigon is configured to run with Caplin, the embedded Consensus Layer.
Choosing the Consensus Layer client
A Consensus Layer (CL) client needs to run alongside Erigon to run a full Ethereum node, a Gnosis Chain node or a Polygon node, including their respective testnets. Without a CL client, the EL will never get in sync. See below which Beacon node you can run along with Erigon for each chain.
Caplin
Caplin, the novel embedded Consensus Layer, brings unparalleled performance, efficiency, and reliability to Ethereum infrastructure. Its innovative design minimizes disk usage, enabling faster transaction processing and a more secure network.
By integrating the consensus layer into the EVM-node, Caplin eliminates the need for separate disk storage, reducing overall system complexity and improving overall efficiency. OtterSync, a new syncing algorithm, further enhances performance by shifting 98% of the computation to network bandwidth, reducing sync times and improving chain tip performance, disk footprint, and decentralization.
Caplin Usage
Caplin is enabled by default, so an external consensus layer is not needed.
./build/bin/erigon
Caplin also has an archive mode for historical states, blocks, and blobs. These can be enabled with the following flags:
- `--caplin.states-archive`: Enable the storage and retrieval of historical state data, allowing access to past states of the blockchain for debugging, analytics, or other use cases.
- `--caplin.blocks-archive`: Enable the storage of historical block data, making it possible to query or analyze full block history.
- `--caplin.blobs-archive`: Enable the storage of historical blobs, ensuring access to additional off-chain data that might be required for specific applications.
In addition, Caplin can backfill recent blobs for an op-node or other uses with the new flag:
- `--caplin.blobs-immediate-backfill`: Backfill the last 18 days' worth of blobs to quickly populate historical blob data for operational needs or analytics.
Caplin can also be used for block production, aka staking.
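Combining the flags above, a Caplin node keeping full historical states, blocks, and blobs, plus the recent-blob backfill, might be started like this (the flag combination is illustrative):

```bash
./build/bin/erigon \
  --caplin.states-archive \
  --caplin.blocks-archive \
  --caplin.blobs-archive \
  --caplin.blobs-immediate-backfill
```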
Prysm
Prysm is a popular client that, combined with Erigon, can be used for staking. The steps to run Erigon with Prysm are as follows:
1. Start Erigon with the `--externalcl` flag to allow an external Consensus Layer:
./build/bin/erigon --externalcl
2. Install Prysm by following the official instructions.
3. Prysm must fully synchronize before Erigon can start syncing, since Erigon requires an existing target head to sync to. The quickest way to get Prysm synced is to use a public checkpoint synchronization endpoint from the list at https://eth-clients.github.io/checkpoint-sync-endpoints.
4. In order to communicate with Erigon, the execution endpoint `<erigon address>:8551` must be specified, where `<erigon address>` is either `localhost` or the IP address of the device running Erigon. Prysm must point to the JWT secret automatically created by Erigon in the datadir directory (in the below example the default data directory is used):
./prysm.sh beacon-chain --execution-endpoint=http://localhost:8551 --mainnet --jwt-secret=/home/usr/.local/share/erigon/jwt.hex --checkpoint-sync-url=https://beaconstate.info --genesis-beacon-api-url=https://beaconstate.info
If your Prysm is on a different device, add `--authrpc.addr 0.0.0.0` (the Engine API listens on localhost by default) as well as `--authrpc.vhosts <CL host>` to your Erigon configuration.
Lighthouse
Lighthouse is another popular client that, combined with Erigon, can be used for block building. The steps to run Erigon with Lighthouse are as follows:
1. Start Erigon with the `--externalcl` flag to allow an external Consensus Layer:
./build/bin/erigon --externalcl
2. Install Lighthouse by following the official instructions.
3. Lighthouse must fully synchronize before Erigon can start syncing, since Erigon requires an existing target head to sync to. The quickest way to get Lighthouse synced is to use a public checkpoint synchronization endpoint from the list at https://eth-clients.github.io/checkpoint-sync-endpoints.
4. In order to communicate with Erigon, the execution endpoint `<erigon address>:8551` must be specified, where `<erigon address>` is either `localhost` or the IP address of the device running Erigon. Lighthouse must point to the JWT secret automatically created by Erigon in the datadir directory (in the below example the default data directory is used):
```bash
lighthouse bn \
  --network mainnet \
  --execution-endpoint http://localhost:8551 \
  --execution-jwt /home/usr/.local/share/erigon/jwt.hex \
  --checkpoint-sync-url https://mainnet.checkpoint.sigp.io
```
If your Lighthouse is on a different device, add `--authrpc.addr 0.0.0.0` (the Engine API listens on localhost by default) as well as `--authrpc.vhosts <CL host>` to your Erigon configuration.
JWT secret
The JWT secret is a key that allows Ethereum entities to securely validate JWTs used for authentication, authorization, and transmitting information, like a passphrase that allows Ethereum nodes/servers to verify if requests are legitimate. It should be protected and not exposed publicly.
JWT stands for JSON Web Token, and it is a way to securely transmit information between parties as a JSON object. The JWT contains a header, a payload, and a signature, generated by signing the header and payload with a secret.
In Ethereum, JWTs can be used to validate transactions or API calls. The Ethereum node or API server would have the JWT secret stored locally. When a JWT is received, the node/server uses the same secret to generate a signature from the header and payload.
If the newly generated signature matches the one in the JWT, it proves the JWT is valid and comes from an authorized source in possession of the secret. Different nodes/servers would have different secrets allowing them to verify the JWTs intended for them.
More information here: https://github.com/ethereum/execution-apis/blob/main/src/engine/authentication.md
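The verification described above can be sketched with `openssl` (the secret and payload here are made-up examples; the Engine API uses HMAC-SHA256 over the base64url-encoded header and payload, as described in the linked spec):

```bash
# Compute an HS256 signature over header.payload and assemble the token.
SECRET="736563726574"                    # hex-encoded shared secret, like jwt.hex
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"iat":1700000000}' | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" \
      | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$SECRET" -binary | b64url)
JWT="$HEADER.$PAYLOAD.$SIG"
echo "$JWT"
# A receiver holding the same secret recomputes SIG and compares; a match
# proves the token came from a party in possession of the secret.
```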
Erigon JWT secret
Erigon automatically creates a JWT secret upon launch.
By default, the JWT secret key is located in the datadir as `jwt.hex`, and its path can be specified with the `--authrpc.jwtsecret` flag.
Both Erigon and any external Consensus Layer need to point to the same JWT secret file.
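For example (paths illustrative), an explicit shared path removes any ambiguity about which file each side reads:

```bash
# Erigon writes/reads the secret at the given path...
./build/bin/erigon --externalcl --datadir=/data/erigon \
  --authrpc.jwtsecret=/data/erigon/jwt.hex
# ...and the consensus layer (Lighthouse shown) must read the same file.
lighthouse bn --network mainnet --execution-endpoint http://localhost:8551 \
  --execution-jwt /data/erigon/jwt.hex
```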
Options
All available options
Erigon is primarily controlled using the command line. It is started with the `./build/bin/erigon` command and stopped by pressing `CTRL-C`.
The command-line options allow for configuration, and several functionalities can be invoked using subcommands.
The `--help` flag listing is reproduced below for your convenience.
./build/bin/erigon --help
Commands
NAME:
erigon - erigon
USAGE:
erigon [command] [flags]
VERSION:
3.00.0-beta1-0b94461f
COMMANDS:
init Bootstrap and initialize a new genesis block
import Import a blockchain file
seg, snapshots, segments Managing historical data segments (partitions)
support Connect Erigon instance to a diagnostics system for support
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--datadir value Data directory for the databases (default: /home/bloxster/.local/share/erigon)
--ethash.dagdir value Directory to store the ethash mining DAGs (default: /home/bloxster/.local/share/erigon-ethash)
--externalcl Enables the external consensus layer (default: false)
--txpool.disable Experimental external pool and block producer, see ./cmd/txpool/readme.md for more info. Disabling internal txpool and block producer. (default: false)
--txpool.pricelimit value Minimum gas price (fee cap) limit to enforce for acceptance into the pool (default: 1)
--txpool.pricebump value Price bump percentage to replace an already existing transaction (default: 10)
--txpool.blobpricebump value Price bump percentage to replace existing (type-3) blob transaction (default: 100)
--txpool.accountslots value Minimum number of executable transaction slots guaranteed per account (default: 16)
--txpool.blobslots value Max allowed total number of blobs (within type-3 txs) per account (default: 48)
--txpool.totalblobpoollimit value Total limit of number of all blobs in txs within the txpool (default: 480)
--txpool.globalslots value Maximum number of executable transaction slots for all accounts (default: 10000)
--txpool.globalbasefeeslots value Maximum number of non-executable transactions where only not enough baseFee (default: 30000)
--txpool.globalqueue value Maximum number of non-executable transaction slots for all accounts (default: 30000)
--txpool.trace.senders value Comma separated list of addresses, whose transactions will traced in transaction pool with debug printing
--txpool.commit.every value How often transactions should be committed to the storage (default: 15s)
--prune.distance value Keep state history for the latest N blocks (default: everything) (default: 0)
--prune.distance.blocks value Keep block history for the latest N blocks (default: everything) (default: 0)
--prune.mode value Choose a pruning preset to run onto. Available values: "full", "archive", "minimal".
Full: Keep only blocks and latest state,
Archive: Keep the entire indexed database, aka. no pruning,
Minimal: Keep only latest state (default: "full")
--batchSize value Batch size for the execution stage (default: "512M")
--bodies.cache value Limit on the cache for block bodies (default: "268435456")
--database.verbosity value Enabling internal db logs. Very high verbosity levels may require recompile db. Default: 2, means warning. (default: 2)
--private.api.addr value Erigon's components (txpool, rpcdaemon, sentry, downloader, ...) can be deployed as independent Processes on same/another server. Then components will connect to erigon by this internal grpc API. example: 127.0.0.1:9090, empty string means not to start the listener. do not expose to public network. serves remote database interface (default: "127.0.0.1:9090")
--private.api.ratelimit value Amount of requests server handle simultaneously - requests over this limit will wait. Increase it - if clients see 'request timeout' while server load is low - it means your 'hot data' is small or have much RAM. (default: 31872)
--etl.bufferSize value Buffer size for ETL operations. (default: "256MB")
--tls Enable TLS handshake (default: false)
--tls.cert value Specify certificate
--tls.key value Specify key file
--tls.cacert value Specify certificate authority
--state.stream.disable Disable streaming of state changes from core to RPC daemon (default: false)
--sync.loop.throttle value Sets the minimum time between sync loop starts (e.g. 1h30m, default is none)
--bad.block value Marks block with given hex string as bad and forces initial reorg before normal staged sync
--http JSON-RPC server (enabled by default). Use --http=false to disable it (default: true)
--http.enabled JSON-RPC HTTP server (enabled by default). Use --http.enabled=false to disable it (default: true)
--graphql Enable the graphql endpoint (default: false)
--http.addr value HTTP-RPC server listening interface (default: "localhost")
--http.port value HTTP-RPC server listening port (default: 8545)
--authrpc.addr value HTTP-RPC server listening interface for the Engine API (default: "localhost")
--authrpc.port value HTTP-RPC server listening port for the Engine API (default: 8551)
--authrpc.jwtsecret value Path to the token that ensures safe connection between CL and EL
--http.compression Enable compression over HTTP-RPC (default: false)
--http.corsdomain value Comma separated list of domains from which to accept cross origin requests (browser enforced)
--http.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts 'any' or '*' as wildcard. (default: "localhost")
--authrpc.vhosts value Comma separated list of virtual hostnames from which to accept Engine API requests (server enforced). Accepts 'any' or '*' as wildcard. (default: "localhost")
--http.api value API's offered over the HTTP-RPC interface (default: "eth,erigon,engine")
--ws.port value WS-RPC server listening port (default: 8546)
--ws Enable the WS-RPC server (default: false)
--ws.compression Enable compression over WebSocket (default: false)
--http.trace Print all HTTP requests to logs with INFO level (default: false)
--http.dbg.single Allow passing the HTTP header 'dbg: true' to print more detailed logs about how the request was executed (default: false)
--state.cache value Amount of data to store in StateCache (enabled if no --datadir set). Set 0 to disable StateCache. Defaults to 0MB (default: "0MB")
--rpc.batch.concurrency value Does limit amount of goroutines to process 1 batch request. Means 1 batch request can't overload server. 1 batch still can have unlimited amount of request (default: 2)
--rpc.streaming.disable Erigon has enabled json streaming for some heavy endpoints (like trace_*). It's a trade-off: greatly reduce amount of RAM (in some cases from 30GB to 30mb), but it produce invalid json format if error happened in the middle of streaming (because json is not streaming-friendly format) (default: false)
--db.read.concurrency value Does limit amount of parallel db reads. Default: equal to GOMAXPROCS (or number of CPU) (default: 1408)
--rpc.accessList value Specify granular (method-by-method) API allowlist
--trace.compat Bug for bug compatibility with OE for trace_ routines (default: false)
--rpc.gascap value Sets a cap on gas that can be used in eth_call/estimateGas (default: 50000000)
--rpc.batch.limit value Maximum number of requests in a batch (default: 100)
--rpc.returndata.limit value Maximum number of bytes returned from eth_call or similar invocations (default: 100000)
--rpc.allow-unprotected-txs Allow for unprotected (non-EIP155 signed) transactions to be submitted via RPC (default: false)
--rpc.maxgetproofrewindblockcount.limit value Max GetProof rewind block count (default: 100000)
--rpc.txfeecap value Sets a cap on transaction fee (in ether) that can be sent via the RPC APIs (0 = no cap) (default: 1)
--txpool.api.addr value TxPool api network address, for example: 127.0.0.1:9090 (default: use value of --private.api.addr)
--trace.maxtraces value Sets a limit on traces that can be returned in trace_filter (default: 200)
--http.timeouts.read value Maximum duration for reading the entire request, including the body. (default: 30s)
--http.timeouts.write value Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read. (default: 30m0s)
--http.timeouts.idle value Maximum amount of time to wait for the next request when keep-alives are enabled. If http.timeouts.idle is zero, the value of http.timeouts.read is used. (default: 2m0s)
--authrpc.timeouts.read value Maximum duration for reading the entire request, including the body. (default: 30s)
--authrpc.timeouts.write value Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read. (default: 30m0s)
--authrpc.timeouts.idle value Maximum amount of time to wait for the next request when keep-alives are enabled. If authrpc.timeouts.idle is zero, the value of authrpc.timeouts.read is used. (default: 2m0s)
--rpc.evmtimeout value Maximum amount of time to wait for the answer from EVM call. (default: 5m0s)
--rpc.overlay.getlogstimeout value Maximum amount of time to wait for the answer from the overlay_getLogs call. (default: 5m0s)
--rpc.overlay.replayblocktimeout value Maximum amount of time to wait for the answer to replay a single block when called from an overlay_getLogs call. (default: 10s)
--rpc.subscription.filters.maxlogs value Maximum number of logs to store per subscription. (default: 0)
--rpc.subscription.filters.maxheaders value Maximum number of block headers to store per subscription. (default: 0)
--rpc.subscription.filters.maxtxs value Maximum number of transactions to store per subscription. (default: 0)
--rpc.subscription.filters.maxaddresses value Maximum number of addresses per subscription to filter logs by. (default: 0)
--rpc.subscription.filters.maxtopics value Maximum number of topics per subscription to filter logs by. (default: 0)
--snap.keepblocks Keep ancient blocks in db (useful for debug) (default: false)
--snap.stop Workaround to stop producing new snapshots, if you meet some snapshots-related critical bug. It will stop move historical data from DB to new immutable snapshots. DB will grow and may slightly slow-down - and removing this flag in future will not fix this effect (db size will not greatly reduce). (default: false)
--snap.state.stop Workaround to stop producing new state files, if you meet some state-related critical bug. It will stop aggregate DB history in a state files. DB will grow and may slightly slow-down - and removing this flag in future will not fix this effect (db size will not greatly reduce). (default: false)
--snap.skip-state-snapshot-download Skip state download and start from genesis block (default: false)
--db.pagesize value DB is split into 'pages' of fixed size. Cannot be changed after DB creation. Must be power of 2 and '256b <= pagesize <= 64kb'. Default: equal to Operating System's pageSize. Bigger pageSize causing: 1. More writes to disk during commit 2. Smaller b-tree height 3. Less fragmentation 4. Less overhead on 'free-pages list' maintenance (a bit faster Put/Commit) 5. If expecting DB-size > 8Tb then set pageSize >= 8Kb (default: "4KB")
--db.size.limit value Runtime limit of chaindata db size (can change at any time) (default: "200GB")
--db.writemap Enable WRITE_MAP feature for fast database writes and fast commit times (default: true)
--torrent.port value Port to listen and serve BitTorrent protocol (default: 42069)
--torrent.maxpeers value Unused parameter (reserved for future use) (default: 100)
--torrent.conns.perfile value Number of connections per file (default: 10)
--torrent.download.slots value Amount of files to download in parallel. (default: 128)
--torrent.staticpeers value Comma separated host:port to connect to
--torrent.upload.rate value Bytes per second, example: 32mb (default: "4mb")
--torrent.download.rate value Bytes per second, example: 32mb (default: "128mb")
--torrent.verbosity value 0=silent, 1=error, 2=warn, 3=info, 4=debug, 5=detail (must set --verbosity to equal or higher level and has default: 2) (default: 2)
--port value Network listening port (default: 30303)
--p2p.protocol value [ --p2p.protocol value ] Version of eth p2p protocol (default: 68, 67)
--p2p.allowed-ports value [ --p2p.allowed-ports value ] Allowed ports to pick for different eth p2p protocol versions as follows <porta>,<portb>,..,<porti> (default: 30303, 30304, 30305, 30306, 30307)
--nat value NAT port mapping mechanism (any|none|upnp|pmp|stun|extip:<IP>)
"" or "none" Default - do not nat
"extip:77.12.33.4" Will assume the local machine is reachable on the given IP
"any" Uses the first auto-detected mechanism
"upnp" Uses the Universal Plug and Play protocol
"pmp" Uses NAT-PMP with an auto-detected gateway address
"pmp:192.168.0.1" Uses NAT-PMP with the given gateway address
"stun" Uses STUN to detect an external IP using a default server
"stun:<server>" Uses STUN to detect an external IP using the given server (host:port)
--nodiscover Disables the peer discovery mechanism (manual peer addition) (default: false)
--v5disc Enables the experimental RLPx V5 (Topic Discovery) mechanism (default: false)
--netrestrict value Restricts network communication to the given IP networks (CIDR masks)
--nodekey value P2P node key file
--nodekeyhex value P2P node key as hex (for testing)
--discovery.dns value Sets DNS discovery entry points (use "" to disable DNS)
--bootnodes value Comma separated enode URLs for P2P discovery bootstrap
--staticpeers value Comma separated enode URLs to connect to
--trustedpeers value Comma separated enode URLs which are always allowed to connect, even above the peer limit
--maxpeers value Maximum number of network peers (network disabled if set to 0) (default: 32)
--chain value name of the network to join (default: "mainnet")
--dev.period value Block period to use in developer mode (0 = mine only if transaction pending) (default: 0)
--vmdebug Record information useful for VM and contract debugging (default: false)
--networkid value Explicitly set network id (integer)(For testnets: use --chain <testnet_name> instead) (default: 1)
--fakepow Disables proof-of-work verification (default: false)
--gpo.blocks value Number of recent blocks to check for gas prices (default: 20)
--gpo.percentile value Suggested gas price is the given percentile of a set of recent transaction gas prices (default: 60)
--allow-insecure-unlock Allow insecure account unlocking when account-related RPCs are exposed by http (default: false)
--identity value Custom node name
--clique.checkpoint value Number of blocks after which to save the vote snapshot to the database (default: 10)
--clique.snapshots value Number of recent vote snapshots to keep in memory (default: 1024)
--clique.signatures value Number of recent block signatures to keep in memory (default: 16384)
--clique.datadir value Path to clique db folder
--mine Enable mining (default: false)
--proposer.disable Disables PoS proposer (default: false)
--miner.notify value Comma separated HTTP URL list to notify of new work packages
--miner.gaslimit value Target gas limit for mined blocks (default: 36000000)
--miner.etherbase value Public address for block mining rewards (default: "0")
--miner.extradata value Block extra data set by the miner (default = client version)
--miner.noverify Disable remote sealing verification (default: false)
--miner.sigfile value Private key to sign blocks with
--miner.recommit value Time interval to recreate the block being mined (default: 3s)
--sentry.api.addr value Comma separated sentry addresses '<host>:<port>,<host>:<port>'
--sentry.log-peer-info Log detailed peer info when a peer connects or disconnects. Enable to integrate with observer. (default: false)
--downloader.api.addr value downloader address '<host>:<port>'
--downloader.disable.ipv4 Turns off ipv4 for the downloader (default: false)
--downloader.disable.ipv6 Turns off ipv6 for the downloader (default: false)
--no-downloader Disables downloader component (default: false)
--downloader.verify Verify snapshots on startup. It will not report problems found, but re-download broken pieces. (default: false)
--healthcheck Enable grpc health check (default: false)
--bor.heimdall value URL of Heimdall service (default: "http://localhost:1317")
--webseed value Comma-separated URL's, holding metadata about network-support infrastructure (like S3 buckets with snapshots, bootnodes, etc...)
--bor.withoutheimdall Run without Heimdall service (for testing purposes) (default: false)
--bor.period Override the bor block period (for testing purposes) (default: false)
--bor.minblocksize Ignore the bor block period and wait for 'blocksize' transactions (for testing purposes) (default: false)
--bor.milestone Enabling bor milestone processing (default: true)
--bor.waypoints Enabling bor waypoint recording (default: false)
--polygon.sync Enabling syncing using the new polygon sync component (default: true)
--polygon.sync.stage Enabling syncing with a stage that uses the polygon sync component (default: false)
--ethstats value Reporting URL of a ethstats service (nodename:secret@host:port)
--override.prague value Manually specify the Prague fork time, overriding the bundled setting (default: 0)
--caplin.discovery.addr value Address for Caplin DISCV5 protocol (default: "127.0.0.1")
--caplin.discovery.port value Port for Caplin DISCV5 protocol (default: 4000)
--caplin.discovery.tcpport value TCP Port for Caplin DISCV5 protocol (default: 4001)
--caplin.checkpoint-sync-url value [ --caplin.checkpoint-sync-url value ] checkpoint sync endpoint
--caplin.subscribe-all-topics Subscribe to all gossip topics (default: false)
--caplin.max-peer-count value Max number of peers to connect (default: 80)
--caplin.enable-upnp Enable NAT porting for Caplin (default: false)
--caplin.max-inbound-traffic-per-peer value Max inbound traffic per second per peer (default: "256KB")
--caplin.max-outbound-traffic-per-peer value Max outbound traffic per second per peer (default: "256KB")
--caplin.adaptable-maximum-traffic-requirements Make the node adaptable to the maximum traffic requirement based on how many validators are being run (default: true)
--sentinel.addr value Address for sentinel (default: "localhost")
--sentinel.port value Port for sentinel (default: 7777)
--sentinel.bootnodes value [ --sentinel.bootnodes value ] Comma separated enode URLs for P2P discovery bootstrap
--sentinel.staticpeers value [ --sentinel.staticpeers value ] connect to comma-separated Consensus static peers
--ots.search.max.pagesize value Max allowed page size for search methods (default: 25)
--silkworm.exec Enable Silkworm block execution (default: false)
--silkworm.rpc Enable embedded Silkworm RPC service (default: false)
--silkworm.sentry Enable embedded Silkworm Sentry service (default: false)
--silkworm.verbosity value Set the log level for Silkworm console logs (default: "info")
--silkworm.contexts value Number of I/O contexts used in embedded Silkworm RPC and Sentry services (zero means use default in Silkworm) (default: 0)
--silkworm.rpc.log Enable interface log for embedded Silkworm RPC service (default: false)
--silkworm.rpc.log.maxsize value Max interface log file size in MB for embedded Silkworm RPC service (default: 1)
--silkworm.rpc.log.maxfiles value Max interface log files for embedded Silkworm RPC service (default: 100)
--silkworm.rpc.log.response Dump responses in interface logs for embedded Silkworm RPC service (default: false)
--silkworm.rpc.workers value Number of worker threads used in embedded Silkworm RPC service (zero means use default in Silkworm) (default: 0)
--silkworm.rpc.compatibility Preserve JSON-RPC compatibility using embedded Silkworm RPC service (default: true)
--beacon.api value [ --beacon.api value ] Enable beacon API (available endpoints: beacon, builder, config, debug, events, node, validator, lighthouse)
--beacon.api.addr value sets the host to listen for beacon api requests (default: "localhost")
--beacon.api.cors.allow-methods value [ --beacon.api.cors.allow-methods value ] set the cors' allow methods (default: "GET", "POST", "PUT", "DELETE", "OPTIONS")
--beacon.api.cors.allow-origins value [ --beacon.api.cors.allow-origins value ] set the cors' allow origins
--beacon.api.cors.allow-credentials set the cors' allow credentials (default: false)
--beacon.api.port value sets the port to listen for beacon api requests (default: 5555)
--beacon.api.read.timeout value Sets the seconds for a read time out in the beacon api (default: 5)
--beacon.api.write.timeout value Sets the seconds for a write time out in the beacon api (default: 31536000)
--beacon.api.protocol value Protocol for beacon API (default: "tcp")
--beacon.api.ide.timeout value Sets the seconds for a write time out in the beacon api (default: 25)
--caplin.blocks-archive sets whether backfilling is enabled for caplin (default: false)
--caplin.blobs-archive sets whether backfilling is enabled for caplin (default: false)
--caplin.states-archive enables archival node for historical states in caplin (it will enable block archival as well) (default: false)
--caplin.blobs-immediate-backfill sets whether caplin should immediately backfill blobs (4096 epochs) (default: false)
--caplin.blobs-no-pruning disable blob pruning in caplin (default: false)
--caplin.checkpoint-sync.disable disable checkpoint sync in caplin (default: false)
--caplin.snapgen enables snapshot generation in caplin (default: false)
--caplin.mev-relay-url value MEV relay endpoint. Caplin runs in builder mode if this is set
--caplin.validator-monitor Enable caplin validator monitoring metrics (default: false)
--caplin.custom-config value set the custom config for caplin
--caplin.custom-genesis value set the custom genesis for caplin
--trusted-setup-file value Absolute path to trusted_setup.json file
--rpc.slow value Print in logs RPC requests slower than given threshold: 100ms, 1s, 1m. Excluded methods: eth_getBlock,eth_getBlockByNumber,eth_getBlockByHash,eth_blockNumber,erigon_blockNumber,erigon_getHeaderByNumber,erigon_getHeaderByHash,erigon_getBlockByTimestamp,eth_call (default: 0s)
--txpool.gossip.disable Disabling p2p gossip of txs. Any txs received by p2p - will be dropped. Some networks like 'Optimism execution engine'/'Optimistic Rollup' - using it to protect against MEV attacks (default: false)
--sync.loop.block.limit value Sets the maximum number of blocks to process per loop iteration (default: 5000)
--sync.loop.break.after value Sets the last stage of the sync loop to run
--sync.parallel-state-flushing Enables parallel state flushing (default: true)
--chaos.monkey Enable 'chaos monkey' to generate spontaneous network/consensus/etc failures. Use ONLY for testing (default: false)
--shutter Enable the Shutter encrypted transactions mempool (defaults to false) (default: false)
--shutter.p2p.bootstrap.nodes value [ --shutter.p2p.bootstrap.nodes value ] Use to override the default p2p bootstrap nodes (defaults to using the values in the embedded config)
--shutter.p2p.listen.port value Use to override the default p2p listen port (defaults to 23102) (default: 0)
--pprof Enable the pprof HTTP server (default: false)
--pprof.addr value pprof HTTP server listening interface (default: "127.0.0.1")
--pprof.port value pprof HTTP server listening port (default: 6060)
--pprof.cpuprofile value Write CPU profile to the given file
--trace value Write execution trace to the given file
--metrics Enable metrics collection and reporting (default: false)
--metrics.addr value Enable stand-alone metrics HTTP server listening interface (default: "127.0.0.1")
--metrics.port value Metrics HTTP server listening port (default: 6061)
--diagnostics.disabled Disable diagnostics (default: false)
--diagnostics.endpoint.addr value Diagnostics HTTP server listening interface (default: "127.0.0.1")
--diagnostics.endpoint.port value Diagnostics HTTP server listening port (default: 6062)
--diagnostics.speedtest Enable speed test (default: false)
--log.json Format console logs with JSON (default: false)
--log.console.json Format console logs with JSON (default: false)
--log.dir.json Format file logs with JSON (default: false)
--verbosity value Set the log level for console logs (default: "info")
--log.console.verbosity value Set the log level for console logs (default: "info")
--log.dir.disable disable disk logging (default: false)
--log.dir.path value Path to store user and error logs to disk
--log.dir.prefix value The file name prefix for logs stored to disk
--log.dir.verbosity value Set the log verbosity for logs stored to disk (default: "info")
--log.delays Enable block delay logging (default: false)
--config value Sets erigon flags from YAML/TOML file
--help, -h show help
--version, -v print the version
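Rather than passing a long command line, the flags above can also be collected in a file supplied via --config. A minimal YAML sketch follows; it assumes the keys mirror the CLI flag names, and the values are purely illustrative, not recommendations:

```yaml
# Illustrative erigon config file, passed as: erigon --config erigon.yaml
# Keys are assumed to mirror the CLI flag names listed above.
chain: "mainnet"
http.api: "eth,erigon,trace"
ws: true
metrics: true
metrics.port: 6061
```

Command-line flags take precedence over values read from the config file in most CLI frameworks; verify against your Erigon version before relying on it.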
RPC Daemon
Remote Procedure Call
The RPC daemon is a crucial component of Erigon, enabling JSON remote procedure calls and providing access to various APIs.
Erigon RPC Method Guidelines
This document provides guidelines for understanding and using the various RPC methods available in Erigon.
- Compatibility with eth namespace - Compatibility with standard Geth methods. All methods featured by Geth, including the WebSocket server, IPC server, TLS, GraphQL, etc., are supported by Erigon.
- Otterscan methods (ots_) - In addition to the standard Geth methods, Erigon includes RPC methods prefixed with ots_ for Otterscan. These are specific to the Otterscan functionality integrated with Erigon. See more details here.
- Erigon extensions (erigon_) - Erigon introduces some small extensions to the Geth API, denoted by the erigon_ prefix, aimed at enhancing functionality; see more details here about implementation status.
- gRPC API - Erigon also exposes a gRPC API for lower-level data access. This is primarily used by Erigon's components when they are deployed separately as independent processes (either on the same or different servers). This gRPC API is also accessible to users. For more information, visit the Erigon Interfaces GitHub repository.
- Trace module (trace_) - Erigon includes the trace_ module, which originates from OpenEthereum. This module provides additional functionality related to tracing transactions and state changes, which is valuable for advanced debugging and analysis.
More info
For a comprehensive understanding of the RPC daemon's functionality, configuration, and usage, please refer to https://github.com/erigontech/erigon/blob/main/cmd/rpcdaemon/README.md (also contained in your locally compiled Erigon folder at /cmd/rpcdaemon), which covers the following key topics:
- Introduction: An overview of the RPC daemon, its benefits, and how it integrates with Erigon.
- Getting Started: Step-by-step guides for running the RPC daemon locally and remotely, including configuration options and command-line flags.
- Healthcheck: Information on performing health checks using POST requests or GET requests with custom headers.
- Testing and debugging: Examples of testing the RPC daemon using curl commands and Postman, and debugging tips.
- FAQ: Frequently asked questions and answers covering topics such as prune options, RPC implementation status, and securing communication between the RPC daemon and Erigon instance.
- For Developers: Resources for developers, including code generation and information on working with the RPC daemon.
- Relations between prune options and RPC methods: Explains how prune options affect RPC methods.
- RPC Implementation Status: Provides a table showing the current implementation status of Erigon's RPC daemon.
- Securing the communication between RPC daemon and Erigon instance via TLS and authentication: Outlines the steps to secure communication between the RPC daemon and Erigon instance.
- Ethstats: Describes how to run ethstats with the RPC daemon.
- Allowing only specific methods (Allowlist): Explains how to restrict access to specific RPC methods.
Command Line Options
To display the available options for the RPC daemon, type:
./build/bin/rpcdaemon --help
The --help flag listing is reproduced below for your convenience.
rpcdaemon is JSON RPC server that connects to Erigon node for remote DB access
Usage:
rpcdaemon [flags]
Flags:
--datadir string path to Erigon working directory
--db.read.concurrency int Does limit amount of parallel db reads. Default: equal to GOMAXPROCS (or number of CPU) (default 1408)
--diagnostics.disabled Disable diagnostics
--diagnostics.endpoint.addr string Diagnostics HTTP server listening interface (default "127.0.0.1")
--diagnostics.endpoint.port uint Diagnostics HTTP server listening port (default 6062)
--diagnostics.speedtest Enable speed test
--graphql enables graphql endpoint (disabled by default)
--grpc Enable GRPC server
--grpc.addr string GRPC server listening interface (default "localhost")
--grpc.healthcheck Enable GRPC health check
--grpc.port int GRPC server listening port (default 8547)
-h, --help help for rpcdaemon
--http.addr string HTTP server listening interface (default "localhost")
--http.api strings API's offered over the RPC interface: eth,erigon,web3,net,debug,trace,txpool,db. Supported methods: https://github.com/erigontech/erigon/tree/main/cmd/rpcdaemon (default [eth,erigon])
--http.compression Disable http compression (default true)
--http.corsdomain strings Comma separated list of domains from which to accept cross origin requests (browser enforced)
--http.dbg.single Allow passing the HTTP header 'dbg: true' to print more detailed logs about how the request was executed
--http.enabled enable http server (default true)
--http.port int HTTP server listening port (default 8545)
--http.timeouts.idle duration Maximum amount of time to wait for the next request when keep-alives are enabled. If http.timeouts.idle is zero, the value of http.timeouts.read is used (default 2m0s)
--http.timeouts.read duration Maximum duration for reading the entire request, including the body. (default 30s)
--http.timeouts.write duration Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read (default 30m0s)
--http.trace Trace HTTP requests with INFO level
--http.url string HTTP server listening url. will OVERRIDE http.addr and http.port. will NOT respect http paths. prefix supported are tcp, unix
--http.vhosts strings Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default [localhost])
--https.addr string rpc HTTPS server listening interface (default "localhost")
--https.cert string certificate for rpc HTTPS server
--https.enabled enable rpc HTTPS server
--https.key string key file for rpc HTTPS server
--https.port int rpc HTTPS server listening port. default to http+363 if not set
--https.url string rpc HTTPS server listening url. will OVERRIDE https.addr and https.port. will NOT respect paths. prefix supported are tcp, unix
--log.console.json Format console logs with JSON
--log.console.verbosity string Set the log level for console logs (default "info")
--log.delays Enable block delay logging
--log.dir.disable disable disk logging
--log.dir.json Format file logs with JSON
--log.dir.path string Path to store user and error logs to disk
--log.dir.prefix string The file name prefix for logs stored to disk
--log.dir.verbosity string Set the log verbosity for logs stored to disk (default "info")
--log.json Format console logs with JSON
--metrics Enable metrics collection and reporting
--metrics.addr string Enable stand-alone metrics HTTP server listening interface (default "127.0.0.1")
--metrics.port int Metrics HTTP server listening port (default 6061)
--ots.search.max.pagesize uint Max allowed page size for search methods (default 25)
--polygon.sync Enable if Erigon has been synced using the new polygon sync component
--pprof Enable the pprof HTTP server
--pprof.addr string pprof HTTP server listening interface (default "127.0.0.1")
--pprof.cpuprofile string Write CPU profile to the given file
--pprof.port int pprof HTTP server listening port (default 6060)
--private.api.addr string Erigon's components (txpool, rpcdaemon, sentry, downloader, ...) can be deployed as independent Processes on same/another server. Then components will connect to erigon by this internal grpc API. Example: 127.0.0.1:9090 (default "127.0.0.1:9090")
--rpc.accessList string Specify granular (method-by-method) API allowlist
--rpc.allow-unprotected-txs Allow for unprotected (non-EIP155 signed) transactions to be submitted via RPC
--rpc.batch.concurrency uint Does limit amount of goroutines to process 1 batch request. Means 1 batch request can't overload server. 1 batch still can have unlimited amount of request (default 2)
--rpc.batch.limit int Maximum number of requests in a batch (default 100)
--rpc.evmtimeout duration Maximum amount of time to wait for the answer from EVM call. (default 5m0s)
--rpc.gascap uint Sets a cap on gas that can be used in eth_call/estimateGas (default 50000000)
--rpc.maxgetproofrewindblockcount.limit int Max GetProof rewind block count (default 100000)
--rpc.overlay.getlogstimeout duration Maximum amount of time to wait for the answer from the overlay_getLogs call. (default 5m0s)
--rpc.overlay.replayblocktimeout duration Maximum amount of time to wait for the answer to replay a single block when called from an overlay_getLogs call. (default 10s)
--rpc.returndata.limit int Maximum number of bytes returned from eth_call or similar invocations (default 100000)
--rpc.slow duration Print in logs RPC requests slower than given threshold: 100ms, 1s, 1m. Excluded methods: eth_getBlock,eth_getBlockByNumber,eth_getBlockByHash,eth_blockNumber,erigon_blockNumber,erigon_getHeaderByNumber,erigon_getHeaderByHash,erigon_getBlockByTimestamp,eth_call
--rpc.streaming.disable Erigon has enabled json streaming for some heavy endpoints (like trace_*). It's a trade-off: greatly reduce amount of RAM (in some cases from 30GB to 30mb), but it produce invalid json format if error happened in the middle of streaming (because json is not streaming-friendly format)
--rpc.subscription.filters.maxaddresses int Maximum number of addresses per subscription to filter logs by.
--rpc.subscription.filters.maxheaders int Maximum number of block headers to store per subscription.
--rpc.subscription.filters.maxlogs int Maximum number of logs to store per subscription.
--rpc.subscription.filters.maxtopics int Maximum number of topics per subscription to filter logs by.
--rpc.subscription.filters.maxtxs int Maximum number of transactions to store per subscription.
--rpc.txfeecap float Sets a cap on transaction fee (in ether) that can be sent via the RPC APIs (0 = no cap) (default 1)
--socket.enabled Enable IPC server
--socket.url string IPC server listening url. prefix supported are tcp, unix (default "unix:///var/run/erigon.sock")
--state.cache string Amount of data to store in StateCache (enabled if no --datadir set). Set 0 to disable StateCache. Defaults to 0MB RAM (default "0MB")
--tls.cacert string CA certificate for client side TLS handshake for GRPC
--tls.cert string certificate for client side TLS handshake for GRPC
--tls.key string key file for client side TLS handshake for GRPC
--trace string Write execution trace to the given file
--trace.compat Bug for bug compatibility with OE for trace_ routines
--trace.maxtraces uint Sets a limit on traces that can be returned in trace_filter (default 200)
--txpool.api.addr string txpool api network address, for example: 127.0.0.1:9090 (default: use value of --private.api.addr)
--verbosity string Set the log level for console logs (default "info")
--ws Enable Websockets - Same port as HTTP[S]
--ws.api.subscribelogs.channelsize int Size of the channel used for websocket logs subscriptions (default 8192)
--ws.compression Enable Websocket compression (RFC 7692)
The trace Module
The trace module is for getting a deeper insight into transaction processing. It includes two sets of calls; the transaction trace filtering API and the ad-hoc tracing API.
Note: In order to use the Transaction-Trace Filtering API, Erigon must be fully synced with trace included in the --http.api flag:
./build/bin/erigon --http.api=eth,erigon,trace
As for the Ad-hoc Tracing API, as long as the blocks have not yet been pruned, the RPC calls will work.
The Ad-hoc Tracing API
The ad-hoc tracing API allows you to perform a number of different diagnostics on calls or transactions, either historical ones from the chain or hypothetical ones not yet mined. The diagnostics include:
- trace - Transaction trace. An equivalent trace to that in the previous section.
- vmTrace - Virtual Machine execution trace. Provides a full trace of the VM's state throughout the execution of the transaction, including for any subcalls.
- stateDiff - State difference. Provides information detailing all altered portions of the Ethereum state made due to the execution of the transaction.
There are three means of providing a transaction to execute: providing the same information as when making a call using eth_call (see trace_call), providing raw, signed transaction data as when using eth_sendRawTransaction (see trace_rawTransaction), or simply providing the hash of a previously mined transaction (see trace_replayTransaction). In the latter case, your node must be in archive mode or the transaction should be within the most recent 1000 blocks.
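Of these three routes, replaying a mined transaction needs only its hash plus the requested trace types. The sketch below builds such a JSON-RPC payload; the helper name and the all-zero hash are illustrative placeholders, not from the source:

```python
import json

def replay_tx_request(tx_hash, trace_types=("trace",), request_id=1):
    """Build a JSON-RPC 2.0 payload for trace_replayTransaction.

    Illustrative helper: params are [transaction hash, list of trace types],
    where trace types are any of "trace", "vmTrace", "stateDiff".
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "trace_replayTransaction",
        "params": [tx_hash, list(trace_types)],
    })

# Placeholder hash; POST this body to the node's HTTP-RPC port (8545 by default).
payload = replay_tx_request("0x" + "00" * 32, ["trace", "stateDiff"])
print(payload)
```

The resulting body can be sent with curl in the same way as the trace_call example later in this chapter.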
The Transaction-Trace Filtering API
These APIs allow you to get a full externality trace on any transaction executed throughout the Erigon chain.
Unlike the log filtering API, you are able to search and filter based only upon address information.
Information returned includes the execution of all CREATEs, SUICIDEs, and all variants of CALL, together with input data, output data, gas usage, amount transferred, and the success status of each individual action.
The traceAddress field
The traceAddress field of all returned traces gives the exact location in the call trace: [index in root, index in first CALL, index in second CALL, ...].
i.e. if the trace is:
A
CALLs B
CALLs G
CALLs C
CALLs G
then it should look something like:
[ {A: []}, {B: [0]}, {G: [0, 0]}, {C: [1]}, {G: [1, 0]} ]
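To see this structure on a real transaction, here is a sketch (assuming a local Erigon with the trace API enabled and the jq tool installed) that prints the traceAddress of every action returned by trace_transaction:

```shell
# Sketch: prints the traceAddress of every action in a transaction.
# Assumes a local Erigon serving JSON-RPC with the trace namespace enabled.
curl -s -H "Content-Type: application/json" -X POST localhost:8545 \
  --data '{"method":"trace_transaction","params":["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3"],"id":1,"jsonrpc":"2.0"}' \
  | jq -c '[.result[].traceAddress]'
# For the example call tree above this would print [[],[0],[0,0],[1],[1,0]]
```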
JSON-RPC methods
Ad-hoc Tracing
- trace_call
- trace_callMany
- trace_rawTransaction
- trace_replayBlockTransactions
- trace_replayTransaction
Transaction-Trace Filtering
- trace_block
- trace_filter
- trace_get
- trace_transaction
JSON-RPC API Reference
trace_call
Executes the given call and returns a number of possible traces for it.
Parameters
- Object - Transaction object, where the from field is optional and the nonce field is omitted.
- Array - Type of trace, one or more of: "vmTrace", "trace", "stateDiff".
- Quantity or Tag - (optional) Integer of a block number, or the string 'earliest', 'latest' or 'pending'.
Returns
Array
- Block traces
Example
Request
curl --data '{"method":"trace_call","params":[{ ... },["trace"]],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": {
"output": "0x",
"stateDiff": null,
"trace": [{
"action": { ... },
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call"
}],
"vmTrace": null
}
}
trace_callMany
Performs multiple call traces on top of the same block, i.e. transaction n will be executed on top of a pending block with all n-1 transactions applied (traced) first. This allows tracing dependent transactions.
Parameters
- Array - List of trace calls with the type of trace, one or more of: "vmTrace", "trace", "stateDiff".
- Quantity or Tag - (optional) Integer block number, or the string 'latest', 'earliest' or 'pending', see the default block parameter.
params: [
[
[
{
"from": "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
"to": "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b",
"value": "0x186a0"
},
["trace"]
],
[
{
"from": "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
"to": "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b",
"value": "0x186a0"
},
["trace"]
]
],
"latest"
]
Returns
Array
- Array of the given transactions' traces
Example
Request
curl --data '{"method":"trace_callMany","params":[[[{"from":"0x407d73d8a49eeb85d32cf465507dd71d507100c1","to":"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b","value":"0x186a0"},["trace"]],[{"from":"0x407d73d8a49eeb85d32cf465507dd71d507100c1","to":"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b","value":"0x186a0"},["trace"]]],"latest"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"output": "0x",
"stateDiff": null,
"trace": [{
"action": {
"callType": "call",
"from": "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
"gas": "0x1dcd12f8",
"input": "0x",
"to": "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b",
"value": "0x186a0"
},
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call"
}],
"vmTrace": null
},
{
"output": "0x",
"stateDiff": null,
"trace": [{
"action": {
"callType": "call",
"from": "0x407d73d8a49eeb85d32cf465507dd71d507100c1",
"gas": "0x1dcd12f8",
"input": "0x",
"to": "0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b",
"value": "0x186a0"
},
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call"
}],
"vmTrace": null
}
]
}
trace_rawTransaction
Traces a call to eth_sendRawTransaction
without making the call, returning the traces
Parameters
- Data - Raw transaction data.
- Array - Type of trace, one or more of: "vmTrace", "trace", "stateDiff".
params: [
"0xd46e8dd67c5d32be8d46e8dd67c5d32be8058bb8eb970870f072445675058bb8eb970870f072445675",
["trace"]
]
Returns
Object
- Block traces.
Example
Request
curl --data '{"method":"trace_rawTransaction","params":["0xd46e8dd67c5d32be8d46e8dd67c5d32be8058bb8eb970870f072445675058bb8eb970870f072445675",["trace"]],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": {
"output": "0x",
"stateDiff": null,
"trace": [{
"action": { ... },
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call"
}],
"vmTrace": null
}
}
trace_replayBlockTransactions
Replays all transactions in a block returning the requested traces for each transaction.
Parameters
- Quantity or Tag - Integer of a block number, or the string 'earliest', 'latest' or 'pending'.
- Array - Type of trace, one or more of: "vmTrace", "trace", "stateDiff".
params: [
"0x2ed119",
["trace"]
]
Returns
Array
- Block transactions traces.
Example
Request
curl --data '{"method":"trace_replayBlockTransactions","params":["0x2ed119",["trace"]],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"output": "0x",
"stateDiff": null,
"trace": [{
"action": { ... },
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call"
}],
"transactionHash": "0x...",
"vmTrace": null
},
{ ... }
]
}
trace_replayTransaction
Replays a transaction, returning the traces.
Parameters
- Hash - Transaction hash.
- Array - Type of trace, one or more of: "vmTrace", "trace", "stateDiff".
params: [
"0x02d4a872e096445e80d05276ee756cefef7f3b376bcec14246469c0cd97dad8f",
["trace"]
]
Returns
Object
- Block traces.
Example
Request
curl --data '{"method":"trace_replayTransaction","params":["0x02d4a872e096445e80d05276ee756cefef7f3b376bcec14246469c0cd97dad8f",["trace"]],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": {
"output": "0x",
"stateDiff": null,
"trace": [{
"action": { ... },
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"type": "call"
}],
"vmTrace": null
}
}
trace_block
Returns traces created at given block.
Parameters
- Quantity or Tag - Integer of a block number, or the string 'earliest', 'latest' or 'pending'.
params: [
"0x2ed119" // 3068185
]
Returns
Array
- Block traces.
Example
Request
curl --data '{"method":"trace_block","params":["0x2ed119"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"action": {
"callType": "call",
"from": "0xaa7b131dc60b80d3cf5e59b5a21a666aa039c951",
"gas": "0x0",
"input": "0x",
"to": "0xd40aba8166a212d6892125f079c33e6f5ca19814",
"value": "0x4768d7effc3fbe"
},
"blockHash": "0x7eb25504e4c202cf3d62fd585d3e238f592c780cca82dacb2ed3cb5b38883add",
"blockNumber": 3068185,
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"transactionHash": "0x07da28d752aba3b9dd7060005e554719c6205c8a3aea358599fc9b245c52f1f6",
"transactionPosition": 0,
"type": "call"
},
...
]
}
trace_filter
Returns traces matching given filter
Parameters
Object - The filter object:
- fromBlock: Quantity or Tag - (optional) From this block.
- toBlock: Quantity or Tag - (optional) To this block.
- fromAddress: Array - (optional) Sent from these addresses.
- toAddress: Array - (optional) Sent to these addresses.
- after: Quantity - (optional) The offset trace number.
- count: Quantity - (optional) Integer number of traces to display in a batch.
params: [{
"fromBlock": "0x2ed0c4", // 3068100
"toBlock": "0x2ed128", // 3068200
"toAddress": ["0x8bbB73BCB5d553B5A556358d27625323Fd781D37"],
"after": 1000,
"count": 100
}]
Returns
Array
- Traces matching given filter
Example
Request
curl --data '{"method":"trace_filter","params":[{"fromBlock":"0x2ed0c4","toBlock":"0x2ed128","toAddress":["0x8bbB73BCB5d553B5A556358d27625323Fd781D37"],"after":1000,"count":100}],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"action": {
"callType": "call",
"from": "0x32be343b94f860124dc4fee278fdcbd38c102d88",
"gas": "0x4c40d",
"input": "0x",
"to": "0x8bbb73bcb5d553b5a556358d27625323fd781d37",
"value": "0x3f0650ec47fd240000"
},
"blockHash": "0x86df301bcdd8248d982dbf039f09faf792684e1aeee99d5b58b77d620008b80f",
"blockNumber": 3068183,
"result": {
"gasUsed": "0x0",
"output": "0x"
},
"subtraces": 0,
"traceAddress": [],
"transactionHash": "0x3321a7708b1083130bd78da0d62ead9f6683033231617c9d268e2c7e3fa6c104",
"transactionPosition": 3,
"type": "call"
},
...
]
}
trace_get
Returns trace at given position.
Parameters
- Hash - Transaction hash.
- Array - Index positions of the traces.
params: [
"0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",
["0x0"]
]
Returns
Object
- Trace object
Example
Request
curl --data '{"method":"trace_get","params":["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",["0x0"]],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": {
"action": {
"callType": "call",
"from": "0x1c39ba39e4735cb65978d4db400ddd70a72dc750",
"gas": "0x13e99",
"input": "0x16c72721",
"to": "0x2bd2326c993dfaef84f696526064ff22eba5b362",
"value": "0x0"
},
"blockHash": "0x7eb25504e4c202cf3d62fd585d3e238f592c780cca82dacb2ed3cb5b38883add",
"blockNumber": 3068185,
"result": {
"gasUsed": "0x183",
"output": "0x0000000000000000000000000000000000000000000000000000000000000001"
},
"subtraces": 0,
"traceAddress": [
0
],
"transactionHash": "0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",
"transactionPosition": 2,
"type": "call"
}
}
trace_transaction
Returns all traces of given transaction
Parameters
- Hash - Transaction hash.
params: ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3"]
Returns
Array
- Traces of given transaction
Example
Request
curl --data '{"method":"trace_transaction","params":["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": [
{
"action": {
"callType": "call",
"from": "0x1c39ba39e4735cb65978d4db400ddd70a72dc750",
"gas": "0x13e99",
"input": "0x16c72721",
"to": "0x2bd2326c993dfaef84f696526064ff22eba5b362",
"value": "0x0"
},
"blockHash": "0x7eb25504e4c202cf3d62fd585d3e238f592c780cca82dacb2ed3cb5b38883add",
"blockNumber": 3068185,
"result": {
"gasUsed": "0x183",
"output": "0x0000000000000000000000000000000000000000000000000000000000000001"
},
"subtraces": 0,
"traceAddress": [
0
],
"transactionHash": "0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",
"transactionPosition": 2,
"type": "call"
},
...
]
}
TxPool
Memory pool management
In Erigon, txpool is a specific API namespace that keeps pending and queued transactions in the local memory pool. It is used to store transactions that are waiting to be processed by miners. The default is 4096 pending and 1024 queued transactions; however, the number of pending transactions can be much higher than this default value.
The transaction pool (txpool or mempool) is the dynamic in-memory area where pending transactions reside before they are included in a block and thus become static. Each node on the Ethereum mainnet has its own pool of transactions and, combined, they all form the global pool.
The thousands of pending transactions that enter the global pool by being broadcast on the network and before being included in a block are an always changing data set that’s holding millions of dollars at any given second. There are many ways to use txpool such as yield farming, liquidity providing, arbitrage, front running and MEV .
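The pool can be inspected over JSON-RPC. As a sketch (assuming txpool is included in Erigon's --http.api list), txpool_status reports the current number of pending and queued transactions:

```shell
# Assumes a local Erigon serving JSON-RPC with the txpool namespace enabled
curl -s --data '{"method":"txpool_status","params":[],"id":1,"jsonrpc":"2.0"}' \
  -H "Content-Type: application/json" -X POST localhost:8545
```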
While the TxPool component runs by default as an internal Erigon component, it can also be run as a separate process.
Running with TX pool as a separate process
Before using a separate TxPool process, the executable must be built:
cd erigon
make txpool
If Erigon is on a different device, add the flag --pprof.addr 0.0.0.0
or TxPool will listen on localhost by default.
./build/bin/txpool --pprof.addr 0.0.0.0
Erigon must be launched with options to listen to external TxPool
./build/bin/erigon --pprof --pprof.addr 123.123.123.123
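Putting it together, one possible single-machine sketch; --txpool.api.addr and its default localhost:9094 are taken from the --help listing below, and additional flags may be needed depending on your Erigon version:

```shell
# Terminal 1: external TxPool service (serves gRPC on localhost:9094 by default)
./build/bin/txpool

# Terminal 2: Erigon pointed at the external TxPool service (assumption:
# your version may also require disabling the internal pool)
./build/bin/erigon --txpool.api.addr=localhost:9094
```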
More info
For other information regarding TxPool functionality, configuration, and usage, please refer to the embedded file you can find in your compiled Erigon folder at ./cmd/txpool/README.md.
Command Line Options
To display available options for TxPool, type:
./build/bin/txpool --help
The --help
flag listing is reproduced below for your convenience.
Launch external Transaction Pool instance - same as built-into Erigon, but as independent Process
Usage:
txpool [flags]
Flags:
--datadir string Data directory for the databases (default "/home/bloxster/.local/share/erigon")
--db.writemap Enable WRITE_MAP feature for fast database writes and fast commit times (default true)
--diagnostics.disabled Disable diagnostics
--diagnostics.endpoint.addr string Diagnostics HTTP server listening interface (default "127.0.0.1")
--diagnostics.endpoint.port uint Diagnostics HTTP server listening port (default 6062)
--diagnostics.speedtest Enable speed test
-h, --help help for txpool
--log.console.json Format console logs with JSON
--log.console.verbosity string Set the log level for console logs (default "info")
--log.delays Enable block delay logging
--log.dir.disable disable disk logging
--log.dir.json Format file logs with JSON
--log.dir.path string Path to store user and error logs to disk
--log.dir.prefix string The file name prefix for logs stored to disk
--log.dir.verbosity string Set the log verbosity for logs stored to disk (default "info")
--log.json Format console logs with JSON
--metrics Enable metrics collection and reporting
--metrics.addr string Enable stand-alone metrics HTTP server listening interface (default "127.0.0.1")
--metrics.port int Metrics HTTP server listening port (default 6061)
--pprof Enable the pprof HTTP server
--pprof.addr string pprof HTTP server listening interface (default "127.0.0.1")
--pprof.cpuprofile string Write CPU profile to the given file
--pprof.port int pprof HTTP server listening port (default 6060)
--private.api.addr string execution service <host>:<port> (default "localhost:9090")
--sentry.api.addr strings comma separated sentry addresses '<host>:<port>,<host>:<port>' (default [localhost:9091])
--tls.cacert string CA certificate for client side TLS handshake
--tls.cert string certificate for client side TLS handshake
--tls.key string key file for client side TLS handshake
--trace string Write execution trace to the given file
--txpool.accountslots uint Minimum number of executable transaction slots guaranteed per account (default 16)
--txpool.api.addr string txpool service <host>:<port> (default "localhost:9094")
--txpool.blobpricebump uint Price bump percentage to replace an existing blob (type-3) transaction (default 100)
--txpool.blobslots uint Max allowed total number of blobs (within type-3 txs) per account (default 48)
--txpool.commit.every duration How often transactions should be committed to the storage (default 15s)
--txpool.globalbasefeeslots int Maximum number of non-executable transactions where only not enough baseFee (default 30000)
--txpool.globalqueue int Maximum number of non-executable transaction slots for all accounts (default 30000)
--txpool.globalslots int Maximum number of executable transaction slots for all accounts (default 10000)
--txpool.gossip.disable Disabling p2p gossip of txs. Any txs received by p2p - will be dropped. Some networks like 'Optimism execution engine'/'Optimistic Rollup' - using it to protect against MEV attacks
--txpool.pricebump uint Price bump percentage to replace an already existing transaction (default 10)
--txpool.pricelimit uint Minimum gas price (fee cap) limit to enforce for acceptance into the pool (default 1)
--txpool.totalblobpoollimit uint Total limit of number of all blobs in txs within the txpool (default 480)
--txpool.trace.senders strings Comma separated list of addresses, whose transactions will traced in transaction pool with debug printing
--verbosity string Set the log level for console logs (default "info")
Sentry
P2P network management
Sentry connects Erigon to the Ethereum P2P network, enabling the discovery of other participants across the Internet and secure communication with them. It performs these main functions:
-
Peer discovery via the following:
- Kademlia DHT
- DNS lookup
- Configured static peers
- Node info saved in the database
- Boot nodes pre-configured in the source code
-
Peer management:
- handshakes
- holding p2p connection even if Erigon is restarted
The ETH core interacts with the Ethereum p2p network through the Sentry component. Sentry provides a simple interface to the core, with functions to download data, receive notifications about gossip messages, upload data on request from peers, and broadcast gossip messages either to a selected set of peers or to all peers.
Running with an external Sentry or multiple Sentries
It is possible to run multiple Sentry instances to increase connectivity to the network or to obscure the location of the core computer. In this case it is necessary to define the address and port of each Sentry that should be connected to the Core.
Before using the Sentry component, the executable must be built. Head over to the /erigon directory and type:
make sentry
Then it can be launched as an independent component with the command:
./build/bin/sentry
Example
In this example we will run an instance of Erigon and Sentry on the same machine.
Following is the Sentry client running separately:
screen ./build/bin/sentry --datadir=~/.local/share/erigon
And here is Erigon attaching to it:
./build/bin/erigon --internalcl --snapshots=true --sentry.api.addr=127.0.0.1:9091
Erigon might be attached to several Sentry instances running across different machines. As per Erigon help:
--sentry.api.addr value
Where value is a comma separated list of sentry addresses '<host>:<port>,<host>:<port>'.
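For example, a sketch with hypothetical addresses, attaching one Erigon instance to two Sentry instances running on different machines:

```shell
# 192.168.0.10 and 192.168.0.11 are placeholder addresses for two machines
# each running ./build/bin/sentry; 9091 is the default Sentry gRPC port.
./build/bin/erigon --sentry.api.addr=192.168.0.10:9091,192.168.0.11:9091
```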
More info
For other information regarding Sentry functionality, configuration, and usage, please refer to the embedded file you can find in your compiled Erigon folder at ./cmd/sentry/README.md
.
Command Line Options
To display available options for Sentry, type:
./build/bin/sentry --help
The --help
flag listing is reproduced below for your convenience.
Run p2p sentry
Usage:
sentry [flags]
Flags:
--datadir string Data directory for the databases (default "/home/bloxster/.local/share/erigon")
--diagnostics.disabled Disable diagnostics
--diagnostics.endpoint.addr string Diagnostics HTTP server listening interface (default "127.0.0.1")
--diagnostics.endpoint.port uint Diagnostics HTTP server listening port (default 6062)
--diagnostics.speedtest Enable speed test
--discovery.dns strings Sets DNS discovery entry points (use "" to disable DNS)
--healthcheck Enabling grpc health check
-h, --help help for sentry
--log.console.json Format console logs with JSON
--log.console.verbosity string Set the log level for console logs (default "info")
--log.delays Enable block delay logging
--log.dir.disable disable disk logging
--log.dir.json Format file logs with JSON
--log.dir.path string Path to store user and error logs to disk
--log.dir.prefix string The file name prefix for logs stored to disk
--log.dir.verbosity string Set the log verbosity for logs stored to disk (default "info")
--log.json Format console logs with JSON
--maxpeers int Maximum number of network peers (network disabled if set to 0) (default 32)
--maxpendpeers int Maximum number of TCP connections pending to become connected peers (default 1000)
--metrics Enable metrics collection and reporting
--metrics.addr string Enable stand-alone metrics HTTP server listening interface (default "127.0.0.1")
--metrics.port int Metrics HTTP server listening port (default 6061)
--nat string NAT port mapping mechanism (any|none|upnp|pmp|stun|extip:<IP>)
"" or "none" Default - do not nat
"extip:77.12.33.4" Will assume the local machine is reachable on the given IP
"any" Uses the first auto-detected mechanism
"upnp" Uses the Universal Plug and Play protocol
"pmp" Uses NAT-PMP with an auto-detected gateway address
"pmp:192.168.0.1" Uses NAT-PMP with the given gateway address
"stun" Uses STUN to detect an external IP using a default server
"stun:<server>" Uses STUN to detect an external IP using the given server (host:port)
--netrestrict string Restricts network communication to the given IP networks (CIDR masks)
--nodiscover Disables the peer discovery mechanism (manual peer addition)
--p2p.allowed-ports uints Allowed ports to pick for different eth p2p protocol versions as follows <porta>,<portb>,..,<porti> (default [30303,30304,30305,30306,30307])
--p2p.protocol uint Version of eth p2p protocol (default 68)
--port int Network listening port (default 30303)
--pprof Enable the pprof HTTP server
--pprof.addr string pprof HTTP server listening interface (default "127.0.0.1")
--pprof.cpuprofile string Write CPU profile to the given file
--pprof.port int pprof HTTP server listening port (default 6060)
--sentry.api.addr string grpc addresses (default "localhost:9091")
--staticpeers strings Comma separated enode URLs to connect to
--trace string Write execution trace to the given file
--trustedpeers strings Comma separated enode URLs which are always allowed to connect, even above the peer limit
--verbosity string Set the log level for console logs (default "info")
Downloader
Seeding/downloading historical data
The Downloader is a service responsible for seeding and downloading historical data using the BitTorrent protocol. Data is stored in the form of immutable .seg
files, known as snapshots. The Ethereum core instructs the Downloader to download specific files, identified by their unique info hashes, which include both block headers and block bodies. The Downloader then communicates with the BitTorrent network to retrieve the necessary files, as specified by the Ethereum core.
Information:
While all Erigon components are separable and can be run on different machines, the Downloader must run on the same machine as Erigon to be able to share downloaded and seeded files.
For a comprehensive understanding of the Downloader's functionality, configuration, and usage, please refer to ./cmd/downloader/README.md with the following key topics:
- Snapshots overview: An introduction to snapshots, their benefits, and how they are created and used in Erigon.
- Starting Erigon with snapshots support: Instructions on how to start Erigon with snapshots support, either by default or as a separate process.
- Creating new networks or bootnodes: A guide on how to create new networks or bootnodes, including creating new snapshots and starting the Downloader.
- Architecture: An overview of the Downloader's architecture, including how it works with Erigon and the different ways .torrent files can be created.
- Utilities: A list of available utilities, including torrent_cat, torrent_magnet, and torrent_clean.
- Remote manifest verify: Instructions on how to verify that remote webseeds have available manifests and all manifested files are available.
- Faster rsync: Tips on how to use rsync for faster file transfer.
- Release details: Information on how to start automatic commits of new hashes to the master branch.
- Creating a seedbox: A guide on how to create a seedbox to support a new network or type of snapshots.
Some of the key sections in the documentation include:
- How to create new snapshots: Instructions on how to create new snapshots, including using the seg command and creating .torrent files.
- How to start the Downloader: Instructions on how to start the Downloader, either as a separate process or as part of Erigon.
- How to verify .seg files: Instructions on how to verify that .seg files have the same checksum as the current .torrent files.
By referring to the embedded documentation file, you can gain a deeper understanding of the Downloader's capabilities and how to effectively utilize it in your Erigon setup.
Command line options
To display available options for the Downloader, type:
./build/bin/downloader --help
The --help
flag listing is reproduced below for your convenience.
snapshot downloader
Usage:
[flags]
[command]
Examples:
go run ./cmd/downloader --datadir <your_datadir> --downloader.api.addr 127.0.0.1:9093
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
manifest
manifest-verify
torrent_cat
torrent_clean Remove all .torrent files from datadir directory
torrent_create
torrent_hashes
torrent_magnet
Flags:
--chain string name of the network to join (default "mainnet")
--datadir string Data directory for the databases (default "/home/admin/.local/share/erigon")
--db.writemap Enable WRITE_MAP feature for fast database writes and fast commit times (default true)
--diagnostics.disabled Disable diagnostics
--diagnostics.endpoint.addr string Diagnostics HTTP server listening interface (default "127.0.0.1")
--diagnostics.endpoint.port uint Diagnostics HTTP server listening port (default 6062)
--diagnostics.speedtest Enable speed test
--downloader.api.addr string external downloader api network address, for example: 127.0.0.1:9093 serves remote downloader interface (default "127.0.0.1:9093")
--downloader.disable.ipv4 Turns off ipv4 for the downloader
--downloader.disable.ipv6 Turns off ipv6 for the downloader
-h, --help help for this command
--log.console.json Format console logs with JSON
--log.console.verbosity string Set the log level for console logs (default "info")
--log.delays Enable block delay logging
--log.dir.disable disable disk logging
--log.dir.json Format file logs with JSON
--log.dir.path string Path to store user and error logs to disk
--log.dir.prefix string The file name prefix for logs stored to disk
--log.dir.verbosity string Set the log verbosity for logs stored to disk (default "info")
--log.json Format console logs with JSON
--metrics Enable metrics collection and reporting
--metrics.addr string Enable stand-alone metrics HTTP server listening interface (default "127.0.0.1")
--metrics.port int Metrics HTTP server listening port (default 6061)
--nat string NAT port mapping mechanism (any|none|upnp|pmp|stun|extip:<IP>)
"" or "none" Default - do not nat
"extip:77.12.33.4" Will assume the local machine is reachable on the given IP
"any" Uses the first auto-detected mechanism
"upnp" Uses the Universal Plug and Play protocol
"pmp" Uses NAT-PMP with an auto-detected gateway address
"pmp:192.168.0.1" Uses NAT-PMP with the given gateway address
"stun" Uses STUN to detect an external IP using a default server
"stun:<server>" Uses STUN to detect an external IP using the given server (host:port)
--pprof Enable the pprof HTTP server
--pprof.addr string pprof HTTP server listening interface (default "127.0.0.1")
--pprof.cpuprofile string Write CPU profile to the given file
--pprof.port int pprof HTTP server listening port (default 6060)
--seedbox Turns downloader into independent (doesn't need Erigon) software which discover/download/seed new files - useful for Erigon network, and can work on very cheap hardware. It will: 1) download .torrent from webseed 2) download new files after upgrade 3) we planing add discovery of new files soon
--torrent.conns.perfile int Number of connections per file (default 10)
--torrent.download.rate string Bytes per second, example: 32mb (default "128mb")
--torrent.download.slots int Amount of files to download in parallel. (default 128)
--torrent.maxpeers int Unused parameter (reserved for future use) (default 100)
--torrent.port int Port to listen and serve BitTorrent protocol (default 42069)
--torrent.staticpeers string Comma separated host:port to connect to
--torrent.upload.rate string Bytes per second, example: 32mb (default "4mb")
--torrent.verbosity int 0=silent, 1=error, 2=warn, 3=info, 4=debug, 5=detail (must set --verbosity to equal or higher level and has default: 2) (default 2)
--trace string Write execution trace to the given file
--verbosity string Set the log level for console logs (default "info")
--verify Verify snapshots on startup. It will not report problems found, but re-download broken pieces.
--verify.failfast Stop on first found error. Report it and exit
--verify.files string Limit list of files to verify
--webseed string Comma-separated URL's, holding metadata about network-support infrastructure (like S3 buckets with snapshots, bootnodes, etc...)
Use " [command] --help" for more information about a command.
Running an Op-Node Alongside Erigon
To run an op-node alongside Erigon, follow these steps:
- Start Erigon with Caplin enabled:
If Caplin is running as the consensus layer (CL), use the --caplin.blobs-immediate-backfill flag to ensure the last 18 days of blobs are backfilled, which is critical for proper synchronization with the op-node, assuming you start from a snapshot.
./build/bin/erigon --caplin.blobs-immediate-backfill
- Run the op-node:
Configure the op-node with the --l1.trustrpc flag to trust the Erigon RPC layer as the L1 node. This setup ensures smooth communication and synchronization.
This configuration enables the op-node to function effectively with Erigon serving as both the L1 node and the CL.
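As an illustration, a minimal op-node invocation might look like the sketch below. The endpoint URLs and ports are assumptions for a single-machine setup; consult your rollup configuration for the real values.

```shell
# Sketch: all endpoints below are assumptions for a local setup.
# --l1 points at Erigon's JSON-RPC, --l1.trustrpc trusts it as the L1 node,
# --l1.beacon points at Caplin's Beacon API (enable it in Erigon first),
# and --l2 points at the L2 execution engine's Engine API.
op-node \
  --l1=http://localhost:8545 \
  --l1.trustrpc \
  --l1.beacon=http://localhost:5555 \
  --l2=http://localhost:8551
```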
TLS Authentication
TLS authentication can be enabled to ensure communication integrity and access control to the Erigon node.
At a high level, the process consists of:
- Generate the Certificate Authority (CA) key pair.
- Create the Certificate Authority certificate file
- Generate a key pair
- Create the certificate file for each public key
- Deploy the files to each instance
- Run Erigon and RPCdaemon with the correct tags
The following is a detailed description of how to use the OpenSSL suite of tools to secure the connection between a remote Erigon node and a remote or local RPCdaemon.
The same procedure applies to any Erigon component you wish to run separately; it is recommended to name the files accordingly.
Warning
To maintain a high level of security, it is recommended to create all the keys locally and then copy the 3 required files to the remote node.
To install openssl open your terminal and paste:
sudo apt install openssl
1. Generating the key pair for the Certificate Authority (CA)
Generate the CA key pair using Elliptic Curve (as opposed to RSA). The generated CA key will be in the CA-key.pem
file.
Warning
Access to this file will allow anyone to later add any new instance key pair to the “cluster of trust”, so keep this file safe.
openssl ecparam -name prime256v1 -genkey -noout -out CA-key.pem
2. Creating the CA certificate file
Create CA self-signed certificate (this command will ask questions, the answers aren’t important for now, but at least the first one needs to be filled in with some data). The file created by this command will be called CA-cert.pem
:
openssl req -x509 -new -nodes -key CA-key.pem -sha256 -days 3650 -out CA-cert.pem
3. Generating a key pair
Generate a key pair for the Erigon node:
openssl ecparam -name prime256v1 -genkey -noout -out erigon-key.pem
Also generate a key pair for the RPC daemon:
openssl ecparam -name prime256v1 -genkey -noout -out RPC-key.pem
4. Creating the certificate file for each public key
Now create the Certificate Signing Request for the Erigon key pair:
openssl req -new -key erigon-key.pem -out erigon.csr
From this request, produce the certificate (signed by the CA) that proves that this key is now part of the “cluster of trust”:
openssl x509 -req -in erigon.csr -CA CA-cert.pem -CAkey CA-key.pem -CAcreateserial -out erigon.crt -days 3650 -sha256
Then create the certificate signing request for the RPC daemon key pair:
openssl req -new -key RPC-key.pem -out RPC.csr
From this request, produce the certificate (signed by CA), proving that this key is now part of the “cluster of trust”:
openssl x509 -req -in RPC.csr -CA CA-cert.pem -CAkey CA-key.pem -CAcreateserial -out RPC.crt -days 3650 -sha256
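Before deploying, you can optionally confirm that each certificate is correctly signed by the CA; on success, openssl verify prints the file name followed by ": OK":

```shell
# Both certificates should verify against the CA certificate
openssl verify -CAfile CA-cert.pem erigon.crt
openssl verify -CAfile CA-cert.pem RPC.crt
```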
5. Deploy the files on each instance
These three files must be placed in the /erigon folder on the machine running Erigon:
CA-cert.pem
erigon-key.pem
erigon.crt
On the RPCdaemon machine, these three files must also be placed in the /erigon folder:
CA-cert.pem
RPC-key.pem
RPC.crt
6. Run Erigon and RPCdaemon with the correct tags
Once all the files have been moved, Erigon must be run with these additional options:
--tls --tls.cacert CA-cert.pem --tls.key erigon-key.pem --tls.cert erigon.crt
While the RPC daemon must be started with these additional options:
--tls.key RPC-key.pem --tls.cacert CA-cert.pem --tls.cert RPC.crt
Warning
Normally, the "client side" (in our case, the RPC daemon) checks that the server's host name matches the "Common Name" attribute of the "server" certificate. This check is currently disabled and will be re-enabled once the instructions above are updated to cover generating certificates with the correct Common Name. For example, if you are running the Erigon instance in Google Cloud, you will need to specify the internal IP in the --private.api.addr option. You will also need to open the firewall on the port you use to connect to the Erigon instances.
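Putting step 6 together, the two processes might be started as follows. This is an illustrative sketch only: the private API address, the file paths, and the choice of port are assumptions for a typical setup, not prescriptions from this guide.

```shell
# On the Erigon machine (certificates in /erigon as deployed in step 5):
erigon --private.api.addr=0.0.0.0:9090 \
  --tls --tls.cacert /erigon/CA-cert.pem \
  --tls.key /erigon/erigon-key.pem --tls.cert /erigon/erigon.crt

# On the RPC daemon machine, pointing at the Erigon instance:
rpcdaemon --private.api.addr=<erigon-host>:9090 \
  --tls.key /erigon/RPC-key.pem --tls.cacert /erigon/CA-cert.pem \
  --tls.cert /erigon/RPC.crt
```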
Performance Tricks
These instructions are designed to improve the performance of Erigon 3, particularly for synchronization and memory management, on cloud drives and other systems with specific performance characteristics.
Increase Sync Speed
- Set --sync.loop.block.limit=10_000 and --batchSize=2g to speed up the synchronization process:
--sync.loop.block.limit=10_000 --batchSize=2g
Optimize for Cloud Drives
- Set SNAPSHOT_MADV_RND=false to enable the operating system's cache prefetching, which improves performance on cloud drives with good throughput but poor latency:
SNAPSHOT_MADV_RND=false
Lock Latest State in RAM
- Use vmtouch to lock the latest state in RAM, preventing it from being evicted by heavy historical RPC traffic:
vmtouch -vdlw /mnt/erigon/snapshots/domain/*bt
- Run the following to apply the same locking to all relevant files:
ls /mnt/erigon/snapshots/domain/*.kv | parallel vmtouch -vdlw
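If GNU parallel is not installed, the same per-file locking can be done with xargs; a sketch using the same paths as the examples above:

```shell
# Run vmtouch once per .kv file, equivalent to the parallel example
ls /mnt/erigon/snapshots/domain/*.kv | xargs -n1 vmtouch -vdlw
```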
Handle Memory Allocation Issues
- If you encounter issues with memory allocation, run the following to flush any pending write operations and free up memory:
sync && sudo sysctl vm.drop_caches=3
- Alternatively, request kernel memory compaction to help with allocation:
echo 1 > /proc/sys/vm/compact_memory
Staking
How to propose and validate blocks with Erigon
Erigon is a comprehensive execution and consensus layer that also supports staking, aka block production, for Ethereum and Gnosis Chain. Both remote miners and Caplin are supported.
- Using an external consensus client as validator;
- Using Caplin as validator.
Using an external consensus client as validator
To enable external consensus clients, add the flags:
--mine --miner.etherbase=...
or
--mine --miner.sigfile=...
Other supported options are:
- --miner.notify: Comma-separated HTTP URL list to notify of new work packages
- --miner.gaslimit: Target gas limit for mined blocks (default: 36000000)
- --miner.etherbase: Public address for block mining rewards (default: "0")
- --miner.extradata: Block extra data set by the miner (default: client version)
- --miner.noverify: Disable remote sealing verification (default: false)
- --miner.sigfile: Private key to sign blocks with
- --miner.recommit: Time interval to recreate the block being mined (default: 3s)
- --miner.gasprice: Minimum gas price for mined transactions
- --miner.gastarget: Maximum amount of gas that can be spent during a transaction
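As a sketch, a block-producing node using an external consensus client might combine these options as follows; the address and values are placeholders, not recommendations:

```shell
erigon --chain=mainnet \
  --mine \
  --miner.etherbase=<your_eth_address> \
  --miner.extradata="my-node" \
  --miner.gaslimit=36000000
```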
Using Caplin as validator
Running Erigon with Caplin as validator
Caplin is also suitable for staking. However, it must be paired with a validator key manager, such as Lighthouse or Teku, since it does not have a native key management system.
This guide explains how to use Erigon with its embedded Caplin consensus layer and Lighthouse as the validator client for staking on Ethereum.
1. Start Erigon with Caplin
Run the following command to start Erigon with the embedded Caplin consensus layer and the Beacon API enabled:
erigon \
--datadir=/data/erigon \
--chain=mainnet \
--prune.mode=full \
--http \
--http.addr=0.0.0.0 \
--http.port=8545 \
--http.api=engine,eth,net,web3 \
--ws \
--ws.port=8546 \
--caplin.enable-upnp \
--caplin.discovery.addr=0.0.0.0 \
--caplin.discovery.port=4000 \
--caplin.discovery.tcpport=4001 \
--beacon.api=beacon,validator,builder,config,debug,events,node,lighthouse
Flags Explanation:
- Execution Layer:
  - --http.api=engine,eth,net,web3: enables the necessary APIs for external clients and Caplin.
  - --ws: enables WebSocket-based communication (optional).
- Consensus Layer (Caplin):
  - --caplin.discovery.addr and --caplin.discovery.port: configure Caplin's gossip and discovery layer.
  - --beacon.api=beacon,validator,builder,config,debug,events,node,lighthouse: enables all possible API endpoints for the validator client.
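Once Erigon is up, you can confirm the Beacon API is reachable with the standard Beacon API node endpoints, assuming Caplin serves them on port 5555 (the address used by the Lighthouse configuration below):

```shell
# Returns HTTP 200 when the node is synced and healthy
curl -i http://127.0.0.1:5555/eth/v1/node/health
# Reports the client version as JSON
curl http://127.0.0.1:5555/eth/v1/node/version
```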
2. Set Up Lighthouse Validator Client
2.1 Install Lighthouse
Download the latest Lighthouse binary:
curl -LO https://github.com/sigp/lighthouse/releases/latest/download/lighthouse
chmod +x lighthouse
sudo mv lighthouse /usr/local/bin/
Or, use Docker:
docker pull sigp/lighthouse:latest
2.2. Create Lighthouse Validator Key Directory
mkdir -p ~/.lighthouse/validators
2.3. Run Lighthouse Validator Client
Start the validator client and connect it to the Caplin consensus layer:
lighthouse vc \
--network mainnet \
--beacon-nodes http://127.0.0.1:5555 \
--suggested-fee-recipient=<your_eth_address>
Flags Explanation:
- --network mainnet: specifies the Ethereum mainnet.
- --beacon-nodes: points to the Caplin beacon API at http://127.0.0.1:5555.
- --suggested-fee-recipient: specifies your Ethereum address for block rewards.
2.4. Import Validator Keys
If you have existing validator keys, import them:
lighthouse account validator import --directory <path_to_validator_keys>
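If you installed Lighthouse via Docker, steps 2.3 and 2.4 can be run in a container instead. This is a sketch for a typical setup; the volume path and host address are assumptions:

```shell
docker run -d --name lighthouse-vc \
  -v "$HOME/.lighthouse:/root/.lighthouse" \
  sigp/lighthouse:latest \
  lighthouse vc --network mainnet \
  --beacon-nodes http://<host-ip>:5555 \
  --suggested-fee-recipient=<your_eth_address>
```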
Tools

Diagnostic Tool
As the Erigon ecosystem expands, so does the demand for an effective system to diagnose and resolve user issues. The Erigon Diagnostics Tool offers a simplified approach to pinpointing the underlying causes of problems faced by Erigon users, be they individuals, companies running Erigon internally, or enterprises granting Erigon node access to others.
Key Features
The Erigon Diagnostics tool offers the following features:
- Automated Data Collection: The tool gathers essential information about the user's Erigon node, including the Erigon version, system parameters, and recent console output, without requiring extensive manual input from the user.
- Interactive Diagnostics: When additional data is needed to pinpoint an issue, the tool guides the user through targeted data collection to streamline troubleshooting.
- Diagnostic Reporting: The tool generates comprehensive diagnostic reports, making it easier for the Erigon development team to analyze issues and provide effective solutions.
Installation
Setup
User Interface
Otterscan
Otterscan is an Ethereum block explorer designed to be run locally along with Erigon.
Entirely based on open source code, it is blazing fast and fully private, since it runs on your local machine. The user interface is intentionally very similar to the most popular Ethereum block explorer, with many improvements, so that it is easy to locate information.
Installation and usage instructions
For the installation and usage follow the official documentation: https://docs.otterscan.io/
Frequently Asked Questions

About
This book is open source, contribute at https://github.com/erigontech/docs.
The Erigon CI/CD system maintains a hosted version of the unstable
branch at https://development.erigon-documentation-preview.pages.dev/.
This book is built on mdbook.
License
The Erigon 3 Book © 2024 by Erigon contributors is licensed under CC BY 4.0.
Contributing to Erigon 3
Development
Erigon is an open-source project that welcomes contributions from developers worldwide who are passionate about advancing the Ethereum ecosystem. Bounties may be offered for noteworthy contributions, as the team is committed to continuously enhancing the tool to better serve the Erigon community.
Programmer's Guide
Begin by exploring the comprehensive Programmer's Guide, which covers topics such as the Ethereum state structure, account contents, and account addressing mechanisms. This guide serves as a valuable resource, providing detailed information on Erigon's architecture, coding conventions, and development workflows related to managing and interacting with the Ethereum state.
Dive Deeper into the Architecture
For those interested in gaining a deeper understanding of Erigon's underlying architecture, visit the following resources:
- DB Walk-through: This document provides a detailed walk-through of Erigon's database structure. It explains how Erigon organizes persistent data into tables like PlainState for accounts and storage, History Of Accounts for tracking account changes, and Change Sets for optimized binary searches on changes. It contrasts Erigon's approach with go-ethereum's use of the Merkle Patricia Trie.
- Database FAQ: The Database FAQ addresses common questions and concerns related to Erigon's database design. It covers how to directly read the database via gRPC or while Erigon is running, details on the MDBX storage engine and RAM usage model, and points to further resources on the database interface rationale and architecture.
Feature Exploration
Erigon introduces several innovative features that contributors may find interesting to explore and contribute to:
- DupSort Feature Explanation: Erigon's DupSort feature optimizes storage and retrieval of duplicate data by utilizing prefixes for keys in databases without the concept of "Buckets/Tables/Collections" or by creating tables for efficient storage with named "Buckets/Tables/Collections."
- EVM without Opcodes (Ether Transfers Only): Erigon explores a simplified version of the Ethereum Virtual Machine (EVM) focusing solely on ether transfers, offering an efficient execution environment for specific use cases.
Wiki
Visit also Erigon's Wiki to gain more important insights:
- Caplin downloader sync
- Choice of storage engine
- Consensus Engine separation
- Criteria for transitioning from Alpha to Beta
- Erigon Beta 1 announcement
- Erigon2 prototype
- EVM with abstract interpretation and backtracking
- Header downloader
- LMDB freelist
- LMDB freelist illustrated guide
- State sync design
- TEVM Trans-piled EVM: accelerate EVM improvement R&D, but learning from eWASM
- Transaction Pool Design
- Using Postman to test RPC.
Documentation
To contribute to this documentation, commit your changes to the development branch on GitHub. You may want to build the book locally to verify the output before committing; see how mdBook works here.
Donate
Driving Ethereum progress through community-supported research and development
Erigon Technologies AG is a non-profit project dedicated to advancing Ethereum technology for the public good. Our work is funded entirely by grants from blockchain companies and donations from our community.
Your contribution will help us to:
- Advance the Ethereum protocol for a more secure, efficient, and user-friendly experience
- Foster innovation, collaboration, and open-source development
- Empower individuals and organizations to harness blockchain technology
Every donation brings us closer to a more decentralized, equitable, and connected world.
Support Erigon's mission today and help shape the future of Ethereum by donating to our Gitcoin grant address 0x8BFBB529A9E85fDC4b70A4FCdC0D68Bb298B8816.
How to reach us
The Erigon Technologies AG office is located in the CV Labs in Zug:
Erigon Technologies AG
Damstrasse 16
6300 Zug
Switzerland
Erigon Discord Server
The most important discussions take place on the Discord server, where some support is also provided. To get an invite, send an email to bloxster [at] proton.me with your name, profession, a short explanation of why you want to join the Discord server, and how you heard about Erigon.