Running Your Own Hub

To see the hub in action on Rinkeby or Mainnet, check out the Dai Card.

Hub API reference and configurable parameters can be found here.


With Docker

To get started locally, make sure you have the following prerequisites installed:

  • Node 9+ and NPM

  • Docker

  • Make: (probably already installed) Install with brew install make or apt install make or similar.

  • jq: (probably not installed yet) Install with brew install jq or apt install jq or similar.

Then, open a terminal window and run the following:

git clone
cd indra
npm start

Starting Indra will take a while the first time, but the build gets cached, so subsequent starts will be much faster. While it’s building, configure MetaMask to use the RPC endpoint localhost:3000/api/eth and import the hub’s private key (659CBB0E2411A44DB63778987B1E22153C086A95EB6B18BDF89DE078917ABC63) so you can easily send money to the signing wallet.

Before trying to make payments, ensure that the hub is collateralized by sending tokens to the contract address.

Once all the components are up and running, navigate to http://localhost/ to check out the sandbox. To implement the client on your own frontend, check out the Getting Started guide.
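Once the stack is up, a quick way to confirm the proxied eth provider is responding is to send it a JSON-RPC query. This is a sketch (the make_rpc_payload helper is hypothetical, not part of the repo); it assumes the dev proxy from npm start is listening on localhost:3000:

```shell
# Build a JSON-RPC payload (the same shape the repo's curl helper uses) and
# send it to the dev stack's eth provider. make_rpc_payload is a hypothetical
# helper, named here for illustration only.
make_rpc_payload() {
  # $1 = method name, $2 = params as a JSON array string
  printf '{"id":31415,"jsonrpc":"2.0","method":"%s","params":%s}' "$1" "$2"
}
payload="$(make_rpc_payload net_version '[]')"
echo "$payload"
# Requires the stack from `npm start` to be running:
# curl -s -H "Content-Type: application/json" -X POST --data "$payload" http://localhost:3000/api/eth
```

A valid JSON response with a network id means the proxy and ethprovider are alive; a connection error points at the docker stack instead.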

Without Docker

Make sure you have the following prerequisites installed:

  • PostgreSQL running locally: brew install postgres for Mac. See here for Linux.

  • Redis running locally: brew install redis for Mac. sudo apt-get install redis for Linux.

  • Yarn: brew install yarn for Mac. sudo apt-get install yarn for Linux.

Before starting, make sure your PostgreSQL and Redis services are running: brew services start postgresql and brew services start redis on Mac.
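If you’re unsure whether those services are actually listening, here is a small bash-only check (a sketch using bash’s /dev/tcp redirection; the default ports 5432 and 6379 are assumptions about a stock install):

```shell
# Report whether something is listening on a local port (bash-only: /dev/tcp).
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1 ($2): open"
  else
    echo "port $1 ($2): closed"
  fi
}
check_port 5432 postgres   # default PostgreSQL port
check_port 6379 redis      # default Redis port
```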

Next, run the following steps in order. For each section, use a separate terminal window. Closing the terminal window will stop the process.


Run the following from modules/hub.

  • yarn install - Install dependencies.

  • bash development/ganache-reset - Migrates the contracts.

  • bash development/ganache-run - Runs Ganache (if you put a number after the ganache-run command you can set the blocktime).


Run the following from modules/hub.

  • createdb sc-hub - Creates the hub’s database (if it already exists, skip this step).

  • bash development/hub-reset - Resets the hub’s database.

  • bash development/hub-run - Runs hub and chainsaw.


Run the following from modules/wallet.

  • Add the following to a file called .env inside modules/wallet. Do not commit this file to Git:

  • npm install - Install dependencies.

  • npm start - Runs the local dev server at http://localhost:3000.

  • Set up Metamask to use one of the following accounts:

Address: 0xFB482f8f779fd96A857f1486471524808B97452D

Private Key: 09cd8192c4ad4dd3b023a8ef381a24d29266ebd4af88ecdac92ec874e1c2fed8 (hub’s account, contains tokens)

Address: 0x2DA565caa7037Eb198393181089e92181ef5Fb53

Private Key: 54dec5a04356ed96fc469803f3e45b901c69c5d5fd93a34fbf3568cd4c6efadd

Deploying to Production

Tweak, check, tweak, check, commit. Time to deploy?

First, setup CircleCI Environment Variables

Once per CircleCI account or organization

Run ssh-keygen -t rsa -b 4096 -C "circleci" -m pem -f .ssh/circleci to generate a new ssh key pair. Load the private key (.ssh/circleci) into CircleCI -> Settings -> Permissions -> SSH Permissions.

Go to CircleCI -> Settings -> Build Settings -> Environment Variables

  • DOCKER_USER & DOCKER_PASSWORD: Login credentials for someone with push access to the docker repository specified by the repository vars at the top of the Makefile & ops/

  • STAGING_URL & RINKEBY_URL & MAINNET_URL: The URLs from which the Indra application will be served.

  • AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY: (Optional) To enable database backup to remote AWS S3 storage

If these variables are set, then:

  • DNS needs to be properly configured so that each URL resolves to the IP address of its server

  • The admin should have ssh access via ssh root@$STAGING_URL or ssh ubuntu@$STAGING_URL after completing the next step.

  • The application will be accessible from these URLs after deploying.

Second, set up the production server

Once per server

First, copy your hub’s private key to your clipboard (I usually load my mnemonic into metamask and then export the private key).

Then, run the following script (for best results, run it with a $SERVER_IP that points to a fresh Ubuntu VM):

bash ops/ $SERVER_IP

To run the setup script, we need to be able to use the above ssh key to access either root@$SERVER_IP or ubuntu@$SERVER_IP. If root, this script will set up the ubuntu user and disable root login for security.

This setup script expects to find the private key for ssh access to the server in ~/.ssh/connext-aws & CircleCI’s public key in ~/.ssh/

By default, this script will load your hub’s rinkeby private key into a docker secret stored on the server. To set up a server for another network (eg mainnet), add a network arg to the end, eg: bash ops/ $SERVER_IP mainnet

You can remove the server’s private key like this:

(Make sure that this server doesn’t have a hub running on it before removing its key)

ssh -t -i ~/.ssh/connext-aws ubuntu@$SERVER_IP docker secret rm hub_key_mainnet

And add a new private key like this:

ssh -t -i ~/.ssh/connext-aws ubuntu@$SERVER_IP bash indra/ops/ hub_key_mainnet

Second, deploy the contracts

To deploy the ChannelManager contract & dependencies to Rinkeby:

bash ops/ rinkeby

This script will prompt you to paste in the hub wallet’s private key if a secret called hub_key_rinkeby hasn’t already been saved to the secret store; this address will be used to deploy the contracts. See saved secrets with docker secret ls.

The contract deployment script will save the addresses of your deployed contracts in modules/contracts/ops/address-book.json. This file is automatically generated and you probably won’t need to mess with it. One exception: if you want to redeploy some contract(s), then delete their addresses from the address book & re-run the above deployment script.

We have committed the address book that the Connext team is using to launch the Dai Card & will be tracking these changes via git. If you want to deploy an independent hub then, after running the above contract deployment script, copy the modified address book to the project root: cp modules/contracts/ops/address-book.json address-book.json.

An address book in the project root will be ignored by git and will take priority over the one in the contracts module.
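The precedence rule above can be checked from the repo root with a one-liner (an illustrative sketch, not a script from the repo):

```shell
# Determine which address-book.json takes priority, per the rule above:
# a copy in the project root wins over the one in the contracts module.
if [ -f address-book.json ]; then
  active="address-book.json (project root, overrides the contracts module)"
else
  active="modules/contracts/ops/address-book.json (default)"
fi
echo "active address book: $active"
```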

You can upload a custom address book to your prod server’s project root like this:

scp -i ~/.ssh/connext-aws address-book.json ubuntu@$SERVER_IP:~/indra/

Third, activate the CI pipeline

git push

This will trigger the CI pipeline that will run all test suites and, if none fail, deploy this app to production.

Pushing to any branch other than master will trigger a deployment to the server at $STAGING_URL specified by CircleCI. Pushing or merging into master will deploy to the servers at $RINKEBY_URL and $MAINNET_URL.

If you haven’t set up CircleCI yet or need to deploy a hotfix immediately, you can run the following:

# push docker images tagged :latest to docker hub
make push

# make sure that the indra repo is available on the prod server
ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP bash -c 'git clone || true'

# make sure that the remote repo is up-to-date with master
ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP bash -c 'cd indra && git fetch && git reset --hard origin/master'

# Having a mode != "live" will deploy :latest images rather than ones tagged w an explicit version
ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP bash -c 'cd indra && MODE=hotfix ops/ prod'

Beware, CircleCI manages the env vars previously mentioned. If you don’t deploy via CircleCI, then you need to manage these env vars manually by adding them to the server’s ~/.bashrc. Check out the server’s current env vars with: ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP env and make sure it looks good before doing a manual deployment.

Ongoing: Dealing with issues in production

Monitor the prod hub’s logs with

ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP bash indra/ops/ hub

The ChannelManager contract needs collateral to keep doing its thing. Make sure the hub’s wallet has enough funds before deploying. Funds can be moved from the hub’s wallet to the contract manually via:

# To move Eth:
ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP bash indra/ops/ 3.14 eth
# To move tokens:
ssh -i ~/.ssh/connext-aws ubuntu@SERVER_IP bash indra/ops/ 1000 token

How to interact with an Indra hub

A prod-mode indra hub exposes the following API (source):

  • /api/hub is the prefix for the hub’s api

  • /api/hub/config returns the hub’s config

  • /api/eth connects to the hub’s eth provider

  • /api/dashboard connects to a server that gives the admin dashboard its info

  • /dashboard/ serves html/css/js for the dashboard client

  • anything else: redirects the user to a daicard client
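A quick smoke test against the config route confirms a deployed hub is alive (a sketch; HUB_URL is an assumed variable standing in for your deployment’s base URL):

```shell
# Probe the hub's public config route and report the HTTP status code.
# A 200 means the hub is up and serving its config; 000 means no response.
HUB_URL="${HUB_URL:-http://localhost}"
code="$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$HUB_URL/api/hub/config")" || true
echo "/api/hub/config -> HTTP $code"
```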

...from a Dai Card

Dai card in production runs a proxy with endpoints:

  • /api/rinkeby/hub ->

  • /api/rinkeby/eth ->

  • /api/mainnet/hub ->

  • /api/mainnet/eth ->

  • anything else: serves the daicard html/css/js files


If you encounter problems while the app is running, the first thing to do is check the logs of each component:

  • bash ops/ chainsaw (The source code in the hub module powers both the hub and the chainsaw)

  • bash ops/ dashboard (The node server powering the dashboard)

  • bash ops/ dashboard_client (The webpack dev server that hot-reloads the dashboard UI)

  • bash ops/ database

  • bash ops/ ethprovider (The migrations-runner aka contract deployer)

  • bash ops/ ganache (The dev-mode ethprovider. Runs as a child process inside the ethprovider and outputs logs to modules/contracts/ops/ganache.log)

  • bash ops/ proxy

The container name "/indra_buidler" is already in use

Full error message:

docker: Error response from daemon: Conflict. The container name "/indra_buidler" is already in use by container "6d37b932d8047e16f4a8fdf58780fe6974e6beef58bf4cc5e48d00d3e94a67c3". You have to remove (or rename) that container to be able to reuse that name.

You probably started to build something and then stopped it with ctrl-c. It only looks like the build stopped: the builder process is still hanging out in the background wrapping up what it was working on. If you wait for a few seconds, this problem will usually go away as the builder finishes & exits.

To speed things up, run make stop to tell the builder to hurry up and finish.

Improperly installed dependencies

You’ll notice this from an error like the following in some module’s logs:

2019-03-04T15:13:46.213763000Z internal/modules/cjs/loader.js:718
2019-03-04T15:13:46.213801600Z   return process.dlopen(module, path.toNamespacedPath(filename));
2019-03-04T15:13:46.213822300Z                  ^
2019-03-04T15:13:46.213862700Z Error: Error loading shared library /root/node_modules/scrypt/build/Release/scrypt.node: Exec format error
2019-03-04T15:13:46.213882900Z     at Object.Module._extensions..node (internal/modules/cjs/loader.js:718:18)
2019-03-04T15:13:46.213903000Z     at Module.load (internal/modules/cjs/loader.js:599:32)
2019-03-04T15:13:46.213923100Z     at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
2019-03-04T15:13:46.213943100Z     at Function.Module._load (internal/modules/cjs/loader.js:530:3)
2019-03-04T15:13:46.213963100Z     at Module.require (internal/modules/cjs/loader.js:637:17)
2019-03-04T15:13:46.213983100Z     at require (internal/modules/cjs/helpers.js:22:18)
2019-03-04T15:13:46.214003200Z     at Object.<anonymous> (/root/node_modules/scrypt/index.js:3:20)
2019-03-04T15:13:46.214023700Z     at Module._compile (internal/modules/cjs/loader.js:689:30)

If you noticed this error in the hub or chainsaw, for example, you can reinstall dependencies by running make reset-hub && npm start.

This happens when you run npm install manually and then try to deploy the app using docker. Some dependencies (eg scrypt) include C code that needs to be compiled. If they get compiled for your local machine, they won’t work in docker & vice versa.

Ethprovider or Ganache not working

The following helper script sends a JSON-RPC query to the eth provider:
url=$ETH_PROVIDER; [[ $url ]] || url=http://localhost:8545
echo "Sending $1 query to provider: $url"
curl -H "Content-Type: application/json" -X POST --data '{"id":31415,"jsonrpc":"2.0","method":"'$1'","params":'$2'}' $url

This lets us do a simple bash net_version '[]' as a sanity check to make sure the ethprovider is alive & listening. If not, curl might give more useful errors that direct you towards investigating either metamask or ganache.

One other sanity check is to run docker service ls and make sure that you see an ethprovider service that has port 8545 exposed.

You can also run docker exec -it indra_ethprovider.1.<containerId> bash to start a shell inside the docker container. Even if there are networking issues between the container & host, you can still ping localhost:8545 here to see if ganache is listening & run ps to see if it’s even alive.

Ganache should dump its logs onto your host, and you can print/follow them with tail -f modules/contracts/ops/ganache.log as another way to make sure it’s alive. Try deleting this file and running npm restart to see if it gets recreated; if so, check whether there is anything suspicious there.

Have you tried turning it off and back on again?

Restarting: the debugger’s most useful tool.

Some problems will be fixed by just restarting the app so try this first: npm restart (takes about 60 seconds if nothing needs to be rebuilt)

If this doesn’t work, try resetting all persistent data (database + the ethprovider’s chain data) and starting the app again: npm run reset && npm start (This takes about 90 seconds). After doing this, you’ll likely need to reset your MetaMask account to get your tx nonces synced up correctly.

If that doesn’t work either, try rebuilding everything with npm run rebuild && npm start. (Takes about 7 minutes to complete)

make purge && npm start is the most aggressive option because it completely resets the app as if you deleted the repo and recloned it. This should be an option of last resort because it usually takes more than 10 minutes to reinstall all the dependencies & rebuild everything. Review the above troubleshooting tips first and, if nothing helps, then give this a shot.