Your p2p demo runs locally. Now what?

March 6, 2025

You've built a peer-to-peer (p2p) demo that hums along on your local machine. Peers connect, messages flow, and for a moment, you're basking in localhost bliss. Then comes the inevitable question: how do you migrate this from your laptop to the cloud?

Enter deployer, a CLI and library that bridges the gap between localhost and remote host. Roll custom binaries and configurations to instances in multiple regions, configure networking policies that allow peers to talk to each other, automatically collect metrics and logs, and clean up when you're done.

Spend your time coding, not tweaking CIDR blocks.

From Localhost to the Cloud

Deploying a p2p application isn't as straightforward as spinning up a web server. You'll need to reckon with provisioning instances across multiple regions, networking rules that let peers reach each other, collecting metrics and logs from every machine, and tearing it all down when you're done.

You could spend weeks configuring VPC peering, wrestling with IAM roles, and praying that your Grafana dashboard actually shows something. Or, you could use deployer.

Introducing deployer

deployer is your one-stop shop for deploying p2p applications in the cloud. It's both a CLI for quick wins and a Rust library when you need to get custom. Think of it as a high-level abstraction over infrastructure APIs with sane defaults and observability built in.

The first deployer dialect, deployer::ec2, is focused on reproducible benchmarking. From a single YAML config, it provisions instances across multiple AWS regions, configures networking so peers can talk to each other, deploys your custom binaries and per-peer configs, and sets up metrics and log collection automatically.

Figure 1: Deployment of custom binaries across multiple AWS regions
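
In practice, that single config file drives the entire lifecycle. Here's a rough sketch of the CLI flow; the create, update, and destroy subcommands are the ones used in the walkthrough below, while the --config flag and file name are assumptions, so check the deployer documentation for the exact arguments.

    # One YAML file, three subcommands (the --config flag is illustrative).
    deployer ec2 create --config config.yaml    # provision instances, networking, and monitoring
    deployer ec2 update --config config.yaml    # push new binaries and configs to running instances
    deployer ec2 destroy --config config.yaml   # tear down everything that was provisioned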

Try it Out: flood

To demonstrate how to use deployer, we built a p2p::authenticated benchmarking tool called flood. flood does one thing: spam peers with as many random messages as possible.

flood leverages deployer to transform a local stress test into a global one. Here's how it works, with a command sketch after the steps:

  1. Setup: The setup binary (included in the flood crate) generates a deployer::ec2 config.yaml file and peer-specific configs from user-specified peer, bootstrapper, region, and performance parameters (like message size and message backlog).
  2. Compile: flood uses Docker to compile the flood binary for ARM64 (deployer::ec2 uses Graviton EC2 instances).
  3. Deploy: deployer ec2 create spins up instances across regions, wires them together, and starts the flood binary on each peer. The custom Grafana dashboard is started at http://monitoring-ip:3000/d/flood.
  4. Tweak: deployer ec2 update deploys new binaries and configs on each peer.
  5. Cleanup: deployer ec2 destroy tears down all provisioned infrastructure.
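
Strung together, a run might look something like the sketch below. The deployer subcommands are the ones named above; the setup invocation and its flags are illustrative stand-ins for the peer, bootstrapper, region, and performance parameters, so grab the exact commands from the README.

    # (1) Generate config.yaml and peer-specific configs (flag names are illustrative).
    cargo run --bin setup -- --peers 25 --bootstrappers 2 \
      --regions us-east-1,eu-west-1 --message-size 1024 --message-backlog 256
    # (2) Compile the flood binary for ARM64 with Docker (exact command in the README).
    # (3) Provision instances, wire them together, and start flood on each peer.
    deployer ec2 create --config config.yaml
    # (4) Roll out tweaked binaries or configs.
    deployer ec2 update --config config.yaml
    # (5) Tear down all provisioned infrastructure.
    deployer ec2 destroy --config config.yaml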

Step through the full walkthrough in the README.

Figure 2: Automatically deployed Grafana dashboard for flood running on c7g.xlarge (4 vCPU, 8GB RAM)

A local demo is a proof of concept; deployer::ec2 makes it a proving ground.