inlets/inlets
Cloud Native Tunnel written in Go
| | |
| --- | --- |
| repo name | inlets/inlets |
| repo link | https://github.com/inlets/inlets |
| homepage | https://docs.inlets.dev |
| language | Go |
| size (curr.) | 4894 kB |
| stars (curr.) | 5849 |
| created | 2018-12-23 |
| license | MIT License |
Inlets is a Cloud Native Tunnel written in Go
Expose your local endpoints to the Internet or to another network, traversing firewalls and NAT.
Intro
inlets combines a reverse proxy and websocket tunnels to expose your internal and development endpoints to the public Internet via an exit-node. An exit-node may be a 5-10 USD VPS or any other computer with a public IPv4 address.
Why do we need this project? Similar tools such as ngrok or Argo Tunnel from Cloudflare are closed-source, have built-in limits, can work out expensive, and have limited support for arm/arm64. Ngrok is also often banned by corporate firewall policies, meaning it can be unusable. Other open-source tunnel tools are designed to only set up a single static tunnel. inlets aims to dynamically bind and discover your local services to DNS entries with automated TLS certificates to a public IP address, over a websocket tunnel.
When combined with SSL, inlets can be used with any corporate HTTP proxy which supports `CONNECT`.
Conceptual diagram for inlets
News: Docs and new SWAG Store launched!
Read our documentation for inlets, inlets-pro, inletsctl and inlets-operator, all under one roof.
- New - docs.inlets.dev
- Buy your own inlets t-shirt, hoodie or mug for self-service orders, or email sales@openfaas.com for a bulk order including shipping discounts.
License & terms
Important
Developers wishing to use inlets within a corporate network are advised to seek approval from their administrators or management before using the tool. By downloading, using, or distributing inlets, you agree to the LICENSE terms & conditions. No warranty or liability is provided.
Who is behind this project?
inlets is brought to you by Alex Ellis. Alex is a CNCF Ambassador and the founder of OpenFaaS.
OpenFaaS® makes it easy for developers to deploy event-driven functions and microservices to Kubernetes without repetitive, boiler-plate coding. Package your code or an existing binary in a Docker image to get a highly scalable endpoint with auto-scaling and metrics. The project has around 19k GitHub stars, over 240 contributors and a growing number of end-users in production.
New SWAG Store launched
Head over to the new OpenFaaS Ltd SWAG store to get your very own t-shirt.
Backlog & goals
Completed
- automatically create endpoints on exit-node based upon client definitions
- multiplex sites on same port and websocket through the use of DNS / host entries
- link encryption using SSL over websockets (`wss://`)
- automatic reconnect
- authentication using service account or basic auth
- automatic TLS provisioning for endpoints using cert-magic
- configure staging or production LetsEncrypt issuer using HTTP01 challenge
- native multi-arch with ARMHF/ARM64 support
- Dockerfile and Kubernetes YAML files
- discover and implement `Service` type of `LoadBalancer` for Kubernetes - inlets-operator
- tunnelling websocket traffic in addition to HTTP(s)
- get a logo for the project
Stretch goals
- automatic configuration of DNS / A records
- configuration to run “exit-node” as serverless container with Azure ACI / AWS Fargate
Inlets PRO
The following features / use-cases are covered by inlets.pro.
- Tunnel L4 TCP traffic in addition to HTTP/s at L7
- Automated TLS - including via inletsctl/inlets-operator
- Commercial services & support
Status
Unlike HTTP 1.1, which follows a synchronous request/response model, websockets use an asynchronous pub/sub model for sending and receiving messages. This presents a challenge for tunneling a synchronous protocol over an asynchronous bus.
inlets 2.0 introduces performance enhancements and leverages parts of the Kubernetes and Rancher API. It uses the same tunnelling packages that enable node-to-node communication in Rancher’s k3s project. It is suitable for development and may be useful in production. Before deploying `inlets` into production, it is advised that you do adequate testing.
Feel free to open issues if you have comments, suggestions or contributions.
- The tunnel link is secured via the `--token` flag using a shared secret
- The default configuration uses websockets without SSL (`ws://`), but to enable encryption you could enable SSL (`wss://`)
- A timeout for requests can be configured via args on the server
- ~~The upstream URL has to be configured on both server and client until a discovery or service advertisement mechanism is added~~ The client can advertise upstream URLs, which it can serve
- The tunnel transport is wrapped by default, which strips CORS headers from responses, but you can disable it with the `--disable-transport-wrapping` flag on the server
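As a minimal sketch of how these flags fit together (the exit-node hostname is a placeholder, and the upstream port matches the quickstart example later in this README):

```sh
# On the exit-node: generate a shared secret and start a token-protected server
export TOKEN=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
inlets server --port=8090 --token="$TOKEN" --disable-transport-wrapping

# On the machine behind NAT: connect with the same token over plain websockets (ws://)
inlets client \
  --remote=exit-node.example.com:8090 \
  --token="$TOKEN" \
  --upstream=http://127.0.0.1:3000
```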
inlets projects
Inlets is a Cloud Native Tunnel and is listed on the Cloud Native Landscape under Service Proxies.
- inlets - Cloud Native Tunnel for L7 / HTTP traffic written in Go
- inlets-pro - Cloud Native Tunnel for L4 TCP
- inlets-operator - Public IPs for your private Kubernetes Services and CRD
- inletsctl - Automate the cloud for fast HTTP (L7) and TCP (L4) tunnels
What are people saying about inlets?
You can share about inlets using `@inletsdev`, `#inletsdev`, and https://inlets.dev.
inlets has trended on the front page of Hacker News twice.
- inlets 1.0 - 146 points, 48 comments
- inlets 2.0 - 218 points, 66 comments
Official tutorials:
- HTTPS for your local endpoints with inlets and Caddy - Alex Ellis
- Build a 10 USD Raspberry Pi Tunnel Gateway - Alex Ellis
- Share work with clients using inlets - Alex Ellis
- Get a LoadBalancer for your private Kubernetes cluster with inlets-operator - Alex Ellis
- Webhooks, great when you can get them - Alex Ellis
- Loan a cloud IP to your minikube cluster - Alex Ellis
Community tutorials:
- The Awesomeness of Inlets by Ruan Bekker
- K8Spin - What does fit in a low resources namespace? Inlets
- Exposing Magnificent Image Classifier with inlets
- “Securely access external applications as Kubernetes Services, from your laptop or from any other host, using inlets”
- Setting up an EC2 Instance as an Inlets Exit Node
- Micro-tutorial inlets with KinD by Alex Ellis
- Using local services in Gitpod with inlets
- Setting up a GCE Instance as an Inlets Exit Node
- Scheduling Kubernetes workloads to Raspberry Pi using Inlets and Crossplane and YouTube Live by Daniel Mangum
- inlets with minikube and IBM Kubernetes Services (IKS) free plan by Carlos Santana
Twitter:
- “I just transferred a 70Gb disk image from a NATed NAS to a remote NATed server with @alexellisuk inlets tunnels and a one-liner python web server” by Roman Dodin
- “Really amazed by inlets by @alexellisuk - Up and running in 15min - I will be able to watch my #RaspberryPi servers running at home while staying on the beach 🏄♂️🌴🍸👏👏👏” by Florian Dambrine
- Testing an OAuth proxy by Vivek Singh
- inlets used at KubeCon to power a live IoT demo at a booth
- PR to support Risc-V by Carlos Eduardo
- Recommended by Michael Hausenblas for use with local Kubernetes
- 5 top facts about inlets by Alex Ellis
- “Cool! I hadn’t heard of inlets until now, but I love the idea of exposing internal services this way. I’ve been using TOR to do this!” by Stephen Foskett, Tech Field Day
- “Learn how to set up HTTPS for your local endpoints with inlets, Caddy, and DigitalOcean thanks to @alexellisuk!” by @DigitalOcean
- “See how Inlets helped me to expose my local endpoints for my homelab that sits behind a Carrier-Grade NAT”
Note: add a PR to send your story or use-case, I’d love to hear from you.
See ADOPTERS.md for what companies are doing with inlets today.
Get started
You can install the CLI with a `curl` utility script, `brew`, or by downloading the binary from the releases page. Once installed, you’ll get the `inlets` command.
Install the CLI
Note: `inlets` is made available free-of-charge, but you can support its ongoing development through GitHub Sponsors 💪
Utility script with `curl`:
# Install to local directory
curl -sLS https://get.inlets.dev | sh
# Install to /usr/local/bin/
curl -sLS https://get.inlets.dev | sudo sh
Via `brew`:
brew install inlets
Note: the `brew` distribution is maintained by the brew team, so it may lag a little behind the GitHub release.
Binaries are made available on the releases page for Linux (x86_64, armhf & arm64), Windows (experimental), and for Darwin (MacOS). You will also find SHA checksums available if you want to verify your download.
Windows users are encouraged to use git bash to install the inlets binary.
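As a hedged example of checking a download (the checksum algorithm and file names here are illustrative - use the actual asset and checksum file published on the releases page):

```sh
# Compute the checksum of the downloaded binary and compare it with the published value
sha256sum inlets
cat inlets.sha256

# Or verify directly, if the checksum file is in sha256sum format
sha256sum -c inlets.sha256
```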
Quickstart tutorial
You can run inlets between any two computers with connectivity; these could be containers, VMs, bare metal, or even “loop-back” on your own laptop.
See how to provision an “exit-node” with a public IPv4 address using a VPS.
- On the exit-node (or server)
Start the tunnel server on a machine with a publicly-accessible IPv4 IP address such as a VPS.
Example with a token for client authentication:
export token=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
inlets server --port=8090 --token="$token"
Note: You can pass the `--token` argument followed by a token value to both the server and client to prevent unauthorized connections to the tunnel.
You can also run unprotected, but this is not recommended:
inlets server --port=8090
Note down your public IPv4 IP address.
- Head over to your machine where you are running a sample service, or something you want to expose.
You can use my hash-browns service for instance which generates hashes.
Install hash-browns or run your own HTTP server
export GO111MODULE=off
export GOPATH=$HOME/go/
go get -u github.com/alexellis/hash-browns
port=3000 $GOPATH/bin/hash-browns
If you don’t have Go installed, then you could run Python’s built-in HTTP server:
mkdir -p /tmp/inlets-test/
cd /tmp/inlets-test/
touch hello-world
python -m SimpleHTTPServer 3000
- On the same machine, start the inlets client
Start the tunnel client:
export REMOTE="127.0.0.1:8090" # for testing inlets on your laptop, replace with the public IPv4
export TOKEN="CLIENT-TOKEN-HERE" # the client token is found on your VPS or on start-up of "inlets server"
inlets client \
--remote=$REMOTE \
--upstream=http://127.0.0.1:3000 \
--token $TOKEN
- Replace `--remote` with the address where your exit-node is running `inlets server`.
- Replace `--token` with the value from your server.
We now have three processes:
- example service running (hash-browns) or Python’s webserver
- an exit-node running the tunnel server (`inlets server`)
- a client running the tunnel client (`inlets client`)
So send a request to the inlets server - use its domain name or IP address:
Assuming `gateway.mydomain.tk` points to `127.0.0.1` in `/etc/hosts` or your DNS server.
curl -d "hash this" http://127.0.0.1:8090/hash -H "Host: gateway.mydomain.tk"
# or
curl -d "hash this" http://127.0.0.1:8090/hash
# or
curl -d "hash this" http://gateway.mydomain.tk/hash
You will see the traffic pass between the exit node / server and your development machine. You’ll see the hash message appear in the logs as below:
~/go/src/github.com/alexellis/hash-browns$ port=3000 go run server.go
2018/12/23 20:15:00 Listening on port: 3000
"hash this"
Now check the metrics endpoint which is built into the hash-browns example service:
curl $REMOTE/metrics | grep hash
You can also use multiple domain names and tie them back to different internal services.
Here we start the Python server on two different ports, serving content from two different locations and then map it to two different Host headers, or domain names:
mkdir -p /tmp/store1
cd /tmp/store1/
touch hello-store-1
python -m SimpleHTTPServer 8001 &
mkdir -p /tmp/store2
cd /tmp/store2/
touch hello-store-2
python -m SimpleHTTPServer 8002 &
export REMOTE="127.0.0.1:8090" # for testing inlets on your laptop, replace with the public IPv4
export TOKEN="CLIENT-TOKEN-HERE" # the client token is found on your VPS or on start-up of "inlets server"
inlets client \
--remote=$REMOTE \
--token $TOKEN \
--upstream="store1.example.com=http://127.0.0.1:8001,store2.example.com=http://127.0.0.1:8002"
You can now create two DNS entries or `/etc/hosts` file entries for `store1.example.com` and `store2.example.com`, then connect through your browser.
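For local testing without real DNS records, a sketch of the `/etc/hosts` approach (the IP below is a placeholder for your exit-node's public address):

```sh
# Map both hostnames to the exit-node's public IPv4 address (placeholder shown)
echo "203.0.113.10 store1.example.com store2.example.com" | sudo tee -a /etc/hosts

# Each Host header is routed to its own upstream through the same tunnel
curl http://store1.example.com:8090/
curl http://store2.example.com:8090/
```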
Going further
Docs & Featured tutorials
Tutorial: HTTPS for your local endpoints with inlets and Caddy
Docs: Inlets & Kubernetes recipes
Docs: Run Inlets on a VPS
Tutorial: Get a LoadBalancer for your private Kubernetes cluster with inlets-operator
Video demo
Using inlets I was able to set up a public endpoint (with a custom domain name) for my JavaScript & Webpack Create React App.
Docker
Docker images are published as multi-arch for `x86_64`, `arm64` and `armhf`: `inlets/inlets:2.6.3`
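As a hedged example (assuming the image's entrypoint is the `inlets` binary, with the tag taken from above):

```sh
# Run the tunnel server from the multi-arch image, exposing the data port
docker run -d --name inlets-server -p 8090:8090 \
  inlets/inlets:2.6.3 server --port=8090 --token="$TOKEN"
```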
Multiple services with one exit-node
You can expose an OpenFaaS or OpenFaaS Cloud deployment with inlets - just change `--upstream=http://127.0.0.1:3000` to `--upstream=http://127.0.0.1:8080` or `--upstream=http://127.0.0.1:31112`. You can even point at an IP address inside or outside your network, for instance: `--upstream=http://192.168.0.101:8080`.
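A sketch of such a client invocation (the exit-node address is a placeholder; 8080 is a typical OpenFaaS gateway port):

```sh
# Point the tunnel at a local OpenFaaS gateway instead of the sample service
inlets client \
  --remote=203.0.113.10:8090 \
  --token="$TOKEN" \
  --upstream=http://127.0.0.1:8080
```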
When using the scripts in `hack` to configure inlets with systemd, the process will restart if the tunnel crashes.
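The scripts in `hack` are the source of truth; as a rough sketch of the same idea (the binary path, exit-node address, upstream and token below are placeholders), a systemd unit for the client might look like this:

```sh
# Write a unit that restarts the client whenever the tunnel crashes (values are placeholders)
sudo tee /etc/systemd/system/inlets.service > /dev/null <<'EOF'
[Unit]
Description=inlets client
After=network.target

[Service]
ExecStart=/usr/local/bin/inlets client --remote=203.0.113.10:8090 --upstream=http://127.0.0.1:3000 --token=REPLACE_WITH_TOKEN
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now inlets
```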
Bind a different port for the control-plane
You can bind two separate TCP ports for the user-facing port and the tunnel.
- `--port` - the port for users to connect to and for serving data, i.e. the Data Plane
- `--control-port` - the port for the websocket to connect to, i.e. the Control Plane
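For example, a minimal sketch using the two flags above (port numbers are arbitrary):

```sh
# Serve user traffic on 8080 (data plane) and accept the tunnel websocket on 8123 (control plane)
inlets server --port=8080 --control-port=8123 --token="$TOKEN"
```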
Development
For development you will need Go 1.10 or 1.11 on both the exit-node (or server) and the client.
You can get the code like this:
go get -u github.com/inlets/inlets
cd $GOPATH/src/github.com/inlets/inlets
Alternatively, you can get everything setup right in the browser with a single click using Gitpod:
Contributions are welcome. All commits must be signed-off with `git commit -s` to accept the Developer Certificate of Origin.
Appendix
Other Kubernetes port-forwarding tooling:
- `kubectl port-forward` - built into the Kubernetes CLI, forwards a single port to the local computer.
- kubefwd - Kubernetes utility to port-forward multiple services to your local computer.
- kurun - Run main.go in Kubernetes with one command, also port-forward your app into Kubernetes.