Configuring intercepts using the CLI
Specifying a namespace for an intercept
The namespace of the intercepted workload is specified during connect using the --namespace
option.
telepresence connect --namespace myns
telepresence intercept hello --port 9000
Importing environment variables
Telepresence can import the environment variables from the pod that is being intercepted; see this doc for more details.
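For example, the intercepted pod's environment can be written to a local file with the --env-file flag so that your local process can source it. A minimal sketch, assuming the hello workload from the previous example and an arbitrary file name of hello.env:
telepresence intercept hello --port 9000 --env-file hello.env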
Creating an intercept
The following command will intercept all traffic bound to the service and proxy it to your laptop. This includes traffic coming through your ingress controller, so use this option carefully so as not to disrupt production environments.
telepresence intercept <deployment name> --port=<TCP port>
Run telepresence status
to see the list of active intercepts.
$ telepresence status
OSS User Daemon: Running
  Version           : v2.18.0
  Executable        : /usr/local/bin/telepresence
  Install ID        : 4b1658f3-7ff8-4af3-66693-f521bc1da32f
  Status            : Connected
  Kubernetes server : https://<cluster public IP>
  Kubernetes context: default
  Namespace         : default
  Manager namespace : ambassador
  Intercepts        : 1 total
    dataprocessingnodeservice: <laptop username>@<laptop name>
OSS Root Daemon: Running
  Version: v2.18.0
  DNS :
    Remote IP       : 127.0.0.1
    Exclude suffixes: [.com .io .net .org .ru]
    Include suffixes: []
    Timeout         : 8s
  Subnets: (2 subnets)
    - 10.96.0.0/16
    - 10.244.0.0/24
OSS Traffic Manager: Connected
  Version      : v2.19.0
  Traffic Agent: docker.io/datawire/tel2:2.18.0
Finally, run telepresence leave <name of intercept>
to stop the intercept.
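For instance, assuming the intercept from the namespace example above was named hello, it can be removed with:
telepresence leave hello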
$ telepresence intercept <base name of intercept> --port=<local TCP port>:<servicePortIdentifier>
Using Deployment <name of deployment>
   intercepted
      Intercept name         : <full name of intercept>
      State                  : ACTIVE
      Workload kind          : Deployment
      Destination            : 127.0.0.1:<local TCP port>
      Service Port Identifier: <servicePortIdentifier>
      Intercepting           : all TCP connections
When intercepting a service that has multiple ports, the name of the service port that has been intercepted is also listed.
If you want to change which port has been intercepted, you can create a new intercept the same way you did above, and it will change which service port is being intercepted.
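As a sketch, assuming the hello workload's service exposes port identifiers named http and grpc and the original intercept used --port 9000:http, switching the intercept over to the grpc port is just a matter of creating it again with the other service port identifier:
telepresence intercept hello --port 9000:grpc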
Creating an intercept when multiple services match your workload
Oftentimes, there's a 1-to-1 relationship between a service and a workload, so Telepresence is able to auto-detect which service it should intercept based on the workload you are trying to intercept. But if you use something like Argo, there may be two services (that use the same labels) to manage traffic between a canary and a stable service.
Fortunately, if you know which service you want to use when
intercepting a workload, you can use the --service
flag. So in the
aforementioned example, if you wanted to use the echo-stable
service
when intercepting your workload, your command would look like this:
$ telepresence intercept echo-rollout-<generatedHash> --port <local TCP port> --service echo-stable
Using ReplicaSet echo-rollout-<generatedHash>
   intercepted
      Intercept name    : echo-rollout-<generatedHash>
      State             : ACTIVE
      Workload kind     : ReplicaSet
      Destination       : 127.0.0.1:3000
      Volume Mount Point: /var/folders/cp/2r22shfd50d9ymgrw14fd23r0000gp/T/telfs-921196036
      Intercepting      : all TCP connections
Intercepting multiple ports
It is possible to intercept more than one service and/or service port that use the same workload. You do this
by creating multiple intercepts that identify the same workload using the --workload
flag.
Let's assume that we have a service multi-echo with two ports, http and grpc, both targeting the same multi-echo deployment.
$ telepresence intercept multi-echo-http --workload multi-echo --port 8080:http
Using Deployment multi-echo
   intercepted
      Intercept name         : multi-echo-http
      State                  : ACTIVE
      Workload kind          : Deployment
      Destination            : 127.0.0.1:8080
      Service Port Identifier: http
      Volume Mount Point     : /tmp/telfs-893700837
      Intercepting           : all TCP requests
$ telepresence intercept multi-echo-grpc --workload multi-echo --port 8443:grpc --mechanism tcp
Using Deployment multi-echo
   intercepted
      Intercept name         : multi-echo-grpc
      State                  : ACTIVE
      Workload kind          : Deployment
      Destination            : 127.0.0.1:8443
      Service Port Identifier: grpc
      Volume Mount Point     : /tmp/telfs-1277723591
      Intercepting           : all TCP requests
Port-forwarding an intercepted container's sidecars
Sidecars are containers that sit in the same pod as an application
container; they usually provide auxiliary functionality to an
application, and can usually be reached at localhost:${SIDECAR_PORT}.
For example, a common use case for a sidecar is to proxy requests to a
database: your application connects to localhost:${SIDECAR_PORT}, and
the sidecar then connects to the database, perhaps augmenting the
connection with TLS or authentication.
When intercepting a container that uses sidecars, you might want those
sidecars' ports to be available to your local application at
localhost:${SIDECAR_PORT}
, exactly as they would be if running
in-cluster. Telepresence's --to-pod ${PORT}
flag implements this
behavior, adding port-forwards for the port given.
$ telepresence intercept <base name of intercept> --port=<local TCP port>:<servicePortIdentifier> --to-pod=<sidecarPort>
Using Deployment <name of deployment>
   intercepted
      Intercept name         : <full name of intercept>
      State                  : ACTIVE
      Workload kind          : Deployment
      Destination            : 127.0.0.1:<local TCP port>
      Service Port Identifier: <servicePortIdentifier>
      Intercepting           : all TCP connections
If there are multiple ports that you need forwarded, simply repeat the
flag (--to-pod=<sidecarPort0> --to-pod=<sidecarPort1>
).
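As an illustrative sketch (the my-service workload, the http port identifier, and the sidecar ports 9090 and 9091 are made-up names for this example), forwarding two sidecar ports could look like:
telepresence intercept my-service --port 8080:http --to-pod=9090 --to-pod=9091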
Intercepting headless services
Kubernetes supports creating services without a ClusterIP,
which, when they have a pod selector, serve to provide a DNS record that will directly point to the service's backing pods.
Telepresence supports intercepting these headless
services as it would a regular service with a ClusterIP.
So, for example, if you have the following service:
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-headless
  labels:
    service: my-headless
spec:
  replicas: 1
  serviceName: my-headless
  selector:
    matchLabels:
      service: my-headless
  template:
    metadata:
      labels:
        service: my-headless
    spec:
      containers:
      - name: my-headless
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080
        resources: {}
You can intercept it like any other:
$ telepresence intercept my-headless --port 8080
Using StatefulSet my-headless
   intercepted
      Intercept name    : my-headless
      State             : ACTIVE
      Workload kind     : StatefulSet
      Destination       : 127.0.0.1:8080
      Volume Mount Point: /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-524189712
      Intercepting      : all TCP connections
This utilizes an initContainer
that requires NET_ADMIN
capabilities.
If your cluster administrator has disabled them, you will be unable to use numeric ports with the agent injector.
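If NET_ADMIN is not available, one workaround is to reference the container port by name rather than by number, so that the agent injector can work with a symbolic target port. A minimal sketch of the same headless service under that assumption (it also assumes the container in the StatefulSet declares name: http on its containerPort):
---
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    service: my-headless
  ports:
  - port: 8080
    targetPort: http # symbolic target port; resolves to the containerPort named "http"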
Intercepting without a service
You can intercept a workload without a service by adding an annotation that tells Telepresence which container ports are eligible for intercepts. Telepresence will then inject a traffic-agent when the workload is deployed, and you will be able to intercept the given ports as if they were service ports. The annotation is:
annotations:
  telepresence.getambassador.io/inject-container-ports: http
The annotation value is a comma-separated list of port identifiers, each consisting of either the name or the number of a container port, optionally suffixed with /TCP or /UDP.
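For example (a hedged sketch; the named http port and the numeric 8125/UDP port are assumptions for illustration), a workload exposing both could be annotated as:
annotations:
  telepresence.getambassador.io/inject-container-ports: http,8125/UDP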
Let's try it out!
- Deploy a workload with an annotation similar to this one to your cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-no-svc
  labels:
    app: echo-no-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-no-svc
  template:
    metadata:
      labels:
        app: echo-no-svc
      annotations:
        telepresence.getambassador.io/inject-container-ports: http
    spec:
      automountServiceAccountToken: false
      containers:
      - name: echo-server
        image: ghcr.io/telepresenceio/echo-server:latest
        ports:
        - name: http
          containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          limits:
            cpu: 50m
            memory: 8Mi
- Connect Telepresence:
$ telepresence connect
Launching Telepresence User Daemon
Launching Telepresence Root Daemon
Connected to context kind-dev, namespace default (https://127.0.0.1:36767)
- List your intercept-eligible workloads. If the annotation is correct, the deployment should show up in the list:
$ telepresence list
echo-no-svc: ready to intercept (traffic-agent not yet installed)
- Start an intercept handler locally that will receive the incoming traffic. Here's an example using a simple Python HTTP server:
$ python3 -m http.server 8080
- Create an intercept:
$ telepresence intercept echo-no-svc
Using Deployment echo-no-svc
   Intercept name    : echo-no-svc
   State             : ACTIVE
   Workload kind     : Deployment
   Destination       : 127.0.0.1:8080
   Volume Mount Point: /tmp/telfs-3306285526
   Intercepting      : all TCP connections
   Address           : 10.244.0.13:8080
Note that the response contains an "Address" that you can curl to reach the intercepted pod. You will not be able to curl the name "echo-no-svc". Since there's no service by that name, there's no DNS entry for it either.
- Curl the intercepted workload:
$ curl 10.244.0.13:8080
<output from your local service>
A service-less intercept utilizes an initContainer
that requires NET_ADMIN
capabilities.
If your cluster administrator has disabled them, you will only be able to intercept services using symbolic target ports.
Specifying the intercept traffic target
By default, it's assumed that your local app is reachable on 127.0.0.1
, and intercepted traffic will be sent to that IP
at the port given by --port
. If you wish to change this behavior and send traffic to a different IP address, you can use the --address
parameter
to telepresence intercept
. Say your machine is configured to respond to HTTP requests for an intercept on 172.16.0.19:8080
. You would run this as:
$ telepresence intercept my-service --address 172.16.0.19 --port 8080
Using Deployment my-service
   Intercept name         : my-service
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 172.16.0.19:8080
   Service Port Identifier: proxied
   Volume Mount Point     : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
   Intercepting           : all TCP connections
Replacing a running workload
By default, your application keeps running as Telepresence intercepts it, even if it doesn't receive any traffic (or receives only a subset, as with personal intercepts). This can pose a problem for applications that are active even when they're not receiving requests. For instance, if your application consumes from a message queue as soon as it starts up, intercepting it won't stop the pod from consuming from the queue.
To work around this issue, telepresence intercept
allows you to pass in a --replace
flag that stops every application container from running in your pod. When you pass in --replace
, Telepresence will restart your application with a dummy application container that sleeps indefinitely,
and will instead place a traffic agent that redirects traffic to your local machine.
The application container will be restored as soon as you leave the intercept.
$ telepresence intercept my-service --port 8080 --replace
   Intercept name         : my-service
   State                  : ACTIVE
   Workload kind          : Deployment
   Destination            : 127.0.0.1:8080
   Service Port Identifier: proxied
   Volume Mount Point     : /var/folders/j8/kzkn41mx2wsd_ny9hrgd66fc0000gp/T/telfs-517018422
   Intercepting           : all TCP connections
Sidecars will not be stopped. Only the container serving the intercepted port will be removed from the pod.