In our k8s cluster we have Linkerd and Envoy installed.
One of my HTTPRoutes that serves Envoy (the parentRef exists) also has a URLRewrite filter (which Envoy supports). My linkerd-destination pod logs a crazy amount of "URLRewrite filter is not supported" messages.
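For context, a route shaped roughly like this triggers the warning. All names and the Gateway reference below are made up for illustration; the point is just an HTTPRoute with an Envoy-backed parentRef and a URLRewrite filter, which Envoy handles but Linkerd's policy controller rejects:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-rewrite-route    # hypothetical name
  namespace: default
spec:
  parentRefs:
    - name: envoy-gateway        # hypothetical Envoy-backed Gateway
      kind: Gateway
      group: gateway.networking.k8s.io
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /old
      filters:
        - type: URLRewrite       # supported by Envoy, not by Linkerd's indexer
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /new
      backendRefs:
        - name: my-backend       # hypothetical Service
          port: 8080
```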
We are having the same issue. We are adding a ton of HTTPRoutes these days, with Traefik as the Gateway API implementation. The problem is exactly as you describe: a crazy number of warnings in the linkerd_policy_controller (the linkerd-destination logs).
BUT even worse, we have now started to get sync errors in ArgoCD because of timeouts in the linkerd_policy_controller.
We get this in the linkerd-destination logs:
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T11:24:03.634466Z WARN httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: Failed to convert route error=URLRewrite filter is not supported
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T13:20:51.371504Z INFO server{port=9443}:conn{client.ip=10.241.77.46 client.port=55624}: kubert::server: Connection lost error=read header from client timeout
Can someone from Buoyant please take a look at this and comment? I can live with the warnings; they are just annoying and should be removed. The BAD thing here is that the linkerd-destination pods stop responding, which fails our ArgoCD syncs.
I even tried scaling linkerd-destination out to 5 pods, but it still always fails:
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T13:55:56.315343Z INFO server{port=9443}:conn{client.ip=10.241.77.46 client.port=40878}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-cg8h6 policy 2026-02-17T13:55:57.057970Z INFO server{port=9443}:conn{client.ip=10.241.73.13 client.port=58542}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T13:55:57.170899Z INFO server{port=9443}:conn{client.ip=10.241.77.46 client.port=40884}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-4mh7x policy 2026-02-17T13:55:58.211460Z INFO server{port=9443}:conn{client.ip=10.241.72.155 client.port=51908}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T13:55:58.496461Z INFO server{port=9443}:conn{client.ip=10.241.77.46 client.port=40896}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T13:55:58.502724Z INFO server{port=9443}:conn{client.ip=10.241.77.46 client.port=40904}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-qp5js policy 2026-02-17T13:56:00.225114Z INFO server{port=9443}:conn{client.ip=10.241.76.131 client.port=48534}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-vw29x policy 2026-02-17T13:56:01.166978Z INFO server{port=9443}:conn{client.ip=10.241.77.46 client.port=40910}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-8xvc9 policy 2026-02-17T13:56:01.753059Z INFO server{port=9443}:conn{client.ip=10.241.73.163 client.port=55768}: kubert::server: Connection lost error=read header from client timeout
linkerd-destination-64dc9f9fd6-qp5js policy 2026-02-17T13:56:01.985898Z INFO server{port=9443}:conn{client.ip=10.241.76.131 client.port=48542}: kubert::server: Connection lost error=read header from client timeout
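As a rough way to see which replicas are affected, the warnings can be tallied per pod. This is a sketch with sample lines inlined; against a real cluster you would pipe `kubectl logs -n linkerd deploy/linkerd-destination -c policy` into the same pipeline instead:

```shell
# Sample lines standing in for real policy-controller output (illustrative only).
logs='linkerd-destination-64dc9f9fd6-vw29x WARN error=URLRewrite filter is not supported
linkerd-destination-64dc9f9fd6-cg8h6 WARN error=URLRewrite filter is not supported
linkerd-destination-64dc9f9fd6-vw29x WARN error=URLRewrite filter is not supported'

# Keep only the URLRewrite warnings, then count occurrences per pod name.
counts=$(printf '%s\n' "$logs" \
  | grep 'URLRewrite filter is not supported' \
  | awk '{print $1}' | sort | uniq -c | sort -rn)
printf '%s\n' "$counts"
```

The busiest replica sorts to the top, which makes it easy to spot whether the warning volume is spread evenly or concentrated on one pod.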
@william We could really use some feedback from Buoyant on this. As we migrate all of our Ingress resources to HTTPRoutes, the number will grow a lot, and we are already hitting this issue with Linkerd. Linkerd should not be a factor at all when using HTTPRoutes for ingress.
To me this sounds like a bug. Could you please file a bug report on GitHub and include as many relevant details as possible? If you can provide a repro, that will allow us to tackle this more quickly. Thank you.