While some additional latency is expected with an API Gateway, an increase from 3s to 11s for a simple service-ingress-service test is indeed significant and warrants investigation.
Here are some potential reasons for the high latency and possible solutions:

1/ Plugins and auth: Check the API Gateway configuration for enabled plugins and authentication/authorization mechanisms. These can add processing overhead, especially with complex configurations.

2/ Caching: Caching frequently accessed responses can significantly reduce latency for subsequent requests.

3/ Network latency: Analyze the network path between the API Gateway and the backend service. Look for bottlenecks or high-latency hops that might be contributing to the overall delay.

4/ Resource contention: Investigate resource utilization within the Kubernetes cluster. If other pods or services are competing for CPU or memory, it can impact performance.
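A practical way to work through the causes above is to time each hop independently, e.g. calling the backend service directly and then again through the gateway, and comparing the numbers. A minimal sketch of that measurement technique, using a local stub server in place of the real backend (the handler, port, and 50 ms simulated delay are illustrative, not from the question):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubBackend(BaseHTTPRequestHandler):
    """Stands in for the real service; sleeps to simulate backend work."""
    def do_GET(self):
        time.sleep(0.05)  # illustrative 50 ms of backend processing
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

def timed_get(url):
    """Return the wall-clock seconds one GET request takes."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

server = HTTPServer(("127.0.0.1", 0), StubBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

direct = timed_get(url)
server.shutdown()
print(f"direct call: {direct:.3f}s")
```

Running the same measurement against the direct service endpoint, the ingress, and the gateway URL shows which hop contributes the bulk of the 3s-to-11s jump.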
I would suggest making use of logging, tracing, and metrics via CloudWatch to narrow down the issue. Try to figure out which hop takes the most time. Normally, API Gateway shouldn't add more than a few milliseconds of latency.
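In CloudWatch, API Gateway publishes both a `Latency` metric (total time the gateway takes to return a response) and an `IntegrationLatency` metric (time spent waiting on the backend); the difference between the two is the overhead added by the gateway itself. A quick sketch of that comparison (the sample values below are made up for illustration, not taken from the question):

```python
# Sample per-request metrics in milliseconds (illustrative values only).
samples = [
    {"Latency": 11000, "IntegrationLatency": 10850},
    {"Latency": 9800,  "IntegrationLatency": 9700},
]

for s in samples:
    overhead = s["Latency"] - s["IntegrationLatency"]
    print(f"gateway overhead: {overhead} ms, "
          f"backend took: {s['IntegrationLatency']} ms")
```

If `IntegrationLatency` dominates, as in these made-up samples, the delay is behind the gateway (ingress, service, or cluster), not in API Gateway itself.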
API Gateway adds a few tens of milliseconds, not 8 seconds. I recommend enabling execution logs on API Gateway and checking what is going on there. In addition, you can enable X-Ray, which will do end-to-end tracing of your transactions.
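Both of those can be switched on from the CLI. A sketch, assuming a REST API with ID `abc123` and stage `prod` (substitute your own values; for execution logs to appear, the account must already have a CloudWatch Logs role configured for API Gateway):

```shell
# Turn on INFO-level execution logs for every method on the stage.
aws apigateway update-stage \
  --rest-api-id abc123 \
  --stage-name prod \
  --patch-operations \
    op=replace,path='/*/*/logging/loglevel',value=INFO

# Enable X-Ray tracing on the stage for end-to-end request traces.
aws apigateway update-stage \
  --rest-api-id abc123 \
  --stage-name prod \
  --patch-operations \
    op=replace,path='/tracingEnabled',value=true
```

With X-Ray enabled, the service map and trace timelines show per-segment timings, which makes the slow hop easy to spot.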