Actix (Rust) vs Fiber (Go): Performance (Latency - Throughput - Saturation - Availability)

Anton Putra
30 Aug 2024 · 20:10

Summary

TL;DR: In this video, the presenter benchmarks Rust and Go web frameworks—Actix for Rust and Fiber for Go—on a production-ready AWS EKS cluster. The tests measure CPU and memory usage, client-side latency, and availability in real-world scenarios, including file uploads to S3 and metadata writes to PostgreSQL. The results show Rust using less memory but more CPU, with higher latency than Go under load. The presenter stresses the value of custom metrics for autoscaling applications effectively and recommends Prometheus histograms for accurate performance tracking.

Takeaways

  • 😀 The video compares Rust and Go using real-world use cases instead of simple algorithms like Fibonacci to evaluate performance.
  • 🔍 The test uses a production-ready EKS cluster in AWS to run applications developed in Rust (Actix framework) and Go (Fiber framework).
  • 📊 Key metrics measured include CPU usage, memory usage, client latency, and availability of the applications under load.
  • 🚀 The first test involved increasing the number of virtual clients and measuring performance until both applications began to fail.
  • 💾 Availability is tracked via the fraction of HTTP requests that fail (status codes above 400), highlighting the importance of reliable responses.
  • ⏳ The second test simulated a real-world use case involving file uploads to S3 and writing metadata to PostgreSQL.
  • 🔧 Prometheus metrics were used to instrument both applications for detailed monitoring and performance analysis.
  • 📈 The results showed that Rust initially used less memory but increased CPU usage compared to Go under load.
  • 📉 Latency was notably better for Go, particularly as both framework implementations approached their breaking point.
  • 💡 The video emphasizes the need for custom metrics in autoscaling applications, as traditional CPU and memory metrics can be misleading.

Q & A

  • What programming languages are being compared in this video?

    -The video compares Rust and Go, specifically using the Actix framework for Rust and the Fiber framework for Go.

  • What type of cluster is used for running the applications?

    -A production-ready EKS (Elastic Kubernetes Service) cluster in AWS is used for running the Rust and Go applications.

  • What metrics are measured during the tests?

    -The tests measure CPU usage, memory usage, client-side latency, and application availability, along with the number of virtual clients and requests per second.

  • How is availability calculated in the tests?

    -The failure rate is calculated as failed HTTP requests (those with status codes higher than 400) divided by the total number of requests, multiplied by 100; availability is that percentage subtracted from 100.
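As a concrete illustration (a minimal sketch of the formula, not the video's actual test code), the calculation can be expressed as:

```go
package main

import "fmt"

// availability returns the percentage of requests that succeeded:
// 100 minus the failure rate (failed requests / total requests * 100).
func availability(failed, total int) float64 {
	if total == 0 {
		return 100.0 // no traffic observed: nothing has failed
	}
	return 100.0 - (float64(failed)/float64(total))*100.0
}

func main() {
	// 25 failed requests out of 10,000 total → 99.75% availability.
	fmt.Printf("%.2f%%\n", availability(25, 10000))
}
```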

  • What is the main focus of the first test conducted in the video?

    -The first test focuses on comparing the CPU and memory usage of the Rust and Go frameworks in Kubernetes until both applications start to fail.

  • What is the key takeaway from the first test regarding Rust and Go's performance?

    -The key takeaway is that while Rust uses more CPU as the load increases, its memory usage remains low. In terms of latency, Go performs better than Rust, particularly as both applications approach their breaking point.

  • What use case is simulated in the second test?

    -The second test simulates a real-world use case where a client sends a request to read a file from the local file system, uploads it to an S3 bucket, and saves metadata about that file in a PostgreSQL database.

  • What performance differences are observed between Rust and Go during the second test?

    -In the second test, Rust consumes more CPU and has higher end-user latency compared to Go, particularly due to slower S3 upload times. However, database latency remains similar for both languages.

  • What is emphasized about the use of Prometheus metrics in the video?

    -The video emphasizes the importance of using Histograms over Summaries for applications that can scale horizontally, as Histograms allow for better aggregation of metrics across multiple application replicas.
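To illustrate why histograms aggregate across replicas while summaries do not, here is a simplified sketch (not the Prometheus client library's actual API): histogram buckets are plain counters, so bucket counts from multiple replicas can simply be summed before estimating a quantile, whereas pre-computed summary quantiles from different replicas cannot be meaningfully combined.

```go
package main

import "fmt"

// Cumulative histogram buckets, as Prometheus records them:
// buckets[i] counts observations <= bounds[i] (latency in seconds).
var bounds = []float64{0.005, 0.01, 0.05, 0.1, 0.5, 1}

// observe increments every bucket whose upper bound covers the value.
func observe(buckets []uint64, v float64) {
	for i, b := range bounds {
		if v <= b {
			buckets[i]++
		}
	}
}

// merge sums bucket counts from two replicas; this is valid because
// counts are additive. Pre-computed summary quantiles are not.
func merge(a, b []uint64) []uint64 {
	out := make([]uint64, len(a))
	for i := range a {
		out[i] = a[i] + b[i]
	}
	return out
}

func main() {
	r1 := make([]uint64, len(bounds))
	r2 := make([]uint64, len(bounds))
	observe(r1, 0.004) // fast request handled by replica 1
	observe(r2, 0.3)   // slow request handled by replica 2
	fmt.Println(merge(r1, r2)) // prints [1 1 1 1 2 2]
}
```

A quantile is then estimated from the merged bucket counts, which is exactly what PromQL's `histogram_quantile` does over summed buckets.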

  • How does the presenter suggest optimizing applications based on the test results?

    -The presenter suggests that by instrumenting applications, bottlenecks can be quickly identified and optimized to efficiently utilize resources, especially focusing on S3 upload function calls as a significant bottleneck.


Related Tags
Rust, Go, Benchmark Testing, AWS EKS, Performance Metrics, Resource Usage, Latency Analysis, Web Development, Microservices, Programming Languages