dotGo 2017 - Sameer Ajmani - Simulating a real-world system in Go
Summary
TL;DR: In this talk, the speaker shares their journey with Go’s concurrency model, highlighting how it transformed their approach to distributed systems. They present a coffee shop simulation to explore concepts like latency, throughput, and resource contention. Through real-world analogies, such as managing a coffee shop’s machines, they explain how Go’s concurrency model solves complex problems. The talk emphasizes performance optimizations, such as handling locks and scaling systems, and showcases how understanding system design and removing structural barriers can lead to significant improvements. The speaker encourages developers to experiment with Go’s tools to enhance their understanding of concurrency and system dynamics.
Takeaways
- 😀 Go's concurrency model, introduced in 2009, revolutionized the speaker's approach to programming by simplifying concurrency compared to previous systems using callbacks and locks.
- 😀 The speaker transitioned from C++ to Go, improving the clarity and efficiency of large-scale distributed systems with Go's concurrency model.
- 😀 Studying real-world systems, like New York City, offers insights into improving programming practices, especially in scaling systems and handling concurrency.
- 😀 The coffee shop simulation created in Go highlights important system properties such as latency, throughput, contention, and utilization in service-oriented systems.
- 😀 Using the race detector in Go is essential when testing concurrency, as it detects unsynchronized access to shared resources across goroutines.
- 😀 The whole kitchen lock scenario, where a single lock is applied for all machines, resulted in poor throughput and high latency due to waiting times for resources.
- 😀 A more efficient approach is to lock individual machines (grinder, espresso machine, steamer), reducing contention and improving throughput until the machines themselves become the bottleneck.
- 😀 Introducing additional machines in the system (via multiple sets of machines and channels) eliminates bottlenecks, enabling higher throughput and parallelism.
- 😀 Performance can be maximized when resources are fully utilized, as demonstrated by adding more machines and allowing multiple pipelines to work concurrently.
- 😀 Real-world systems, like coffee shops, can inspire solutions to improve system designs, such as minimizing wait times between tasks, implementing buffers, and avoiding resource contention.
Q & A
What was the speaker's first introduction to Go's concurrency model?
-The speaker's first introduction to Go's concurrency model was in 2009, during a tutorial taught by Rob Pike. This was when they fell in love with the language and its concurrency model, which was different from the way they had previously thought about concurrency in distributed systems.
How did Go's concurrency model differ from the speaker's previous experience with concurrency?
-Before Go, the speaker was using callbacks, locks, and thread pools in C++ to handle concurrency in distributed systems. Go introduced them to a simpler model of concurrency, making algorithms easier to understand and enabling more streamlined code.
What key observation did the speaker make when comparing Go's concurrency model to real-world systems?
-The speaker observed that Go’s concurrency model mirrors real-world systems, such as the complexity of scaling New York City’s services. This analogy helped them understand the practical benefits of Go's concurrency model in handling latency, throughput, and contention in systems.
What real-world system did the speaker simulate to demonstrate the power of Go’s concurrency model?
-The speaker created a simulation of a coffee shop to demonstrate the power of Go's concurrency model. This simulation explored properties such as latency, throughput, contention, and utilization, and allowed them to test different approaches for handling a coffee shop's orders.
How did the ideal implementation of the coffee shop simulation perform as the number of CPUs increased?
-In the ideal implementation, throughput increased linearly with more CPUs. With one CPU, the system could prepare one latte every 4 milliseconds, i.e. 250 lattes per second. As CPUs were added, throughput scaled accordingly, reaching 6 × 250 = 1,500 lattes per second with six CPUs.
What problem was discovered when the coffee shop simulation was run with multiple CPUs and shared resources?
-A race condition occurred when multiple goroutines accessed shared resources (the machines in the coffee shop simulation) without synchronization. Go’s race detector reported the unsynchronized access at runtime, highlighting the importance of running it when testing concurrent code.
How did the speaker suggest improving the concurrency in the coffee shop simulation after the race condition was discovered?
-The speaker recommended using fine-grained locking with individual mutexes for each machine. This allowed different people (goroutines) to use different machines concurrently, improving throughput without the bottleneck of locking the entire kitchen.
What happens to the performance of the coffee shop simulation after the number of CPUs exceeds the number of available machines?
-Once the number of CPUs exceeds the number of available machines, throughput begins to flatten. This is because the critical machines (grinder, espresso machine, steamer) become bottlenecks, limiting the ability to process more orders in parallel.
How did the speaker simulate adding more capacity to the coffee shop system, and what was the result?
-The speaker simulated adding more capacity by introducing additional machines (grinders, espresso machines, steamers). This was achieved by creating buffered channels, which allowed multiple goroutines to access different machines concurrently. This change led to ideal performance, with throughput increasing linearly and latency staying flat, up to six CPUs.
What structural changes were identified as major contributors to improving the coffee shop simulation's performance?
-The key structural changes included moving from a whole-kitchen lock to fine-grained locking, adding more machines (capacity), and using buffered channels so that pipeline stages could proceed without blocking on downstream stages. Together these changes eliminated the bottlenecks and improved overall system performance.