What are goroutines?
In Go, a goroutine is the basic unit for concurrency. It is a lightweight thread managed by the Go runtime. Developers can create tens of thousands of goroutines, which run efficiently on a small number of operating system threads. Using goroutines simplifies and clarifies concurrent programming.
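Starting a goroutine is a single keyword. The sketch below (names are illustrative, not from the original) launches a function concurrently with `go` and uses a channel to receive its result:

```go
package main

import "fmt"

// double sends n*2 on the channel; it runs in its own goroutine.
func double(n int, out chan<- int) {
	out <- n * 2
}

func main() {
	out := make(chan int)
	go double(21, out) // "go" launches double concurrently with main
	fmt.Println(<-out) // receiving blocks until the goroutine sends
}
```

The channel here both delivers the result and synchronizes the two goroutines, so no explicit locking is needed.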
Differences between goroutines and threads
- Resource Consumption:
- Threads: Traditional threads are directly managed by the operating system, and each thread typically has a relatively large fixed stack (usually a few MBs), meaning creating many threads consumes significant memory resources.
- Goroutines: In contrast, goroutines are managed by the Go runtime, with an initial stack size of only a few KB, and can dynamically scale as needed. Therefore, more goroutines can be created under the same memory conditions.
- Scheduling:
- Threads: Thread scheduling is handled by the operating system, which involves switching from user mode to kernel mode, resulting in higher scheduling overhead.
- Goroutines: Goroutine scheduling is performed by the Go runtime, using M:N scheduling (multiple goroutines mapped to multiple OS threads). This approach reduces interaction with the kernel, thereby lowering scheduling overhead.
- Creation and Switching Speed:
- Threads: Creating threads and context switching between threads are typically time-consuming.
- Goroutines: Because they are managed in user space by the Go runtime, goroutines are very fast to create and to switch between.
Practical Application Example
A network service must handle a large number of concurrent requests. With a traditional thread-per-request model, system resources are quickly exhausted and performance degrades.
By using Go's goroutines, we can assign a goroutine to each network request. For example:
```go
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	go handleRequest(r) // process the request in a new goroutine
	fmt.Fprintf(w, "Request is being processed...")
})
```
In this example, the handler launches a new goroutine running handleRequest for each incoming HTTP request. This uses system resources efficiently while maintaining high throughput and low latency, making it well suited to scenarios that must handle many concurrent requests.