In Go, there is no direct concept of a 'thread' in the language itself. Instead, Go uses 'goroutines': lightweight concurrency primitives managed by the Go runtime, which multiplexes them onto operating-system threads. Goroutines consume far less memory than OS threads and are much cheaper to create and destroy.
Goroutines' Characteristics:
- Lightweight: each goroutine starts with only a few kilobytes of stack memory, making it practical to create tens of thousands of goroutines in a single program.
- Non-blocking design: when a goroutine blocks on an I/O operation (such as reading/writing files or making network requests), the runtime parks it and schedules other goroutines onto the same OS thread, so the rest of the program keeps making progress.
- Dynamic stack: the Go runtime grows and shrinks each goroutine's stack automatically, so you rarely need to worry about choosing a stack size or overflowing it.
- Concurrency vs. parallelism: by default, Go runs goroutines on a number of OS threads equal to the number of CPU cores (controlled by GOMAXPROCS). On a single-core CPU, multiple goroutines achieve only concurrency via time-slicing; on a multi-core CPU, true parallelism is possible.
Example:
Assume we need to write a program that fetches data from three different network services. Using goroutines, we can concurrently fetch data from these services, with each request handled by a separate goroutine, significantly improving the program's efficiency.
```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchService performs a GET request and reports the result,
// including the elapsed time, on the channel.
func fetchService(url string, ch chan<- string) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		ch <- fmt.Sprintf("Error fetching from %s: %v", url, err)
		return
	}
	defer resp.Body.Close()
	elapsed := time.Since(start).Seconds()
	ch <- fmt.Sprintf("Fetched from %s in %.2f seconds", url, elapsed)
}

func main() {
	urls := []string{
		"http://example.com/service1",
		"http://example.com/service2",
		"http://example.com/service3",
	}

	ch := make(chan string)
	for _, url := range urls {
		go fetchService(url, ch) // one goroutine per request
	}

	// Receive exactly one result per URL.
	for range urls {
		fmt.Println(<-ch)
	}
}
```
In this example, we create a goroutine for each network service request. The requests overlap in time (and can run in parallel on multi-core processors), so the total wall-clock time is close to that of the slowest single request rather than the sum of all three, significantly improving overall program performance.