
What is concurrency handling and goroutine management in the Gin framework?

February 21, 16:01

Concurrency handling and goroutine management in the Gin framework are as follows:

1. Concurrency handling overview

The Gin framework itself is concurrency-safe: each request is handled in its own goroutine by the underlying net/http server. However, spawning additional goroutines from a handler requires care.

2. Using goroutines in handler functions

2.1 Basic usage

```go
func handleRequest(c *gin.Context) {
	// Execute the async task in a goroutine
	go func() {
		// Perform the time-consuming operation
		result := longRunningTask()
		// Note: do not use c directly here, as the request may have ended
		log.Printf("Result: %v", result)
	}()
	c.JSON(200, gin.H{"message": "Request accepted"})
}

func longRunningTask() string {
	time.Sleep(2 * time.Second)
	return "completed"
}
```

2.2 Correctly using Context copy

```go
func handleRequest(c *gin.Context) {
	// Create a copy of the Context for use after the handler returns
	cCopy := c.Copy()
	go func() {
		// Use only the copied Context inside the goroutine
		userID := cCopy.GetInt("user_id")
		result := processUserData(userID)
		log.Printf("Processed user %d: %v", userID, result)
	}()
	c.JSON(200, gin.H{"message": "Processing started"})
}
```

3. Worker Pool pattern

3.1 Implementing Worker Pool

```go
type Job struct {
	ID      int
	Payload interface{}
}

type Result struct {
	JobID  int
	Output interface{}
	Error  error
}

type Worker struct {
	ID       int
	JobQueue chan Job
	Results  chan Result
	Quit     chan bool
}

func NewWorker(id int, jobQueue chan Job, results chan Result) *Worker {
	return &Worker{
		ID:       id,
		JobQueue: jobQueue,
		Results:  results,
		Quit:     make(chan bool),
	}
}

func (w *Worker) Start() {
	go func() {
		for {
			select {
			case job := <-w.JobQueue:
				w.Results <- w.processJob(job)
			case <-w.Quit:
				return
			}
		}
	}()
}

func (w *Worker) Stop() {
	go func() { w.Quit <- true }()
}

func (w *Worker) processJob(job Job) Result {
	// Simulate processing the task
	time.Sleep(time.Second)
	return Result{
		JobID:  job.ID,
		Output: fmt.Sprintf("Processed job %d by worker %d", job.ID, w.ID),
	}
}
```

3.2 Using Worker Pool

```go
var (
	jobQueue = make(chan Job, 100)
	results  = make(chan Result, 100)
)

// setupWorkerPool starts the workers once (e.g. from main),
// not once per request — a per-request pool would leak goroutines.
func setupWorkerPool() {
	numWorkers := 5
	for i := 1; i <= numWorkers; i++ {
		worker := NewWorker(i, jobQueue, results)
		worker.Start()
	}
}

func handleJob(c *gin.Context) {
	// Submit a task to the shared pool
	job := Job{
		ID:      1,
		Payload: c.Query("data"),
	}
	jobQueue <- job

	// Wait for a result. Note: with a shared results channel, results are
	// not matched to requests; a per-job reply channel would fix that.
	result := <-results
	c.JSON(200, gin.H{"result": result.Output})
}
```

4. Concurrency rate limiting

4.1 Implementing rate limiting with channels

```go
type RateLimiter struct {
	semaphore chan struct{}
}

func NewRateLimiter(maxConcurrent int) *RateLimiter {
	return &RateLimiter{
		semaphore: make(chan struct{}, maxConcurrent),
	}
}

func (r *RateLimiter) Acquire() {
	r.semaphore <- struct{}{}
}

func (r *RateLimiter) Release() {
	<-r.semaphore
}

// The limiter must be shared across requests; creating a fresh one
// inside the handler would never actually limit anything.
var limiter = NewRateLimiter(10) // at most 10 concurrent requests

func handleLimitedRequest(c *gin.Context) {
	limiter.Acquire()
	defer limiter.Release()

	// Process the request
	result := processRequest()
	c.JSON(200, gin.H{"result": result})
}
```

4.2 Using third-party libraries

```go
import "golang.org/x/time/rate"

// 100 requests per second, with a burst of 10
var limiter = rate.NewLimiter(rate.Limit(100), 10)

func rateLimitMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		if !limiter.Allow() {
			c.JSON(429, gin.H{"error": "Too many requests"})
			c.Abort()
			return
		}
		c.Next()
	}
}
```

5. Concurrent-safe data sharing

5.1 Using sync.Map

```go
var cache = sync.Map{}

func handleCache(c *gin.Context) {
	key := c.Query("key")

	// Read from the cache
	if value, ok := cache.Load(key); ok {
		c.JSON(200, gin.H{"value": value})
		return
	}

	// Compute and cache
	value := computeValue(key)
	cache.Store(key, value)
	c.JSON(200, gin.H{"value": value})
}
```

5.2 Using mutex

```go
type SafeCounter struct {
	mu    sync.Mutex
	value int
}

func (s *SafeCounter) Increment() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.value++
}

func (s *SafeCounter) Value() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.value
}

var counter = &SafeCounter{}

func handleCounter(c *gin.Context) {
	counter.Increment()
	c.JSON(200, gin.H{"count": counter.Value()})
}
```

6. Concurrent task coordination

6.1 Using WaitGroup

```go
func handleConcurrentTasks(c *gin.Context) {
	var wg sync.WaitGroup
	results := make(chan string, 3)

	tasks := []string{"task1", "task2", "task3"}
	for _, task := range tasks {
		wg.Add(1)
		go func(t string) {
			defer wg.Done()
			results <- processTask(t)
		}(task)
	}

	// Close the results channel once all tasks are done
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect the results
	var allResults []string
	for result := range results {
		allResults = append(allResults, result)
	}
	c.JSON(200, gin.H{"results": allResults})
}
```

6.2 Using context to cancel tasks

```go
func handleCancellableTask(c *gin.Context) {
	ctx, cancel := context.WithTimeout(c.Request.Context(), 5*time.Second)
	defer cancel()

	// Buffer of 1 so the goroutine can send and exit even after a timeout;
	// an unbuffered channel would leak the goroutine on the timeout path.
	resultChan := make(chan string, 1)
	go func() {
		resultChan <- longRunningTaskWithContext(ctx)
	}()

	select {
	case result := <-resultChan:
		c.JSON(200, gin.H{"result": result})
	case <-ctx.Done():
		// 504: the server-side work timed out (408 is for slow clients)
		c.JSON(504, gin.H{"error": "Request timeout"})
	}
}

func longRunningTaskWithContext(ctx context.Context) string {
	for i := 0; i < 10; i++ {
		select {
		case <-ctx.Done():
			return "cancelled"
		default:
			time.Sleep(500 * time.Millisecond)
		}
	}
	return "completed"
}
```

7. Concurrent error handling

7.1 Error collection

```go
func handleConcurrentErrors(c *gin.Context) {
	var wg sync.WaitGroup
	errChan := make(chan error, 3)

	tasks := []func() error{task1, task2, task3}
	for _, task := range tasks {
		wg.Add(1)
		go func(t func() error) {
			defer wg.Done()
			if err := t(); err != nil {
				errChan <- err
			}
		}(task)
	}

	// Close the error channel once all tasks are done
	go func() {
		wg.Wait()
		close(errChan)
	}()

	// Collect any errors
	var errs []error
	for err := range errChan {
		errs = append(errs, err)
	}

	if len(errs) > 0 {
		c.JSON(500, gin.H{"errors": errs})
		return
	}
	c.JSON(200, gin.H{"message": "All tasks completed"})
}
```

8. Best practices

  1. Context usage

    • Use c.Copy() in goroutines
    • Do not directly use original Context in goroutines
    • Use context.WithTimeout to control timeouts
  2. Resource management

    • Use defer to ensure resource release
    • Limit the number of concurrent goroutines
    • Use Worker Pool to manage concurrency
  3. Data safety

    • Use sync.Map or mutex to protect shared data
    • Avoid sharing mutable state in goroutines
    • Use channels for goroutine communication
  4. Error handling

    • Handle errors correctly in goroutines
    • Use channels to collect errors
    • Implement appropriate retry mechanisms
  5. Performance optimization

    • Set concurrency levels to match the workload and available resources
    • Use buffered channels to reduce blocking
    • Monitor goroutine count and resource usage

Through the above methods, you can safely and efficiently handle concurrent tasks in the Gin framework.

Tags: Gin