GORM provides various performance optimization techniques to help developers improve database operation efficiency.
Query Optimization
1. Select Specific Fields
Query only the fields you need to reduce data transfer:
```go
// Not recommended: query all fields
var users []User
db.Find(&users)

// Recommended: query only the needed fields
var users []User
db.Select("id", "name", "email").Find(&users)
```
2. Use Indexes
Create indexes for frequently used query conditions:
```go
type User struct {
    gorm.Model
    Name  string `gorm:"index:idx_name"`
    Email string `gorm:"uniqueIndex"`
    Age   int    `gorm:"index:idx_age"`
}
```
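Index tags only take effect once the schema is migrated; a minimal sketch using GORM's AutoMigrate:
```go
// AutoMigrate creates the table and any missing indexes declared in the tags
if err := db.AutoMigrate(&User{}); err != nil {
    log.Fatal(err)
}
```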
3. Pagination
Use Limit and Offset for pagination:
```go
// Basic pagination
page := 1
pageSize := 10
offset := (page - 1) * pageSize

var users []User
db.Limit(pageSize).Offset(offset).Find(&users)

// Cursor-based pagination (more efficient for deep pages,
// since the database can seek by index instead of skipping rows)
var users []User
db.Where("id > ?", lastID).Limit(pageSize).Find(&users)
```
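For reuse across queries, pagination can be wrapped in a GORM scope. A sketch; `Paginate` is an illustrative helper, not a GORM built-in:
```go
// Paginate returns a scope that applies Limit/Offset for the given page.
func Paginate(page, pageSize int) func(db *gorm.DB) *gorm.DB {
    return func(db *gorm.DB) *gorm.DB {
        if page < 1 {
            page = 1
        }
        return db.Offset((page - 1) * pageSize).Limit(pageSize)
    }
}

// Usage:
var users []User
db.Scopes(Paginate(2, 10)).Find(&users)
```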
4. Avoid N+1 Queries
Use Preload to preload associated data:
```go
// Not recommended: N+1 queries (one query for users, one per user for posts)
var users []User
db.Find(&users)
for _, user := range users {
    var posts []Post
    db.Where("user_id = ?", user.ID).Find(&posts)
}

// Recommended: use Preload
var users []User
db.Preload("Posts").Find(&users)

// Conditional preload
db.Preload("Posts", "status = ?", "published").Find(&users)

// Nested preload
db.Preload("Posts.Comments").Find(&users)
```
5. Use Pluck for Single Column
Use Pluck when you only need a single column:
```go
// Not recommended: load full structs just to extract one field
var users []User
db.Find(&users)
var names []string
for _, user := range users {
    names = append(names, user.Name)
}

// Recommended
var names []string
db.Model(&User{}).Pluck("name", &names)
```
Batch Operation Optimization
1. Batch Insert
Use CreateInBatches for batch insertion:
```go
// Not recommended: insert in a loop, one round trip per record
for _, user := range users {
    db.Create(&user)
}

// Recommended: batch insert, 100 records per INSERT statement
db.CreateInBatches(users, 100)
```
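CreateInBatches returns a `*gorm.DB`, so the usual `Error` check applies; the 100-1000 batch size suggested in the Q&A below is a reasonable starting point:
```go
// Larger batches mean fewer round trips but bigger statements;
// tune the size against your database's packet and parameter limits.
if err := db.CreateInBatches(users, 500).Error; err != nil {
    log.Printf("batch insert failed: %v", err)
}
```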
2. Batch Update
Update records in a single statement instead of looping:
```go
// Not recommended: one UPDATE per record
for _, user := range users {
    db.Model(&user).Update("status", "active")
}

// Recommended: one UPDATE statement
db.Model(&User{}).Where("id IN ?", userIDs).Update("status", "active")
```
3. Batch Delete
Delete records in a single statement instead of looping:
```go
// Not recommended: one DELETE per record
for _, user := range users {
    db.Delete(&user)
}

// Recommended: one DELETE statement
db.Where("id IN ?", userIDs).Delete(&User{})
```
Connection Pool Optimization
Configure Connection Pool
```go
sqlDB, err := db.DB()
if err != nil {
    panic(err)
}

// Maximum number of idle connections in the pool
sqlDB.SetMaxIdleConns(10)

// Maximum number of open connections to the database
sqlDB.SetMaxOpenConns(100)

// Maximum amount of time a connection may be reused
sqlDB.SetConnMaxLifetime(time.Hour)
```
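To verify these settings under real load, the underlying database/sql pool exposes statistics:
```go
// DBStats reports live pool usage; a growing WaitCount suggests
// MaxOpenConns is too low for the current load.
stats := sqlDB.Stats()
log.Printf("open=%d inUse=%d idle=%d waitCount=%d waitDuration=%s",
    stats.OpenConnections, stats.InUse, stats.Idle,
    stats.WaitCount, stats.WaitDuration)
```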
Query Caching
Use GORM Cache Plugin
```go
// Example using a third-party cache plugin such as github.com/go-gorm/caches.
// Note: the configuration shown here is illustrative; the exact constructor
// and fields depend on the plugin version, so check its documentation.
import "github.com/go-gorm/caches"

db.Use(caches.New(caches.Config{
    Redis:      redisClient,
    ExpireTime: 10 * time.Minute,
}))
```
Raw SQL Optimization
Use Raw SQL for Complex Queries
```go
// Use raw SQL for complex queries
var results []struct {
    UserName  string
    PostCount int
}
db.Raw(`
    SELECT u.name AS user_name, COUNT(p.id) AS post_count
    FROM users u
    LEFT JOIN posts p ON u.id = p.user_id
    WHERE u.age > ?
    GROUP BY u.id
    HAVING COUNT(p.id) > ?
`, 18, 5).Scan(&results)
```
Transaction Optimization
Reduce Transaction Scope
```go
// Not recommended: one large transaction holding locks for a long time
tx := db.Begin()
// ... many operations ...
tx.Commit()

// Recommended: keep the transaction small
db.Transaction(func(tx *gorm.DB) error {
    // Only include the operations that must be atomic
    return nil
})
```
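For instance, a balance transfer where only the two writes need atomicity. A sketch; the `Account` model, `fromID`, `toID`, and `amount` are hypothetical:
```go
err := db.Transaction(func(tx *gorm.DB) error {
    // Both updates commit or roll back together
    if err := tx.Model(&Account{}).Where("id = ?", fromID).
        Update("balance", gorm.Expr("balance - ?", amount)).Error; err != nil {
        return err // returning an error rolls the transaction back
    }
    return tx.Model(&Account{}).Where("id = ?", toID).
        Update("balance", gorm.Expr("balance + ?", amount)).Error
})
```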
Database Design Optimization
1. Use Foreign Keys Judiciously
```go
type Order struct {
    gorm.Model
    UserID uint `gorm:"index"`
    User   User `gorm:"foreignKey:UserID;references:ID"`
}
```
2. Use Appropriate Data Types
```go
type User struct {
    ID        uint   `gorm:"primaryKey"`
    Age       int8   `gorm:"type:tinyint"` // tinyint saves space
    Status    string `gorm:"type:char(1)"` // fixed-length string
    CreatedAt time.Time
}
```
3. Partition Tables
For large tables, consider partitioning:
```sql
-- MySQL partitioning example
-- Note: MySQL requires the partitioning column to be part of every
-- unique key, so created_at must be included in the primary key.
CREATE TABLE orders (
    id BIGINT NOT NULL,
    created_at DATETIME NOT NULL,
    -- other fields
    PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```
Monitoring and Debugging
1. Enable Logging
```go
// Development: log every statement
db.Logger = logger.Default.LogMode(logger.Info)

// Production: log only slow queries and errors, using the logger's
// built-in slow-query threshold
db.Logger = logger.New(
    log.New(os.Stdout, "\r\n", log.LstdFlags),
    logger.Config{
        SlowThreshold: 200 * time.Millisecond, // queries slower than this are logged at Warn
        LogLevel:      logger.Warn,
    },
)
```
2. Use Explain to Analyze Queries
```go
// Run EXPLAIN through a raw query to inspect the query plan
var plan []map[string]interface{}
db.Raw("EXPLAIN SELECT * FROM users WHERE age > ?", 18).Scan(&plan)
fmt.Println(plan)
```
Performance Testing
Benchmark Testing
```go
func BenchmarkGORMQuery(b *testing.B) {
    db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
    if err != nil {
        b.Fatal(err)
    }
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        var users []User
        db.Select("id", "name").Limit(10).Find(&users)
    }
}
```
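Run it with `go test -bench=BenchmarkGORMQuery -benchmem` to include allocation counts. Keep in mind the timings include the network round trip to the database, so compare runs against the same server.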
Best Practices Summary
- Only query needed fields: Use Select to specify fields
- Use indexes wisely: Create indexes for columns that appear in frequent query conditions
- Avoid N+1 queries: Use Preload to preload associated data
- Batch operations: Use batch insert, update, delete
- Configure connection pool: Adjust connection pool parameters based on application load
- Use pagination: Avoid loading large amounts of data at once
- Optimize transactions: Keep transaction scope as small as possible
- Monitor performance: Enable logging and slow query monitoring
- Use raw SQL where it helps: Complex queries are often clearer and faster as hand-written SQL
- Regular maintenance: Regularly analyze tables and optimize indexes
Common Performance Issues
Q: How do I solve the N+1 query problem?
A: Use Preload to eager-load associated data, or use Joins to fetch the association in a single query.
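Joins eager-loads belongs-to and has-one associations in one query; for has-many (like a user's posts), Preload remains the right tool. A minimal sketch using the Order model from above:
```go
// Single query with a JOIN that also populates order.User
var orders []Order
db.Joins("User").Find(&orders)
```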
Q: What should I do if batch insert is slow?
A: Use the CreateInBatches method with a sensible batch size (usually 100-1000).
Q: How do I optimize a slow query?
A: Use EXPLAIN to analyze the query plan, check whether indexes are being used, and simplify the query conditions.
Q: How large should the connection pool be?
A: Tune it to your application's concurrency and the database server's capacity; a common starting point is setting MaxOpenConns to 2-4x the number of CPU cores.
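A sketch of that starting point; the 4x multiplier is the assumption to tune, and runtime.NumCPU reports the cores of the machine running the application, which may differ from the database server's:
```go
numCPU := runtime.NumCPU()
sqlDB.SetMaxOpenConns(numCPU * 4) // upper bound from the 2-4x heuristic
sqlDB.SetMaxIdleConns(numCPU * 2) // keep some connections warm
```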