Downloading large files and resuming interrupted transfers are two of cURL's most useful capabilities: they let you manage bandwidth, save time, and recover gracefully from network failures.
Basic Download Functions
```bash
# Basic download (keep the remote filename, show progress)
curl -O https://example.com/large-file.zip

# Specify output filename
curl -o myfile.zip https://example.com/large-file.zip

# Follow redirects while downloading
curl -L -O https://example.com/download/file

# Silent download
curl -s -O https://example.com/large-file.zip
```
Resume Download
Resume capability is one of cURL's most powerful features, allowing a download to continue from where it left off after an interruption.
```bash
# Resume download (-C - auto-detects the resume point)
curl -C - -O https://example.com/large-file.zip

# Resume from an explicit offset (start at byte 1024)
curl -C 1024 -O https://example.com/large-file.zip

# Complete example: resume with retry
for i in {1..5}; do
  curl -C - -o large-file.zip https://example.com/large-file.zip && break
  echo "Attempt $i failed, retrying..."
  sleep 5
done
```
Chunked Download
Splitting large files into multiple parts for parallel download can significantly improve download speed.
```bash
# Download specific byte ranges
# First 1 MB (bytes 0-1048575)
curl -r 0-1048575 -o part1.zip https://example.com/large-file.zip

# Second 1 MB
curl -r 1048576-2097151 -o part2.zip https://example.com/large-file.zip

# Everything from byte 2097152 onward
curl -r 2097152- -o part3.zip https://example.com/large-file.zip

# Merge the chunks
cat part1.zip part2.zip part3.zip > complete.zip

# Remove the chunk files
rm part1.zip part2.zip part3.zip
```
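The merge step silently produces a truncated archive if any chunk failed partway. A minimal sanity check is to compare the reassembled size against the byte count you expected; the helper below is a sketch (the function name is mine), and in practice the expected total would come from the server's `Content-Length` header.

```bash
# Sketch: confirm a reassembled file has the expected total size.
# Takes the file and the expected byte count, so it needs no network.
merged_size_ok() {   # usage: merged_size_ok FILE EXPECTED_BYTES
  [ "$(wc -c < "$1")" -eq "$2" ]
}

# Typical flow (EXPECTED taken from the server's Content-Length):
# cat part1.zip part2.zip part3.zip > complete.zip
# merged_size_ok complete.zip "$EXPECTED" && echo "merge looks complete"
```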
Parallel Download Script
```bash
#!/bin/bash
# Parallel chunked download script

URL="https://example.com/large-file.zip"
FILE="large-file.zip"
CHUNKS=4

FILE_SIZE=$(curl -sI "$URL" | grep -i content-length | awk '{print $2}' | tr -d '\r')
CHUNK_SIZE=$((FILE_SIZE / CHUNKS))

echo "File size: $FILE_SIZE bytes"
echo "Chunk size: $CHUNK_SIZE bytes"

# Download chunks in parallel
for i in $(seq 0 $((CHUNKS-1))); do
  START=$((i * CHUNK_SIZE))
  if [ $i -eq $((CHUNKS-1)) ]; then
    END=""                          # last chunk runs to the end of the file
  else
    END=$((START + CHUNK_SIZE - 1))
  fi
  echo "Downloading chunk $i: $START-$END"
  curl -r "$START-$END" -o "${FILE}.part$i" "$URL" &
done

# Wait for all downloads to complete
wait

# Merge parts (truncate any stale output file first)
: > "$FILE"
for i in $(seq 0 $((CHUNKS-1))); do
  cat "${FILE}.part$i" >> "$FILE"
  rm "${FILE}.part$i"
done

echo "Download complete: $FILE"
```
Rate Limiting
```bash
# Limit download speed to 100 KB/s
curl --limit-rate 100K -O https://example.com/large-file.zip

# Limit to 1 MB/s
curl --limit-rate 1M -O https://example.com/large-file.zip

# Rate limit + resume
curl --limit-rate 500K -C - -O https://example.com/large-file.zip
```
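When choosing a `--limit-rate` cap, it helps to estimate how long a transfer will take at that speed. The helper below is a sketch (pure arithmetic, function name is mine); note that cURL's `K` and `M` suffixes are multiples of 1024, so `500K` is 512000 bytes/s.

```bash
# Sketch: estimate transfer time at a fixed rate cap, rounding up.
eta_seconds() {   # usage: eta_seconds SIZE_BYTES RATE_BYTES_PER_SEC
  echo $(( ($1 + $2 - 1) / $2 ))
}

# e.g. a 100 MB file at --limit-rate 500K (512000 B/s):
# eta_seconds 104857600 512000   # → 205
```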
Download Progress Control
```bash
# Show a progress bar
curl --progress-bar -O https://example.com/large-file.zip

# -# is shorthand for --progress-bar
curl -# -o large-file.zip https://example.com/large-file.zip

# Silent download (no progress output)
curl -s -o large-file.zip https://example.com/large-file.zip

# Download statistics via -w
curl -w "\nDownloaded: %{size_download} bytes\nSpeed: %{speed_download} bytes/s\nTime: %{time_total}s\n" \
  -o large-file.zip \
  -s https://example.com/large-file.zip
```
Server Support Detection
```bash
# Check whether the server supports resume
curl -I https://example.com/large-file.zip | grep -i "accept-ranges"
# "Accept-Ranges: bytes" means resume is supported

# Check file size
curl -I https://example.com/large-file.zip | grep -i "content-length"
```
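The header check above can be folded into a small wrapper that only passes `-C -` when the server actually advertises byte-range support. This is a sketch (function names are mine); the range test reads response headers from stdin so it can be exercised without a live server.

```bash
#!/bin/bash
# Sketch: resume only when the server advertises byte-range support.

supports_ranges() {   # reads HTTP response headers on stdin
  grep -qi '^accept-ranges: *bytes'
}

smart_download() {    # usage: smart_download URL OUTPUT
  local url="$1" out="$2"
  if curl -sIL "$url" | supports_ranges; then
    echo "Server supports ranges; resuming with -C -"
    curl -C - -o "$out" "$url"
  else
    echo "No range support; downloading from the beginning"
    curl -o "$out" "$url"
  fi
}

# Example (placeholder URL from this article):
# smart_download https://example.com/large-file.zip large-file.zip
```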
Download Recovery Strategy
```bash
#!/bin/bash
# Smart download recovery script

URL="https://example.com/large-file.zip"
OUTPUT="large-file.zip"
MAX_RETRIES=10
RETRY_COUNT=0

while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do
  echo "Download attempt $((RETRY_COUNT + 1))..."
  if curl -C - -o "$OUTPUT" "$URL"; then
    echo "Download completed successfully!"
    exit 0
  else
    RETRY_COUNT=$((RETRY_COUNT + 1))
    echo "Download failed, waiting to retry..."
    sleep $((RETRY_COUNT * 2))   # back off a little longer after each failure
  fi
done

echo "Download failed after $MAX_RETRIES attempts"
exit 1
```
Multi-threaded Download Tools Comparison
| Feature | cURL | wget | aria2 |
|---|---|---|---|
| Resume | ✅ | ✅ | ✅ |
| Multi-thread | Script needed | ❌ | ✅ |
| Rate limit | ✅ | ✅ | ✅ |
| Auto retry | Script needed | ✅ | ✅ |
| BitTorrent | ❌ | ❌ | ✅ |
Best Practices
```bash
# 1. Always use resume for large files
curl -C - -O https://example.com/large-file.zip

# 2. Combine with cURL's built-in retry mechanism
curl --retry 5 --retry-delay 2 -C - -O https://example.com/large-file.zip

# 3. Rate limit to avoid saturating the link
curl --limit-rate 2M -C - -O https://example.com/large-file.zip

# 4. Verify download integrity
curl -o file.zip https://example.com/file.zip
md5sum file.zip  # compare with the server-provided checksum

# 5. Background download
nohup curl -C - -o large-file.zip https://example.com/large-file.zip > download.log 2>&1 &
```
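Point 4 can be made more concrete with SHA-256, which most projects now publish instead of MD5. The helper below is a sketch (function name is mine, and the companion `.sha256` file is an assumption; check what your mirror actually publishes).

```bash
# Sketch: verify a downloaded file against an expected SHA-256 hash.
verify_sha256() {   # usage: verify_sha256 FILE EXPECTED_HASH
  echo "$2  $1" | sha256sum -c --quiet -
}

# Typical flow:
# curl -o file.zip https://example.com/file.zip
# verify_sha256 file.zip "<hash from the release page>" && echo "OK"
```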
Common Issues
```bash
# Issue 1: Server doesn't support resume
# Solution: re-download from scratch, or use a multi-source tool such as aria2

# Issue 2: Download speed too slow
# Solution: use the chunked parallel download script, or switch to a faster mirror

# Issue 3: Insufficient disk space
# Solution: check free space first, or stream the data instead of saving it
df -h

# Issue 4: Download interrupted
# Solution: use -C - for auto-resume
curl -C - -O https://example.com/large-file.zip

# Issue 5: Filename encoding issues
# Solution: use -o to set the local filename explicitly
curl -o filename.txt https://example.com/file
```