
How to use SSH for automated operations? What are the common automation tools and scripts?

March 6, 21:32

SSH automation is an important part of modern DevOps and operations work. With it, you can manage servers in batches, deploy applications automatically, and build monitoring and alerting pipelines.

SSH Automation Tools

1. Ansible

Ansible is one of the most popular SSH automation tools, requiring no agent installation on target servers.

Installation:

bash
# Ubuntu/Debian
sudo apt-get install ansible

# CentOS/RHEL
sudo yum install ansible

# macOS
brew install ansible

# pip
pip install ansible

Configuration:

bash
# /etc/ansible/hosts
[webservers]
web1.example.com
web2.example.com
web3.example.com

[dbservers]
db1.example.com
db2.example.com

[all:vars]
ansible_user=admin
ansible_ssh_private_key_file=~/.ssh/ansible_key

Usage Examples:

bash
# Execute command
ansible webservers -m shell -a "uptime"

# Copy file
ansible webservers -m copy -a "src=/tmp/file dest=/tmp/"

# Install package
ansible all -m apt -a "name=nginx state=present"

# Execute Playbook
ansible-playbook deploy.yml
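Beyond the inventory, ad-hoc runs and playbooks can be tuned through an `ansible.cfg` file in the project directory. A minimal sketch; the values shown are illustrative choices, not defaults:

```ini
# ansible.cfg (project-level configuration; values are illustrative)
[defaults]
inventory = ./hosts          ; use a project-local inventory instead of /etc/ansible/hosts
forks = 10                   ; number of hosts contacted in parallel
host_key_checking = False    ; skip interactive host-key prompts in automation

[ssh_connection]
pipelining = True            ; fewer SSH round-trips per task
```

Disabling `host_key_checking` is convenient for lab automation, but in production it is safer to pre-populate `known_hosts` instead.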

Playbook Example:

yaml
# deploy.yml
---
- hosts: webservers
  become: yes
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Start nginx service
      service:
        name: nginx
        state: started
        enabled: yes
    - name: Copy configuration file
      copy:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

2. Fabric

Fabric is a Python library for simplifying SSH automation tasks.

Installation:

bash
pip install fabric

Usage Examples:

python
# fabfile.py
from fabric import Connection
from fabric import task

@task
def deploy(c):
    """Deploy application to server"""
    with Connection('user@server') as conn:
        # Update code
        conn.run('git pull origin main')
        # Install dependencies
        conn.run('pip install -r requirements.txt')
        # Restart service
        conn.sudo('systemctl restart myapp')

@task
def update(c, server):
    """Update the given server"""
    with Connection(f'user@{server}') as conn:
        conn.run('apt-get update && apt-get upgrade -y')

@task
def backup(c):
    """Backup database"""
    with Connection('user@server') as conn:
        conn.run('mysqldump -u root -p database > backup.sql')
        conn.get('backup.sql', './backups/')

Running:

bash
# Execute single task
fab deploy

# Execute task with parameters (Fabric 2.x syntax)
fab update --server=web1.example.com

# Execute multiple tasks
fab deploy backup

3. SSH Batch Scripts

For simpler needs, a plain shell script can loop over servers and run commands via SSH.

Example Script:

bash
#!/bin/bash
# batch_ssh.sh

SERVERS=(
  "user@server1.example.com"
  "user@server2.example.com"
  "user@server3.example.com"
)

COMMAND="uptime"

for server in "${SERVERS[@]}"; do
  echo "=== $server ==="
  ssh "$server" "$COMMAND"
  echo ""
done

Advanced Script:

bash
#!/bin/bash
# advanced_batch_ssh.sh

# Configuration
SERVERS_FILE="servers.txt"
SSH_KEY="$HOME/.ssh/batch_key"   # tilde does not expand inside quotes, so use $HOME
SSH_USER="admin"
TIMEOUT=10

# Function: Execute command
execute_command() {
  local server=$1
  local command=$2
  echo "Executing on $server: $command"
  timeout "$TIMEOUT" ssh -i "$SSH_KEY" -o StrictHostKeyChecking=no "$SSH_USER@$server" "$command"
  if [ $? -eq 0 ]; then
    echo "Success"
  else
    echo "Failed"
  fi
}

# Function: Parallel execution
parallel_execute() {
  local command=$1
  while read -r server; do
    execute_command "$server" "$command" &
  done < "$SERVERS_FILE"
  wait
}

# Main program
case "$1" in
  "update")
    parallel_execute "apt-get update && apt-get upgrade -y"
    ;;
  "restart")
    parallel_execute "systemctl restart nginx"
    ;;
  "status")
    parallel_execute "systemctl status nginx"
    ;;
  *)
    echo "Usage: $0 {update|restart|status}"
    exit 1
    ;;
esac

4. Pexpect

Pexpect is a Python module for automating interactive programs.

Installation:

bash
pip install pexpect

Usage Example:

python
import pexpect

def ssh_interactive(host, user, password, command):
    """Automate an interactive SSH session"""
    ssh = pexpect.spawn(f'ssh {user}@{host}')

    # Handle password prompt
    ssh.expect('password:')
    ssh.sendline(password)

    # Wait for the shell prompt; expect() takes a regex, so the $ must be escaped
    ssh.expect(r'\$')
    ssh.sendline(command)

    # Capture output up to the next prompt
    ssh.expect(r'\$')
    output = ssh.before.decode()
    print(output)

    ssh.close()

# Usage
ssh_interactive('server.example.com', 'user', 'password', 'ls -la')

Automation Scenarios

Scenario 1: Batch Deployment

bash
#!/bin/bash
# deploy.sh

APP_DIR="/var/www/myapp"
REPO="https://github.com/user/myapp.git"
BRANCH="main"

# Server list
SERVERS=(
  "web1.example.com"
  "web2.example.com"
  "web3.example.com"
)

for server in "${SERVERS[@]}"; do
  echo "Deploying to $server..."
  ssh admin@$server << EOF
cd $APP_DIR
git pull origin $BRANCH
npm install
npm run build
pm2 restart myapp
EOF
  echo "Deployment to $server completed"
done

Scenario 2: Batch Monitoring

python
#!/usr/bin/env python3
# monitor.py

import paramiko
import time

SERVERS = [
    {'host': 'server1.example.com', 'user': 'admin'},
    {'host': 'server2.example.com', 'user': 'admin'},
    {'host': 'server3.example.com', 'user': 'admin'},
]

def check_server(server):
    """Check server status"""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        ssh.connect(server['host'], username=server['user'])

        # Check CPU
        stdin, stdout, stderr = ssh.exec_command('top -bn1 | grep "Cpu(s)"')
        cpu_usage = stdout.read().decode()

        # Check memory
        stdin, stdout, stderr = ssh.exec_command('free -m')
        memory = stdout.read().decode()

        # Check disk
        stdin, stdout, stderr = ssh.exec_command('df -h')
        disk = stdout.read().decode()

        print(f"=== {server['host']} ===")
        print(f"CPU: {cpu_usage.strip()}")
        print(f"Memory: {memory.strip()}")
        print(f"Disk: {disk.strip()}")
        print("")
    except Exception as e:
        print(f"Error connecting to {server['host']}: {e}")
    finally:
        ssh.close()

# Main program
while True:
    for server in SERVERS:
        check_server(server)
    time.sleep(300)  # Check every 5 minutes

Scenario 3: Automated Backup

bash
#!/bin/bash
# backup.sh

BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d)
RETENTION_DAYS=7

SERVERS=(
  "db1.example.com"
  "db2.example.com"
)

for server in "${SERVERS[@]}"; do
  echo "Backing up $server..."

  # Create backup directory
  mkdir -p "$BACKUP_DIR/$server"

  # Backup database
  ssh admin@$server "mysqldump -u root -p'password' database | gzip" > \
    "$BACKUP_DIR/$server/database_$DATE.sql.gz"

  # Backup files
  rsync -avz --delete admin@$server:/var/www/ "$BACKUP_DIR/$server/files/"

  echo "Backup of $server completed"
done

# Clean up old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
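A script like this is typically scheduled with cron so backups run unattended. A hedged sketch; the install path and schedule are assumptions:

```bash
# Hypothetical crontab entry (crontab -e): run backup.sh daily at 02:00
# and append its output to a log file
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```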

Best Practices

1. Security

bash
# Use key authentication, disable password authentication
ssh-keygen -t ed25519 -f ~/.ssh/automation_key
ssh-copy-id -i ~/.ssh/automation_key.pub user@server

# Restrict key usage
# ~/.ssh/authorized_keys
command="/usr/local/bin/automation-wrapper.sh",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI...
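On the server side, password logins can be switched off entirely in `sshd_config` so that only keys are accepted. A minimal sketch of the relevant directives; apply with care and keep an existing session open while testing:

```
# /etc/ssh/sshd_config  (reload afterwards, e.g. sudo systemctl reload sshd)
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```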

2. Error Handling

bash
#!/bin/bash
# Error handling example

set -e          # Exit immediately on error
set -u          # Error on undefined variables
set -o pipefail # Fail if any command in a pipeline fails

# Function: Error handling
error_exit() {
  echo "Error: $1" >&2
  exit 1
}

# Usage
ssh user@server "command" || error_exit "SSH command failed"

3. Logging

bash
#!/bin/bash
# Logging example

LOG_FILE="/var/log/automation.log"

log() {
  local message=$1
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $message" | tee -a "$LOG_FILE"
}

# Usage
log "Starting deployment"
ssh user@server "command"
log "Deployment completed"

4. Configuration Management

ini
# Use a configuration file
# config.ini
[general]
user=admin
key=~/.ssh/automation_key
timeout=30

[servers]
web1=web1.example.com
web2=web2.example.com
db1=db1.example.com
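A file like this can be read from Python with the standard-library `configparser` module. A minimal sketch that parses the example above; the section and key names are taken from the file shown:

```python
import configparser

# The config.ini content from above, inlined so the sketch is self-contained
CONFIG_TEXT = """
[general]
user=admin
key=~/.ssh/automation_key
timeout=30

[servers]
web1=web1.example.com
web2=web2.example.com
db1=db1.example.com
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)  # for a real file, use config.read("config.ini")

user = config["general"]["user"]              # "admin"
timeout = config.getint("general", "timeout") # typed access: 30 as an int
servers = dict(config["servers"])             # {"web1": "web1.example.com", ...}

print(user, timeout, servers)
```

Keeping connection details in one file like this means scripts such as `batch_ssh.sh` and `monitor.py` can share a single source of truth for users, keys, and hosts.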

5. Idempotency

Ensure automation tasks can be repeated without side effects.

bash
#!/bin/bash
# Idempotency example

# Install nginx only if it is not already running
if ! systemctl is-active --quiet nginx; then
  apt-get install -y nginx
fi

# Update the configuration only if it has actually changed
if ! diff -q nginx.conf /etc/nginx/nginx.conf > /dev/null; then
  cp nginx.conf /etc/nginx/nginx.conf
  systemctl reload nginx
fi

Monitoring and Alerting

1. Automated Monitoring

bash
#!/bin/bash
# Monitoring script

ALERT_EMAIL="admin@example.com"
ALERT_SUBJECT="SSH Automation Alert"

check_service() {
  local server=$1
  local service=$2
  if ! ssh admin@$server "systemctl is-active --quiet $service"; then
    send_alert "$service is down on $server"
  fi
}

send_alert() {
  local message=$1
  echo "$message" | mail -s "$ALERT_SUBJECT" "$ALERT_EMAIL"
}

# Main program
for server in web1 web2 web3; do
  check_service "$server.example.com" nginx
  check_service "$server.example.com" mysql
done

2. Integration with Monitoring Tools

yaml
# Prometheus + Grafana
# prometheus.yml
scrape_configs:
  - job_name: 'ssh_automation'
    static_configs:
      - targets: ['localhost:9090']

SSH automation can greatly improve operational efficiency, but attention must be paid to security, reliability, and maintainability.

Tags: SSH