bump
parent 06c91e6dd7, commit 409346e6dd

@@ -0,0 +1,187 @@

# Migration Guide: macmini3 → ingress.nixc.us

This guide explains how to migrate the haste.nixc.us services from macmini3 to ingress.nixc.us.

## Prerequisites

1. **SSH Access**: Ensure you have SSH key-based authentication set up for both hosts:
   - `macmini3` (source)
   - `ingress.nixc.us` (target)

   The script uses non-interactive SSH commands like:

   ```bash
   ssh macmini3 "command"
   ssh ingress.nixc.us "command"
   ```

2. **Docker Swarm**: Both hosts should be part of the same Docker Swarm cluster, or you need to deploy to the target's Swarm manager. (A quick way to verify this is sketched after this list.)

3. **Volume Access**: The script needs to access Docker volumes on both hosts.

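If you are unsure whether both nodes are already in the same Swarm, a minimal check (assuming your user can run `docker` over SSH on each host) looks like this:

```bash
# Show Swarm state on each host; "active" means the node is part of a swarm
ssh macmini3 "docker info --format '{{.Swarm.LocalNodeState}} {{.Swarm.NodeID}}'"
ssh ingress.nixc.us "docker info --format '{{.Swarm.LocalNodeState}} {{.Swarm.NodeID}}'"

# On a manager node, list all members so you can confirm both hosts appear
ssh ingress.nixc.us "docker node ls"
```
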
## Migration Steps

### Option 1: Automated Migration (Recommended)

Run the full migration script, which handles everything:

```bash
./migrate-to-ingress.sh
```

The script will:

1. Verify SSH connectivity to both hosts
2. Back up the Docker volumes (`redis_data` and `public_system`) from macmini3
3. Transfer the backups to ingress.nixc.us
4. Restore the volumes on ingress.nixc.us
5. Update `stack.production.yml` to use the new hostname
6. Deploy the stack to the new location
7. Verify the deployment
8. Clean up temporary files

### Option 2: Manual Migration

If you prefer to migrate manually or need more control:

#### Step 1: Update Stack Configuration

```bash
# Auto-detect hostname
./update-stack-hostname.sh

# Or specify hostname explicitly
./update-stack-hostname.sh ingress
```

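The change this step makes is to the node placement constraint in `stack.production.yml`. To confirm the rewrite took effect, you can grep for the constraint before and after running the script; the output shown in the comments is illustrative:

```bash
# Before: the constraint should still point at the old node
grep -n "node.hostname" stack.production.yml   # e.g. node.hostname == macmini3

# After running ./update-stack-hostname.sh ingress
grep -n "node.hostname" stack.production.yml   # e.g. node.hostname == ingress
```
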
#### Step 2: Back Up Volumes from macmini3

```bash
BACKUP_DIR="/tmp/haste-migration-$(date +%Y%m%d-%H%M%S)"
ssh macmini3 "mkdir -p ${BACKUP_DIR}"

# Backup redis_data
ssh macmini3 "docker run --rm -v haste_redis_data:/source:ro -v ${BACKUP_DIR}:/backup alpine:latest tar czf /backup/redis_data.tar.gz -C /source ."

# Backup public_system
ssh macmini3 "docker run --rm -v haste_public_system:/source:ro -v ${BACKUP_DIR}:/backup alpine:latest tar czf /backup/public_system.tar.gz -C /source ."
```

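Before transferring anything, it is worth confirming that both archives were actually written and are readable (the paths match the `BACKUP_DIR` defined above):

```bash
# List the backups and their sizes on the source host
ssh macmini3 "ls -lh ${BACKUP_DIR}/"

# Optionally verify the archives are intact
ssh macmini3 "tar tzf ${BACKUP_DIR}/redis_data.tar.gz >/dev/null && echo redis_data.tar.gz OK"
ssh macmini3 "tar tzf ${BACKUP_DIR}/public_system.tar.gz >/dev/null && echo public_system.tar.gz OK"
```
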
#### Step 3: Transfer Backups

```bash
ssh ingress.nixc.us "mkdir -p ${BACKUP_DIR}"
scp macmini3:${BACKUP_DIR}/*.tar.gz ingress.nixc.us:${BACKUP_DIR}/
```

Note that this is a remote-to-remote copy. If macmini3 cannot reach ingress.nixc.us directly over SSH, add `-3` so the transfer is routed through your local machine.

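The automated script takes a slightly different approach and streams each archive through your workstation instead of copying host-to-host, which avoids needing SSH keys between the two servers:

```bash
# Stream each backup from the source host to the target host via the local machine
for f in redis_data public_system; do
  ssh macmini3 "cat ${BACKUP_DIR}/${f}.tar.gz" | \
    ssh ingress.nixc.us "cat > ${BACKUP_DIR}/${f}.tar.gz"
done
```
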
#### Step 4: Restore Volumes on ingress.nixc.us

```bash
# Create volumes
ssh ingress.nixc.us "docker volume create haste_redis_data"
ssh ingress.nixc.us "docker volume create haste_public_system"

# Restore redis_data
ssh ingress.nixc.us "docker run --rm -v haste_redis_data:/target -v ${BACKUP_DIR}:/backup alpine:latest sh -c 'rm -rf /target/* && tar xzf /backup/redis_data.tar.gz -C /target'"

# Restore public_system
ssh ingress.nixc.us "docker run --rm -v haste_public_system:/target -v ${BACKUP_DIR}:/backup alpine:latest sh -c 'rm -rf /target/* && tar xzf /backup/public_system.tar.gz -C /target'"
```

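A quick sanity check after the restore is to list the contents of each volume through a throwaway container; you should see the same files that were on macmini3:

```bash
# Inspect the restored volumes (read-only) on the target host
ssh ingress.nixc.us "docker run --rm -v haste_redis_data:/data:ro alpine:latest ls -la /data"
ssh ingress.nixc.us "docker run --rm -v haste_public_system:/data:ro alpine:latest ls -la /data"
```
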
#### Step 5: Deploy Updated Stack

```bash
scp stack.production.yml ingress.nixc.us:/tmp/
ssh ingress.nixc.us "docker stack deploy --with-registry-auth -c /tmp/stack.production.yml haste"
```

#### Step 6: Verify Deployment

```bash
ssh ingress.nixc.us "docker stack services haste"
```

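If any service shows `0/1` replicas, the per-task state and logs usually explain why. The service name below is illustrative; use the names reported by `docker stack services haste`:

```bash
# Show task state and any scheduling errors for every service in the stack
ssh ingress.nixc.us "docker stack ps haste --no-trunc"

# Tail the logs of a specific service (replace haste_web with an actual service name)
ssh ingress.nixc.us "docker service logs --tail 50 haste_web"
```
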
#### Step 7: Test Service

Visit https://haste.nixc.us and verify it's working correctly.

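A scripted smoke test from your workstation might look like the following; this assumes the site serves plain HTTPS, and the second command assumes it exposes the standard hastebin `/documents` API:

```bash
# Expect an HTTP 200 from the front page
curl -fsS -o /dev/null -w "haste.nixc.us -> %{http_code}\n" https://haste.nixc.us/

# Optionally exercise the paste API end to end (path assumes the standard hastebin /documents endpoint)
echo "migration smoke test" | curl -fsS -X POST --data-binary @- https://haste.nixc.us/documents
```
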
#### Step 8: Clean Up Old Deployment (when ready)

```bash
ssh macmini3 "docker stack rm haste"
ssh macmini3 "docker volume rm haste_redis_data haste_public_system"
```

## Configuration

The migration script supports environment variables for customization:

```bash
# Customize source/target hosts
SOURCE_HOST=macmini3 TARGET_HOST=ingress.nixc.us ./migrate-to-ingress.sh

# Customize stack name (default: haste)
STACK_NAME=haste-production ./migrate-to-ingress.sh

# Specify target hostname explicitly
TARGET_HOSTNAME=ingress ./migrate-to-ingress.sh

# Enable automatic cleanup of old deployment
CLEANUP_OLD=true ./migrate-to-ingress.sh
```

## Troubleshooting

### SSH Connection Issues

If you get SSH connection errors:

- Ensure SSH keys are set up: `ssh-copy-id user@macmini3` and `ssh-copy-id user@ingress.nixc.us`
- Test connectivity: `ssh macmini3 "echo OK"` and `ssh ingress.nixc.us "echo OK"`
- The script uses `-o BatchMode=yes` for non-interactive SSH, which requires key-based auth (no password prompts)

### Volume Not Found

If volumes are not found, check the actual volume names:

```bash
ssh macmini3 "docker volume ls | grep haste"
```

Docker Swarm prefixes volumes with the stack name, so `redis_data` becomes `haste_redis_data` if the stack is named `haste`.

### Hostname Detection

The script auto-detects the target hostname. If the detected value is incorrect, specify it explicitly:

```bash
TARGET_HOSTNAME=ingress ./migrate-to-ingress.sh
```

### Stack Deployment Issues

If deployment fails, check the following (quick checks for each point are sketched after this list):

- Docker Swarm is initialized on the target
- You have access to the Swarm manager
- Registry authentication is set up

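Hedged examples of those checks; replace `REGISTRY` with whatever registry your images are pulled from (it is not named in this guide):

```bash
# Confirm the target node is in a swarm and is a manager (manager=true can accept stack deploys)
ssh ingress.nixc.us "docker info --format '{{.Swarm.LocalNodeState}} manager={{.Swarm.ControlAvailable}}'"

# List swarm nodes; this only works on a manager
ssh ingress.nixc.us "docker node ls"

# Log in to your image registry on the target so --with-registry-auth can forward credentials
ssh -t ingress.nixc.us "docker login REGISTRY"
```
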
## Rollback

If something goes wrong, you can roll back:

1. Restore the original stack file:

   ```bash
   mv stack.production.yml.bak stack.production.yml
   ```

2. Redeploy to macmini3:

   ```bash
   ssh macmini3 "docker stack deploy --with-registry-auth -c stack.production.yml haste"
   ```

3. Remove volumes from ingress.nixc.us if needed:

   ```bash
   ssh ingress.nixc.us "docker volume rm haste_redis_data haste_public_system"
   ```

## Notes

- The migration script creates backups before making changes
- The original `stack.production.yml` is backed up with a `.bak` extension
- Temporary backup files are cleaned up automatically
- The old deployment on macmini3 is NOT removed automatically (set `CLEANUP_OLD=true` to enable cleanup)

@@ -0,0 +1,238 @@

#!/bin/bash
set -euo pipefail

# Migration script to move haste.nixc.us services from macmini3 to ingress.nixc.us
# This script uses non-interactive SSH and handles Docker volume migration

# Configuration
SOURCE_HOST="${SOURCE_HOST:-macmini3}"
TARGET_HOST="${TARGET_HOST:-ingress.nixc.us}"
STACK_NAME="${STACK_NAME:-haste}"
TARGET_HOSTNAME="${TARGET_HOSTNAME:-}" # Will be auto-detected if not set
VOLUMES=("redis_data" "public_system")
BACKUP_DIR="/tmp/haste-migration-$(date +%Y%m%d-%H%M%S)"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# SSH helper function for non-interactive commands
ssh_cmd() {
    local host="$1"
    shift
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" "$@"
}

# Check if SSH key is available
check_ssh() {
    log_info "Checking SSH connectivity..."
    if ! ssh_cmd "${SOURCE_HOST}" "echo 'SSH to source OK'" 2>/dev/null; then
        log_error "Cannot connect to ${SOURCE_HOST} via SSH. Please ensure SSH keys are set up."
        exit 1
    fi
    if ! ssh_cmd "${TARGET_HOST}" "echo 'SSH to target OK'" 2>/dev/null; then
        log_error "Cannot connect to ${TARGET_HOST} via SSH. Please ensure SSH keys are set up."
        exit 1
    fi
    log_info "SSH connectivity verified"
}

# Backup volumes from source host
backup_volumes() {
    log_info "Backing up volumes from ${SOURCE_HOST}..."

    ssh_cmd "${SOURCE_HOST}" "mkdir -p ${BACKUP_DIR}"

    for volume in "${VOLUMES[@]}"; do
        log_info "Backing up volume: ${volume}"

        # Docker Swarm prefixes volumes with stack name
        VOLUME_NAME="${STACK_NAME}_${volume}"

        # Check if volume exists
        if ! ssh_cmd "${SOURCE_HOST}" "docker volume inspect ${VOLUME_NAME} >/dev/null 2>&1"; then
            log_warn "Volume ${VOLUME_NAME} not found, skipping..."
            continue
        fi

        # Create backup using a temporary container
        # (run inside `if` so the error branch is reachable under `set -e`)
        if ssh_cmd "${SOURCE_HOST}" "docker run --rm -v ${VOLUME_NAME}:/source:ro -v ${BACKUP_DIR}:/backup alpine:latest tar czf /backup/${volume}.tar.gz -C /source ."; then
            log_info "Successfully backed up ${volume}"
        else
            log_error "Failed to backup ${volume}"
            exit 1
        fi
    done

    log_info "Volume backup completed"
}

# Transfer backups to target host
transfer_backups() {
    log_info "Transferring backups to ${TARGET_HOST}..."

    ssh_cmd "${TARGET_HOST}" "mkdir -p ${BACKUP_DIR}"

    for volume in "${VOLUMES[@]}"; do
        log_info "Transferring ${volume}.tar.gz..."
        # Stream the archive through the local machine so the two hosts
        # do not need direct SSH access to each other
        if ssh_cmd "${SOURCE_HOST}" "cat ${BACKUP_DIR}/${volume}.tar.gz" | \
            ssh_cmd "${TARGET_HOST}" "cat > ${BACKUP_DIR}/${volume}.tar.gz"; then
            log_info "Successfully transferred ${volume}"
        else
            log_error "Failed to transfer ${volume}"
            exit 1
        fi
    done

    log_info "Backup transfer completed"
}

# Restore volumes on target host
restore_volumes() {
    log_info "Restoring volumes on ${TARGET_HOST}..."

    for volume in "${VOLUMES[@]}"; do
        log_info "Restoring volume: ${volume}"

        # Docker Swarm prefixes volumes with stack name
        VOLUME_NAME="${STACK_NAME}_${volume}"

        # Create volume if it doesn't exist
        ssh_cmd "${TARGET_HOST}" "docker volume create ${VOLUME_NAME}" || true

        # Restore backup using a temporary container
        if ssh_cmd "${TARGET_HOST}" "docker run --rm -v ${VOLUME_NAME}:/target -v ${BACKUP_DIR}:/backup alpine:latest sh -c 'rm -rf /target/* && tar xzf /backup/${volume}.tar.gz -C /target'"; then
            log_info "Successfully restored ${volume}"
        else
            log_error "Failed to restore ${volume}"
            exit 1
        fi
    done

    log_info "Volume restoration completed"
}

# Update stack.production.yml to use new hostname
update_stack_config() {
    log_info "Updating stack.production.yml..."

    # Determine target hostname if not already set
    if [ -z "${TARGET_HOSTNAME}" ]; then
        # Try to get the actual hostname from the target
        TARGET_HOSTNAME=$(ssh_cmd "${TARGET_HOST}" "hostname" 2>/dev/null || echo "ingress")
        log_info "Target hostname auto-detected: ${TARGET_HOSTNAME}"
    else
        log_info "Using specified target hostname: ${TARGET_HOSTNAME}"
    fi

    # Update the stack file. `sed -i.bak` (suffix attached to -i) works with
    # both BSD sed on macOS and GNU sed on Linux, so no OS check is needed.
    sed -i.bak "s/node.hostname == macmini3/node.hostname == ${TARGET_HOSTNAME}/g" stack.production.yml

    log_info "Stack configuration updated"
    log_warn "Backup of original file saved as stack.production.yml.bak"
}

# Deploy to new location
deploy_to_target() {
    log_info "Deploying stack to ${TARGET_HOST}..."

    # Copy updated stack file to target
    scp -o BatchMode=yes stack.production.yml "${TARGET_HOST}:/tmp/stack.production.yml"

    # Deploy on target (assuming Docker Swarm manager is accessible)
    if ssh_cmd "${TARGET_HOST}" "docker stack deploy --with-registry-auth -c /tmp/stack.production.yml ${STACK_NAME}"; then
        log_info "Stack deployed successfully"
    else
        log_error "Failed to deploy stack"
        exit 1
    fi
}

# Verify deployment
verify_deployment() {
    log_info "Verifying deployment..."

    sleep 10 # Give services time to start

    ssh_cmd "${TARGET_HOST}" "docker stack services ${STACK_NAME} --format 'table {{.Name}}\t{{.Replicas}}\t{{.Image}}'"

    log_info "Deployment verification completed"
}

# Optional cleanup function
do_cleanup_old() {
    if [ "${CLEANUP_OLD:-false}" = "true" ]; then
        log_info "Removing old deployment from ${SOURCE_HOST}..."
        ssh_cmd "${SOURCE_HOST}" "docker stack rm ${STACK_NAME}" || log_warn "Stack removal failed or already removed"
        sleep 5
        ssh_cmd "${SOURCE_HOST}" "docker volume rm ${STACK_NAME}_redis_data ${STACK_NAME}_public_system" || log_warn "Volume removal failed or already removed"
        log_info "Old deployment cleanup completed"
    else
        log_warn "Skipping cleanup of old deployment on ${SOURCE_HOST}"
        log_warn "To remove old deployment, run manually:"
        log_warn "  ssh ${SOURCE_HOST} 'docker stack rm ${STACK_NAME}'"
        log_warn "  ssh ${SOURCE_HOST} 'docker volume rm ${STACK_NAME}_redis_data ${STACK_NAME}_public_system'"
        log_warn ""
        log_warn "Or set CLEANUP_OLD=true environment variable to auto-cleanup"
    fi
}

# Cleanup temporary files
cleanup_temp_files() {
    log_info "Cleaning up temporary files..."
    ssh_cmd "${SOURCE_HOST}" "rm -rf ${BACKUP_DIR}" || true
    ssh_cmd "${TARGET_HOST}" "rm -rf ${BACKUP_DIR}" || true
    log_info "Cleanup completed"
}

# Main execution
main() {
    log_info "Starting migration from ${SOURCE_HOST} to ${TARGET_HOST}"

    check_ssh
    backup_volumes
    transfer_backups
    restore_volumes
    update_stack_config
    deploy_to_target
    verify_deployment
    cleanup_temp_files
    do_cleanup_old

    log_info "Migration completed successfully!"
    log_warn "Remember to:"
    log_warn "  1. Test the service at https://haste.nixc.us"
    log_warn "  2. Commit the updated stack.production.yml"
    log_warn "  3. Clean up old deployment on ${SOURCE_HOST} when ready"
}

# Run main function
main "$@"

@@ -0,0 +1,32 @@

#!/bin/bash
set -euo pipefail

# Simple script to update stack.production.yml hostname constraint
# Usage: ./update-stack-hostname.sh [target_hostname]
# Example: ./update-stack-hostname.sh ingress

TARGET_HOSTNAME="${1:-ingress}"
STACK_FILE="stack.production.yml"

if [ ! -f "${STACK_FILE}" ]; then
    echo "Error: ${STACK_FILE} not found"
    exit 1
fi

echo "Updating ${STACK_FILE} to use hostname: ${TARGET_HOSTNAME}"

# Create backup
cp "${STACK_FILE}" "${STACK_FILE}.bak.$(date +%Y%m%d-%H%M%S)"

# Update hostname (handles both macOS and Linux sed)
if [[ "$OSTYPE" == "darwin"* ]]; then
    sed -i '' "s/node.hostname == macmini3/node.hostname == ${TARGET_HOSTNAME}/g" "${STACK_FILE}"
else
    sed -i "s/node.hostname == macmini3/node.hostname == ${TARGET_HOSTNAME}/g" "${STACK_FILE}"
fi

echo "Updated ${STACK_FILE}"
echo "Backup saved with timestamp suffix"
echo ""
echo "Changes:"
git diff "${STACK_FILE}" || echo "No git diff available"