Export a MySQL dump from AWS RDS via the AWS command line


I'm trying to automate MySQL backups from AWS RDS. I think using the AWS command line would help, since I could trigger the job with crontab on an EC2 Red Hat instance.

So the question is: how do I connect to RDS, back up MySQL, store the dump on EC2 (or copy it to S3), and have it run every night?

I'm not familiar with the AWS command line, so suggestions and code snippets are welcome.

Thanks

You can run the backup directly from EC2 using mysqldump:

  • Edit the RDS instance's security group and add an inbound rule:

    Type: MySQL/Aurora

    Protocol: TCP

    Port range: 3306

    Source: Custom, the EC2 instance's security group ID

    Example (Source: Custom sg_451caa43)
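If you want to script this step too, the same inbound rule can be added with the AWS CLI (a sketch, assuming the CLI is configured with sufficient EC2 permissions; both group IDs below are placeholders):

```shell
# Allow MySQL (TCP 3306) into the RDS security group
# from the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-RDS-GROUP-ID \
    --protocol tcp \
    --port 3306 \
    --source-group sg-EC2-GROUP-ID
```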

  • Connect to the EC2 instance over SSH:

    [MacBook Pro:user]$ ssh -i keypair.pem ec2-user@PUBLIC_IP

  • Install the mysql client on the EC2 instance:

    [ec2-user@ip-170-10-20-30]$ sudo yum install mysql

  • Try the mysqldump command:

    [ec2-user@ip-170-10-20-30]$ mysqldump -h RDS_ENDPOINT -u MASTER_USER -p DATABASE_NAME > backup.sql

    [ec2-user@ip-170-10-20-30]$ mysqldump -h db_test.cdsludsd.us-west-2.rds.amazonaws.com -u admin -p my_database > backup_my_database.sql

  • Create a cron job:

    • Create a cron.sh file on the EC2 instance with the content below:

      mysqldump -h RDS_ENDPOINT -u MASTER_USER -p DATABASE_NAME > /backup/bkp.$(date +%Y%m%d).sql

      mysqldump -h db_test.cdsludsd.us-west-2.rds.amazonaws.com -u admin -p my_database > backup_my_database.sql

    • Create another file, move_to_s3.sh, with the content below:

          #!/bin/bash
          echo "starting upload to s3 ..."
          TODAY=$(date +%Y%m%d);
          month=$(date +"%m");
          year=$(date +"%Y");
          bucket="mybkp"
          # Note: this expects the day's dump to have been archived as /backup/bkp.YYYYMMDD.tar
          file="$year/$month/bkp.$TODAY.tar"
          filepath="/backup/bkp.$TODAY.tar"
          resource="/${bucket}/${file}"
          contentType="application/x-compressed-tar"
          dateValue=`date -R`
          stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
          s3Key=AKIAI7BE3RKNSsdfsdfASF
          s3Secret=sdfksdfkJsdfgd76sdfkljhdfsdfsdfsdf
          signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`;

          RESPONSE=$(curl -w "%{http_code}" -s -X PUT -T "${filepath}" \
                  -H "Host: ${bucket}.s3.amazonaws.com" \
                  -H "Date: ${dateValue}" \
                  -H "Content-Type: ${contentType}" \
                  -H "Authorization: AWS ${s3Key}:${signature}" \
                  https://${bucket}.s3.amazonaws.com/${file} -o /dev/null $1);

          echo $RESPONSE;
          if [ $RESPONSE -ne 200 ] ; then
              echo "There was an issue transferring the DB backup file to S3. Error code: $RESPONSE" | mail -s "Issue on transfer to S3" test@gmail.com;
          else
              rm $filepath;
          fi
          echo "finished upload."
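If the AWS CLI is available on the instance, a simpler alternative to the hand-signed curl upload is `aws s3 cp` (a sketch, assuming the instance has an IAM role or configured credentials allowing s3:PutObject on the bucket; the bucket and paths follow the script above):

```shell
#!/bin/bash
# Upload the day's archive and delete the local copy only on success.
TODAY=$(date +%Y%m%d)
aws s3 cp "/backup/bkp.$TODAY.tar" \
    "s3://mybkp/$(date +%Y)/$(date +%m)/bkp.$TODAY.tar" \
    && rm "/backup/bkp.$TODAY.tar"
```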
      
    Schedule both in cron, half an hour apart.
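The "half an hour apart" schedule might look like this in crontab (edit with `crontab -e`; the script paths are assumptions, adjust them to wherever you saved the two files):

```
# min hour dom mon dow  command
0    1    *   *   *     /home/ec2-user/cron.sh
30   1    *   *   *     /home/ec2-user/move_to_s3.sh
```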

    Hope it helps :)


    Possible duplicate.

    Is this fast enough? I have about 100 GB of data in one database, and about five such databases. How long do you think this would take?
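For databases of that size, a plain mysqldump over the network can take many hours. A common mitigation (a hedged sketch, reusing the endpoint and names from the answer above) is to stream the dump through gzip and use InnoDB-friendly flags:

```shell
# --single-transaction: consistent InnoDB snapshot without locking tables
# --quick: stream rows instead of buffering whole tables in memory
# Piping through gzip means the uncompressed dump never touches disk.
mysqldump -h db_test.cdsludsd.us-west-2.rds.amazonaws.com -u admin -p \
    --single-transaction --quick my_database \
    | gzip > /backup/bkp.$(date +%Y%m%d).sql.gz
```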