Amazon Web Services: backing up EFS to S3 using Data Pipeline

I am writing a solution that backs up an EFS file system to S3. When these backups run, the previous backups should be deleted. I am implementing this with Terraform. In Terraform I create the Data Pipeline through a CloudFormation stack. I also create two S3 buckets: one for the Data Pipeline's logs and one for the backups of the EFS volume. When my Terraform code is executed, everything is created without problems except the Data Pipeline, which fails with a ROLLBACK_COMPLETE error. This is the exact error:

ROLLBACK_COMPLETE: ["The following resource(s) failed to create: [DataPipelineEFSBackup]. . Rollback requested by user." "Pipeline Definition failed to validate because of following Errors: [{ObjectId = 'ShellCommandActivityObj', errors = [Not a valid S3 Path. It must be of the form s3://bucket/key]}, {ObjectId = 'EC2ResourceObj', errors = [Not a valid S3 Path. It must be of the form s3://bucket/key]}] and Warnings: []"]
I don't understand why this is happening. Below is the code that creates the S3 buckets and the CloudFormation stack, along with the part of the Data Pipeline script that produces this error. Any suggestions would be much appreciated.
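My reading of the error is that some field on ShellCommandActivityObj and EC2ResourceObj, presumably the log location, is receiving a bare bucket name where Data Pipeline expects a full s3://bucket/key URI. If the template wires the log bucket into a pipelineLogUri field (my assumption; that part of the template is not shown below), a valid value would have to look something like:

        - Key: pipelineLogUri
          StringValue: "s3://#{myS3LogBucket}/logs/"

whereas the myS3LogBucket value I pass in below (aws_s3_bucket.logs.id) is just the bucket name.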

S3 buckets

resource "aws_s3_bucket" "backup" {


bucket        = var.s3_backup
  force_destroy = true

  versioning {
    enabled = true
  }

  lifecycle_rule {
    enabled = true
    prefix  = "efs"

    noncurrent_version_expiration {
      days = var.noncurrent_version_expiration_days
    }
  }
}

resource "aws_s3_bucket" "logs" {
  bucket        = var.s3_logs
  force_destroy = true
}
Data Pipeline

resource "aws_cloudformation_stack" "datapipeline" {


name          = "${var.name}-datapipeline-stack"
  template_body = file("scripts/templates/datapipeline.yml")

  parameters = {
    myInstanceType             = var.datapipeline_config["instance_type"]
    mySubnetId                 = aws_subnet.public.id
    mySecurityGroupId          = aws_security_group.datapipeline.id
    myS3LogBucket              = aws_s3_bucket.logs.id
    myS3BackupsBucket          = aws_s3_bucket.backup.id
    myEFSId                    = aws_efs_file_system.efs.id
    myEFSSource                = aws_efs_mount_target.efs.dns_name
    myTopicArn                 = aws_cloudformation_stack.sns.outputs["TopicArn"]
    myImageId                  = data.aws_ami.amazon_linux.id
    myDataPipelineResourceRole = aws_iam_instance_profile.resource_role.name
    myDataPipelineRole         = aws_iam_role.datapipeline_role.name
    myKeyPair                  = aws_key_pair.key_pair.key_name
    myPeriod                   = var.datapipeline_config["period"]
    myExecutionTimeout         = var.datapipeline_config["timeout"]
    Tag                        = var.name
    myRegion                   = var.region
  }
}
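scripts/templates/datapipeline.yml itself is not reproduced in full here. Its Parameters section presumably declares each key passed above; a minimal sketch, assuming plain String types:

Parameters:
  myInstanceType:
    Type: String
  mySubnetId:
    Type: String
  myS3LogBucket:
    Type: String
  myS3BackupsBucket:
    Type: String
  # ...the remaining parameters (myEFSId, myEFSSource, myTopicArn,
  # myImageId, and so on) are declared the same way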
Data Pipeline script

    - Id: ShellCommandActivityObj
      Name: ShellCommandActivityObj
      Fields:
        - Key: type
          StringValue: ShellCommandActivity
        - Key: runsOn
          RefValue: EC2ResourceObj
        - Key: command
          StringValue: |
            source="$1"
            region="$2"
            destination="$3"
            sudo yum -y install nfs-utils
            [[ -d /backup ]] || sudo mkdir /backup
            if ! mount -l -t nfs4 | grep -qF "$source"; then
              sudo mount -t nfs -o nfsvers=4.1 -o rsize=1048576 -o wsize=1048576 -o timeo=600 -o retrans=2 -o hard "$source" /backup
            fi
            sudo aws s3 sync --delete --exact-timestamps /backup/ s3://$destination/
            backup_status="$?"
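            # aws s3 sync returns 2 when some files were skipped; treat that as success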
            if [ "backup_status" -eq "2"]; then
              backup_status="0"
            fi
            exit "$backup_status"
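The positional arguments $1, $2 and $3 consumed by that command (source, region, destination) come in through scriptArgument fields on the same activity; in my template they are wired up roughly like this (reconstructed snippet, the exact form is an approximation):

        - Key: scriptArgument
          StringValue: "#{myEFSSource}"
        - Key: scriptArgument
          StringValue: "#{myRegion}"
        - Key: scriptArgument
          StringValue: "#{myS3BackupsBucket}"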