Amazon Web Services: cross-account CloudTrail log delivery via CloudWatch and a Kinesis data stream


I am using a CloudWatch subscription to send CloudTrail logs from one account to another. The receiving account has a Kinesis data stream that receives the logs from the CloudWatch subscription and invokes the standard Lambda function provided by AWS to parse the logs and store them in an S3 bucket of the log-receiver account. The log files written to the S3 bucket look like this:

{"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AA:i-096379450e69ed082","arn":"arn:aws:sts::34502sdsdsd:assumed-role/RDSAccessRole/i-096379450e69ed082","accountId":"34502sdsdsd","accessKeyId":"ASIAVAVKXAXXXXXXXC","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAVAVKXAKDDDDD","arn":"arn:aws:iam::3450291sdsdsd:role/RDSAccessRole","accountId":"345029asasas","userName":"RDSAccessRole"},"webIdFederationData":{},"attributes":{"mfaAuthenticated":"false","creationDate":"2021-04-27T04:38:52Z"},"ec2RoleDelivery":"2.0"}},"eventTime":"2021-04-27T07:24:20Z","eventSource":"ssm.amazonaws.com","eventName":"ListInstanceAssociations","awsRegion":"us-east-1","sourceIPAddress":"188.208.227.188","userAgent":"aws-sdk-go/1.25.41 (go1.13.15; linux; amd64) amazon-ssm-agent/","requestParameters":{"instanceId":"i-096379450e69ed082","maxResults":20},"responseElements":null,"requestID":"a5c63b9d-aaed-4a3c-9b7d-a4f7c6b774ab","eventID":"70de51df-c6df-4a57-8c1e-0ffdeb5ac29d","readOnly":true,"resources":[{"accountId":"34502914asasas","ARN":"arn:aws:ec2:us-east-1:3450291asasas:instance/i-096379450e69ed082"}],"eventType":"AwsApiCall","managementEvent":true,"eventCategory":"Management","recipientAccountId":"345029149342"}
{"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AROAVAVKXAKPKZ25XXXX:AmazonMWAA-airflow","arn":"arn:aws:sts::3450291asasas:assumed-role/dev-1xdcfd/AmazonMWAA-airflow","accountId":"34502asasas","accessKeyId":"ASIAVAVKXAXXXXXXX","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAVAVKXAKPKZXXXXX","arn":"arn:aws:iam::345029asasas:role/service-role/AmazonMWAA-dlp-dev-1xdcfd","accountId":"3450291asasas","userName":"dlp-dev-1xdcfd"},"webIdFederationData":{},"attributes":{"mfaAuthenticated":"false","creationDate":"2021-04-27T07:04:08Z"}},"invokedBy":"airflow.amazonaws.com"},"eventTime":"2021-04-27T07:23:46Z","eventSource":"logs.amazonaws.com","eventName":"CreateLogStream","awsRegion":"us-east-1","sourceIPAddress":"airflow.amazonaws.com","userAgent":"airflow.amazonaws.com","errorCode":"ResourceAlreadyExistsException","errorMessage":"The specified log stream already exists","requestParameters":{"logStreamName":"scheduler.py.log","logGroupName":"dlp-dev-DAGProcessing"},"responseElements":null,"requestID":"40b48ef9-fc4b-4d1a-8fd1-4f2584aff1e9","eventID":"ef608d43-4765-4a3a-9c92-14ef35104697","readOnly":false,"eventType":"AwsApiCall","apiVersion":"20140328","managementEvent":true,"eventCategory":"Management","recipientAccountId":"3450291asasas"}
The problem with log lines in the format shown in the samples above is that Athena cannot parse them, so I am unable to query the logs with Athena.

I tried modifying the blueprint Lambda function so that it writes the log files as standard JSON, which should make it easier for Athena to parse them.

For example:
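The shape I am aiming for is sketched below, using a couple of fields taken from the sample events above (this snippet is an illustration only; the real events keep all of their fields):

    import json

    # Goal: one JSON document per delivered object, with all events of a batch
    # wrapped in a single "Records" array (CloudTrail's native file layout).
    events = [
        {"eventVersion": "1.08", "eventSource": "ssm.amazonaws.com",
         "eventName": "ListInstanceAssociations", "awsRegion": "us-east-1"},
        {"eventVersion": "1.08", "eventSource": "logs.amazonaws.com",
         "eventName": "CreateLogStream", "awsRegion": "us-east-1"},
    ]
    print(json.dumps({"Records": events}))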

The modified code of the blueprint Lambda function is shown below:


    import base64
    import json
    import gzip
    from io import BytesIO
    import boto3
    
    
    def transformLogEvent(log_event):
        # Kept from the original blueprint; not used by the modified processRecords below.
        return log_event['message'] + '\n'
    
    
    def processRecords(records):
        for r in records:
            # Each record carries a base64-encoded, gzip-compressed CloudWatch Logs payload.
            data = base64.b64decode(r['data'])
            striodata = BytesIO(data)
            with gzip.GzipFile(fileobj=striodata, mode='r') as f:
                data = json.loads(f.read())
    
            recId = r['recordId']
            
            if data['messageType'] == 'CONTROL_MESSAGE':
                yield {
                    'result': 'Dropped',
                    'recordId': recId
                }
            elif data['messageType'] == 'DATA_MESSAGE':
                # Each log event's 'message' is itself a JSON-encoded CloudTrail event,
                # so parse it and collect the whole batch into a single
                # {"Records": [...]} document (CloudTrail's native file layout).
                events = []
                for e in data['logEvents']:
                    events.append(json.loads(e['message']))
                result = {'Records': events}
                print(result)

                # Firehose expects the transformed data to be base64-encoded, and the
                # 6 MB limit applies to the serialized payload, not to the dict itself.
                payload = base64.b64encode((json.dumps(result) + '\n').encode('utf-8')).decode('utf-8')
                if len(payload) <= 6000000:
                    yield {
                        'data': payload,
                        'result': 'Ok',
                        'recordId': recId
                    }
                else:
                    yield {
                        'result': 'ProcessingFailed',
                        'recordId': recId
                    }
            else:
                yield {
                    'result': 'ProcessingFailed',
                    'recordId': recId
                }
    
    
    def putRecordsToFirehoseStream(streamName, records, client, attemptsMade, maxAttempts):
        failedRecords = []
        codes = []
        errMsg = ''
        # if put_record_batch throws for whatever reason, response['xx'] will error out, adding a check for a valid
        # response will prevent this
        response = None
        try:
            response = client.put_record_batch(DeliveryStreamName=streamName, Records=records)
        except Exception as e:
            failedRecords = records
            errMsg = str(e)
    
        # if there are no failedRecords (put_record_batch succeeded), iterate over the response to gather results
        if not failedRecords and response and response['FailedPutCount'] > 0:
            for idx, res in enumerate(response['RequestResponses']):
                # (if the result does not have a key 'ErrorCode' OR if it does and is empty) => we do not need to re-ingest
                if 'ErrorCode' not in res or not res['ErrorCode']:
                    continue
    
                codes.append(res['ErrorCode'])
                failedRecords.append(records[idx])
    
            errMsg = 'Individual error codes: ' + ','.join(codes)
    
        if len(failedRecords) > 0:
            if attemptsMade + 1 < maxAttempts:
                print('Some records failed while calling PutRecordBatch to Firehose stream, retrying. %s' % (errMsg))
                putRecordsToFirehoseStream(streamName, failedRecords, client, attemptsMade + 1, maxAttempts)
            else:
                raise RuntimeError('Could not put records after %s attempts. %s' % (str(maxAttempts), errMsg))
    
    
    def putRecordsToKinesisStream(streamName, records, client, attemptsMade, maxAttempts):
        failedRecords = []
        codes = []
        errMsg = ''
        # if put_records throws for whatever reason, response['xx'] will error out, adding a check for a valid
        # response will prevent this
        response = None
        try:
            response = client.put_records(StreamName=streamName, Records=records)
        except Exception as e:
            failedRecords = records
            errMsg = str(e)
    
        # if there are no failedRecords (put_record_batch succeeded), iterate over the response to gather results
        if not failedRecords and response and response['FailedRecordCount'] > 0:
            for idx, res in enumerate(response['Records']):
                # (if the result does not have a key 'ErrorCode' OR if it does and is empty) => we do not need to re-ingest
                if 'ErrorCode' not in res or not res['ErrorCode']:
                    continue
    
                codes.append(res['ErrorCode'])
                failedRecords.append(records[idx])
    
            errMsg = 'Individual error codes: ' + ','.join(codes)
    
        if len(failedRecords) > 0:
            if attemptsMade + 1 < maxAttempts:
                print('Some records failed while calling PutRecords to Kinesis stream, retrying. %s' % (errMsg))
                putRecordsToKinesisStream(streamName, failedRecords, client, attemptsMade + 1, maxAttempts)
            else:
                raise RuntimeError('Could not put records after %s attempts. %s' % (str(maxAttempts), errMsg))
    
    
    def createReingestionRecord(isSas, originalRecord):
        if isSas:
            return {'data': base64.b64decode(originalRecord['data']), 'partitionKey': originalRecord['kinesisRecordMetadata']['partitionKey']}
        else:
            return {'data': base64.b64decode(originalRecord['data'])}
    
    
    def getReingestionRecord(isSas, reIngestionRecord):
        if isSas:
            return {'Data': reIngestionRecord['data'], 'PartitionKey': reIngestionRecord['partitionKey']}
        else:
            return {'Data': reIngestionRecord['data']}
    
    
    def lambda_handler(event, context):
        print(event)
        # 'sourceKinesisStreamArn' is present when the delivery stream reads from a Kinesis
        # data stream; oversized records are re-ingested into that stream (otherwise they go
        # back into the delivery stream itself).
        isSas = 'sourceKinesisStreamArn' in event
        streamARN = event['sourceKinesisStreamArn'] if isSas else event['deliveryStreamArn']
        region = streamARN.split(':')[3]
        streamName = streamARN.split('/')[1]
        records = list(processRecords(event['records']))
        projectedSize = 0
        dataByRecordId = {rec['recordId']: createReingestionRecord(isSas, rec) for rec in event['records']}
        putRecordBatches = []
        recordsToReingest = []
        totalRecordsToBeReingested = 0
    
        for idx, rec in enumerate(records):
            if rec['result'] != 'Ok':
                continue
            projectedSize += len(rec['data']) + len(rec['recordId'])
            # 6000000 instead of 6291456 to leave ample headroom for the stuff we didn't account for
            if projectedSize > 6000000:
                totalRecordsToBeReingested += 1
                recordsToReingest.append(
                    getReingestionRecord(isSas, dataByRecordId[rec['recordId']])
                )
                records[idx]['result'] = 'Dropped'
                del(records[idx]['data'])
    
            # split out the record batches into multiple groups, 500 records at max per group
            if len(recordsToReingest) == 500:
                putRecordBatches.append(recordsToReingest)
                recordsToReingest = []
    
        if len(recordsToReingest) > 0:
            # add the last batch
            putRecordBatches.append(recordsToReingest)
    
        # iterate and call putRecordBatch for each group
        recordsReingestedSoFar = 0
        if len(putRecordBatches) > 0:
            client = boto3.client('kinesis', region_name=region) if isSas else boto3.client('firehose', region_name=region)
            for recordBatch in putRecordBatches:
                if isSas:
                    putRecordsToKinesisStream(streamName, recordBatch, client, attemptsMade=0, maxAttempts=20)
                else:
                    putRecordsToFirehoseStream(streamName, recordBatch, client, attemptsMade=0, maxAttempts=20)
                recordsReingestedSoFar += len(recordBatch)
                print('Reingested %d/%d records out of %d' % (recordsReingestedSoFar, totalRecordsToBeReingested, len(event['records'])))
        else:
            print('No records to be reingested')
    
        return {"records": records}


Any help with this would be greatly appreciated.

Do you face any issue when querying the samples you provided in the question? What exactly do you need with regard to Athena handling the JSON files?

@prabhakarredy Yes, Athena is unable to parse the log lines mentioned at the beginning of my question. To overcome this, I modified the Lambda code to write the log files as JSON objects so that they can be parsed with Athena and the data can be queried.

@Prabhakarredy If there is a way to parse the log lines at the top with Athena, I am fine with that as well. The only reason for doing the transformation is that Athena cannot handle them in their current format.

Have you tried the DDL here? If you haven't changed anything, you should be able to query the logs with that definition. If you get an error with that DDL, please update your question with it.

@prabhakarredy Yes, I have tried it. It does create the table, but when I query it, no results are returned.
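For reference, a minimal sketch of how such a table could be created and queried with boto3, assuming the transformed objects follow CloudTrail's native {"Records": [...]} layout; the bucket, prefix, database name, and result location below are placeholders.

    import time
    import boto3

    athena = boto3.client('athena', region_name='us-east-1')

    # Placeholder locations in the log-receiver account.
    LOG_LOCATION = 's3://my-cloudtrail-sink-bucket/cloudtrail/'
    RESULT_LOCATION = 's3://my-cloudtrail-sink-bucket/athena-results/'

    # The CloudTrail SerDe expects objects in the native {"Records": [...]} layout;
    # only a handful of columns are declared here for brevity.
    ddl = f"""
    CREATE EXTERNAL TABLE IF NOT EXISTS cloudtrail_logs (
        eventversion STRING,
        eventtime STRING,
        eventsource STRING,
        eventname STRING,
        awsregion STRING,
        sourceipaddress STRING,
        useragent STRING,
        errorcode STRING,
        errormessage STRING,
        requestid STRING,
        eventid STRING,
        eventtype STRING,
        recipientaccountid STRING
    )
    ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
    STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
    LOCATION '{LOG_LOCATION}'
    """

    def run(query):
        # Start the query and poll until Athena reports a terminal state.
        qid = athena.start_query_execution(
            QueryString=query,
            QueryExecutionContext={'Database': 'default'},
            ResultConfiguration={'OutputLocation': RESULT_LOCATION},
        )['QueryExecutionId']
        while True:
            state = athena.get_query_execution(QueryExecutionId=qid)['QueryExecution']['Status']['State']
            if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
                return state
            time.sleep(1)

    print(run(ddl))
    print(run('SELECT eventname, eventsource, eventtime FROM cloudtrail_logs LIMIT 10'))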
