Notes on S3 Server Access Logging Settings

Here are some notes on setting up S3 server access logging.
The content relates to monitoring and troubleshooting, which falls within the scope of the AWS DVA (Developer Associate) exam.

S3 provides a variety of monitoring options, and server access logging is one of the logging features it offers.

Server access logging provides detailed records for the requests that are made to an Amazon S3 bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.

Enabling Amazon S3 server access logging

Enabling the logging feature involves several parameters, one of which is the log delivery destination. When setting the destination for server access logging, note that the logs must not be delivered to the same bucket where logging is enabled.

Don’t push server access logs about a bucket into the same bucket. If you configured your server access logs this way, then there would be an infinite loop of logs. This is because when you write a log file to a bucket, the bucket is also accessed, which then generates another log. A log file would be generated for every log written to the bucket, which creates a loop.

Can I push server access logs about an Amazon S3 bucket into the same bucket?
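To build intuition for why this loops, here is a toy shell model (illustrative only, not real S3 behavior; real S3 batches log records, so actual growth rates differ): each round, the pending requests are flushed to the bucket as log objects, and those writes are themselves new requests that must be logged in the next round, so the object count grows without bound.

```shell
# Toy model of the logging loop (illustrative only, not real S3 behavior).
# Each round, pending requests are written to the bucket as log objects,
# and those log writes are themselves requests that get logged next round,
# so the bucket's object count grows without bound.
pending=1   # the initial test upload
objects=1
for round in 1 2 3 4 5; do
  objects=$((objects + pending))   # pending requests recorded as log objects
  # the log writes just made become the pending requests for the next round,
  # so 'pending' never drains to zero (here it stays at 1 per round)
done
echo "objects after 5 rounds: $objects"
# prints: objects after 5 rounds: 6
```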

This time, we will deliberately configure the logs to be delivered to the bucket where logging is enabled, and check what happens in that case.

Environment

Diagram of notes on S3 Server Access Logging Settings.

Create an S3 bucket for each pattern.

For the normal pattern, create two buckets: one where the server access logging feature is enabled (normal-bucket), and another that receives the logs (log-bucket).

In the NG pattern, a single bucket (error-bucket) is created. This bucket will be configured to deliver logs to itself with the logging feature enabled.

CloudFormation template files

We will build the above configuration using CloudFormation.
The CloudFormation templates are available at the following URL:

https://github.com/awstut-an-r/awstut-dva/tree/main/05/001

Template file points

We will cover the key points of each template file to configure this environment.

To enable server access logging, specify the destination bucket

First, create a bucket to enable the logging function in the normal pattern.

Resources:
  NormalBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${Prefix}-normal-bucket
      AccessControl: Private
      LoggingConfiguration:
        DestinationBucketName: !Ref LogBucket
        LogFilePrefix: test-normal-log

The LoggingConfiguration property configures server access logging. Its DestinationBucketName property specifies the bucket to which the logs will be delivered. Here we use the intrinsic function Fn::Ref to reference the log bucket defined next.

Specify “LogDeliveryWrite” in ACL for bucket to receive log delivery

Next, define the bucket that will receive the logs delivered from the bucket described above.

Resources:
  LogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${Prefix}-log-bucket
      AccessControl: LogDeliveryWrite

The AccessControl property configures the bucket's access control list (ACL). An ACL is an access control mechanism for buckets and objects.

Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.

Access control list (ACL) overview

AWS provides a set of predefined ACLs, known as canned ACLs. Using a canned ACL is a shortcut to granting a commonly needed set of permissions. Among them is one intended for buckets that receive server access log delivery, called "log-delivery-write".

The LogDelivery group gets WRITE and READ_ACP permissions on the bucket.

Canned ACL

According to Enabling Amazon S3 server access logging, applying this ACL is equivalent to carrying out the following steps.

  • Grant the Log Delivery group permission to write logs to the bucket (WRITE).
  • Grant the Log Delivery group permission to read the ACL attached to the bucket (READ_ACP).

In CloudFormation, this canned ACL is applied by specifying "LogDeliveryWrite" for the AccessControl property.

Use the AWS CLI to deliver logs to the bucket itself

Finally, create a bucket for the NG pattern.

Resources:
  ErrorBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${Prefix}-error-bucket
      AccessControl: LogDeliveryWrite

For the ACL, as mentioned earlier, specify “LogDeliveryWrite” for the AccessControl property.

The LoggingConfiguration property, however, is not defined in the template. To deliver logs to the bucket itself, the resource would have to reference itself with the intrinsic function Fn::Ref, but CloudFormation treats such a self-reference as a circular dependency and raises an error.
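For illustration, this is roughly the self-referencing definition that CloudFormation rejects (a sketch, not deployable: stack creation fails with a circular dependency error):

```yaml
Resources:
  ErrorBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${Prefix}-error-bucket
      AccessControl: LogDeliveryWrite
      LoggingConfiguration:
        # !Ref pointing at the resource itself is reported as a
        # circular dependency when the template is processed
        DestinationBucketName: !Ref ErrorBucket
        LogFilePrefix: test-error-log
```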

This can be configured manually using the AWS CLI as described below.

Architecting

We will use CloudFormation to build this environment and check its actual behavior.

Create CloudFormation stack and check resources

Create a CloudFormation stack.
For information on how to create stacks and check each stack, please refer to the following page

CloudFormation's nested stack: How to build an environment with a nested CloudFormation stack

Next, we will review the resources in the created CloudFormation stack. The key information is summarized below.

  • Name of the bucket where the normal pattern logging feature is enabled: dva-05-001-normal-bucket
  • Name of the bucket that receives log delivery for the normal pattern: dva-05-001-log-bucket
  • Name of the bucket to deliver logs to itself for NG pattern: dva-05-001-error-bucket

Checking server access logging setting status

Check the log settings for normal-bucket from the AWS Management Console.

S3 Server Access logging enabled.

You will see that the log settings are enabled.

We will also check the details of the log settings.

Specify a different bucket as the destination for S3 Server Access logging.

The log-bucket is specified in the LoggingEnabled value.

Next, check the configuration of the log-bucket that will receive log delivery.

The S3 log delivery group has been authorized in the ACL.

As mentioned earlier, we can see that the "WRITE" and "READ_ACP" permissions have been granted to the log delivery group (http://acs.amazonaws.com/groups/s3/LogDelivery).

Check normal behavior of S3 server access logging

First, we will check the behavior of the normal pattern in the server access logging.
We will generate the log by placing a test file in the normal-bucket.

$ aws s3 cp test.txt s3://dva-05-001-normal-bucket
upload: test.txt to s3://dva-05-001-normal-bucket/test.txt
Set up a file for verification.

After waiting a while, log files are generated in the previously empty log-bucket.

Logs are delivered by Server access logging.

Download one and check the contents.

cd3b764ff044236dfe910b663c273b1f98dd3299f4d524a32909f70581c332fa dva-05-001-normal-bucket [09/Jan/2022:11:56:02 +0000] 172.18.95.21 cd3b764ff044236dfe910b663c273b1f98dd3299f4d524a32909f70581c332fa A49XFDPQ95407VX8 REST.PUT.LOGGING_STATUS - "PUT /?logging HTTP/1.1" 200 - - - 350 - "-" "AWS CloudFormation, aws-internal/3" - fW2gj4UbfzIKmeLUVkSvzO4qp9hMDn2iLvWJ6b8FzE0Us8VQOYW0nJ1mqOn8NLF16ClLaG2nYNo= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader dva-05-001-normal-bucket.s3-ap-northeast-1.amazonaws.com TLSv1.2 -

You can read more about how to read each field in Amazon S3 server access log format. Looking at the operation field of this record, it reads REST.PUT.LOGGING_STATUS: this particular entry captures the PUT request that configured logging (note the "AWS CloudFormation" user agent). The upload of the test file itself is recorded in a separate entry whose operation is REST.PUT.OBJECT.
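As a quick way to scan downloaded log files, a short awk one-liner can pull out individual fields (a sketch; it relies on the documented space-delimited format, where the bracketed timestamp spans two whitespace-separated fields, so the bucket is field 2 and the operation is field 8):

```shell
# Extract the bucket (field 2) and operation (field 8) from a server access
# log record. The bracketed timestamp "[09/Jan/2022:11:56:02 +0000]" counts
# as two whitespace-separated fields under awk's default splitting.
record='ownerhash dva-05-001-normal-bucket [09/Jan/2022:11:56:02 +0000] 172.18.95.21 requesterhash A49XFDPQ95407VX8 REST.PUT.LOGGING_STATUS - "PUT /?logging HTTP/1.1" 200'
echo "$record" | awk '{ print $2, $8 }'
# prints: dva-05-001-normal-bucket REST.PUT.LOGGING_STATUS
```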

Check behavior of S3 server access logging when it loops

Let's move on to checking the behavior in the abnormal case.
First, we will configure the log delivery settings that were left out of the CloudFormation template.
Before doing so, let's check the current log delivery settings.

Server access logging is disabled by default.

Nothing has been configured.

Looking at the put-bucket-logging reference, configuring log delivery from the AWS CLI requires specifying the destination bucket in JSON format. This time, we prepare the JSON data as a text file and pass it as a parameter.

$ cat logging.json
{
  "LoggingEnabled": {
    "TargetBucket": "dva-05-001-error-bucket",
    "TargetPrefix": "test-error-log",
    "TargetGrants": [
      {
        "Grantee": {
          "Type": "AmazonCustomerByEmail",
          "EmailAddress": "[account-mail-address]"
         },
        "Permission": "FULL_CONTROL"
      }
    ]
  }
}

$ aws s3api put-bucket-logging \
--bucket dva-05-001-error-bucket \
--bucket-logging-status file://logging.json
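Since put-bucket-logging fails on malformed JSON, it can save a round trip to syntax-check the file locally first. A minimal sketch using Python's standard json.tool module (TargetGrants omitted here for brevity):

```shell
# Write a minimal logging configuration and syntax-check it locally
# before passing it to `aws s3api put-bucket-logging`.
cat > logging.json <<'EOF'
{
  "LoggingEnabled": {
    "TargetBucket": "dva-05-001-error-bucket",
    "TargetPrefix": "test-error-log"
  }
}
EOF
python3 -m json.tool logging.json > /dev/null && echo "logging.json is valid JSON"
# prints: logging.json is valid JSON
```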

After executing the command, check the log settings again.

Specify the same bucket as the destination for Server access logging.

The source and destination of the logs are now set to be the same.

We are now ready. As before, we will generate logs by uploading a test file.

$ aws s3 cp test.txt s3://dva-05-001-error-bucket
upload: test.txt to s3://dva-05-001-error-bucket/test.txt
Set up a file for verification.

Immediately after uploading the test file, nothing happens. After waiting a while, however, you can see that a large number of log files have been generated in the bucket.

Loops are generated and a large amount of logs are generated.

As mentioned earlier, this is because a loop has been created: logs are generated for the log-writing process itself. Note that in this situation, hundreds of GB of log files can be generated in a short period of time, which may result in high charges.

Summary

We examined how to set up the S3 server access logging feature and how it behaves in the normal and NG patterns.
In the NG pattern, we confirmed that logs can be generated in a loop, resulting in a large number of files and unintentionally high fees.