Connect to RDS from EC2 (Linux)/Lambda using IAM authentication


One of the features provided by RDS is IAM authentication.

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MariaDB, MySQL, and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.

IAM database authentication for MariaDB, MySQL, and PostgreSQL

This page will review how to access RDS from EC2 instances and Lambda functions with IAM authentication.

Environment

Diagram of connecting to RDS from EC2(Linux) / Lambda using IAM authentication.

Create a DB instance.
Enable IAM authentication.
The engine is MySQL.

Create an EC2 instance and Lambda function on a private subnet.
The EC2 instance runs the latest version of Amazon Linux 2, and the Lambda function runs on Python 3.8.
Both resources will be used as clients connecting to the DB instance with IAM authentication.

Create two types of Lambda functions to be associated with the CloudFormation custom resource.
The first type creates a Lambda layer for the Lambda function.
The second type initializes the DB instance.
By associating each function with a custom resource, the functions run automatically when the CloudFormation stack is created.
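The dispatch pattern shared by these custom-resource functions can be sketched as follows. This is a simplified stand-in: the real functions (shown later) report their result with the cfnresponse module, which is only available inside Lambda, so this stub just returns a status string.

```python
# Minimal sketch of the custom-resource handler pattern used below.
# CloudFormation invokes the function with RequestType Create/Update/Delete;
# one-time setup work runs only on Create.
def lambda_handler(event, context):
    try:
        if event['RequestType'] == 'Create':
            pass  # one-time setup work (build a layer package, initialize the DB, ...)
        return 'SUCCESS'  # the real code calls cfnresponse.send(..., cfnresponse.SUCCESS, ...)
    except Exception as e:
        print(e)
        return 'FAILED'   # the real code calls cfnresponse.send(..., cfnresponse.FAILED, ...)

print(lambda_handler({'RequestType': 'Delete'}, None))  # no-op on stack deletion
```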

CloudFormation template files

The above configuration is built with CloudFormation.
The CloudFormation templates are available at the following URL:

https://github.com/awstut-an-r/awstut-fa/tree/main/128

Explanation of key points of template files

RDS

DB Instance

Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: !Ref DBAllocatedStorage
      AvailabilityZone: !Sub "${AWS::Region}${AvailabilityZone}"
      EnableIAMDatabaseAuthentication: true
      DBInstanceClass: !Ref DBInstanceClass
      DBInstanceIdentifier: !Sub "${Prefix}-dbinstance"
      DBName: !Ref DBName
      DBSubnetGroupName: !Ref DBSubnetGroup
      Engine: !Ref DBEngine
      EngineVersion: !Ref DBEngineVersion
      MasterUsername: !Ref DBMasterUsername
      MasterUserPassword: !Ref DBMasterUserPassword
      VPCSecurityGroups:
        - !Ref DBSecurityGroup

The key point is the EnableIAMDatabaseAuthentication property.
Setting this property to true enables IAM authentication on the DB instance.

Initialization process using Lambda function

Resources:
  SQLParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: !Sub "${Prefix}-customresource-03"
      Type: String
      Value: !Sub |
        CREATE USER ${DBIamUsername} IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
        GRANT SELECT ON *.* TO '${DBIamUsername}'@'%';
        USE ${DBName};
        CREATE TABLE ${DBTableName} (id INT UNSIGNED AUTO_INCREMENT, name VARCHAR(30), PRIMARY KEY(id));
        INSERT INTO ${DBTableName} (name) VALUES ("Mercury");
        INSERT INTO ${DBTableName} (name) VALUES ("Venus");
        INSERT INTO ${DBTableName} (name) VALUES ("Earth");
        INSERT INTO ${DBTableName} (name) VALUES ("Mars");
        INSERT INTO ${DBTableName} (name) VALUES ("Jupiter");
        INSERT INTO ${DBTableName} (name) VALUES ("Saturn");
        INSERT INTO ${DBTableName} (name) VALUES ("Uranus");
        INSERT INTO ${DBTableName} (name) VALUES ("Neptune");

  Function:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - !Ref Architecture
      Environment:
        Variables:
          DB_ENDPOINT_ADDRESS: !Ref DBInstanceEndpointAddress
          DB_ENDPOINT_PORT: !Ref MySQLPort
          DB_PASSWORD: !Ref DBMasterUserPassword
          DB_USER: !Ref DBMasterUsername
          REGION: !Ref AWS::Region
          SQL_PARAMETER: !Ref SQLParameter
      Code:
        ZipFile: |
          import boto3
          import cfnresponse
          import mysql.connector
          import os

          db_endpoint_port = int(os.environ['DB_ENDPOINT_PORT'])
          db_endpoint_address = os.environ['DB_ENDPOINT_ADDRESS']
          db_password = os.environ['DB_PASSWORD']
          db_user = os.environ['DB_USER']
          region = os.environ['REGION']
          sql_parameter = os.environ['SQL_PARAMETER']

          CREATE = 'Create'
          response_data = {}

          def lambda_handler(event, context):
            try:
              if event['RequestType'] == CREATE:
                client = boto3.client('ssm', region_name=region)
                response = client.get_parameter(Name=sql_parameter)
                sql_statements = response['Parameter']['Value']

                conn = mysql.connector.connect(
                  host=db_endpoint_address,
                  port=db_endpoint_port,
                  user=db_user,
                  password=db_password
                  )
                cur = conn.cursor()

                for sql in sql_statements.splitlines():
                  print(sql)
                  cur.execute(sql)

                cur.close()
                conn.commit()

              cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data)

            except Exception as e:
              print(e)
              cfnresponse.send(event, context, cfnresponse.FAILED, response_data)
      FunctionName: !Sub "${Prefix}-customresource-03"
      Handler: !Ref Handler
      Layers:
        - !Ref LambdaLayer
      Runtime: !Ref Runtime
      Role: !GetAtt FunctionRole.Arn
      Timeout: !Ref Timeout
      VpcConfig:
        SecurityGroupIds:
          - !Ref FunctionSecurityGroup
        SubnetIds:
          - !Ref FunctionSubnet

The code executed by the Lambda function is defined inline, in the ZipFile property.
For more information, please see the following page.

Related reading:
3 patterns to create Lambda with CloudFormation (S3/Inline/Container)

Place this function in the VPC to access the DB instance and perform the DB initialization process.
For details, please refer to the following page.

Related reading:
Initialize RDS DB with CFN Custom Resource

The SQL statement to be executed is stored in the SSM parameter store, which is then referenced by the function.
The first two lines of the SQL statement are particularly important.

The first line is a command to create a user to access the DB instance through IAM authentication.
The second line is a command to grant the created user permission to operate the database.

The remaining commands insert the test records.
The following page was used as a reference.

https://aws.amazon.com/getting-started/hands-on/boosting-mysql-database-performance-with-amazon-elasticache-for-redis/module-three/
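The function above executes this parameter value one line at a time with splitlines(), so each SQL statement must fit on a single line. A minimal sketch of that splitting (the SQL text is an abbreviated stand-in for the parameter value):

```python
# Abbreviated stand-in for the SSM parameter value defined above.
sql_text = """CREATE USER iamuser IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
GRANT SELECT ON *.* TO 'iamuser'@'%';
USE tutorial;
"""

# One cur.execute() call per non-empty line, mirroring the Lambda code above.
statements = [line for line in sql_text.splitlines() if line.strip()]
for statement in statements:
    print(statement)
```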

EC2 Instance

Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      IamInstanceProfile: !Ref InstanceProfile
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
      NetworkInterfaces:
        - DeviceIndex: 0
          SubnetId: !Ref InstanceSubnet
          GroupSet:
            - !Ref InstanceSecurityGroup
      UserData: !Base64 |
        #!/bin/bash -xe
        yum update -y
        yum install -y mariadb
        curl -OL https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

Define the initialization process for the instance with user data.
For more information on the initialization process, please see the following page.

Related reading:
Four ways to initialize Linux instance

The key points are the third and fourth lines of the user data script (the yum install and curl commands).

The former command installs the MariaDB package, which provides a MySQL-compatible client.
For information on how to connect to various RDSs in Amazon Linux 2, see the following page.

Related reading:
Amazon Linux 2 How to Connect to RDS – ALL Engines

The latter command downloads the certificate bundle needed for SSL communication, which IAM authentication requires.

Network traffic to and from the database is encrypted using Secure Socket Layer (SSL) or Transport Layer Security (TLS).

IAM database authentication for MariaDB, MySQL, and PostgreSQL

And for SSL communication, the CA certificate must be available on the client side.

SSL/TLS connections provide one layer of security by encrypting data that moves between your client and a DB instance. Using a server certificate provides an extra layer of security by validating that the connection is being made to an Amazon RDS DB instance. It does so by checking the server certificate that is automatically installed on all DB instances that you provision.

Using SSL/TLS to encrypt a connection to a DB instance

The available certificate bundles are summarized on the following page.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html

This time, global-bundle.pem, a certificate bundle for all regions, is used.
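global-bundle.pem simply concatenates the PEM-encoded root certificates for every region. As a rough sketch, the entries in such a bundle can be counted by looking for the PEM header (the two-entry bundle below is dummy placeholder text, not real certificate data):

```python
# Dummy placeholder standing in for a downloaded certificate bundle;
# a real global-bundle.pem concatenates many PEM-encoded CA certificates.
bundle = """-----BEGIN CERTIFICATE-----
MIIB(placeholder)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIC(placeholder)
-----END CERTIFICATE-----
"""

num_certs = bundle.count('-----BEGIN CERTIFICATE-----')
print(num_certs)  # → 2
```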

The following is the IAM role for this instance.

Resources:
  InstanceRole:
    Type: AWS::IAM::Role
    DeletionPolicy: Delete
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              Service:
                - ec2.amazonaws.com
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
      Policies:
        - PolicyName: RDSIamAuthenticationPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - rds-db:connect
                Resource:
                  - !Sub "arn:aws:rds-db:${AWS::Region}:${AWS::AccountId}:dbuser:${DBInstanceResourceId}/${DBIamUsername}"

Attach the AWS managed policy AmazonSSMManagedInstanceCore so that you can connect to this instance with SSM Session Manager during the operation check described below.

Refer to the following page to configure inline policies for IAM authentication.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html

Specify rds-db:connect as the action and the username in the DB instance as the resource.
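As a concrete illustration, the resource ARN takes the form arn:aws:rds-db:&lt;region&gt;:&lt;account-id&gt;:dbuser:&lt;DbiResourceId&gt;/&lt;db-user-name&gt;. Note that it uses the DB instance's resource ID (shown in the RDS console), not its instance identifier. The values below are placeholders:

```python
# Placeholder values for illustration only.
region = 'ap-northeast-1'
account_id = '123456789012'
dbi_resource_id = 'db-ABCDEFGHIJKLMNOPQRSTUVWXYZ'  # resource ID, not the instance identifier
db_username = 'iamuser'

arn = f'arn:aws:rds-db:{region}:{account_id}:dbuser:{dbi_resource_id}/{db_username}'
print(arn)
```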

Lambda Functions

Resources:
  Function:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - !Ref Architecture
      Environment:
        Variables:
          DB_ENDPOINT_ADDRESS: !Ref DBInstanceEndpointAddress
          DB_ENDPOINT_PORT: !Ref MySQLPort
          DB_NAME: !Ref DBName
          DB_TABLENAME: !Ref DBTableName
          DB_USER: !Ref DBIamUsername
          REGION: !Ref AWS::Region
          SSL_CERTIFICATE: /opt/python/global-bundle.pem
      Code:
        ZipFile: |
          import boto3
          import datetime
          import json
          import mysql.connector
          import os

          db_endpoint_address = os.environ['DB_ENDPOINT_ADDRESS']
          db_endpoint_port = int(os.environ['DB_ENDPOINT_PORT'])
          db_name = os.environ['DB_NAME']
          db_tablename = os.environ['DB_TABLENAME']
          db_user = os.environ['DB_USER']
          region = os.environ['REGION']
          ssl_certificate = os.environ['SSL_CERTIFICATE']

          os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

          client = boto3.client('rds', region_name=region)

          def lambda_handler(event, context):
              token = client.generate_db_auth_token(
                  DBHostname=db_endpoint_address,
                  Port=db_endpoint_port,
                  DBUsername=db_user,
                  Region=region)

              conn = mysql.connector.connect(
                  host=db_endpoint_address,
                  user=db_user,
                  password=token,
                  port=db_endpoint_port,
                  database=db_name,
                  ssl_ca=ssl_certificate)

              cur = conn.cursor()
              read_sql = 'select * from {tbl};'.format(tbl=db_tablename)
              cur.execute(read_sql)
              content = [record for record in cur]

              cur.close()
              conn.close()

              return {
                  'statusCode': 200,
                  'body': json.dumps(content, indent=2)
              }
      FunctionName: !Sub "${Prefix}-function"
      Handler: !Ref Handler
      Layers:
        - !Ref LambdaLayer1
        - !Ref LambdaLayer2
      Runtime: !Ref Runtime
      Role: !GetAtt FunctionRole.Arn
      Timeout: !Ref Timeout
      VpcConfig:
        SecurityGroupIds:
          - !Ref FunctionSecurityGroup
        SubnetIds:
          - !Ref FunctionSubnet

The code executed by the Lambda function is defined inline, in the ZipFile property.
For more information, please see the following page.

Related reading:
3 patterns to create Lambda with CloudFormation (S3/Inline/Container)

In this configuration, the Lambda function will be placed in the VPC.
In the VpcConfig property, specify the subnet where the function will be placed and the security group to apply.

The code to be executed is as follows.

  1. Retrieve the environment variables defined in the CloudFormation template by accessing os.environ.
  2. Create a client object for RDS in Boto3.
  3. Execute the generate_db_auth_token method to obtain a token for IAM authentication.
  4. Execute mysql.connector.connect method to connect to the DB instance with IAM authentication.
  5. Execute a SELECT statement to retrieve all data in the table.

The key point is the Lambda layer.
It is automatically created by the Lambda function associated with the CloudFormation custom resource described below.
In this case, we will create two layers and associate them with this function.
One layer is for the MySQL connection package and the other layer is for the certificate for SSL communication.

The location of the certificate for SSL communication is also a key point.

If your Lambda function includes layers, Lambda extracts the layer contents into the /opt directory in the function execution environment.

Accessing layer content from your function

And since the runtime environment for this function is Python 3.8, the certificate will be placed in the /opt/python directory.
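The expected package layout can be checked locally. The sketch below builds a layer-style zip the same way the custom-resource function does (shutil.make_archive with base_dir='python'); because the certificate sits under the top-level python/ directory, Lambda extracts it to /opt/python/global-bundle.pem:

```python
import os
import shutil
import tempfile
import zipfile

work_dir = tempfile.mkdtemp()
package_dir = os.path.join(work_dir, 'python')
os.makedirs(package_dir)

# Dummy stand-in for the downloaded certificate bundle.
with open(os.path.join(package_dir, 'global-bundle.pem'), 'w') as f:
    f.write('(certificate data)\n')

# Same call pattern as the custom-resource function shown later.
archive = shutil.make_archive(
    os.path.join(work_dir, 'layer'),
    format='zip',
    root_dir=work_dir,
    base_dir='python',
)

with zipfile.ZipFile(archive) as z:
    names = z.namelist()

# 'python/global-bundle.pem' in the zip -> /opt/python/global-bundle.pem at runtime.
print(names)
```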

The following is the IAM role for this function.

Resources:
  FunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              Service:
                - lambda.amazonaws.com
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      Policies:
        - PolicyName: RDSIamAuthenticationPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - rds-db:connect
                Resource:
                  - !Sub "arn:aws:rds-db:${AWS::Region}:${AWS::AccountId}:dbuser:${DBInstanceResourceId}/${DBIamUsername}"

Attach the AWS managed policy AWSLambdaVPCAccessExecutionRole so that this function can run inside the VPC.

In addition, as with EC2 instances, configure settings for IAM authentication.

Enable the Function URL for this function.

Resources:
  FunctionUrl:
    Type: AWS::Lambda::Url
    Properties:
      AuthType: NONE
      TargetFunctionArn: !GetAtt Function.Arn

  FunctionUrlPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunctionUrl
      FunctionName: !GetAtt Function.Arn
      FunctionUrlAuthType: NONE
      Principal: "*"

Please check the following pages for details.

Related reading:
Lambda Function URL by CFN – Auth Type: NONE

Lambda Layers

Layer for Python packages
Resources:
  RequirementsParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: !Sub "${Prefix}-customresource-01"
      Type: String
      Value: |
        mysql-connector-python

  LambdaLayer:
    Type: AWS::Lambda::LayerVersion
    DependsOn:
      - CustomResource
    Properties:
      CompatibleArchitectures:
        - !Ref Architecture
      CompatibleRuntimes:
        - !Ref Runtime
      Content:
        S3Bucket: !Ref CodeS3Bucket
        S3Key: !Ref LayerS3Key
      Description: !Ref Prefix
      LayerName: !Sub "${Prefix}-customresource-01"

  CustomResource:
    Type: Custom::CustomResource
    Properties:
      ServiceToken: !GetAtt Function.Arn

  Function:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - !Ref Architecture
      Environment:
        Variables:
          LAYER_PACKAGE: !Ref LayerPackage
          REGION: !Ref AWS::Region
          REQUIREMENTS_PARAMETER: !Ref RequirementsParameter
          S3_BUCKET: !Ref CodeS3Bucket
          S3_BUCKET_FOLDER: !Ref Prefix
      Code:
        ZipFile: |
          import boto3
          import cfnresponse
          import os
          import pip
          import shutil
          import subprocess

          layer_package = os.environ['LAYER_PACKAGE']
          region = os.environ['REGION']
          requirements_parameter = os.environ['REQUIREMENTS_PARAMETER']
          s3_bucket = os.environ['S3_BUCKET']
          s3_bucket_folder = os.environ['S3_BUCKET_FOLDER']

          CREATE = 'Create'
          response_data = {}

          work_dir = '/tmp'
          requirements_file = 'requirements.txt'
          package_dir = 'python'

          requirements_path = os.path.join(work_dir, requirements_file)
          package_dir_path = os.path.join(work_dir, package_dir)
          layer_package_path = os.path.join(
            work_dir,
            layer_package
            )

          def lambda_handler(event, context):
            try:
              if event['RequestType'] == CREATE:
                ssm_client = boto3.client('ssm', region_name=region)
                ssm_response = ssm_client.get_parameter(Name=requirements_parameter)
                requirements = ssm_response['Parameter']['Value']
                #print(requirements)

                with open(requirements_path, 'w') as file_data:
                  print(requirements, file=file_data)

                pip.main(['install', '-t', package_dir_path, '-r', requirements_path])
                shutil.make_archive(
                  os.path.splitext(layer_package_path)[0],
                  format='zip',
                  root_dir=work_dir,
                  base_dir=package_dir
                  )

                s3_resource = boto3.resource('s3')
                bucket = s3_resource.Bucket(s3_bucket)

                bucket.upload_file(
                  layer_package_path,
                  '/'.join([s3_bucket_folder, layer_package])
                  )

              cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data)

            except Exception as e:
              print(e)
              cfnresponse.send(event, context, cfnresponse.FAILED, response_data)
      EphemeralStorage:
        Size: !Ref EphemeralStorageSize
      FunctionName: !Sub "${Prefix}-customresource-01"
      Handler: !Ref Handler
      Runtime: !Ref Runtime
      Role: !GetAtt FunctionRole.Arn
      Timeout: !Ref Timeout

The aforementioned function uses the mysql-connector-python package to connect to MySQL; we prepare this package as a Lambda layer.

In this case, we will use a Lambda function associated with a CloudFormation custom resource to automatically create a Lambda layer.

A list of packages to install is registered in the SSM Parameter Store; here it contains only the aforementioned package.

For more information, please see the following page.

Related reading:
Preparing Lambda Layer Package with CFN Custom Resources – Python Version

Layer for Certificate
Resources:
  UrlsParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: !Sub "${Prefix}-customresource-02"
      Type: String
      Value: |
        https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

  LambdaLayer:
    Type: AWS::Lambda::LayerVersion
    DependsOn:
      - CustomResource
    Properties:
      CompatibleArchitectures:
        - !Ref Architecture
      CompatibleRuntimes:
        - !Ref Runtime
      Content:
        S3Bucket: !Ref CodeS3Bucket
        S3Key: !Ref LayerS3Key
      Description: !Ref Prefix
      LayerName: !Sub "${Prefix}-customresource-02"

  CustomResource:
    Type: Custom::CustomResource
    Properties:
      ServiceToken: !GetAtt Function.Arn

  Function:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - !Ref Architecture
      Environment:
        Variables:
          LAYER_PACKAGE: !Ref LayerPackage
          REGION: !Ref AWS::Region
          URLS_PARAMETER: !Ref UrlsParameter
          S3_BUCKET: !Ref CodeS3Bucket
          S3_BUCKET_FOLDER: !Ref Prefix
      Code:
        ZipFile: |
          import boto3
          import cfnresponse
          import os
          import shutil
          import subprocess
          import urllib

          layer_package = os.environ['LAYER_PACKAGE']
          region = os.environ['REGION']
          urls_parameter = os.environ['URLS_PARAMETER']
          s3_bucket = os.environ['S3_BUCKET']
          s3_bucket_folder = os.environ['S3_BUCKET_FOLDER']

          CREATE = 'Create'
          response_data = {}

          work_dir = '/tmp'
          package_dir = 'python'

          package_dir_path = os.path.join(work_dir, package_dir)
          layer_package_path = os.path.join(
            work_dir,
            layer_package
            )

          ssm_client = boto3.client('ssm', region_name=region)
          s3_client = boto3.client('s3', region_name=region)

          def lambda_handler(event, context):
            try:
              if event['RequestType'] == CREATE:
                ssm_response = ssm_client.get_parameter(Name=urls_parameter)
                urls = ssm_response['Parameter']['Value']

                result = subprocess.run(
                  ['mkdir', package_dir_path],
                  stdout=subprocess.PIPE,
                  stderr=subprocess.PIPE
                )

                for url in urls.splitlines():
                  print(url)
                  file_name = os.path.basename(url)
                  download_path = os.path.join(package_dir_path, file_name)

                  data = urllib.request.urlopen(url).read()

                  with open(download_path, mode='wb') as f:
                    f.write(data)

                shutil.make_archive(
                  os.path.splitext(layer_package_path)[0],
                  format='zip',
                  root_dir=work_dir,
                  base_dir=package_dir
                )

                s3_client.upload_file(
                  layer_package_path,
                  s3_bucket,
                  os.path.join(s3_bucket_folder, layer_package)
                )

              cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data)

            except Exception as e:
              print(e)
              cfnresponse.send(event, context, cfnresponse.FAILED, response_data)
      EphemeralStorage:
        Size: !Ref EphemeralStorageSize
      FunctionName: !Sub "${Prefix}-customresource-02"
      Handler: !Ref Handler
      Runtime: !Ref Runtime
      Role: !GetAtt FunctionRole.Arn
      Timeout: !Ref Timeout

The aforementioned function uses a certificate for SSL communication during IAM authentication.
Prepare this certificate as a Lambda layer as well.

This also automatically creates a Lambda layer using a Lambda function associated with the CloudFormation custom resource.

A list of URLs to download is registered in the SSM Parameter Store; here it contains the URL of the aforementioned certificate bundle.

For more information, please see the following page.

Related reading:
Preparing Lambda Layer Package with CFN Custom Resources – General File Version

Architecting

Use CloudFormation to build this environment and check its actual behavior.

Create CloudFormation stacks and check the resources in the stacks

Create CloudFormation stacks.
For information on how to create stacks and check each stack, please refer to the following pages.

Related reading:
CloudFormation’s nested stack

After reviewing the resources in each stack, the main resources created this time are as follows:

  • EC2 instance: i-0a625917a26ccc6ae
  • Lambda function: fa-128-function
  • Function URL for Lambda function: https://wndut2cyprxccm4cxjpm5or7bm0takbo.lambda-url.ap-northeast-1.on.aws/
  • Lambda layer 1: fa-128-customresource-01
  • Lambda layer 2: fa-128-customresource-02
  • Lambda function for DB initialization process: fa-128-customresource-03
  • DB instance: fa-128-dbinstance
  • DB instance endpoint: fa-128-dbinstance.cl50iikpthxs.ap-northeast-1.rds.amazonaws.com

Check each resource from the AWS Management Console.

First, check the DB instance.

Detail of RDS 1.

The DB instance has been successfully created.
And you can also see that IAM authentication is enabled.

Check the Lambda layer.

Detail of Lambda 1.
Detail of Lambda 2.

You can see that two layers have indeed been created.

Check the execution result of the Lambda function for the initialization process of the DB instance.

Detail of Lambda 3.

Indeed, this function has been executed by the CloudFormation custom resource.
And you can see that the SQL statement for the DB initialization process was executed within this function.

Operation Check

Now that we are ready, we will check the actual operation.

Access to DB instances from EC2 instances with IAM authentication

First, access the EC2 instance.
The instance is accessed using SSM Session Manager.

% aws ssm start-session --target i-0a625917a26ccc6ae
...
sh-4.2$

For more information on SSM Session Manager, please refer to the following page.

Related reading:
Accessing Linux instance via SSM Session Manager

Check the execution status of the instance initialization process with user data.

sh-4.2$ sudo yum list installed | grep mariadb
mariadb.aarch64                       1:5.5.68-1.amzn2               @amzn2-core
mariadb-libs.aarch64                  1:5.5.68-1.amzn2               installed

sh-4.2$  mysql -V
mysql  Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (aarch64) using readline 5.1

sh-4.2$ ls -l /*.pem
-rw-r--r-- 1 root root 174184 May  3 00:27 /global-bundle.pem

You can see that the MySQL client package has been successfully installed.
The certificate for SSL communication has also been downloaded.

Now that it has been confirmed, access the DB instance with IAM authentication.

sh-4.2$ mysql \
--host=fa-128-dbinstance.cl50iikpthxs.ap-northeast-1.rds.amazonaws.com \
--port=3306 \
--ssl-ca=/global-bundle.pem \
--default-auth=mysql_clear_password \
--user=iamuser \
--password=`aws rds generate-db-auth-token \
  --hostname fa-128-dbinstance.cl50iikpthxs.ap-northeast-1.rds.amazonaws.com \
  --port 3306 \
  --username iamuser \
  --region ap-northeast-1`
...
MySQL [(none)]>

I was able to connect to the DB instance successfully.

The access commands were taken from the following page:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.html

Note that because we are using the MariaDB client package, the --enable-cleartext-plugin option mentioned on the above page is not available.
We use the --default-auth=mysql_clear_password option instead.

https://mariadb.com/kb/en/pluggable-authentication-overview/

Related reading:
A memo on trying IAM authentication with AWS RDS (from the Japanese blog なんかいろいろと)

Execute a SELECT statement against the test table as a trial.

MySQL [(none)]> use tutorial;
...
Database changed

MySQL [tutorial]> select * from planet;
+----+---------+
| id | name    |
+----+---------+
|  1 | Mercury |
|  2 | Venus   |
|  3 | Earth   |
|  4 | Mars    |
|  5 | Jupiter |
|  6 | Saturn  |
|  7 | Uranus  |
|  8 | Neptune |
+----+---------+
8 rows in set (0.01 sec)

We were indeed able to execute a SELECT statement.

In this way, you can access and manipulate DB instances from EC2 instances with IAM authentication.

Access DB instances from Lambda function with IAM authentication

Detail of Lambda 4.

Execute the above function to see how it works.
To execute the function, access the Function URL.

$ curl https://wndut2cyprxccm4cxjpm5or7bm0takbo.lambda-url.ap-northeast-1.on.aws/
[
  [
    1,
    "Mercury"
  ],
  [
    2,
    "Venus"
  ],
  [
    3,
    "Earth"
  ],
  [
    4,
    "Mars"
  ],
  [
    5,
    "Jupiter"
  ],
  [
    6,
    "Saturn"
  ],
  [
    7,
    "Uranus"
  ],
  [
    8,
    "Neptune"
  ]
]

The function responded successfully, and the contents of the DB instance's test table were returned.

In this way, DB instances can be accessed and manipulated from Lambda functions with IAM authentication.

Summary

We have identified how to access RDS from EC2 instances and Lambda functions with IAM authentication.
