
Database Migration Service Workshop

Purpose

The objective of this lab is to give you hands-on experience refactoring a database using the Schema Conversion Tool and the Database Migration Service. In this lab we will migrate a SQL Server 2008 database to Amazon Aurora using the Database Migration Service (DMS).

Picture 1

Summary

In this lab we will use the Database Migration Service (DMS) to create a replication instance with source and target endpoints, plus a replication task that replicates the data from the source to the target database instance. To create our environment, we will use a CloudFormation template to deploy the resources needed for this workshop into your AWS account in the us-east-2 region (alternatively, you can use any other US region for this lab).

The source database is a Microsoft SQL Server 2008 (MSSQL2008) instance on an EC2 instance running Windows Server 2008 R2, with a sample database ‘SalesDB’ containing 4 tables. The EC2 instance also has SQL Server Management Studio (SSMS), MySQL Workbench (WB), and the Schema Conversion Tool (SCT) pre-installed to save time during the workshop. The target database is an Amazon Aurora MySQL-compatible instance with no user data.

We will use the client tools (SSMS and MySQL Workbench) on the EC2 instance to validate data in the source and target database instances. To familiarize ourselves with SCT, we will run the assessment report on the source schema (MSSQL2008) against the target (Amazon Aurora MySQL) and review the results. We will not use the SCT-generated scripts to create the schema on the target in this lab, because the DMS replication task will do that for us. By the end of the lab, we will have a running DMS replication instance with a task that replicates data as a one-time full copy of 4 tables from the source MSSQL2008 instance to 4 tables in the target Aurora MySQL instance.

Lab Part-1

Creating the demo environment using a CloudFormation template

Note: If you have already used up the limit of 5 VPCs or IGWs in the region where you are launching this CF stack, you can launch the stack in a different US region. This CF template works in any of the 4 US commercial regions: us-east-1, us-east-2, us-west-1, and us-west-2.

  1. Log in to the AWS console using your account: https://console.aws.amazon.com/
  2. Go to CloudFormation console and click Create Stack.
  3. Select Specify an Amazon S3 template URL, paste the following S3 path of the CF template for this workshop, and click Next. https://s3.amazonaws.com/dbbucketsacr/DMSDemo_cfn_v3c.json
  4. Type the stack name “Partner-Workshop” and click Next.
  5. Enter tag as Key=Name, Value=TFC-Workshop
  6. Click Next.
  7. The CF stack creation process might take 7-8 minutes to complete.
  8. After the stack creation process completes successfully, take note of the Output values for the following resources:
     • SourceEC2EndpointDns = ______
     • SourceEC2PrivateDns = _____
     • TargetAuroraEndpointDns = _______
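The Output values recorded above can also be read programmatically. Below is a minimal Python sketch, assuming a sample response shaped like the CloudFormation DescribeStacks `Outputs` section; the DNS values shown are hypothetical placeholders, and your stack will return the real names:

```python
# Convert a CloudFormation "Outputs" list (as returned by DescribeStacks)
# into a simple name -> value dictionary for use in the later lab steps.
def outputs_to_dict(outputs):
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}

# Hypothetical sample shaped like this stack's Outputs section.
sample_outputs = [
    {"OutputKey": "SourceEC2EndpointDns",
     "OutputValue": "ec2-x-x-x-x.us-east-2.compute.amazonaws.com"},
    {"OutputKey": "SourceEC2PrivateDns",
     "OutputValue": "ip-172-31-0-10.us-east-2.compute.internal"},
    {"OutputKey": "TargetAuroraEndpointDns",
     "OutputValue": "aurora-cluster.cluster-xxxx.us-east-2.rds.amazonaws.com"},
]

endpoints = outputs_to_dict(sample_outputs)
print(endpoints["TargetAuroraEndpointDns"])
```

The same dictionary lookup works on the real output of `aws cloudformation describe-stacks` if you prefer scripting over copying values by hand.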

Build Environment Details

The CloudFormation Template will create the following resources:

1 VPC with:

Subnet1=172.31.0.0/24
Subnet2=172.31.8.0/24
SecurityGroup1=TFC-workshop-InstanceSecurityGroup-xxxx
SecurityGroup2=TFC-Workshop-AuroraSecurityGroup-xxxx
SecurityGroup3=TFC-Workshop-ReplicationInstanceSecurityGroup-xxxx

1 EC2 Instance with:

m5.large
Windows Server 2008 R2
Pre-installed Microsoft SQL Server 2008 - Express Edition - standalone instance
Pre-installed Microsoft SQL Server Management Studio (SSMS) - client
Pre-installed MySQL Workbench (WB) - client
Pre-installed AWS Schema Conversion Tool (SCT) - client
Downloaded drivers for MSSQL and MySQL connectivity for SCT test

1 Amazon Aurora instance with:

db.r4.large
MySQL 5.6.10a compatibility

Picture 2

Lab Part-2

Connecting to your environment

  1. RDP into the EC2 instance (the endpoint is the CF output value SourceEC2EndpointDns) using the Administrator password (‘Password1’).
    Note: If you’re using a Mac and haven’t already, you might want to install Microsoft Remote Desktop 8 from the App Store.
  2. Launch SSMS on the EC2 instance and connect to the local source database instance by typing ‘localhost’ for Server name and using Windows Authentication, as shown below. Picture 3

  3. Copy the following T-SQL script into a New Query window and execute it. Validate that the 4 tables in the SalesDB database have the following row counts.

    USE [SalesDB];
    SELECT 'dbo.Customers' As Table_Name, count(*) AS rows_count FROM dbo.Customers
    UNION
    SELECT 'dbo.Employees' As Table_Name, count(*) AS rows_count FROM dbo.Employees
    UNION
    SELECT 'dbo.Products' As Table_Name, count(*) AS rows_count FROM dbo.Products
    UNION
    SELECT 'dbo.Sales' As Table_Name, count(*) AS rows_count FROM dbo.Sales;
    

    Output:

    Table_Name rows_count
    dbo.Customers 19759
    dbo.Employees 23
    dbo.Products 504
    dbo.Sales 6715221
  4. Copy the following T-SQL script into another New Query window and execute it. This creates a SQL login with the user name ‘awssct’ and password ‘Password1’ for later use in SCT and DMS.

    USE [master] 
    GO 
    CREATE LOGIN [awssct] WITH PASSWORD=N'Password1', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF 
    GO 
    USE [SalesDB] 
    GO 
    CREATE USER [awssct] FOR LOGIN [awssct] 
    GO 
    EXEC master..sp_addsrvrolemember @loginame=N'awssct', @rolename=N'sysadmin' 
    GO
    
  5. Launch MySQL Workbench (client) on the EC2 instance and create a new connection to your Aurora instance endpoint (the endpoint is the CF output value TargetAuroraEndpointDns). Picture 4

  6. Validate that there are no tables in the Target Aurora instance.

Lab Part-3

Steps to use the Schema Conversion Tool

  1. Launch Schema Conversion Tool (SCT) on the EC2 instance.
  2. Go to File > New Project Wizard
  3. Enter the following values into the form and then click “Next”

    Project Name AWS Schema Conversion Tool Project1
    Location C:\Users\Administrator\AWS Schema Conversion Tool\Projects
    Database Type Transactional Database (OLTP)
    Source Database Engine Microsoft SQL Server / I want to switch engines and optimize for the cloud
  4. Enter the following values into the form and then click “Test Connection” Once the connection is tested, click “Next”

    Server Name SourceEC2PrivateDns
    Server Port 1433
    Instance Name
    User Name awssct
    Password Password1
    Use SSL Unchecked
    Microsoft SQL Server Driver Path D:\JDBC Drivers\sqljdbc_6.0\enu\jre8\sqljdbc42.jar

    Note: You can also get the EC2 hostname by pasting the following metadata URI in Internet Explorer on your EC2 instance: http://169.254.169.254/latest/meta-data/hostname

  5. Select the ‘SalesDB’ database and click Next

  6. Review the Database Migration Assessment Report and then click “Next”
  7. Enter the following values into the form and then click “Test Connection” Once the connection is tested, click “Next”

    Target Database Engine Amazon Aurora (MySQL Compatible)
    Server Name TargetAuroraEndpointDns
    Server Port 3306
    User Name awssct
    Password Password1
    Use SSL Unchecked
    Amazon Aurora Driver Path D:\JDBC Drivers\mysql-connector-java-8.0.12\mysql-connector-java-8.0.12\mysql-connector-java-8.0.12.jar
  8. Right Click on the database ‘SalesDB’ and click ‘Create Report’.

  9. Read the Assessment results and Close the SCT wizard.

    Note: We are not going to convert the schema and apply on target from within SCT for this lab. The 4 tables in our Demo have basic design structures and can be migrated without changes between MSSQL and Aurora MySQL. We will use DMS to migrate the schema along with data in next steps.

Lab Part-4

Using the Database Migration Service

An AWS Database Migration Service task requires at least a source, a target, and a replication instance. The source is the database you want to move data from, and the target is the database you are moving data to. The replication instance processes the migration tasks and requires access to your source and target endpoints inside your VPC. Replication instances come in different sizes depending on your performance needs. If you are migrating to a different database engine, the AWS Schema Conversion Tool can generate the new schema for you.
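The three building blocks described above map directly onto the DMS API. The following is only an illustrative Python sketch: the dictionary shapes mirror the DMS API operations (create_replication_instance, create_endpoint, create_replication_task) and the values come from this lab, but the ARN placeholders are hypothetical and nothing is sent to AWS here:

```python
# The replication instance does the work of moving the data.
replication_instance = {
    "ReplicationInstanceIdentifier": "TFC-DMS-Replication",
    "ReplicationInstanceClass": "dms.t2.medium",
    "MultiAZ": False,
    "PubliclyAccessible": True,
}

# One endpoint describes where data comes from, the other where it goes.
endpoints = [
    {"EndpointIdentifier": "SourceDB-MSSQL2008", "EndpointType": "source",
     "EngineName": "sqlserver", "Port": 1433, "DatabaseName": "SalesDB"},
    {"EndpointIdentifier": "TargetDB-AmazonAurora", "EndpointType": "target",
     "EngineName": "aurora", "Port": 3306},
]

# The task ties instance and endpoints together; this lab does a
# one-time full copy, which DMS calls a "full-load" migration type.
task = {
    "ReplicationTaskIdentifier": "TFC-DMS-Task-MSSQL-to-Aurora",
    "MigrationType": "full-load",
    "SourceEndpointArn": "<source-endpoint-arn>",  # known only after creation
    "TargetEndpointArn": "<target-endpoint-arn>",  # known only after creation
}
```

The console wizard in the steps below collects exactly these pieces of information, just through forms instead of API parameters.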

Picture 5

In this section of the lab you will log on to the AWS console, create a DMS replication instance and endpoints (source and target), and create a replication task. After the replication completes, you will return to the EC2 instance and verify the newly replicated tables in Amazon Aurora from MySQL Workbench.

  1. In the AWS console, search for DMS under Services and go to the Database Migration Service console, or go directly to https://console.aws.amazon.com/dms/
  2. Click Get Started and then Next
  3. Enter the following information for the Replication Instance and click Next

    Name TFC-DMS-Replication
    Description TFC-DMS-Replication
    Instance Class dms.t2.medium
    Replication Engine Version default
    VPC vpc-id-TFC-Workshop-DmsVpc
    Multi-AZ No
    Publicly Accessible Yes (checked)
    Advanced -> VPC Security Group(s) Stack Name-ReplicationInstanceSecurityGroup
  4. Enter the following information for the database endpoints and run the connection test for each.

    Source database connection details:

    Select RDS DB Instance Uncheck
    Endpoint Identifier SourceDB-MSSQL2008
    Source Engine sqlserver
    Server name SourceEC2EndpointDns
    Port 1433
    SSL mode none
    User name awssct
    Password Password1
    Database Name SalesDB
    VPC vpcid-TFC-Workshop-DmsVpc

    Target database connection details:

    Select RDS DB Instance Checked
    Endpoint Identifier TargetDB-AmazonAurora
    Target Engine aurora
    Server name TargetAuroraEndpointDns
    Port 3306
    SSL mode none
    User name awssct
    Password Password1
    Database Name SalesDB
    VPC vpcid-TFC-Workshop-DmsVpc
  5. Enter the following information for the Create Task and click Next

    Task name TFC-DMS-Task-MSSQL-to-Aurora
    Task description TFC-DMS-Task-MSSQL-to-Aurora
    Source endpoint SourceDB-MSSQL2008
    Target endpoint TargetDB-AmazonAurora
    Replication instance TFC-DMS-Replication
    Start task on create checked
    Task Settings:
    Target table preparation mode Do nothing
    Include LOB columns in replication Limited LOB mode
    Max LOB size (kb) 32
    Enable validation Checked
    Enable logging Checked
    Table mappings:
    Schema name is dbo
    Table name is like "%"
    Action Include
  6. Click Create task.

  7. After the task completes and you see the status ‘Load complete’, select the task ‘TFC-DMS-Task-MSSQL-to-Aurora’ and check the Table statistics tab. You can see the 4 tables and their row counts loaded successfully into the target Aurora MySQL instance.
  8. Log on to the EC2 instance, connect to the target Aurora instance in MySQL Workbench, and run the following SQL command to get the same row counts.

    SELECT 'dbo.Customers' As Table_Name, count(*) AS rows_count FROM dbo.Customers
    UNION
    SELECT 'dbo.Employees' As Table_Name, count(*) AS rows_count FROM dbo.Employees
    UNION
    SELECT 'dbo.Products' As Table_Name, count(*) AS rows_count FROM dbo.Products
    UNION
    SELECT 'dbo.Sales' As Table_Name, count(*) AS rows_count FROM dbo.Sales;
    

    Output:

    Table_Name rows_count
    dbo.Customers 19759
    dbo.Employees 23
    dbo.Products 504
    dbo.Sales 6715221
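The table mapping chosen in the task above (schema name is dbo, table name is like "%", action Include) can also be expressed as a DMS table-mapping JSON document, which is how the same selection rule is supplied when a task is created through the API or CLI instead of the console. A minimal Python sketch of that document:

```python
import json

# DMS table-mapping document equivalent to the console settings above:
# a single selection rule that includes every table in the dbo schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo-tables",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# DMS expects the mapping as a JSON string when a task is created via API/CLI.
mapping_json = json.dumps(table_mappings)
```

Adding further selection or transformation rules to the `rules` list is how more selective migrations (for example, excluding a large history table) are expressed in DMS.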

Conclusion:

You have successfully completed this lab by creating a Replication Task on DMS that copies 4 tables and their data from source to target. You validated the data before and after the migration using the client tools on the EC2 instance.

Clean up:

  • Delete the CloudFormation stack you created in Part 1
  • Delete the DMS resources – Task (TFC-DMS-Task-MSSQL-to-Aurora), Endpoints (SourceDB-MSSQL2008, TargetDB-AmazonAurora), Replication Instance (TFC-DMS-Replication).