How to Process and Analyze Streaming Data using AWS Kinesis

Amazon Kinesis makes it easy to process and analyze real-time streaming data so you can get timely insights and react quickly to new information.


Why we use AWS Kinesis

Amazon Kinesis is a fully managed service that scales elastically for real-time processing of streaming data at massive scale. It can collect large streams of data records that are then consumed by applications running on Amazon EC2 instances. Kinesis makes it straightforward to process and analyze data as it arrives, so you can get accurate insights and quick answers. It offers these capabilities at an affordable cost, with flexible tools tailored to your needs, and it can ingest real-time data such as video, audio, application logs, website clickstreams, and machine-learning or IoT telemetry.

In other words, Kinesis lets you analyze and process data the moment it arrives instead of waiting until all of it has been collected.

Amazon Kinesis Capabilities

  1. Kinesis Data Streams
  2. Kinesis Data Firehose (delivery streams)
  3. Kinesis Data Analytics
  4. Kinesis Video Streams

1. Kinesis Data Streams

Kinesis Data Streams is used to build custom real-time applications on top of popular stream-processing frameworks. It ingests and stores streaming data, and consumers such as Apache Spark applications running on EC2 instances can process the records as they arrive.
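
As a concrete illustration, here is a minimal producer sketch using the AWS SDK for Python (boto3); the stream name and payload are hypothetical placeholders, and the stream is assumed to already exist.

    import json
    import boto3

    # Assumes a stream named "clickstream-demo" already exists in this region.
    kinesis = boto3.client("kinesis", region_name="us-east-1")

    event = {"user_id": "u-123", "action": "page_view", "page": "/pricing"}

    # PartitionKey determines which shard receives the record;
    # records with the same key keep their relative order.
    response = kinesis.put_record(
        StreamName="clickstream-demo",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user_id"],
    )
    print("Stored in shard:", response["ShardId"])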

2. Kinesis Data Firehose (Delivery Streams)

Kinesis Data Firehose captures, optionally transforms, and loads streaming data into AWS data stores so it can be analyzed with your existing business-intelligence tools. It continuously delivers data to the configured destination, making the stream available for near-real-time analytics.
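
For comparison, here is a minimal Firehose producer sketch with boto3; the delivery stream name is a hypothetical placeholder, and the delivery stream is assumed to already be configured with a destination such as S3.

    import json
    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    log_line = {"level": "INFO", "message": "user signed in", "service": "auth"}

    # Firehose buffers records and delivers them to the configured
    # destination (for example an S3 bucket) on your behalf.
    firehose.put_record(
        DeliveryStreamName="app-logs-to-s3",   # hypothetical stream name
        Record={"Data": (json.dumps(log_line) + "\n").encode("utf-8")},
    )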

3. Kinesis Data Analytics

Kinesis Data Analytics is the easiest way to process streaming data in real time with standard SQL, without having to learn new programming languages or frameworks. It reads data from a stream, runs continuous queries against it, and can feed analytics tools or generate alerts that respond in real time.

4. Kinesis Video Streams

Kinesis Video Streams securely ingests and stores streaming media, such as video, audio, and other data from connected devices. It gives AWS machine-learning and analytics services access to the stored video fragments and encrypts the data at rest automatically.

Advantages of AWS Kinesis

  1. Real-time
  2. Fully managed
  3. Scalable

1. Real-time

Amazon Kinesis enables you to ingest, buffer, and process streaming data in real time, so you can derive insights in seconds or minutes instead of hours or days.

2. Fully managed

Amazon Kinesis fully manages and runs all your streaming applications without the need for expensive infrastructure deployment and maintenance.

3. Scalable

Amazon Kinesis can handle any amount of streaming data and process it from hundreds of thousands of sources with little or no delay.

Use Cases of Amazon Kinesis

  1. Video analytics applications
  2. Batch to real-time analytics
  3. Build real-time applications
  4. Analyzing IoT devices

1. Video analytics applications

Kinesis is used to securely stream video from camera-equipped devices in homes, offices, factories, and public places to your AWS account. The video streams can then be used for playback, security monitoring, face detection, machine learning, and other analytics.

2. Batch to real-time analytics

With Amazon Kinesis you can perform real-time analytics on data that you previously analyzed in batches with data warehouses or Hadoop frameworks. Data lakes, data science, and machine learning are the most common use cases. You can use Kinesis Data Firehose to load data continuously and keep machine-learning models updated with fresh, accurate data.

3. Build real-time applications

You can use Amazon Kinesis for real-time applications such as fraud detection and live leaderboards. Stream data into Kinesis Data Streams, process it with Kinesis Data Analytics, and emit the results back to your applications with end-to-end delays of seconds rather than hours. This helps you learn quickly about customers, products, services, and requests so you can react right away.

4. Analyzing IoT devices

Amazon Kinesis can process streaming data directly from IoT devices such as embedded sensors, TV set-top boxes, and consumer appliances. You can use this data to send real-time alerts or take programmatic actions when a sensor reading exceeds a defined threshold. AWS provides sample IoT analytics code that is a good starting point for such applications.
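
A minimal consumer sketch along these lines is shown below, again using boto3; the stream name, shard choice, and threshold are hypothetical, and a production reader would track iterators per shard (or use the Kinesis Client Library) rather than reading a single shard once.

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")
    STREAM = "sensor-readings"      # hypothetical stream name
    THRESHOLD = 80.0                # hypothetical temperature limit

    shard_id = kinesis.describe_stream(StreamName=STREAM)[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM, ShardId=shard_id, ShardIteratorType="LATEST"
    )["ShardIterator"]

    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        reading = json.loads(record["Data"])
        if reading.get("temperature", 0) > THRESHOLD:
            # In a real application this could publish an SNS alert
            # or invoke a Lambda function instead of printing.
            print("ALERT: sensor", reading.get("sensor_id"), "over threshold")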

 

Kinesis vs. SQS

Kinesis

  • Amazon Kinesis is separate from Amazon’s Simple Queue Service (SQS); Kinesis is designed for real-time processing of streaming big data.
  • Kinesis routes records to shards using a partition key and preserves the ordering of records. Multiple clients can read from the same stream simultaneously, messages can be replayed for up to seven days, and a client can consume records at a later time.
  • A Kinesis stream does not dynamically scale in response to increased demand, so you must provision enough shards ahead of time to meet the anticipated demand of both your data producers and your data consumers.
SQS

  • SQS, on the other hand, is used as a message queue to store messages transmitted between distributed application components.
  • SQS provides messaging semantics so that your application can track the successful completion of work items, and you can delay individual messages by up to 15 minutes (see the sketch after this list).
  • Unlike Kinesis streams, SQS scales automatically to meet application demand.
  • SQS can read or write fewer messages per request than Kinesis.
  • Applications using Kinesis can therefore work with messages in larger batches than when using SQS.
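
To make the contrast concrete, here is a minimal SQS sketch with boto3 showing a delayed message and a consumer that deletes the message after processing; the queue name is a hypothetical placeholder.

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.create_queue(QueueName="demo-task-queue")["QueueUrl"]

    # Delay delivery of this message by 60 seconds (the maximum is 900 s / 15 minutes).
    sqs.send_message(QueueUrl=queue_url, MessageBody="resize-image:42", DelaySeconds=60)

    # Long-poll for up to 10 messages; delete each one once it is processed,
    # which is how SQS tracks successful completion of work.
    messages = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5
    ).get("Messages", [])
    for msg in messages:
        print("processing", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])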

A Guide on How AWS WAF Works

Web Application Firewall (WAF)

AWS WAF is a web application firewall service that helps protect your web apps from common web exploits that could affect availability, compromise security, or consume excessive resources.

The AWS Free Tier includes 10 million Bot Control requests per month.

Firewall

A firewall can be software installed on your machine or a hardware appliance that sits between your devices and the actual internet uplink. It carefully analyzes traffic and allows or restricts the flow to your device or devices according to predefined rules written into the firewall configuration.

WAF Conditions

A condition defines the basic characteristics that AWS WAF analyzes within a web request.

Six conditions of WAF are as follows:

  1. Cross-Site scripting activities
  2. GEO Match
  3. IP address
  4. Size Constraints
  5. SQL Injection attacks
  6. Strings that appear within the requests

Cross-site scripting activities

A cross-site scripting match condition specifies the parts of a web request (such as the User-Agent header) that you want AWS WAF to inspect for cross-site scripting threats.

Geographic origin (Geo match)

A geo match condition lets you allow, block, or count web requests based on the country they originate from, so you can choose which countries may use your website or block all international requests.



IP addresses

An IP match condition specifies the IP addresses or address ranges that requests originate from and that you want to use to control access to your content. Put the IP addresses that you want to allow and the IP addresses that you want to block into separate IP match conditions.


Size constraints

A size constraint condition specifies the parts of a web request (such as the User-Agent header) whose length you want AWS WAF to compare against a set size.


SQL injection attacks

A SQL injection match condition specifies the parts of a web request (such as the User-Agent header) that you want AWS WAF to inspect for malicious SQL code. Create separate conditions for the parts where you want to allow SQL queries and the parts where you don’t.


Strings that appear within requests

A string match condition or a regex match condition specifies the part of a web request (such as the User-Agent header) and the text (for example, the value of that header) that you want to use to control access to your content. Create separate conditions for the strings or regex patterns that you want to allow or block.

WAF Rules

We can combine multiple conditions into rules to target requests precisely. A web ACL has a capacity of 1,500 web ACL capacity units (WCUs). You can add hundreds of rules and rule groups to a web ACL; the total number you can add depends on the complexity and capacity of each rule.

WAF provides two types of rule:

  1. Regular Rule
  2. Rate-Based Rule

Let’s look at a sample regular rule:

  • The request comes from 172.30.0.50
  • It appears to include SQL-like code

When a rule has multiple conditions, they are combined with AND.

Rate-based rule

Rate-based rule = regular rule + rate-limiting feature

  • The request comes from 172.30.0.50
  • It appears to include SQL-like code
  • Requests from that source exceed 1,000 in 10 minutes (see the sketch after this list)
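
Here is a minimal sketch of such a rate-based rule using boto3 and the WAFv2 API; the web ACL name, scope, and limit are hypothetical, and AWS evaluates rate-based rules over a rolling time window rather than exactly the interval shown above.

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    wafv2.create_web_acl(
        Name="demo-web-acl",                      # hypothetical name
        Scope="REGIONAL",                         # use "CLOUDFRONT" for CloudFront
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Statement": {
                # Block any single IP that exceeds 1,000 requests
                # within WAF's rolling evaluation window.
                "RateBasedStatement": {"Limit": 1000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rateLimitPerIp",
            },
        }],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "demoWebAcl",
        },
    )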

WAF Web ACL

A web ACL defines the action taken when a rule matches.

Regular Rule:

  • The request comes from 172.30.0.50
  • It appears to include SQL-like code

What action you want to take now?

You can apply these actions on your WAF ACL.

Types of actions: Allow, Block, and Count.

Association

Association defines which entity the web ACL is attached to. A web ACL cannot be associated with an EC2 instance directly.

WAF association supports only these AWS services: Amazon CloudFront, Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync.
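
As a sketch, associating an existing web ACL with an Application Load Balancer looks like this with boto3; both ARNs are hypothetical placeholders.

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # The web ACL must be REGIONAL to attach to an ALB, API Gateway stage, or AppSync API.
    wafv2.associate_web_acl(
        WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/demo-web-acl/xxxx",
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/demo-alb/yyyy",
    )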

AWS Elastic Compute Cloud (EC2): Create Virtual Machine

What is AWS Elastic Compute Cloud (EC2)

AWS Elastic Compute Cloud (EC2) is a web service that provides secure, resizable compute capacity in the cloud and is used to create virtual machine instances. When launching an instance we can choose the virtual machine (VM) type that best fits the website now and in the future, as well as the storage hardware, tags, protocols, and the security group that controls inbound and outbound traffic.

The EC2 service offers fast processors and a wide range of hardware specifications, including powerful GPU instances for machine-learning training and other compute-heavy work. With EC2 Auto Scaling we can launch additional instances automatically when they are needed: Auto Scaling creates new instances when website traffic rises above the limits you set and terminates extra instances down to the minimum you configure.

By default, AWS allows only 20 EC2 instances per region. If you need more than 20 instances, you can ask AWS Support to raise this limit.
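
As an illustration, a minimal boto3 sketch for launching a single instance is shown below; the AMI ID, key pair, and security group are hypothetical placeholders you would replace with values from your own account.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",                # hypothetical key pair name
        SecurityGroupIds=["sg-0123456789abcdef0"],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "demo-web-server"}],
        }],
    )
    print("Launched:", response["Instances"][0]["InstanceId"])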

AWS EC2 Instances types:

Seven types of EC2 instances are:

  1. General Purpose (balanced CPU and memory)
  2. Compute Optimized (more CPU than RAM)
  3. Memory Optimized (more RAM)
  4. Storage Optimized (low-latency local SSD/HDD storage)
  5. Accelerated Computing (GPU optimized)
  6. High Memory (very high RAM, Nitro system)
  7. Previous Generation

General-purpose

General-purpose instances provide a balance of compute and memory. Three series are available in this category:

  • A1 series (medium to large)
  • M series (M4, M5, M5a, M5ad, M5d) (large)
  • T series (T1, T2, T3)

EC2 instances come in multiple sizes, such as nano, small, medium, and large.

The A1 series is Arm-based.

The M4 series is reliable and widely used, but it supports only EBS storage and its memory tops out at 160 GB (8 to 160 GB); the rest of the M series offers up to 384 GB and also provides NVMe SSD (Nitro-virtualized) instance storage.

The T series is typically used for demos and testing because of its smaller memory and storage.

Compute Optimized Instance

We use compute-optimized instances when a workload needs a lot of compute throughput.

Three types of compute-optimized instances are available in EC2:

  • C4
  • C5
  • C5n

C4 and C5 both provide high performance. C4 is the more cost-effective option, while C5 offers more vCPUs and newer processors.

C5n runs on the Nitro system and suits game servers, web servers, and other high-performance workloads.

Memory-optimized Instance

We use these instances for memory-intensive applications and databases.

Three types of memory-optimized instances are available:

  • R series (R4, R5, R5ad, and R5d)
  • X series (X1 and X1e)
  • Z series (Z1d)

The R series provides 16 to 768 GB of memory and EBS storage.

The X series provides 122 to 3,904 GB of memory and SSD storage.

The Z series provides 16 to 384 GB of memory and SSD storage.

Storage Optimized Instances

Three types are available in this category:

  • I series
  • D series
  • H series

Any instance type whose name starts with I, D, or H is storage optimized. These instances are built for workloads that need fast sequential read and write access to local storage.

I3 and I3en instances: vCPU 2 to 96, RAM 16 to 768 GB, NVMe SSD storage.

D2 instances: vCPU 4 to 36, RAM 30.5 to 244 GB, HDD storage.

H1 instances: vCPU 8 to 64, RAM 32 to 256 GB, HDD storage.

Accelerated Computing Instances

These are often called GPU instances because they are used for graphics-heavy workloads such as gaming and 3D rendering, as well as for AI (artificial intelligence), ML (machine learning), and DL (deep learning). Three series are available:

  • F1 instances
  • P2 & P3 instances
  • G2 & G3 instances

F1 instances use FPGAs (field-programmable gate arrays), the kind of programmable chips also found in devices such as digital cameras, to provide custom hardware acceleration.

vCPU: 8 to 64; FPGA: 1 to 8 (a single FPGA already provides substantial acceleration); RAM: 122 to 976 GB; Storage: NVMe SSD.

We use F1 instances for genomics and machine-learning workloads and for large-scale video processing (as done by services such as YouTube and Facebook).

P2 & P3 instances use NVIDIA Tesla GPU cards, with up to 32 GB of memory per GPU.

P2 instances: vCPU 4 to 64; GPU 1 to 16; RAM 61 to 732 GB; network bandwidth 25 Gbps; storage NVMe SSD.

P3 instances: vCPU 8 to 96; GPU 1 to 8; RAM 61 to 768 GB; network bandwidth 10 Gbps; storage SSD and EBS.

 

G2 & G3 instances use the NVIDIA Tesla M60 GPU card.

vCPU: 4 to 64; RAM: 30.5 to 488 GB; GPU: 1 to 4; network performance: 25 Gbps.

High Memory Instance

High memory instances run on dedicated hosts and must be purchased with a minimum three-year term, not one year. They run directly on the hardware without virtualization, and only one instance runs on each host.

Purchasing Options

There are six options for purchasing these instances:

  1. On-demand
  2. Dedicated Instance
  3. Dedicated Host
  4. Spot Instances
  5. Schedule Instance
  6. Reserved Instances

On-Demand instances suit testing and short-lived workloads (for example the T series), and they also work well with Auto Scaling.

Dedicated Instances run on hardware dedicated to a single customer inside your VPC; committing for a year lowers the price.

With a Dedicated Host you pay for an entire physical server, which is useful for bringing your own licenses; you can then launch as many instances on it as the host's capacity allows.

Spot Instances are rarely used for critical production work. If the Spot price rises above your bid after you launch, AWS sends a notification and gives you only two minutes to save your data before the instance is terminated and removed from your account. Because of this risk of data loss, Spot is best for testing, demos, and other interruptible workloads at a low price.

Scheduled Instances suit recurring schedules, for example an office that needs capacity only eight hours a day, five days a week, or any other specific recurring workload; you can purchase them for a month or a year.

Reserved Instances are a better deal than On-Demand when an instance is needed continuously; the price can be up to about 70% lower than On-Demand, but you commit to the term (typically one or three years) and usually pay in advance.

How to Access EC2 Instances

To launch and access an instance you need a key pair. The private key, not the public key, is the part you keep: it is required to retrieve the instance password and to connect over SSH with a tool such as PuTTY. Remember also that you can create only 20 instances per region by default.

How to Check EC2 Status

Here we can check the instance status: running, pending, initializing, 2/2 status checks passed, stopping, stopped, or terminated. These status checks are a back-end process that we cannot change. If we stop an instance, compute charges for that instance stop accruing.
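
A minimal boto3 sketch for reading these statuses is shown below; the instance ID is a hypothetical placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    statuses = ec2.describe_instance_status(
        InstanceIds=["i-0123456789abcdef0"],   # hypothetical instance ID
        IncludeAllInstances=True,              # include stopped instances too
    )
    for s in statuses["InstanceStatuses"]:
        print(s["InstanceId"],
              s["InstanceState"]["Name"],       # e.g. running / stopped
              s["SystemStatus"]["Status"],      # one of the 2/2 checks
              s["InstanceStatus"]["Status"])    # the other check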

How to Check EC2 Metadata

Information about an instance, such as its public IP, private IP, AMI and instance type, local hostname, security groups, and inbound/outbound rules, is called metadata. To see the metadata of your instance from inside it over SSH or PuTTY, run the following command.

curl http://169.254.169.254/latest/meta-data/

Bare Metal Instances

Bare metal instances are not virtualized; your workload runs directly on the underlying hardware. Examples include bare metal variants such as i3.metal and r5.metal.

AWS EC2 Instance Storage

Instance storage is faster than EBS. If an instance is described as instance store-backed, its root volume lives on instance storage. It is meant for short-lived data only, because the storage is physically attached to the host hardware and does not persist.

Elastic Block Storage

The root volume (the operating system) can be stored in two ways. If an instance is described as EBS-backed, its root volume lives on EBS. We can easily replicate this volume as a snapshot, copy the snapshot to another region, and create a new instance from it.

Also Read: How to Encrypt and Decrypt data using AWS KMS & CloudHSM

How to Encrypt and Decrypt data using AWS KMS & CloudHSM

This article will guide you on how to use AWS Services KMS & CloudHSM. You can use AWS KMS & CloudHSM to encrypt and decrypt data.

AWS KMS

AWS Key Management Service (KMS) is an AWS service that lets you encrypt your stored data efficiently. It provides key storage, maintenance, and management so you can encrypt data in your websites and applications and control access to data that is stored in encrypted form.

It allows you to manage and securely store your keys, known as customer master keys (CMKs).


Key Management with KMS

We can perform the following essential management functions in AWS through KMS:

  • We can create multiple keys, each with a unique alias and description.
  • You can import your own key material.
  • We can define which IAM users and roles can manage keys, using policies.
  • We can define which IAM users and roles can use keys to encrypt and decrypt data, using policies.
  • KMS can automatically rotate your keys on an annual basis.
  • We can temporarily disable keys so that they cannot be used.
  • We can re-enable disabled keys.
  • We can delete keys that have not been used for a long time.
  • We can audit the use of keys by inspecting logs in AWS CloudTrail.
  • Create custom key stores*.
  • Connect and disconnect custom key stores*.
  • Delete custom key stores*.

A minimal sketch of creating a key and using it to encrypt and decrypt data follows.
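
This sketch uses boto3; the key description and plaintext are hypothetical, and in practice you would reuse an existing CMK rather than creating one per run.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Create a symmetric customer managed CMK (normally done once, not per run).
    key_id = kms.create_key(Description="demo key for docs")["KeyMetadata"]["KeyId"]

    # Encrypt a small payload directly with the CMK (suitable for data up to 4 KB).
    ciphertext = kms.encrypt(
        KeyId=key_id, Plaintext=b"secret configuration value"
    )["CiphertextBlob"]

    # Decrypt; KMS identifies the CMK from metadata embedded in the ciphertext.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    print(plaintext)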

Types of Customer Master Keys (CMK’s)

  1. Customer Managed CMKs
  2. AWS Managed CMKs
  3. Custom Key Stores


1:Customer Managed CMK

These are CMKs in your AWS account that you create and manage yourself.

You have complete control over these CMKs, including establishing and maintaining their key policies and IAM policies, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the CMK, and scheduling the CMKs for deletion.

Customer-managed CMKs incur a monthly fee plus a usage fee above the AWS Free Tier.

2:AWS Managed CMKs

These are CMKs in your account that are created and managed on your behalf by an AWS service that is integrated with AWS KMS.

You cannot manage these CMKs directly, rotate them, or change their key policies or cryptographic operations; the service that created them uses them on your behalf.

You don’t pay a monthly fee for AWS-managed CMKs. You may pay usage fees above the AWS Free Tier, though some AWS services cover these costs for you.


Symmetric CMKs

It uses a single key for both encryption and decryption. The shared key must be sent together with the encrypted data in order for other parties to read it. Because of the simplicity of the process, it is usually faster than asymmetric encryption and is efficient in encrypting large amounts of data.

Asymmetric CMKs

It uses a mathematically related public/private key pair for encryption and decryption. The public key is used to encrypt data and can never decrypt it; the private key is the only key that can decrypt. The private key stays with its owner, while the public key and the encrypted data can be sent to other parties. This makes sharing public keys much easier: even if someone intercepts data encrypted with the public key, they cannot decrypt it.

AWS KMS also supports data keys of both kinds: symmetric data keys and asymmetric data key pairs, which are designed for client-side encryption and signing outside of AWS KMS.

  • A symmetric data key is a symmetric encryption key that you can use to encrypt data outside of AWS KMS.
  • An asymmetric data key pair is an RSA or elliptic-curve (ECC) key pair consisting of a public key and a private key. AWS KMS protects the private key under a CMK. You can use the key pair outside of AWS KMS to encrypt and decrypt data, sign messages, and verify signatures (a data-key sketch follows this list).
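
A minimal data-key sketch with boto3 follows; the CMK alias is a hypothetical placeholder, and the local encryption step is only indicated in comments because it happens outside of AWS KMS.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Generate a 256-bit symmetric data key under an existing CMK.
    resp = kms.generate_data_key(
        KeyId="alias/demo-key",      # hypothetical CMK alias
        KeySpec="AES_256",
    )
    plaintext_key = resp["Plaintext"]        # use locally to encrypt your data, then discard
    encrypted_key = resp["CiphertextBlob"]   # store this alongside the encrypted data

    # Later: recover the plaintext data key to decrypt the data locally.
    plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]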

3.Custom Key Stores

A key store is a secure location for storing cryptographic keys. The default key store in AWS KMS also supports methods for generating and managing the keys that it stores. By default, the customer master keys (CMKs) that you create in AWS KMS are generated in and protected by hardware security modules (HSMs) that are FIPS 140-2 validated cryptographic modules. The CMKs never leave the modules unencrypted.

Cloud Hardware Security Module (HSM)

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to quickly generate and use your own encryption keys on the AWS Cloud. CloudHSM manages your encryption keys using FIPS 140-2 Level 3 validated HSMs.

It is a fully managed service that automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, and backups. It also lets you scale quickly by adding and removing HSM capacity as your needs change, with no upfront payment, and it runs inside your VPC.


The following table helps to understand the critical differences between AWS CloudHSM and AWS KMS:

Difference Between KMS & HSM

Read Also: HOW TO USE AWS ELASTIC LOAD BALANCING (ELB) WITH EC2?

AWS Elastic Block Store (EBS) & It’s Volume Types

What is AWS Elastic Block Store (EBS)?

AWS EBS is a block-storage service provided by Amazon Web Services that supplies persistent volumes for EC2 instances. This article will guide you through AWS Elastic Block Store (EBS) and its volume types.


When we create a new instance we select a root volume for it. That root volume is the instance's default volume; any additional volume or storage you attach is an AWS EBS volume.

AWS EBS volume types

There are three types of AWS EBS volumes:

  1. SSD-Backed Volumes (Solid State Drive)
  2. HDD-Backed Volumes (Hard Disk Drive)
  3. Magnetic (Standard)


1:SSD Backed Volume (Solid State Drive)

This storage is bootable: we can install the OS (operating system) on it, like a C drive on Windows. It is faster than magnetic storage. Its two sub-types are as follows:

  • General Purpose SSD (gp2)
  • Provisioned IOPS SSD (io1)
General Purpose SSD (gp2)

gp2 is the SSD type attached to an instance by default. Its performance is much better than magnetic storage, and its price is correspondingly higher. It provides 3 IOPS (input/output operations per second) per GB, and pricing depends on the region.


Provisioned-IOPs SSD (io1)

io1 provides the highest IOPS. It goes beyond the 3,000 IOPS level of gp2 and, on suitable instances, can deliver up to 64,000 IOPS. These IOPS cover read, write, and transactional operations. Its price is higher than the other volume types.


2:HDD Backed Volume (Hard Disk Drive)

This storage is not bootable, so we cannot install the OS (operating system) on it. We use these volumes only for extra storage, like D, E, or F drives. Its two sub-types are as follows:

  • Throughput Optimized HDD (st1)
  • Cold HDD (sc1)
Throughput Optimized HDD (st1)

The minimum volume size is 500 GB, and the price is lower than SSD volumes. We use it for frequently accessed, throughput-intensive data, much as we would use S3 Standard.


Cold HDD (sc1)

The minimum volume size is 500 GB, and the price is lower than Throughput Optimized HDD. It provides up to 250 IOPS per volume. We use it for infrequently accessed, long-term data, much as we would use S3 Glacier.

3:Magnetic Standard

This storage is also bootable, so we can install the OS (operating system) on it, like a C drive on Windows. Its base price is lower than the other volume types, but IOPS are billed separately, so with heavy I/O it can end up costing more than the other storage types.


Difference between EBS and Instance storage

Two types of block storage devices are available for EC2. These devices can serve as the root volume (where the OS lives); S3 storage cannot be used as a root volume. The types are as follows:

  1. Elastic Block Storage
  2. Instance Storage

1:Elastic Block Storage

This type of storage is persistent: when we stop or reboot the instance we do not lose its data; data is lost only when we terminate the instance. EBS volumes are attached over the AWS network, so reads and writes travel across the network and EBS is slower than instance storage. An EBS volume can be attached to a single EC2 instance at a time.

2:Instance Storage

Instance storage is directly attached to the host, so it is faster than EBS storage, but capacity per device is limited. It is ephemeral (non-persistent) storage: if we stop or terminate the instance, the OS and data on it are lost, which is why it is rarely used as primary storage; data does survive a reboot.

How to Take and Share AWS EBS Snapshot

AWS EBS Snapshot

An AWS EBS snapshot is an image (copy) of a volume. To get a consistent snapshot, stop the instance first; if you snapshot a volume while the instance is running, the snapshot captures only the data already written to the volume and may miss the work currently in progress. We can create up to 5,000 EBS volumes and take up to 10,000 EBS snapshots per AWS account. An EBS volume itself cannot be shared or attached across Availability Zones or Regions.

An EC2 instance's storage and its EBS volumes must be created in the same Availability Zone, not in different ones.

Snapshots are specific to a region. When we take a snapshot of an EC2 instance or an EBS volume, the snapshot is automatically stored in S3 in the same region. We can then use that snapshot in any Availability Zone of the region to re-create the volume or launch the same EC2 instance again, copy it to another region, or share it with other AWS accounts, because the snapshot lives in S3 and S3 data can be moved between regions with the proper IAM permissions.
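
A minimal boto3 sketch of taking a snapshot and copying it to another region is shown below; the volume ID and regions are hypothetical placeholders.

    import boto3

    source_region = "us-east-1"
    target_region = "eu-west-1"

    ec2 = boto3.client("ec2", region_name=source_region)
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",          # hypothetical volume ID
        Description="nightly backup of web server root volume",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

    # copy_snapshot is called in the destination region and pulls from the source region.
    ec2_target = boto3.client("ec2", region_name=target_region)
    copy = ec2_target.copy_snapshot(
        SourceRegion=source_region,
        SourceSnapshotId=snapshot["SnapshotId"],
        Description="cross-region copy of nightly backup",
    )
    print("Copied snapshot:", copy["SnapshotId"])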

What is Incremental Snapshot?

An incremental snapshot works like this: suppose our EBS volume holds data in blocks A, B, and C. We take the first snapshot on 1/1/21, which copies all three blocks. We then change block C and take a second snapshot on 10/1/21. AWS copies only the changed block C into the new snapshot and simply references the unchanged blocks A and B from the earlier one. This process is an incremental snapshot.

If you delete the first snapshot, don't worry about your data: the blocks that later snapshots still need are preserved, so the second snapshot in S3 remains complete.

How to encrypt EBS volume?

EBS encryption is always performed on the EC2 side, not inside the EBS volume itself. If you encrypt a volume and take a snapshot of it, the snapshot is also encrypted, and you cannot create an unencrypted EC2 instance or volume from an encrypted snapshot.

You can, however, move data between encrypted and unencrypted EBS volumes: create a new volume with the desired setting, attach it to the same EC2 instance (not a different one), and copy the data across. The EC2 instance transparently encrypts and decrypts the data as it moves, because encryption is always performed on the EC2 side.

We cannot make an encrypted snapshot or encrypted EBS volume public.

Also Read: How to use AWS Elastic Load Balancing (ELB) with EC2?

A Guide on AWS S3 and its Storage Classes

Simple Storage Service (S3) is a service provided by Amazon Web Services (AWS). This article will act as a guide and explain AWS S3 and its storage classes in detail.

What is AWS Simple Storage Service (S3)?


S3 is cloud object storage for the internet; because it is object-based, we cannot install an OS on it. When we upload data to S3, AWS automatically stores redundant copies, although we cannot see where those copies are kept. Data is stored in buckets; buckets cannot be nested, but we can create folders inside a bucket and upload data into them. A single object can be up to 5 TB. By default we can create 100 buckets per account, and the limit can be raised through AWS Support. Buckets are private by default but can be made public.


If you want to upload a large file (around 1 GB or more), use the AWS S3 CLI, because console uploads can fail when the browser session times out. With the CLI, data uploads to S3 quickly and reliably. Follow the article linked below if you need to upload a large file:

How can we upload large files to AWS S3?

Naming Rules

A bucket name is chosen once and cannot be changed after creation. Names must be between 3 and 63 characters and cannot contain capital letters or special characters.

Life Cycle

We can also configure a lifecycle for a bucket. For example, if we upload data that is only needed for 15 days, a lifecycle rule can automatically transition it to cheaper storage or expire it afterwards.
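
A minimal boto3 sketch of such a lifecycle rule is shown below; the bucket name, prefix, and 15-day transition are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="demo-bucket",                         # hypothetical bucket name
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-temp-data",
            "Filter": {"Prefix": "temp/"},            # apply only to this prefix
            "Status": "Enabled",
            # After 15 days, move objects to Glacier; after a year, delete them.
            "Transitions": [{"Days": 15, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]},
    )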

S3 Versioning

We can also enable versioning on a bucket. With versioning, if we upload an object named my-file and later upload another object with the same name, S3 keeps both versions and we can retrieve either one at any time. Without versioning, S3 overwrites the old object with the new one. Once versioning is enabled it cannot be disabled, only suspended. We can also combine versioning with lifecycle policies, for example uploading a file on 26/4 and expiring it on 30/4.
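
Enabling versioning takes one call with boto3, sketched below with a hypothetical bucket name.

    import boto3

    s3 = boto3.client("s3")

    # Once enabled, versioning can later only be suspended, not disabled.
    s3.put_bucket_versioning(
        Bucket="demo-bucket",                          # hypothetical bucket name
        VersioningConfiguration={"Status": "Enabled"},
    )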

Host Website

We can also host our static website in the S3 Service.

MFA Delete S3

We can also require multi-factor authentication (MFA) for version deletions in an S3 bucket. The MFA security code rotates automatically every 30 seconds.

AWS S3 upload data

Open your S3 bucket and go to its properties to configure these options; from the bucket you can also upload any data.

Multipart upload

Another option available in this service is multipart upload. Multipart upload splits an object into parts that are uploaded in parallel and then reassembled into the original object once the upload completes; its purpose is purely upload speed. For example, a 100 MB file can be split into ten parts that upload in parallel and are joined back into the full 100 MB object at the end. Multipart upload can be used for files larger than 5 MB, AWS recommends it for objects of 100 MB or more, and it is required for objects larger than 5 GB.
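
With boto3, multipart upload is handled automatically once a file crosses a size threshold; the sketch below sets that threshold explicitly, and the file, bucket, and part size are hypothetical.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,   # switch to multipart above 100 MB
        multipart_chunksize=10 * 1024 * 1024,    # upload in 10 MB parts
        max_concurrency=10,                      # parts uploaded in parallel
    )

    s3.upload_file(
        "backup.tar.gz",          # hypothetical local file
        "demo-bucket",            # hypothetical bucket name
        "backups/backup.tar.gz",  # object key
        Config=config,
    )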

AWS S3 Copy Object

We can also copy this object to another s3 bucket or AWS console account.

AWS S3 Storage classes

Six Amazon S3 storage classes are available:

  1. Amazon S3 Standard
  2. Amazon S3 Standard Infrequent Access (Standard-IA)
  3. Amazon S3 Intelligent Tiering
  4. Amazon S3 One-Zone-IA
  5. Amazon Glacier
  6. Amazon S3 Glacier Deep Archive

Amazon S3 Standard

  • S3 Standard is for data that is accessed frequently, even daily.
  • It costs more to store data here than in the other classes.
  • S3 durability is 99.999999999% (11 nines), so you don't have to worry about your data.
  • The chance of losing an object is about 0.000000001%, which is why the storage cost is higher than the other classes.

Amazon S3 Standard Infrequent Access (Standard-IA)

  • This class is for data that is accessed less often, for example every few days rather than daily.
  • Its storage price is lower than S3 Standard.
  • Its durability is also 99.999999999% (11 nines), so you don't have to worry about your data.
  • Most importantly, Standard-IA has a minimum storage duration: if we upload data, access it after two or three days, and then delete it, we are still billed for roughly a month of storage, not just those few days. The class is cheap to store, but it charges more whenever the data is actually accessed; otherwise the bill stays minimal.

Amazon S3 Intelligent Tiering

  • This class is the most interesting. Suppose data from two companies (for example ABC and XYZ) sits in Standard storage: ABC's data is used daily, while XYZ's is not. After 30 days without access, Intelligent-Tiering notices that XYZ's data is idle and automatically moves it to a cheaper access tier (we don't see where it is stored). If, two months later, we access XYZ's data again, Intelligent-Tiering automatically moves it back to the frequent-access tier.
  • This automatic movement costs very little.
  • Objects smaller than 128 KB are not monitored or moved between tiers.
  • Its durability is also 99.999999999% (11 nines), so you don't have to worry about your data.
  • The chance of losing an object is about 0.000000001%.

Amazon S3 One-Zone-IA

  • As discussed, when we upload data to S3 Standard and most other classes, AWS keeps redundant copies across multiple Availability Zones, and that redundancy is included in the monthly bill.
  • In One Zone-IA, AWS does not keep copies in additional Availability Zones, which is why this class is cheaper than the others.
  • Its durability is also 99.999999999% (11 nines), although the data resides in a single Availability Zone.
  • We can also apply lifecycle policies to this class.

Amazon S3 Glacier

  • This class is much cheaper than the others because it is designed for long-term archival. When we request data back from Glacier we choose a retrieval window (for example a few hours up to a day), and AWS makes the data available within that window.
  • S3 Glacier bills according to the retrieval speed we choose.
  • Its durability is also 99.999999999% (11 nines), so you don't have to worry about your data.
  • On the AWS Free Tier you can retrieve only 10 GB per month; beyond that, S3 Glacier bills according to the amount of data and the retrieval time.

Amazon S3 Glacier Deep Archive

  • This class is about 75% cheaper than S3 Glacier because it is intended purely for long-term archival.
  • We cannot choose a retrieval window the way we can with S3 Glacier; retrievals typically complete within about 12 hours.
  • AWS still keeps redundant copies of the data across Availability Zones.
  • Its durability is also 99.999999999% (11 nines), so you don't have to worry about your data.


How to use AWS Elastic Load Balancing (ELB) with EC2?

What is AWS Elastic Load Balancing (ELB)?

AWS Elastic Load Balancing (ELB) is a very useful service provided by Amazon. This article is about the uses of AWS ELB and how you can add and optimize virtual machines using AWS Elastic Load Balancing (ELB).

Load balancing and Auto Scaling are closely related and often used together. AWS ELB automatically distributes your incoming traffic across multiple targets, such as EC2 instances and IP addresses, in one or more Availability Zones. ELB routes traffic only to healthy targets and adjusts itself as your incoming traffic changes over time, so it can handle the vast majority of workloads.


But auto-scaling creates or terminates instances if our given load increases or decreases.

Without a load balancer, users fetch data from a single server; if that server is damaged, the whole service (and potentially its data) is lost. That is why we put a load balancer in front of our servers.

How to use Virtual Machines with Elastic Load Balancing (ELB)

Suppose we have created two virtual machines (servers) and then create a load balancer in front of them. When a user requests data, the request first reaches the load balancer, which checks which server is healthy and has the lower load, forwards the request to that server, fetches the data, and returns the response to the user.

If one server fails, the load balancer detects it, sends requests only to the healthy, lightly loaded server, and serves data from there. If Auto Scaling is also enabled, it launches a new instance to replace the failed one, and the load balancer then distributes the load evenly across both instances again.

Types of Load Balancer

Four types of load balancer are available in AWS;

  1. Application Load Balancer (created 2016)
  2. Network Load Balancer
  3. Gateway Load Balancer
  4. Classic Load Balancer (Previous Generation)


1.Application Load Balancer

  • AWS launched the Application Load Balancer in 2016. It is used for HTTP-based applications and works at layer 7 of the OSI model. For example, if a task's container definition specifies port 80 for an NGINX container and port 0 for the host port, the host port is selected dynamically from the container instance's ephemeral port range (e.g., 32768 to 61000 on the latest Amazon ECS-optimized Amazon Machine Image (AMI)).
  • ALB supports HTTP and HTTPS protocols.


2.Network Load Balancer

  • The Network Load Balancer is used for network-level traffic and works at layer 4 of the OSI model (the transport layer). For example, if a task's container definition specifies port 80 for an NGINX container and port 0 for the host port, the host port is selected dynamically from the container instance's ephemeral port range.
  • It supports TCP, UDP, and TLS protocols.

3.Gateway Load Balancer

  • Gateway Load Balancers let you deploy, scale, and manage virtual appliances such as internet firewalls, intrusion detection and prevention systems, and deep packet inspection systems.

4.Classic Load Balancer

  • It works at both layer 4 and layer 7 of the OSI model because it supports protocols from both layers. For example, load balancer port 80 can be mapped to container port 3030, and load balancer port 4040 to container port 4040.
  • It supports HTTP, HTTPS, TCP, and SSL protocols.


A load balancer is always created inside a VPC. When you select subnets in multiple Availability Zones, the load balancer is enabled in each of those zones.

Remember this:

  • The instances behind a load balancer do not need public IPs. When we create a load balancer, AWS assigns the public endpoint to the load balancer itself, so inbound and outbound traffic flows through it. The load balancer lives in the VPC while the instances live in subnets; incoming traffic therefore reaches the load balancer first, which then calls the instances over their private IPs.
  • The load balancer is connected to the EC2 servers/instances, but not all incoming or outgoing traffic passes through it.

AWS ELB does not support ICMP Protocol

  • If someone reaches an instance directly via ping or SSH/PuTTY, that traffic does not use the load balancer; the request goes straight to the instance and the response comes straight back. Ping relies on the ICMP protocol, which the load balancer does not support.
  • The load balancer supports only the HTTP, HTTPS, TCP, UDP, TLS, and SSL protocols, so traffic arriving over other protocols, such as ping or SSH sessions, bypasses it.

Now, we will discuss three more concepts related to ELB:

  1. Listener
  2. Target Group
  3. Target

1:Listener

Two types of listeners are as follows:

  • Front end listener
  • Backend listener

Both listeners sit logically around the load balancer but are always invisible. The front-end listener inspects the user's request to determine its protocol (HTTP, HTTPS, or TCP) and then forwards it toward the matching servers or instances. The front-end listener never sends requests directly to the EC2 instances: it hands each request to the back-end listener, and the back-end listener forwards it to the appropriate servers to fetch the data.

As noted above, if someone accesses these servers via ping or SSH, that traffic bypasses the load balancer entirely, because ping uses the ICMP protocol, which the load balancer does not support.

2:Target Group

When registering targets by IP address, only addresses from these four fixed CIDR blocks can be used (a minimal creation sketch follows the list):

  • 10.0.0.0/8
  • 100.64.0.0/10
  • 172.16.0.0/12
  • 192.168.0.0/16
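
Here is a minimal boto3 sketch of creating a target group, registering an instance, and attaching a listener to an existing Application Load Balancer; the VPC ID, instance ID, and load balancer ARN are hypothetical placeholders.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    tg = elbv2.create_target_group(
        Name="demo-web-targets",
        Protocol="HTTP", Port=80,
        VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
        TargetType="instance",
        HealthCheckPath="/",
    )["TargetGroups"][0]

    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": "i-0123456789abcdef0"}],  # hypothetical instance ID
    )

    # The front-end listener: forward HTTP on port 80 to the target group.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/demo-alb/xxxx",
        Protocol="HTTP", Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )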

Which IP Addresses Does ELB Support

The load balancer supports only IPv4. It can span multiple Availability Zones, but only one subnet per Availability Zone; if you choose another subnet in the same zone, the previously selected subnet is detached. Each subnet should always have at least 8 free IP addresses available for the ELB.


How to manage load in each server

The load balancer is always accessed through its DNS name, not an IP address. By default it divides the load evenly across Availability Zones. Suppose one EC2 server is in the first Availability Zone, two EC2 servers are in a second zone, and four EC2 servers are in a third zone: the load balancer still splits the load per zone.

For Example:

For example, with 100% total load across these three Availability Zones (four servers in the third zone, two in the second, and one in the first), the load balancer assigns roughly 33% of the traffic to each zone. The four instances in the third zone share their 33% between them.

Meanwhile the single instance in the first zone must handle its entire 33% alone. That is clearly unbalanced: four instances share 33% on one side while one instance carries 33% on the other, which is why we use cross-zone load balancing.

Cross Zone Load Balancing

Cross-zone load balancing distributes the load across every EC2 server, not across each Availability Zone.

For Example:

A total of seven EC2 servers are available across these Availability Zones.

EC2 servers > 1 + 2 + 4 = 7

Load > 33.3% + 33.3% + 33.3% = 100%

100 / 7 = 14.28%

Each EC2 server/instance now receives about 14.28% of the load, regardless of its Availability Zone. The zone with one EC2 server handles about 14.28%, the zone with two servers handles 14.28% + 14.28% = 28.56%, and the zone with four servers handles 14.28% × 4 = 57.12%.

Cross-zone load balancing works across all Availability Zones within a region.
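
For a Network Load Balancer, cross-zone load balancing is an attribute you can switch on with boto3 (an Application Load Balancer has it enabled by default); the ARN below is a hypothetical placeholder.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/demo-nlb/yyyy",
        Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
    )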

Read Also: How to Export/Import a MySQL database via SSH in AWS Lightsail WordPress

AWS IAM: Complete Guide & Key Features

What is AWS IAM?

AWS provides many services to its users. One of them is AWS IAM. This article will guide you on how to manage policies and permissions using AWS IAM.

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely, from the console or the command line. Using IAM, you can create and manage AWS users, groups, policies, and roles, and you can use permissions to allow or deny their access to specific AWS resources and services.


IAM Key Features

Four features of IAM are as follows;

  1. Granular control
  2. Multi-factor authentication (MFA)
  3. Temporary credentials
  4. Free to use

1:Granular control

Granular control means you can grant IAM users permission to specific AWS services and resources through IAM policies: for example, terminating EC2 instances, reading an Amazon S3 bucket's contents, or managing AWS Lambda applications.

2:Multi-factor authentication (MFA)

You can add two-factor authentication to your account and to individual users for extra security. With MFA, you or your users provide not only a password or an access key and secret key but also a code from a specially configured device or app.

3:Temporary credentials

Rather than attaching permissions directly to users and user groups, IAM also lets you create roles and policies. A role defines a set of permissions that can be assumed when needed, and you can increase security by granting only temporary access to your services.

4:Free to use

AWS Identity and Access Management (IAM) is part of your AWS account, and AWS does not charge extra for it. You pay only for the other AWS services that your IAM users and their temporary security credentials use, not for IAM itself.

How to Access AWS IAM

You can work with AWS Identity and Access Management in three ways:

  1. AWS Management Console
  2. AWS Command Line Interface (CLI) IAM
  3. AWS Software Development Kits (SDK’s) IAM

1:AWS Management Console

The AWS console is a browser-based interface to manage AWS resources & services, IAM service, AWS S3, AWS EC2, Lightsail, and many other services.

2:AWS Command Line Interface (CLI) IAM

You can use the CLI to run commands in your system. Using the command line is faster and easier than using a console. Command-line tools are also helpful in creating scripts that perform AWS tasks and other services.

3:AWS Software Development Kits (SDK’s) IAM

AWS SDKs (software development kits) consist of libraries and sample code for different programming languages and platforms (Python, Java, .NET, Ruby, iOS, Android, and so on). The SDKs provide an easy way to access IAM and other AWS services programmatically.

AWS IAM Security Terminology

  1. Root Account
  2. IAM User
  3. IAM Roles
  4. IAM Groups
  5. Access Keys
  6. IAM Policies
  7. Security Group
  8. Billing Alert
  9. AWS IAM Budget

1:Root Account

When you first create an AWS console account with complete access to all AWS services, this account is known as the AWS root account.

2:AWS IAM User

IAM users are not separate AWS accounts; they are identities within the IAM service of your AWS account. The root account itself has its own email address and password for signing in to the AWS console.

You create IAM users for yourself and for anything else that needs access, including people and applications. Each user has its own password and its own access keys.

Amazon's recommended best practice is to create an IAM user for yourself, grant it administrator permissions, and use it for day-to-day work, including creating other users, rather than using the root account. You should also lock down both the root user and any administrative users with 2FA, which now supports authenticator apps such as Google Authenticator.


3:AWS IAM Roles

An IAM role is an identity that has permissions attached but no long-term credentials; an existing user, service, or application assumes the role in order to use it. Roles are well suited to situations where different users need the same permissions at different times.


4:AWS IAM Groups

A collection of users. Groups allow you to define permissions for all the users within them.


5:Access Keys & Secret Keys

The access key and Secret Key are long-term credentials for an AWS IAM user or the AWS account root user. You can use these keys to sign programmatic requests through the AWS CLI.

6:AWS IAM Policies

IAM policies define permissions for actions regardless of the method you use to perform the operation. Policies are stored as JSON documents.

Policy types
  • Identity-based policies are JSON policies (all IAM policies are stored in JSON format) attached to identities (users, groups of users, and roles); they control what actions the identity can perform, on which resources, for which services, and under what conditions.
  • Resource-based policies are JSON policy documents that you attach to resources such as an Amazon S3 bucket; they specify which principals can perform which actions on that resource and under what conditions.
  • Access control lists (ACLs) are service policies that let you control which principals in another account can access a resource; they cannot grant permissions to principals in the same account. ACLs are similar to resource-based policies, although they are the only policy type that does not use the JSON format. Amazon S3 and VPC are examples of services that support ACLs. (A minimal policy-creation sketch follows this list.)
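
The sketch below uses boto3 to create a small identity-based policy and attach it to a user; the bucket name, policy name, and user name are hypothetical.

    import json
    import boto3

    iam = boto3.client("iam")

    # Allow read-only access to a single S3 bucket.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::demo-bucket",        # hypothetical bucket
                "arn:aws:s3:::demo-bucket/*",
            ],
        }],
    }

    policy = iam.create_policy(
        PolicyName="DemoBucketReadOnly",
        PolicyDocument=json.dumps(policy_document),
    )
    iam.attach_user_policy(
        UserName="demo-user",                      # hypothetical IAM user
        PolicyArn=policy["Policy"]["Arn"],
    )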


7:AWS Security Group

A security group works as a virtual firewall to control inbound and outbound traffic through multiple protocols.

Note. They’re firewall rules.

You define the protocols, the ports, and the source IPs that can reach those ports, and there are presets for common ports such as HTTP, HTTPS, and SSH, plus options like ALL ICMP or all TCP. One convenient feature is that you can reuse a security group across many instances without recreating the rules, almost like policies, except for network traffic.
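
As a sketch, the boto3 calls below create a security group and open HTTPS to the world plus SSH from a single office IP; the VPC ID and the office CIDR are hypothetical.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    sg_id = ec2.create_security_group(
        GroupName="demo-web-sg",
        Description="Allow HTTPS from anywhere and SSH from the office",
        VpcId="vpc-0123456789abcdef0",             # hypothetical VPC ID
    )["GroupId"]

    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.10/32"}]},   # hypothetical office IP
        ],
    )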


8:Billing Alert/Alarm Alert Notification

  • This data includes the estimated charges for every AWS service you use, plus any additional charges that apply to certain services.
  • The alarm triggers when your estimated charges exceed the limit you have specified.

9:AWS IAM Billing Access

IAM is a core feature of your AWS account offered at no additional charge. You pay only for the other AWS services that your IAM users consume.

IAM Summary

Ok, here they are again in the quick form.

In this article, we covered the IAM service and its main features. It is one of the essential services that must be set up properly for any application or organization before other activities begin. Here are some closing tips from this walkthrough of AWS IAM. As mentioned, there are two ways to authenticate to AWS services: a username and password, or an access key and secret key. Keep an active eye on your account and monitor your AWS account activity.

It is best to enable 2FA on any admin-level account. Access keys play a role similar to SSH keys: they are how your infrastructure authenticates to AWS APIs. IAM policies are sets of permissions that we can assign to users or groups. Security groups are basically firewall rules.

I hope this helps someone increase AWS knowledge faster than going through the documentation themselves.

Also Read: AWS Lightsail — Pros, Cons & Best Resources to Learn

How Can We Upload Large Files to AWS S3?

Upload Large File to s3 from the Browser

For large files, Amazon S3 can split the upload into multiple parts to maximize upload speed. Uploading large files to S3 through the Amazon S3 console in the browser can fail because of session timeouts, so instead of the console, upload large files using the AWS Command Line Interface (AWS CLI) or an AWS SDK.

Note: If you use the Amazon S3 console, the maximum upload size from the browser is 160 GB. To upload a file larger than 160 GB, use the AWS CLI, an AWS SDK, or the Amazon S3 REST API.

AWS S3 Upload File Through CLI

Installing AWS CLI

We will use the AWS CLI to upload large files; it is the most reliable way to get large files into S3. Download the AWS CLI from the link below and install it on your system.

https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html


Open the link, choose the Windows installer, and download it. Then run the installer to install or update the AWS CLI.


When the setup has downloaded, install it on your system with the default settings. Don’t change the installation path; leave it on the C drive.

Command Prompt

Now, open the Command Prompt from the Windows search bar.

Command Prompt

Run the command below to check whether the AWS CLI has been installed and to start configuring it.

  • aws configure

Command Prompt

If this command asks you for an Access key, your AWS CLI is working and you can move on to preparing the upload to S3.
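If you only want to confirm that the CLI is installed without configuring credentials yet, the version check below is enough; it simply prints the installed CLI version:

  • aws --version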

AWS Console Account

Now, go back to your AWS console account and open Security Credentials. You will see options and keys for many different AWS services, but what you need to create and download here is an IAM access key and secret key.

Copy the Access key & Secret key from the downloaded file.

Now, go back to your CMD prompt and enter the AWS Access key & Secret key when prompted (a sample of the full prompt sequence follows below):

  • Access key
  • Secret Key (then press Enter)
  • Default region (the region of your S3 bucket)
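For reference, the full aws configure prompt sequence looks roughly like this (the key values are the placeholder examples from AWS's documentation, and the region is just an example; use your own values):

  • aws configure
  • AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
  • AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  • Default region name [None]: us-east-1
  • Default output format [None]: json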

Choose AWS S3 Bucket Region

 

AWS Access key & Secret key

 

Now run this command to check whether your S3 bucket shows up.

  • aws s3 ls

If this command lists your S3 buckets, you’re all set. It means you can now upload large files to S3 from your machine.
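You can also look inside a specific bucket instead of listing them all; replace the bucket name below with your own:

  • aws s3 ls s3://YOUR-BUCKET-NAME/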

AWS S3 Bucket Name

 

Now, go to the C drive, create a new folder there, and give it any name.

Local Disk C

You can see I’ve created a folder named “upload” on the C drive. Now, open this folder and paste the files that you want to upload to the S3 bucket.

Uploading Files

Now, open your CMD (command prompt) and run the command mentioned below.

  • aws s3 cp /FOLDERNAME s3://S3BUCKETNAME/ --recursive --include "FILENAME"

You can see my folder name is upload, my S3 bucket name is 4dphd, and the file name is included in the command below.

  • aws s3 cp /upload s3://4dphd/ --recursive --include "0_1 Knapsack Problem solution (English+Hindi) – YouTube"
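One caveat about this command: on its own, --include does not restrict a recursive copy, because everything is included by default, so the command above uploads every file in the folder. To upload only files that match a pattern, pair --include with --exclude; a minimal sketch using the same folder and bucket as above and a hypothetical *.mp4 pattern:

  • aws s3 cp /upload s3://4dphd/ --recursive --exclude "*" --include "*.mp4"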

Successfully Uploaded

Result

You can see both files have been uploaded, and you can also see them in your S3 bucket. Any type of large file can easily be uploaded through the AWS CLI; this is the best way to upload large files to AWS S3.
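If you upload the same folder repeatedly, aws s3 sync is a handy alternative to cp; it only transfers files that are new or have changed (same folder and bucket names as above):

  • aws s3 sync /upload s3://4dphd/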

Upload Large Files to AWS S3

Also Read: AWS Lightsail — Pros, Cons & Best Resources to Learn

How to Export/Import a MySQL database via SSH in AWS Lightsail WordPress

How to Export/Import a MySQL database

Suppose you want a database backup of an instance in AWS Lightsail WordPress through PuTTY or SSH. For that, you first need an instance; in this article, I assume you already have a WordPress instance in Lightsail.

1:Download Instance SSH Key

So you need to download the SSH key of this instance. This is an essential step. Follow the below screenshot to download the SSH key in Lightsail.

Steps to download SSH key from Lightsail account
2:Install PUTTYgen

Download PUTTYgen (for converting the SSH key) through the link mentioned below and install it on your system.

https://www.puttygen.com/

Now open PUTTYgen, click ‘Load’, and select the downloaded SSH key file. After that, click ‘Save private key’ and save the file under any name with the (*.ppk) extension.
how to save Pem file to ppk

3:Install PUTTY

Download PUTTY through the link mentioned below and install it on your system.

https://www.puttygen.com/

Now open PUTTY. In the ‘Host Name’ field, enter your Lightsail instance’s static IP address, give the session a name, and save it. Then click ‘Data’ under ‘Connection’ in the left bar and enter your instance’s username (bitnami, ubuntu, etc., depending on the type of instance) in the ‘Auto-login username’ field. Finally, go back to the left bar, expand ‘SSH’ (click the ‘+’ sign), click ‘Auth’, and choose your private key (*.ppk) file.
putty Configuration steps for lightsail db connection

Or, if you want to open your AWS Lightsail instance’s ‘phpMyAdmin’ database and check whether your instance’s database is available in ‘phpMyAdmin’, follow these steps:

Go to the left bar, expand ‘SSH’ (click the ‘+’ sign), then click ‘Tunnels’. Enter ‘8888’ in the Source port field and ‘localhost:80’ in the Destination field. Now click ‘Session’ in the left bar, choose your saved session (in my case, ‘aws session’), click ‘Save’, and then ‘Open’. A popup will appear; click “Yes”.

 

putty Configuration steps for aws lightsail wordpress db import

4: Filezilla or WinSCP

Download Filezilla or WinSCP through one of the links below and install it on your system.

https://filezilla-project.org/download.php

OR

https://winscp.net/eng/download.php
Open Filezilla/WinSCP to connect to your instance. I’ll use Filezilla for this work.
Go to the Edit menu of Filezilla > open Settings > click SFTP > add the private key of your Lightsail instance (*.ppk).
Filezilla
5:Filezilla Connection

After that, go back to the main page of Filezilla and enter the host (IP/domain), the username (server name), the password (instance password), and port 22, then click Quickconnect.
After connecting, you will see your Lightsail instance’s directories. Now create a new directory/folder for the AWS Lightsail database backup and copy the entire path, including the new folder.
Filezilla Connection

6:Check Instance Databases through PUTTY 

Now go back to PUTTY and run the following commands to check that all databases and tables of the Lightsail instance are showing:

  • sudo su
  • mysql -u root -p -h localhost (root is the database username; -p prompts for the password)
  • show databases; (shows all databases)
  • use DATABASE NAME; (the database you want to back up)
  • show tables; (shows all tables of your backup DB)
  • exit
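If you don’t know the database password on a Bitnami-based Lightsail WordPress instance, it is usually stored in a file in the bitnami home directory. Depending on the image version, one of the following should print it (this is an assumption about the Bitnami image layout; check your instance’s documentation if the files are missing):

  • cat /home/bitnami/bitnami_application_password
  • cat /home/bitnami/bitnami_credentials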

7:Command to export the Lightsail Instance database

  • mysqldump -u DB_USERNAME -p -h localhost DBNAME --single-transaction --quick --lock-tables=false > FILEZILLA_PATH_WHERE_YOU_SAVED_THE_BACKUP/filename.sql

Note: Pay attention to the angle bracket (>) in the command and don’t change it; it redirects the dump into the backup file. Otherwise, you can’t take a backup of your AWS Lightsail database tables.

Commands:

  • mysqldump -u root -p -h localhost wp_demo --single-transaction --quick --lock-tables=false > /opt/bitnami/apache2/htdocs/bbbb/backup.sql
  • Enter the password
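For a large database, you may want to compress the dump on the fly so the backup file is smaller to download. A minimal sketch using the same database and path as above:

  • mysqldump -u root -p -h localhost wp_demo --single-transaction --quick --lock-tables=false | gzip > /opt/bitnami/apache2/htdocs/bbbb/backup.sql.gz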

export Lightsail Instance database command

After a short time, you will see that your database backup file has been created at the Filezilla path you specified.

database backup file

To test the backup, you can now delete all tables of the backup database from MySQL and afterwards check whether the backup imports back into MySQL correctly.

8:Check Database

Go back to PUTTY again and run the commands mentioned below:

  • sudo su
  • mysql -u USERNAME -p
  • Enter the password
  • show databases;
  • use DBNAME; (backup database name)
  • show tables; (now you will see that all tables are missing)
  • exit

9:Command to import the Lightsail Instance database

  • mysql -u USERNAME -p -h localhost DBNAME < FILEZILLA_PATH_WHERE_YOU_SAVED_THE_BACKUP/filename.sql (the dump-only options such as --single-transaction are not needed here)

Note: Pay attention to the angle bracket (<) in the command and don’t change it; it feeds the backup file into MySQL. Otherwise, you can’t import the backup of your AWS Lightsail database tables.

Commands:

  • mysql -u root -p -h localhost wp_demo < /opt/bitnami/apache2/htdocs/bbbb/backup.sql
  • Enter password
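If you are restoring to a fresh instance where the database does not exist yet, create it first; and if you made a compressed backup as sketched earlier, you can pipe it straight into MySQL (same database name and path as above):

  • mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS wp_demo"
  • gunzip -c /opt/bitnami/apache2/htdocs/bbbb/backup.sql.gz | mysql -u root -p wp_demo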

import Lightsail Instance database command

After a few seconds, all tables will be imported successfully.

10:Check Imported Database Tables

Now, if you want to check whether the tables have been imported, you can repeat the previous commands:

  • sudo su
  • mysql -u USERNAME -p
  • Enter the password
  • show databases;
  • use DBNAME; (backup database name)
  • show tables; (now you will see all tables showing again)
  • exit

Related Article: How can we Upload Large Files to AWS S3?