
Amazon Web Services

James A Henriquez edited this page Dec 14, 2023 · 29 revisions

What is Amazon Web Services?

Amazon Web Services (AWS) is a comprehensive and widely used cloud computing platform provided by Amazon. It offers a vast array of on-demand services that businesses and individuals can use to build and manage applications and infrastructure in the cloud, providing a scalable, flexible, and cost-effective way to host websites, run applications, store data, and more.

What is Cloud Computing?

Cloud computing is a technology model that allows access to a shared pool of computing resources over the internet. Instead of owning and maintaining physical hardware and infrastructure, users can leverage cloud services to access computing power, storage, databases, networking, software, and other resources on a pay-as-you-go basis. The term "cloud" in cloud computing is a metaphor for the internet, and the services are often hosted on remote servers that are part of a vast network.

Key Characteristics

  • On-Demand Self-Service: Users can provision and manage computing resources as needed without requiring human intervention from service providers.
  • Broad Network Access: Cloud services are accessible over the internet from a variety of devices such as laptops, smartphones, and tablets.
  • Resource Pooling: Cloud providers pool computing resources to serve multiple customers. Resources are dynamically allocated and reassigned based on demand.
  • Rapid Elasticity: Cloud resources can be quickly scaled up or down to accommodate changes in workload. This flexibility allows users to pay only for the resources they consume.
  • Measured Service: Cloud computing resources are metered, and users are billed based on their usage. This pay-as-you-go model offers cost efficiency and allows for better budget management.
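The metered, pay-as-you-go idea behind the last two characteristics can be sketched in a few lines of Python. The resource names and hourly rates here are made up purely for illustration, not real AWS prices.

```python
# Hypothetical sketch of "measured service": usage is metered per resource
# and billed only for what is consumed. Rates below are illustrative only.
RATES = {"compute_hours": 0.0116, "storage_gb": 0.023}  # made-up prices

def monthly_bill(usage):
    """usage maps a resource name to the metered units consumed this month."""
    return round(sum(RATES[r] * units for r, units in usage.items()), 2)

# 200 compute-hours plus 50 GB of storage for the month
print(monthly_bill({"compute_hours": 200, "storage_gb": 50}))  # 3.47
```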

What is AWS EC2?

Amazon EC2, otherwise known as Amazon Elastic Compute Cloud, is one of AWS's most well known services. EC2 provides secure and resizable compute capacity in the cloud. In other words, Amazon provides you with a virtual server/machine that stands in for a physical server so that we can deploy our applications. This is a great alternative to acquiring our own hardware and connecting it to a network to create our own server.

The diagram below shows the basic architecture of an AWS EC2 instance deployed within an Amazon Virtual Private Cloud (VPC). The EC2 instance, which is positioned in a specific Availability Zone within the Region, is safeguarded by a security group which acts as a virtual firewall that regulates incoming and outgoing traffic. Authentication is established using a key pair, in which a private key resides on the local computer, while the public key is stored within the EC2 instance. In this specific diagram, the instance is backed by an Amazon EBS volume, which is another service of AWS that acts as a virtual hard drive. The VPC here establishes connectivity with the internet through an internet gateway.

image

What are the benefits of AWS EC2?

  • Cost-effective: Amazon EC2 offers a pay-as-you-go pricing model, which means you, the user, are in control of how much you spend on server resources.

  • Moldable: EC2 offers a variety of instance types (combinations of CPU, memory, storage, and network capacity) and operating systems, meaning it is almost fully customizable to fit your needs.

  • Scalable: EC2 can adjust computing resources as demand for the website grows or shrinks.

  • Security: EC2 provides security features such as the Virtual Private Cloud, which allows us to safely secure our website and the data we receive.

  • Reliable: Because EC2 is part of AWS, it has a strong reliability track record, which reduces the risk of server downtime.

What are some issues we may experience when using EC2?

  • Complex: Setting up and managing EC2 instances may be unfamiliar to us, and users have stated that the documentation isn't always clear and can be quite confusing.

  • Overhead: Mismanagement of resources and optimization can lead to unnecessary costs.

  • Maintenance: Ongoing maintenance will be needed to keep our EC2 instances secure.

What do we receive from the free tier?

With the AWS Free Tier, you are given 750 hours of Linux and Windows t2.micro instance usage each month for up to one year. To stay within the free tier, we must only use EC2 micro instances.

What is T2 Exactly?

T2 instances in particular are Burstable Performance Instances. This means they provide a baseline level of CPU performance and the ability to burst above that baseline. T2 instances can sustain their baseline CPU performance indefinitely, which is sufficient for most general-purpose workloads without any additional charges. If an instance needs to run at a higher CPU utilization for longer than its credits allow, it can do so for an additional charge of 5 cents per vCPU-hour. The ability to burst is governed by CPU Credits, which accrue at a rate that depends on the T2 instance size.
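The burst pricing just described can be sketched as follows. Only the 5-cents-per-vCPU-hour surcharge comes from the text; the baseline percentage and usage numbers in the example are illustrative.

```python
# Rough sketch of the T2 surcharge: CPU use above the baseline is billed
# at $0.05 per vCPU-hour. Baseline and utilization values are illustrative.
SURCHARGE_PER_VCPU_HOUR = 0.05

def burst_surcharge(avg_cpu_pct, baseline_pct, vcpus, hours):
    """Extra charge for sustaining CPU above the baseline for `hours` hours."""
    excess_fraction = max(avg_cpu_pct - baseline_pct, 0) / 100.0
    return round(excess_fraction * vcpus * hours * SURCHARGE_PER_VCPU_HOUR, 4)

# A 1-vCPU instance with a 10% baseline averaging 25% CPU for 100 hours
print(burst_surcharge(25, 10, 1, 100))  # 0.75
```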

T2.micro compared with other T2 instance types

Instance    vCPU  CPU Credits/hour  Mem (GiB)  Storage   Network Performance
t2.nano     1     3                 0.5        EBS-Only  Low
t2.micro    1     6                 1          EBS-Only  Low to Moderate
t2.small    1     12                2          EBS-Only  Low to Moderate
t2.medium   2     24                4          EBS-Only  Low to Moderate
t2.large    2     36                8          EBS-Only  Low to Moderate
t2.xlarge   4     54                16         EBS-Only  Moderate
t2.2xlarge  8     81                32         EBS-Only  Moderate
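One CPU credit lets one vCPU run at 100% for one minute, so each instance's baseline CPU utilization per vCPU can be derived from the credits-per-hour column in the table; a small sketch:

```python
# Derive baseline utilization from the credit table above:
#   baseline % per vCPU = credits_per_hour / (60 * vCPUs) * 100
# since one credit = one vCPU at 100% for one minute.
CREDITS_PER_HOUR = {  # instance -> (vCPUs, credits/hour), from the table
    "t2.nano": (1, 3), "t2.micro": (1, 6), "t2.small": (1, 12),
    "t2.medium": (2, 24), "t2.large": (2, 36),
    "t2.xlarge": (4, 54), "t2.2xlarge": (8, 81),
}

def baseline_pct(instance):
    vcpus, credits = CREDITS_PER_HOUR[instance]
    return round(credits / (60 * vcpus) * 100, 3)

print(baseline_pct("t2.micro"))  # 10.0
print(baseline_pct("t2.large"))  # 30.0
```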

How to set up an EC2 Instance

To set up an EC2 instance, you must first create an AWS account or sign into an existing one.

Once done, you will land on the console home; from there you can search for EC2 and go to the EC2 page.

image

From there you should be greeted by the EC2 Dashboard, where you will see lots of information that will be discussed later.

For now, you should click Launch Instance (clearly highlighted in the Amazon orange color).

image

On the redirected page, first enter a name for the server. It can be whatever you want; for our project I will use the name SDP32.

image

The next option is to choose an AMI. An AMI, or Amazon Machine Image, is a template containing a software configuration (such as an operating system and preinstalled software) that is commonly used by the public.

image

For the Instance type we will be using t2.micro for the reasons discussed earlier.

image

Next you are going to create a key pair. A key pair consists of a public key, which is stored on the EC2 instance, and a private key, which stays on your local computer; together they are used to authenticate your connection to the instance.

image

Once you have clicked "create a new key pair," give your key pair a name; the key file will be downloaded later. Next you can choose either the RSA or ED25519 encryption algorithm. Which algorithm to use depends on your needs: ED25519 keys are much smaller than RSA keys, which makes authentication faster and saves storage space, and they are generally considered more secure. However, RSA is well established and much more widely used in the tech world.

For the project I will be using RSA and saving it in the .pem file format instead of .ppk, as we will be using SSH to connect to the instance, not PuTTY.

image

There are several different network settings.

  • Network: The Virtual Private Cloud (VPC) that you will be using.
  • Subnet: A range of IP addresses in your VPC.
  • Auto-assign public IP: Automatically assign a public IP address to the primary network interface of the instance.
  • Firewall (Security Group): A security group is a set of firewall rules that control traffic to and from the instance. There are Inbound rules (incoming traffic to the instance) and Outbound rules (outgoing traffic from the instance). Default values will be used if nothing is specified.
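As a rough illustration of how a security group's inbound rules behave, here is a toy evaluator in Python. Security groups contain only allow rules, so traffic matching no rule is implicitly denied; the rule values below are hypothetical.

```python
# Toy model of inbound security-group evaluation: traffic is permitted when
# any allow rule matches its protocol, port, and source CIDR, else denied.
import ipaddress

def inbound_allowed(rules, protocol, port, source_ip):
    """rules: list of (protocol, from_port, to_port, cidr) allow rules."""
    for proto, lo, hi, cidr in rules:
        if (proto == protocol and lo <= port <= hi
                and ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)):
            return True
    return False  # no matching allow rule -> implicit deny

rules = [("tcp", 22, 22, "203.0.113.0/24"),   # SSH from one subnet only
         ("tcp", 80, 80, "0.0.0.0/0")]        # HTTP from anywhere
print(inbound_allowed(rules, "tcp", 80, "198.51.100.7"))  # True
print(inbound_allowed(rules, "tcp", 22, "198.51.100.7"))  # False
```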

I will keep everything defaulted for now.

image

You can then configure the storage; we will be using the default given to us for now.

image

We can now launch the instance. We should be sent to this screen, and a .pem file should appear in your downloads; keep it in a folder for later.

image

Other AWS Applications

EC2 is one of many AWS services. While EC2 is among the most popular, there are many other well liked services, each with different uses and capabilities.

Amazon Relational Database Service

Amazon Relational Database Service (RDS) is a web service that makes database configuration, management, and scaling easy in the cloud. It shares some qualities with EC2; however, RDS is much easier to manage and maintain.

Amazon RDS is a relational database service with a pay-as-you-go model. RDS supports many popular database engines such as MySQL, PostgreSQL, SQL Server, and more.

Amazon itself says, "Both Amazon RDS and Amazon EC2 offer different advantages for running a database. Amazon RDS is easier to set up, manage, and maintain than running a database on Amazon EC2, and lets you focus on other important tasks, rather than the day-to-day administration of a database. Alternatively, running a database on Amazon EC2 gives you more control, flexibility, and choice. Depending on your application and your requirements, you might prefer one over the other."

RDS, while useful, does not allow hosting websites the way EC2 does. Along with that, we have discussed with our sponsor that we will likely want to utilize a NoSQL database to store the information from our application.

Amazon Simple Storage Service

Amazon Simple Storage Service (S3) is an object storage service. It can handle any amount of data for uses such as mobile applications, websites, archives, big data analytics, etc.

S3 allows the user to organize data, configure access, and optimize the data being stored to meet the requirements of one's specific organization or business.

How it basically works is that data put into S3 is stored as an object within a bucket. An object is a file plus any metadata that describes the file, and a bucket is the container for objects. Each object has a key within its bucket that serves as a unique identifier.
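The bucket/key/object relationship can be sketched with a tiny in-memory model. This is an illustration of the concept only, not the real S3 API; all names and values are hypothetical.

```python
# Minimal model of the bucket/key/object idea: a bucket is a container,
# each object lives under a unique key, and an object carries both the
# file bytes and its descriptive metadata.
class Bucket:
    def __init__(self, name):
        self.name = name
        self.objects = {}  # key -> (data, metadata)

    def put_object(self, key, data, metadata=None):
        self.objects[key] = (data, metadata or {})

    def get_object(self, key):
        return self.objects[key]

photos = Bucket("my-photo-bucket")
photos.put_object("2023/trip/beach.jpg", b"...bytes...",
                  {"content-type": "image/jpeg"})
data, meta = photos.get_object("2023/trip/beach.jpg")
print(meta["content-type"])  # image/jpeg
```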

S3 is a key-value store, a design considered very important within NoSQL databases for handling mutating, semi-structured, or unstructured data that can grow very large. In this sense, S3 effectively works like a NoSQL database. Uploaded objects are referenced by a unique key, which provides near-endless flexibility.

S3 has been shown by many to be very manageable, reliable, secure, and compliant, with features such as analytics and insights that give the user more visibility into storage usage. In turn, this can lead to better analysis and optimization of overall storage. Amazon S3 also provides storage logging and monitoring to show exactly how resources are being used in S3. There is even S3 Versioning, which gives the user the ability to keep multiple versions of an object within the same bucket, so if an object gets deleted or overwritten by accident, it can potentially be restored.

You will often find Amazon EC2 and S3 being used together: one lets you run servers in the cloud with minimal effort, the other stores large amounts of data. They complement one another and could be an option our senior design team uses. At this moment we are still undecided and are trying to figure out exactly what we need from our backend.

Amazon Lambda

AWS Lambda is an event-driven service that delivers short-lived computing capability. It is mainly used to run code without having to deploy a virtual machine instance. Lambda can be used to update a database, make changes in storage, or even respond to a custom event generated by other applications.

An example: a photograph is taken and uploaded to an Amazon S3 bucket; the upload triggers an AWS Lambda function, which runs code that resizes the image. The output is the photo resized to web, mobile, and tablet sizes.
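A sketch of what such a Lambda handler might look like in Python. The nested event shape mirrors the notification S3 delivers to Lambda; the target sizes are assumptions, and the actual download/resize/upload work is stubbed out since image processing is beside the point here.

```python
# Sketch of a Lambda handler for the S3-upload example. A real function
# would download the image, resize it, and upload each variant back to S3;
# here we only derive the target object keys.
TARGET_WIDTHS = {"web": 1024, "mobile": 640, "tablet": 768}  # assumed sizes

def handler(event, context=None):
    record = event["Records"][0]["s3"]           # S3 notification structure
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # Map each size name to a hypothetical output location and width.
    return {f"{bucket}/{name}/{key}": width
            for name, width in TARGET_WIDTHS.items()}

event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "beach.jpg"}}}]}
print(handler(event))
```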

EC2 can work together with AWS Lambda, but it is not required; depending on the needs and requirements of the application, you may find it useful. For our senior design project, I believe we will not need AWS Lambda.

Amazon DynamoDB

Amazon DynamoDB is a fully managed, serverless NoSQL database. The service provides fast (Amazon guarantees millisecond latency at any scale) and predictable performance with seamless scalability. DynamoDB can handle up to 20 million requests per second and up to 10 trillion requests per day.

DynamoDB has two capacity modes: one automatically scales tables up and down based on the workload; in the other, the developer defines the auto-scaling configuration, including the number of capacity units.

DynamoDB also offers different methods for backing up and restoring data. One method, on-demand backup, lets the user schedule backups manually; they complete quickly with little to no effect on the application's performance. The other method, point-in-time recovery, is an automatic backup mechanism: DynamoDB continuously creates backups, and the user can restore data from up to 35 days back by selecting the exact date and time.

DynamoDB sounds similar in some respects to Amazon S3: both feature high performance, scalability, backups, security, and high availability. DynamoDB is generally good for storing structured or semi-structured data and has limits on its storage size (records should usually be less than 400 KB), but as stated earlier, it has very high access speeds. S3 is good for storing files, images, videos, etc. It can store files up to 5 TB with reasonable access speeds, though not as fast as DynamoDB. So, many people recommend DynamoDB for a real-time application.

Depending on the application and its needs, both services can be used together. For example, say you have a social media application: a user's profile picture could be uploaded and stored on S3, and the link then stored as one of the attributes in a user-profiles table in DynamoDB.
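A toy key-value model of the DynamoDB side of that example, including the 400 KB item-size limit mentioned earlier. The table and attribute names are hypothetical, and this is a conceptual sketch, not the real DynamoDB API.

```python
# Toy key-value model of a DynamoDB table: items are addressed by a
# partition key, and each item must stay under the ~400 KB size limit.
import json

MAX_ITEM_BYTES = 400 * 1024

class Table:
    def __init__(self, partition_key):
        self.partition_key = partition_key
        self.items = {}

    def put_item(self, item):
        if len(json.dumps(item).encode()) > MAX_ITEM_BYTES:
            raise ValueError("item exceeds the 400 KB limit")
        self.items[item[self.partition_key]] = item

    def get_item(self, key):
        return self.items.get(key)

users = Table("user_id")
# The large image lives in S3; DynamoDB stores only the link to it.
users.put_item({"user_id": "u1", "name": "Ada",
                "avatar_url": "s3://profile-pics/u1.jpg"})
print(users.get_item("u1")["name"])  # Ada
```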

Amazon Virtual Private Cloud

As discussed earlier, Amazon EC2 deploys within a Virtual Private Cloud (VPC). In greater detail, a virtual private cloud lets you launch AWS resources in an isolated virtual network that you have defined.

image

The specific features of VPC include:

  • Subnets: A range of IP addresses within your VPC.
  • IP Addressing: You can assign specific IP addresses, both IPv4 and IPv6, to your VPC and subnets.
  • Routing: Using route tables (a route table contains a set of rules, called routes, that determine where network traffic from your subnet or gateway is directed), you can control where traffic from your subnet or gateway is sent.
  • Gateways and endpoints: A gateway allows another network to connect to your VPC, while an endpoint lets you connect to AWS services privately, without the use of something like an internet gateway or NAT device.
  • Peering connections: Use VPC peering connections (a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses) to route traffic between the resources of two VPCs.
  • Traffic Mirroring: Copy network traffic from a network interface and send it to security and monitoring appliances that inspect the packets.
  • Transit gateways: Using a transit gateway (a central hub), route traffic between VPCs, VPN connections, and AWS Direct Connect connections.
  • VPC Flow Logs: Capture information about the IP traffic going to and from network interfaces within your VPC.
  • VPN connections: Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).
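The subnet feature above, carving a VPC's address range into smaller ranges, can be illustrated with Python's standard ipaddress module; the CIDR blocks here are just example values.

```python
# Carve a VPC CIDR block into smaller subnet ranges, then check which
# subnet owns a given address. CIDR values are illustrative.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")        # the VPC's address range
subnets = list(vpc.subnets(new_prefix=24))[:2]   # take two /24 subnets

print(subnets[0])                                     # 10.0.0.0/24
print(subnets[1])                                     # 10.0.1.0/24
print(ipaddress.ip_address("10.0.1.5") in subnets[1]) # True
```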

As shown, VPC provides many security features and is included in almost all, if not all, AWS services, allowing an easier, more secure way of connecting different AWS services with one another for whatever application you may need.