Amazon Web Services

Michael Sawan edited this page Feb 5, 2024 · 1 revision

Overview

Amazon Web Services (AWS) is a comprehensive and widely used cloud computing platform provided by Amazon. It offers a vast array of on-demand services that businesses and individuals can use to build and manage their applications and infrastructure in the cloud. AWS provides a scalable, flexible, and cost-effective solution for hosting websites, running applications, storing data, and more; these services help businesses scale and grow while minimizing the need for physical infrastructure. With a global network of data centers, AWS makes it possible to deploy applications and services worldwide, which has made it one of the leading cloud computing platforms.

What is Cloud Computing?

Cloud computing is a transformative approach to computing that leverages the internet to deliver a wide array of computing services and resources. Rather than relying on local servers or dedicated hardware, cloud computing enables users to access and utilize computing power, storage, databases, networking, software, and other resources remotely. Typically, this on-demand access is provided through a pay-as-you-go model that lets users scale their usage to their needs. The cloud eliminates the need for organizations and individuals to invest in and maintain their own physical infrastructure, offering a more flexible, cost-effective, and scalable alternative. Cloud computing has grown to become an integral part of the digital landscape, providing the foundation for a range of applications and services across industries. It has significantly simplified the way resources are managed and accessed, enabling innovation and efficiency in the world of technology.

Key Characteristics and Advantages of Cloud Computing

  • On-Demand Self-Service: Users can provision and manage computing resources as needed without requiring human intervention from service providers.

  • Broad Network Access: Cloud services are accessible over the internet from a variety of devices such as laptops, smartphones, and tablets. This facilitates remote work, collaboration, and allows for greater flexibility in managing business operations.

  • Resource Pooling: Cloud providers pool computing resources to serve multiple customers. Resources are dynamically allocated and reassigned based on demand. This leads to increased efficiency and cost savings.

  • Rapid Elasticity: Cloud resources can be quickly scaled up or down to accommodate changes in workload. This flexibility allows businesses and users to handle varying workloads and adapt to changing requirements without the need for significant manual intervention.

  • Measured Service: Cloud computing resources are metered, and users are billed based on their usage. This eliminates the need for large upfront investments in hardware and infrastructure. This pay-as-you-go model offers cost efficiency and allows for better budget management.

  • Reliability: Cloud providers typically have multiple data centers with redundant systems, ensuring high availability and reliability. In the case of hardware failures or other issues, services can often be quickly shifted to alternative resources.

  • Security: Cloud providers invest heavily in security measures, including encryption, access controls, and regular security audits.

Basics and Key features of AWS

AWS follows this cloud computing model, offering the following specific key features:

  • Regions and Availability Zones: AWS operates in multiple geographic regions around the world. Each region consists of multiple Availability Zones, which are isolated groupings of one or more data centers. This helps in providing redundancy and high availability.

  • Pricing Model: AWS follows a pay-as-you-go pricing model, where users are charged based on their actual usage of resources. This can include compute instances, storage, data transfer, and other services.

  • Management Console: The AWS Management Console is a web-based interface that allows users to manage their AWS resources and services. It provides a user-friendly way to interact with and configure AWS services.

  • SDKs and Command Line Interface (CLI): AWS offers Software Development Kits (SDKs) for various programming languages, enabling developers to interact with AWS services in their preferred language. Additionally, the AWS Command Line Interface (CLI) allows users to manage AWS services from the command line.

  • Security: AWS places a strong emphasis on security. Users can employ features such as IAM for access control, VPC for network isolation, and various encryption options for data security.

  • Compliance: AWS complies with numerous industry standards and certifications, making it suitable for a wide range of use cases, including those with specific regulatory requirements.

  • Global Reach: AWS has a global presence with data centers in multiple regions, allowing users to deploy applications and services closer to their end-users for improved performance.

  • Community and Support: AWS has a large and active community, and users can access extensive documentation, forums, and support resources.
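
The pay-as-you-go pricing model above can be made concrete with a rough cost estimate. Note this is only a sketch: the hourly rates below are made-up placeholders, not real AWS prices, which vary by region and instance type.

```python
# Rough sketch of pay-as-you-go billing: usage hours times an hourly rate.
# NOTE: the rates below are hypothetical placeholders, not real AWS prices.
HOURLY_RATES = {
    "t2.micro": 0.0116,   # assumed USD/hour
    "t2.small": 0.023,    # assumed USD/hour
}

def estimate_monthly_cost(instance_type: str, hours: float) -> float:
    """Return the metered cost of running one instance for `hours` hours."""
    return round(HOURLY_RATES[instance_type] * hours, 2)

# One t2.micro running for a full 30-day month (720 hours):
print(estimate_monthly_cost("t2.micro", 720))  # -> 8.35
```

The point of the model is that cost tracks usage directly: an instance that runs half the month bills half as much, with no upfront hardware spend.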

Why use AWS?

AWS is a widely known and well documented cloud computing platform, which made it an easy decision when choosing where to host our application. Its flexibility and immense number of options allow us to cater to the unique requirements and challenges we may face when building a healthcare application. AWS is also continuously innovating and introducing new services, which gives our application access to the latest technology and ensures it can stay up to date with emerging tools, especially in the world of healthcare technology.

AWS also places a strong emphasis on security, offering a comprehensive set of tools and services to ensure the confidentiality, integrity, and safety of user information. The platform's commitment to security and compliance aligns closely with our need to follow healthcare regulations, making it a great choice for this situation.

In addition, the scalability of AWS is particularly beneficial for telehealth apps, which often experience fluctuations in usage. For now, the AWS free tier will provide us with more than enough resources to handle the demands of our application. If we require more resources, we can easily scale up to ensure a responsive and reliable user experience. This level of scalability is crucial for an app like ours, since we will need to accommodate the diverse needs of healthcare providers, patients, and any other staff engaged in telehealth interactions.

Additionally, AWS has extensive documentation and support services available, which can be a valuable resource for troubleshooting and general guidance throughout development and deployment. In summary, AWS provides a comprehensive array of tools that can effectively serve the different needs of our telehealth application, making it an excellent choice that fits our project criteria.

Potential Areas of Expansion

In addition to satisfying our project requirements, AWS will allow us to expand upon our application. With over 200 different services, there are many avenues that we can take and additional features that can be added using AWS to further the development of our application. Whether it's focusing on user experience improvements, data management, or advanced analytics, AWS provides a strong foundation for innovation and potential growth of our application.

  • Machine Learning and Analytics: AWS offers machine learning services like Amazon SageMaker, which can be utilized to enhance our telehealth application with predictive analytics, diagnostic assistance, and personalized healthcare insights. Machine learning models can analyze patient data to provide proactive health recommendations or identify potential health trends.
  • Managed Services for Healthcare: AWS offers managed services designed for healthcare applications, such as Amazon HealthLake for storing and analyzing health data in a structured manner. Amazon HealthLake itself is a HIPAA eligible service, which can allow us to handle health data in a secure and ethical way while complying with different regulations. These services simplify data management and processing, which will allow us to focus on building features specific to our application.
  • Real-time Communication: AWS provides services like Amazon Connect and Amazon Chime SDK, which can facilitate real-time communication features such as video conferencing and secure messaging within the telehealth application. This is essential for enhancing interactions within our application, enabling secure and high quality communication between healthcare providers and patients.
  • Enhanced User Engagement with Amazon Polly: Amazon Polly is AWS's text-to-speech service. It can be used to convert text-based information into natural-sounding speech, which will enhance user accessibility and engagement.
  • Chatbots with Amazon Lex: Through this service, we can implement a chatbot that can assist users with general inquiries, basic health information, etc., which can add another layer of user interaction.

By incorporating these AWS services and features, we can expand the functionality of our app while also potentially improving its performance, security, and overall user experience. However, as we are still relatively early in development, the primary service we will be working with is AWS EC2, one of the foundational services we will use to host our application.

What is AWS EC2?

Amazon EC2, otherwise known as Amazon Elastic Compute Cloud, is one of Amazon's most well known services. EC2 provides secure and resizable compute capacity in the cloud. In other words, Amazon provides you with a virtual server/machine that stands in for a physical server, on which we can deploy our applications. This is a great alternative to acquiring our own hardware and networking it to create a server.

The diagram below shows the basic architecture of an AWS EC2 instance deployed within an Amazon Virtual Private Cloud (VPC). The EC2 instance, which is positioned in a specific Availability Zone within the Region, is safeguarded by a security group which acts as a virtual firewall that regulates incoming and outgoing traffic. Authentication is established using a key pair, in which a private key resides on the local computer, while the public key is stored within the EC2 instance. In this specific diagram, the instance is backed by an Amazon EBS volume, which is another service of AWS that acts as a virtual hard drive. The VPC here establishes connectivity with the internet through an internet gateway.

image

What are the benefits of AWS EC2?

  • Cost-effective: Amazon EC2 uses a pay-as-you-go pricing model, which means you, the user, control how much you spend on server resources.

  • Flexible: EC2 offers a variety of instance types (combinations of CPU, memory, storage, and network capacity) and operating systems, meaning it can be customized to fit almost any need.

  • Scalable: EC2 can adjust computing resources as demand for the application grows or shrinks.

  • Security: EC2 provides different security features, such as Amazon Virtual Private Cloud (VPC), which allows us to secure our website and the data we receive.

  • Reliable: Because EC2 is backed by AWS's infrastructure, it is highly reliable, reducing the risk of server downtime.

What are some issues we may experience when using EC2?

  • Complex: Setting up and managing EC2 instances may be unfamiliar to us, and users have reported that the documentation isn't always clear and can be confusing to understand.

  • Overhead: Mismanaged resources and poor optimization can lead to unnecessary costs.

  • Maintenance: Ongoing maintenance will be needed to keep our EC2 instances secure.

What do we receive from the free tier?

With the AWS free tier, you are given 750 hours of Linux and Windows t2.micro instance time each month for up to one year. Since even a 31-day month is only 744 hours, this is enough to run a single instance continuously. To stay within the free tier we must only use EC2 micro instances.

What is T2 Exactly?

T2 instances in particular are Burstable Performance Instances. This means they provide a baseline level of CPU performance with the ability to burst above that baseline. For most general-purpose workloads, a T2 instance's baseline plus occasional bursts will be sufficient without any additional charges. If sustained CPU utilization above the baseline is needed, it can be allowed for an additional charge of 5 cents per vCPU-hour. Bursting is governed by CPU Credits, which accrue at a rate that depends on the T2 instance size.

t2.micro compared with other T2 instance types

| Instance | vCPU | CPU Credits / hour | Mem (GiB) | Storage | Network Performance |
|----------|------|--------------------|-----------|---------|---------------------|
| t2.nano | 1 | 3 | 0.5 | EBS-Only | Low |
| t2.micro | 1 | 6 | 1 | EBS-Only | Low to Moderate |
| t2.small | 1 | 12 | 2 | EBS-Only | Low to Moderate |
| t2.medium | 2 | 24 | 4 | EBS-Only | Low to Moderate |
| t2.large | 2 | 36 | 8 | EBS-Only | Low to Moderate |
| t2.xlarge | 4 | 54 | 16 | EBS-Only | Moderate |
| t2.2xlarge | 8 | 81 | 32 | EBS-Only | Moderate |
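
The credit mechanism can be sketched as a simple simulation: an instance earns credits at its hourly rate (6/hour for a t2.micro, per the table above) and spends roughly one credit per vCPU-minute of full utilization. This is a deliberately simplified model of the real accounting, which also caps how many credits can accrue.

```python
def simulate_t2_credits(earn_rate_per_hour: float, hours: int,
                        busy_minutes_per_hour: float,
                        start_credits: float = 0.0) -> float:
    """Simplified T2 CPU-credit model: one credit buys roughly one
    vCPU-minute at 100% utilization; credits accrue every hour."""
    credits = start_credits
    for _ in range(hours):
        credits += earn_rate_per_hour     # credits earned this hour
        credits -= busy_minutes_per_hour  # credits spent bursting
        credits = max(credits, 0.0)       # balance cannot go negative
    return credits

# A t2.micro (6 credits/hour) bursting 4 minutes each hour banks credits;
# one bursting 10 minutes each hour exhausts them and stays at baseline.
print(simulate_t2_credits(6, 24, 4))   # -> 48.0
print(simulate_t2_credits(6, 24, 10))  # -> 0.0
```

The takeaway is that intermittent load (a typical web server) fits a t2.micro well, while a constantly busy workload will run out of credits.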

How to setup an EC2 Instance

To set up an EC2 instance, you must first create an AWS account or sign in to an existing one.

Once signed in, you will land on the AWS Management Console home; from there you can search for and open the EC2 page.

image

You should now be greeted by the EC2 Dashboard, where you will find lots of information that will be discussed later.

For now, click Launch Instance (clearly highlighted in Amazon orange).

image

On the page you are redirected to, first enter a name for the server. It can be whatever you want; for our project I will use the name SDP32.

image

The next option is to choose an AMI. An AMI, or Amazon Machine Image, is a template containing a software configuration; several commonly used public ones are offered.

image

For the Instance type we will be using t2.micro for the reasons discussed earlier.

image

Next you are going to create a key pair. A key pair is a combination of a public key that is used to encrypt data, and a private key that is used to decrypt the same data.

image

Once you click "Create new key pair," give your key pair a name; the key file will be downloaded shortly. Next, choose either the RSA or ED25519 encryption algorithm. Which to use depends on your needs: ED25519 keys are much smaller than RSA keys, making for faster authentication and less storage space, and they are generally considered more secure. However, RSA is well established and much more widely used in the tech world.

For the project I will be using RSA and saving it in the .pem format instead of .ppk, as we will be using SSH to connect to the instance rather than PuTTY.

image

There are multiple different settings for the network:

  • Network: The Virtual Private Cloud (VPC) that you will be using.
  • Subnet: A range of IP addresses in your VPC.
  • Auto-assign public IP: Automatically assign a public IP address to the primary network interface of the instance.
  • Firewall (Security Group): A security group is a set of firewall rules that control traffic to and from the instance. There are Inbound rules (incoming traffic to the instance) and Outbound rules (outgoing traffic from it). Default values are used if nothing is specified.

I will keep everything defaulted for now.

image

You can then configure the storage; we will use the defaults for now.

image

We can now launch the instance. You should be sent to this screen, and a .pem file should be in your downloads; keep it in a folder for later.
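
That downloaded .pem file is what authenticates your SSH connection to the running instance. The key name, user, and hostname below are placeholders (the login user depends on the AMI, e.g. ec2-user for Amazon Linux, ubuntu for Ubuntu); this is just a sketch of assembling the connection command.

```python
def build_ssh_command(key_path: str, user: str, host: str) -> list[str]:
    """Assemble the ssh invocation for an EC2 instance keyed by a .pem file.
    Note the key file must not be world-readable (chmod 400 key.pem),
    or ssh will refuse to use it."""
    return ["ssh", "-i", key_path, f"{user}@{host}"]

# Placeholder values -- substitute your own key file and the instance's
# public DNS name or IP from the EC2 console.
cmd = build_ssh_command("SDP32.pem", "ec2-user",
                        "ec2-203-0-113-25.compute-1.amazonaws.com")
print(" ".join(cmd))
```

Running the printed command from the folder containing the key should drop you into a shell on the instance.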

image

Other AWS Applications

EC2 is one of many AWS services. While EC2 is among the most popular, there are many other well liked services, each with different uses and capabilities.

Amazon Relational Database Service

Amazon Relational Database Service (RDS) is a web service that makes database configuration, management, and scaling easy in the cloud. It shares some qualities with EC2; however, RDS is much easier to manage and maintain.

Amazon RDS is a relational database service with a pay-as-you-go model. RDS supports many popular database engines such as MySQL, PostgreSQL, SQL Server, and more.

Amazon themselves say, "Both Amazon RDS and Amazon EC2 offer different advantages for running a database. Amazon RDS is easier to set up, manage, and maintain than running a database on Amazon EC2, and lets you focus on other important tasks, rather than the day-to-day administration of a database. Alternatively, running a database on Amazon EC2 gives you more control, flexibility, and choice. Depending on your application and your requirements, you might prefer one over the other."

While RDS is useful, it does not allow for hosting websites the way EC2 does. Along with that, we have discussed with our sponsor that we will likely want to utilize a NoSQL database to store the information from our application.

Amazon Simple Storage Service

Amazon Simple Storage Service (S3) is an object storage service. It can handle any amount of data for uses such as mobile applications, websites, archives, big data analytics, etc.

S3 allows the user to organize data, configure access, and optimize the data being stored to meet the requirements of one's specific organization or business.

At a basic level, data stored in S3 is kept as an object within a bucket. An object is a file plus any metadata that describes the file, and a bucket is the container for objects. Each object has a key within its bucket that serves as a unique identifier.

S3 is a key-value store, something considered very important in NoSQL databases for handling mutating, semi-structured, or unstructured data that can grow very large; in that sense, S3 can effectively act as a NoSQL store. Uploaded objects are referenced by a unique key, which provides near endless flexibility.

S3 has been shown by many to be very manageable, reliable, secure, and compliant, with features such as analytics and insights that give the user more visibility into storage usage. In turn, this can lead to better analysis and optimization of overall storage. Amazon S3 also provides storage logging and monitoring to see exactly how S3 resources are being used. There is even S3 Versioning, which gives the user the ability to keep multiple versions of an object within the same bucket; if an object gets deleted or overwritten by accident, it can potentially be restored.
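
The bucket/key/object model described above can be sketched with a tiny in-memory stand-in. This mimics the semantics only, not the real boto3 API; the bucket name is made up. Each key maps to a list of versions, so an accidental overwrite keeps the older copy recoverable, much like S3 Versioning.

```python
class FakeBucket:
    """In-memory stand-in for an S3 bucket with versioning enabled.
    Conceptual model only -- not the real S3 or boto3 API."""

    def __init__(self, name: str):
        self.name = name
        self._objects: dict[str, list[bytes]] = {}  # key -> version history

    def put_object(self, key: str, body: bytes) -> int:
        """Store a new version under `key`; return its version index."""
        self._objects.setdefault(key, []).append(body)
        return len(self._objects[key]) - 1

    def get_object(self, key: str, version: int = -1) -> bytes:
        """Fetch a version (latest by default); keys are the unique identifiers."""
        return self._objects[key][version]

bucket = FakeBucket("sdp32-uploads")
bucket.put_object("profiles/alice.png", b"old-image-bytes")
bucket.put_object("profiles/alice.png", b"new-image-bytes")  # overwrite
print(bucket.get_object("profiles/alice.png"))               # latest version
print(bucket.get_object("profiles/alice.png", version=0))    # older, restorable copy
```

The version history is exactly what makes the accidental-overwrite recovery described above possible.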

You will often find Amazon EC2 and S3 used together: one lets you run servers in the cloud with minimal effort, the other stores large amounts of data. They complement one another and could be an option our senior design team uses. At the moment we are still undecided and are figuring out exactly what we need from our backend.

AWS Lambda

AWS Lambda is an event-driven service that provides short-lived compute capacity. It is mainly used to run code without having to deploy a virtual machine instance. Lambda can be used to update a database, make changes in storage, or even respond to custom events generated by other applications.

For example, when a photograph is uploaded to an Amazon S3 bucket, the upload can fire a trigger in AWS Lambda. Lambda then runs code that resizes the image, outputting the photo at web, mobile, and tablet sizes.
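
A handler for that flow might look roughly like the sketch below. The event shape follows the S3 notification format Lambda receives, but the resize step is stubbed out (a real handler would fetch the object with boto3 and resize it with an image library), and the size presets are made up.

```python
# Sketch of an S3-triggered Lambda handler; the resizing itself is stubbed.
# A real handler would download the object via boto3 and resize with Pillow.
TARGET_WIDTHS = {"web": 1024, "tablet": 768, "mobile": 320}  # assumed presets

def lambda_handler(event: dict, context=None) -> dict:
    """Read the uploaded object's bucket/key from the S3 event notification
    and report the resized variants that would be produced."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    return {
        "source": f"s3://{bucket}/{key}",
        "outputs": [f"{key}.{name}-{w}px" for name, w in TARGET_WIDTHS.items()],
    }

# Simulated S3 upload event, in the shape Lambda delivers to the handler:
event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "uploads/cat.jpg"}}}]}
print(lambda_handler(event))
```

The key idea is that the function only exists for the duration of the event; there is no server to keep running between uploads.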

EC2 can work together with AWS Lambda, but it is not required; depending on the needs and requirements of the application, you may find it useful. For our senior design project I believe we will not need AWS Lambda, but that opinion can always change depending on our situation.

Amazon Virtual Private Cloud

As discussed earlier, Amazon EC2 deploys within a Virtual Private Cloud (VPC). In greater detail, a virtual private cloud allows you to launch AWS resources in an isolated virtual network that you have defined.

image

The specific features of a VPC include:

  • Subnets: A range of IP addresses within your VPC.
  • IP Addressing: You can assign specific IP addresses, both IPv4 and IPv6, to your VPC and subnets.
  • Routing: Using route tables (a route table contains a set of rules, called routes, that determine where network traffic is directed), you can control where traffic from your subnet or gateway is sent.
  • Gateways and endpoints: A gateway connects your VPC to another network, while an endpoint lets you connect to AWS services privately, without using an internet gateway or NAT device.
  • Peering connections: Use VPC peering connections (a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses) to route traffic between the resources in two VPCs.
  • Traffic Mirroring: Copy network traffic from a network interface and send it to security and monitoring appliances to inspect the packets.
  • Transit gateways: Using a transit gateway (a central hub), route traffic between VPCs, VPN connections, and AWS Direct Connect connections.
  • VPC Flow Logs: Capture information about the IP traffic going to and from network interfaces within your VPC.
  • VPN connections: Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).

As shown, a VPC provides many security features and comes included with all of these services, allowing an easier, more secure way of connecting different AWS services to one another for whatever application you may need.
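
Route-table behavior, as described in the list above, follows longest-prefix matching: the most specific route containing the destination wins. The sketch below models that with Python's stdlib ipaddress module; the two routes shown are typical defaults for a VPC with an internet gateway, not taken from any real configuration, and the gateway ID is a placeholder.

```python
import ipaddress

# Hypothetical VPC route table: a local route for the VPC CIDR plus a
# default route (0.0.0.0/0) pointing at an internet gateway.
ROUTES = {
    "10.0.0.0/16": "local",    # traffic within the VPC stays local
    "0.0.0.0/0": "igw-12345",  # everything else exits via the internet gateway
}

def route_for(destination: str) -> str:
    """Return the target of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(destination)
    matches = [ipaddress.ip_network(cidr) for cidr in ROUTES
               if dest in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[str(best)]

print(route_for("10.0.4.17"))      # inside the VPC CIDR -> local
print(route_for("93.184.216.34"))  # anything else -> internet gateway
```

Both routes match the second address's /0 entry, but only the /16 matches the first, which is why internal traffic never leaves the VPC.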

Amazon DynamoDB

Amazon DynamoDB is a fully managed, serverless NoSQL database. The service provides fast (Amazon guarantees single-digit millisecond latency at any scale) and predictable performance with seamless scalability. DynamoDB can handle up to 20 million requests per second and up to 10 trillion requests per day.

DynamoDB has two capacity modes. In one, tables are automatically scaled up and down based on the workload; in the other, the developer defines the auto-scaling configuration, including the number of capacity units.

DynamoDB also offers different methods for backing up and restoring data. One is on-demand backup, where the user schedules backups manually; these complete quickly with little to no effect on the application's performance. The other is point-in-time recovery, an automatic backup mechanism: DynamoDB continuously creates backups, and the user can restore data from up to 35 days back by selecting the exact date and time.

DynamoDB is similar in some respects to Amazon S3. Both feature high performance, scalability, backups, security, and high availability. DynamoDB is generally good for storing structured or semi-structured data and has limits on item size (records should usually be less than 400 KB), but as stated earlier it has very high access speeds. S3 is good for storing files, images, videos, etc.; it can store files up to 5 TB with reasonable access speeds, though not as fast as DynamoDB. So, many people recommend DynamoDB for real-time applications.

Depending on the application and its needs, both services can be used together. For example, in a social media application, a user's profile picture could be uploaded and stored in S3, with the link stored as an attribute in a user-profiles table in DynamoDB.
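
That pattern, large image bytes in S3 and only a small pointer in DynamoDB, can be sketched with plain dictionaries standing in for both services (these are conceptual stand-ins, not the real APIs, and the key names are made up). The DynamoDB item stays tiny, comfortably under the ~400 KB item limit, because it stores only the S3 key.

```python
MAX_ITEM_BYTES = 400_000  # DynamoDB's per-item limit is roughly 400 KB

fake_s3 = {}        # stand-in for an S3 bucket: key -> object bytes
fake_dynamodb = {}  # stand-in for a user-profiles table: user_id -> item

def save_profile_picture(user_id: str, image_bytes: bytes) -> dict:
    """Store the (possibly large) image in S3 and only its key in DynamoDB."""
    s3_key = f"profile-pictures/{user_id}.png"
    fake_s3[s3_key] = image_bytes                       # big payload -> "S3"
    item = {"user_id": user_id, "picture_key": s3_key}  # small pointer item
    assert len(str(item).encode()) < MAX_ITEM_BYTES     # item stays tiny
    fake_dynamodb[user_id] = item                       # pointer -> "DynamoDB"
    return item

# An 8 KB "image" -- too large to want inside a database record anyway:
item = save_profile_picture("alice", b"\x89PNG-data" * 800)
print(item["picture_key"])  # -> profile-pictures/alice.png
```

Looking up a profile then takes one fast DynamoDB read for the metadata, followed by an S3 fetch only when the image itself is actually needed.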