17/06/2024

Assessment

Quiz

Science

1st - 5th Grade

Hard

Created by Ben_ _Papuche

6 questions

1.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

[Image: the sample Amazon S3 bucket policy referenced by this question]

A junior developer has downloaded a sample Amazon S3 bucket policy to make changes to it based on new company-wide access policies. He has requested your help in understanding this bucket policy.

As a Solutions Architect, which of the following would you identify as the correct description for the given policy?

It authorizes an IP address and a CIDR to access the S3 bucket

It ensures EC2 instances that have inherited a security group can access the bucket

It ensures the S3 bucket is exposing an external IP within the CIDR range specified, except one IP

It authorizes an entire CIDR except one IP address to access the S3 bucket

Answer explanation

Correct option:

It authorizes an entire CIDR except one IP address to access the S3 bucket -

You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies.

Let's analyze the bucket policy one step at a time:

The snippet "Effect": "Allow" implies an allow effect. The snippet "Principal": "*" implies any principal. The snippet "Action": "s3:*" implies any S3 API. The snippet "Resource": "arn:aws:s3:::examplebucket/*" implies that the resource can be the bucket examplebucket and its contents.

Now consider the last snippet of the given bucket policy:

"Condition": { "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}, "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"} }

This snippet implies that if the source IP is in the CIDR block "54.240.143.0/24" (i.e. 54.240.143.0 - 54.240.143.255), the request is allowed to access examplebucket and its contents. However, the source IP cannot be in the CIDR "54.240.143.188/32" (a single IP, 54.240.143.188), which means that this one IP address is explicitly blocked from accessing examplebucket and its contents.
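
For illustration, the policy analyzed above can be expressed as a single JSON document and attached to the bucket with boto3. This is a minimal sketch that uses only the values quoted in the snippets above; credentials and region configuration are assumed:

```python
import json

import boto3

# The bucket policy analyzed above: any principal may call any S3 API on
# examplebucket and its contents, but only from 54.240.143.0/24, and never
# from the single address 54.240.143.188/32.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
                "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"},
            },
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(bucket_policy))
```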

Example Bucket policies:

via - https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

Incorrect options:

It ensures the S3 bucket is exposing an external IP within the CIDR range specified, except one IP

It ensures EC2 instances that have inherited a security group can access the bucket

It authorizes an IP address and a CIDR to access the S3 bucket

These three options contradict the explanation provided above, so they are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

2.

MULTIPLE SELECT QUESTION

1 min • 11 pts

A company runs a popular dating website on the AWS Cloud. As a Solutions Architect, you've designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend uses an RDS PostgreSQL database. Currently, the application uses a username and password combination to connect the Lambda function to the RDS database.

You would like to improve the security at the authentication level by leveraging short-lived credentials. What will you choose? (Select two)

Use IAM authentication from Lambda to RDS PostgreSQL

Restrict the RDS database security group to the Lambda's security group

Deploy AWS Lambda in a VPC

Attach an AWS Identity and Access Management (IAM) role to AWS Lambda

Embed credential rotation logic in the AWS Lambda function, retrieving the credentials from SSM

Answer explanation

Correct options:

Use IAM authentication from Lambda to RDS PostgreSQL

Attach an AWS Identity and Access Management (IAM) role to AWS Lambda

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication.

IAM database authentication provides the following benefits:

1. Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).

2. You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.

3. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
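
As a minimal sketch of how the two correct options fit together in code, the following assumes hypothetical endpoint, database, and user names, and that the IAM role attached to the Lambda function allows the rds-db:connect action. boto3 generates the short-lived token, which is passed as the password over SSL:

```python
import boto3
import psycopg2  # assumes the psycopg2 driver is packaged with the Lambda function

rds = boto3.client("rds")

DB_HOST = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"  # hypothetical RDS endpoint
DB_USER = "lambda_app_user"                             # IAM-enabled database user
DB_NAME = "dating_app"                                  # hypothetical database name

def handler(event, context):
    # Short-lived authentication token (valid for 15 minutes), signed with the
    # credentials of the IAM role attached to this Lambda function.
    token = rds.generate_db_auth_token(
        DBHostname=DB_HOST, Port=5432, DBUsername=DB_USER
    )

    # The token replaces the static password; the connection must use SSL.
    conn = psycopg2.connect(
        host=DB_HOST,
        port=5432,
        user=DB_USER,
        password=token,
        dbname=DB_NAME,
        sslmode="require",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT 1")
        return cur.fetchone()[0]
```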

Incorrect options:

AWS Systems Manager Parameter Store (aka SSM Parameter Store) provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, EC2 instance IDs, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data.

Embed credential rotation logic in the AWS Lambda function, retrieving the credentials from SSM - Retrieving and rotating credentials from SSM is overkill for the expected solution, so this is not a correct option.
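
For comparison, reading such a secret from SSM Parameter Store is only a couple of lines (the parameter name below is hypothetical), but the underlying password would still be long-lived and would still have to be rotated by the application:

```python
import boto3

ssm = boto3.client("ssm")

# Fetch an encrypted (SecureString) parameter; the name is hypothetical.
response = ssm.get_parameter(Name="/prod/rds/app-password", WithDecryption=True)
db_password = response["Parameter"]["Value"]
```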

Restrict the RDS database security group to the Lambda's security group

Deploy AWS Lambda in a VPC

This question is tricky because all of these answers do indeed increase security. But the question is about the authentication mechanism, and as such, deploying a Lambda function in a VPC or tightening security groups does not change the authentication layer. IAM authentication from Lambda to RDS PostgreSQL is supported, and it is achieved by attaching an IAM role to the AWS Lambda function.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

3.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

You are working as an AWS architect for a weather tracking facility. You are asked to set up a Disaster Recovery (DR) mechanism with minimum costs. In case of failure, the facility can only bear data loss of a few minutes without jeopardizing the forecasting models.

As a Solutions Architect, which DR method will you suggest?

Multi-Site

Warm Standby

Pilot Light

Backup and Restore

Answer explanation

Correct option:

Pilot Light - The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud. The idea of the pilot light is an analogy that comes from the gas heater: a small flame that's always on can quickly ignite the entire furnace to heat up a house. This scenario is similar to a backup-and-restore scenario.

For example, with AWS you can maintain a pilot light by configuring and running the most critical core elements of your system in AWS. For the given use case, a small part of the backup infrastructure is always running, simultaneously syncing mutable data (such as databases or documents) so that there is no loss of critical data. When the time comes for recovery, you can rapidly provision a full-scale production environment around this critical core. For Pilot Light, the RPO is in minutes, so this is the correct solution.
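
One way to keep the mutable data continuously synced in a pilot light setup, assuming the forecasting data lives in Amazon RDS, is a cross-Region read replica. The identifiers and Regions in this sketch are hypothetical:

```python
import boto3

# Keep a small, always-on copy of the production database in the DR Region.
rds_dr = boto3.client("rds", region_name="us-west-2")  # hypothetical DR Region

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="forecast-db-pilot-light",  # hypothetical replica name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:forecast-db",
    SourceRegion="us-east-1",        # Region of the primary instance
    DBInstanceClass="db.t3.medium",  # a small instance keeps the pilot light cheap
)
```

At recovery time, the replica is promoted and the rest of the production environment is provisioned around it.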

Four Disaster Recovery scenarios:

via - https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html

Incorrect options:

Backup and Restore - In most traditional environments, data is backed up to tape and sent off-site regularly. If you use this method, it can take a long time to restore your system in the event of a disruption or disaster. Amazon S3 is an ideal destination for backup data that might be needed quickly to perform a restore. Transferring data to and from Amazon S3 is typically done through the network and is therefore accessible from any location. There are many commercial and open-source backup solutions that integrate with Amazon S3. You can use AWS Import/Export to transfer very large data sets by shipping storage devices directly to AWS. For longer-term data storage where retrieval times of several hours are adequate, there is Amazon Glacier, which has the same durability model as Amazon S3. Amazon Glacier is a low-cost alternative starting from $0.01/GB per month. Amazon Glacier and Amazon S3 can be used in conjunction to produce a tiered backup solution. Even though the Backup and Restore method is cheaper, it has an RPO in hours, so this option is not the right fit.

Warm Standby - The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. A warm standby solution extends the pilot light elements and preparation. It further decreases the recovery time because some services are always running. By identifying your business-critical systems, you can fully duplicate these systems on AWS and have them always on. This option is more costly compared to Pilot Light.

Multi-Site - A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration. The data replication method that you employ will be determined by the recovery objectives that you choose: the Recovery Time Objective (the maximum allowable downtime before degraded operations are restored) and the Recovery Point Objective (the maximum allowable time window during which you will accept the loss of transactions during the DR process). This option is more costly compared to Pilot Light.

References:

https://aws.amazon.com/blogs/publicsector/rapidly-recover-mission-critical-systems-in-a-disaster/

https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html

https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/disaster-recovery-dr-objectives.html

4.

MULTIPLE CHOICE QUESTION

1 min • 11 pts

An enterprise has decided to move its secondary workloads, such as backups and archives, to the AWS Cloud. The CTO wishes to move the data stored on physical tapes to the cloud without changing the current tape backup workflows. The company holds petabytes of data on tapes and needs a cost-optimized solution to move this data to the cloud.

What is an optimal solution that meets these requirements while keeping the costs to a minimum?

Use AWS VPN connection between the on-premises datacenter and your Amazon VPC. Once this is established, you can use Amazon Elastic File System (Amazon EFS) to get a scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources

Use Tape Gateway, which can be used to move on-premises tape data onto AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store data cost-effectively for years

Use AWS Direct Connect, a cloud service solution that makes it easy to establish a dedicated network connection from on-premises to AWS to transfer data. Once this is done, Amazon S3 can be used to store data at lesser costs

Use AWS DataSync, which makes it simple and fast to move large amounts of data online between on-premises storage and AWS Cloud. Data moved to Cloud can then be stored cost-effectively in Amazon S3 archiving storage classes

Answer explanation

Correct option:

Use Tape Gateway, which can be used to move on-premises tape data onto AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store data cost-effectively for years - Tape Gateway enables you to replace using physical tapes on-premises with virtual tapes in AWS without changing existing backup workflows. Tape Gateway supports all leading backup applications and caches virtual tapes on-premises for low-latency data access. Tape Gateway encrypts data between the gateway and AWS for secure data transfer and compresses data while transitioning virtual tapes between Amazon S3 and Amazon S3 Glacier, or Amazon S3 Glacier Deep Archive, to minimize storage costs.

Tape Gateway compresses and stores archived virtual tapes in the lowest-cost Amazon S3 storage classes, Amazon S3 Glacier and Amazon S3 Glacier Deep Archive. This makes it feasible for you to retain long-term data in the AWS Cloud at a very low cost. With Tape Gateway, you only pay for what you consume, with no minimum commitments and no upfront fees.

Tape Gateway stores your virtual tapes in S3 buckets managed by the AWS Storage Gateway service, so you don’t have to manage your own Amazon S3 storage. Tape Gateway integrates with all leading backup applications allowing you to start using cloud storage for on-premises backup and archive without any changes to your backup and archive workflows.
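
Once a Tape Gateway has been deployed and activated, virtual tapes can be provisioned for the existing backup application with a call like the one below; the gateway ARN, tape size, and barcode prefix are hypothetical:

```python
import uuid

import boto3

sgw = boto3.client("storagegateway")

# Provision virtual tapes on an existing (hypothetical) Tape Gateway; the backup
# application then writes to them exactly as it would to physical tapes.
sgw.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B",
    TapeSizeInBytes=100 * 1024 ** 3,  # 100 GiB virtual tapes
    ClientToken=str(uuid.uuid4()),    # idempotency token
    NumTapesToCreate=5,
    TapeBarcodePrefix="ARCH",
)
```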

Tape Gateway Overview:

via - https://aws.amazon.com/storagegateway/vtl/

Incorrect options:

Use AWS DataSync, which makes it simple and fast to move large amounts of data online between on-premises storage and AWS Cloud. Data moved to Cloud can then be stored cost-effectively in Amazon S3 archiving storage classes - AWS DataSync supports only NFS and SMB file shares, not tape-based backup workflows, and hence is not the right choice for the given use case.

Use AWS Direct Connect, a cloud service solution that makes it easy to establish a dedicated network connection from on-premises to AWS to transfer data. Once this is done, Amazon S3 can be used to store data at lesser costs - AWS Direct Connect is used when customers need to retain their on-premises infrastructure, for example for compliance reasons, and have moved the rest of the architecture to the AWS Cloud. These businesses generally have an ongoing requirement for low-latency access to the AWS Cloud and hence are willing to spend on installing the physical lines needed for this connection. The given use case needs a cost-optimized solution, and the company does not have an ongoing requirement for dedicated high-bandwidth connectivity.

Use AWS VPN connection between the on-premises datacenter and your Amazon VPC. Once this is established, you can use Amazon Elastic File System (Amazon EFS) to get a scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources - A VPN connection is used when businesses have an ongoing requirement for connectivity from the on-premises data center to the AWS Cloud. Amazon EFS is a managed NFS file system service from AWS and cannot be used for archiving on-premises tape data onto the AWS Cloud.

References:

https://aws.amazon.com/storagegateway/vtl/

https://aws.amazon.com/storagegateway/faqs/

5.

MULTIPLE CHOICE QUESTION

1 min • 10 pts

A digital media company needs to manage uploads of around 1TB each from an application being used by a partner company.

As a Solutions Architect, how will you handle the upload of these files to Amazon S3?

Use Direct Connect to provide extra bandwidth

Use multi-part upload feature of Amazon S3

Use AWS Snowball

Use Amazon S3 Versioning

Answer explanation

Correct option:

Use multi-part upload feature of Amazon S3 - Multi-part upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.

AWS recommends that you use multi-part uploading in the following ways: If you're uploading large objects over a stable high-bandwidth network, use multi-part uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance. If you're uploading over a spotty network, use multi-part uploading to increase resiliency to network errors by avoiding upload restarts. When using multi-part uploading, you need to retry uploading only parts that are interrupted during the upload. You don't need to restart uploading your object from the beginning.

In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. If the file is greater than 5GB in size, you must use multi-part upload to upload that file to S3.
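
The high-level AWS SDK transfer APIs switch to multi-part upload automatically above a configurable threshold. A minimal boto3 sketch with hypothetical file and bucket names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split objects larger than ~100 MB into ~100 MB parts and upload up to
# 10 parts in parallel; failed parts are retried individually.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="/data/partner-upload/master-video.mov",  # hypothetical ~1 TB file
    Bucket="media-partner-uploads",                     # hypothetical bucket
    Key="uploads/master-video.mov",
    Config=config,
)
```

Because each part is uploaded and retried independently, a ~1 TB upload does not have to restart from the beginning after a network error.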

Incorrect options:

Use Amazon S3 Versioning - S3 Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects. If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version.

Use Direct Connect to provide extra bandwidth - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. This dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs. This is a physical connection that takes at least a month to set up.

Use AWS Snowball - Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large-scale data transfer and pre-processing use cases. It is intended for one-off bulk migrations, not for recurring ~1 TB uploads from a partner application, so it is not the right fit here.

(The original Snowball devices have been transitioned out of service, and Snowball Edge Storage Optimized devices are now the primary devices used for data transfer. You may still see the Snowball device on the exam; just remember that the original Snowball device had 80 TB of storage space.)

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

6.

MULTIPLE SELECT QUESTION

30 sec • 20 pts

BARBECUE AT JOSEPH'S NEXT MONDAY

I'm coming

Yes

Apéro (drinks)

Merguez (sausages)