01-10



Quiz • Computers • Medium • Created by Ai Pham • 10 questions


1.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity. Which solution meets these requirements?

Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.

Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.

Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross Region Replication to copy objects to the destination S3 bucket.

Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

Answer explanation

Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets.
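As a rough sketch of what this looks like in practice (not part of the original question), enabling Transfer Acceleration and doing an accelerated multipart upload with boto3 might look like this; the bucket name and file path are placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: enable Transfer Acceleration on the destination bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-destination-bucket",  # placeholder bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes requests through the accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# upload_file switches to multipart upload automatically for large files.
s3_accel.upload_file("site-data.tar", "example-destination-bucket", "site-data.tar")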

2.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture. What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.

Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.

Use Amazon Athena directly with Amazon S3 to run the queries as needed.

Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Answer explanation

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. https://docs.aws.amazon.com/athena/latest/ug/what-is.html
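As an illustrative sketch (the database, table, and result location are hypothetical, and a table is assumed to already be defined over the JSON logs), an ad-hoc Athena query from boto3 could look like this:

import boto3

athena = boto3.client("athena")

# Run an ad-hoc SQL query directly against data stored in S3.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM app_logs GROUP BY status",  # hypothetical table
    QueryExecutionContext={"Database": "logs_db"},                        # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(resp["QueryExecutionId"])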

3.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. Which solution meets these requirements with the LEAST amount of operational overhead?

Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.

Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.

Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.

Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

Answer explanation

-aws:PrincipalOrgID validates whether the principal accessing the resource belongs to an account in your organization. https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/

-The condition key aws:PrincipalOrgID prevents principals that don't belong to your organization from accessing the resource.
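A minimal sketch of such a bucket policy, applied with boto3 (the bucket name and organization ID are placeholders):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-reports-bucket/*",
        # Only principals from accounts in this organization match the condition.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-reports-bucket",
    Policy=json.dumps(policy),
)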

4.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private network connectivity to Amazon S3?

Create a gateway VPC endpoint to the S3 bucket.

Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.

Create an instance profile on Amazon EC2 to allow S3 access.

Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Answer explanation

A gateway VPC endpoint allows instances in a VPC to connect to supported AWS services over the AWS private network instead of the public internet.
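A minimal sketch of creating the gateway endpoint with boto3 (Region, VPC ID, and route table ID are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds a route to S3 through the AWS private network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)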

5.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time. What should a solutions architect propose to ensure users see all of their documents at once?

Copy the data so both EBS volumes contain all the documents

Configure the Application Load Balancer to direct a user to the server with the documents

Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS

Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server

Answer explanation

-"Concurrent" or "at the same time" are keywords that point to EFS.

-Amazon Elastic File System is a cloud storage service provided by Amazon Web Services, designed to provide scalable, elastic, concurrent access to shared files.

-EFS can be mounted on multiple EC2 instances across AZs. Performance-wise it has higher latency than EBS but offers high aggregate throughput.
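As an illustration of the proposed fix (all IDs are placeholders), one shared EFS file system with a mount target in each Availability Zone could be set up with boto3 like this; each EC2 instance then mounts the same file system:

import boto3

efs = boto3.client("efs")

# One shared, elastic file system for both instances.
fs = efs.create_file_system(CreationToken="docs-fs", PerformanceMode="generalPurpose")

# One mount target per Availability Zone (placeholder subnet IDs).
# (In practice, wait for the file system to become 'available' first.)
for subnet in ["subnet-0aaa1111bbb2222cc", "subnet-0ddd3333eee4444ff"]:
    efs.create_mount_target(FileSystemId=fs["FileSystemId"], SubnetId=subnet)

# On each EC2 instance (shell, with amazon-efs-utils installed):
#   sudo mount -t efs fs-12345678:/ /mnt/docs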

6.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth. Which solution will meet these requirements?

Create an S3 bucket. Create an IAM role that has permission to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.

Create an AWS Snowball Edge job. Receive a Snowball Edge device on-premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.

Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Answer explanation

-Option C is the first to be ruled out. File Gateway (Storage Gateway) is not a migration tool; it simply bridges data between on-premises and AWS (e.g., S3) with local caching on-premises. The question asks about migrating data, and File Gateway is not a migration solution.

-B. On a Snowball Edge device you can copy files locally at up to 100 Gbps, so 70 TB takes around 5,600 seconds, i.e., less than 2 hours. The downside is that it takes roughly 4-6 working days to receive the device and another 2-3 working days to return it and for AWS to import the data into S3. Total time: 6-9 working days. Network bandwidth used: none.

-C. The S3 File Gateway uses the internet, so at a maximum speed of about 1 Gbps the transfer takes at least 6.5 days and consumes 70 TB of internet bandwidth.

-D. Direct Connect can reach 10 Gbps, for a total of about 15.5 hours, but you still move 70 TB over the link. Interestingly, the question does not specify what type of bandwidth is meant: Direct Connect is a dedicated connection between on-premises and AWS, so technically it does not consume your public internet bandwidth. The requirements are a bit vague, but B is the most appropriate answer, although D might also be defensible if "network bandwidth" refers strictly to public connectivity.
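To make the arithmetic above explicit, a quick back-of-the-envelope calculation (decimal units assumed, i.e. 70 TB = 70 × 10^12 bytes):

# Transfer time for 70 TB at different link speeds.
DATA_BITS = 70e12 * 8  # 70 TB in bits

for label, gbps in [("Snowball Edge local copy", 100), ("Internet", 1), ("Direct Connect", 10)]:
    hours = DATA_BITS / (gbps * 1e9) / 3600
    print(f"{label}: {hours:.1f} hours")

# Snowball Edge local copy: 1.6 hours
# Internet: 155.6 hours (~6.5 days)
# Direct Connect: 15.6 hours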

7.

MULTIPLE CHOICE QUESTION

30 sec • 1 pt

A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability. Which solution meets these requirements?

Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.

Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.

Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.

Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

Answer explanation

-SNS fan-out pattern: https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html (A is wrong: Kinesis Data Analytics does not 'persist' messages by itself.)

-By default, SQS FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching. With batching, and the multiple queues that option D mentions, the required throughput is covered, so D is the right choice.

-Decoupling = SQS in most cases, and the question does mention SNS fanning out into multiple SQS queues; with multiple queues you can handle more than 100,000 requests per second.

-A discussion thread with very good arguments on both sides: https://www.examtopics.com/discussions/amazon/view/43777-exam-aws-certified-solutions-architect-associate-saa-c02/
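A minimal sketch of the fan-out wiring with boto3 (topic and queue names are placeholders); each consumer application then polls its own queue:

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="ingest-topic")["TopicArn"]

# One queue per consumer application; SNS fans each message out to all of them.
for name in ["billing-queue", "analytics-queue", "audit-queue"]:
    url = sqs.create_queue(QueueName=name)["QueueUrl"]
    arn = sqs.get_queue_attributes(
        QueueUrl=url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=arn)
    # (Each queue also needs an access policy allowing the topic to send to it.)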
