AWS-SAA-C02

Professional Development

20 Qs

Assessment

Quiz

Information Technology (IT)

Hard

Created by Vu Hung

Used 1+ times

20 questions

1.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection. The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity. Which solution meets these requirements?
Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

Answer explanation

S3 Transfer Acceleration is the best solution for fast, global uploads to a single S3 bucket with minimal operational complexity.
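
A minimal boto3 sketch of this setup, using hypothetical bucket, file, and key names: Transfer Acceleration is enabled once on the destination bucket, and each site then uploads through the accelerate endpoint, with multipart uploads handled automatically for large objects.

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

bucket = "example-global-telemetry"  # hypothetical destination bucket

# One-time setup: enable Transfer Acceleration on the destination bucket.
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Each site uploads through the accelerate endpoint; upload_file switches to
# multipart upload automatically above the configured threshold.
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file(
    "daily_readings.csv",                 # local file (hypothetical)
    bucket,
    "site-eu-01/daily_readings.csv",      # destination key (hypothetical)
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024),
)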

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture. What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
Use Amazon Athena directly with Amazon S3 to run the queries as needed.
Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Answer explanation

Amazon Athena allows direct, on-demand SQL queries on S3 data with minimal changes and operational overhead.
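
A minimal boto3 sketch of such an ad-hoc query, assuming a hypothetical Athena database, table, and query-results bucket already exist:

import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and results location.
response = athena.start_query_execution(
    QueryString="""
        SELECT status, COUNT(*) AS hits
        FROM app_logs
        WHERE status >= 500
        GROUP BY status
    """,
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution for completion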

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. Which solution meets these requirements with the LEAST amount of operational overhead?
Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

Answer explanation

Using aws:PrincipalOrgID in the S3 bucket policy restricts access to users within the AWS Organization with the least operational overhead.
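
A minimal sketch of such a bucket policy applied with boto3, assuming a hypothetical bucket name and organization ID:

import json
import boto3

bucket = "example-project-reports"   # hypothetical bucket name
org_id = "o-exampleorg123"           # hypothetical AWS Organizations ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrgMembersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Only principals from accounts in this organization match.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": org_id}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))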

4.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private network connectivity to Amazon S3?
Create a gateway VPC endpoint to the S3 bucket.
Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
Create an instance profile on Amazon EC2 to allow S3 access.
Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Answer explanation

A gateway VPC endpoint allows private network connectivity from EC2 to S3 without internet access.
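
A minimal boto3 sketch of creating such an endpoint, assuming hypothetical VPC and route table IDs in us-east-1:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint for S3 adds a route to the selected route tables, so
# traffic from the EC2 instance reaches S3 without leaving the AWS network.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],    # hypothetical route table ID
)
print(response["VpcEndpoint"]["VpcEndpointId"])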

5.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time. What should a solutions architect propose to ensure users see all of their documents at once?
Copy the data so both EBS volumes contain all the documents.
Configure the Application Load Balancer to direct a user to the server with the documents.
Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Answer explanation

Amazon EFS provides shared storage accessible by EC2 instances across AZs, ensuring all users see all documents.
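
A minimal boto3 sketch of the shared-storage side, assuming hypothetical subnet and security group IDs (one subnet per Availability Zone); both EC2 instances would then mount the same file system.

import boto3

efs = boto3.client("efs")

# One file system shared by the instances in both Availability Zones.
fs = efs.create_file_system(
    CreationToken="shared-documents",        # hypothetical idempotency token
    PerformanceMode="generalPurpose",
)

# One mount target per AZ so each instance can mount the file system locally.
subnet_ids = ["subnet-0aaaa1111bbbb2222c", "subnet-0dddd3333eeee4444f"]  # hypothetical
for subnet_id in subnet_ids:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],   # hypothetical security group
    )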

6.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth. Which solution will meet these requirements?
Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Answer explanation

AWS Snowball Edge enables fast, bandwidth-efficient migration of large data volumes to S3.
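
A hedged boto3 sketch of ordering the import job; the address ID, role ARN, and bucket name are hypothetical placeholders, and the data itself is later copied to the device on premises with the Snowball Edge client before the device is shipped back.

import boto3

snowball = boto3.client("snowball")

# All identifiers below are hypothetical placeholders.
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    Resources={
        "S3Resources": [{"BucketArn": "arn:aws:s3:::example-video-archive"}]
    },
    AddressId="ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b",
    RoleARN="arn:aws:iam::123456789012:role/example-snowball-import-role",
    ShippingOption="SECOND_DAY",
    Description="One-time import of 70 TB of NFS video files",
)
print(job["JobId"])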

7.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability. Which solution meets these requirements?
Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.
Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

Answer explanation

SNS with multiple SQS subscriptions decouples producers and consumers, providing scalability for sudden message spikes.
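
A minimal boto3 sketch of the fan-out pattern with hypothetical topic and queue names (the SQS access policy that allows SNS to deliver to each queue is omitted for brevity):

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical topic and queue names.
topic_arn = sns.create_topic(Name="ingest-messages")["TopicArn"]

for name in ["billing-consumer", "analytics-consumer"]:
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Fan out: every message published to the topic is delivered to each queue.
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )

# Producer side: a single publish reaches every consumer queue.
sns.publish(TopicArn=topic_arn, Message='{"sensor": "temperature", "value": 21.3}')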
