Data Ingestion and Transformation III

Discover how to optimize data ingestion and transformation in AWS environments using Lambda, Glue ETL, and other services. Learn practical solutions to handle large datasets, integrate multiple data sources, and deploy secure, scalable data APIs. This lesson helps you choose the right AWS tools for building robust data pipelines and infrastructure automation.

Question 11

A company processes large datasets using AWS Lambda functions. Each Lambda function needs to access a 5 GB reference dataset during execution. The dataset is updated weekly. The data engineer must make this dataset available to all concurrent Lambda invocations without downloading it each time.

Which solution meets these requirements?

A. Package the reference dataset as a Lambda layer attached to the function.

B. Download the reference dataset from Amazon S3 into the /tmp directory at the start of each Lambda invocation.

C. Mount an Amazon EFS file system to the Lambda function to provide shared, persistent storage accessible by all concurrent invocations.

D. Store the reference dataset in the Lambda function’s deployment package.
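The shared-storage approach in option C can be sketched in a Lambda handler. This is a minimal illustration, assuming the function's `FileSystemConfigs` mounts an EFS access point at `/mnt/ref` (the path and the `REF_DATA_PATH` override are hypothetical names, not a Lambda API): every concurrent invocation reads the same files from the mount, so the weekly refresh only has to update the EFS copy once, and nothing is baked into the deployment package or copied to `/tmp`.

```python
import os


def _mount_path():
    # Assumed mount point for the EFS access point; REF_DATA_PATH is a
    # hypothetical env var used here so the path is configurable.
    return os.environ.get("REF_DATA_PATH", "/mnt/ref")


def load_reference(name):
    """Read a reference file from the shared EFS mount.

    All concurrent invocations see the same file system, so the
    dataset is fetched once into EFS, not per invocation.
    """
    with open(os.path.join(_mount_path(), name), "rb") as f:
        return f.read()


def handler(event, context):
    # Read the reference data on demand from the mount rather than
    # downloading it from S3 at the start of each invocation.
    data = load_reference(event["dataset_key"])
    return {"bytes_read": len(data)}
```

By contrast, a Lambda layer or deployment package (options A and D) is capped well below 5 GB, and re-downloading to `/tmp` (option B) repeats the transfer on every cold invocation.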

Question 12

A data ...