GCP - 2
Test title: GCP - 2
Description: GCP community questions
Created: 2023/10/17
Category: Other
Number of questions: 116
You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find that your notification system is too slow for time-critical problems. What should you do?
A. Replace your entire monitoring platform with Stackdriver.
B. Install the Stackdriver agents on your Compute Engine instances.
C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform.
D. Migrate some traffic back to your old platform and perform A/B testing on the two platforms concurrently.

You are planning to migrate a MySQL database to the managed Cloud SQL database for Google Cloud. You have Compute Engine virtual machine instances that will connect with this Cloud SQL instance. You do not want to whitelist IPs for the Compute Engine instances to be able to access Cloud SQL. What should you do?
A. Enable private IP for the Cloud SQL instance.
B. Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project.
C. Create a role in Cloud SQL that allows access to the database from external instances, and assign the Compute Engine instances to that role.
D. Create a Cloud SQL instance in one project. Create Compute Engine instances in a different project. Create a VPN between these two projects to allow internal access to Cloud SQL.

You have an application deployed in production. When a new version is deployed, some issues don't arise until the application receives traffic from users in production. You want to reduce both the impact and the number of users affected. Which deployment strategy should you use?
A. Blue/green deployment.
B. Canary deployment.
C. Rolling deployment.
D. Recreate deployment.

You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service will exist within the same GKE cluster. You want clients to be able to get the IP address of the service. What should you do?
A. Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster IP address.
B. Define a GKE Service. Clients should use the service name in the URL to connect to the service.
C. Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container.
D. Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.

Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency. How should you perform reads from Cloud Spanner for this application?
A. Perform Read-Only transactions.
B. Perform stale reads using single-read methods.
C. Perform strong reads using single-read methods.
D. Perform stale reads using read-write transactions.
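For reference, stale reads let Spanner serve data from a timestamp in the recent past, avoiding the coordination cost of strong reads. A minimal sketch with the Python client, assuming hypothetical instance, database, and table names:

```python
import datetime
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-db")  # hypothetical IDs

# A single-use snapshot with exact staleness trades up to 15 seconds of
# freshness for lower read latency.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snap:
    for row in snap.execute_sql("SELECT contact_id, name FROM Contacts"):
        print(row)
```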
You want to use the Stackdriver Logging Agent to send an application's log file to Stackdriver from a Compute Engine virtual machine instance. After installing the Stackdriver Logging Agent, what should you do first?
A. Enable the Error Reporting API on the project.
B. Grant the instance full access to all Cloud APIs.
C. Configure the application log file as a custom source.
D. Create a Stackdriver Logs Export Sink with a filter that matches the application's log entries.

Your company has a BigQuery data mart that provides analytics information to hundreds of employees. One user wants to run jobs without interrupting important workloads. This user isn't concerned about the time it takes to run these jobs. You want to fulfill this request while minimizing cost to the company and the effort required on your part. What should you do?
A. Ask the user to run the jobs as batch jobs.
B. Create a separate project for the user to run jobs.
C. Add the user as a job.user role in the existing project.
D. Allow the user to run jobs when important workloads are not running.
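For reference, batch-priority queries queue until idle slots are available and do not count against the concurrent interactive query limit, which is why they avoid disrupting important workloads. A minimal sketch with the BigQuery Python client; the project, table, and query are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# BATCH priority: BigQuery starts the job when idle resources are available.
job_config = bigquery.QueryJobConfig(priority=bigquery.QueryPriority.BATCH)
job = client.query(
    "SELECT account, SUM(amount) FROM `my-project.mart.transactions` "
    "GROUP BY account",  # hypothetical query
    job_config=job_config,
)
rows = job.result()  # blocks until the queued job completes
```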
You want to notify on-call engineers about a service degradation in production while minimizing development time. What should you do?
A. Use Cloud Function to monitor resources and raise alerts.
B. Use Cloud Pub/Sub to monitor resources and raise alerts.
C. Use Stackdriver Error Reporting to capture errors and raise alerts.
D. Use Stackdriver Monitoring to monitor resources and raise alerts.

You are writing a single-page web application with a user interface that communicates with a third-party API for content using XMLHttpRequest. The data displayed on the UI by the API results is less critical than other data displayed on the same web page, so it is acceptable for some requests to not have the API data displayed in the UI. However, calls made to the API should not delay rendering of other parts of the user interface. You want your application to perform well when the API response is an error or a timeout. What should you do?
A. Set the asynchronous option for your requests to the API to false and omit the widget displaying the API results when a timeout or error is encountered.
B. Set the asynchronous option for your request to the API to true and omit the widget displaying the API results when a timeout or error is encountered.
C. Catch timeout or error exceptions from the API call and keep trying with exponential backoff until the API response is successful.
D. Catch timeout or error exceptions from the API call and display the error response in the UI widget.

You are creating a web application that runs in a Compute Engine instance and writes a file to any user's Google Drive. You need to configure the application to authenticate to the Google Drive API. What should you do?
A. Use an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access token for each user.
B. Use an OAuth Client ID with delegated domain-wide authority.
C. Use the App Engine service account and https://www.googleapis.com/auth/drive.file scope to generate a signed JSON Web Token (JWT).
D. Use the App Engine service account with delegated domain-wide authority.

You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want to calculate the sum of all transaction amounts for each unique account number efficiently. Which data structure should you use?
A. A linked list.
B. A hash table.
C. A two-dimensional array.
D. A comma-delimited string.
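For reference, a hash table keyed by account number gives amortized O(1) lookups and a single O(n) pass over the log. A minimal sketch in Python, whose dict is a hash table; the sample rows are hypothetical:

```python
from collections import defaultdict

def sum_by_account(lines):
    """Sum transaction amounts per account in one pass using a hash table."""
    totals = defaultdict(float)
    for line in lines:
        timestamp, account, amount = line.split(",")
        totals[account] += float(amount)
    return dict(totals)

log = [
    "2023-10-17T10:00:00,acct-001,25.00",
    "2023-10-17T10:01:00,acct-002,10.50",
    "2023-10-17T10:02:00,acct-001,4.75",
]
print(sum_by_account(log))  # {'acct-001': 29.75, 'acct-002': 10.5}
```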
You are load testing your server application. During the first 30 seconds, you observe that a previously inactive Cloud Storage bucket is now servicing 2000 write requests per second and 7500 read requests per second. Your application is now receiving intermittent 5xx and 429 HTTP responses from the Cloud Storage JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API. What should you do?
A. Distribute the uploads across a large number of individual storage buckets.
B. Use the XML API instead of the JSON API for interfacing with Cloud Storage.
C. Pass the HTTP response codes back to clients that are invoking the uploads from your application.
D. Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached more gradually.
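For reference, alongside a gradual ramp-up, Cloud Storage guidance is to retry 429 and 5xx responses with truncated exponential backoff. A minimal sketch, assuming a hypothetical request_fn that performs one JSON API call and returns a response object with a status_code attribute:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5):
    """Retry a request on 429/5xx with truncated exponential backoff + jitter."""
    for attempt in range(max_retries):
        response = request_fn()
        if response.status_code not in (429, 500, 502, 503, 504):
            return response
        # Wait 2^attempt seconds plus jitter, capped at 32 seconds.
        time.sleep(min(2 ** attempt + random.random(), 32))
    return request_fn()  # final attempt; surface any error to the caller
```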
You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service. What should you do?
A. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address to connect to the service.
B. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.
C. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
D. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[API_NAME]/[API_VERSION]/.

You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish this efficiently while minimizing the impact of this change to the business. Which approach should you take?
A. Deploy the application to Compute Engine and turn on autoscaling.
B. Replace the application's features with appropriate microservices in phases.
C. Refactor the monolithic application with appropriate microservices in a single effort and deploy it.
D. Build a new application with the appropriate microservices separate from the monolith and replace it when it is complete.

Your existing application keeps user state information in a single MySQL database. This state information is very user-specific and depends heavily on how long a user has been using an application. The MySQL database is making it challenging to maintain and enhance the schema for various users. Which storage option should you choose?
A. Cloud SQL.
B. Cloud Storage.
C. Cloud Spanner.
D. Cloud Datastore/Firestore.

Your company's development teams want to use Cloud Build in their projects to build and push Docker images to Container Registry. The operations team requires all Docker images to be published to a centralized, securely managed Docker registry that the operations team manages. What should you do?
A. Use Container Registry to create a registry in each development team's project. Configure the Cloud Build build to push the Docker image to the project's registry. Grant the operations team access to each development team's registry.
B. Create a separate project for the operations team that has Container Registry configured. Assign appropriate permissions to the Cloud Build service account in each developer team's project to allow access to the operations team's registry.
C. Create a separate project for the operations team that has Container Registry configured. Create a Service Account for each development team and assign the appropriate permissions to allow it access to the operations team's registry. Store the service account key file in the source code repository and use it to authenticate against the operations team's registry.
D. Create a separate project for the operations team that has the open source Docker Registry deployed on a Compute Engine virtual machine instance. Create a username and password for each development team. Store the username and password in the source code repository and use it to authenticate against the operations team's Docker registry.

Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden". What should you do to correct the problem?
A. Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
B. Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account.
C. Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket.
D. Enable the Cloud Storage API in project B.

Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each cluster. The Deployment has created multiple replicas of your Pod in each cluster. You want to view the logs sent to stdout for all of the replicas in your Deployment in all clusters. Which command should you use?
A. kubectl logs [PARAM].
B. gcloud logging read [PARAM].
C. kubectl exec -it [PARAM] journalctl.
D. gcloud compute ssh [PARAM] --command="sudo journalctl".

You are using Cloud Build to create a new Docker image on each source code commit to a Cloud Source Repositories repository. Your application is built on every commit to the master branch. You want to release specific commits made to the master branch in an automated method. What should you do?
A. Manually trigger the build for new releases.
B. Create a build trigger on a Git tag pattern. Use a Git tag convention for new releases.
C. Create a build trigger on a Git branch name pattern. Use a Git branch naming convention for new releases.
D. Commit your source code to a second Cloud Source Repositories repository with a second Cloud Build trigger. Use this repository for new releases only.

You are designing a schema for a table that will be moved from MySQL to Cloud Bigtable. The MySQL table is as follows (table exhibit not included). How should you design a row key for Cloud Bigtable for this table?
A. Set Account_id as a key.
B. Set Account_id_Event_timestamp as a key.
C. Set Event_timestamp_Account_id as a key.
D. Set Event_timestamp as a key.
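For reference, prefixing the row key with Account_id (as in option B) avoids the hotspotting a monotonically increasing timestamp prefix would cause, while keeping an account's events contiguous. A sketch of one common construction, assuming the Account_id and Event_timestamp columns implied by the options; reversing the timestamp is an optional refinement that sorts an account's newest events first:

```python
import sys

def make_row_key(account_id: str, event_timestamp: int) -> bytes:
    # Leading with the account ID spreads writes across tablets; appending a
    # reversed timestamp keeps per-account scans ordered newest-first.
    reversed_ts = sys.maxsize - event_timestamp
    return f"{account_id}#{reversed_ts}".encode()

print(make_row_key("acct-001", 1697536800))
```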
You want to view the memory usage of your application deployed on Compute Engine. What should you do?
A. Install the Stackdriver Client Library.
B. Install the Stackdriver Monitoring Agent.
C. Use the Stackdriver Metrics Explorer.
D. Use the Google Cloud Platform Console.

Your application is built as a custom machine image. You have multiple unique deployments of the machine image. Each deployment is a separate managed instance group with its own template. Each deployment requires a unique set of configuration values. You want to provide these unique values to each deployment but use the same custom machine image in all deployments. You want to use out-of-the-box features of Compute Engine. What should you do?
A. Place the unique configuration values in the persistent disk.
B. Place the unique configuration values in a Cloud Bigtable table.
C. Place the unique configuration values in the instance template startup script.
D. Place the unique configuration values in the instance template instance metadata.

Your application performs well when tested locally, but it runs significantly slower after you deploy it to a Compute Engine instance. You need to diagnose the problem. What should you do?
A. File a ticket with Cloud Support indicating that the application performs faster locally.
B. Use Cloud Debugger snapshots to look at a point-in-time execution of the application.
C. Use Cloud Profiler to determine which functions within the application take the longest amount of time.
D. Add logging commands to the application and use Cloud Logging to check where the latency problem occurs.

Your App Engine standard configuration is as follows: service: production instance_class: B1. You want to limit the application to 5 instances. Which code snippet should you include in your configuration?
A. manual_scaling: instances: 5 min_pending_latency: 30ms.
B. manual_scaling: max_instances: 5 idle_timeout: 10m.
C. basic_scaling: instances: 5 min_pending_latency: 30ms.
D. basic_scaling: max_instances: 5 idle_timeout: 10m.

Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and passes the contents of a SQL file to the BigQuery CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error from the BigQuery CLI when the queries are executed. You want to resolve the issue. What should you do?
A. Grant the service account BigQuery Data Viewer and BigQuery Job User roles.
B. Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.
C. Create a view in BigQuery from the SQL query and SELECT * from the view in the CLI.
D. Create a new dataset in BigQuery, and copy the source table to the new dataset. Query the new dataset and table from the CLI.

Your application is running on Compute Engine and is showing sustained failures for a small number of requests. You have narrowed the cause down to a single Compute Engine instance, but the instance is unresponsive to SSH. What should you do next?
A. Reboot the machine.
B. Enable and check the serial port output.
C. Delete the machine and create a new one.
D. Take a snapshot of the disk and attach it to a new machine.

Your company has created an application that uploads a report to a Cloud Storage bucket. When the report is uploaded to the bucket, you want to publish a message to a Cloud Pub/Sub topic. You want to implement a solution that will take a small amount of effort to implement. What should you do?
A. Configure the Cloud Storage bucket to trigger Cloud Pub/Sub notifications when objects are modified.
B. Create an App Engine application to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic.
C. Create a Cloud Function that is triggered by the Cloud Storage bucket. In the Cloud Function, publish a message to the Cloud Pub/Sub topic.
D. Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic.

Your teammate has asked you to review the code below (code exhibit not included), which is adding a credit to an account balance in Cloud Datastore. Which improvement should you suggest your teammate make?
A. Get the entity with an ancestor query.
B. Get and put the entity in a transaction.
C. Use a strongly consistent transactional database.
D. Don't return the account entity from the function.
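For reference, wrapping the get and put in a transaction (option B) makes the read-modify-write atomic, so concurrent credits cannot overwrite each other. A minimal sketch with the Python Datastore client; the entity shape and helper name are hypothetical:

```python
from google.cloud import datastore

client = datastore.Client()

def add_credit(account_key, amount):
    # The transaction ensures the balance read and the subsequent write
    # commit atomically; a conflicting write causes a retryable abort.
    with client.transaction():
        account = client.get(account_key)
        account["balance"] += amount
        client.put(account)
```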
You are developing a corporate tool on Compute Engine for the finance department, which needs to authenticate users and verify that they are in the finance department. All company employees use G Suite. What should you do?
A. Enable Cloud Identity-Aware Proxy on the HTTP(S) load balancer and restrict access to a Google Group containing users in the finance department. Verify the provided JSON Web Token within the application.
B. Enable Cloud Identity-Aware Proxy on the HTTP(S) load balancer and restrict access to a Google Group containing users in the finance department. Issue client-side certificates to everybody in the finance team and verify the certificates in the application.
C. Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Verify the provided JSON Web Token within the application.
D. Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Issue client-side certificates to everybody in the finance team and verify the certificates in the application.

You have an application deployed in production. When a new version is deployed, you want to ensure that all production traffic is routed to the new version of your application. You also want to keep the previous version deployed so that you can revert to it if there is an issue with the new version. Which deployment strategy should you use?
A. Blue/green deployment.
B. Canary deployment.
C. Rolling deployment.
D. Recreate deployment.

You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google-recommended best practices for availability. What should you do?
A. Package each component in a separate container. Implement readiness and liveness probes.
B. Package the application in a single container. Use a process management tool to manage each component.
C. Package each component in a separate container. Use a script to orchestrate the launch of the components.
D. Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.

You are developing an application that will be launched on Compute Engine instances into multiple distinct projects, each corresponding to the environments in your software development process (development, QA, staging, and production). The instances in each project have the same application code but a different configuration. During deployment, each instance should receive the application's configuration based on the environment it serves. You want to minimize the number of steps to configure this flow. What should you do?
A. When creating your instances, configure a startup script using the gcloud command to determine the project name that indicates the correct environment.
B. In each project, configure a metadata key "environment" whose value is the environment it serves. Use your deployment tool to query the instance metadata and configure the application based on the "environment" value.
C. Deploy your chosen deployment tool on an instance in each project. Use a deployment job to retrieve the appropriate configuration file from your version control system, and apply the configuration when deploying the application on each instance.
D. During each instance launch, configure an instance custom-metadata key named "environment" whose value is the environment the instance serves. Use your deployment tool to query the instance metadata, and configure the application based on the "environment" value.
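For reference, custom metadata values such as an "environment" key can be read from inside the instance via the metadata server, which requires the Metadata-Flavor header. A minimal sketch:

```python
import requests

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/attributes/environment")

def get_environment() -> str:
    # Only reachable from inside the instance; the header guards against
    # accidental requests from generic HTTP clients.
    resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
    resp.raise_for_status()
    return resp.text  # e.g. "staging"
```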
You are developing an ecommerce application that stores customer, order, and inventory data as relational tables inside Cloud Spanner. During a recent load test, you discover that Spanner performance is not scaling linearly as expected. Which of the following is the cause?
A. The use of 64-bit numeric types for 32-bit numbers.
B. The use of the STRING data type for arbitrary-precision values.
C. The use of Version 1 UUIDs as primary keys that increase monotonically.
D. The use of LIKE instead of STARTS_WITH keyword for parameterized SQL queries.

You are designing an application that will subscribe to and receive messages from a single Pub/Sub topic and insert corresponding rows into a database. Your application runs on Linux and leverages preemptible virtual machines to reduce costs. You need to create a shutdown script that will initiate a graceful shutdown. What should you do?
A. Write a shutdown script that uses inter-process signals to notify the application process to disconnect from the database.
B. Write a shutdown script that broadcasts a message to all signed-in users that the Compute Engine instance is going down and instructs them to save current work and sign out.
C. Write a shutdown script that writes a file in a location that is being polled by the application once every five minutes. After the file is read, the application disconnects from the database.
D. Write a shutdown script that publishes a message to the Pub/Sub topic announcing that a shutdown is in progress. After the application reads the message, it disconnects from the database.
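For reference, a shutdown script runs during the roughly 30-second preemption notice, and an inter-process signal (option A) is the lowest-latency way to tell the application to clean up. A minimal sketch of the receiving side, assuming Linux; the actual cleanup handles are hypothetical:

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Triggered by the shutdown script, e.g. `kill -TERM <pid>`, when the
    # preemptible VM receives its preemption notice.
    print("SIGTERM received: stopping subscriber, closing DB connection")
    # subscriber.close(); db_connection.close()  # hypothetical handles
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
signal.pause()  # block here until a signal arrives (Linux)
```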
You work for a web development team at a small startup. Your team is developing a Node.js application using Google Cloud services, including Cloud Storage and Cloud Build. The team uses a Git repository for version control. Your manager calls you over the weekend and instructs you to make an emergency update to one of the company's websites, and you're the only developer available. You need to access Google Cloud to make the update, but you don't have your work laptop. You are not allowed to store source code locally on a non-corporate computer. How should you set up your developer environment?
A. Use a text editor and the Git command line to send your source code updates as pull requests from a public computer.
B. Use a text editor and the Git command line to send your source code updates as pull requests from a virtual machine running on a public computer.
C. Use Cloud Shell and the built-in code editor for development. Send your source code updates as pull requests.
D. Use a Cloud Storage bucket to store the source code that you need to edit. Mount the bucket to a public computer as a drive, and use a code editor to update the code. Turn on versioning for the bucket, and point it to the team's Git repository.

You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in the new architecture:
• Multiple Compute Engine machines, each running an instance of the authentication service
• Multiple Compute Engine machines, each running an instance of the audit service
• Pub/Sub to send the events from the authentication services
How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently?
A. Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages.
B. Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages.
C. Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services.
D. Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service.
E. Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit service.

You are developing a marquee stateless web application that will run on Google Cloud. The rate of the incoming user traffic is expected to be unpredictable, with no traffic on some days and large spikes on other days. You need the application to automatically scale up and down, and you need to minimize the cost associated with running the application. What should you do?
A. Build the application in Python with Firestore as the database. Deploy the application to Cloud Run.
B. Build the application in C# with Firestore as the database. Deploy the application to App Engine flexible environment.
C. Build the application in Python with Cloud SQL as the database. Deploy the application to App Engine standard environment.
D. Build the application in Python with Firestore as the database. Deploy the application to a Compute Engine managed instance group with autoscaling.

You are developing an internal application that will allow employees to organize community events within your company. You deployed your application on a single Compute Engine instance. Your company uses Google Workspace (formerly G Suite), and you need to ensure that the company employees can authenticate to the application from anywhere. What should you do?
A. Add a public IP address to your instance, and restrict access to the instance using firewall rules. Allow your company's proxy as the only source IP address.
B. Add an HTTP(S) load balancer in front of the instance, and set up Identity-Aware Proxy (IAP). Configure the IAP settings to allow your company domain to access the website.
C. Set up a VPN tunnel between your company network and your instance's VPC location on Google Cloud. Configure the required firewall rules and routing information to both the on-premises and Google Cloud networks.
D. Add a public IP address to your instance, and allow traffic from the internet. Generate a random hash, and create a subdomain that includes this hash and points to your instance. Distribute this DNS address to your company's employees.
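For reference, both IAP questions above pair the proxy with in-application verification of the signed JWT that IAP attaches to each request. A minimal sketch using the google-auth library and the documented IAP public-key URL; the audience value is deployment-specific:

```python
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

def verify_iap_jwt(iap_jwt: str, expected_audience: str) -> str:
    # IAP signs the x-goog-iap-jwt-assertion request header with ES256;
    # its public keys are published at the URL below.
    claims = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=expected_audience,  # e.g. the backend service identifier
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return claims["email"]  # authenticated user identity
```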
You want to create "fully baked" or "golden" Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?
A. Embed the appropriate database connection string in the image. Create a different image for each environment.
B. When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.
C. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, read the "DATABASE" environment variable, and use the value to connect to the appropriate database.
D. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, query the metadata server for the "DATABASE" value, and use the value to connect to the appropriate database.

You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials?
A. Configure the appropriate service accounts, and use Workload Identity to run the pods.
B. Store the application credentials as Kubernetes Secrets, and expose them as environment variables.
C. Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.
D. Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.

You recently developed an application. You need to call the Cloud Storage API from a Compute Engine instance that doesn't have a public IP address. What should you do?
A. Use Carrier Peering.
B. Use VPC Network Peering.
C. Use Shared VPC networks.
D. Use Private Google Access.

You are a developer working with the CI/CD team to troubleshoot a new feature that your team introduced. The CI/CD team used HashiCorp Packer to create a new Compute Engine image from your development branch. The image was successfully built, but is not booting up. You need to investigate the issue with the CI/CD team. What should you do?
A. Create a new feature branch, and ask the build team to rebuild the image.
B. Shut down the deployed virtual machine, export the disk, and then mount the disk locally to access the boot logs.
C. Install Packer locally, build the Compute Engine image locally, and then run it in your personal Google Cloud project.
D. Check Compute Engine OS logs using the serial port, and check the Cloud Logging logs to confirm access to the serial port.

You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to invoke the backend services?
A. Traffic Director.
B. Service Directory.
C. Anthos Service Mesh.
D. Internal HTTP(S) Load Balancing.

Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store users' session and shopping cart information while following Google-recommended best practices?
A. Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL.
B. Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID.
C. Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances.
D. Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in Firestore.
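For reference, storing sessions in Memorystore for Redis with a TTL matching the 30-minute inactivity window expires them automatically, while the cart persists in Firestore. A minimal sketch with the redis-py client; the host IP and key layout are hypothetical:

```python
import json
import redis  # redis-py, pointed at a hypothetical Memorystore private IP

r = redis.Redis(host="10.0.0.3", port=6379)

def save_session(session_id: str, data: dict) -> None:
    # SETEX expires the key after 1800 seconds of no refresh, matching
    # the 30-minute auto-logout; the cart itself lives in Firestore.
    r.setex(f"session:{session_id}", 1800, json.dumps(data))
```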
You are developing a new application that has the following design requirements:
• Creation and changes to the application infrastructure are versioned and auditable.
• The application and deployment infrastructure uses Google-managed services as much as possible.
• The application runs on a serverless compute platform.
How should you design the application's architecture?
A. 1. Store the application and infrastructure source code in a Git repository. 2. Use Cloud Build to deploy the application infrastructure with Terraform. 3. Deploy the application to a Cloud Function as a pipeline step.
B. 1. Deploy Jenkins from the Google Cloud Marketplace, and define a continuous integration pipeline in Jenkins. 2. Configure a pipeline step to pull the application source code from a Git repository. 3. Deploy the application source code to App Engine as a pipeline step.
C. 1. Create a continuous integration pipeline on Cloud Build, and configure the pipeline to deploy the application infrastructure using Deployment Manager templates. 2. Configure a pipeline step to create a container with the latest application source code. 3. Deploy the container to a Compute Engine instance as a pipeline step.
D. 1. Deploy the application infrastructure using gcloud commands. 2. Use Cloud Build to define a continuous integration pipeline for changes to the application source code. 3. Configure a pipeline step to pull the application source code from a Git repository, and create a containerized application. 4. Deploy the new container on Cloud Run as a pipeline step.

You are creating and running containers across different projects in Google Cloud. The application you are developing needs to access Google Cloud services from within Google Kubernetes Engine (GKE). What should you do?
A. Assign a Google service account to the GKE nodes.
B. Use a Google service account to run the Pod with Workload Identity.
C. Store the Google service account credentials as a Kubernetes Secret.
D. Use a Google service account with GKE role-based access control (RBAC).

You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?
A. Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.
B. Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script.
C. Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script.
D. Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.

Your team is developing a new application using a PostgreSQL database and Cloud Run. You are responsible for ensuring that all traffic is kept private on Google Cloud. You want to use managed services and follow Google-recommended best practices. What should you do?
A. 1. Enable Cloud SQL and Cloud Run in the same project. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to Cloud SQL.
B. 1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud Run in the same project. 2. Configure a private IP address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to the VM hosting PostgreSQL.
C. 1. Use Cloud SQL and Cloud Run in different projects. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to connect to Cloud SQL.
D. 1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different projects. 2. Configure a private IP address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL.

Your team develops services that run on Google Kubernetes Engine. Your team's code is stored in Cloud Source Repositories. You need to quickly identify bugs in the code before it is deployed to production. You want to invest in automation to improve developer feedback and make the process as efficient as possible. What should you do?
A. Use Spinnaker to automate building container images from code based on Git tags.
B. Use Cloud Build to automate building container images from code based on Git tags.
C. Use Spinnaker to automate deploying container images to the production environment.
D. Use Cloud Build to automate building container images from code based on forked versions.

You are developing an ecommerce web application that uses App Engine standard environment and Memorystore for Redis. When a user logs into the app, the application caches the user's information (e.g., session, name, address, preferences), which is stored for quick retrieval during checkout. While testing your application in a browser, you get a 502 Bad Gateway error. You have determined that the application is not connecting to Memorystore. What is the reason for this error?
A. Your Memorystore for Redis instance was deployed without a public IP address.
B. You configured your Serverless VPC Access connector in a different region than your App Engine instance.
C. The firewall rule allowing a connection between App Engine and Memorystore was removed during an infrastructure update by the DevOps team.
D. You configured your application to use a Serverless VPC Access connector on a different subnet in a different availability zone than your App Engine instance.

You have an application that uses an HTTP Cloud Function to process user activity from both desktop browser and mobile application clients. This function will serve as the endpoint for all metric submissions using HTTP POST. Due to legacy restrictions, the function must be mapped to a domain that is separate from the domain requested by users on web or mobile sessions. The domain for the Cloud Function is https://fn.example.com. Desktop and mobile clients use the domain https://www.example.com. You need to add a header to the function's HTTP response so that only those browser and mobile sessions can submit metrics to the Cloud Function. Which response header should you add?
A. Access-Control-Allow-Origin: *.
B. Access-Control-Allow-Origin: https://*.example.com.
C. Access-Control-Allow-Origin: https://fn.example.com.
D. Access-Control-Allow-Origin: https://www.example.com.
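For reference, echoing the exact site origin (option D) rather than a wildcard restricts browser submissions to pages served from https://www.example.com. A minimal sketch of a Cloud Function using the Functions Framework for Python; the handler name is hypothetical:

```python
import functions_framework

@functions_framework.http
def submit_metrics(request):
    # Allow only the site origin, not "*", so arbitrary pages cannot
    # submit metrics from a browser.
    headers = {"Access-Control-Allow-Origin": "https://www.example.com"}
    if request.method == "OPTIONS":  # CORS preflight
        headers["Access-Control-Allow-Methods"] = "POST"
        headers["Access-Control-Allow-Headers"] = "Content-Type"
        return ("", 204, headers)
    return ("ok", 200, headers)
```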
You have an HTTP Cloud Function that is called via POST. Each submission's request body has a flat, unnested JSON structure containing numeric and text data. After the Cloud Function completes, the collected data should be immediately available for ongoing and complex analytics by many users in parallel. How should you persist the submissions?
A. Directly persist each POST request's JSON data into Datastore.
B. Transform the POST request's JSON data, and stream it into BigQuery.
C. Transform the POST request's JSON data, and store it in a regional Cloud SQL cluster.
D. Persist each POST request's JSON data as an individual file within Cloud Storage, with the file name containing the request identifier.

Your security team is auditing all deployed applications running in Google Kubernetes Engine. After completing the audit, your team discovers that some of the applications send traffic within the cluster in clear text. You need to ensure that all application traffic is encrypted as quickly as possible while minimizing changes to your applications and maintaining support from Google. What should you do?
A. Use Network Policies to block traffic between applications.
B. Install Istio, enable proxy injection on your application namespace, and then enable mTLS.
C. Define Trusted Network ranges within the application, and configure the applications to allow traffic only from those networks.
D. Use an automated process to request SSL Certificates for your applications from Let's Encrypt and add them to your applications.

You recently deployed your application in Google Kubernetes Engine, and now need to release a new version of your application. You need the ability to instantly roll back to the previous version in case there are issues with the new version. Which deployment model should you use?
A. Perform a rolling deployment, and test your new application after the deployment is complete.
B. Perform A/B testing, and test your application periodically after the new tests are implemented.
C. Perform a blue/green deployment, and test your new application after the deployment is complete.
D. Perform a canary deployment, and test your new application periodically after the new version is deployed.

You manage an ecommerce application that processes purchases from customers who can subsequently cancel or change those purchases. You discover that order volumes are highly variable and the backend order-processing system can only process one request at a time. You want to ensure seamless performance for customers regardless of usage volume. It is crucial that customers' order update requests are performed in the sequence in which they were generated. What should you do?
A. Send the purchase and change requests over WebSockets to the backend.
B. Send the purchase and change requests as REST requests to the backend.
C. Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering.
D. Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.
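For reference, besides managing ordering in a data store as in option C, Pub/Sub itself offers ordering keys, which preserve publish order per key (for example, per customer) when the subscription also has ordering enabled. A minimal publisher-side sketch of that related capability; the project and topic names are hypothetical:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(
        enable_message_ordering=True
    )
)
topic_path = publisher.topic_path("my-project", "orders")  # hypothetical

def publish_order_event(customer_id: str, event: bytes) -> None:
    # Messages sharing an ordering key are delivered in publish order.
    publisher.publish(topic_path, event, ordering_key=customer_id)
```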
You are using Cloud Build for your CI/CD pipeline to complete several tasks, including copying certain files to Compute Engine virtual machines. Your pipeline requires a flat file that is generated in one builder in the pipeline to be accessible by subsequent builders in the same pipeline. How should you store the file so that all the builders in the pipeline can access it?
A. Store and retrieve the file contents using Compute Engine instance metadata.
B. Output the file contents to a file in /workspace. Read from the same /workspace file in the subsequent build step.
C. Use gsutil to output the file contents to a Cloud Storage object. Read from the same object in the subsequent build step.
D. Add a build argument that runs an HTTP POST via curl to a separate web server to persist the value in one builder. Use an HTTP GET via curl from the subsequent build step to read the value.

Your operations team has asked you to create a script that lists the Cloud Bigtable, Memorystore, and Cloud SQL databases running within a project. The script should allow users to submit a filter expression to limit the results presented. How should you retrieve the data?
A. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Combine the results, and then apply the filter to display the results.
B. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter the results individually, and then combine them to display the results.
C. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use a filter within the application, and then display the results.
D. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use the --filter flag with each command, and then display the results.

You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed via the same HTTP(S) load balancer's external IP address, but have different domain names. What should you do?
A. Define a new Ingress resource with a host rule matching the new domain.
B. Modify the existing Ingress resource with a host rule matching the new domain.
C. Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP.
D. Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value.

The development teams in your company want to manage resources from their local environments. You have been asked to enable developer access to each team's Google Cloud projects. You want to maximize efficiency while following Google-recommended best practices. What should you do?
A. Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project ID.
B. Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project Number.
C. Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project ID.
D. Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project Number.

Your company's product team has a new requirement based on customer demand to autoscale your stateless and distributed service running in a Google Kubernetes Engine (GKE) cluster. You want to find a solution that minimizes changes because this feature will go live in two weeks. What should you do?
A. Deploy a Vertical Pod Autoscaler, and scale based on the CPU load.
B. Deploy a Vertical Pod Autoscaler, and scale based on a custom metric.
C. Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load.
D. Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric.

Your development team has been tasked with maintaining a .NET legacy application. The application incurs occasional changes and was recently updated. Your goal is to ensure that the application provides consistent results while moving through the CI/CD pipeline from environment to environment. You want to minimize the cost of deployment while making sure that external factors and dependencies between hosting environments are not problematic. Containers are not yet approved in your organization. What should you do?
A. Rewrite the application using .NET Core, and deploy to Cloud Run. Use revisions to separate the environments.
B. Use Cloud Build to deploy the application as a new Compute Engine image for each build. Use this image in each environment.
C. Deploy the application using MS Web Deploy, and make sure to always use the latest, patched MS Windows Server base image in Compute Engine.
D. Use Cloud Build to package the application, and deploy to a Google Kubernetes Engine cluster. Use namespaces to separate the environments.

Users are complaining that your Cloud Run-hosted website responds too slowly during traffic spikes. You want to provide a better user experience during traffic peaks. What should you do?
A. Read application configuration and static data from the database on application startup.
B. Package application configuration and static data into the application image during build time.
C. Perform as much work as possible in the background after the response has been returned to the user.
D. Ensure that timeout exceptions and errors cause the Cloud Run instance to exit quickly so a replacement instance can be started.

You are a developer working on an internal application for payroll processing. You are building a component of the application that allows an employee to submit a timesheet, which then initiates several steps:
• An email is sent to the employee and manager, notifying them that the timesheet was submitted.
• A timesheet is sent to payroll processing for the vendor's API.
• A timesheet is sent to the data warehouse for headcount planning.
These steps are not dependent on each other and can be completed in any order. New steps are being considered and will be implemented by different development teams. Each development team will implement the error handling specific to their step. What should you do?
A. Deploy a Cloud Function for each step that calls the corresponding downstream system to complete the required action.
B. Create a Pub/Sub topic for each step. Create a subscription for each downstream development team to subscribe to their step's topic.
C. Create a Pub/Sub topic for timesheet submissions. Create a subscription for each downstream development team to subscribe to the topic.
D. Create a timesheet microservice deployed to Google Kubernetes Engine. The microservice calls each downstream step and waits for a successful response before calling the next step.

Your company just experienced a Google Kubernetes Engine (GKE) API outage due to a zone failure. You want to deploy a highly available GKE architecture that minimizes service interruption to users in the event of a future zone failure. What should you do?
A. Deploy Zonal clusters.
B. Deploy Regional clusters.
C. Deploy Multi-Zone clusters.
D. Deploy GKE on-premises clusters.
You are running a containerized application on Google Kubernetes Engine. Your container images are stored in Container Registry. Your team uses CI/CD practices. You need to prevent the deployment of containers with known critical vulnerabilities. What should you do?
A. • Use Web Security Scanner to automatically crawl your application. • Review your application logs for scan results, and provide an attestation that the container is free of known critical vulnerabilities. • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
B. • Use Web Security Scanner to automatically crawl your application. • Review the scan results in the scan details page in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities. • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
C. • Enable the Container Scanning API to perform vulnerability scanning. • Review vulnerability reporting in Container Registry in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities. • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
D. • Enable the Container Scanning API to perform vulnerability scanning. • Programmatically review vulnerability reporting through the Container Scanning API, and provide an attestation that the container is free of known critical vulnerabilities. • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.

You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?
A. Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.
B. Use Bigtable, and assign the roles/bigtable.viewer role to the service account.
C. Use Firestore in Native mode and assign the roles/datastore.user role to the service account.
D. Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.

Your company's corporate policy states that there must be a copyright comment at the very beginning of all source files. You want to write a custom step in Cloud Build that is triggered by each source commit. You need the trigger to validate that the source contains a copyright and add one for subsequent steps if it is not there. What should you do?
A. Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.
B. Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files do not need to be committed back to the source repository.
C. Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are written back to the Cloud Storage bucket.
D. Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.
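For reference, a custom Cloud Build step like the one described operates on /workspace, the volume Cloud Build persists between steps, so changes are visible to subsequent builders without a commit. A minimal sketch of such a check for Python sources; the notice text is hypothetical:

```python
import pathlib

COPYRIGHT = "# Copyright 2023 Example Corp. All rights reserved.\n"  # hypothetical

# Cloud Build checks out the source into /workspace and carries it across
# steps, so files rewritten here are seen by later builders as-is.
for path in pathlib.Path("/workspace").rglob("*.py"):
    text = path.read_text()
    if not text.startswith(COPYRIGHT):
        path.write_text(COPYRIGHT + text)
```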
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data, and that they analyze and respond to any issues that occur.

Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
• Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
• State is stored in a single instance MySQL database in GCP.
• Release cycles include development freezes to allow for QA testing.
• The application has no logging.
• Applications are manually deployed by infrastructure engineers during periods of slow traffic on weekday evenings.
• There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing.
Their requirements are: • Expand availability of the application to new regions. • Support 10x as many concurrent users. • Ensure a consistent experience for users when they travel to different regions. • Obtain user activity metrics to better understand how to monetize their product. • Ensure compliance with regulations in the new regions (for example, GDPR). • Reduce infrastructure management time and cost. • Adopt the Google-recommended practices for cloud computing. ○ Develop standardized workflows and processes around application lifecycle management. ○ Define service level indicators (SLIs) and service level objectives (SLOs). Technical Requirements - • Provide secure communications between the on-premises data center and cloud-hosted applications and infrastructure. • The application must provide usage metrics and monitoring. • APIs require authentication and authorization. • Implement faster and more accurate validation of new features. • Logging and performance metrics must provide actionable information to be able to provide debugging information and alerts. • Must scale to meet user demand. For this question, refer to the HipLocal case study. How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?. A. Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node. B. Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling. C. Use Memorystore to store session information and CloudSQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling. D. Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information. Case study - This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study - To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. 
For this question, refer to the HipLocal case study. How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements?. A. Include unit tests in their code, and prevent deployments to QA until all tests have a passing status. B.
Include performance tests in their code, and prevent deployments to QA until all tests have a passing status. C. Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy. D. Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found. You are in the final stage of migrating an on-premises data center to Google Cloud. You are quickly approaching your deadline, and discover that a web API is running on a server slated for decommissioning. You need to recommend a solution to modernize this API while migrating to Google Cloud. The modernized web API must meet the following requirements: • Autoscales during high traffic periods at the end of each month • Written in Python 3.x • Developers must be able to rapidly deploy new versions in response to frequent code changes You want to minimize cost, effort, and operational overhead of this migration. What should you do?. A. Modernize and deploy the code on App Engine flexible environment. B. Modernize and deploy the code on App Engine standard environment. C. Deploy the modernized application to an n1-standard-1 Compute Engine instance. D. Ask the development team to rewrite the application to run as a Docker container on Google Kubernetes Engine. You are developing an application that consists of several microservices running in a Google Kubernetes Engine cluster. One microservice needs to connect to a third-party database running on-premises. You need to store credentials to the database and ensure that these credentials can be rotated while following security best practices. What should you do?. A. Store the credentials in a sidecar container proxy, and use it to connect to the third-party database. B. Configure a service mesh to allow or restrict traffic from the Pods in your microservice to the database. C. Store the credentials in an encrypted volume mount, and associate a Persistent Volume Claim with the client Pod. D. Store the credentials as a Kubernetes Secret, and use the Cloud Key Management Service plugin to handle encryption and decryption. You need to migrate a standalone Java application running in an on-premises Linux virtual machine (VM) to Google Cloud in a cost-effective manner. You decide not to take the lift-and-shift approach, and instead you plan to modernize the application by converting it to a container. How should you accomplish this task?. A. Use Migrate for Anthos to migrate the VM to your Google Kubernetes Engine (GKE) cluster as a container. B. Export the VM as a raw disk and import it as an image. Create a Compute Engine instance from the imported image. C. Use Migrate for Compute Engine to migrate the VM to a Compute Engine instance, and use Cloud Build to convert it to a container. D. Use Jib to build a Docker image from your source code, and upload it to Artifact Registry. Deploy the application in a GKE cluster, and test the application. Your organization has recently begun an initiative to replatform their legacy applications onto Google Kubernetes Engine. You need to decompose a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code. What should you do?. A. Create a new Cloud Storage bucket, and mount it via FUSE in the container. B.
Create a new persistent disk, and mount the volume as a shared PersistentVolume. C. Create a new Filestore instance, and mount the volume as an NFS PersistentVolume. D. Create a new ConfigMap and volumeMount to store the contents of the configuration file. Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after successfully validating the code. What should you do?. A. Check the maximum number of Cloud Function instances. B. Verify that your Cloud Build trigger has the correct build parameters. C. Retry the tests using the truncated exponential backoff polling strategy. D. Verify that the Cloud Build service account is assigned the Cloud Functions Developer role. You manage a microservices application on Google Kubernetes Engine (GKE) using Istio. You secure the communication channels between your microservices by implementing an Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You discover that HTTP requests between two Pods to specific URLs fail, while other requests to other URLs succeed. What is the cause of the connection issue?. A. A Kubernetes NetworkPolicy resource is blocking HTTP traffic between the Pods. B. The Pod initiating the HTTP requests is attempting to connect to the target Pod via an incorrect TCP port. C. The Authorization Policy of your cluster is blocking HTTP requests for specific paths within your application. D. The cluster has mTLS configured in permissive mode, but the Pod's sidecar proxy is sending unencrypted traffic in plain text. Your company has deployed a new API to a Compute Engine instance. During testing, the API is not behaving as expected. You want to monitor the application over 12 hours to diagnose the problem within the application code without redeploying the application. Which tool should you use?. A. Cloud Trace. B. Cloud Monitoring. C. Cloud Debugger logpoints. D. Cloud Debugger snapshots. You are designing an application that consists of several microservices. Each microservice has its own RESTful API and will be deployed as a separate Kubernetes Service. You want to ensure that the consumers of these APIs aren't impacted when there is a change to your API, and also ensure that third-party systems aren't interrupted when new versions of the API are released. How should you configure the connection to the application following Google-recommended best practices?. A. Use an Ingress that uses the API's URL to route requests to the appropriate backend. B. Leverage a Service Discovery system, and connect to the backend specified by the request. C. Use multiple clusters, and use DNS entries to route requests to separate versioned backends. D. Combine multiple versions in the same service, and then specify the API version in the POST request. Your team is building an application for a financial institution. The application's frontend runs on Compute Engine, and the data resides in Cloud SQL and one Cloud Storage bucket. The application will collect data containing PII, which will be stored in the Cloud SQL database and the Cloud Storage bucket. You need to secure the PII data. What should you do?. A. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database 2. Using IAM, allow only the frontend service account to access the Cloud Storage bucket. 
B. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database 2. Enable private access to allow the frontend to access the Cloud Storage bucket privately. C. 1. Configure a private IP address for Cloud SQL 2. Use VPC-SC to create a service perimeter 3. Add the Cloud SQL database and the Cloud Storage bucket to the same service perimeter. D. 1. Configure a private IP address for Cloud SQL 2. Use VPC-SC to create a service perimeter 3. Add the Cloud SQL database and the Cloud Storage bucket to different service perimeters. You are developing an application that will handle requests from end users. You need to secure a Cloud Function called by the application to allow authorized end users to authenticate to the function via the application while restricting access to unauthorized users. You will integrate Google Sign-In as part of the solution and want to follow Google-recommended best practices. What should you do?. A. Deploy from a source code repository and grant users the roles/cloudfunctions.viewer role. B. Deploy from a source code repository and grant users the roles/cloudfunctions.invoker role. C. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.admin role. D. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.developer role. You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?. A. 1. Create a managed instance group. Replicate the static content across the virtual machines (VMs) 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the managed instance group. B. 1. Create an unmanaged instance group. Replicate the static content across the VMs. 2. Create an external HTTP(S) load balancer 3. Enable Cloud CDN, and send traffic to the unmanaged instance group. C. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket 2. Reserve an external IP address, and create an external HTTP(S) load balancer 3. Enable Cloud CDN, and send traffic to your backend bucket. D. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.
For this question, refer to the HipLocal case study. HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?. A. Migrate the database to Bigtable and use it to serve all global user traffic. B. Migrate the database to Cloud Spanner and use it to serve all global user traffic. C. Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic. D. Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application. You are writing from a Go application to a Cloud Spanner database. You want to optimize your application’s performance using Google-recommended best practices. What should you do?. A. Write to Cloud Spanner using Cloud Client Libraries. B. Write to Cloud Spanner using Google API Client Libraries. C. Write to Cloud Spanner using a custom gRPC client library. D. Write to Cloud Spanner using a third-party HTTP client library. You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next?. A. Assign the Google Cloud service account to your GKE Pod using Workload Identity. B. Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret. C. Export the Google Cloud service account, and embed it in the source code of the application. D. Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application. You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service?. A. Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services. B. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh. C. Install cert-manager on GKE to automatically renew the SSL certificates. D. Install Anthos Service Mesh, and enable mTLS in your Service Mesh. You are developing an application using different microservices that must remain internal to the cluster. You want the ability to configure each microservice with a specific number of replicas. You also want the ability to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You plan to implement this solution on Google Kubernetes Engine. What should you do?. A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster. B. Deploy each microservice as a Deployment.
Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster. C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster. D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address to address the Pod from other microservices within the cluster. You are building an application that uses a distributed microservices architecture. You want to measure the performance and system resource utilization in one of the microservices written in Java. What should you do?. A. Instrument the service with Cloud Profiler to measure CPU utilization and method-level execution times in the service. B. Instrument the service with Debugger to investigate service errors. C. Instrument the service with Cloud Trace to measure request latency. D. Instrument the service with OpenCensus to measure service latency, and write custom metrics to Cloud Monitoring. Your team is responsible for maintaining an application that aggregates news articles from many different sources. Your monitoring dashboard contains publicly accessible real-time reports and runs on a Compute Engine instance as a web application. External stakeholders and analysts need to access these reports via a secure channel without authentication. How should you configure this secure channel?. A. Add a public IP address to the instance. Use the service account key of the instance to encrypt the traffic. B. Use Cloud Scheduler to trigger Cloud Build every hour to create an export from the reports. Store the reports in a public Cloud Storage bucket. C. Add an HTTP(S) load balancer in front of the monitoring dashboard. Configure Identity-Aware Proxy to secure the communication channel. D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a Google-managed SSL certificate on the load balancer for traffic encryption. You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is running and display this information to users. You want to use the most performant approach. What should you do?. A. Use HTTP requests to query the available metadata server at the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header. B. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run “Variables & Secrets” tab, and add the desired environment variables in Key:Value format. C. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration information to Cloud Run's in-memory container filesystem. D. Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
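As a concrete illustration of the metadata-server approach from the previous question, the sketch below queries the standard computeMetadata/v1 endpoints from Python. The endpoint paths shown are the documented ones, but verify them for your runtime; the region parsing assumes the documented return format:

```python
# Illustrative sketch of option A: reading project ID and region from the
# metadata server inside a Cloud Run container.
import urllib.request

METADATA_ROOT = "http://metadata.google.internal/computeMetadata/v1"

def query_metadata(path):
    req = urllib.request.Request(
        METADATA_ROOT + "/" + path,
        # The header is mandatory; requests without it are rejected.
        headers={"Metadata-Flavor": "Google"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

project_id = query_metadata("project/project-id")
# instance/region returns a path like projects/PROJECT_NUMBER/regions/REGION
region = query_metadata("instance/region").rsplit("/", 1)[-1]
print(project_id, region)
```

The metadata server is reachable only from inside the workload, so no credentials leave the container and no extra API round trip outside Google's network is needed.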
You need to deploy resources from your laptop to Google Cloud using Terraform. Resources in your Google Cloud environment must be created using a service account. Your Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access Management (IAM) role and the necessary permissions to deploy the resources using Terraform. You want to set up your development environment to deploy the desired resources following Google-recommended best practices. What should you do?. A. 1. Download the service account’s key file in JSON format, and store it locally on your laptop. 2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your downloaded key file. B. 1. Run the following command from a command line: gcloud config set auth/impersonate_service_account service-account-name@project.iam.gserviceaccount.com. 2. Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token command. C. 1. Run the following command from a command line: gcloud auth application-default login. 2. In the browser window that opens, authenticate using your personal credentials. D. 1. Store the service account's key file in JSON format in HashiCorp Vault. 2. Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate to Vault using a short-lived access token. Your company uses Cloud Logging to manage large volumes of log data. You need to build a real-time log analysis architecture that pushes logs to a third-party application for processing. What should you do?. A. Create a Cloud Logging log export to Pub/Sub. B. Create a Cloud Logging log export to BigQuery. C. Create a Cloud Logging log export to Cloud Storage. D. Create a Cloud Function to read Cloud Logging log entries and send them to the third-party application. You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?. A. Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod is failing. B. Create the Deployment with a livenessProbe for the container that will fail if the container can't connect to the database. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing. C. Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing. D. Create the Deployment with an initContainer that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing. You are deploying a microservices application to Google Kubernetes Engine (GKE). The application will receive daily updates. You expect to deploy a large number of distinct containers that will run on the Linux operating system (OS). You want to be alerted to any known OS vulnerabilities in the new containers. You want to follow Google-recommended best practices. What should you do?. A. Use the gcloud CLI to call Container Analysis to scan new container images. Review the vulnerability results before each deployment. B. Enable Container Analysis, and upload new container images to Artifact Registry. Review the vulnerability results before each deployment. C. Enable Container Analysis, and upload new container images to Artifact Registry. Review the critical vulnerability results before each deployment. D. Use the Container Analysis REST API to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.
Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud Source Repositories repository. You are responsible for implementing the tests. Only a specific service account has the necessary permissions to deploy the code to Cloud Functions. You want to ensure that the code cannot be deployed without first passing the tests. How should you configure the unit testing process?. A. Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a deployment approval is sent to you. B. Configure Cloud Build to deploy the Cloud Function, using the specific service account as the build agent. Run the unit tests after successful deployment. C. Configure Cloud Build to run the unit tests. If the code passes the tests, the developer deploys the Cloud Function. D. Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build deploys the Cloud Function. Your team detected a spike of errors in an application running on Cloud Run in your production project. The application is configured to read messages from Pub/Sub topic A, process the messages, and write the messages to topic B. You want to conduct tests to identify the cause of the errors. You can use a set of mock messages for testing. What should you do?. A. Deploy the Pub/Sub and Cloud Run emulators on your local machine. Deploy the application locally, and change the logging level in the application to DEBUG or INFO. Write mock messages to topic A, and then analyze the logs. B. Use the gcloud CLI to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs. C. Deploy the Pub/Sub emulator on your local machine. Point the production application to your local Pub/Sub topics. Write mock messages to topic A, and then analyze the logs. D. Use the Google Cloud console to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs.
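To make the mock-message idea concrete, here is a hedged sketch that publishes test messages to topic A with the Pub/Sub Python client, as a programmatic alternative to the gcloud CLI mentioned in option B. The project and topic IDs are placeholders:

```python
# Illustrative sketch: writing mock messages to "topic-A" with the
# google-cloud-pubsub client. Project and topic IDs are placeholders.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "topic-A")

for i in range(10):
    payload = json.dumps({"mock": True, "sequence": i}).encode("utf-8")
    # publish() returns a future; result() blocks until the server acknowledges.
    message_id = publisher.publish(topic_path, payload, origin="mock-test").result()
    print("published mock message", i, "as", message_id)
```

Tagging the messages with an attribute such as origin="mock-test" makes it easy to filter them out of the logs afterwards.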
You are developing a Java Web Server that needs to interact with Google Cloud services via the Google Cloud API on the user's behalf. Users should be able to authenticate to the Google Cloud API using their Google Cloud identities. Which workflow should you implement in your web application?. A. 1. When a user arrives at your application, prompt them for their Google username and password. 2. Store an SHA password hash in your application's database along with the user's username. 3. The application authenticates to the Google Cloud API using HTTPS requests with the user's username and password hash in the Authorization request header. B. 1. When a user arrives at your application, prompt them for their Google username and password. 2. Forward the user's username and password in an HTTPS request to the Google Cloud authorization server, and request an access token. 3. The Google server validates the user's credentials and returns an access token to the application. 4. The application uses the access token to call the Google Cloud API. C. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in with SSO to their Google Account. 2. After the user signs in and provides consent, your application receives an authorization code from a Google server. 3. The Google server returns the authorization code to the user, which is stored in the browser's cookies. 4. The user authenticates to the Google Cloud API using the authorization code in the cookie. D. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in with SSO to their Google Account. 2. After the user signs in and provides consent, your application receives an authorization code from a Google server. 3. The application requests that the Google server exchange the authorization code for an access token. 4. The Google server responds with the access token that is used by the application to call the Google Cloud API. You work for an organization that manages an online ecommerce website. Your company plans to expand across the world; however, the e-store currently serves one specific region. You need to select a SQL database and configure a schema that will scale as your organization grows. You want to create a table that stores all customer transactions and ensure that the customer (CustomerId) and the transaction (TransactionId) are unique. What should you do?. A. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId. B. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId. C. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId. D. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId. You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. How should you test this feature?. A. Split traffic between versions using weights. B. Enable the new recommendation feature flag on a single instance. C. Mirror traffic to the new version of your application. D. Use HTTP header-based routing. You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?. A. Perform a rolling update with a PodDisruptionBudget of 80%. B. Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0. C. Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%. D. Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0. You are developing an application hosted on Google Cloud that uses a MySQL relational database schema. The application will have a large volume of reads and writes to the database and will require backups and ongoing capacity planning. Your team does not have time to fully manage the database but can take on small administrative tasks. How should you host the database?. A. Configure Cloud SQL to host the database, and import the schema into Cloud SQL. B. Deploy MySQL from the Google Cloud Marketplace, connect to the database using a client, and import the schema. C. Configure Bigtable to host the database, and import the data into Bigtable. D. Configure Cloud Spanner to host the database, and import the schema into Cloud Spanner. E.
Configure Firestore to host the database, and import the data into Firestore. You are a developer at a financial institution. You use Cloud Shell to interact with Google Cloud services. User data is currently stored on an ephemeral disk; however, a recently passed regulation mandates that you can no longer store sensitive information on an ephemeral disk. You need to implement a new storage solution for your user data. You want to minimize code changes. Where should you store your user data?. A. Store user data on a Cloud Shell home disk, and log in at least every 120 days to prevent its deletion. B. Store user data on a persistent disk in a Compute Engine instance. C. Store user data in a Cloud Storage bucket. D. Store user data in BigQuery tables. Your team is setting up a build pipeline for an application that will run in Google Kubernetes Engine (GKE). For security reasons, you only want images produced by the pipeline to be deployed to your GKE cluster. Which combination of Google Cloud services should you use?. A. Cloud Build, Cloud Storage, and Binary Authorization. B. Google Cloud Deploy, Cloud Storage, and Google Cloud Armor. C. Google Cloud Deploy, Artifact Registry, and Google Cloud Armor. D. Cloud Build, Artifact Registry, and Binary Authorization. You are supporting a business-critical application in production deployed on Cloud Run. The application is reporting HTTP 500 errors that are affecting the usability of the application. You want to be alerted when the number of errors exceeds 15% of the requests within a specific time window. What should you do?. A. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Scheduler to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold. B. Navigate to the Cloud Run page in the Google Cloud console, and select the service from the services list. Use the Metrics tab to visualize the number of errors for that revision, and refresh the page daily. C. Create an alerting policy in Cloud Monitoring that alerts you if the number of errors is above the defined threshold. D. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Composer to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold. You need to build a public API that authenticates, enforces quotas, and reports metrics for API callers. Which tool should you use to complete this architecture?. A. App Engine. B. Cloud Endpoints. C. Identity-Aware Proxy. D. GKE Ingress for HTTP(S) Load Balancing. You noticed that your application was forcefully shut down during a Deployment update in Google Kubernetes Engine. Your application didn’t close the database connection before it was terminated. You want to update your application to make sure that it completes a graceful shutdown. What should you do?. A. Update your code to process a received SIGTERM signal to gracefully disconnect from the database. B. Configure a PodDisruptionBudget to prevent the Pod from being forcefully shut down. C. Increase the terminationGracePeriodSeconds for your application. D. Configure a PreStop hook to shut down your application.
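For illustration, here is a minimal sketch of the SIGTERM-handling option: the handler closes the database connection before the process exits. The sqlite3 connection and the serving loop stand in for whatever database client and server framework the application really uses:

```python
# Illustrative sketch of option A: trapping SIGTERM so the database
# connection is closed before the container is killed.
import signal
import sqlite3
import sys
import time

db = sqlite3.connect("app.db")  # placeholder for the application's DB connection

def handle_sigterm(signum, frame):
    # Kubernetes sends SIGTERM first, then SIGKILL once
    # terminationGracePeriodSeconds has elapsed.
    print("SIGTERM received; closing database connection")
    db.close()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:  # stand-in for the real serving loop
    time.sleep(1)
```

The cleanup work must finish within the Pod's grace period, so keep the handler short: close connections and flush buffers rather than attempting long-running work.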
You are a lead developer working on a new retail system that runs on Cloud Run and Firestore in Datastore mode. A web UI requirement is for the system to display a list of available products when users access the system and for the user to be able to browse through all products. You have implemented this requirement in the minimum viable product (MVP) phase by returning a list of all available products stored in Firestore. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times. This error coincides with spikes in the number of Datastore entity reads. You need to prevent Cloud Run from crashing and decrease the number of Datastore entity reads. You want to use a solution that optimizes system performance. What should you do?. A. Modify the query that returns the product list using integer offsets. B. Modify the query that returns the product list using limits. C. Modify the Cloud Run configuration to increase the memory limits. D. Modify the query that returns the product list using cursors. You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices?. A. Configure your Cloud Run service with a Cloud SQL connection. B. Configure your Cloud Run service to use a Serverless VPC Access connector. C. Configure your application to use the Cloud SQL Java connector. D. Configure your application to connect to an instance of the Cloud SQL Auth proxy. You have two Google Cloud projects, named Project A and Project B. You need to create a Cloud Function in Project A that saves the output in a Cloud Storage bucket in Project B. You want to follow the principle of least privilege. What should you do?. A. 1. Create a Google service account in Project B. 2. Deploy the Cloud Function with the service account in Project A. 3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B. B. 1. Create a Google service account in Project A. 2. Deploy the Cloud Function with the service account in Project A. 3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B. C. 1. Determine the default App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) in Project A. 2. Deploy the Cloud Function with the default App Engine service account in Project A. 3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B. D. 1. Determine the default App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) in Project B. 2. Deploy the Cloud Function with the default App Engine service account in Project A. 3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
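To illustrate the cross-project pattern from the previous question, the sketch below shows a Cloud Function body that writes to a bucket in another project. It assumes the function's runtime service account has already been granted roles/storage.objectCreator on that bucket; the bucket and object names are placeholders:

```python
# Illustrative sketch: a Cloud Function in Project A writing to a bucket
# that lives in Project B. IAM on the bucket, not the code, is what makes
# the cross-project write possible.
from google.cloud import storage

def save_output(request):
    # The client authenticates as the function's runtime service account.
    client = storage.Client()
    bucket = client.bucket("project-b-output-bucket")  # bucket in Project B
    blob = bucket.blob("results/output.json")
    blob.upload_from_string('{"status": "ok"}', content_type="application/json")
    return "saved"
```

Note that the code never references Project B directly: bucket names are globally unique, so granting the service account a role on the bucket is all the cross-project wiring required.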
A governmental regulation was recently passed that affects your application. For compliance purposes, you are now required to send a duplicate of specific application logs from your application’s project to a project that is restricted to the security team. What should you do?. A. Create user-defined log buckets in the security team’s project. Configure a Cloud Logging sink to route your application’s logs to log buckets in the security team’s project. B. Create a job that copies the logs from the _Required log bucket into the security team’s log bucket in their project. C. Modify the _Default log bucket sink rules to reroute the logs into the security team’s log bucket. D. Create a job that copies the System Event logs from the _Required log bucket into the security team’s log bucket in their project. You have an application running on Google Kubernetes Engine (GKE). The application is currently using a logging library and is outputting to standard output. You need to export the logs to Cloud Logging, and you need the logs to include metadata about each request. You want to use the simplest method to accomplish this. What should you do?. A. Change your application’s logging library to the Cloud Logging library, and configure your application to export logs to Cloud Logging. B. Update your application to output logs in JSON format, and add the necessary metadata to the JSON. C. Update your application to output logs in CSV format, and add the necessary metadata to the CSV. D. Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all logs from /var/log. You are working on a new application that is deployed on Cloud Run and uses Cloud Functions. Each time new features are added, new Cloud Functions and Cloud Run services are deployed. You use ENV variables to keep track of the services and enable interservice communication, but the maintenance of the ENV variables has become difficult. You want to implement dynamic discovery in a scalable way. What should you do?. A. Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs to query for deployed Cloud Run services and Cloud Functions in the Google Cloud project. B. Create a Service Directory namespace. Use API calls to register the services during deployment, and query during runtime. C. Rename the Cloud Functions and Cloud Run service endpoints using a well-documented naming convention. D. Deploy HashiCorp Consul on a single Compute Engine instance. Register the services with Consul during deployment, and query during runtime. You are reviewing and updating your Cloud Build steps to adhere to best practices. Currently, your build steps include: 1. Pull the source code from a source repository. 2. Build a container image. 3. Upload the built image to Artifact Registry. You need to add a step to perform a vulnerability scan of the built container image, and you want the results of the scan to be available to your deployment pipeline running in Google Cloud. You want to minimize changes that could disrupt other teams’ processes. What should you do?. A. Enable Binary Authorization, and configure it to attest that no vulnerabilities exist in a container image. B. Upload the built container images to your Docker Hub instance, and scan them for vulnerabilities. C. Enable the Container Scanning API in Artifact Registry, and scan the built container images for vulnerabilities. D. Add Artifact Registry to your Aqua Security instance, and scan the built container images for vulnerabilities.
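As a rough illustration of how a pipeline could consume scan results after the Container Scanning API is enabled, the sketch below lists vulnerability occurrences for an image with the google-cloud-containeranalysis client. The project ID, image URL, and exact request shape are assumptions to verify against the client library's documentation:

```python
# Rough, unverified sketch: reading vulnerability scan results as Grafeas
# occurrences. Placeholders throughout; check field names before relying
# on this in a pipeline.
from google.cloud.devtools import containeranalysis_v1

PROJECT_ID = "my-project"  # placeholder
RESOURCE_URL = "https://us-docker.pkg.dev/my-project/my-repo/my-image@sha256:..."  # placeholder digest

client = containeranalysis_v1.ContainerAnalysisClient()
grafeas = client.get_grafeas_client()

occurrences = grafeas.list_occurrences(
    request={
        "parent": "projects/" + PROJECT_ID,
        "filter": 'kind="VULNERABILITY" AND resourceUrl="' + RESOURCE_URL + '"',
    }
)
for occ in occurrences:
    # Each occurrence describes one finding, including its severity.
    print(occ.vulnerability.severity, occ.note_name)
```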
Your team is creating a serverless web application on Cloud Run. The application needs to access images stored in a private Cloud Storage bucket. You want to give the application Identity and Access Management (IAM) permission to access the images in the bucket, while also securing the services using Google-recommended best practices. What should you do?. A. Enforce signed URLs for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account. B. Enforce public access prevention for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account. C. Enforce signed URLs for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account. D. Enforce public access prevention for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account. You are using Cloud Run to host a global ecommerce web application. Your company’s design team is creating a new color scheme for the web app. You have been tasked with determining whether the new color scheme will increase sales. You want to conduct testing on live production traffic. How should you design the study?. A. Use an external HTTP(S) load balancer to route a predetermined percentage of traffic to two different color schemes of your application. Analyze the results to determine whether there is a statistically significant difference in sales. B. Use an external HTTP(S) load balancer to route traffic to the original color scheme while the new deployment is created and tested. After testing is complete, reroute all traffic to the new color scheme. Analyze the results to determine whether there is a statistically significant difference in sales. C. Use an external HTTP(S) load balancer to mirror traffic to the new version of your application. Analyze the results to determine whether there is a statistically significant difference in sales. D. Enable a feature flag that displays the new color scheme to half of all users. Monitor sales to see whether they increase for this group of users. You are a developer at a large corporation. You manage three Google Kubernetes Engine clusters on Google Cloud. Your team’s developers need to switch from one cluster to another regularly without losing access to their preferred development tools. You want to configure access to these multiple clusters while following Google-recommended best practices. What should you do?. A. Ask the developers to use Cloud Shell and run gcloud container clusters get-credentials to switch to another cluster. B. In a configuration file, define the clusters, users, and contexts. Share the file with the developers and ask them to use kubectl config to add cluster, user, and context details. C. Ask the developers to install the gcloud CLI on their workstation and run gcloud container clusters get-credentials to switch to another cluster. D. Ask the developers to open three terminals on their workstation and use kubectl config to configure access to each cluster. You are a lead developer working on a new retail system that runs on Cloud Run and Firestore. A web UI requirement is for the user to be able to browse through all products. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times. This error coincides with spikes in the number of Firestore queries. You need to prevent Cloud Run from crashing and decrease the number of Firestore queries. You want to use a solution that optimizes system performance. What should you do?. A. Modify the query that returns the product list using cursors with limits. B. Create a custom index over the products. C. Modify the query that returns the product list using integer offsets. D. Modify the Cloud Run configuration to increase the memory limits.
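To make the cursors-with-limits idea concrete, here is a minimal paging sketch with the Firestore Python client; the collection name, ordering field, and page size are placeholders:

```python
# Illustrative sketch: cursor-based paging with google-cloud-firestore so
# each request reads one page of products instead of the whole collection.
from google.cloud import firestore

PAGE_SIZE = 20
db = firestore.Client()

def product_page(last_name=None):
    query = db.collection("products").order_by("name").limit(PAGE_SIZE)
    if last_name is not None:
        # start_after resumes from a cursor; unlike an integer offset, the
        # skipped documents are never read (or billed) again.
        query = query.start_after({"name": last_name})
    return list(query.stream())

# First page, then the next page picking up after the last product seen.
page = product_page()
if page:
    next_page = product_page(last_name=page[-1].get("name"))
```

Because only one page of documents is ever held in memory per request, this also addresses the container's memory-limit crashes, not just the read volume.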
You are a developer at a large organization. Your team uses Git for source code management (SCM). You want to ensure that your team follows Google-recommended best practices to manage code to drive higher rates of software delivery. Which SCM process should your team use?. A. Each developer commits their code to the main branch before each product release, conducts testing, and rolls back if integration issues are detected. B. Each group of developers copies the repository, commits their changes to their repository, and merges their code into the main repository before each product release. C. Each developer creates a branch for their own work, commits their changes to their branch, and merges their code into the main branch daily. D. Each group of developers creates a feature branch from the main branch for their work, commits their changes to their branch, and merges their code into the main branch after the change advisory board approves it.