GCP Dev Exam Test 2


Description:
More real questions about GCP Dev Exam

Creation date: 2024/03/12

Category: Other

Number of questions: 50

Questions:

You manage your company's ecommerce platform's payment system, which runs on Google Cloud. Your company must retain user logs for 1 year for internal auditing purposes and for 3 years to meet compliance requirements. You need to store new user logs on Google Cloud to minimize on-premises storage usage and ensure that they are easily searchable. You want to minimize effort while ensuring that the logs are stored correctly. What should you do?
A. Store the logs in a Cloud Storage bucket with bucket lock turned on.
B. Store the logs in a Cloud Storage bucket with a 3-year retention period.
C. Store the logs in Cloud Logging as custom logs with a custom retention period.
D. Store the logs in a Cloud Storage bucket with a 1-year retention period. After 1 year, move the logs to another bucket with a 2-year retention period.
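Applying a single 3-year retention policy takes one command; a sketch using gsutil (the bucket name is a placeholder, and locking is optional and irreversible):

```shell
# Set a 3-year retention policy on the bucket (placeholder bucket name)
gsutil retention set 3y gs://example-audit-logs

# Optionally lock the policy so it can no longer be reduced or removed
gsutil retention lock gs://example-audit-logs
```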

Your company has a new security initiative that requires all data stored in Google Cloud to be encrypted by customer-managed encryption keys. You plan to use Cloud Key Management Service (KMS) to configure access to the keys. You need to follow the "separation of duties" principle and Google-recommended best practices. What should you do? (Choose two.)
A. Provision Cloud KMS in its own project.
B. Do not assign an owner to the Cloud KMS project.
C. Provision Cloud KMS in the project where the keys are being used.
D. Grant the roles/cloudkms.admin role to the owner of the project where the keys from Cloud KMS are being used.
E. Grant an owner role for the Cloud KMS project to a different user than the owner of the project where the keys from Cloud KMS are being used.

Your organization has recently begun an initiative to replatform their legacy applications onto Google Kubernetes Engine. You need to decompose a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code. What should you do?
A. Create a new Cloud Storage bucket, and mount it via FUSE in the container.
B. Create a new persistent disk, and mount the volume as a shared PersistentVolume.
C. Create a new Filestore instance, and mount the volume as an NFS PersistentVolume.
D. Create a new ConfigMap and volumeMount to store the contents of the configuration file.
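The Filestore option is mounted through a standard NFS PersistentVolume, so application code that reads and writes the shared file is unchanged; a minimal sketch (the server IP and share name are placeholders for a hypothetical Filestore instance):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: config-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany        # multiple Pods may read and write
  nfs:
    server: 10.0.0.2       # Filestore instance IP (placeholder)
    path: /vol1            # Filestore file share name (placeholder)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 1Ti
```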

Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after successfully validating the code. What should you do?
A. Check the maximum number of Cloud Function instances.
B. Verify that your Cloud Build trigger has the correct build parameters.
C. Retry the tests using the truncated exponential backoff polling strategy.
D. Verify that the Cloud Build service account is assigned the Cloud Functions Developer role.
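Granting the Cloud Functions Developer role to the Cloud Build service account is a one-line IAM binding; a sketch (PROJECT_ID and PROJECT_NUMBER are placeholders):

```shell
# The Cloud Build service account is named <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudfunctions.developer"
```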

You manage a microservices application on Google Kubernetes Engine (GKE) using Istio. You secure the communication channels between your microservices by implementing an Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You discover that HTTP requests between two Pods to specific URLs fail, while other requests to other URLs succeed. What is the cause of the connection issue?
A. A Kubernetes NetworkPolicy resource is blocking HTTP traffic between the Pods.
B. The Pod initiating the HTTP requests is attempting to connect to the target Pod via an incorrect TCP port.
C. The Authorization Policy of your cluster is blocking HTTP requests for specific paths within your application.
D. The cluster has mTLS configured in permissive mode, but the Pod's sidecar proxy is sending unencrypted traffic in plain text.
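A Kubernetes NetworkPolicy operates at layers 3/4 and cannot match URLs, while an Istio AuthorizationPolicy can allow or deny individual HTTP paths; a sketch of a path-restricting policy (the label and paths are placeholders):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-api-only
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend            # placeholder workload label
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET"]
            paths: ["/api/*"] # requests to any other path are denied
```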

You recently migrated an on-premises monolithic application to a microservices application on Google Kubernetes Engine (GKE). The application has dependencies on backend services on-premises, including a CRM system and a MySQL database that contains personally identifiable information (PII). The backend services must remain on-premises to meet regulatory requirements. You established a Cloud VPN connection between your on-premises data center and Google Cloud. You notice that some requests from your microservices application on GKE to the backend services are failing due to latency issues caused by fluctuating bandwidth, which is causing the application to crash. How should you address the latency issues?
A. Use Memorystore to cache frequently accessed PII data from the on-premises MySQL database.
B. Use Istio to create a service mesh that includes the microservices on GKE and the on-premises services.
C. Increase the number of Cloud VPN tunnels for the connection between Google Cloud and the on-premises services.
D. Decrease the network layer packet size by decreasing the Maximum Transmission Unit (MTU) value from its default value on Cloud VPN.

You are designing an application that consists of several microservices. Each microservice has its own RESTful API and will be deployed as a separate Kubernetes Service. You want to ensure that the consumers of these APIs aren't impacted when there is a change to your API, and also ensure that third-party systems aren't interrupted when new versions of the API are released. How should you configure the connection to the application following Google-recommended best practices?
A. Use an Ingress that uses the API's URL to route requests to the appropriate backend.
B. Leverage a Service Discovery system, and connect to the backend specified by the request.
C. Use multiple clusters, and use DNS entries to route requests to separate versioned backends.
D. Combine multiple versions in the same service, and then specify the API version in the POST request.
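A version-prefixed Ingress keeps existing consumers on their current paths when new versions ship; a minimal sketch (the Service names api-v1 and api-v2 are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
    - http:
        paths:
          - path: /v1                # existing consumers stay on v1
            pathType: Prefix
            backend:
              service:
                name: api-v1         # placeholder Service name
                port:
                  number: 80
          - path: /v2                # new version added without disruption
            pathType: Prefix
            backend:
              service:
                name: api-v2         # placeholder Service name
                port:
                  number: 80
```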

Your team is building an application for a financial institution. The application's frontend runs on Compute Engine, and the data resides in Cloud SQL and one Cloud Storage bucket. The application will collect data containing PII, which will be stored in the Cloud SQL database and the Cloud Storage bucket. You need to secure the PII data. What should you do?
A. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database. 2. Using IAM, allow only the frontend service account to access the Cloud Storage bucket.
B. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database. 2. Enable private access to allow the frontend to access the Cloud Storage bucket privately.
C. 1. Configure a private IP address for Cloud SQL. 2. Use VPC-SC to create a service perimeter. 3. Add the Cloud SQL database and the Cloud Storage bucket to the same service perimeter.
D. 1. Configure a private IP address for Cloud SQL. 2. Use VPC-SC to create a service perimeter. 3. Add the Cloud SQL database and the Cloud Storage bucket to different service perimeters.

You are designing a chat room application that will host multiple rooms and retain the message history for each room. You have selected Firestore as your database. How should you represent the data in Firestore?
A. Create a collection for the rooms. For each room, create a document that lists the contents of the messages.
B. Create a collection for the rooms. For each room, create a collection that contains a document for each message.
C. Create a collection for the rooms. For each room, create a document that contains a collection for documents, each of which contains a message.
D. Create a collection for the rooms, and create a document for each room. Create a separate collection for messages, with one document per message. Each room's document contains a list of references to the messages.
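The subcollection layout can be pictured as Firestore document paths; a sketch with placeholder IDs:

```
rooms/{roomId}                        # one document per room
rooms/{roomId}/messages/{messageId}   # 'messages' subcollection, one document per message
```

Storing each message as its own document in a per-room subcollection lets the history grow without bound, whereas a single room document that "lists the contents of the messages" would eventually hit Firestore's 1 MiB document size limit.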

You are developing an application that will handle requests from end users. You need to secure a Cloud Function called by the application to allow authorized end users to authenticate to the function via the application while restricting access to unauthorized users. You will integrate Google Sign-In as part of the solution and want to follow Google-recommended best practices. What should you do?
A. Deploy from a source code repository and grant users the roles/cloudfunctions.viewer role.
B. Deploy from a source code repository and grant users the roles/cloudfunctions.invoker role.
C. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.admin role.
D. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.developer role.

You are running a web application on Google Kubernetes Engine that you inherited. You want to determine whether the application is using libraries with known vulnerabilities or is vulnerable to XSS attacks. Which service should you use?
A. Google Cloud Armor.
B. Debugger.
C. Web Security Scanner.
D. Error Reporting.

You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?
A. 1. Create a managed instance group. Replicate the static content across the virtual machines (VMs). 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the managed instance group.
B. 1. Create an unmanaged instance group. Replicate the static content across the VMs. 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the unmanaged instance group.
C. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.
D. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.

You are writing from a Go application to a Cloud Spanner database. You want to optimize your application's performance using Google-recommended best practices. What should you do?
A. Write to Cloud Spanner using Cloud Client Libraries.
B. Write to Cloud Spanner using Google API Client Libraries.
C. Write to Cloud Spanner using a custom gRPC client library.
D. Write to Cloud Spanner using a third-party HTTP client library.

You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next?
A. Assign the Google Cloud service account to your GKE Pod using Workload Identity.
B. Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret.
C. Export the Google Cloud service account, and embed it in the source code of the application.
D. Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.
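Workload Identity binds a Kubernetes service account to a Google service account so no key ever has to be exported or rotated by hand; a sketch of the binding steps (PROJECT_ID, GSA, KSA, and NAMESPACE are all placeholders):

```shell
# Allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding \
  GSA@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA]"

# Annotate the Kubernetes service account so Pods using it get the GSA's identity
kubectl annotate serviceaccount KSA --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=GSA@PROJECT_ID.iam.gserviceaccount.com
```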

You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service?
A. Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services.
B. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh.
C. Install cert-manager on GKE to automatically renew the SSL certificates.
D. Install Anthos Service Mesh, and enable mTLS in your Service Mesh.

You are building an application that uses a distributed microservices architecture. You want to measure the performance and system resource utilization in one of the microservices written in Java. What should you do?
A. Instrument the service with Cloud Profiler to measure CPU utilization and method-level execution times in the service.
B. Instrument the service with Debugger to investigate service errors.
C. Instrument the service with Cloud Trace to measure request latency.
D. Instrument the service with OpenCensus to measure service latency, and write custom metrics to Cloud Monitoring.

Your team is responsible for maintaining an application that aggregates news articles from many different sources. Your monitoring dashboard contains publicly accessible real-time reports and runs on a Compute Engine instance as a web application. External stakeholders and analysts need to access these reports via a secure channel without authentication. How should you configure this secure channel?
A. Add a public IP address to the instance. Use the service account key of the instance to encrypt the traffic.
B. Use Cloud Scheduler to trigger Cloud Build every hour to create an export from the reports. Store the reports in a public Cloud Storage bucket.
C. Add an HTTP(S) load balancer in front of the monitoring dashboard. Configure Identity-Aware Proxy to secure the communication channel.
D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a Google-managed SSL certificate on the load balancer for traffic encryption.

You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is running and display this information to users. You want to use the most performant approach. What should you do?
A. Use HTTP requests to query the available metadata server at the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header.
B. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run "Variables & Secrets" tab, and add the desired environment variables in Key:Value format.
C. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration information to Cloud Run's in-memory container filesystem.
D. Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
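The metadata-server approach needs no console lookups or extra APIs; a minimal Python sketch that builds the documented request (the endpoint and header come straight from the question text; the server is only reachable from inside Google Cloud, so the fetch helper is shown but not invoked here):

```python
import urllib.request

METADATA_HOST = "http://metadata.google.internal"

def metadata_request(path):
    """Build a metadata-server request; the Metadata-Flavor header is mandatory."""
    url = f"{METADATA_HOST}/computeMetadata/v1/{path}"
    return urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})

def fetch(path):
    """Perform the lookup; only works inside Cloud Run / Compute Engine."""
    with urllib.request.urlopen(metadata_request(path)) as resp:
        return resp.read().decode()

# Typical paths (not called here, since the server only exists on Google Cloud):
#   fetch("project/project-id")
#   fetch("instance/region")

req = metadata_request("project/project-id")
print(req.full_url)
```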

You need to deploy resources from your laptop to Google Cloud using Terraform. Resources in your Google Cloud environment must be created using a service account. Your Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access Management (IAM) role and the necessary permissions to deploy the resources using Terraform. You want to set up your development environment to deploy the desired resources following Google-recommended best practices. What should you do?
A. 1. Download the service account's key file in JSON format, and store it locally on your laptop. 2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your downloaded key file.
B. 1. Run the following command from a command line: gcloud config set auth/impersonate_service_account service-account-name@project.iam.gserviceaccount.com. 2. Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token command.
C. 1. Run the following command from a command line: gcloud auth application-default login. 2. In the browser window that opens, authenticate using your personal credentials.
D. 1. Store the service account's key file in JSON format in HashiCorp Vault. 2. Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate to Vault using a short-lived access token.
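The impersonation option avoids long-lived key files entirely; a sketch of the two steps (the service account address is a placeholder):

```shell
# Impersonate the deployment service account for subsequent gcloud calls
gcloud config set auth/impersonate_service_account \
  terraform-deployer@my-project.iam.gserviceaccount.com

# Hand Terraform a short-lived token minted for the impersonated account
export GOOGLE_OAUTH_ACCESS_TOKEN=$(gcloud auth print-access-token)
terraform apply
```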

Your company uses Cloud Logging to manage large volumes of log data. You need to build a real-time log analysis architecture that pushes logs to a third-party application for processing. What should you do?
A. Create a Cloud Logging log export to Pub/Sub.
B. Create a Cloud Logging log export to BigQuery.
C. Create a Cloud Logging log export to Cloud Storage.
D. Create a Cloud Function to read Cloud Logging log entries and send them to the third-party application.

You are developing a new public-facing application that needs to retrieve specific properties in the metadata of users' objects in their respective Cloud Storage buckets. Due to privacy and data residency requirements, you must retrieve only the metadata and not the object data. You want to maximize the performance of the retrieval process. How should you retrieve the metadata?
A. Use the patch method.
B. Use the compose method.
C. Use the copy method.
D. Use the fields request parameter.
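The fields parameter trims the Cloud Storage JSON API response to just the named metadata properties; a sketch that builds such a request URL (the bucket and object names are placeholders):

```python
from urllib.parse import quote, urlencode

def object_metadata_url(bucket, obj, fields):
    """Build a Cloud Storage JSON API GET URL that returns only the
    requested metadata fields (the object data itself is never fetched)."""
    base = (f"https://storage.googleapis.com/storage/v1"
            f"/b/{quote(bucket, safe='')}/o/{quote(obj, safe='')}")
    return base + "?" + urlencode({"fields": ",".join(fields)})

url = object_metadata_url("example-bucket", "photos/cat.png",
                          ["name", "size", "contentType"])
print(url)
```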

You are deploying a microservices application to Google Kubernetes Engine (GKE) that will broadcast livestreams. You expect unpredictable traffic patterns and large variations in the number of concurrent users. Your application must meet the following requirements:
• Scales automatically during popular events and maintains high availability
• Is resilient in the event of hardware failures
How should you configure the deployment parameters? (Choose two.)
A. Distribute your workload evenly using a multi-zonal node pool.
B. Distribute your workload evenly using multiple zonal node pools.
C. Use cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload.
D. Create a managed instance group for Compute Engine with the cluster nodes. Configure autoscaling rules for the managed instance group.
E. Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization. Ask an on-duty engineer to scale the workload by executing a script when CPU and memory usage exceed predefined thresholds.

You work at a rapidly growing financial technology startup. You manage the payment processing application written in Go and hosted on Cloud Run in the Singapore region (asia-southeast1). The payment processing application processes data stored in a Cloud Storage bucket that is also located in the Singapore region. The startup plans to expand further into the Asia Pacific region. You plan to deploy the Payment Gateway in Jakarta, Hong Kong, and Taiwan over the next six months. Each location has data residency requirements that require customer data to reside in the country where the transaction was made. You want to minimize the cost of these deployments. What should you do?
A. Create a Cloud Storage bucket in each region, and create a Cloud Run service of the payment processing application in each region.
B. Create a Cloud Storage bucket in each region, and create three Cloud Run services of the payment processing application in the Singapore region.
C. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run services of the payment processing application in the Singapore region.
D. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run revisions of the payment processing application in the Singapore region.

You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do?
A. Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity required.
B. Use Cloud Trace to monitor the requests per second of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required.
C. Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum capacity required.
D. Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.

You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do?
A. Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code's performance.
B. Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.
C. Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.
D. Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.

Your team manages a Google Kubernetes Engine (GKE) cluster where an application is running. A different team is planning to integrate with this application. Before they start the integration, you need to ensure that the other team cannot make changes to your application, but they can deploy the integration on GKE. What should you do?
A. Using Identity and Access Management (IAM), grant the Viewer IAM role on the cluster project to the other team.
B. Create a new GKE cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.
C. Create a new namespace in the existing cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.
D. Create a new namespace in the existing cluster. Using Kubernetes role-based access control (RBAC), grant the Admin role on the new namespace to the other team.

You are trying to connect to your Google Kubernetes Engine (GKE) cluster using kubectl from Cloud Shell. You have deployed your GKE cluster with a public endpoint. From Cloud Shell, you run the following command. You notice that the kubectl commands time out without returning an error message. What is the most likely cause of this issue?
A. Your user account does not have privileges to interact with the cluster using kubectl.
B. Your Cloud Shell external IP address is not part of the authorized networks of the cluster.
C. The Cloud Shell is not part of the same VPC as the GKE cluster.
D. A VPC firewall is blocking access to the cluster's endpoint.

You are developing a web application that contains private images and videos stored in a Cloud Storage bucket. Your users are anonymous and do not have Google Accounts. You want to use your application-specific logic to control access to the images and videos. How should you configure access?
A. Cache each web application user's IP address to create a named IP table using Google Cloud Armor. Create a Google Cloud Armor security policy that allows users to access the backend bucket.
B. Grant the Storage Object Viewer IAM role to allUsers. Allow users to access the bucket after authenticating through your web application.
C. Configure Identity-Aware Proxy (IAP) to authenticate users into the web application. Allow users to access the bucket after authenticating through IAP.
D. Generate a signed URL that grants read access to the bucket. Allow users to access the URL after authenticating through your web application.

You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?
A. Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod is failing.
B. Create the Deployment with a livenessProbe for the container that will fail if the container can't connect to the database. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.
C. Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.
D. Create the Deployment with an initContainer that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.

You are responsible for deploying a new API. That API will have three different URL paths:
• https://yourcompany.com/students
• https://yourcompany.com/teachers
• https://yourcompany.com/classes
You need to configure each API URL path to invoke a different function in your code. What should you do?
A. Create one Cloud Function as a backend service exposed using an HTTPS load balancer.
B. Create three Cloud Functions exposed directly.
C. Create one Cloud Function exposed directly.
D. Create three Cloud Functions as three backend services exposed using an HTTPS load balancer.

You are deploying a microservices application to Google Kubernetes Engine (GKE). The application will receive daily updates. You expect to deploy a large number of distinct containers that will run on the Linux operating system (OS). You want to be alerted to any known OS vulnerabilities in the new containers. You want to follow Google-recommended best practices. What should you do?
A. Use the gcloud CLI to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.
B. Enable Container Analysis, and upload new container images to Artifact Registry. Review the vulnerability results before each deployment.
C. Enable Container Analysis, and upload new container images to Artifact Registry. Review the critical vulnerability results before each deployment.
D. Use the Container Analysis REST API to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.

You are a developer at a large organization. You have an application written in Go running in a production Google Kubernetes Engine (GKE) cluster. You need to add a new feature that requires access to BigQuery. You want to grant BigQuery access to your GKE cluster following Google-recommended best practices. What should you do?
A. Create a Google service account with BigQuery access. Add the JSON key to Secret Manager, and use the Go client library to access the JSON key.
B. Create a Google service account with BigQuery access. Add the Google service account JSON key as a Kubernetes secret, and configure the application to use this secret.
C. Create a Google service account with BigQuery access. Add the Google service account JSON key to Secret Manager, and use an init container to access the secret for the application to use.
D. Create a Google service account and a Kubernetes service account. Configure Workload Identity on the GKE cluster, and reference the Kubernetes service account on the application Deployment.

You have an application written in Python running in production on Cloud Run. Your application needs to read/write data stored in a Cloud Storage bucket in the same project. You want to grant access to your application following the principle of least privilege. What should you do?
A. Create a user-managed service account with a custom Identity and Access Management (IAM) role.
B. Create a user-managed service account with the Storage Admin Identity and Access Management (IAM) role.
C. Create a user-managed service account with the Project Editor Identity and Access Management (IAM) role.
D. Use the default service account linked to the Cloud Run revision in production.

Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud Source Repositories repository. You are responsible for implementing the tests. Only a specific service account has the necessary permissions to deploy the code to Cloud Functions. You want to ensure that the code cannot be deployed without first passing the tests. How should you configure the unit testing process?
A. Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a deployment approval is sent to you.
B. Configure Cloud Build to deploy the Cloud Function, using the specific service account as the build agent. Run the unit tests after successful deployment.
C. Configure Cloud Build to run the unit tests. If the code passes the tests, the developer deploys the Cloud Function.
D. Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build deploys the Cloud Function.
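Because Cloud Build runs steps sequentially and aborts on the first failure, placing the test step before the deploy step enforces "no deploy without passing tests"; a sketch of such a cloudbuild.yaml (the runner image, function name, region, and service account are all placeholders):

```yaml
steps:
  - name: 'node:18'                      # placeholder test-runner image
    entrypoint: npm
    args: ['test']                       # build stops here if tests fail
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    args: ['gcloud', 'functions', 'deploy', 'my-function',
           '--runtime=nodejs18', '--trigger-http',
           '--source=.', '--region=us-central1']
options:
  logging: CLOUD_LOGGING_ONLY            # required when using a custom service account
serviceAccount: 'projects/PROJECT_ID/serviceAccounts/deployer@PROJECT_ID.iam.gserviceaccount.com'
```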

Your team detected a spike of errors in an application running on Cloud Run in your production project. The application is configured to read messages from Pub/Sub topic A, process the messages, and write the messages to topic B. You want to conduct tests to identify the cause of the errors. You can use a set of mock messages for testing. What should you do?
A. Deploy the Pub/Sub and Cloud Run emulators on your local machine. Deploy the application locally, and change the logging level in the application to DEBUG or INFO. Write mock messages to topic A, and then analyze the logs.
B. Use the gcloud CLI to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs.
C. Deploy the Pub/Sub emulator on your local machine. Point the production application to your local Pub/Sub topics. Write mock messages to topic A, and then analyze the logs.
D. Use the Google Cloud console to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs.

You are developing a Java Web Server that needs to interact with Google Cloud services via the Google Cloud API on the user's behalf. Users should be able to authenticate to the Google Cloud API using their Google Cloud identities. Which workflow should you implement in your web application?
A. 1. When a user arrives at your application, prompt them for their Google username and password. 2. Store an SHA password hash in your application's database along with the user's username. 3. The application authenticates to the Google Cloud API using HTTPS requests with the user's username and password hash in the Authorization request header.
B. 1. When a user arrives at your application, prompt them for their Google username and password. 2. Forward the user's username and password in an HTTPS request to the Google Cloud authorization server, and request an access token. 3. The Google server validates the user's credentials and returns an access token to the application. 4. The application uses the access token to call the Google Cloud API.
C. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in with SSO to their Google Account. 2. After the user signs in and provides consent, your application receives an authorization code from a Google server. 3. The Google server returns the authorization code to the user, which is stored in the browser's cookies. 4. The user authenticates to the Google Cloud API using the authorization code in the cookie.
D. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in with SSO to their Google Account. 2. After the user signs in and provides consent, your application receives an authorization code from a Google server. 3. The application requests a Google server to exchange the authorization code with an access token. 4. The Google server responds with the access token that is used by the application to call the Google Cloud API.

You recently developed a new application. You want to deploy the application on Cloud Run without a Dockerfile. Your organization requires that all container images are pushed to a centrally managed container repository. How should you build your container using Google Cloud services? (Choose two.)
A. Push your source code to Artifact Registry.
B. Submit a Cloud Build job to push the image.
C. Use the pack build command with pack CLI.
D. Include the --source flag with the gcloud run deploy CLI command.
E. Include the --platform=kubernetes flag with the gcloud run deploy CLI command.
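Both buildpack-based options build the image without a Dockerfile; hedged sketches of each invocation (the project, repository, region, and service names are placeholders):

```shell
# Option 1: let Cloud Run build from source with Google Cloud buildpacks
gcloud run deploy my-service --source . --region us-central1

# Option 2: build locally with the pack CLI, then push to Artifact Registry
pack build us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app \
  --builder gcr.io/buildpacks/builder
docker push us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app
```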

You work for an organization that manages an online ecommerce website. Your company plans to expand across the world; however, the store currently serves one specific region. You need to select a SQL database and configure a schema that will scale as your organization grows. You want to create a table that stores all customer transactions and ensure that the customer (CustomerId) and the transaction (TransactionId) are unique. What should you do?. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId.
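The recommended schema uses a random UUID rather than an incremental TransactionId, because Cloud Spanner shards rows by primary-key range and sequential keys would concentrate all writes on one split (hotspotting). A minimal sketch of generating such a key:

```python
import uuid

def new_transaction_id() -> str:
    """Generate a random, uniformly distributed primary-key value.

    Sequential IDs would send every insert to the same Spanner split;
    a version-4 UUID spreads writes across the keyspace.
    """
    return str(uuid.uuid4())

tid = new_transaction_id()
print(tid)  # e.g. a 36-character string like 9f1c2d3e-...
```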

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do?. Download, install, and start the Snapshot Debugger agent in your VM. Take debug snapshots of the functions that take the longest time. Review the call stack frame, and identify the local variables at that level in the stack. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions. Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify where bottlenecks are occurring. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.

You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do?. Add a startup probe. Increase the initial delay for the liveness probe. Increase the CPU limit for the container. Add a readiness probe.
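The recommended fix is a startup probe, which holds off liveness checks until the container has finished launching instead of guessing at an initial delay. A hedged manifest fragment; the path, port, and timings are illustrative:

```yaml
# Liveness checks begin only after the startup probe first succeeds,
# so a slow launch no longer triggers a liveness-failure restart.
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # allows up to 30 * 10s = 300s to start
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```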

You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. How should you test this feature?. Split traffic between versions using weights. Enable the new recommendation feature flag on a single instance. Mirror traffic to the new version of your application. Use HTTP header-based routing.

You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?. Perform a rolling update with a PodDisruptionBudget of 80%. Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0. Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%. Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.

Before promoting your new application code to production, you want to conduct testing across a variety of different users. Although this plan is risky, you want to test the new version of the application with production users and you want to control which users are forwarded to the new version of the application based on their operating system. If bugs are discovered in the new version, you want to roll back the newly deployed version of the application as quickly as possible. What should you do?. Deploy your application on Cloud Run. Use traffic splitting to direct a subset of user traffic to the new version based on the revision tag. Deploy your application on Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct a subset of user traffic to the new version based on the user-agent header. Deploy your application on App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address. Deploy your application on Compute Engine. Use Traffic Director to direct a subset of user traffic to the new version based on predefined weights.
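With Anthos Service Mesh, operating-system-based routing can be expressed as a user-agent header match in an Istio VirtualService, and rollback is a one-resource edit. A sketch with illustrative host, subset, and regex values:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: recommendations
spec:
  hosts:
  - recommendations
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Android.*"   # illustrative OS selector
    route:
    - destination:
        host: recommendations
        subset: v2               # new application version
  - route:
    - destination:
        host: recommendations
        subset: v1               # current version; rollback = delete the match rule
```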

Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a payroll application. The IVR system has the following technical characteristics: • Each customer phone call is associated with a unique IVR session. • The IVR system creates a separate persistent gRPC connection to the backend for each session. • If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call. You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that: • Call duration ranges from 1 to 30 minutes. • Calls are typically made during business hours. • There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur. You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?. Compute Engine. Google Kubernetes Engine cluster in Standard mode. Cloud Functions. Cloud Run.

You are developing an application hosted on Google Cloud that uses a MySQL relational database schema. The application will have a large volume of reads and writes to the database and will require backups and ongoing capacity planning. Your team does not have time to fully manage the database but can take on small administrative tasks. How should you host the database?. Configure Cloud SQL to host the database, and import the schema into Cloud SQL. Deploy MySQL from the Google Cloud Marketplace, connect to the database using a client, and import the schema. Configure Bigtable to host the database, and import the data into Bigtable. Configure Cloud Spanner to host the database, and import the schema into Cloud Spanner. Configure Firestore to host the database, and import the data into Firestore.

You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next?. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic. Create a build trigger that runs the build file in response to code being pushed to the repository's development branch. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
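The build trigger in the correct option can be created once with the gcloud CLI; a hedged sketch, assuming the repository is named my-web-app and the build file is cloudbuild.yaml:

```shell
# Run cloudbuild.yaml automatically on every push to the development branch.
gcloud builds triggers create cloud-source-repositories \
    --repo=my-web-app \
    --branch-pattern="^development$" \
    --build-config=cloudbuild.yaml
```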

You are a developer at a large organization. You are deploying a web application to Google Kubernetes Engine (GKE). The DevOps team has built a CI/CD pipeline that uses Cloud Deploy to deploy the application to Dev, Test, and Prod clusters in GKE. After Cloud Deploy successfully deploys the application to the Dev cluster, you want to automatically promote it to the Test cluster. How should you configure this process following Google-recommended best practices?. 1. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic. 2. Configure Cloud Build to include a step that promotes the application to the Test cluster. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster. 2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the cloud-builds topic. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster. 2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic. 1. Create a Cloud Build pipeline that uses the gke-deploy builder. 2. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the cloud-builds topic. 3. Configure this pipeline to run a deployment step to the Test cluster.

Your application is running as a container in a Google Kubernetes Engine cluster. You need to add a secret to your application using a secure approach. What should you do?. Create a Kubernetes Secret, and pass the Secret as an environment variable to the container. Enable Application-layer Secret Encryption on the cluster using a Cloud Key Management Service (KMS) key. Store the credential in Cloud KMS. Create a Google service account (GSA) to read the credential from Cloud KMS. Export the GSA as a .json file, and pass the .json file to the container as a volume which can read the credential from Cloud KMS. Store the credential in Secret Manager. Create a Google service account (GSA) to read the credential from Secret Manager. Create a Kubernetes service account (KSA) to run the container. Use Workload Identity to configure your KSA to act as a GSA.
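The Workload Identity binding in the correct option takes two steps: allow the Kubernetes service account (KSA) to impersonate the Google service account (GSA), then annotate the KSA. A hedged sketch with placeholder project, namespace, and account names:

```shell
# 1. Let the KSA act as the GSA via the Workload Identity pool.
gcloud iam service-accounts add-iam-policy-binding \
    app-gsa@my-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[my-namespace/app-ksa]"

# 2. Point the KSA at the GSA; pods running as app-ksa can then read the
#    secret from Secret Manager using the GSA's secretAccessor grant.
kubectl annotate serviceaccount app-ksa --namespace my-namespace \
    iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com
```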

You are a developer at a financial institution. You use Cloud Shell to interact with Google Cloud services. User data is currently stored on an ephemeral disk; however, a recently passed regulation mandates that you can no longer store sensitive information on an ephemeral disk. You need to implement a new storage solution for your user data. You want to minimize code changes. Where should you store your user data?. Store user data on a Cloud Shell home disk, and log in at least every 120 days to prevent its deletion. Store user data on a persistent disk in a Compute Engine instance. Store user data in a Cloud Storage bucket. Store user data in BigQuery tables.

You recently developed a web application to transfer log data to a Cloud Storage bucket daily. Authenticated users will regularly review logs from the prior two weeks for critical events. After that, logs will be reviewed once annually by an external auditor. Data must be stored for a period of no less than 7 years. You want to propose a storage solution that meets these requirements and minimizes costs. What should you do? (Choose two.). Use the Bucket Lock feature to set the retention policy on the data. Run a scheduled job to set the storage class to Coldline for objects older than 14 days. Create a JSON Web Token (JWT) for users needing access to the Coldline storage buckets. Create a lifecycle management policy to set the storage class to Coldline for objects older than 14 days. Create a lifecycle management policy to set the storage class to Nearline for objects older than 14 days.
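The two correct choices combine on one bucket: a lifecycle rule that moves objects to Coldline after 14 days, plus a Bucket Lock retention policy of at least 7 years. A hedged example of the lifecycle configuration file, as applied with `gsutil lifecycle set`:

```json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 14}
    }
  ]
}
```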
