Google Cloud Platform Practice Questions (PDF)

Summary

This document contains practice questions for Google Cloud Platform, covering various topics such as database solutions, cost analysis, VPC networking, and project migration. The questions are in a multiple-choice format and accompanied by explanations. The questions are appropriate for professional development and certification.

Full Transcript

D. Store game statistics in a Bigtable database partitioned by username.
Answer: B
Explanation: Among the options provided, the best answer for ensuring optimal gaming performance for global users without increasing management complexity is option B. Cloud Spanner is a globally distributed, horizontally scalable database service provided by Google Cloud Platform. It offers strong consistency guarantees, high availability, and automatic scaling, making it a suitable choice for storing user data mapped to game statistics.

Question: 200 CertyIQ
You are building an application that stores relational data from users. Users across the globe will use this application. Your CTO is concerned about the scaling requirements because the size of the user base is unknown. You need to implement a database solution that can scale with your user growth with minimum configuration changes. Which storage solution should you use?
A. Cloud SQL
B. Firestore
C. Cloud Spanner
D. Bigtable
Answer: C
Explanation: Cloud Spanner

Question: 201 CertyIQ
Your company has multiple projects linked to a single billing account in Google Cloud. You need to visualize the costs with specific metrics that should be dynamically calculated based on company-specific criteria. You want to automate the process. What should you do?
A. In the Google Cloud console, visualize the costs related to the projects in the Reports section.
B. In the Google Cloud console, visualize the costs related to the projects in the Cost breakdown section.
C. In the Google Cloud console, use the export functionality of the Cost table. Create a Looker Studio dashboard on top of the CSV export.
D. Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker Studio dashboard on top of the BigQuery export.
Answer: D
Explanation: Option D closely aligns with the requirements in the question. By configuring Cloud Billing data export to BigQuery, you automate the export of billing data to a BigQuery dataset. You can then use Looker Studio, a data visualization and exploration platform, to create a dashboard on top of the BigQuery export. This allows you to visualize costs with specific metrics that can be dynamically calculated based on company-specific criteria.

Question: 202 CertyIQ
You have an application that runs on Compute Engine VM instances in a custom Virtual Private Cloud (VPC). Your company's security policies only allow the use of internal IP addresses on VM instances and do not let VM instances connect to the internet. You need to ensure that the application can access a file hosted in a Cloud Storage bucket within your project. What should you do?
A. Enable Private Service Access on the Cloud Storage bucket.
B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
C. Enable Private Google Access on the subnet within the custom VPC.
D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
Answer: C
Explanation:
1. Private Google Access is a VPC feature.
2. C allows access to Google services and APIs without external IP addresses.
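For reference, Private Google Access is enabled on a per-subnet basis. A minimal sketch, assuming a hypothetical subnet named my-subnet in region us-central1:

    gcloud compute networks subnets update my-subnet \
        --region=us-central1 \
        --enable-private-ip-google-access

After this change, VMs with only internal IP addresses on that subnet can reach storage.googleapis.com through Google's internal network.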
Question: 203 CertyIQ
Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task with minimal effort. What should you do?
A. Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.
B. Ensure that you have an Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup's Google Cloud organization, and drag the project to your company's organization.
C. Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company's project.
D. Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup's Google Cloud organization.
Answer: A
Explanation: Option A is correct because it uses the projects.move method provided by Google Cloud to move the project from the startup's organization to your organization. This method transfers ownership and control of a project to another organization, so the project comes under your organization's management. While the other options contain elements that may be relevant in certain scenarios, they do not directly address the requirement of moving the project and ensuring it is billed to your organization.
Reference: https://cloud.google.com/resource-manager/docs/project-migration-checklist

Question: 204 CertyIQ
All development (dev) teams in your organization are located in the United States. Each dev team has its own Google Cloud project. You want to restrict access so that each dev team can only create cloud resources in the United States (US). What should you do?
A. Create a folder to contain all the dev projects. Create an organization policy to limit resources in US locations.
B. Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
C. Create an Identity and Access Management (IAM) policy to restrict the resources locations in the US. Apply the policy to all dev projects.
D. Create an Identity and Access Management (IAM) policy to restrict the resources locations in all dev projects. Apply the policy to all dev roles.
Answer: A
Explanation: Option A is the most suitable answer. By creating a folder to contain all the dev projects, you organize them in a logical structure within your organization. You can then apply an organization policy on that folder to limit resources to US locations, which restricts the creation of cloud resources outside the United States and enforces the restriction centrally across all dev projects in the folder. You need the "Google Cloud Platform - Resource Location Restriction" organization policy constraint.
Reference: https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
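As an illustration, the resource-location constraint can be set on a folder with one command. A sketch, assuming a hypothetical folder ID 123456789 and the predefined in:us-locations value group (verify the value group name against the organization policy documentation):

    gcloud resource-manager org-policies allow \
        constraints/gcp.resourceLocations in:us-locations \
        --folder=123456789

The in:us-locations value group covers US regions and multi-regions, so attempts to create regional resources outside the US are rejected.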
Question: 205 CertyIQ
You are configuring Cloud DNS. You want to create DNS records to point home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your Google Cloud load balancer. What should you do?
A. Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.
B. Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com respectively.
C. Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
D. Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.
Answer: C
Explanation:
1. Option A suggests creating a CNAME record to point mydomain.com to the load balancer, which is incorrect because a CNAME record cannot be placed at the zone apex (mydomain.com), where it would have to coexist with the required SOA and NS records; you need an A record there instead. Option B suggests creating two AAAA records, which are used for IPv6 addresses; unless you specifically have an IPv6 address for your load balancer, AAAA records are not appropriate. Option D suggests creating two NS records, which specify the authoritative name servers for a domain; NS records are not used to point subdomains to IP addresses or load balancers. Therefore, option C is correct: create one A record to point mydomain.com to the load balancer, and two CNAME records to point WWW and HOME to mydomain.com respectively.
2. You can only associate an A (IP address) record directly with the domain apex.
Reference: https://cloud.google.com/dns/docs/set-up-dns-records-domain-name#create_a_record_to_point_the_domain_to_an_external_ip_address

Question: 206 CertyIQ
You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers are running in subnet-a. Your application servers and web servers are running in subnet-b. You want to configure a firewall rule that only allows database traffic from the application servers to the database servers. What should you do?
A. Create service accounts sa-app and sa-db. Associate service account sa-app with the application servers and the service account sa-db with the database servers. Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B. Create network tags app-server and db-server. Add the app-server tag to the application servers and the db-server tag to the database servers. Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
C. Create a service account sa-app and a network tag db-server. Associate the service account sa-app with the application servers and the network tag db-server with the database servers. Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D. Create a network tag app-server and service account sa-db. Add the tag to the application servers and associate the service account with the database servers. Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.
Answer: A
Explanation:
1. Service accounts can be used for firewall management.
2. You can use service accounts for firewall rules.
Reference: https://cloud.google.com/blog/products/gcp/simplify-cloud-vpc-firewall-management-with-service-accounts
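To make the service-account-based rule concrete, here is a minimal sketch, assuming hypothetical service account emails and MySQL's port 3306 as the database port:

    gcloud compute firewall-rules create allow-app-to-db \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:3306 \
        --source-service-accounts=sa-app@my-project.iam.gserviceaccount.com \
        --target-service-accounts=sa-db@my-project.iam.gserviceaccount.com

Because the rule matches on the identities attached to the VMs rather than on IP ranges, it keeps working as instances are added or replaced.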
Question: 207 CertyIQ
Your team wants to deploy a specific content management system (CMS) solution to Google Cloud. You need a quick and easy way to deploy and install the solution. What should you do?
A. Search for the CMS solution in Google Cloud Marketplace. Use gcloud CLI to deploy the solution.
B. Search for the CMS solution in Google Cloud Marketplace. Deploy the solution directly from Cloud Marketplace.
C. Search for the CMS solution in Google Cloud Marketplace. Use Terraform and the Cloud Marketplace ID to deploy the solution with the appropriate parameters.
D. Use the installation guide of the CMS provider. Perform the installation through your configuration management system.
Answer: B
Explanation:
1. Indeed, directly from Cloud Marketplace.
2. We can deploy it directly from Cloud Marketplace.

Question: 208 CertyIQ
You are working for a startup that was officially registered as a business 6 months ago. As your customer base grows, your use of Google Cloud increases. You want to allow all engineers to create new projects without asking them for their credit card information. What should you do?
A. Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
Answer: A
Explanation: Option A is the better answer for the given scenario. It centralizes billing and payment management while providing flexibility to project creators. By creating a billing account and associating a payment method with it, you establish a central source of billing and payment for all projects. Granting project creators permission to associate the billing account with their projects ensures that they can create projects without providing individual credit card information. This approach streamlines the process and avoids the hassle of collecting credit card details from each engineer. Additionally, this option allows for easy monitoring and management of project costs through a single billing account, making it simpler to track expenses and allocate resources effectively.

Question: 209 CertyIQ
Your continuous integration and delivery (CI/CD) server can't execute Google Cloud actions in a specific project because of permission issues. You need to validate whether the used service account has the appropriate roles in the specific project. What should you do?
A. Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from the folder or organization levels.
B. Open the Google Cloud console, and check the organization policies.
C. Open the Google Cloud console, and run a query to determine which resources this service account can access.
D. Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.
Answer: A
Explanation: Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from the folder or organization levels.

Question: 210 CertyIQ
Your team is using Linux instances on Google Cloud. You need to ensure that your team logs in to these instances in the most secure and cost-efficient way. What should you do?
A. Attach a public IP to the instances and allow incoming connections from the internet on port 22 for SSH.
B. Use the gcloud compute ssh command with the --tunnel-through-iap flag. Allow ingress traffic from the IP range 35.235.240.0/20 on port 22.
C. Use a third-party tool to provide remote access to the instances.
D. Create a bastion host with public internet access. Create the SSH tunnel to the instance through the bastion host.
Answer: B
Explanation:
1. Use a bastion host only if "You have a specific use case, like session recording, and you can't use IAP." https://cloud.google.com/compute/docs/connect/ssh-internal-ip
2. https://cloud.google.com/compute/docs/connect/ssh-using-iap#gcloud; according to the documentation, the correct answer is B.
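A minimal sketch of the IAP approach, assuming a hypothetical instance my-vm in zone us-central1-a; 35.235.240.0/20 is the documented source range IAP uses for TCP forwarding:

    gcloud compute firewall-rules create allow-iap-ssh \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:22 \
        --source-ranges=35.235.240.0/20

    gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap

No public IP is needed on the instance, which keeps the attack surface small at no extra cost.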
Question: 211 CertyIQ
An external member of your team needs list access to compute images and disks in one of your projects. You want to follow Google-recommended practices when you grant the required permissions to this user. What should you do?
A. Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions. Grant the custom role to the user at the project level.
B. Create a custom role based on the Compute Image User role. Add the compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.
C. Create a custom role based on the Compute Storage Admin role. Exclude unnecessary permissions from the custom role. Grant the custom role to the user at the project level.
D. Grant the Compute Storage Admin role at the project level.
Answer: A
Explanation: Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions. Grant the custom role to the user at the project level.

Question: 212 CertyIQ
You are running a web application on Cloud Run for a few hundred users. Some of your users complain that the initial web page of the application takes much longer to load than the following pages. You want to follow Google's recommendations to mitigate the issue. What should you do?
A. Set the minimum number of instances for your Cloud Run service to 3.
B. Set the concurrency number to 1 for your Cloud Run service.
C. Set the maximum number of instances for your Cloud Run service to 100.
D. Update your web application to use the protocol HTTP/2 instead of HTTP/1.1.
Answer: A
Explanation: Set the minimum number of instances for your Cloud Run service to 3. Keeping instances warm avoids the cold-start delay that makes the first page load slow.

Question: 213 CertyIQ
You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on Google-recommended practices. What should you do?
A. Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage.
B. Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.
C. Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.
D. Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.
Answer: A
Explanation: A. Streaming data to Pub/Sub allows you to decouple the ingestion of data from the processing and storage, providing a scalable and reliable message queue that can handle the high volume of data coming from millions of sensors. Using Dataflow to consume data from Pub/Sub and send it to Cloud Storage allows for real-time data processing and storage. Dataflow is a fully managed service for processing data in real-time or batch mode, making it an ideal choice for handling the constant stream of data from IoT sensors. Storing data in Cloud Storage offers high durability and availability, providing a robust foundation for building a data lake. Cloud Storage is a scalable object storage service that can handle large volumes of structured and unstructured data, making it well-suited for the IoT application's data requirements.
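As an illustration of the Pub/Sub-to-Cloud-Storage path, Google provides streaming Dataflow templates. A sketch, assuming a hypothetical topic sensor-events, a bucket my-iot-lake, and the Google-provided Cloud_PubSub_to_GCS_Text template; the template name and parameter names are assumptions to verify against the current Dataflow templates documentation:

    gcloud pubsub topics create sensor-events

    gcloud dataflow jobs run sensors-to-lake \
        --region=us-central1 \
        --gcs-location=gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text \
        --parameters=inputTopic=projects/my-project/topics/sensor-events,outputDirectory=gs://my-iot-lake/raw/,outputFilenamePrefix=sensors-

For production, a custom Dataflow pipeline would let you validate and partition the data before it lands in the bucket.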
Question: 214 CertyIQ
You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual machines in the project. You need to provide more IP addresses for the virtual machines. What should you do?
A. Add a secondary IP range 10.1.0.0/20 to the subnet.
B. Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/18.
C. Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/22.
D. Convert the subnet IP range from IPv4 to IPv6.
Answer: B
Explanation: Expanding the primary range from /20 to /18 provides more primary internal IP addresses; a subnet's primary range can only be expanded, never shrunk, and secondary ranges serve alias IPs rather than primary VM addresses.
Reference: https://cloud.google.com/vpc/docs/create-modify-vpc-networks#expand-subnet

Question: 215 CertyIQ
Your company requires all developers to have the same permissions, regardless of the Google Cloud project they are working on. Your company's security policy also restricts developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to implement the security policy with minimal effort. What should you do?
A. Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization. Copy the role across all projects created within the organization with the gcloud iam roles copy command. Assign the role to developers in those projects.
B. Add all developers to a Google group in Google Groups for Workspace. Assign the predefined role of Compute Admin to the Google group at the Google Cloud organization level.
C. Add all developers to a Google group in Cloud Identity. Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the Google group for each project in the Google Cloud organization.
D. Add all developers to a Google group in Cloud Identity. Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level. Assign the custom role to the Google group.
Answer: D
Explanation: Create the custom role once at the organization level so it applies uniformly to every project.
Reference: https://www.cloudskillsboost.google/focuses/1035?parent=catalog#:~:text=custom%20role%20at%20the%20organization%20level

Question: 216 CertyIQ
You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud Storage for archival storage of these images. The hospital wants an automated process to upload any new medical images to Cloud Storage. You need to design and implement a solution. What should you do?
A. Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.
B. Create a script that uses the gcloud storage command to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.
C. Create a Pub/Sub topic, and create a Cloud Function connected to the topic that writes data to Cloud Storage. Create an application that sends all medical images to the Pub/Sub topic.
D. In the Google Cloud console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.
Answer: B
Explanation: Create a script that uses the gcloud storage command to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.
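A minimal sketch of the synchronization job, assuming a hypothetical local directory /data/medical-images and bucket gs://hospital-archive:

    # sync.sh: one-way upload of new or changed images
    gcloud storage rsync --recursive /data/medical-images gs://hospital-archive

    # crontab entry: run every night at 02:00
    0 2 * * * /opt/scripts/sync.sh

gcloud storage rsync only copies objects that are missing or changed at the destination, so repeated runs are cheap.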
Question: 217 CertyIQ
Your company has an internal application for managing transactional orders. The application is used exclusively by employees in a single physical location. The application requires strong consistency, fast queries, and ACID guarantees for multi-table transactional updates. The first version of the application is implemented in PostgreSQL, and you want to deploy it to the cloud with minimal code changes. Which database is most appropriate for this application?
A. Bigtable
B. BigQuery
C. Cloud SQL
D. Firestore
Answer: C
Explanation: ACID guarantees and strong consistency narrow the choice to C or D, but Firestore is a document database and the question calls for multi-table updates, so Cloud SQL (C) remains.

Question: 218 CertyIQ
Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?
A. Create an Instance Template with Spot VMs On. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.
B. Migrate the workload to a Compute Engine VM. Start and stop the instance as needed.
C. Migrate the workload to a Google Kubernetes Engine cluster with Spot nodes.
D. Migrate the workload to a Compute Engine Spot VM.
Answer: B
Explanation: B. Migrating the workload to a Compute Engine VM and starting and stopping the instance as needed allows you to control when the task runs. This approach provides flexibility in terms of when to initiate the batch process, and it can be easily scheduled to run monthly. By stopping the instance when the task is not running, you can save on compute costs.

Question: 219 CertyIQ
You are planning to migrate the following on-premises data management solutions to Google Cloud:
One MySQL cluster for your main database
Apache Kafka for your event streaming platform
One Cloud SQL for PostgreSQL database for your analytical and reporting needs
You want to implement Google-recommended solutions for the migration. You need to ensure that the new solutions provide global scalability and require minimal operational and infrastructure management. What should you do?
A. Migrate from MySQL to Cloud SQL, from Kafka to Pub/Sub, and from Cloud SQL for PostgreSQL to BigQuery.
B. Migrate from MySQL to Cloud Spanner, from Kafka to Pub/Sub, and from Cloud SQL for PostgreSQL to BigQuery.
C. Migrate from MySQL to Cloud Spanner, from Kafka to Memorystore, and from Cloud SQL for PostgreSQL to Cloud SQL.
D. Migrate from MySQL to Cloud SQL, from Kafka to Memorystore, and from Cloud SQL for PostgreSQL to Cloud SQL.
Answer: B
Explanation: B should be the answer, as Cloud Spanner provides global scalability with minimal operational management.

Question: 220 CertyIQ
During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain. You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?
A. Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.
B. Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.
C. Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.
D. Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.
Answer: D
Explanation:
1. https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints - This list constraint defines the set of domains that email addresses added to Essential Contacts can have. By default, email addresses with any domain can be added to Essential Contacts. The allowed/denied list must specify one or more domains of the form @example.com. If this constraint is active and configured with allowed values, only email addresses with a suffix matching one of the entries from the list of allowed domains can be added in Essential Contacts. This constraint has no effect on updating or removing existing contacts (constraints/essentialcontacts.allowedContactDomains).
2. In order to define an organization policy, you choose a constraint, which is a particular type of restriction.
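For the domain-restriction part specifically, the relevant constraint for IAM members is iam.allowedPolicyMemberDomains, which takes Google Workspace customer IDs rather than domain names. A sketch, assuming a hypothetical customer ID C01a2b3c4 and organization ID 123456789:

    gcloud resource-manager org-policies allow \
        constraints/iam.allowedPolicyMemberDomains C01a2b3c4 \
        --organization=123456789

As with the Essential Contacts constraint quoted above, the restriction does not retroactively remove existing members, which is why option D includes a manual cleanup step.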
Question: 221 CertyIQ
Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging indicating that one of the processes on one VM is not responsive. You want to replace this VM in the MIG quickly. What should you do?
A. Use the gcloud compute instances update command with a REFRESH action for the VM.
B. Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.
C. Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.
D. Update and apply the instance template of the MIG.
Answer: B
Explanation: Following the documentation, the answer is B.
Reference: https://cloud.google.com/sdk/gcloud/reference/compute/instance-groups/managed/recreate-instances

Question: 222 CertyIQ
You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?
A. Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.
B. Use kubectl to delete the topic resource.
C. Use gcloud CLI to delete the topic.
D. Use gcloud CLI to update the topic label managed-by-cnrm to false.
Answer: B
Explanation:
1. Resources created through Config Connector (kubectl) should be removed the same way.
2. https://cloud.google.com/config-connector/docs/how-to/getting-started#deleting_a_resource

Question: 223 CertyIQ
Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most employees will need access to your company's Google Cloud account. The systems and processes will need to support 10x growth without performance degradation, unnecessary complexity, or security issues. What should you do?
A. Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to Active Directory.
B. Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.
C. Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain-wide delegation.
D. Use a third-party identity provider service through federation. Synchronize the users from Google Workspace to the third-party provider in real time.
Answer: C
Explanation: Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain-wide delegation.
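For Question 221, the recreate-instances verb replaces a specific VM in a MIG using the current instance template. A minimal sketch, assuming a hypothetical group web-mig and instance web-mig-abcd in zone us-central1-a:

    gcloud compute instance-groups managed recreate-instances web-mig \
        --instances=web-mig-abcd \
        --zone=us-central1-a

The MIG deletes the unhealthy VM and builds a fresh one with the same name, which is the fastest supported way to replace a single misbehaving instance.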
Question: 224 CertyIQ
You want to host your video encoding software on Compute Engine. Your user base is growing rapidly, and users need to be able to encode their videos at any time without interruption or CPU limitations. You must ensure that your encoding solution is highly available, and you want to follow Google-recommended practices to automate operations. What should you do?
A. Deploy your solution on multiple standalone Compute Engine instances, and increase the number of existing instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
B. Deploy your solution on multiple standalone Compute Engine instances, and replace existing instances with high-CPU instances when CPU utilization on Cloud Monitoring reaches a certain threshold.
C. Deploy your solution to an instance group, and increase the number of available instances whenever you see high CPU utilization in Cloud Monitoring.
D. Deploy your solution to an instance group, and set the autoscaling based on CPU utilization.
Answer: D
Explanation:
1. Definitely D.
2. https://cloud.google.com/sdk/gcloud/reference/compute/instance-groups/managed/set-autoscaling

Question: 225 CertyIQ
Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to solve the instance creation problem. What should you do?
A. Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.
B. Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.
C. Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.
D. Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.
Answer: A
Explanation: Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.

Question: 226 CertyIQ
You have created an application that is packaged into a Docker image. You want to deploy the Docker image as a workload on Google Kubernetes Engine. What should you do?
A. Upload the image to Cloud Storage and create a Kubernetes Service referencing the image.
B. Upload the image to Cloud Storage and create a Kubernetes Deployment referencing the image.
C. Upload the image to Artifact Registry and create a Kubernetes Service referencing the image.
D. Upload the image to Artifact Registry and create a Kubernetes Deployment referencing the image.
Answer: D
Explanation: Artifact Registry is a fully managed container registry that integrates seamlessly with Google Kubernetes Engine and other Google Cloud services. By uploading the Docker image to Artifact Registry, you can create a Kubernetes Deployment that references the image stored in Artifact Registry. This ensures that Kubernetes can pull the image from a trusted and managed source, while the Deployment manages the deployment and scaling of the application pods based on the image.
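To illustrate Question 226's flow end to end, a sketch assuming a hypothetical Artifact Registry Docker repository my-repo in us-central1, project my-project, and an image tagged v1:

    # Authenticate Docker with Artifact Registry, then push the image
    gcloud auth configure-docker us-central1-docker.pkg.dev
    docker tag my-app:v1 us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
    docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1

    # Create a Deployment on the GKE cluster referencing that image
    kubectl create deployment my-app \
        --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1

A Deployment (not a Service) is what runs and scales the pods; a Service would only expose them.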
Question: 227 CertyIQ
You are using Looker Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Looker Studio are broken, and you want to analyze the problem. What should you do?
A. In Cloud Logging, create a filter for your Looker Studio report.
B. Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.
C. Review the Error Reporting page in the Google Cloud console to find any errors.
D. Use the BigQuery interface to review the nightly job and look for any errors.
Answer: D
Explanation: Use the BigQuery interface to review the nightly job and look for any errors.

Question: 228 CertyIQ
You have a batch workload that runs every night and uses a large number of virtual machines (VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of VMs is too high. What should you do?
A. Run a test using simulated maintenance events. If the test is successful, use Spot N2 Standard VMs when running future jobs.
B. Run a test using simulated maintenance events. If the test is successful, use N2 Standard VMs when running future jobs.
C. Run a test using a managed instance group. If the test is successful, use N2 Standard VMs in the managed instance group when running future jobs.
D. Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs when running future jobs.
Answer: A
Explanation:
1. Spot VMs are highly affordable compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs offer the same machine types, options, and performance as regular compute instances. If your applications are fault-tolerant and can withstand possible instance preemptions, Spot instances can reduce your Compute Engine costs by up to 91%.
2. Definitely A.

Question: 229 CertyIQ
You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?
A. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
B. Use the Reports view in the Cloud Billing Console to view the desired cost information.
C. Visit the Cost Table page to get a CSV export and visualize it using Looker Studio.
D. Configure Billing Data Export to BigQuery and visualize the data in Looker Studio.
Answer: D
Explanation: We want to aggregate the costs for multiple billing accounts, and the BigQuery export picks up new cost data automatically.

Question: 230 CertyIQ
Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?
A. Upload the data to BigQuery using the bq command line tool.
B. Upload the data to Cloud Storage using the gcloud storage command.
C. Upload the data into Cloud SQL using the import function in the Google Cloud console.
D. Upload the data into Cloud Spanner using the import function in the Google Cloud console.
Answer: B
Explanation: Unstructured is the keyword in this question. All the other options target structured data; only Cloud Storage accepts arbitrary file formats.
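For Question 228, running the batch nodes as Spot VMs is essentially a one-flag change at instance creation. A minimal sketch, assuming a hypothetical VM name and machine type:

    gcloud compute instances create batch-worker-1 \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --provisioning-model=SPOT \
        --instance-termination-action=DELETE  # assumption: workers are stateless and restartable

Spot capacity can be reclaimed at any time, so the job scheduler must tolerate workers disappearing, which the question states is acceptable.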
Question: 231 CertyIQ
You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You want to diagnose the problem. What should you do?
A. Navigate to Cloud Logging and view the application logs.
B. Configure a health check on the instance and set a "consecutive successes" Healthy threshold value of 1.
C. Connect to the instance's serial console and read the application logs.
D. Install and configure the Ops Agent and view the logs from Cloud Logging.
Answer: D
Explanation: D. By default there is no logging agent installed on a Compute Engine instance, so you first have to install the Ops Agent; after a few minutes the logs will be visible in Cloud Logging.

Question: 232 CertyIQ
You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data in Cloud Storage. You want to follow Google-recommended practices. What should you do?
A. Use the gcloud CLI services enable cloudresourcemanager.googleapis.com command to enable all resources.
B. Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs.
C. Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.
D. Open the Google Cloud console and run gcloud init --project in a Cloud Shell.
Answer: B
Explanation: Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs. Enable only the APIs you need.

Question: 233 CertyIQ
Your application development team has created Docker images for an application that will be deployed on Google Cloud. Your team does not want to manage the infrastructure associated with this application. You need to ensure that the application can scale automatically as it gains popularity. What should you do?
A. Create an instance template with the container image, and deploy a Managed Instance Group with Autoscaling.
B. Upload Docker images to Artifact Registry, and deploy the application on Google Kubernetes Engine using Standard mode.
C. Upload Docker images to the Cloud Storage, and deploy the application on Google Kubernetes Engine using Standard mode.
D. Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
Answer: D
Explanation: With GKE Standard mode you manage the underlying infrastructure, including configuring the individual nodes. With an instance group you manage the infrastructure as well. After eliminating A, B, and C, D remains.

Question: 234 CertyIQ
You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that any data used by the application will be immediately available if a zonal failure occurs. What should you do?
A. Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
B. Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
C. Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
D. Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
Answer: D
Explanation:
1. The benefit of regional persistent disks is that in the event of a zonal outage, where your virtual machine (VM) instance might become unavailable, you can usually force-attach a regional persistent disk to a VM instance in a secondary zone in the same region.
2. https://cloud.google.com/compute/docs/disks/high-availability-regional-persistent-disk
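A sketch of the failover step for Question 234, assuming a hypothetical regional disk app-data and a standby VM in the secondary zone; the flag spellings follow the regional persistent disk documentation and should be verified against the current gcloud reference:

    gcloud compute instances attach-disk standby-vm \
        --disk=app-data \
        --disk-scope=regional \
        --zone=us-central1-b \
        --force-attach

--force-attach lets the disk be attached even though the VM in the failed zone may still hold it, which is the documented recovery path for zonal outages.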
Question: 235 CertyIQ
The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you do?
A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
B. Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
C. Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
D. Grant the basic role roles/editor to the DevOps group.
Answer: C
Explanation: Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.

Question: 236 CertyIQ
Your team is running an on-premises ecommerce application. The application contains a complex set of microservices written in Python, and each microservice is running on Docker containers. Configurations are injected by using environment variables. You need to deploy your current application to a serverless Google Cloud solution. What should you do?
A. Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
B. Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Function. Use the same configuration as on-premises.
C. Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.
D. Use your existing codebase and deploy each service as a separate Cloud Run. Use the same configurations as on-premises.
Answer: A
Explanation: Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.

Question: 237 CertyIQ
You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n2-standard machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?
A. Assign the pods of the image rendering microservice a higher pod priority than the other microservices.
B. Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
C. Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.
D. Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.
Answer: B
Explanation: Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
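A minimal sketch for Question 237, assuming a hypothetical cluster prod-cluster and the compute-optimized C2 machine family:

    gcloud container node-pools create render-pool \
        --cluster=prod-cluster \
        --zone=us-central1-a \
        --machine-type=c2-standard-8 \
        --num-nodes=2

Pairing this with a nodeSelector or node affinity on the image-rendering Deployment steers those pods onto the C2 nodes while the other microservices stay on the n2-standard pool.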
Question: 238 CertyIQ
You are working in a team that has developed a new application that needs to be deployed on Kubernetes. The production application is business critical and should be optimized for reliability. You need to provision a Kubernetes cluster and want to follow Google-recommended practices. What should you do?
A. Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.
B. Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
C. Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.
D. Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.
Answer: B
Explanation: Autopilot is more reliable for production, and the stable release channel gives more time for issues in new GKE versions to be fixed before they reach your cluster.

Question: 239 CertyIQ
You are responsible for a web application on Compute Engine. You want your support team to be notified automatically if users experience high latency for at least 5 minutes. You need a Google-recommended solution with no development cost. What should you do?
A. Export Cloud Monitoring metrics to BigQuery and use a Looker Studio dashboard to monitor your web application's latency.
B. Create an alert policy to send a notification when the HTTP response latency exceeds the specified threshold.
C. Implement an App Engine service which invokes the Cloud Monitoring API and sends a notification in case of anomalies.
D. Use the Cloud Monitoring dashboard to observe latency and take the necessary actions when the response latency exceeds the specified threshold.
Answer: B
Explanation:
1. https://cloud.google.com/monitoring/alerts#alerting-example
2. B seems to be the best answer.

Question: 240 CertyIQ
You have an on-premises data analytics set of binaries that processes data files in memory for about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes. You want to migrate this application to Google Cloud with minimal effort and cost. What should you do?
A. Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.
B. Create a container for the set of binaries. Deploy the container to Google Kubernetes Engine (GKE) and use the Kubernetes scheduler to start the application.
C. Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.
D. Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.
Answer: D
Explanation: D, because the goal is to migrate this application to Google Cloud with minimal effort and cost. Cloud Run requires creating a container image, which means some amount of development and testing.

Question: 241 CertyIQ
You used the gcloud container clusters command to create two Google Cloud Kubernetes (GKE) clusters: prod-cluster and dev-cluster. prod-cluster is a standard cluster. dev-cluster is an Autopilot cluster. When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?
A. gcloud container clusters get-credentials dev-cluster; kubectl get nodes
B. gcloud container clusters update --generate-password dev-cluster; kubectl get nodes
C. kubectl config set-context dev-cluster; kubectl cluster-info
D. kubectl config set-credentials dev-cluster; kubectl cluster-info
Answer: A
Explanation:
1. gcloud container clusters get-credentials updates a kubeconfig file with the appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine.
2. The gcloud container clusters get-credentials command sets the Kubernetes context to the specified cluster (in this case, dev-cluster). This ensures that subsequent kubectl commands are executed against dev-cluster. After setting the context, the kubectl get nodes command retrieves the node status for dev-cluster, showing the list of nodes in the cluster.
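The winning sequence from option A, written out; the --region flag (Autopilot clusters are always regional) is needed when the cluster location is not set as a default, and us-central1 here is an assumed location:

    gcloud container clusters get-credentials dev-cluster --region=us-central1
    kubectl get nodes

get-credentials only rewrites the local kubeconfig; no cluster state changes.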
Question: 242 CertyIQ
You recently discovered that your developers are using many service account keys during their development process. While you work on a long-term improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company. You have the following requirements:
All service accounts that require a key should be created in a centralized project called pj-sa.
Service account keys should only be valid for one day.
You need a Google-recommended solution that minimizes cost. What should you do?
A. Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an org policy to deny service account key creation with an exception to pj-sa.
B. Implement a Kubernetes CronJob to rotate all service account keys periodically. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
C. Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours. Enforce an org policy constraint denying service account key creation with an exception on pj-sa.
D. Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
Answer: C
Explanation:
1. It should be C.
2. You can use an org policy to enforce a 24-hour lifetime for service account keys, and another org policy to deny service account key creation with an exception for the pj-sa project. This is a Google-recommended solution, and it is relatively inexpensive.

Question: 243 CertyIQ
Your company is running a three-tier web application on virtual machines that use a MySQL database. You need to create an estimated total cost of cloud infrastructure to run this application on Google Cloud instances and Cloud SQL. What should you do?
A. Create a Google spreadsheet with multiple Google Cloud resource combinations. On a separate sheet, import the current Google Cloud prices and use these prices for the calculations within formulas.
B. Use the Google Cloud Pricing Calculator and select the Cloud Operations template to define your web application with as much detail as possible.
C. Implement a similar architecture on Google Cloud, and run a reasonable load test on a smaller scale. Check the billing information, and calculate the estimated costs based on the real load your system usually handles.
D. Use the Google Cloud Pricing Calculator to determine the cost of every Google Cloud resource you expect to use. Use similar size instances for the web server, and use your current on-premises machines as a comparison for Cloud SQL.
Answer: D
Explanation: Using the Google Cloud Pricing Calculator is the recommended approach for creating an estimated total cost of cloud infrastructure. By selecting the relevant Google Cloud resources (such as instances for web servers and Cloud SQL for the database) and specifying similar sizes and configurations, you can obtain a more accurate estimation of the costs.
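A sketch of the deny-with-exception half of Question 242, using the legacy org-policies commands; the boolean constraint iam.disableServiceAccountKeyCreation is documented, while the companion key-expiry constraint (iam.serviceAccountKeyExpiryHours) and its value syntax should be checked in the constraints reference:

    # Deny key creation everywhere in the organization...
    gcloud resource-manager org-policies enable-enforce \
        constraints/iam.disableServiceAccountKeyCreation \
        --organization=123456789

    # ...but allow it in the centralized pj-sa project
    gcloud resource-manager org-policies disable-enforce \
        constraints/iam.disableServiceAccountKeyCreation \
        --project=pj-sa

The organization ID above is a placeholder.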
Question: 244 CertyIQ
You have a Bigtable instance that consists of three nodes that store personally identifiable information (PII) data. You need to log all read or write operations, including any metadata or configuration reads of this database table, in your company's Security Information and Event Management (SIEM) system. What should you do?
A. Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring job for the Bigtable instance to track all changes. Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.
B. Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs for the Bigtable instance. Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.
C. Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write and Admin Read logs for the Bigtable instance. Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.
D. Install the Ops Agent on the Bigtable instance during configuration. Create a service account with read permissions for the Bigtable instance. Create a custom Dataflow job with this service account to export logs to the company's SIEM system.
Answer: C
Explanation: Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write and Admin Read logs for the Bigtable instance. Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.

Question: 245 CertyIQ
You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?
A. Deploy a private autopilot cluster.
B. Deploy a public autopilot cluster.
C. Deploy a standard public cluster and enable shielded nodes.
D. Deploy a standard private cluster and enable shielded nodes.
Answer: A
Explanation: The Shielded GKE Nodes feature is enabled by default for all Autopilot clusters and cannot be disabled.
Reference: https://www.googlecloudcommunity.com/gc/Architecture-Framework-Community/Manage-GKE-Cluster-Security-with-Autopilot-Mode/ba-p/396435

Question: 246 CertyIQ
Your company wants to migrate their on-premises workloads to Google Cloud. The current on-premises workloads consist of:
A Flask web application
A backend API
A scheduled long-running background job for ETL and reporting
You need to keep operational costs low. You want to follow Google-recommended practices to migrate these workloads to serverless solutions on Google Cloud. What should you do?
A. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
B. Migrate the web application to App Engine and the backend API to Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
C. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Cloud Run.
D. Run the web application on a Cloud Storage bucket and the backend API on Cloud Run. Use Cloud Tasks to run your background job on Compute Engine.
Answer: B
Explanation: B is the most reasonable option.
Reference: https://cloud.google.com/architecture/migration-to-gcp-deploying-your-workloads
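For Question 245, Autopilot plus private nodes is a single command at cluster creation. A sketch, assuming a hypothetical cluster name and region; Autopilot clusters are always regional, and the --enable-private-nodes flag spelling should be verified against the current gcloud reference:

    gcloud container clusters create-auto secure-cluster \
        --region=us-central1 \
        --enable-private-nodes

Autopilot enables Shielded GKE Nodes by default, giving the verifiable node identity and integrity the question asks for, and private nodes carry no internet-routable addresses.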
Question: 247 CertyIQ
Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?
A. Attach a single service account to the compute instances. Add minimal rights to the service account. Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.
B. Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning. Use the human approvals IAM account for the provisioning.
C. Attach a single service account to the compute instances. Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources.
D. Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and Access Management (IAM) permissions. Use a secret manager service to store the key files of the service accounts. Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
Answer: D
Explanation: Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and Access Management (IAM) permissions. Use a secret manager service to store the key files of the service accounts. Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.

Question: 248 CertyIQ
Your application stores files on Cloud Storage by using the Standard Storage class. The application only requires access to files created in the last 30 days. You want to automatically save costs on files that are no longer accessed by the application. What should you do?
A. Create an object lifecycle on the storage bucket to change the storage class to Archive Storage for objects with an age over 30 days.
B. Create a cron job in Cloud Scheduler to call a Cloud Functions instance every day to delete files older than 30 days.
C. Create a retention policy on the storage bucket of 30 days, and lock the bucket by using a retention policy lock.
D. Enable object versioning on the storage bucket and add lifecycle rules to expire non-current versions after 30 days.
Answer: A
Explanation: A. Create an object lifecycle rule on the storage bucket to change the storage class to Archive Storage for objects with an age over 30 days.

Question: 249 CertyIQ
Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure of the workload's resource requirements or how the requirements might vary depending on usage patterns, external dependencies, or other factors. You need a solution that makes cost-effective recommendations regarding CPU and memory requirements, and allows the workload to function consistently in any situation. You want to follow Google-recommended practices. What should you do?
A. Configure the Horizontal Pod Autoscaler for availability, and configure the cluster autoscaler for suggestions.
B. Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod Autoscaler recommendations for suggestions.
C. Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Cluster autoscaler for suggestions.
D. Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Horizontal Pod Autoscaler for suggestions.
Answer: B
Explanation: B. Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod Autoscaler recommendations for suggestions. This approach allows you to manage the number of pods based on the workload (HPA) and get optimal CPU and memory settings for each pod (VPA), which is in line with Google-recommended practices for managing Kubernetes workloads with uncertain resource requirements. This combination ensures that your workload can function consistently in varying situations by automatically adjusting both the quantity of pods and the resources each pod is allocated.
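Question 248's lifecycle rule can be expressed as a small JSON file and applied to the bucket. A minimal sketch, assuming a hypothetical bucket name:

    # lifecycle.json
    {
      "rule": [
        {
          "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
          "condition": {"age": 30}
        }
      ]
    }

    gcloud storage buckets update gs://invoice-files --lifecycle-file=lifecycle.json

Objects transition automatically once they are 30 days old; no scheduler or function is involved, which is what makes option A cheaper to operate than option B.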
Question: 250 CertyIQ
You need to migrate invoice documents stored on-premises to Cloud Storage. The documents have the following storage requirements:
Documents must be kept for five years.
Up to five revisions of the same invoice document must be stored, to allow for corrections.
Documents older than 365 days should be moved to lower cost storage tiers.
You want to follow Google-recommended practices to minimize your operational and development costs. What should you do?
A. Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud Function to move or delete your documents based on their metadata.
B. Enable retention policies on the bucket, use lifecycle rules to change the storage classes of the objects, set the number of versions, and delete old files.
C. Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud Functions instance to move or delete your documents based on their metadata.
D. Enable object versioning on the bucket, use lifecycle conditions to change the storage class of the objects, set the number of versions, and delete old files.
Answer: D
Explanation: Enable object versioning on the bucket, use lifecycle conditions to change the storage class of the objects, set the number of versions, and delete old files.

Question: 251 CertyIQ
You installed the Google Cloud CLI on your workstation and set the proxy configuration. However, you are worried that your proxy credentials will be recorded in the gcloud CLI logs. You want to prevent your proxy credentials from being logged. What should you do?
A. Configure username and password by using gcloud config set proxy/username and gcloud config set proxy/password commands.
B. Encode username and password in sha256 encoding, and save in to a text file. Use filename as a value in the gcloud config set core/custom_ca_certs_file command.
C. Provide values for CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD in the gcloud CLI tool configuration file.
D. Set the CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD properties by using environment variables in your command line tool.
Answer: D
Explanation: Using environment variables: by setting the proxy credentials as environment variables (CLOUDSDK_PROXY_USERNAME and CLOUDSDK_PROXY_PASSWORD), you avoid having to enter them directly into the CLI tool where they might be logged. Environment variables are a common way to securely pass sensitive information like credentials. No logging of credentials: the gcloud CLI typically does not log environment variables, so your credentials should be safe from being recorded in the CLI logs. Ease of use: setting environment variables is straightforward and does not require modifying configuration files or encoding credentials.
Reference: https://cloud.google.com/sdk/docs/proxy-settings
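A sketch of the environment-variable approach from option D; the two credential variable names come from the question itself, while the proxy host, port, and type values are placeholders:

    export CLOUDSDK_PROXY_TYPE=http
    export CLOUDSDK_PROXY_ADDRESS=proxy.example.com
    export CLOUDSDK_PROXY_PORT=3128
    export CLOUDSDK_PROXY_USERNAME=jdoe
    export CLOUDSDK_PROXY_PASSWORD='s3cret'

    gcloud compute instances list   # picks up the proxy settings from the environment

Because the credentials never pass through gcloud config set, they are not written to the gcloud properties file.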
Environment variables are a common way to securely pass sensitive information like credentials.
- No logging of credentials: the gcloud CLI typically does not log environment variables, so your credentials should be safe from being recorded in the CLI logs.
- Ease of use: setting environment variables is straightforward and does not require modifying configuration files or encoding credentials.
Reference: https://cloud.google.com/sdk/docs/proxy-settings
Question: 252 CertyIQ
Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?
A.Create a cluster with a single node pool by using standard VMs. Label the fault-tolerant Deployments as spot_true.
B.Create a cluster with a single node pool by using Spot VMs. Label the critical Deployments as spot_false.
C.Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.
D.Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
Answer: D
Explanation:
D. Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
Spot VM node pool for fault-tolerant parts: Spot VMs in GKE are cost-effective but can be preempted (terminated) by Google Cloud with little notice if their resources are needed elsewhere. They are suitable for workloads that can handle interruptions, like the fault-tolerant parts of your application. Standard VM node pool for critical parts: standard VMs offer more reliability and are not subject to preemption like Spot VMs. Using a standard VM node pool for the critical parts of your application ensures they remain available and are not disrupted by potential preemptions.
Question: 253 CertyIQ
You need to deploy an application in Google Cloud using serverless technology. You want to test a new version of the application with a small percentage of production traffic. What should you do?
A.Deploy the application to Cloud Run. Use gradual rollouts for traffic splitting.
B.Deploy the application to Google Kubernetes Engine. Use Anthos Service Mesh for traffic splitting.
C.Deploy the application to Cloud Functions. Specify the version number in the function's name.
D.Deploy the application to App Engine. For each new version, create a new service.
Answer: A
Explanation:
The correct answer is A: deploy the application to Cloud Run and use gradual rollouts for traffic splitting. Cloud Run is a serverless platform that allows you to deploy and run your applications without worrying about infrastructure management. It supports deploying new versions of an application and gradually rolling out updates using traffic splitting. This makes it ideal for testing a new version of an application with a small percentage of production traffic. The other options do not provide the same level of support for serverless deployment and traffic splitting for testing new versions of an application.
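As an illustration of the gradual rollout in option A, a minimal sketch; the service name, image, region, and tag are placeholders:

```
# Deploy the new revision without routing any production traffic to it yet.
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-app:v2 \
  --region=us-central1 \
  --no-traffic --tag=candidate

# Shift 5% of traffic to the tagged revision; set it back to 0 to roll back.
gcloud run services update-traffic my-service \
  --region=us-central1 \
  --to-tags=candidate=5
```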
Question: 254 CertyIQ
Your company's security vulnerability management policy wants a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?
A.Ensure that the Ops Agent is installed on the Compute Engine instance. Create a custom metric in the Cloud Monitoring dashboard. Provide the security team member with access to this dashboard.
B.Ensure that the Ops Agent is installed on the Compute Engine instance. Provide the security team member the roles/osconfig.inventoryViewer permission.
C.Ensure that the OS Config agent is installed on the Compute Engine instance. Provide the security team member the roles/osconfig.vulnerabilityReportViewer permission.
D.Ensure that the OS Config agent is installed on the Compute Engine instance. Create a log sink to a BigQuery dataset. Provide the security team member with access to this dataset.
Answer: C
Explanation:
C. Ensure that the OS Config agent is installed on the Compute Engine instance. Provide the security team member the roles/osconfig.vulnerabilityReportViewer permission. The OS Config agent collects the vulnerability reports, and this role grants read-only access to them.
Question: 255 CertyIQ
You want to enable your development team to deploy new features to an existing Cloud Run service in production. To minimize the risk associated with a new revision, you want to reduce the number of customers who might be affected by an outage without introducing any development or operational costs to your customers. You want to follow Google-recommended practices for managing revisions to a service. What should you do?
A.Ask your customers to retry access to your service with exponential backoff to mitigate any potential problems after the new revision is deployed.
B.Gradually roll out the new revision and split customer traffic between the revisions to allow rollback in case a problem occurs.
C.Send all customer traffic to the new revision, and roll back to a previous revision if you witness any problems in production.
D.Deploy your application to a second Cloud Run service, and ask your customers to use the second Cloud Run service.
Answer: B
Explanation:
B. Gradually roll out the new revision and split customer traffic between the revisions to allow rollback in case a problem occurs.
Question: 256 CertyIQ
You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected to your corporate network through a VPN connection, but the consultant has no Google account. What should you do?
A.Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-Aware Proxy to access the instance.
B.Instruct the external consultant to use the gcloud compute ssh command line tool by using the public IP address of the instance to access it.
C.Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key.
D.Instruct the external consultant to generate an SSH key pair, and request the private key from the consultant. Add the private key to the instance yourself, and have the consultant access the instance through SSH with their public key.
Answer: C
Explanation:
C. Instruct the external consultant to generate an SSH key pair, and request the public key from the consultant. Add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key. Because the consultant has no Google account, approaches that rely on a Google identity are not available; manually adding the consultant's public key to the instance is the workable option.
Question: 257 CertyIQ
After a recent security incident, your startup company wants better insight into what is happening in the Google Cloud environment. You need to monitor unexpected firewall changes and instance creation. Your company prefers simple solutions. What should you do?
A.Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
B.Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
C.Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs on Kibana in real time.
D.Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
Answer: B
Explanation:
Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts. This is the simplest fully managed approach and requires no additional infrastructure.
Question: 258 CertyIQ
You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to BigQuery datasets in the crm-databases project. You want to follow Google-recommended practices to grant access to the service account in the web-applications project. What should you do?
A.Grant "project owner" for web-applications appropriate roles to crm-databases.
B.Grant "project owner" role to crm-databases and the web-applications project.
C.Grant "project owner" role to crm-databases and roles/bigquery.dataViewer role to web-applications.
D.Grant roles/bigquery.dataViewer role to crm-databases and appropriate roles to web-applications.
Answer: D
Explanation:
D. Grant the roles/bigquery.dataViewer role on crm-databases and appropriate roles to web-applications. This follows least privilege: the service account gets only read access to the datasets it needs.
Question: 259 CertyIQ
Your Dataproc cluster runs in a single Virtual Private Cloud (VPC) network in a single subnetwork with range 172.16.20.128/25. There are no private IP addresses available in the subnetwork. You want to add new VMs to communicate with your cluster using the minimum number of steps. What should you do?
A.Modify the existing subnet range to 172.16.20.0/24.
B.Create a new secondary IP range in the VPC and configure the VMs to use that range.
C.Create a new VPC network for the VMs. Enable VPC Peering between the VMs' VPC network and the Dataproc cluster VPC network.
D.Create a new VPC network for the VMs with a subnet of 172.32.0.0/16. Enable VPC Network Peering between the Dataproc VPC network and the VMs' VPC network. Configure a custom route exchange.
Answer: A
Explanation:
Option A involves modifying the subnet range of the existing VPC network to increase the number of available IP addresses. By changing the subnet range to 172.16.20.0/24, you will have a larger IP address range to allocate to new VMs, allowing them to communicate with the Dataproc cluster. To expand the IP range of a Compute Engine subnetwork, you can use gcloud compute networks subnets expand-ip-range, as in the sketch below.
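A minimal sketch; the subnet name and region are placeholders, and the new prefix must contain the old range:

```
# Widen the subnet's primary range from /25 to /24. This is an in-place
# operation: primary ranges can only be expanded, never shrunk.
gcloud compute networks subnets expand-ip-range my-subnet \
  --region=us-central1 \
  --prefix-length=24
```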
Question: 260 CertyIQ
You are building a backend service for an ecommerce platform that will persist transaction data from mobile and web clients. After the platform is launched, you expect a large volume of global transactions. Your business team wants to run SQL queries to analyze the data. You need to build a highly available and scalable data store for the platform. What should you do?
A.Create a multi-region Cloud Spanner instance with an optimized schema.
B.Create a multi-region Firestore database with aggregation query enabled.
C.Create a multi-region Cloud SQL for PostgreSQL database with optimized indexes.
D.Create a multi-region BigQuery dataset with optimized tables.
Answer: A
Explanation:
A. Creating a multi-region Cloud Spanner instance with an optimized schema is the best choice for building a highly available and scalable data store that can efficiently handle global transactions and support SQL queries for analysis.
Question: 261 CertyIQ
You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's organization as in your own organization. What should you do?
A.In the Google Cloud console for your organization, select Create role from selection, and choose the destination as the startup company's organization.
B.In the Google Cloud console for the startup company, select Create role from selection, and choose the source as the startup company's Google Cloud organization.
C.Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud organization as the destination.
D.Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.
Answer: C
Explanation:
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud organization as the destination.
Question: 262 CertyIQ
You need to extract text from audio files by using the Speech-to-Text API. The audio files are pushed to a Cloud Storage bucket. You need to implement a fully managed, serverless compute solution that requires authentication and aligns with Google-recommended practices. You want to automate the call to the API by submitting each file to the API as the audio file arrives in the bucket. What should you do?
A.Create an App Engine standard environment triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
B.Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.
C.Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.
D.Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
Answer: D
Explanation:
D. Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API. This is fully managed and serverless, and the function runs only when a new file arrives in the bucket.
Question: 263 CertyIQ
Your customer wants you to create a secure website with autoscaling based on the compute instance CPU load. You want to enhance performance by storing static content in Cloud Storage.
Which resources are needed to distribute the user traffic?
A.An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend.
B.An external network load balancer pointing to the backend instances to distribute the load evenly. The web servers will forward the request to Cloud Storage as needed.
C.An internal HTTP(S) load balancer together with Identity-Aware Proxy to allow only HTTPS traffic.
D.An external HTTP(S) load balancer to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend. Install the HTTPS certificates on the instance.
Answer: A
Explanation:
An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend. The managed certificate removes the need to install and renew certificates on the instances.
Question: 264 CertyIQ
The core business of your company is to rent out construction equipment at large scale. All the equipment that is being rented out has been equipped with multiple sensors that send event information every few seconds. These signals can vary from engine status, distance traveled, fuel level, and more. Customers are billed based on the consumption monitored by these sensors. You expect high throughput, up to thousands of events per hour per device, and need to retrieve consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?
A.Create files in Cloud Storage as data comes in.
B.Create a file in Filestore per device, and append new data to that file.
C.Ingest the data into Cloud SQL. Use multiple read replicas to match the throughput.
D.Ingest the data into Bigtable. Create a row key based on the event timestamp.
Answer: D
Explanation:
D. Ingest the data into Bigtable. Create a row key based on the event timestamp.
Bigtable is a highly scalable NoSQL database designed for high throughput and low-latency applications, making it suitable for scenarios with high ingest rates and rapid data retrieval. Creating a row key based on the event timestamp facilitates efficient retrieval of time-based data, ensuring consistency and atomicity for individual signals. Bigtable's design allows for fast access to data using row keys, providing optimal performance when retrieving specific signals or events based on timestamps. It also offers the scalability needed for handling thousands of events per hour per device.
Question: 265 CertyIQ
You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do before you run the gcloud compute instances list command? (Choose two.)
A.Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to the gcloud CLI.
B.Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where the gcloud CLI can find it.
C.Download your Cloud Identity user account key. Place the key file in a folder on your machine where the gcloud CLI can find it.
D.Run gcloud config set compute/zone $my_zone to set the default zone for the gcloud CLI.
E.Run gcloud config set project $my_project to set the default project for the gcloud CLI.
Answer: AE
Explanation:
A. Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to the gcloud CLI.
E. Run gcloud config set project $my_project to set the default project for the gcloud CLI.
Question: 266 CertyIQ
You are planning to migrate your on-premises data to Google Cloud. The data includes:
- 200 TB of video files in SAN storage
- Data warehouse data stored on Amazon Redshift
- 20 GB of PNG files stored on an S3 bucket
You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?
A.Use gcloud storage for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
B.Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
C.Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
D.Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
Answer: B
Explanation:
B. Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files. Transfer Appliance is designed for moving large amounts of data (like 200 TB of videos) into Cloud Storage. The BigQuery Data Transfer Service automates data movement from several sources, including Amazon Redshift, into BigQuery. Storage Transfer Service is appropriate for moving data from Amazon S3 to Cloud Storage.
Question: 267 CertyIQ
You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?
A.Deploy the application on GKE Autopilot.
B.Deploy the application on Cloud Run.
C.Deploy the application on GKE Standard.
D.Deploy the application on Cloud Functions.
Answer: A
Explanation:
Deploy the application on GKE Autopilot. Autopilot accepts standard Kubernetes manifests while Google manages the nodes, so you retain full Kubernetes control with minimal infrastructure configuration.
Question: 268 CertyIQ
Your team is building a website that handles votes from a large user population. The incoming votes will arrive at various rates. You want to optimize the storage and processing of the votes. What should you do?
A.Save the incoming votes to Firestore. Use Cloud Scheduler to trigger a Cloud Functions instance to periodically process the votes.
B.Use a dedicated instance to process the incoming votes. Send the votes directly to this instance.
C.Save the incoming votes to a JSON file on Cloud Storage. Process the votes in a batch at the end of the day.
D.Save the incoming votes to Pub/Sub. Use the Pub/Sub topic to trigger a Cloud Functions instance to process the votes.
Answer: D
Explanation:
Save the incoming votes to Pub/Sub. Use the Pub/Sub topic to trigger a Cloud Functions instance to process the votes. Pub/Sub absorbs the variable arrival rate, and the function scales with the backlog.
Question: 269 CertyIQ
You are deploying an application on Google Cloud that requires a relational database for storage. To satisfy your company's security policies, your application must connect to your database through an encrypted and authenticated connection that requires minimal management and integrates with Identity and Access Management (IAM). What should you do?
A.Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure a database user and password.
B.Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure IAM database authentication.
C.Deploy a Cloud SQL database and configure IAM database authentication. Access the database through the Cloud SQL Auth Proxy.
D.Deploy a Cloud SQL database and configure a database user and password. Access the database through the Cloud SQL Auth Proxy.
Answer: C
Explanation:
Deploy a Cloud SQL database and configure IAM database authentication. Access the database through the Cloud SQL Auth Proxy. The Auth Proxy encrypts and authenticates the connection automatically, and IAM database authentication removes the need to manage database passwords or client certificates.
Question: 270 CertyIQ
You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0/16) and project-b with VPC vpc-b (10.8.0.0/16). Your frontend application resides in vpc-a and the backend API services are deployed in vpc-b. You need to efficiently and cost-effectively enable communication between these Google Cloud projects. You also want to follow Google-recommended practices. What should you do?
A.Create an OpenVPN connection between vpc-a and vpc-b.
B.Create VPC Network Peering between vpc-a and vpc-b.
C.Configure a Cloud Router in vpc-a and another Cloud Router in vpc-b.
D.Configure a Cloud Interconnect connection between vpc-a and vpc-b.
Answer: B
Explanation:
Create VPC Network Peering between vpc-a and vpc-b. Peering provides private connectivity between the two non-overlapping ranges with no additional gateways or tunnels to manage.
Question: 271 CertyIQ
Your company is running a critical workload on a single Compute Engine VM instance. Your company's disaster recovery policies require you to back up the entire instance's disk data every day. The backups must be retained for 7 days. You must configure a backup solution that complies with your company's security policies and requires minimal setup and configuration. What should you do?
A.Configure the instance to use persistent disk asynchronous replication.
B.Configure daily scheduled persistent disk snapshots with a retention period of 7 days.
C.Configure Cloud Scheduler to trigger a Cloud Function each day that creates a new machine image and deletes machine images that are older than 7 days.
D.Configure a bash script using gsutil to run daily through a cron job. Copy the disk's files to a Cloud Storage bucket with the Archive storage class and an object lifecycle rule to delete the objects after 7 days.
Answer: B
Explanation:
Configure daily scheduled persistent disk snapshots with a retention period of 7 days. A snapshot schedule is a built-in feature that needs no custom code or external triggers.
Question: 272 CertyIQ
Your company requires that Google Cloud products are created with a specific configuration to comply with your company's security policies. You need to implement a mechanism that will allow software engineers at your company to deploy and update Google Cloud products in a preconfigured and approved manner. What should you do?
A.Create Java packages that utilize the Google Cloud Client Libraries for Java to configure Google Cloud products. Store and share the packages in a source code repository.
B.Create bash scripts that utilize the Google Cloud CLI to configure Google Cloud products. Store and share the bash scripts in a source code repository.
C.Use the Google Cloud APIs by using curl to configure Google Cloud products. Store and share the curl commands in a source code repository.
D.Create Terraform modules that utilize the Google Cloud Terraform Provider to configure Google Cloud products. Store and share the modules in a source code repository.
Answer: D
Explanation:
Create Terraform modules that utilize the Google Cloud Terraform Provider to configure Google Cloud products. Store and share the modules in a source code repository. Engineers then instantiate the approved modules instead of hand-configuring each resource.
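A hypothetical sketch of how an engineer might consume such a module; the repository URL, module name, and inputs are all placeholders:

```
# main.tf in the engineer's project references the company-approved module;
# the module itself encodes the policy-compliant configuration.
cat > main.tf <<'EOF'
module "secure_bucket" {
  source     = "git::https://source.example.com/terraform-modules.git//gcs-bucket"
  project_id = "my-project"
  name       = "my-secure-bucket"
}
EOF

terraform init   # fetches the module and the Google provider
terraform plan   # previews the preconfigured changes for review
terraform apply  # deploys with the approved settings
```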
Question: 273 CertyIQ
You are a Google Cloud organization administrator. You need to configure organization policies and log sinks on Google Cloud projects that cannot be removed by project users to comply with your company's security policies. The security policies are different for each company department. Each company department has a user with the Project Owner role assigned to their projects. What should you do?
A.Use a standard naming convention for projects that includes the department name. Configure organization policies on the organization and log sinks on the projects.
B.Use a standard naming convention for projects that includes the department name. Configure both organization policies and log sinks on the projects.
C.Organize projects under folders for each department. Configure both organization policies and log sinks on the folders.
D.Organize projects under folders for each department. Configure organization policies on the organization and log sinks on the folders.
Answer: C
Explanation:
Organize projects under folders for each department. Configure both organization policies and log sinks on the folders. Folder-level configuration applies per-department policies that Project Owners cannot remove from within their own projects.
Question: 274 CertyIQ
You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?
A.Use SSL proxy load balancing for the MIG and an A record in your DNS private zone with the load balancer's IP address.
B.Use SSL proxy load balancing for the MIG and a CNAME record in your DNS public zone with the load balancer's IP address.
C.Use HTTP(S) load balancing for the MIG and a CNAME record in your DNS private zone with the load balancer's IP address.
D.Use HTTP(S) load balancing for the MIG and an A record in your DNS public zone with the load balancer's IP address.
Answer: D
Explanation:
Use HTTP(S) load balancing for the MIG and an A record in your DNS public zone with the load balancer's IP address.
Question: 275 CertyIQ
You have several hundred microservice applications running in a Google Kubernetes Engine (GKE) cluster. Each microservice is a deployment with resource limits configured for each container in the deployment. You've observed that the resource limits for memory and CPU are not appropriately set for many of the microservices. You want to ensure that each microservice has right-sized limits for memory and CPU. What should you do?
A.Configure a Vertical Pod Autoscaler for each microservice.
B.Modify the cluster's node pool machine type and choose a machine type with more memory and CPU.
C.Configure a Horizontal Pod Autoscaler for each microservice.
D.Configure GKE cluster autoscaling.
Answer: A
Explanation:
Configure a Vertical Pod Autoscaler for each microservice. The VPA observes actual usage and adjusts, or recommends, per-container CPU and memory settings.
Question: 276 CertyIQ
Your company uses BigQuery to store and analyze data. Upon submitting your query in BigQuery, the query fails with a quotaExceeded error. You need to diagnose the issue causing the error. What should you do? (Choose two.)
A.Use BigQuery BI Engine to analyze the issue.
B.Use the INFORMATION_SCHEMA views to analyze the underlying issue.
C.Configure Cloud Trace to analyze the issue.
D.Search errors in Cloud Audit Logs to analyze the issue.
E.View errors in Cloud Monitoring to analyze the issue.
Answer: BD
Explanation:
B. Use the INFORMATION_SCHEMA views to analyze the underlying issue.
D. Search errors in Cloud Audit Logs to analyze the issue.
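For example, recent failing jobs can be inspected through the INFORMATION_SCHEMA job views; a minimal sketch using the bq CLI, where the region qualifier and the one-day window are assumptions:

```
bq query --use_legacy_sql=false '
SELECT job_id, user_email, error_result.reason, error_result.message
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE error_result.reason = "quotaExceeded"
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)'
```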
Question: 277 CertyIQ
Your team has developed a stateless application that must run directly on virtual machines. The application is expected to receive a fluctuating amount of traffic and needs to scale automatically. You need to deploy the application. What should you do?
A.Deploy the application on a managed instance group and configure autoscaling.
B.Deploy the application on a Kubernetes Engine cluster and configure node pool autoscaling.
C.Deploy the application on Cloud Functions and configure the maximum number of instances.
D.Deploy the application on Cloud Run and configure autoscaling.
Answer: A
Explanation:
Deploy the application on a managed instance group and configure autoscaling. A managed instance group is the only option that runs the workload directly on virtual machines while scaling automatically with traffic.
Question: 278 CertyIQ
Your web application is hosted on Cloud Run and needs to query a Cloud SQL database. Every morning during a traffic spike, you notice API quota errors in the Cloud SQL logs. The project has already reached the maximum API quota. You want to make a configuration change to mitigate the issue. What should you do?
A.Modify the minimum number of Cloud Run instances.
B.Use traffic splitting.
C.Modify the maximum number of Cloud Run instances.
D.Set a minimum concurrent requests environment variable for the application.
Answer: A
Explanation:
Modify the minimum number of Cloud Run instances. Keeping a baseline of warm instances reduces the burst of instance startups during the morning spike, and with it the number of Cloud SQL connection setup calls that count against the API quota.
Question: 279 CertyIQ
You need to deploy a single stateless web application with a web interface and multiple endpoints. For security reasons, the web application must be reachable from an internal IP address from your company's private VPC and on-premises network. You also need to update the web application multiple times per day with minimal effort and want to manage a minimal amount of cloud infrastructure. What should you do?
A.Deploy the web application on Google Kubernetes Engine standard edition with an internal ingress.
B.Deploy the web application on Cloud Run with Private Google Access configured.
C.Deploy the web application on C
